Implement caching system to be used for page load functions #403
base: main
Conversation
I'll paste this comment from #400 here as well:
I see your comment, Daniel, and I take it to be arguing that perhaps ALL data is user-level, because even Alerts might some day not be the same for everyone. At some point I think we have to prioritize performance over potential future problems (which can be solved when they arise), and right now our landing page takes almost a second to load, every time. At the very least, user-level caching greatly improves performance when you click around within the app or go back and forth between pages.
```diff
  // COMMIT DATA
- const commitPromise = fetch("/api/home").then((res) => res.json());
+ const commitPromise = globallyCached("commitData", async () => {
+   const res = await fetch("/api/home");
```
The /api/home endpoint already caches the commit data to avoid getting rate limited by GitHub; we should probably extract that logic and remove the endpoint entirely.
I've given this a lot of thought over the past weeks and frankly I haven't really reached any sort of conclusion. But I'll put down some thoughts here while we await opinions from other DWWW-members.
This is a great point that really puts our entire model for access control into question. I pushed for the idea of database-level access control because I hoped it would make it easy to enforce access control everywhere. There's no risk of ever forgetting to enable access control on a specific server function, because it's handled at a lower layer. However, that starts to break down when we have operations that should ignore access control. We can easily spot these operations since they occur wherever
This is a very reasonable take, but not one I care much about (though that's just a matter of priorities). I think a one-second loading time is fine. Obviously it's not ideal, but I think it's unlikely to affect usage of our web page. Caching (as implemented in this PR), on the other hand, is famously error-prone (see especially this, or this quote), so I don't think it's worth the trade-off.
It does, and I have always wanted user-level caching. However, my idea was to implement it using service workers on the client side. That seems less error-prone, but I haven't tried implementing it yet.

Summary:
This PR adds a naive in-memory caching system which works like this:

You have an async method you would like to cache: 99% of the time a database request, but it also works for things like GitHub commit data.

You have to decide: is this data user-specific or global? (For example, alerts are global because we want every visitor to see the same alerts; most data is user-specific due to our access system.)

Depending on the answer, you wrap your async call in either a `globallyCached(...)` or a `userLevelCached(...)` method. This saves the return value in an in-memory cache on the server, and the next time the method is called it queries the cache first. The cache is purged every so often, and cache lifetime is set per resource (with a default of 5 minutes IIRC).