Hey hackclub HQ. It seems really suboptimal that we are using a minutes counter stored in a Prisma database to record session time. Prisma takes around 2 ms to update a record (https://benchmarks.prisma.io/?active_query=update). We are doing up to 4 update queries per session per minute, say about 2 on average. With about 10k active users that comes to 2 ms * 10,000 * 2 = 40 seconds, so this clock needs roughly 40 seconds of database time every minute just to keep ticking. At around 15k active users the updates alone take a full 60 seconds, and we can no longer keep up within the minute.
I don't quite get why we store such frequently accessed data in a database at all. A number is about 8 bytes in JavaScript, and the id we need alongside it is a 24-character string, roughly 48 bytes, so about 56 bytes per session. Multiply that by 15k sessions and we need about 840,000 bytes of memory. That is roughly 0.8 MB, not even a single megabyte.
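Just to make that concrete, here is a rough illustration (hypothetical, not existing code) of what holding that state in a plain in-process Map keyed by the 24-character session id would look like:

```js
// Hypothetical illustration of the memory footprint: each entry is a 24-char
// id (~48 bytes as a JS string) plus a number (~8 bytes), so ~56 bytes of
// payload per session before Map overhead.
const minutesBySession = new Map();

// 15k sessions is still well under a megabyte of payload.
for (let i = 0; i < 15_000; i++) {
  const id = i.toString(16).padStart(24, "0"); // fake 24-char id for the example
  minutesBySession.set(id, 0);
}
```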
Furthermore, we could use timestamps to make the whole thing more efficient: just store the timestamp at which a session started, plus a minute-offset integer that holds the minutes accumulated so far whenever we pause a session.
We can then calculate the elapsed time with basic math: `let elapsed = now - sessionStarted + minuteOffset`.
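A rough sketch of how that could look (field and function names here are just placeholders, not anything in the current schema):

```js
// Sketch only: track a start timestamp plus an offset of minutes accumulated
// across pauses, and only write to the database on pause/stop instead of
// updating a counter every minute.
function elapsedMinutes(session, now = Date.now()) {
  // While paused, startedAt is null and only the stored offset counts.
  const running = session.startedAt
    ? Math.floor((now - session.startedAt) / 60_000)
    : 0;
  return session.minuteOffset + running;
}

function pauseSession(session, now = Date.now()) {
  // Fold the running time into the offset; this is the only point where a
  // single database update would be needed.
  session.minuteOffset = elapsedMinutes(session, now);
  session.startedAt = null;
}

function resumeSession(session, now = Date.now()) {
  session.startedAt = now;
}
```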
This is trivial to implement and still improves performance by a lot. Database queries are slow, and we should avoid them whenever possible, yet we still issue two of them for every session every minute. Why?