Unexpected behavior with LRU #448
Hi. Thank you for reporting the issue and providing the reproducer.
However, in your repro you will see the expected behavior if you call `run_pending_tasks()` right after the inserts:

```diff
--- src/bin/main1.rs	2024-08-04 18:49:24
+++ src/bin/main2.rs	2024-08-04 18:49:33
@@ -13,6 +13,7 @@
     // Insert two entries into the cache.
     cache.insert(1, "a").await;
     cache.insert(2, "b").await;
+    cache.run_pending_tasks().await;
 
     // Access the second entry and then the first.
     print!("{:?}", cache.get(&2).await);
@@ -32,8 +33,8 @@
     // println!("{:?}", cache);
     // prints {2: "b", 3: "c"} or {3: "c", 2: "b"}
     assert_eq!(cache.entry_count(), 2);
-    assert_eq!(cache.get(&1).await, None);
-    assert_eq!(cache.get(&2).await, Some("b"));
+    assert_eq!(cache.get(&1).await, Some("a"));
+    assert_eq!(cache.get(&2).await, None);
     assert_eq!(cache.get(&3).await, Some("c"));
     // }
     println!("Done");
```

This is because the cache holds pending tasks for reads and writes in separate queues (channels), and when these pending tasks are processed, the cache processes the reads first and then the writes. The cache creates a node in the LRU queue for key

When I implemented v0.1 of

But, I understand that this behavior is confusing. We could change the cache to process reads and writes in the order they were performed. For example, we could add a timestamp to each read and write recording and process the recordings in timestamp order. This would make the cache behave as you expect (*1). I will evaluate this and maybe other options, and consider one for a future release.

*1: The cache would behave as you expect as long as the read queue does not get full. When it is full, new read recordings are discarded, and this will have some impact on the cache hit rate.

Just FYI, we used to have brief documentation about the internals of the cache: I had to remove it when we released v0.12.0 because it was outdated; the cache no longer has background threads. It mentions the separate queues for reads and writes, and what happens when one of the queues is full, but it does not mention the behavior you reported. I hope I can find time to update the documentation.
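The reads-before-writes processing described above can be sketched with a tiny std-only model. This is hypothetical and not moka's real implementation; `touch` and `drain_pending` are illustrative names, and the model only captures the "drain reads first, then writes" ordering:

```rust
// Hypothetical model (not moka's code) of why draining the read queue
// before the write queue changes the LRU order.
// `lru` front = least recently used; `touch` moves a key to the back.
fn touch(lru: &mut Vec<u32>, key: u32) {
    lru.retain(|&k| k != key);
    lru.push(key);
}

// Applies all pending recordings: reads first, then writes.
fn drain_pending(writes: &[u32], reads: &[u32]) -> Vec<u32> {
    let mut lru = Vec::new();
    for &k in reads {
        touch(&mut lru, k);
    }
    for &k in writes {
        touch(&mut lru, k); // write order overrides read recency
    }
    lru
}

fn main() {
    // Repro: insert 1, insert 2, get 2, get 1, insert 3 -- all pending.
    let order = drain_pending(&[1, 2, 3], &[2, 1]);
    // Key 1 ends up least recently used despite being read last, so it
    // is the entry evicted when the capacity of 2 is enforced.
    assert_eq!(order, vec![1, 2, 3]);
    println!("evicted first: {}", order[0]);
}
```

Calling `run_pending_tasks()` between the inserts and the gets corresponds to draining the write queue early, so the later reads decide recency instead.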
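The bounded read queue behavior in footnote *1 can be sketched with a std `sync_channel`. Again, this is an assumption-laden illustration, not moka's actual channel or sizes; it only shows the "full queue drops new recordings" policy:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

// Hypothetical illustration (not moka's code): read recordings go
// through a bounded channel, and when it is full new recordings are
// dropped instead of blocking the reading thread.
// Returns (recorded keys, number of dropped recordings).
fn record_reads(capacity: usize, keys: &[u32]) -> (Vec<u32>, usize) {
    let (tx, rx) = sync_channel::<u32>(capacity);
    let mut dropped = 0;
    for &key in keys {
        match tx.try_send(key) {
            Ok(()) => {}                                // recording buffered
            Err(TrySendError::Full(_)) => dropped += 1, // recording lost
            Err(TrySendError::Disconnected(_)) => unreachable!(),
        }
    }
    drop(tx); // close the channel so the iterator below terminates
    (rx.iter().collect(), dropped)
}

fn main() {
    // Four reads against a read queue that only holds two recordings.
    let (recorded, dropped) = record_reads(2, &[1, 2, 3, 4]);
    // The last two accesses were never recorded, so they cannot refresh
    // recency, which is why the hit rate can suffer when the queue fills.
    assert_eq!(recorded, vec![1, 2]);
    assert_eq!(dropped, 2);
    println!("recorded: {:?}, dropped: {}", recorded, dropped);
}
```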
Not sure if I'm using the library incorrectly or there's an actual issue with the LRU implementation. Inserts/updates seem to count as a "use" (the U in LRU), but gets do not:
With an LRU cache of capacity 2, after inserting "a", then "b", then getting "b", then getting "a", then inserting "c", only "b" and "c" remain instead of "a" and "c". This happens consistently across 10k repro runs.
I tried sleeping for 2 minutes and adding one more `run_pending_tasks().await`, but the behavior is the same.

Output:
cargo 1.80.0 (376290515 2024-07-16)

```toml
moka = { version = "0.12.8", features = ["future"] }
tokio = { version = "1.37", features = ["full"] }
```