Unexpected behavior with LRU #448

Open
ilshatahatov opened this issue Jul 30, 2024 · 1 comment
@ilshatahatov

Not sure if I'm using the library incorrectly or there's an actual issue with the LRU implementation. Inserts/updates seem to count as a "use" for LRU ordering, but gets do not:

use moka::future::Cache;
use moka::policy::EvictionPolicy;

#[tokio::main]
async fn main() {
    for _ in 0..10_000 {
        // Create a cache that holds at most 2 items.
        let cache = Cache::builder()
            .eviction_policy(EvictionPolicy::lru())
            .max_capacity(2)
            .build();

        // Insert two entries into the cache.
        cache.insert(1, "a").await;
        cache.insert(2, "b").await;

        // Access the second entry and then the first.
        print!("{:?}", cache.get(&2).await);
        print!("{:?}", cache.get(&1).await);
        // prints Some("b")Some("a")

        // UNCOMMENTING THE BELOW MAKES THE CACHE WORK AS EXPECTED
        /*
        cache.insert(2, "b").await;
        cache.insert(1, "a").await;
        */

        // Insert a third entry, which should cause the eviction of the least recently used entry (2, "b").
        cache.insert(3, "c").await;

        cache.run_pending_tasks().await;
        // println!("{:?}", cache);
        // prints {2: "b", 3: "c"} or {3: "c", 2: "b"} 
        assert_eq!(cache.entry_count(), 2);
        assert_eq!(cache.get(&1).await, None);
        assert_eq!(cache.get(&2).await, Some("b"));
        assert_eq!(cache.get(&3).await, Some("c"));
    }
    println!("Done");
}

With an LRU cache of capacity 2, after inserting "a", then "b", then getting "b", then getting "a", and finally inserting "c", only "b" and "c" survive instead of "a" and "c". This happens consistently across 10k repetitions.
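For reference, the ordering the report expects from a strict LRU can be sketched as a toy queue-based cache. This is an illustration only, not moka's implementation (`ToyLru` is a name invented for this sketch): a get promotes the entry to the MRU position immediately, so key 2 becomes the eviction victim.

```rust
use std::collections::VecDeque;

// Toy strict LRU: front = LRU position, back = MRU position.
struct ToyLru {
    cap: usize,
    queue: VecDeque<(i32, &'static str)>,
}

impl ToyLru {
    fn new(cap: usize) -> Self {
        Self { cap, queue: VecDeque::new() }
    }

    fn insert(&mut self, k: i32, v: &'static str) {
        self.queue.retain(|&(key, _)| key != k);
        if self.queue.len() == self.cap {
            self.queue.pop_front(); // evict the LRU entry
        }
        self.queue.push_back((k, v));
    }

    fn get(&mut self, k: i32) -> Option<&'static str> {
        let pos = self.queue.iter().position(|&(key, _)| key == k)?;
        let entry = self.queue.remove(pos).unwrap();
        self.queue.push_back(entry); // promote to MRU
        Some(entry.1)
    }
}

fn main() {
    let mut lru = ToyLru::new(2);
    lru.insert(1, "a");
    lru.insert(2, "b");
    assert_eq!(lru.get(2), Some("b"));
    assert_eq!(lru.get(1), Some("a")); // key 2 is now the LRU entry
    lru.insert(3, "c"); // evicts key 2
    assert_eq!(lru.get(1), Some("a"));
    assert_eq!(lru.get(2), None);
    assert_eq!(lru.get(3), Some("c"));
    println!("ok");
}
```

Under these semantics the final assertions in the repro would be the other way around: key 1 survives and key 2 is gone.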

I tried sleeping for 2 minutes and adding one more run_pending_tasks().await, but the behavior is still the same:

use moka::future::Cache;
use moka::policy::EvictionPolicy;

#[tokio::main]
async fn main() {
    for _ in 0..1 {
        // Create a cache that holds at most 2 items.
        let cache = Cache::builder()
            .eviction_policy(EvictionPolicy::lru())
            .max_capacity(2)
            .build();

        // Insert two entries into the cache.
        cache.insert(1, "a").await;
        cache.insert(2, "b").await;

        // Access the second entry and then the first.
        print!("{:?}", cache.get(&2).await);
        println!("{:?}", cache.get(&1).await);
        // prints Some("b")Some("a")
        cache.run_pending_tasks().await;
        

        // UNCOMMENTING THE BELOW MAKES THE CACHE WORK AS EXPECTED
        /*
        cache.insert(2, "b").await;
        cache.insert(1, "a").await;
        */

        // Insert a third entry, which should cause the eviction of the least recently used entry (2, "b").
        tokio::time::sleep(tokio::time::Duration::from_secs(120)).await;
        cache.insert(3, "c").await;

        cache.run_pending_tasks().await;
        // println!("{:?}", cache);
        // prints {2: "b", 3: "c"} or {3: "c", 2: "b"}
        assert_eq!(cache.entry_count(), 2);
        assert_eq!(cache.get(&1).await, None);
        assert_eq!(cache.get(&2).await, Some("b"));
        assert_eq!(cache.get(&3).await, Some("c"));
    }
    println!("Done");
}

Output:

Some("b")Some("a")
Done

cargo 1.80.0 (376290515 2024-07-16)
moka = { version = "0.12.8", features = ["future"] }
tokio = { version = "1.37", features = ["full"] }

@tatsuya6502
Member

tatsuya6502 commented Aug 4, 2024

Hi. Thank you for reporting the issue and providing the reproducer.

  • You are using the moka library correctly.
  • I can reproduce the issue using the reproducer you provided.

However,

  • moka is actually working as designed.
  • This may rather be a problem with the documentation.
  • In the future, we may consider changing the behavior to match your expectation.

In your repro, you will see the expected behavior if you call run_pending_tasks after inserting 1 and 2 into the cache.

--- src/bin/main1.rs    2024-08-04 18:49:24
+++ src/bin/main2.rs    2024-08-04 18:49:33
@@ -13,6 +13,7 @@
         // Insert two entries into the cache.
         cache.insert(1, "a").await;
         cache.insert(2, "b").await;
+        cache.run_pending_tasks().await;

         // Access the second entry and then the first.
         print!("{:?}", cache.get(&2).await);
@@ -32,8 +33,8 @@
         // println!("{:?}", cache);
         // prints {2: "b", 3: "c"} or {3: "c", 2: "b"}
         assert_eq!(cache.entry_count(), 2);
-        assert_eq!(cache.get(&1).await, None);
-        assert_eq!(cache.get(&2).await, Some("b"));
+        assert_eq!(cache.get(&1).await, Some("a"));
+        assert_eq!(cache.get(&2).await, None);
         assert_eq!(cache.get(&3).await, Some("c"));
     // }
     println!("Done");

This is because the cache holds pending tasks for reads and writes in separate queues (channels), and when these pending tasks are processed, the cache processes the reads first and then the writes.

The cache creates a node in the LRU queue for key 2 only when its write task is processed, and the node is moved to the front of the queue (the MRU position) when the read task is processed. Since the cache processes the reads first, the node for key 2 does not yet exist in the LRU queue, so the cache ignores the read task for key 2.
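The reads-first processing can be sketched with a toy model (an illustration invented here, not moka's actual code; `Model` and its methods are hypothetical names): recordings go into two queues, and maintenance drains reads before writes, so a read recorded before its key's write has been applied finds no node and is dropped.

```rust
use std::collections::VecDeque;

// Toy model of reads-first maintenance. Reads and writes are
// buffered in separate queues; run_pending_tasks drains reads,
// then writes.
struct Model {
    cap: usize,
    lru: VecDeque<i32>, // front = LRU position, back = MRU position
    read_q: VecDeque<i32>,
    write_q: VecDeque<i32>,
}

impl Model {
    fn new(cap: usize) -> Self {
        Self {
            cap,
            lru: VecDeque::new(),
            read_q: VecDeque::new(),
            write_q: VecDeque::new(),
        }
    }

    fn record_write(&mut self, k: i32) {
        self.write_q.push_back(k);
    }

    fn record_read(&mut self, k: i32) {
        self.read_q.push_back(k);
    }

    fn run_pending_tasks(&mut self) {
        // Reads first: a read for a key whose write has not been
        // applied yet finds no node and is silently dropped.
        while let Some(k) = self.read_q.pop_front() {
            if let Some(pos) = self.lru.iter().position(|&x| x == k) {
                self.lru.remove(pos);
                self.lru.push_back(k); // promote to MRU
            }
        }
        // Then writes: create the node, evicting the LRU if full.
        while let Some(k) = self.write_q.pop_front() {
            if let Some(pos) = self.lru.iter().position(|&x| x == k) {
                self.lru.remove(pos);
            } else if self.lru.len() == self.cap {
                self.lru.pop_front();
            }
            self.lru.push_back(k);
        }
    }
}

fn main() {
    // The repro's order: writes 1, 2; reads 2, 1; write 3;
    // then a single maintenance run.
    let mut m = Model::new(2);
    m.record_write(1);
    m.record_write(2);
    m.record_read(2);
    m.record_read(1);
    m.record_write(3);
    m.run_pending_tasks();
    // The reads ran before any node existed, so recency is pure
    // insertion order and key 1 is evicted.
    assert_eq!(m.lru, VecDeque::from([2, 3]));

    // Running maintenance after the first two writes (as in the
    // diff above) lets the reads take effect, so key 2 becomes
    // the victim instead.
    let mut m = Model::new(2);
    m.record_write(1);
    m.record_write(2);
    m.run_pending_tasks();
    m.record_read(2);
    m.record_read(1);
    m.record_write(3);
    m.run_pending_tasks();
    assert_eq!(m.lru, VecDeque::from([1, 3]));
    println!("ok");
}
```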

When I implemented v0.1 of moka, I decided to keep separate queues for reads and writes to avoid writes being blocked behind many reads. When the write queue is full, writes are blocked until it has space again. In typical use cases a cache receives more reads than writes, so I thought a single queue for both would become a bottleneck. I knew this design would cause the issue you reported, but I considered it a reasonable trade-off because it has no visible impact on the overall hit rate of the cache: in a real application we can never predict which of keys 1 and 2 will be read before the other, or more often, so we cannot say that keeping key 1 is better than keeping key 2.

But I understand that this behavior is confusing. We could change the cache to process reads and writes in the order they were performed. For example, we could add a timestamp to each read and write recording and process the recordings in timestamp order. This would make the cache behave as you expect(*1). I will evaluate this and possibly other options, and consider one for a future release.

*1: The cache would behave as you expect as long as the read queue does not fill up. When it is full, new read recordings are discarded, which has some impact on the cache hit rate.
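The timestamp idea could look roughly like this (a sketch under the assumption that each queue is already ordered by timestamp; `merge_by_timestamp` and the `Op` type are hypothetical names for illustration, not moka API): a two-way merge of the two queues recovers the global order of reads and writes.

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
enum Op {
    Read(u32),  // key
    Write(u32), // key
}

// Each queue holds (timestamp, key) pairs and is already sorted
// by timestamp, so a two-way merge yields the global order.
fn merge_by_timestamp(
    mut reads: VecDeque<(u64, u32)>,
    mut writes: VecDeque<(u64, u32)>,
) -> Vec<Op> {
    let mut out = Vec::new();
    loop {
        match (reads.front(), writes.front()) {
            (Some(&(rt, _)), Some(&(wt, _))) => {
                if rt <= wt {
                    let (_, k) = reads.pop_front().unwrap();
                    out.push(Op::Read(k));
                } else {
                    let (_, k) = writes.pop_front().unwrap();
                    out.push(Op::Write(k));
                }
            }
            (Some(_), None) => {
                let (_, k) = reads.pop_front().unwrap();
                out.push(Op::Read(k));
            }
            (None, Some(_)) => {
                let (_, k) = writes.pop_front().unwrap();
                out.push(Op::Write(k));
            }
            (None, None) => break,
        }
    }
    out
}

fn main() {
    // The repro's order: writes at t=1,2,5 and reads at t=3,4.
    let writes = VecDeque::from([(1, 1), (2, 2), (5, 3)]);
    let reads = VecDeque::from([(3, 2), (4, 1)]);
    let ops = merge_by_timestamp(reads, writes);
    assert_eq!(
        ops,
        vec![
            Op::Write(1),
            Op::Write(2),
            Op::Read(2),
            Op::Read(1),
            Op::Write(3),
        ]
    );
    println!("ok");
}
```

With the recordings replayed in this order, the reads for keys 2 and 1 land after their writes, and the LRU queue would evict key 2 as the reporter expects.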

Just FYI, we used to have brief documentation about the internals of the cache:
https://docs.rs/moka/0.11.3/moka/index.html#implementation-details

I had to remove it when we released v0.12.0 because it was outdated; the cache no longer uses background threads. It mentions the separate queues for reads and writes and what happens when one of them is full, but it does not mention the behavior you reported.

I am hoping I can find a time to update the documentation.
