[Performance] Avoid lock contention for getting count of inner dictionary size #2807

Open
wants to merge 1 commit into base: dev

Conversation

@ms-ruochenyu ms-ruochenyu commented Sep 4, 2024


  • You've read the Contributor Guide and Code of Conduct.
  • You've included unit or integration tests for your change, where applicable.
  • You've included inline docs for your change, where applicable.
  • If any gains or losses in performance are possible, you've included benchmarks for your changes. More info
  • There's an open issue for the PR that you are making. If you'd like to propose a new feature or change, please open an issue to discuss the change or find an existing issue.

Summary of the changes (Less than 80 chars)
Avoid lock contention for getting count of inner dictionary size in EventBasedLRUCache.

Description

We have seen bottlenecks in some of our internal cloud services where EventBasedLRUCache<TKey, TValue> calls Count() on its inner _map. Computing that count incurs heavy lock contention, especially in SetValue() and Compact().

This change adds a new private int field, _mapSize, that is updated whenever entries in _map are added or removed. _mapSize is updated atomically and is used only for the cache-size calculation, so there should be no externally observable behavior change.
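The pattern described above can be sketched as follows. This is not the actual EventBasedLRUCache source; the class name, `ApproximateCount`, `Remove`, and the method bodies are hypothetical, with only `_map` and `_mapSize` taken from the PR:

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Minimal sketch of the pattern: maintain a separate counter with
// Interlocked so readers never touch ConcurrentDictionary's bucket
// locks, which ConcurrentDictionary<TKey,TValue>.Count acquires.
internal sealed class CountedCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> _map = new();
    private int _mapSize; // kept in sync with _map's count

    // Lock-free read used for size checks (e.g. deciding when to compact).
    public int ApproximateCount => Volatile.Read(ref _mapSize);

    public void SetValue(TKey key, TValue value)
    {
        // TryAdd reports whether the key was actually new, so the
        // counter is only incremented when the count really grew.
        if (_map.TryAdd(key, value))
        {
            Interlocked.Increment(ref _mapSize);
        }
        else
        {
            _map[key] = value; // overwrite: count unchanged
        }
    }

    public bool Remove(TKey key)
    {
        if (_map.TryRemove(key, out _))
        {
            Interlocked.Decrement(ref _mapSize);
            return true;
        }
        return false;
    }
}
```

The key design point is that every mutation path that changes the dictionary's size must adjust `_mapSize` by exactly the same amount, or the counter drifts from the true count.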

@ms-ruochenyu ms-ruochenyu requested a review from a team as a code owner September 4, 2024 05:25
@ms-ruochenyu (Author)

@microsoft-github-policy-service agree company="Microsoft"

@ms-ruochenyu changed the title from "init" to "[Performance] Avoid lock contention for getting count of inner dictionary size" on Sep 4, 2024
@keegan-caruso (Contributor)

@ms-ruochenyu Can you please open and link an issue to this?

@@ -498,6 +511,7 @@ public bool SetValue(TKey key, TValue value, DateTime expirationTime)
}

_map[key] = newCacheItem;
Interlocked.Increment(ref _mapSize);
@brentschmaltz (Member) commented Sep 20, 2024

_map[key] = newCacheItem, does not imply the count has increased.
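A small illustration of the reviewer's point, assuming nothing about the surrounding PR code: the ConcurrentDictionary indexer overwrites an existing key without changing the dictionary's count, so an unconditional increment after the assignment lets the tracked counter drift.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Why Interlocked.Increment after an indexer assignment can drift:
// the indexer overwrites existing keys without growing the dictionary.
var map = new ConcurrentDictionary<string, int>();
int mapSize = 0;

map["a"] = 1;
Interlocked.Increment(ref mapSize); // ok: "a" was new

map["a"] = 2;                       // overwrite: map.Count is still 1
Interlocked.Increment(ref mapSize); // bug: counter is now 2

Console.WriteLine($"actual={map.Count} tracked={mapSize}"); // actual=1 tracked=2
```

Avoiding this requires an update path that distinguishes insert from overwrite, such as TryAdd, or incrementing only when the key was absent.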

@brentschmaltz (Member)

@ms-ruochenyu what is the performance gain?

@ms-ruochenyu (Author)

> @ms-ruochenyu what is the performance gain?

@brentschmaltz, below is a sample collected from one of our internal workloads. The CPU cost was clustered in taking bucket write locks inside ConcurrentDictionary.Count(); the same applies to the Compact() call. [CPU profile screenshot]
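To make the contention claim above concrete, here is a rough micro-benchmark sketch (not the internal workload from the screenshot; all names here are hypothetical). It compares reading ConcurrentDictionary.Count, which briefly acquires every bucket lock, against a Volatile.Read of a tracked counter, while writer tasks keep the dictionary's locks busy:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

var map = new ConcurrentDictionary<int, int>();
int mapSize = 0;
using var cts = new CancellationTokenSource();

// Background writers create the bucket-lock contention Count must fight.
var writers = new Task[4];
for (int w = 0; w < writers.Length; w++)
{
    writers[w] = Task.Run(() =>
    {
        var rng = new Random();
        while (!cts.IsCancellationRequested)
        {
            int key = rng.Next(10_000);
            if (map.TryAdd(key, 0)) Interlocked.Increment(ref mapSize);
        }
    });
}

const int reads = 1_000_000;
long sum = 0;

var sw = Stopwatch.StartNew();
for (int i = 0; i < reads; i++) sum += map.Count;               // takes all bucket locks
var lockedMs = sw.ElapsedMilliseconds;

sw.Restart();
for (int i = 0; i < reads; i++) sum += Volatile.Read(ref mapSize); // lock-free read
var atomicMs = sw.ElapsedMilliseconds;

cts.Cancel();
Task.WaitAll(writers);
Console.WriteLine($"Count: {lockedMs} ms, Volatile.Read: {atomicMs} ms (checksum {sum})");
```

Absolute numbers vary by machine and writer count; the point is only the relative cost of the locked read versus the atomic read.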

@brentschmaltz (Member)

@ms-ruochenyu i am unable to understand your CPU diagram.
