
Use upgradable RwLocks to prevent work duplication #3

Open
michaelsproul opened this issue Feb 3, 2022 · 0 comments · May be fixed by #29

@michaelsproul (Member)

Currently, when hashing, we obtain a read lock to check whether the hash is already computed. If it isn't, we drop the read lock, compute the hash, and then obtain a write lock to store it. This leads to a race condition in which work can be duplicated: two threads may both see the missing hash and compute it for the same node at the same time.
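
For reference, a minimal sketch of the current pattern, assuming a parking_lot-style `RwLock` and a hypothetical `Node` type in which an all-zero hash means "not yet computed" (these names and the `compute` closure are illustrative assumptions, not the crate's actual API):

```rust
use parking_lot::RwLock;

/// Hypothetical node with a cached hash; `[0u8; 32]` means "not yet computed".
struct Node {
    hash: RwLock<[u8; 32]>,
}

impl Node {
    fn hash(&self, compute: impl FnOnce() -> [u8; 32]) -> [u8; 32] {
        {
            // Read lock: return the cached hash if it is already set.
            let cached = self.hash.read();
            if *cached != [0u8; 32] {
                return *cached;
            }
        } // read lock dropped here

        // Race window: two threads can both reach this point for the same
        // node and duplicate the hashing work.
        let hash = compute();
        *self.hash.write() = hash;
        hash
    }
}
```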

I think an upgradable RwLock would let us avoid duplicating this work, using the following algorithm (sketched in code below):

  1. Obtain an upgradable read lock
  2. If the hash is already computed (!= 0), return it
  3. Upgrade the lock to a write lock
  4. Check again whether the hash is already computed; if so, return it
  5. Compute the hash
  6. Write the hash using the write lock, then release it

Step (4) is slightly counter-intuitive but is necessary to prevent this interleaving:

A1, A2, B1, B2, B3, B5, B6, A3, A5, A6

i.e. the case where thread B computes and stores the hash while thread A is waiting for the write lock at step (3); without the re-check at step (4), A would then recompute it.
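
A minimal sketch of this algorithm, assuming parking_lot's upgradable RwLock (`upgradable_read` / `RwLockUpgradableReadGuard::upgrade`); the `Node` type, the zero-sentinel for "not yet computed", and the `compute` closure are illustrative assumptions, not the crate's actual types:

```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};

/// Hypothetical node with a cached hash; `[0u8; 32]` means "not yet computed".
struct Node {
    hash: RwLock<[u8; 32]>,
}

impl Node {
    fn hash(&self, compute: impl FnOnce() -> [u8; 32]) -> [u8; 32] {
        // 1. Obtain an upgradable read lock.
        let read = self.hash.upgradable_read();

        // 2. If the hash is already computed, return it.
        if *read != [0u8; 32] {
            return *read;
        }

        // 3. Upgrade the lock to a write lock (may block until other
        //    readers finish).
        let mut write = RwLockUpgradableReadGuard::upgrade(read);

        // 4. Check again whether the hash was stored while we waited
        //    to upgrade; if so, return it.
        if *write != [0u8; 32] {
            return *write;
        }

        // 5. Compute the hash.
        let hash = compute();

        // 6. Write the hash; the write lock is released when the guard drops.
        *write = hash;
        hash
    }
}
```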
