
[BUG] Race condition in BsonMapper #2536

Open
einarmo opened this issue Aug 30, 2024 · 4 comments · May be fixed by #2538


einarmo commented Aug 30, 2024

Version
LiteDB 5.0.21
.NET 8.0.401
Linux (Fedora 38)

Describe the bug
What I see is an occasional stack trace:

System.NotSupportedException : Member Id not found on BsonMapper for type MyNamespace.MyType
 Stack Trace:
at LiteDB.LinqExpressionVisitor.ResolveMember(MemberInfo member)
at LiteDB.LinqExpressionVisitor.VisitMember(MemberExpression node)
at LiteDB.LinqExpressionVisitor.ResolvePattern(String pattern, Expression obj, IList`1 args)
at LiteDB.LinqExpressionVisitor.VisitMethodCall(MethodCallExpression node)
at System.Linq.Expressions.ExpressionVisitor.VisitLambda[T](Expression`1 node)
at LiteDB.LinqExpressionVisitor.VisitLambda[T](Expression`1 node)
at LiteDB.LinqExpressionVisitor.Resolve(Boolean predicate)
at LiteDB.BsonMapper.GetExpression[T,K](Expression`1 predicate)
at LiteDB.LiteCollection`1.DeleteMany(Expression`1 predicate)

when running an expression like

collection.DeleteMany(item => idsToDelete.Contains(item.Id))

Code to Reproduce
I cannot reproduce this consistently; it only happens once in a while, most likely while I am running multiple concurrent transactions that target different collections but use the same types.
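For context, the rough shape of the workload is sketched below. This is illustrative only, not a reliable repro; MyType, the collection names, and the degree of parallelism are placeholders:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using LiteDB;

// Illustrative sketch only: several threads touch different collections
// that share the same mapped type, so they all hit the same BsonMapper entry.
public class MyType
{
    public int Id { get; set; }
}

public static class Program
{
    public static void Main()
    {
        using var db = new LiteDatabase("race-repro.db");
        var idsToDelete = Enumerable.Range(0, 100).ToList();

        Parallel.For(0, 8, i =>
        {
            var collection = db.GetCollection<MyType>($"items_{i}");
            collection.DeleteMany(item => idsToDelete.Contains(item.Id));
        });
    }
}
```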

I believe this is a race condition caused by #2493; here's the possible race I see that this PR introduced:

  • Thread A enters GetEntityMapper here, and does not find the mapper.
  • Thread A calls BuildAddEntityMapper, and goes past this line, adding an entity mapper to the _entities dictionary.
  • Thread A is preempted by thread B, which enters GetEntityMapper.
  • Thread B finds a mapper and returns, but this mapper won't work, as it has not yet been populated with fields, resulting in the stack trace above.

This code has existed for a long time, and the issue only started appearing after updating to the latest version, which contains that fix. There may be a different cause, though the race traced above seems plausible.
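To make the suspected interleaving concrete, here is a minimal sketch of the publish-before-populate pattern; the names are illustrative and this is not LiteDB's actual code:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the suspected race (illustrative names, not LiteDB's actual code).
public class EntityMapper
{
    public List<string> Members { get; } = new();
}

public class Mapper
{
    private readonly Dictionary<Type, EntityMapper> _entities = new();

    public EntityMapper GetEntityMapper(Type type)
    {
        // Thread B can take this branch while thread A is still filling in Members below,
        // and then fail with "Member Id not found on BsonMapper".
        if (_entities.TryGetValue(type, out var mapper)) return mapper;

        return BuildAddEntityMapper(type);
    }

    private EntityMapper BuildAddEntityMapper(Type type)
    {
        var mapper = new EntityMapper();
        _entities[type] = mapper;            // mapper is published here...

        foreach (var prop in type.GetProperties())
            mapper.Members.Add(prop.Name);   // ...but only populated afterwards

        return mapper;
    }
}
```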

einarmo added the bug label Aug 30, 2024
einarmo added a commit to cognitedata/dotnet-extractor-utils that referenced this issue Aug 30, 2024
There is an issue here: mbdavid/LiteDB#2536,
which I believe is causing flaky behavior in the OPC-UA tests.
JKamsker (Collaborator) commented Sep 3, 2024

Oh, I see it now. That's pretty bad; I am sorry for that. Would you mind PR'ing a fix?

einarmo added a commit to einarmo/LiteDB that referenced this issue Sep 3, 2024
This fixes a race condition in BsonMapper, caused by a fix to a
different issue in mbdavid#2493.

It seems like the current approach of checking the dictionary twice is
deliberate. That said, I don't believe reading from a dictionary
that may be in the process of being updated is actually safe to begin
with.
einarmo (Author) commented Sep 3, 2024

I made a PR with a simple fix, though I'm not sure it is the best solution, since it will come with a performance cost that I think the original author of this code wanted to avoid.
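For reference, the simplest shape of such a fix is to serialize lookups behind a lock. The sketch below (reusing the EntityMapper type from the sketch above) only illustrates that idea and is not necessarily what the PR does; the lock on every lookup is where the performance cost comes from:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a straightforward lock-based fix (not necessarily what the PR does):
// every lookup takes the lock, so no thread can ever observe a half-built mapper,
// at the cost of contention on what used to be a lock-free read path.
public class LockedMapper
{
    private readonly object _entitiesLock = new();
    private readonly Dictionary<Type, EntityMapper> _entities = new();

    public EntityMapper GetEntityMapper(Type type)
    {
        lock (_entitiesLock)
        {
            if (!_entities.TryGetValue(type, out var mapper))
            {
                mapper = BuildEntityMapper(type);   // build the mapper fully...
                _entities[type] = mapper;           // ...and publish only once it is complete
            }

            return mapper;
        }
    }

    // Hypothetical helper: builds a complete mapper before it is ever shared.
    private static EntityMapper BuildEntityMapper(Type type)
    {
        var mapper = new EntityMapper();
        foreach (var prop in type.GetProperties())
            mapper.Members.Add(prop.Name);
        return mapper;
    }
}
```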

JKamsker (Collaborator) commented Sep 3, 2024

Performance < Stability

JKamsker linked a pull request Sep 6, 2024 that will close this issue
JKamsker (Collaborator) commented Sep 6, 2024

Hey @einarmo, can you have a look at my PR and see if it solves your issue? My solution doesn't require locking all the time and should still be safe from race conditions.
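One general way to get lock-free reads while staying race-free is to publish only fully built mappers, for example through a ConcurrentDictionary. The sketch below (again reusing the EntityMapper type from above) shows that pattern under my own assumptions; it is not the contents of the linked PR, and it ignores complications such as self-referencing types:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of the "publish only fully built mappers" pattern (not the linked PR).
// Reads stay lock-free; a mapper becomes visible to other threads only after
// it has been completely built.
public class LockFreeReadMapper
{
    private readonly ConcurrentDictionary<Type, EntityMapper> _entities = new();

    public EntityMapper GetEntityMapper(Type type)
    {
        // Fast path: lock-free read of an already-published, fully built mapper.
        if (_entities.TryGetValue(type, out var existing)) return existing;

        // Slow path: concurrent callers may build duplicates, but only one wins
        // and every published mapper is complete.
        var built = new EntityMapper();
        foreach (var prop in type.GetProperties())
            built.Members.Add(prop.Name);

        return _entities.GetOrAdd(type, built);
    }
}
```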
