Although understandably an edge case, `heap_insert` should fail to insert a new node if `heap->nelts == UINT_MAX`.

It is possible, though unlikely, that `unsigned int` is 16 bits. Even in a 32-bit world, hitting `UINT_MAX` becomes more likely as machines get faster.

At the very least it should `assert(heap->nelts < UINT_MAX)`. It would be better to `abort()` than to blindly corrupt the data structure.
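Concretely, the guard would sit at the top of `heap_insert`, before `nelts` is used to compute the insertion path. A minimal sketch (struct layout abridged from libuv's heap-inl.h; the comparator parameter and the actual sift-up insertion logic are elided):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Abridged from heap-inl.h: only the fields that matter here. */
struct heap_node {
  struct heap_node* left;
  struct heap_node* right;
  struct heap_node* parent;
};

struct heap {
  struct heap_node* min;
  unsigned int nelts;
};

void heap_insert(struct heap* heap, struct heap_node* newnode) {
  /* Refuse to wrap the element counter: insertion paths are derived
   * from nelts, so letting it wrap to 0 silently corrupts the tree.
   * assert() covers debug builds; the explicit abort() keeps the check
   * in release builds, where NDEBUG compiles assert() away. */
  assert(heap->nelts < UINT_MAX);
  if (heap->nelts == UINT_MAX)
    abort();

  /* ... existing insertion logic elided ... */
  (void) newnode;

  heap->nelts += 1;
}
```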
> It is possible, though unlikely, that `unsigned int` is 16 bits.

Libuv does not claim to support platforms where ints have fewer than 32 bits. (What does, these days?)

> Even in a 32-bit world, hitting `UINT_MAX` becomes more likely as machines get faster.

I don't think that's realistic on 32-bit architectures: in order to hit `UINT_MAX` you need to insert 2^32 elements, but that requires more memory than a 32-bit architecture can address.

It could become an issue on 64-bit architectures, however. I see two solutions: adding the assert you suggest, or changing the type of `nelts` to `uintptr_t`.
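To put numbers on the 32-bit argument: each `heap_node` holds three pointers, so it occupies at least 12 bytes on a 32-bit target, and 2^32 nodes would need at least 48 GiB against a 4 GiB address space. The `uintptr_t` option rests on the same reasoning, extended to any pointer width: a counter as wide as a pointer cannot wrap before the address space runs out. A sketch of that change, with field names as in heap-inl.h:

```c
#include <stdint.h>

struct heap_node;  /* three-pointer node, as in heap-inl.h */

struct heap {
  struct heap_node* min;
  /* Was `unsigned int nelts`. Every live node occupies at least one
   * distinct byte of memory, so the number of nodes can never exceed
   * UINTPTR_MAX and a pointer-sized counter cannot overflow. */
  uintptr_t nelts;
};
```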