This repository has been archived by the owner on May 1, 2020. It is now read-only.
I rewrote example.py to use the 'sift1m' dataset as input, and the evaluation results look wrong. I also get an error when I try to use the GIST1M dataset:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
send(obj)
SystemError: NULL result without error in PyObject_Call
Is the code in the python folder not intended for large datasets like SIFT1M? Or is there a mistake in my data-import process?
Looking forward to your reply.
Thanks a lot!
Regarding the sift1m dataset, I can't tell if anything is wrong from those numbers, but I think the quantization is probably simply too coarse for a dataset of sift1m's size and complexity. From my notes, my recall was around [~0.40 ~0.85 ~0.98 ~0.98] with V=1024, M=8, subquants=256. Fitting the quantizers in this case might take a while (an hour or two) on a single machine.
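To illustrate the point about coarseness (a toy sketch in plain NumPy, not the code from this repo): mean quantization error drops as the number of coarse centroids grows, which is why bumping V from 16 to 1024 helps on a dataset of this size.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns centroids and point assignments."""
    rng = np.random.RandomState(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # squared distance from every point to every centroid
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        a = d.argmin(axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C, a

def quantization_error(X, k):
    """Mean squared distance of each point to its assigned centroid."""
    C, a = kmeans(X, k)
    return float(((X - C[a]) ** 2).sum(axis=-1).mean())

rng = np.random.RandomState(0)
X = rng.randn(500, 8).astype(np.float32)
# Finer codebooks (larger k, analogous to larger V) give lower error.
print(quantization_error(X, 4), quantization_error(X, 64))
```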
In any case, the code in python/ assumes that the full dataset fits in memory. This assumption would need to be changed if that is not the case for you (or try the Spark code instead).
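If the dataset doesn't fit in RAM, one workaround is to memory-map it. The SIFT1M/GIST1M distributions use the .fvecs layout (each record is an int32 dimension followed by that many float32 components), so a minimal memmap-based reader might look like the sketch below; this helper is not part of this repo, just an assumption about how you might load the data.

```python
import numpy as np

def mmap_fvecs(path):
    """Memory-map an .fvecs file without loading it fully into RAM.

    Assumes the standard layout: each record is an int32 dimension
    followed by `dim` float32 components. Returns an (n, dim) view.
    """
    mm = np.memmap(path, dtype=np.float32, mode='r')
    # The first 4 bytes of the first record hold the dimension.
    dim = int(mm[:1].view(np.int32)[0])
    return mm.reshape(-1, dim + 1)[:, 1:]
```

Slices of the returned array are materialized lazily, so downstream code that iterates in batches never pulls the whole file into memory.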
Thanks a lot! I changed V=16 to V=1024, and the results look more correct than last time.
BTW, where is the API for changing the value of w from the paper?
And what is the default value of w in your implementation? @pumpikano