Intermediate results of training R2B model #196
Comments
Hi, thanks for checking out Larq Zoo. @timdebruin and I went back through some old experiments to find their top-1 accuracies for you; here are the details:
Hope this helps!
Great! I really appreciate your fast response! Do you also have the results for the strong baseline?
No problem. Sadly I couldn't find our experiment data for the strong baseline, but from my notes at the time I think we got 62% validation accuracy.
Cool! 62% is even higher than what is reported in the paper! Do you have any thoughts on what caused this improvement?
I can only reach about 65% when training the "ResNet | FP weights, FP activations | 100" stage. I used 8 GPUs and simply ran the code. Would you like to share some models or advice?
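One way to sanity-check a local run like this against the released weights is to evaluate the pretrained model that ships with Larq Zoo. The sketch below is an illustration, not the training code discussed in this thread; it assumes the `larq_zoo.literature.RealToBinaryNet` factory and the `lqz.preprocess_input` helper shown in the Larq Zoo documentation, that `preprocess_input` accepts TensorFlow tensors inside a `tf.data` pipeline, and that the ImageNet validation split has been prepared via `tensorflow_datasets` (which requires a manual download).

```python
# Minimal sketch: evaluate the pretrained R2B (RealToBinaryNet) weights on the
# ImageNet validation split and compare the resulting top-1 accuracy with a
# local reproduction. API names are assumptions from the Larq Zoo docs.
import larq_zoo as lqz
import tensorflow as tf
import tensorflow_datasets as tfds


def preprocess(example):
    # Resize to 256x256 and center-crop to 224x224, then apply the zoo's
    # preprocessing. The exact resize/crop policy should match whatever
    # Larq Zoo used for its reported 65.04% top-1.
    image = tf.cast(example["image"], tf.float32)
    image = tf.image.resize(image, (256, 256))
    image = tf.image.central_crop(image, 224 / 256)
    return lqz.preprocess_input(image), example["label"]


val_ds = (
    tfds.load("imagenet2012", split="validation", shuffle_files=False)
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)
)

# Pretrained R2B model with the released ImageNet weights.
model = lqz.literature.RealToBinaryNet(weights="imagenet")
model.compile(
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],  # top-1 accuracy
)
_, top1 = model.evaluate(val_ds)
print(f"Pretrained R2B top-1: {top1:.4f}")
```

If the measured top-1 of the released weights is close to the reported value but a from-scratch run stays several points lower, the gap most likely comes from the training setup (per-stage schedule, preprocessing, or batch size across the 8 GPUs) rather than the model definition.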
Hi,
It is so nice that you almost reproduced the R2B model (65.04% top-1 accuracy).
Could you also provide the intermediate results of the R2B model's training process?
For instance, the top-1 accuracy of each stage would be very helpful to me.
Thanks!