Testing accuracy of KQAPro #1
Thank you for your attention to our work. For KQA Pro, we handed over the prediction results on the test set to the authors of KQA Pro for evaluation. Please refer to http://thukeg.gitee.io/kqa-pro/leaderboard.html. In our experiments, the accuracy of the model on the validation set and the test set is almost the same.
@leezythu thank you for your response. Do you have the results file on the validation set, or the category-wise accuracy numbers on the validation set? If you could provide that, it would be very helpful.
No problem. You can give me your email and I will send you the file.
My email is: [email protected]. Thanks a lot!
@leezythu I am eagerly waiting for the results file on the validation set, or the category-wise accuracy numbers on the validation set. Thanks!
Hi @leezythu, I think you sent it to [email protected], but my email address is [email protected]. I think there is a typo between "M" and "RN" in my name :) Could you please re-send it to the correct email address?
There is no spelling mistake in the address; perhaps something else went wrong. I have uploaded the files to the IR_results folder instead.
Hi @leezythu. Thanks for your interesting work. I was particularly interested in the KQA Pro results. Since KQA Pro hasn't released the ground-truth answers of the test set, are the numbers reported in the paper on the validation set or on the test set?