Vicuna-13B results #24
Hi, which config are you using? Vicuna and Llama-2 models have a 4k context-window limit, which caps how many passages you can fit in the context.
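(As a concrete check, the prompt's token count can be compared against that window before generating. This is a minimal sketch, not ALCE's code; the checkpoint ID and the generation budget are assumed placeholders.)

```python
from transformers import AutoTokenizer

MAX_CONTEXT = 4096  # the 4k window mentioned above

# Hypothetical checkpoint; substitute the model you are actually running.
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")

def fits_in_context(prompt: str, max_new_tokens: int = 300) -> bool:
    """True if the prompt plus the generation budget fits in the window."""
    n_prompt = len(tokenizer(prompt)["input_ids"])
    return n_prompt + max_new_tokens <= MAX_CONTEXT
```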
Hi, thank you for your reply. The config is 2-shot, 3-ndoc.
Did you use the "light instruction" version as well?
No, I just used the default setting.
Can you try this config (but change the model name)? https://github.com/princeton-nlp/ALCE/blob/main/configs/asqa_alpaca-7b_shot2_ndoc3_gtr_light_inst.yaml
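(One way to adapt that file is to load it and swap the model name programmatically. The sketch below is illustrative only and assumes the file is plain YAML with a `model` field, which is a guess; check the actual file in the repo.)

```python
import yaml  # PyYAML

# Illustrative only, not an ALCE utility: load the linked config and
# swap in a Vicuna checkpoint. The `model` key is an assumption.
with open("configs/asqa_alpaca-7b_shot2_ndoc3_gtr_light_inst.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["model"] = "lmsys/vicuna-13b-v1.3"  # hypothetical checkpoint name

with open("configs/asqa_vicuna-13b_shot2_ndoc3_gtr_light_inst.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```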
OK, thanks~
Another question: when I use the setting …
Note that there is a difference between EM and QA-EM, and we report EM in the paper. Can you post the full output or …
Hi, this is the config we used to reproduce the result on Vicuna-13B.
So, how can I get the EM score reported in your paper?
That is "str_em".
Fine, thanks!
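(For context, a string-level EM like this is typically a normalized substring match of each gold short answer against the model output, averaged over QA pairs. The sketch below is illustrative only and is not ALCE's actual eval code; the `qa_pairs` structure and function names are assumptions.)

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def str_em(output: str, qa_pairs: list) -> float:
    """Fraction of QA pairs whose gold short answer appears in the output.

    Assumes qa_pairs looks like [{"short_answers": ["...", ...]}, ...].
    """
    if not qa_pairs:
        return 0.0
    norm_out = normalize(output)
    hits = sum(
        any(normalize(ans) in norm_out for ans in pair["short_answers"])
        for pair in qa_pairs
    )
    return hits / len(qa_pairs)
```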
Hello, when I reproduce the results on Vicuna-13B and Llama-2-7B, I cannot get any model output, and the code prints the warning: "Prompt exceeds max length and return an empty string as answer. If this happens too many times, it is suggested to make the prompt shorter". How can I deal with this? Thank you~
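(The usual fix is what was suggested earlier in the thread: fewer passages (lower ndoc), fewer shots, or the light-instruction prompt, so the assembled prompt fits the window. A minimal sketch of that idea, with hypothetical names throughout:)

```python
# Illustrative mitigation, not ALCE's actual code: drop the lowest-ranked
# retrieved passages until the assembled prompt fits the context window.
# `template` is a hypothetical format string with {question} and {docs}
# placeholders; `tokenizer` is a Hugging Face tokenizer as sketched above.
def build_prompt(question, passages, template, tokenizer,
                 max_context=4096, max_new_tokens=300):
    docs = list(passages)
    while docs:
        prompt = template.format(question=question, docs="\n".join(docs))
        n_tokens = len(tokenizer(prompt)["input_ids"])
        if n_tokens + max_new_tokens <= max_context:
            return prompt
        docs.pop()  # discard the last (lowest-ranked) passage and retry
    return ""  # mirrors the empty-string fallback in the warning above
```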