When running forward inference with the embedding model, the same text at different batch sizes produces embeddings that differ in the last few decimal places #1113
Comments
Try setting the seed to 0.
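A minimal sketch of what this suggestion amounts to in PyTorch (assuming the model is an ordinary torch.nn.Module; the seed value 0 is the one proposed above):

```python
import random

import numpy as np
import torch

# Fix every relevant RNG before loading the model or encoding.
seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
```

As the next reply notes, seeding only controls random operations such as dropout or sampling, which are disabled in eval() mode, so this alone is unlikely to remove the discrepancy.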
The difference is still there, and it shouldn't be related to the seed anyway: this isn't training, it's inference with the model in eval() mode. In theory there should be no randomness at inference time, so I can't figure out where the difference comes from.
It doesn't look like a random result; it's more likely precision loss. I reproduced a similar situation with CPU inference, but so far haven't seen the problem with GPU inference. My guess is that the error arises from how the hardware loads the model, transfers data, or carries out the computation.
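A self-contained illustration of the precision-loss explanation: float32 addition is not associative, so summing the same numbers in a different order, which is exactly what different batch sizes or hardware kernels can cause inside a matrix multiply's reductions, gives slightly different results. This is a generic sketch, not this project's code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

s_library = np.sum(x)                            # NumPy's (pairwise) reduction order
s_shuffled = np.sum(x[rng.permutation(x.size)])  # same values, permuted order
s_sequential = np.float32(0.0)
for v in x:                                      # strictly left-to-right order
    s_sequential += v

# The three sums typically agree only to a few decimal places.
print(s_library, s_shuffled, s_sequential)
```

An embedding is built from thousands of such reductions, so a discrepancy confined to a few trailing decimal places is consistent with this effect rather than with any random number generator.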
During forward inference with the embedding model, the same text run at different batch sizes produces embeddings that differ in the last few decimal places. What causes this? In theory, identical inputs should produce exactly identical outputs.
The example code was used:
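The example code itself was not captured in this scrape. As a stand-in, the sketch below checks the reported behavior; the sentence-transformers API and the model name BAAI/bge-large-zh-v1.5 are assumptions for illustration, not the code the reporter ran:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed API, not the original sample code

# Hypothetical model choice; substitute the embedding model from the issue.
model = SentenceTransformer("BAAI/bge-large-zh-v1.5")

text = "the same text, encoded at two different batch sizes"

# Encode the identical text alone and inside a larger batch.
e1 = model.encode([text])[0]
e8 = model.encode([text] * 8)[0]

# The vectors may disagree in the last few decimal places...
print(np.abs(e1 - e8).max())
# ...but should still be equal within float32 tolerance.
print(np.allclose(e1, e8, atol=1e-5))
```

Mixing texts of different lengths in one batch is an even more reliable trigger, because padding changes the tensor shapes and hence the kernels and reduction orders used inside the matrix multiplies.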