accuracy issue #190
Comments
@saifullah27 you probably did not save and load the word2vec model and the scaler as well. They're essential for getting reasonable performance.
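(For reference, all three artifacts have to be persisted after training, not just the Keras model. A minimal sketch based on the Magpie README; the paths, training directory and training parameters below are placeholders:)

```python
from magpie import Magpie

labels = ['Theory-HEP', 'Gravitation and Cosmology', 'Phenomenology-HEP',
          'Astrophysics', 'Experiment-HEP']

magpie = Magpie()
magpie.train_word2vec('data/hep-categories', vec_dim=100)   # train the word embeddings
magpie.fit_scaler('data/hep-categories')                    # fit the feature scaler
magpie.train('data/hep-categories', labels, test_ratio=0.2, epochs=30)

# persist all three artifacts - the classifier alone is not enough
magpie.save_word2vec_model('save/embeddings')
magpie.save_scaler('save/scaler', overwrite=True)
magpie.save_model('save/model.h5')
```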
Can you please take a look at my code? What am I doing wrong?
Looks correct 👍 You don't need to run
I extracted some rows from my train set as an unseen dataset. Now I want to execute the code again:
How do you test the model, i.e. how do you run it on the unseen data?
First I wrote this code:

    model = Magpie()
    predictions = model.predict_from_file(unseendata)

Now I did this:

    predictions = model.predict_from_file(unseendata)  # same dataset as above
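(A bare `Magpie()` has no trained Keras model, word2vec embeddings or scaler attached, so it cannot reproduce the trained model's predictions. A sketch of reloading the saved artifacts before predicting, assuming they were saved as in the earlier sketch; `unseendata` is the same file path used in the snippet above:)

```python
from magpie import Magpie

# rebuild the trained instance from the saved artifacts instead of a bare Magpie()
model = Magpie(
    keras_model='save/model.h5',
    word2vec_model='save/embeddings',
    scaler='save/scaler',
    labels=['Theory-HEP', 'Gravitation and Cosmology', 'Phenomenology-HEP',
            'Astrophysics', 'Experiment-HEP'],
)

predictions = model.predict_from_file(unseendata)
```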
That looks very weird, Magpie should give you the same answer. What happens if you run it again on the same data?
Yes, it fails.
That doesn't seem possible. How do you compute accuracy?
I think this is some issue with the weights: the model doesn't save its weights. I have read about this issue in several places.
I also confirmed that there is a problem with the saved model.

    [('Theory-HEP', 0.48431695), ('Gravitation and Cosmology', 0.36985886), ('Phenomenology-HEP', 0.33781576), ('Astrophysics', 0.2659044), ('Experiment-HEP', 1.513857e-06)]

Loading the saved model returns a different result every time it is re-loaded and run:

    [('Phenomenology-HEP', 0.48431695), ('Astrophysics', 0.36985886), ('Theory-HEP', 0.33781576), ('Experiment-HEP', 0.2659044), ('Gravitation and Cosmology', 1.513857e-06)]
    [('Experiment-HEP', 0.48431695), ('Theory-HEP', 0.36985886), ('Phenomenology-HEP', 0.33781576), ('Astrophysics', 0.2659044), ('Gravitation and Cosmology', 1.513857e-06)]
    [('Astrophysics', 0.6604629), ('Gravitation and Cosmology', 0.464918), ('Experiment-HEP', 0.22854687), ('Phenomenology-HEP', 0.20457356), ('Theory-HEP', 9.436367e-07)]

Any idea why this is happening?
You have not saved the model weights after training. You need to save the model weights, then load them and compile the model from the saved weights.
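(In plain Keras terms, that corresponds to something like the sketch below; this is the generic Keras save/load mechanism, not Magpie-specific code:)

```python
from keras.models import Sequential
from keras.layers import Dense

def build_model():
    # the same architecture must be rebuilt before saved weights can be loaded
    model = Sequential([Dense(5, input_dim=100, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

model = build_model()
# ... model.fit(...) goes here ...
model.save_weights('weights.h5')       # persist the trained weights

# later, in a fresh process:
restored = build_model()               # rebuild and compile the architecture
restored.load_weights('weights.h5')    # then attach the saved weights
```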
Are you getting different answers also without saving & reloading? Just train it, keep it in memory and query it 10 times. I'd be curious whether it gives deterministic responses.
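(A quick way to check that, assuming the trained `magpie` instance is still in memory; the sample text is made up:)

```python
sample = "We study the cosmic microwave background and inflationary cosmology."

# query the in-memory model repeatedly;
# a deterministic model returns exactly the same ranking every time
for _ in range(10):
    print(magpie.predict_from_text(sample))
```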
No, if I don't save/load the model, it gives almost the same answer. There are only slight changes in the accuracy rate.
@vnwind please mail me directly at [email protected]
Hello, I trained the model and it showed a top_k_accuracy of around 80%. Then I predicted on my test set, computed the mean average precision score, and it came out at 16%. Do you think this is just because they are different scoring metrics?
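(For what it's worth, the two numbers are not directly comparable: top_k_accuracy only checks whether a correct label lands among the k highest-scored labels, while mean average precision scores the entire ranking. A toy illustration with scikit-learn, using made-up scores rather than the HEP data:)

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# one sample, 5 labels; the single true label is ranked 2nd by the model
y_true = np.array([[0, 1, 0, 0, 0]])
y_score = np.array([[0.9, 0.8, 0.3, 0.2, 0.1]])

# top-2 accuracy: is the true label among the 2 highest-scored labels? -> 1.0
hits = []
for scores, truth in zip(y_score, y_true):
    top2 = np.argsort(scores)[::-1][:2]
    hits.append(bool(truth[top2].any()))
print("top-2 accuracy:", np.mean(hits))                                  # 1.0

# mean average precision penalises the wrong label ranked above it -> 0.5
print("MAP:", label_ranking_average_precision_score(y_true, y_score))    # 0.5
```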
@saifullah27 @jstypka @hzsolutions [Fix Found] |
Hi, I trained the model and the accuracy was 80% on the test dataset, so I saved the model.
When I load the saved model and measure the accuracy again, I get 1%. Is this an issue with the random seed, or something else? Looking forward to your response. Thanks.