can't get proper prediction score from python when --boosting N option used at training time #4597
Labels: Bug (bug in learning semantics, critical by default)

Comments
Seems like a bug with boosting (and nn/loss function are not relevant here).
Output:
yeah seems like some binary classification logic is hardcoded there:
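To make the "hardcoded binary classification logic" hypothesis concrete, here is an illustrative sketch (not vw source code, and the function name is made up): if a boosting reduction combines its weak learners through a hardcoded sign/threshold step, the raw margin is discarded and every positive-margin example collapses to the same final prediction, which would explain the constant "1.0" reported above.

```python
# Illustrative sketch only -- boosted_predict is a hypothetical stand-in,
# not vw's actual reduction code.
def boosted_predict(weak_scores, weights):
    # Weighted vote over *thresholded* (binary) weak predictions.
    margin = sum(w * (1.0 if s > 0 else -1.0)
                 for s, w in zip(weak_scores, weights))
    # Hardcoded binary logic: the raw margin is discarded here.
    return 1.0 if margin > 0 else -1.0

# Two different sets of raw scores produce the same final prediction:
print(boosted_predict([0.9, 0.2, 0.4], [1.0, 1.0, 1.0]))   # 1.0
print(boosted_predict([0.1, 0.05, 0.3], [1.0, 1.0, 1.0]))  # 1.0
```

Under this behavior, only the sign survives the reduction, so a probability or raw score can never reach the caller.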
Describe the bug
I've trained a small model using the --boosting N option, for example:
--loss_function logistic -b 18 --l1 0.1 --l2 0.0001 --nn 50 --boosting 5
using Python or the command line. Then, when I try to get prediction scores out of it, I always get "1.0", whether I use Python or -p on the command line. When I use the -r command-line parameter with the vw CLI executable, I get the list of all features with their scores and a final meaningful score at the end. It seems impossible to get these values from within the Python classes, no matter what I try (playing with the various PredictionType values).
This seems to be either a documentation bug or, perhaps, a more serious issue with the boosting reduction.
How to reproduce
--loss_function logistic -b 18 --l1 0.1 --l2 0.0001 --nn 50 --boosting 5
vw -i model.vw -t test.txt -r /dev/stdout
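As a stopgap while the Python path is broken, the meaningful score could be pulled out of the -r output described above. The sketch below is a hypothetical helper, not part of vw or its Python bindings, and it assumes (per the description above) that each example's raw line ends with the final score; the exact -r format may vary across vw versions.

```python
# Hypothetical workaround: recover final scores from vw's raw-prediction
# output, assuming the final score is the last token on each line.
def final_scores(raw_lines):
    scores = []
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        # Take the last whitespace-separated token and parse it as a float.
        try:
            scores.append(float(line.split()[-1]))
        except ValueError:
            continue  # skip lines that do not end in a number
    return scores

# Example with made-up raw lines (feature:score pairs, then the final score):
print(final_scores(["f1:0.3 f2:-0.1 0.7342", "f1:0.1 -0.2211"]))
# [0.7342, -0.2211]
```

One could feed this the lines of the /dev/stdout capture from the command above; it is only a parsing shim, not a fix for the underlying prediction-type issue.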
Version
9.2.0
OS
Linux
Language
Python
Additional context
No response