
In DeepFm.py line 102, why is the second-order term's size (None, K) rather than (None,)? #78

Open
Wang-Yu-Qing opened this issue Sep 26, 2021 · 2 comments

Comments

@Wang-Yu-Qing

According to equation (2) in the paper, the second-order term should be a scalar per example. However, in the code it is computed as a vector of size K. I think line 102 should be something like this (summing over the embedding dimension to get shape (None,)):
self.y_second_order = 0.5 * tf.reduce_sum(tf.subtract(self.summed_features_emb_square, self.squared_sum_features_emb), axis=1)
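To make the shape difference concrete, here is a minimal, self-contained sketch of the FM "square of sum minus sum of squares" trick. It is not the repo's exact code; the tensor `embeddings` and the sizes (batch 2, 3 fields, K = 4) are illustrative only.

```python
import tensorflow as tf

# Illustrative shapes (not from the repo): batch of 2 examples,
# 3 feature fields, embedding size K = 4.
embeddings = tf.random.normal([2, 3, 4])                                # None * F * K

# FM second-order interaction via the "square of sum minus sum of squares" trick.
summed_features_emb = tf.reduce_sum(embeddings, axis=1)                 # None * K
summed_features_emb_square = tf.square(summed_features_emb)             # None * K
squared_features_emb = tf.square(embeddings)                            # None * F * K
squared_sum_features_emb = tf.reduce_sum(squared_features_emb, axis=1)  # None * K

# As in the repo's line 102: keep one value per embedding dimension.
y_second_order_vec = 0.5 * tf.subtract(
    summed_features_emb_square, squared_sum_features_emb)               # None * K

# As in equation (2) of the paper: also sum over the K dimensions,
# giving one scalar per example.
y_second_order_scalar = tf.reduce_sum(y_second_order_vec, axis=1)       # None

print(y_second_order_vec.shape)     # (2, 4)
print(y_second_order_scalar.shape)  # (2,)
```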

@nkqiaolin

IMO, the original paper says each of the three components should be a scalar, and the scalars are then summed and fed into sigmoid(). But you can also treat that sum of scalars as sum(w1*x1, w2*x2, ...) with w1 = w2 = ... = 1. In this repo, the author treats all of these w as trainable variables, which is an extension of the original paper. Although the effect of this modification is unknown, the implementation still follows the core idea of the paper.
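To illustrate that point, here is a small sketch (my own, not the repo's code) of keeping the second-order term as a (None, K) vector and letting the final projection learn one weight per dimension. The shapes and the use of tf.keras.layers.Dense are assumptions standing in for the repo's explicit weight/bias variables.

```python
import tensorflow as tf

# Hypothetical component shapes: first-order term None * F,
# second-order term None * K, deep component None * H.
y_first_order = tf.random.normal([2, 3])
y_second_order = tf.random.normal([2, 4])
y_deep = tf.random.normal([2, 8])

# Concatenate all components instead of summing pre-reduced scalars.
concat_input = tf.concat([y_first_order, y_second_order, y_deep], axis=1)  # None * (F+K+H)

# Final projection: one trainable weight per concatenated unit.
# If the weights on the K second-order units were fixed to 1, this would
# reduce to the paper's plain sum over the embedding dimensions;
# learning them is the extension described above.
dense = tf.keras.layers.Dense(1, activation="sigmoid")
out = dense(concat_input)  # None * 1
print(out.shape)           # (2, 1)
```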

@alpha008

alpha008 commented Mar 10, 2022 via email
