scale fixed #15
Open
wants to merge 1 commit into base: master
modules.py: 8 changes (4 additions, 4 deletions)
@@ -54,7 +54,7 @@ def embedding(inputs,
       num_units: An int. Number of embedding hidden units.
       zero_pad: A boolean. If True, all the values of the fist row (id 0)
         should be constant zeros.
-      scale: A boolean. If True. the outputs is multiplied by sqrt num_units.
+      scale: A boolean. If True. the outputs is divided by sqrt num_units.
       scope: Optional scope for `variable_scope`.
       reuse: Boolean, whether to reuse the weights of a previous layer
         by the same name.
@@ -112,7 +112,7 @@ def embedding(inputs,
         outputs = tf.nn.embedding_lookup(lookup_table, inputs)

         if scale:
-            outputs = outputs * (num_units ** 0.5)
+            outputs = outputs / (num_units ** 0.5)

     return outputs
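
For reference, here is a minimal NumPy sketch (illustrative only, not the repository's TensorFlow code; the array names and sizes are made up) of what the embedding scale change amounts to: after the table lookup, the outputs are divided by sqrt(num_units) rather than multiplied by it.

    import numpy as np

    vocab_size, num_units = 10, 4                   # illustrative sizes
    lookup_table = np.random.randn(vocab_size, num_units).astype(np.float32)
    inputs = np.array([[1, 2, 3]])                  # (N, T) batch of token ids

    outputs = lookup_table[inputs]                  # embedding lookup -> (N, T, num_units)

    scale = True
    if scale:
        # before this PR: outputs = outputs * (num_units ** 0.5)
        outputs = outputs / (num_units ** 0.5)      # change proposed in this PR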

@@ -129,7 +129,7 @@ def positional_encoding(inputs,
       inputs: A 2d Tensor with shape of (N, T).
       num_units: Output dimensionality
       zero_pad: Boolean. If True, all the values of the first row (id = 0) should be constant zero
-      scale: Boolean. If True, the output will be multiplied by sqrt num_units(check details from paper)
+      scale: Boolean. If True, the output will be divided by sqrt num_units(check details from paper)
       scope: Optional scope for `variable_scope`.
       reuse: Boolean, whether to reuse the weights of a previous layer
         by the same name.
@@ -160,7 +160,7 @@ def positional_encoding(inputs,
         outputs = tf.nn.embedding_lookup(lookup_table, position_ind)

         if scale:
-            outputs = outputs * num_units**0.5
+            outputs = outputs / (num_units**0.5)

     return outputs

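Similarly, a minimal NumPy sketch of the positional-encoding scale change (again illustrative: the sinusoidal table below follows the standard formulation from the paper and may not match the repository's exact expression):

    import numpy as np

    N, T, num_units = 2, 5, 4                       # illustrative batch size, length, dims

    # Sinusoidal position table of shape (T, num_units): sin on even dims, cos on odd dims.
    position_enc = np.array([
        [pos / np.power(10000, 2.0 * (i // 2) / num_units) for i in range(num_units)]
        for pos in range(T)])
    position_enc[:, 0::2] = np.sin(position_enc[:, 0::2])
    position_enc[:, 1::2] = np.cos(position_enc[:, 1::2])

    position_ind = np.tile(np.arange(T), (N, 1))    # (N, T) position indices
    outputs = position_enc[position_ind]            # lookup -> (N, T, num_units)

    scale = True
    if scale:
        # before this PR: outputs = outputs * num_units**0.5
        outputs = outputs / (num_units ** 0.5)      # change proposed in this PR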