Fix for monodle failure #629
base: main
Conversation
Branch force-pushed from 0a982bf to fd53e87
Branch force-pushed from 0aeabea to f9c284f
input_nid = input_[0]
input_node = graph["nodes"][input_nid]
if input_node["op"] == "parameter" and input_node["name"].endswith("weight"):
    in_channel = input_node["attrs"]["shape"][0][0][0]
Will this handle different shape ranks?
Yes, inputs to convtranspose2d are either going to be 3D or 4D, both of which are handled here. I have now added inputs with different shape ranks to the sanity tests, along the lines of the sketch below.
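A minimal sketch of what such parametrized sanity cases could look like, assuming a PyTorch version that accepts unbatched 3D conv inputs (the test name and exact parameters are illustrative, not the actual test in this PR):

```python
import pytest
import torch


# Hypothetical sanity cases covering both 3D (unbatched) and 4D (batched)
# inputs accepted by torch.nn.ConvTranspose2d.
@pytest.mark.parametrize(
    "input_shape",
    [
        (16, 50, 100),      # 3D: (C_in, H, W)
        (20, 16, 50, 100),  # 4D: (N, C_in, H, W)
    ],
)
def test_conv_transpose2d_shape_ranks(input_shape):
    module = torch.nn.ConvTranspose2d(
        in_channels=16, out_channels=33, kernel_size=(3, 3), stride=2
    )
    output = module(torch.randn(*input_shape))
    # The output keeps the same rank as the input.
    assert output.dim() == len(input_shape)
```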
input_nid = input_[0]
input_node = graph["nodes"][input_nid]
if input_node["op"] == "parameter" and input_node["name"].endswith("weight"):
    in_channel = input_node["attrs"]["shape"][0][0][0]
How does it work for channel first and channel last cases?
input_node["attrs"]["shape"][0][0][0] corresponds to the in-channels from the param being sent
In case of below example, it will be 16 and it would be the same for channelfirst and channel last cases
"in_channels, out_channels, kernel_size, stride, padding, groups, bias, dilation, padding_mode, input_shape",
[ 16, 33, (3, 3), 2, 0, 1, True, 1, "zeros", (20, 16, 50, 100)) ]
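A small sketch of that lookup against a hand-built node dict; the example graph and weight name are made up, and the nesting of `attrs["shape"]` follows the snippet quoted above:

```python
# Hypothetical graph fragment mirroring the lookup in the diff above.
# For ConvTranspose2d the weight shape is (in_channels, out_channels/groups, kH, kW),
# so the first dimension of the weight parameter gives the in-channel count
# regardless of whether the activation itself is channel-first or channel-last.
graph = {
    "nodes": [
        {
            "op": "parameter",
            "name": "conv_transpose.weight",
            # attrs["shape"] is a nested list: [[shape]]
            "attrs": {"shape": [[[16, 33, 3, 3]]]},
        },
    ]
}

input_nid = 0
input_node = graph["nodes"][input_nid]
if input_node["op"] == "parameter" and input_node["name"].endswith("weight"):
    in_channel = input_node["attrs"]["shape"][0][0][0]
    print(in_channel)  # 16
```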
Branch force-pushed from f9c284f to 8f9f3b1
Branch force-pushed from 8f9f3b1 to a5ef92b
    self.padding_bottom,
]

assert self.padding_top == self.padding_bottom, "Padding values for top and bottom must be the same."
As this is a TTNN limitation, do we need to assert it here?
Let's track it through their issues, and model it further if we see that this is by design :))
@nvukobratTT, the TTNN limitation applies to a different model, i.e. MobileNetV2, not to this one. These changes were added to fix the padding handling in the eval function of the convtranspose2d op, as the fix for the monodle model.
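For reference, a hedged sketch of why symmetric padding matters in such an eval: `torch.nn.functional.conv_transpose2d` takes a single padding value per spatial dimension, so a (top, left, bottom, right) padding list can only be forwarded when top == bottom and left == right. The function name and padding layout below are illustrative, not the exact code in this PR:

```python
import torch
import torch.nn.functional as F


def eval_conv_transpose2d(activations, weight, stride, padding):
    """Illustrative eval; padding is assumed to be (top, left, bottom, right)."""
    top, left, bottom, right = padding
    # F.conv_transpose2d accepts one padding value per spatial dim,
    # hence the requirement that top == bottom and left == right.
    assert top == bottom and left == right, "Asymmetric padding is not supported here."
    return F.conv_transpose2d(activations, weight, stride=stride, padding=(top, left))


activations = torch.randn(20, 16, 50, 100)
weight = torch.randn(16, 33, 3, 3)  # (in_channels, out_channels, kH, kW)
out = eval_conv_transpose2d(activations, weight, stride=2, padding=[0, 0, 0, 0])
print(out.shape)  # torch.Size([20, 33, 101, 201])
```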
Fix #513