Hello @hanoonaR!

I’m currently extending your model and trying to better understand how multiple masks are handled during training, particularly with respect to multiple `[SEG]` tokens. I noticed that when `batch_size=1`, this line appears to calculate only a single mask, even when multiple `[SEG]` tokens are predicted.
As a result, it seems that `seg_token_offset` would have the shape `1xM` (where `M` is the number of predicted `[SEG]` tokens), and the following loop:
```python
for i in range(len(seg_token_offset) - 1):
```
would only execute once.
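For context, here is a minimal sketch of how models in this family commonly build `seg_token_offset`: a cumulative sum of per-sample `[SEG]` counts with a prepended zero, so its length is `batch_size + 1` rather than the number of `[SEG]` tokens. The function name and shapes below are illustrative assumptions, not the repository's exact code:

```python
import numpy as np

def build_seg_token_offset(seg_token_mask: np.ndarray) -> np.ndarray:
    """Hypothetical helper mirroring the common LISA-style pattern.

    seg_token_mask: (batch_size, seq_len) boolean mask marking [SEG] positions.
    Returns offsets of shape (batch_size + 1,).
    """
    counts = seg_token_mask.sum(axis=1)              # number of [SEG] tokens per sample
    return np.concatenate([[0], np.cumsum(counts)])  # prepend 0 for slicing

# batch_size = 1, with M = 3 predicted [SEG] tokens in that single sample
mask = np.zeros((1, 10), dtype=bool)
mask[0, [2, 5, 7]] = True
offset = build_seg_token_offset(mask)
print(offset)  # [0 3]

# The loop `for i in range(len(offset) - 1)` therefore runs
# len(offset) - 1 = batch_size = 1 time; that single iteration would
# slice out all 3 per-token embeddings via offset[i]:offset[i + 1].
```

Under this reading, the loop iterates once *per sample*, not once per `[SEG]` token, which would explain the behavior observed above.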
Could you clarify whether this behavior is intended, or whether there is something I’m missing about how multiple masks should be output in this case?
Thanks in advance for your insights!
Best,
Josh