
Fix gradient shape error for DPMultiheadAttention (issue 650) #651

Closed

Conversation

HuanyuZhang (Contributor)

Summary:
When batch_first = True, the activations and partial gradients for each linear layer in DPMultiheadAttention still have batch_size in the second dimension, which produces gradients of the wrong shape in optimizer.step().

Details in: #650

Differential Revision: D57446245
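A minimal repro sketch of the symptom, assuming the setup described above and in #650 (this is not a test taken from the PR itself; `DPMultiheadAttention` and `GradSampleModule` are the public Opacus APIs):

```python
# Hedged repro sketch: with batch_first=True, inputs are (batch, seq, embed).
# On Opacus versions before this fix, the inner linear projections saw
# activations with batch_size in the second dimension, so the per-sample
# gradient hooks (which treat dim 0 as the batch dimension) produced
# grad_sample tensors with the wrong leading dimension.
import torch
from opacus.grad_sample import GradSampleModule
from opacus.layers import DPMultiheadAttention

batch_size, seq_len, embed_dim, num_heads = 4, 7, 8, 2

attn = GradSampleModule(DPMultiheadAttention(embed_dim, num_heads, batch_first=True))
x = torch.randn(batch_size, seq_len, embed_dim)

out, _ = attn(x, x, x)
out.sum().backward()

for name, p in attn.named_parameters():
    gs = getattr(p, "grad_sample", None)
    if gs is not None:
        # Expected leading dim: batch_size (4). Before the fix, the inner
        # projections reported seq_len (7) here instead, which then broke
        # the shape checks in optimizer.step().
        print(name, tuple(gs.shape))
```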

@facebook-github-bot added the CLA Signed label May 16, 2024
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D57446245

HuanyuZhang added a commit to HuanyuZhang/opacus that referenced this pull request May 16, 2024
Fix gradient shape error for DPMultiheadAttention (issue 650) (pytorch#651)

HuanyuZhang added a commit to HuanyuZhang/opacus that referenced this pull request May 30, 2024
Fix gradient shape error for DPMultiheadAttention (issue 650) (pytorch#651)

Fix gradient shape error for DPMultiheadAttention (issue 650) (pytorch#651)

Reviewed By: EnayatUllah

Differential Revision: D57446245
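For context, a hedged sketch of the general remedy implied by the summary (an assumption about the shape-handling pattern, not the verbatim diff merged in 202c58a): Opacus's per-sample gradient hooks for nn.Linear treat dim 0 of the activation as the batch dimension, so tensors must be batch-first before they reach the hooked projections, whatever layout the caller uses.

```python
import torch

# Hypothetical helpers (names are illustrative, not from the Opacus source):
# keep the batch dimension in dim 0 while tensors flow through the hooked
# linear layers, then restore the caller's layout on the way out.

def to_batch_first(x: torch.Tensor, batch_first: bool) -> torch.Tensor:
    # Ensure (batch, seq, embed) so grad_sample gets batch_size in dim 0.
    return x if batch_first else x.transpose(0, 1)

def to_original_layout(x: torch.Tensor, batch_first: bool) -> torch.Tensor:
    # Undo the transpose for seq-first callers.
    return x if batch_first else x.transpose(0, 1)
```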

@facebook-github-bot (Contributor)

This pull request has been merged in 202c58a.

Labels: CLA Signed, fb-exported, Merged