
Fix issue with setting of param_group for the DPOptimizer wrapper #660

Closed
wants to merge 1 commit

Conversation

iden-kalemaj
Contributor

Summary:
Background: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. param_groups stores all parameters related to the learning algorithm, including privacy-related parameters.

Issue: Previously, DPOptimizer exposed param_groups simply by reference. Another object could therefore update param_groups on the DPOptimizer while neglecting to update the same parameters on the original Optimizer. The issue shows up, for example, with learning rate (LR) schedulers: the learning rate appears to be updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).

Fix: In this fix, we set param_groups on DPOptimizer via a property, which ensures that param_groups stays in sync between DPOptimizer and the original Optimizer.

Differential Revision: D60453849
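
To make the fix concrete, below is a minimal sketch of the property-based write-through. The class name `WrappedOptimizer` and its `original_optimizer` attribute are illustrative assumptions for this sketch, not the exact Opacus `DPOptimizer` code.

```python
# Minimal illustrative sketch of delegating param_groups to the wrapped
# optimizer via a property (hypothetical class, not the Opacus implementation).
from typing import Any, Dict, List

import torch


class WrappedOptimizer:
    def __init__(self, optimizer: torch.optim.Optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self) -> List[Dict[str, Any]]:
        # Read through to the wrapped optimizer instead of keeping a separate copy.
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups: List[Dict[str, Any]]) -> None:
        # Write through as well, so reassignments reach the original optimizer.
        self.original_optimizer.param_groups = param_groups


# Quick check: an update made through the wrapper (as an LR scheduler would do)
# is visible on the original optimizer.
model = torch.nn.Linear(4, 2)
inner = torch.optim.SGD(model.parameters(), lr=0.1)
wrapped = WrappedOptimizer(inner)
wrapped.param_groups[0]["lr"] = 0.05
assert inner.param_groups[0]["lr"] == 0.05
```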

@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jul 30, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D60453849

iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request Jul 30, 2024
…izer wrapper (pytorch#660)

Summary:
Pull Request resolved: pytorch#660

Fix for GitHub issue [#649](pytorch#649)

**Background**: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer exposed `param_groups`, `state`, and `defaults` simply by reference. Another object could therefore update these attributes on the DPOptimizer while neglecting to update them on the original Optimizer. The issue shows up, for example, with learning rate (LR) schedulers: the learning rate appears to be updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).

**Fix**: In this fix, we use the property decorator to ensure that these three attributes remain the same between DPOptimizer and the original Optimizer.

Differential Revision: D60453849
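
Since the commit message above refers to three attributes, here is a hedged, self-contained sketch that delegates all of `param_groups`, `state`, and `defaults` through read/write properties. The class name `DPOptimizerSketch` is hypothetical; this is not the exact Opacus code.

```python
# Illustrative sketch only: all three bookkeeping attributes delegate to the
# wrapped optimizer, so reads and writes on the wrapper reach the original.
from typing import Any, Dict, List

import torch


class DPOptimizerSketch:
    def __init__(self, optimizer: torch.optim.Optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self) -> List[Dict[str, Any]]:
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, value: List[Dict[str, Any]]) -> None:
        self.original_optimizer.param_groups = value

    @property
    def state(self) -> Dict[Any, Any]:
        return self.original_optimizer.state

    @state.setter
    def state(self, value: Dict[Any, Any]) -> None:
        self.original_optimizer.state = value

    @property
    def defaults(self) -> Dict[str, Any]:
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, value: Dict[str, Any]) -> None:
        self.original_optimizer.defaults = value
```

Because the getters return the wrapped optimizer's own objects and the setters write back to it, code such as an LR scheduler or a state-dict load that reads or replaces these attributes through the wrapper operates on the original Optimizer's data.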
@facebook-github-bot
Contributor

This pull request has been merged in a059670.

Labels
CLA Signed, fb-exported, Merged