Fix issue with setting of param_group for the DPOptimizer wrapper #660
Conversation
This pull request was exported from Phabricator. Differential Revision: D60453849
Fix issue with setting of param_group for the DPOptimizer wrapper (pytorch#660)

Summary: Pull Request resolved: pytorch#660. Fix for GitHub issue [#649](pytorch#649).

**Background**: DPOptimizer is a wrapper for the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer took `param_groups`, `state`, and `defaults` from the original Optimizer simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update it on the original Optimizer. This shows up, for example, with an LR (learning rate) scheduler: the learning rate appears to be updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).

**Fix**: Use the property decorator so that these three attributes always remain the same between DPOptimizer and the wrapped Optimizer.

Differential Revision: D60453849
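As a rough illustration of the property-based delegation the fix describes, here is a minimal sketch. It is not the actual Opacus `DPOptimizer` implementation, and the class name is hypothetical; it only shows the pattern of exposing `param_groups`, `state`, and `defaults` as properties that always read from and write to the wrapped optimizer, so the two objects cannot drift apart.

```python
import torch


class SyncedOptimizerWrapper:  # hypothetical name, for illustration only
    """Sketch of a wrapper that never keeps its own copies of the three attributes."""

    def __init__(self, original_optimizer: torch.optim.Optimizer):
        self.original_optimizer = original_optimizer

    @property
    def param_groups(self):
        # Reads always reflect the wrapped optimizer's current param_groups.
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, value):
        # Rebinding param_groups on the wrapper rebinds it on the original too.
        self.original_optimizer.param_groups = value

    @property
    def state(self):
        return self.original_optimizer.state

    @state.setter
    def state(self, value):
        self.original_optimizer.state = value

    @property
    def defaults(self):
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, value):
        self.original_optimizer.defaults = value

    def step(self, closure=None):
        return self.original_optimizer.step(closure)

    def zero_grad(self, set_to_none: bool = True):
        return self.original_optimizer.zero_grad(set_to_none)
```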
The branch was force-pushed several times during review, each time re-exporting the same Phabricator diff (D60453849) with an unchanged summary:
- 620356b to 96d5c6b
- 96d5c6b to 705038f
- 705038f to 913bfeb
- 913bfeb to 1b4c676
- 1b4c676 to 265f306
- 265f306 to 91c9b1b
This pull request has been merged in a059670.
Summary:
Background: DPOptimizer is a wrapper for the original non-DP Optimizer selected by the user. `param_groups` stores all parameters related to the learning algorithm, including privacy-related parameters.
Issue: Previously, DPOptimizer took `param_groups` from the original Optimizer simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update it on the original Optimizer. This shows up, for example, with an LR (learning rate) scheduler: the learning rate appears to be updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).
Fix: Set `param_groups` in DPOptimizer via a property, which ensures that `param_groups` stays in sync between DPOptimizer and the wrapped Optimizer.
Differential Revision: D60453849
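A hypothetical usage example against the sketch above (the `SyncedOptimizerWrapper` name is illustrative, not Opacus API), showing that a learning-rate change applied through the wrapper's `param_groups` is now visible to the wrapped optimizer as well:

```python
import torch

w = torch.nn.Parameter(torch.zeros(3))
base_optimizer = torch.optim.SGD([w], lr=0.1)
wrapped = SyncedOptimizerWrapper(base_optimizer)

# Rebind param_groups through the wrapper (the kind of wholesale update that
# previously only reached the wrapper) and lower the learning rate.
new_groups = [dict(g, lr=0.05) for g in wrapped.param_groups]
wrapped.param_groups = new_groups

# Both views agree, because the wrapper delegates to the original optimizer.
assert wrapped.param_groups[0]["lr"] == base_optimizer.param_groups[0]["lr"] == 0.05
```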