Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (pytorch#660)

Summary:
Pull Request resolved: pytorch#660

Fix for GitHub issue [#649](pytorch#649)

**Background**: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer passed `param_groups`, `state`, and `defaults` simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update the same attributes on the original Optimizer. This shows up, for example, with the LR scheduler (learning rate scheduler): the learning rate appears to be updated on the DPOptimizer, but it is never actually updated on the original Optimizer (the one that matters).

**Fix**: This fix uses the property decorator to ensure that the three attributes stay in sync between DPOptimizer and the original Optimizer.

Differential Revision: D60453849
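As a rough illustration of the approach (a minimal sketch, not the actual Opacus implementation; the class name `DPOptimizerSketch` and its structure are assumptions for this example), the wrapper can expose `param_groups`, `state`, and `defaults` as properties that read from and write to the wrapped optimizer, so an LR scheduler mutating the wrapper's `param_groups` affects the real optimizer:

```python
from typing import Any, Dict, List

from torch.optim import Optimizer


class DPOptimizerSketch:
    """Hypothetical wrapper sketch: delegates optimizer attributes via properties."""

    def __init__(self, optimizer: Optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self) -> List[Dict[str, Any]]:
        # Always read through to the wrapped optimizer, so changes made by
        # e.g. a learning rate scheduler are visible to both objects.
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups: List[Dict[str, Any]]) -> None:
        # Writes go straight to the wrapped optimizer instead of a stale copy.
        self.original_optimizer.param_groups = param_groups

    @property
    def state(self) -> Dict[Any, Any]:
        return self.original_optimizer.state

    @state.setter
    def state(self, state: Dict[Any, Any]) -> None:
        self.original_optimizer.state = state

    @property
    def defaults(self) -> Dict[str, Any]:
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, defaults: Dict[str, Any]) -> None:
        self.original_optimizer.defaults = defaults
```

With this delegation, code such as `scheduler.step()`, which rewrites `param_groups[i]["lr"]` on the wrapper, updates the underlying optimizer's learning rate as well, rather than only the wrapper's private copy.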