
Commit

Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (pytorch#660)

Summary:
Pull Request resolved: pytorch#660

Fix for GitHub issue [#649](pytorch#649)

**Background**: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer exposed `param_groups`, `state`, and `defaults` simply by reference. As a result, another object could update `param_groups` on the DPOptimizer while neglecting to update them on the original Optimizer. This shows up, e.g., with a learning-rate (LR) scheduler: the learning rate looks as if it is being updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).
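A minimal toy sketch of the reference-sharing pitfall described above (plain Python, not Opacus code; the class names here are hypothetical):

```python
# Toy illustration of the pre-fix pitfall: the wrapper copies param_groups once,
# by reference, so rebinding the wrapper's attribute detaches it from the
# wrapped optimizer.
class ToyOptimizer:
    def __init__(self):
        self.param_groups = [{"lr": 0.1}]


class ToyWrapper:
    def __init__(self, original):
        self.original_optimizer = original
        self.param_groups = original.param_groups  # shared by reference only


base = ToyOptimizer()
wrapped = ToyWrapper(base)

# An external object (e.g. a scheduler) rebinds the wrapper's attribute.
wrapped.param_groups = [{"lr": 0.05}]

print(wrapped.param_groups[0]["lr"])  # 0.05 -- looks updated
print(base.param_groups[0]["lr"])     # 0.1  -- the optimizer that matters is unchanged
```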

**Fix**: In this fix, we use the `property` decorator so that these three attributes stay in sync between DPOptimizer and the original Optimizer.
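As a usage-level sketch of the intended behavior (assuming the `DPOptimizer` constructor from `opacus.optimizers` and a standard PyTorch `StepLR` scheduler; the values are illustrative and this snippet is not part of the commit), a scheduler step on the wrapper should now be visible through the original optimizer as well:

```python
import torch
from opacus.optimizers import DPOptimizer

model = torch.nn.Linear(4, 2)
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Wrap the user's optimizer; constructor arguments here are illustrative.
dp_optimizer = DPOptimizer(
    base_optimizer,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=8,
)

# The scheduler mutates dp_optimizer.param_groups; with the property delegation,
# the change lands directly in base_optimizer.param_groups.
scheduler = torch.optim.lr_scheduler.StepLR(dp_optimizer, step_size=1, gamma=0.5)
scheduler.step()

# Both views report the decayed learning rate, since they refer to the same groups.
assert dp_optimizer.param_groups[0]["lr"] == base_optimizer.param_groups[0]["lr"]
```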

Differential Revision: D60453849
iden-kalemaj authored and facebook-github-bot committed Jul 30, 2024
1 parent f1d0e02 commit 96d5c6b
Showing 1 changed file with 44 additions and 0 deletions.
44 changes: 44 additions & 0 deletions opacus/optimizers/optimizer.py
@@ -376,6 +376,50 @@ def accumulated_iterations(self) -> int:
            )
        return vals[0]

    @property
    def param_groups(self) -> List[dict]:
        """
        Returns a list containing a dictionary of all parameters managed by the optimizer.
        """
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups: List[dict]):
        """
        Updates the param_groups of the optimizer, where param_groups is a list containing a dictionary
        of all parameters managed by the optimizer.
        """
        self.original_optimizer.param_groups = param_groups

    @property
    def state(self) -> defaultdict(dict):
        """
        Returns a dictionary holding current optimization state.
        """
        return self.original_optimizer.state

    @state.setter
    def state(self, state: defaultdict(dict)):
        """
        Updates the state of the optimizer, where state is a dictionary holding current optimization state.
        """
        self.original_optimizer.state = state

    @property
    def defaults(self) -> dict:
        """
        Returns a dictionary containing default values for optimization.
        """
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, defaults: dict):
        """
        Updates the defaults of the optimizer, where defaults is a dictionary containing default values for optimization.
        """
        self.original_optimizer.defaults = defaults

    def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
        """
        Attaches a hook to be executed after gradient clipping/noising, but before the
