Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (#660)

Summary:
Pull Request resolved: #660

Fix for GitHub issue [#649](#649).

**Background**: `DPOptimizer` is a wrapper around the original, non-DP `Optimizer` selected by the user. `param_groups`, `state`, and `defaults` are attributes of `DPOptimizer` that store all parameters related to the learning algorithm, including privacy-related parameters.
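
For context, a minimal sketch of the wrapper relationship as seen from user code (the direct `DPOptimizer` construction and its argument values are illustrative; in typical usage the wrapper is produced by `PrivacyEngine.make_private`):

```python
import torch
from opacus.optimizers import DPOptimizer

model = torch.nn.Linear(4, 2)
original = torch.optim.SGD(model.parameters(), lr=0.1)

# DPOptimizer wraps the user's optimizer and adds the DP machinery on top.
dp_optimizer = DPOptimizer(
    optimizer=original,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=32,
)

# Both objects expose the standard torch.optim.Optimizer attributes,
# and they must always refer to the same underlying data.
assert dp_optimizer.param_groups[0]["lr"] == original.param_groups[0]["lr"]
```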

**Issue**: Previously, `DPOptimizer` bound `param_groups`, `state`, and `defaults` once, at construction time, simply by reference. Another object could therefore update, e.g., `param_groups` on the `DPOptimizer` while the corresponding attribute of the original `Optimizer` was never updated. The issue shows up, for example, with LR (learning rate) schedulers: the learning rate appears to be updated for the `DPOptimizer`, but it is not actually updated for the original `Optimizer` (the one that matters).
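
The failure mode is the generic one for a wrapper that copies an attribute reference at construction time; a stripped-down sketch of it (no Opacus code, all names illustrative):

```python
class InnerOptimizer:
    def __init__(self):
        self.param_groups = [{"lr": 0.1}]


class Wrapper:
    def __init__(self, inner):
        self.inner = inner
        # Pre-fix behaviour: bind the attribute once, by reference.
        self.param_groups = inner.param_groups


inner = InnerOptimizer()
wrapper = Wrapper(inner)

# Code that only holds the wrapper rebinds its attribute...
wrapper.param_groups = [{"lr": 0.01}]

print(wrapper.param_groups[0]["lr"])  # 0.01
print(inner.param_groups[0]["lr"])    # 0.1 -- the inner optimizer never saw the update
```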

**Fix**: Expose the three attributes through the `property` decorator, with getters and setters that read from and write to the original `Optimizer`, so that `param_groups`, `state`, and `defaults` always remain the same between `DPOptimizer` and the original `Optimizer`.
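
This is the pattern the diff below adds, shown here in stripped-down form (illustrative names):

```python
class Wrapper:
    def __init__(self, inner):
        self.inner = inner

    @property
    def param_groups(self):
        # Reads always go through to the wrapped optimizer.
        return self.inner.param_groups

    @param_groups.setter
    def param_groups(self, value):
        # Writes are forwarded too, so the two objects cannot diverge.
        self.inner.param_groups = value
```

With this in place, `wrapper.param_groups = ...` (or an LR scheduler driving the wrapper) lands on the wrapped optimizer as well; the actual change repeats the same getter/setter pair for `state` and `defaults`.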

Differential Revision: D60453849
iden-kalemaj authored and facebook-github-bot committed Aug 1, 2024
1 parent f1d0e02 commit 91c9b1b
Showing 1 changed file with 43 additions and 5 deletions: opacus/optimizers/optimizer.py
@@ -15,6 +15,7 @@
 from __future__ import annotations
 
 import logging
+from collections import defaultdict
 from typing import Callable, List, Optional, Union
 
 import torch
@@ -240,10 +241,6 @@ def __init__(
         self.step_hook = None
         self.generator = generator
         self.secure_mode = secure_mode
-
-        self.param_groups = self.original_optimizer.param_groups
-        self.defaults = self.original_optimizer.defaults
-        self.state = self.original_optimizer.state
         self._step_skip_queue = []
         self._is_last_step_skipped = False
 
@@ -376,6 +373,48 @@ def accumulated_iterations(self) -> int:
             )
         return vals[0]
 
+    @property
+    def param_groups(self) -> List[dict]:
+        """
+        Returns a list containing a dictionary of all parameters managed by the optimizer.
+        """
+        return self.original_optimizer.param_groups
+
+    @param_groups.setter
+    def param_groups(self, param_groups: List[dict]):
+        """
+        Updates the param_groups of the optimizer.
+        """
+        self.original_optimizer.param_groups = param_groups
+
+    @property
+    def state(self) -> defaultdict:
+        """
+        Returns a dictionary holding current optimization state.
+        """
+        return self.original_optimizer.state
+
+    @state.setter
+    def state(self, state: defaultdict):
+        """
+        Updates the state of the optimizer.
+        """
+        self.original_optimizer.state = state
+
+    @property
+    def defaults(self) -> dict:
+        """
+        Returns a dictionary containing default values for optimization.
+        """
+        return self.original_optimizer.defaults
+
+    @defaults.setter
+    def defaults(self, defaults: dict):
+        """
+        Updates the defaults of the optimizer.
+        """
+        self.original_optimizer.defaults = defaults
+
     def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
         """
         Attaches a hook to be executed after gradient clipping/noising, but before the
@@ -386,7 +425,6 @@ def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
         Args:
             fn: hook function. Expected signature: ``foo(optim: DPOptimizer)``
         """
-
         self.step_hook = fn
 
     def clip_and_accumulate(self):
