Fix DistributedDP Optimizer for Fast Gradient Clipping (pytorch#662)
Summary:
Pull Request resolved: pytorch#662

The `step` function incorrectly called `self.original_optimizer.original_optimizer.step()` instead of `self.original_optimizer.step()`; since `original_optimizer` is already the wrapped plain optimizer, the extra level of delegation raised an `AttributeError`. This is now fixed.

Differential Revision: D60484128
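A minimal, torch-free sketch of the wrapper pattern involved (the class and method names below are hypothetical stand-ins, not the actual Opacus API): the DP wrapper holds the plain optimizer directly as `original_optimizer`, so delegating twice fails.

```python
class PlainOptimizer:
    """Stand-in for a torch.optim optimizer; has no `original_optimizer`."""

    def step(self):
        return "stepped"


class DPOptimizer:
    """Stand-in for the DP wrapper: holds the plain optimizer directly."""

    def __init__(self, optimizer):
        self.original_optimizer = optimizer


class DistributedDPOptimizer(DPOptimizer):
    """Stand-in for the distributed fast-gradient-clipping optimizer."""

    def pre_step(self):
        return True  # in Opacus this checks/accounts privacy budget

    def reduce_gradients(self):
        pass  # a gradient all-reduce across workers would happen here

    def step(self):
        if self.pre_step():
            self.reduce_gradients()
            # Buggy version: self.original_optimizer.original_optimizer.step()
            # raises AttributeError, because `original_optimizer` is already
            # the plain optimizer. Correct single-level delegation:
            return self.original_optimizer.step()
        return None


opt = DistributedDPOptimizer(PlainOptimizer())
print(opt.step())  # -> "stepped"
```

Calling `opt.original_optimizer.original_optimizer.step()` on this sketch reproduces the `AttributeError` the commit fixes.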
EnayatUllah authored and facebook-github-bot committed Aug 1, 2024
1 parent f1d0e02 commit 26b4f71
Showing 2 changed files with 3 additions and 1 deletion.
2 changes: 2 additions & 0 deletions opacus/__init__.py
```diff
@@ -15,13 +15,15 @@

 from . import utils
 from .grad_sample import GradSampleModule
+from .grad_sample_fast_gradient_clipping import GradSampleModuleFastGradientClipping
 from .privacy_engine import PrivacyEngine
 from .version import __version__


 __all__ = [
     "PrivacyEngine",
     "GradSampleModule",
+    "GradSampleModuleFastGradientClipping",
     "utils",
     "__version__",
 ]
```
2 changes: 1 addition & 1 deletion opacus/optimizers/ddpoptimizer_fast_gradient_clipping.py
```diff
@@ -76,6 +76,6 @@ def step(

         if self.pre_step():
             self.reduce_gradients()
-            return self.original_optimizer.original_optimizer.step()
+            return self.original_optimizer.step()
         else:
             return None
```
