GlobalPowerLimitOptimizer for distributed data parallel training #43
Labels: enhancement (New feature or request)
`GlobalPowerLimitOptimizer` works well for single-node data parallel training, but in the case of distributed data parallel, GPUs in different nodes should make the same final GPU power limit choice. Assuming homogeneous GPUs this is still very likely to happen, but we should make it more robust just in case.

`GlobalPowerLimitOptimizer` in a distributed training setting could use `torch.distributed.all_reduce` (for AllReduce) if PyTorch is available, and something else for JAX. (The time and energy measurements from each `ZeusMonitor` instance in each node should be gathered, and node rank 0 will make the global decision.)
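A minimal sketch of what this could look like with PyTorch, assuming the `torch.distributed` process group is already initialized. `measure_window` and `choose_power_limit` are hypothetical placeholders for logic Zeus already has internally, not the actual `GlobalPowerLimitOptimizer` API:

```python
import torch
import torch.distributed as dist


def measure_window() -> tuple[float, float]:
    """Hypothetical: return (time_sec, energy_joules) from this node's ZeusMonitor window."""
    raise NotImplementedError


def choose_power_limit(total_time: float, total_energy: float) -> int:
    """Hypothetical: pick a power limit (W) from the globally aggregated measurements."""
    raise NotImplementedError


def sync_power_limit() -> int:
    """Aggregate per-node measurements and return one power limit shared by all ranks."""
    time_sec, energy_j = measure_window()

    # AllReduce so every rank sees the same global time/energy totals.
    stats = torch.tensor([time_sec, energy_j], dtype=torch.float64)
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)

    # Rank 0 makes the global decision and broadcasts it, so GPUs on every
    # node end up applying the identical power limit.
    decision = torch.zeros(1, dtype=torch.int64)
    if dist.get_rank() == 0:
        decision[0] = choose_power_limit(stats[0].item(), stats[1].item())
    dist.broadcast(decision, src=0)
    return int(decision.item())
```

With this pattern, the all-reduce makes the aggregated measurements identical on every rank, and the explicit broadcast from rank 0 guarantees the same final choice everywhere even if per-node measurements differ slightly.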