Fix Whisper test by adding --num-workers to hf benchmark suite (#137) (#139)

Co-authored-by: Olivier Breuleux <[email protected]>
Delaunay and breuleux authored Jun 2, 2023
1 parent fc9a141 commit 93862b1
Showing 2 changed files with 10 additions and 1 deletion.
10 changes: 9 additions & 1 deletion benchmarks/huggingface/bench/__main__.py
@@ -40,7 +40,9 @@ def __init__(self, args):
             repeat=100000,
             generators=generators[info.category](info),
         )
-        self.loader = DataLoader(self.data, batch_size=args.batch_size)
+        self.loader = DataLoader(
+            self.data, batch_size=args.batch_size, num_workers=args.num_workers
+        )

         self.amp_scaler = torch.cuda.amp.GradScaler(enabled=is_fp16_allowed(args))
         if is_fp16_allowed(args):
@@ -130,6 +132,12 @@ def parser():
         default="fp32",
         help="Precision configuration",
     )
+    parser.add_argument(
+        "--num-workers",
+        type=int,
+        default=8,
+        help="number of workers for data loading",
+    )
     # parser.add_argument(
     #     "--no-stdout",
     #     action="store_true",
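
For reference, a minimal standalone sketch of the pattern this diff introduces. It assumes a toy TensorDataset in place of the benchmark's generated data, and a placeholder loop in place of the benchmark body; only the --num-workers plumbing mirrors the actual change:

    # Hedged sketch: an argparse flag forwarded to DataLoader's num_workers,
    # mirroring the change above. Dataset and loop are placeholders.
    import argparse

    import torch
    from torch.utils.data import DataLoader, TensorDataset


    def parser():
        p = argparse.ArgumentParser()
        p.add_argument("--batch-size", type=int, default=16, help="batch size")
        p.add_argument(
            "--num-workers",
            type=int,
            default=8,
            help="number of workers for data loading",
        )
        return p


    def main():
        args = parser().parse_args()
        data = TensorDataset(torch.randn(1024, 32))  # placeholder data
        # num_workers > 0 spawns that many subprocesses to prefetch batches.
        loader = DataLoader(
            data, batch_size=args.batch_size, num_workers=args.num_workers
        )
        for (batch,) in loader:
            pass  # the benchmark's training/inference step would run here


    if __name__ == "__main__":
        main()

With num_workers=0 (PyTorch's default), batches are loaded in the main process; the commit instead defaults to 8 workers, both via the new CLI flag above and via config/base.yaml below.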
1 change: 1 addition & 0 deletions config/base.yaml
@@ -25,6 +25,7 @@ _hf:
   install_group: torch
   argv:
     --precision: 'tf32-fp16'
+    --num-workers: 8

   plan:
     method: per_gpu
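
The argv mapping above is how the suite's config passes flags to the bench entry point. Assuming each key/value pair is expanded into a pair of command-line tokens (an assumption about the harness, not shown in this diff), the effect is roughly:

    # Hypothetical illustration only: how an argv mapping like the one above
    # could be flattened into CLI arguments; the real harness may differ.
    argv = {"--precision": "tf32-fp16", "--num-workers": 8}
    flags = [str(token) for key, value in argv.items() for token in (key, value)]
    # flags == ["--precision", "tf32-fp16", "--num-workers", "8"]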
