add hint about increasing number of processes used for dataloading
FabianIsensee committed Aug 30, 2024
1 parent 80cd16c commit bb1b809
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions documentation/competitions/Toothfairy2/readme.md
@@ -185,6 +185,9 @@ Models are trained from scratch.

Note how in the second line we overwrite the nnUNet_results variable so that we can train the same model twice without overwriting the results.
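The overwriting described above might be sketched as follows (the paths, the `DATASET_ID` placeholder, and the commented `nnUNetv2_train` invocations are illustrative assumptions, not taken from this commit):

```shell
# Hypothetical sketch: train the same model twice by redirecting
# nnUNet_results to a separate directory before the second run.
export nnUNet_results="$HOME/nnUNet_results_run1"   # illustrative path
# nnUNetv2_train DATASET_ID 3d_fullres all          # first training run
export nnUNet_results="$HOME/nnUNet_results_run2"   # results kept separately
# nnUNetv2_train DATASET_ID 3d_fullres all          # second training run
```

Because only the environment variable changes between runs, both trainings use identical settings but write to disjoint output directories.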

We recommend increasing the number of processes used for data augmentation; otherwise you may run into CPU bottlenecks.
Use `export nnUNet_n_proc_DA=32` or higher (if your system permits!).
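As a hedged illustration (assuming a Linux system where coreutils' `nproc` is available; the value 32 above is only a suggestion), the worker count can be matched to the machine's CPU cores instead of hard-coding it:

```shell
# Illustrative: use as many data-augmentation workers as there are CPU cores.
export nnUNet_n_proc_DA="$(nproc)"
```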

# Inference
We ensemble the two models from above. On a technical level we copy the two fold_all folders into one training output
directory and rename them to fold_0 and fold_1. This lets us use nnU-Net's cross-validation ensembling strategy which
