Hi, I'm trying to train a DreamBooth LoRA for SD 1.5 faster on 4 GPUs compared to 1 GPU. Would love any and all help!
With one GPU and around 1600 steps, I get a good LoRA trained with the parameters below (taken from the output JSON). However, this takes around 20 minutes, so I wanted to cut the time down as much as possible using multiple GPUs. I thought simply increasing the number of GPUs in the accelerate command (--num_processes 4) would do the trick, but from multiple sources online it doesn't seem to work that way, and indeed training takes about the same time.
I saw a source ( https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/ ) claiming that if I set max_train_epochs = 1 in the toml and pass --num_processes 4 (4 GPUs) to the accelerate command, I could get the effect I wanted, but all this does is reduce the number of train_steps from 1600 to 275. The resulting model is also poor: it doesn't reach the level of LoRA quality I get from the single-GPU run (checked with sample images).
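For clarity, this is roughly what that multi-GPU attempt looks like on my end (a sketch; apart from --num_processes 4 and the max_train_epochs = 1 line in the toml, everything is identical to the single-GPU command listed under Parameters below):

```bash
# 4-GPU attempt (sketch): same flags as my single-GPU run, only
# --num_processes is changed; per the Puget Systems article I also set
# max_train_epochs = 1 in redactedname.toml.
kohya_ss/venv/bin/accelerate launch \
  --dynamo_backend "no" \
  --dynamo_mode "default" \
  --mixed_precision "fp16" \
  --num_processes 4 \
  --num_machines 1 \
  --num_cpu_threads_per_process 2 \
  kohya_ss/sd-scripts/train_network.py \
  --config_file redactedname.toml
```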
Is it possible to cut down LoRA training time while maintaining quality using multiple GPUs? If so, what settings should I change to make this possible?
Note: Sorry for the long post, I wanted to provide all the required context.
Parameters:
Note: the img folder contains the folder "50_redactedname man" (22 captioned images), and the regularization folder contains the folder "1_man" (1100 images).
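For reference, the dataset layout looks like this (the top-level folder names are just how I'm referring to them here; the subfolder names and image counts are from my actual setup, with the numeric prefix being the kohya per-image repeat count):

```bash
# Dataset layout (top-level names illustrative):
# img/
# └── 50_redactedname man/   # 22 captioned images, 50 repeats each
# reg/
# └── 1_man/                 # 1100 regularization images, 1 repeat each
```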
Running the standard single-GPU command kohya_ss/venv/bin/accelerate launch --dynamo_backend "no" --dynamo_mode "default" --mixed_precision "fp16" --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 kohya_ss/sd-scripts/train_network.py --config_file redactedname.toml, I get a LoRA trained on my person (redactedname).
Note: I'm using code from the commit https://github.com/agarwalml/kohya_ss/commit/6c69b893e131dc428a21411f6212ee58a0819d30
Haven't upgraded yet since this took some effort to get running on my local machine and I didn't want to break things unnecessarily.
Judging from the latest version of train_network.py, I don't see much difference, but I may be wrong.
Thanks for all your help!