Hello,
I have some questions about the "v_parameterization" option in the context of training and using LoRA (Low-Rank Adaptation) with the 512-base model. Here are my observations:

1. Whether "v_parameterization" is enabled or disabled during LoRA training appears to make no difference once the LoRA is loaded for generation.
2. Generating with the 512-base model alone (no LoRA) only works correctly with "v_parameterization" disabled, which matches that model's epsilon-prediction training objective. However, as soon as a LoRA is loaded into the same 512-base model, "v_parameterization" must be enabled; otherwise the generated images are pure noise.
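For reference, the behavior can be reproduced with something like the following diffusers sketch (this is an illustration rather than my exact setup; the model ID, LoRA path, and prompt are placeholders):

```python
# Minimal sketch with diffusers illustrating the prediction-type switch;
# the LoRA path and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",  # SD 2.x 512-base (epsilon-prediction)
    torch_dtype=torch.float16,
).to("cuda")

# Case 1: base model alone -- epsilon prediction works as expected.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, prediction_type="epsilon"
)
image_base = pipe("a photo of an astronaut").images[0]

# Case 2: with the LoRA loaded -- only v-prediction gives usable images,
# even though the base model is epsilon-prediction.
pipe.load_lora_weights("my_lora.safetensors")  # placeholder path
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction"
)
image_lora = pipe("a photo of an astronaut").images[0]
```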
Could you please provide some insights or explanations for these observations?
Thank you for your assistance!