
[bug]: FLUX diffusers LoRA models without .proj_mlp fail #7129

Open
RyanJDick opened this issue Oct 16, 2024 · 0 comments

Labels
bug Something isn't working

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

No response

Version number

5.1.1

Browser

N/A

Python dependencies

No response

What happened

Relevant Discord discussion: https://discord.com/channels/1020123559063990373/1149510134058471514/1295829659883278388

The model referenced in that discussion imports successfully, but fails at runtime during conversion from the diffusers format to the BFL format. Specifically, building the fused QKV/MLP linear layers fails because the LoRA does not contain the expected .proj_mlp layers. We should probably populate the missing tensor with an identity matrix.
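
A rough sketch of one way to implement this (the helper name is hypothetical, and the .lora_A/.lora_B key suffixes assume the diffusers/PEFT naming convention). The "identity matrix" proposal amounts to making the patch a no-op on the missing sub-layer; since the LoRA delta is up @ down, synthesizing a zero-filled pair achieves that before the fused layer is built:

import torch

def add_noop_lora_if_missing(
    state_dict: dict[str, torch.Tensor],
    prefix: str,  # e.g. "transformer.single_transformer_blocks.0.proj_mlp"
    rank: int,
    in_features: int,
    out_features: int,
) -> None:
    # The LoRA delta is (up @ down); a zero-filled pair makes the delta
    # zero, so the patch acts as the identity on this sub-layer.
    down_key = f"{prefix}.lora_A.weight"  # assumed diffusers/PEFT naming
    up_key = f"{prefix}.lora_B.weight"
    if down_key in state_dict or up_key in state_dict:
        return  # sub-layer already present; nothing to synthesize
    state_dict[down_key] = torch.zeros(rank, in_features)
    state_dict[up_key] = torch.zeros(out_features, rank)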

Runtime error:

[2024-10-16 18:31:16,048]::[InvokeAI]::ERROR --> Error while invoking session 5bf2b49b-3aff-4c30-8ef2-39b6fa3d0554, invocation 3c7bf078-d4c0-494f-9a61-532d0fc9c9b8 (flux_text_encoder): 
[2024-10-16 18:31:16,049]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/ryan/src/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/flux_text_encoder.py", line 51, in invoke
    clip_embeddings = self._clip_encode(context)
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/flux_text_encoder.py", line 100, in _clip_encode
    exit_stack.enter_context(
  File "/home/ryan/.pyenv/versions/3.10.14/lib/python3.10/contextlib.py", line 492, in enter_context
    result = _cm_type.__enter__(cm)
  File "/home/ryan/.pyenv/versions/3.10.14/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/ryan/src/InvokeAI/invokeai/backend/lora/lora_patcher.py", line 42, in apply_lora_patches
    for patch, patch_weight in patches:
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/flux_text_encoder.py", line 121, in _clip_lora_iterator
    lora_info = context.models.load(lora.lora)
  File "/home/ryan/src/InvokeAI/invokeai/app/services/shared/invocation_context.py", line 370, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "/home/ryan/src/InvokeAI/invokeai/app/services/model_load/model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_loaders/lora.py", line 76, in _load_model
    model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict, alpha=None)
  File "/home/ryan/src/InvokeAI/invokeai/backend/lora/conversions/flux_diffusers_lora_conversion_utils.py", line 171, in lora_model_from_flux_diffusers_state_dict
    add_qkv_lora_layer_if_present(
  File "/home/ryan/src/InvokeAI/invokeai/backend/lora/conversions/flux_diffusers_lora_conversion_utils.py", line 71, in add_qkv_lora_layer_if_present
    assert all(keys_present) or not any(keys_present)
AssertionError
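
For context, the assertion that fires enforces an all-or-nothing rule: every sub-layer being fused into the combined QKV/MLP linear layer must be present in the LoRA, or none of them may be. A simplified, runnable illustration of the check (key names are illustrative, not the exact keys used by the converter):

# A LoRA that patches q/k/v but not proj_mlp, as in this report.
lora_keys = {"attn.to_q", "attn.to_k", "attn.to_v"}

keys_present = [
    "attn.to_q" in lora_keys,
    "attn.to_k" in lora_keys,
    "attn.to_v" in lora_keys,
    "proj_mlp" in lora_keys,
]

# Either all fused sub-layers are present or none are; a partial set
# like this one raises the AssertionError shown above.
assert all(keys_present) or not any(keys_present)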

What you expected to happen

It should work: the LoRA should load and apply without error, even though it does not include .proj_mlp layers.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

@RyanJDick RyanJDick added the bug Something isn't working label Oct 16, 2024