
[bug]: MacOSX/M1 Flux workflow 5.0.0: TypeError: BFloat16 is not supported on MPS #6991

Closed
michaeljanich opened this issue Sep 30, 2024 · 4 comments · Fixed by #7063
Labels
bug Something isn't working

Comments

@michaeljanich

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

macOS

GPU vendor

Apple Silicon (MPS)

GPU model

Mac M1 Pro

GPU VRAM

64GB

Version number

5.0.0

Browser

Safari

Python dependencies

[2024-09-30 18:17:29,099]::[InvokeAI]::ERROR --> Error while invoking session d89a3487-6d47-47c4-a615-81c1b00afe3f, invocation 334ced98-d3a4-4e20-b1e6-5130a4af608a (flux_denoise): BFloat16 is not supported on MPS
[2024-09-30 18:17:29,099]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/invokeai/app/invocations/baseinvocation.py", line 290, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/invokeai/app/invocations/flux_denoise.py", line 98, in invoke
latents = self._run_diffusion(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/invokeai/app/invocations/flux_denoise.py", line 115, in _run_diffusion
flux_conditioning = flux_conditioning.to(dtype=inference_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/massa/AI/.venv/lib/python3.11/site-packages/invokeai/backend/stable_diffusion/diffusion/conditioning_data.py", line 47, in to
self.clip_embeds = self.clip_embeds.to(device=device, dtype=dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: BFloat16 is not supported on MPS

[2024-09-30 18:17:29,107]::[InvokeAI]::INFO --> Graph stats: d89a3487-6d47-47c4-a615-81c1b00afe3f
Node               Calls  Seconds  VRAM Used
flux_model_loader      1   0.001s     0.000G
flux_text_encoder      1   0.000s     0.000G
rand_int               1   0.000s     0.000G
flux_denoise           1   0.004s     0.000G
TOTAL GRAPH EXECUTION TIME: 0.005s
TOTAL GRAPH WALL TIME: 0.006s
RAM used by InvokeAI process: 4.11G (+0.000G)
RAM used to load models: 0.00G
RAM cache statistics:
Model cache hits: 0
Model cache misses: 0
Models cached: 0
Models cleared from cache: 0
Cache high water mark: 0.00/0.00G

What happened

TypeError: BFloat16 is not supported on MPS

What you expected to happen

Render image

How to reproduce the problem

Run any standard FLUX.1 workflow (Flux Dev Quant, Flux Schnell Quant) and invoke normally with a prompt.
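
A minimal torch-level sketch of the same failure (an illustration, assuming an Apple Silicon Mac and the torch build shipped with 5.0.0; newer torch builds support bfloat16 on MPS):

```python
import torch

# Minimal sketch, not the actual workflow: on the torch build shipped with
# InvokeAI 5.0.0, casting a tensor to bfloat16 on the MPS device raises the
# TypeError reported above. Newer torch builds support bfloat16 on MPS.
if torch.backends.mps.is_available():
    x = torch.randn(4, device="mps")
    x = x.to(dtype=torch.bfloat16)  # TypeError: BFloat16 is not supported on MPS
```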

Additional context

Happens every time.

Discord username

No response

@michaeljanich added the bug label Sep 30, 2024
@michaeljanich (Author)

5.0.2 has the same problem.

@Vargol (Contributor) commented Oct 4, 2024

If we assume the goal here is to get Flux working: you'll need to upgrade torch to a nightly version, which should resolve the error you're getting. You'll then bump into another error about float64 not being supported, and you'll need to edit that reference in invokeai/backend/flux/math.py from float64 to float32, as sketched below.
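
For illustration, the kind of one-line change meant here (the exact code in invokeai/backend/flux/math.py may differ between versions; the rope() helper builds its frequency scale in float64, which MPS can't represent):

```python
import torch

# Illustrative sketch only; the real line lives in the rope() positional-
# embedding helper in invokeai/backend/flux/math.py and may differ by version.
def rope_scale(dim: int, device: torch.device) -> torch.Tensor:
    # was: dtype=torch.float64, which MPS does not support
    return torch.arange(0, dim, 2, dtype=torch.float32, device=device) / dim
```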

That should get the full versions of Flux.1 [dev] and Flux.1 [schnell] working.

As for the quantised versions, you're out of luck: the library used to support those isn't compatible with macOS (and, I believe, Windows, AMD GPUs, and Intel GPUs too, though some of that might be out of date).

@Aedant13

I'm getting this error when trying to use Flux: RuntimeError: "arange_mps" not implemented for 'BFloat16'

Is there a file I should modify? I updated to torch nightly.
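
For context, that error reduces to a one-line sketch (an assumption: a torch build from before the upstream fix referenced in the next comment):

```python
import torch

# Sketch of this second failure mode (assumes a torch build predating the
# upstream fix pytorch/pytorch#136754 mentioned below): torch.arange had no
# bfloat16 kernel on the MPS backend, so it raises at tensor creation.
if torch.backends.mps.is_available():
    torch.arange(8, dtype=torch.bfloat16, device="mps")
    # RuntimeError: "arange_mps" not implemented for 'BFloat16'
```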

@Vargol (Contributor) commented Oct 18, 2024

Torch nightly should work, unless the 2.5.0 release has changed what's in the nightly builds; the fix was merged upstream as pytorch/pytorch#136754, and I've been using nightly to work around this for a few weeks now.

Having said that, I created a pull request (#7113) that works around the issue on PyTorch 2.4.1; it was approved a few minutes ago, so hopefully the next release will have it in.

I suggest you grab the code from #7140 too, which is currently awaiting approval, as that will stop the t5encoder being loaded as float32 on Macs.
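
As a rough illustration of the dtype-handling idea behind those workarounds (my own sketch, not the code from #7113 or #7140):

```python
import torch

# Illustration only -- not the actual patch from #7113 or #7140. The idea is
# to pick an inference dtype the active backend supports instead of assuming
# bfloat16 everywhere.
def choose_inference_dtype(device: torch.device) -> torch.dtype:
    if device.type == "mps":
        # On torch builds without bfloat16 support on MPS, fall back to
        # float16; float32 works too but doubles memory use.
        return torch.float16
    return torch.bfloat16

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(choose_inference_dtype(device))
```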
