RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph. #9704
Comments
Could you try the torch 2.4.0 stable release and see if the problem persists?
If you're running in a notebook, make sure to restart it, and please do a clean reinstall of v0.30.3. AuraFlow was released in v0.30.0, so this should not lead to any errors. Just to be sure that there are no longer any environment errors, could you paste the output of
Hello, the problem is now solved; thank you for your time and consideration. Here are the versions that worked for me:
I am facing the same error on
I'm on an H100; I'm guessing this has to do with the new cuDNN SDPA backend introduced in PyTorch 2.5.
Yes, this seems like a problem with torch 2.5.0, and I've been able to reproduce this now as well. We'll need to look into how best to fix this (either on our end, or we could talk with the PyTorch folks) cc @sayakpaul @DN6 @yiyixuxu. Re-opening the issue for now.
As a workaround, you can disable the cuDNN backend via https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.enable_cudnn_sdp. Would you mind opening an issue on PyTorch with a smallish repro? I can then forward it to the NVIDIA folks.
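For reference, a minimal sketch of that workaround (assuming a torch build that exposes this flag, i.e. recent enough to have the cuDNN SDPA backend at all):

```python
import torch

# Turn off only the cuDNN SDPA backend; the flash and memory-efficient
# backends remain enabled, so scaled_dot_product_attention keeps working.
torch.backends.cuda.enable_cudnn_sdp(False)

# Sanity check that the backend is actually off before running the pipeline.
assert not torch.backends.cuda.cudnn_sdp_enabled()
```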
Yes, imo, this should be reported to
Can you link the frontend issue?
Seems to be NVIDIA/cudnn-frontend#75 and NVIDIA/cudnn-frontend#78.
Having this issue as well, but only on Linux; no problems with CUDA on Windows.
The performance degradation is from 6 it/s to 2.5 it/s using SDXL, with everything the same except that one param. Links to the issues are already posted below.
The cuDNN issues linked are generic across any unsupported config and may not correspond to this particular issue. Would it be possible to link a shorter repro, as I'm currently trying to clone
Here's the shortest reproduction; like I said, it's when transformers uses SDPA to process CLIP:

```python
import torch
from transformers import CLIPTextModel, AutoTokenizer

device = torch.device('cuda')
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32", cache_dir='/mnt/models/huggingface').to(device=device, dtype=torch.float16)
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
print(inputs)
outputs = encoder(**inputs)
print(outputs)
```
BTW, I just noticed that there is no issue when using torch.float32, but nobody uses torch.float32 anymore.
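Not from the thread, but a finer-grained way to confirm the cuDNN backend is at fault (without the global toggle above) is torch's `sdpa_kernel` context manager, available in recent PyTorch. A sketch continuing from the repro above:

```python
from torch.nn.attention import SDPBackend, sdpa_kernel

# Restrict SDPA to non-cuDNN backends for this call only. If this succeeds
# while the unrestricted call fails, the cuDNN backend is the culprit.
with sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]):
    outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)
```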
Thanks, and it's not happening with
@vladmandic I am not seeing the same error locally with cuDNN 9.3. Which GPU are you on? I will try 9.1.7 in the meantime.
```python
print(f'torch={torch.__version__} cuda={torch.version.cuda} cuDNN={torch.backends.cudnn.version()} device={torch.cuda.get_device_name(0)} cap={torch.cuda.get_device_capability(0)}')
```

Note that the CUDA and cuDNN builds are the ones that come bundled with torch. If torch 2.5 requires a newer cuDNN, it should handle its installation.
Yes, 9.1.0.70 is the cuDNN version that comes bundled with torch, and I didn't see the failure on L40, L4, or RTX 6000 Ada, which are also sm89 (it is able to generate and run a kernel). I'm thinking that maybe the issue is the CUDA version; will also try that later.
Even a clean environment didn't help me. I had to install torch==2.4.0 to get rid of the issue.
Hmm. How much speedup does one get when using SDPA in CLIP? I remember when we incorporated SDPA in CLIP the speedup wasn't that significant. We could verify this by instantiating the CLIP with:

```python
text_encoder = CLIPTextModel.from_pretrained(..., attn_implementation="eager", ...)
pipeline = DiffusionPipeline.from_pretrained(..., text_encoder=text_encoder)
```

Cc: @ArthurZucker
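Filled out, that suggestion could look something like the sketch below. It reuses the checkpoint from the reproduction at the bottom of this issue; the `subfolder` and dtype arguments are assumptions about how the encoder is being loaded, not part of the suggestion above:

```python
import torch
from diffusers import DiffusionPipeline
from transformers import CLIPTextModel

repo = "stable-diffusion-v1-5/stable-diffusion-v1-5"

# Load the text encoder with eager attention so CLIP never goes through
# SDPA (and therefore never hits the cuDNN backend).
text_encoder = CLIPTextModel.from_pretrained(
    repo, subfolder="text_encoder", attn_implementation="eager", torch_dtype=torch.float16
)
pipeline = DiffusionPipeline.from_pretrained(
    repo, text_encoder=text_encoder, torch_dtype=torch.float16
)
```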
I tried using
You mean changing the CLIP (and potentially other models from `transformers`)? I guess we have a couple of ways, but I think we could pass this info to the loading kwargs.
Something like (pseudo-code):

```python
if is_transformers_model:
    if is_transformers_version(...):
        if is_torch_version(">=", "2.5"):
            loading_kwargs.update({"attn_implementation": "eager"})
```

@DN6 WDYT? Or maybe @ArthurZucker from
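For what it's worth, the same gate can already be written at the user level with diffusers' version helper. A sketch under those assumptions (`loading_kwargs` here is just a local dict, not a diffusers API, and the CLIP checkpoint is the one from the repro earlier in the thread):

```python
import torch
from diffusers.utils import is_torch_version
from transformers import CLIPTextModel

loading_kwargs = {"torch_dtype": torch.float16}

# Only fall back to eager attention on torch >= 2.5, where the
# cuDNN SDPA backend was introduced.
if is_torch_version(">=", "2.5"):
    loading_kwargs["attn_implementation"] = "eager"

text_encoder = CLIPTextModel.from_pretrained(
    "openai/clip-vit-base-patch32", **loading_kwargs
)
```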
Describe the bug
Hello. I tried the Img2Img pipeline and encountered the error shown in the attached images. Could you please check it for me? Thank you.
Reproduction
```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5/", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```
Logs
No response
System Info
diffusers 0.30.3
Python 3.9.20
Who can help?
No response