
Unable to upload images #73

Open
pribeh opened this issue Oct 11, 2024 · 2 comments
Labels
bug Something isn't working

Comments


pribeh commented Oct 11, 2024

Describe the bug

I'm running into an issue when passing images to ComfyUI with the Flux.1 Schnell model. When I send the following request, all indicators suggest the image is indeed uploaded to the Comfy worker, but the resulting output does not reflect the uploaded image at all. Am I doing something wrong with the configuration below, or does the image-to-image workflow not work properly with the Flux model?

  useEffect(() => {
    if (selectedImages.length > 0) {
      const formattedImages = selectedImages.map(image => ({
        name: image.name || 'uploaded_image.png',  // Default to a generic name if none is provided
        image: image.base64.replace(/^data:image\/\w+;base64,/, "")  // Strip the base64 prefix
      }));
  
      setRequestBody(prev => ({
        ...prev,
        input: {
          ...prev.input,
          images: formattedImages  // Add the formatted images to the request
        }
      }));
    }
  }, [selectedImages]);
  
    const requestRunPod = {
      input: {
        workflow: {
          5: {
            inputs: {
              width: requestBody.input.width || 1024,  // Default to 1024 if not set
              height: requestBody.input.height || 1024,  // Default to 1024 if not set
              batch_size: 1
            },
            class_type: "EmptyLatentImage",
            _meta: {
              title: "Empty Latent Image"
            }
          },
          6: {
            inputs: {
              text: `${textPrompt}`,  // Use dynamic prompt input
              clip: ["11", 0]  // The encoded text input for CLIP model
            },
            class_type: "CLIPTextEncode",
            _meta: {
              title: "CLIP Text Encode (Prompt)"
            }
          },
          8: {
            inputs: {
              samples: ["13", 0],  // Latent space samples
              vae: ["10", 0]
            },
            class_type: "VAEDecode",
            _meta: {
              title: "VAE Decode"
            }
          },
          9: {
            inputs: {
              filename_prefix: "ComfyUI",
              images: ["8", 0]  // Resulting images from the generation process
            },
            class_type: "SaveImage",
            _meta: {
              title: "Save Image"
            }
          },
          10: {
            inputs: {
              vae_name: "ae.safetensors"
            },
            class_type: "VAELoader",
            _meta: {
              title: "Load VAE"
            }
          },
          11: {
            inputs: {
              clip_name1: "t5xxl_fp8_e4m3fn.safetensors",
              clip_name2: "clip_l.safetensors",
              type: "flux"
            },
            class_type: "DualCLIPLoader",
            _meta: {
              title: "DualCLIPLoader"
            }
          },
          12: {
            inputs: {
              unet_name: "flux1-schnell.safetensors",
              weight_dtype: "fp8_e4m3fn"
            },
            class_type: "UNETLoader",
            _meta: {
              title: "Load Diffusion Model"
            }
          },
          13: {
            inputs: {
              noise: ["25", 0],
              guider: ["22", 0],
              sampler: ["16", 0],
              sigmas: ["17", 0],
              latent_image: ["5", 0]  // Use latent image generated from selected images or noise
            },
            class_type: "SamplerCustomAdvanced",
            _meta: {
              title: "SamplerCustomAdvanced"
            },
            advanced_params: {
              disable_intermediate_results: true,  // Disable intermediate results
              disable_preview: true  // Turn off preview generation
            }
          },
          16: {
            inputs: {
              sampler_name: "euler"
            },
            class_type: "KSamplerSelect",
            _meta: {
              title: "KSamplerSelect"
            }
          },
          17: {
            inputs: {
              scheduler: "sgm_uniform",
              steps: 4,
              denoise: 1,
              model: ["12", 0]
            },
            class_type: "BasicScheduler",
            _meta: {
              title: "BasicScheduler"
            }
          },
          22: {
            inputs: {
              model: ["12", 0],
              conditioning: ["6", 0]
            },
            class_type: "BasicGuider",
            _meta: {
              title: "BasicGuider"
            }
          },
          25: {
            inputs: {
              noise_seed: randomSeed
            },
            class_type: "RandomNoise",
            _meta: {
              title: "RandomNoise"
            }
          },
        },
        images: requestBody.input.images  // Pass the selected images to the workflow
      }
    };    
pribeh added the bug label Oct 11, 2024
pribeh changed the title [BUG]: … Unable to upload images Oct 11, 2024

billyberkouwer commented Oct 11, 2024

I think your issue is that you're not loading the image in the ComfyUI workflow; you're working from an empty latent image. In my workflow, I have a node like this:

"1277": {
    inputs: {
      image: "image.png",
      upload: "image",
    },
    class_type: "LoadImage",
    _meta: {
      title: "Load Image",
    },
  },

which refers to the uploaded image, and the output corresponds to this loaded image.
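For context, here is a minimal sketch of how the two pieces fit together, assuming the runpod-worker-comfy input format: the top-level `images` array uploads each file to the worker by `name`, and the `LoadImage` node then references that same name. The base64 data goes only in `input.images`, not in the node itself. (`buildImg2ImgRequest` is an illustrative helper, not part of any API.)

```javascript
// Sketch (assumption: runpod-worker-comfy convention): files listed in
// input.images are uploaded to the worker before the workflow runs, and a
// LoadImage node picks one up by matching filename.
const buildImg2ImgRequest = (imageName, base64Data) => ({
  input: {
    workflow: {
      "1277": {
        inputs: { image: imageName, upload: "image" }, // name must match an entry in input.images
        class_type: "LoadImage",
        _meta: { title: "Load Image" },
      },
      // ...rest of the workflow, consuming ["1277", 0] as its image input
    },
    images: [{ name: imageName, image: base64Data }], // uploaded before execution
  },
});
```

For example, `buildImg2ImgRequest("image.png", strippedBase64)` yields a request whose workflow node and upload entry share the same filename.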


pribeh commented Oct 11, 2024

Thanks for sharing, @billyberkouwer! I can't seem to get that working in my workflow, though. I'm new to ComfyUI and not sure how to get the JSON-exported workflow for things like image-to-image. Could you or anyone share a simple image-to-image workflow?

If I try something like this:

      "5": {
        inputs: {
          image: selectedImages[0]?.name || 'image_0.png',  // Use the first selected image
          upload: selectedImages[0]?.base64
            ? selectedImages[0].base64.replace(/^data:image\/\w+;base64,/, "")  // Strip the base64 prefix
            : "",  // Fallback in case there's no image
        },
        class_type: "LoadImage", 
        _meta: { title: "Load Image" }
      },

I always get invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#5'", 'extra_info': {}}

Here's a more complete version

  const filteredPrompt = customFilter.clean(aiTextInput);
   const randomSeed = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);  // Generate a random seed
 
   const requestRunPod = {
     input: {
       workflow: {
         5: selectedImages.length > 0 
           ? selectedImages.map((image, index) => ({
               inputs: {
                 image: image.name || `image_${index}.png`,
                 upload: image.base64 ? image.base64.replace(/^data:image\/\w+;base64,/, "") : "",
               },
               class_type: "LoadImage",
               _meta: { title: "Load Image" }
             }))
           : {
               inputs: {
                 width: requestBody.input.aspect_ratios_selection.split('*')[0],
                 height: requestBody.input.aspect_ratios_selection.split('*')[1],
                 batch_size: 1
               },
               class_type: "EmptyLatentImage",
               _meta: { title: "Empty Latent Image" }
             },
         6: {
           inputs: {
             samples: ["5", 0], // Pass image or latent samples to the encoder
           },
           class_type: "ImageToLatent",  // New node for encoding image into latent space
           _meta: { title: "Encode to Latent" }
         },
   
         7: {
           inputs: {
             samples: ["6", 0],  // Now passing encoded latent samples instead of direct images
             vae: ["10", 0]
           },
           class_type: "VAEDecode",
           _meta: { title: "VAE Decode" }
         },
         8: {
           inputs: {
             filename_prefix: "ComfyUI",
             images: ["7", 0]  // Resulting images from the VAE decode
           },
           class_type: "SaveImage",
           _meta: { title: "Save Image" }
         },
         10: {
           inputs: {
             vae_name: "ae.safetensors"
           },
           class_type: "VAELoader",
           _meta: { title: "Load VAE" }
         },
         11: {
           inputs: {
             clip_name1: "t5xxl_fp8_e4m3fn.safetensors",
             clip_name2: "clip_l.safetensors",
             type: "flux"
           },
           class_type: "DualCLIPLoader",
           _meta: { title: "DualCLIPLoader" }
         },
         12: {
           inputs: {
             unet_name: "flux1-schnell.safetensors",
             weight_dtype: "fp8_e4m3fn"
           },
           class_type: "UNETLoader",
           _meta: { title: "Load Diffusion Model" }
         },
         13: {
           inputs: {
             noise: ["25", 0],
             guider: ["22", 0],
             sampler: ["16", 0],
             sigmas: ["17", 0],
             latent_image: ["5", 0]  // Latent image comes from either loaded image or latent space
           },
           class_type: "SamplerCustomAdvanced",
           _meta: { title: "SamplerCustomAdvanced" },
           advanced_params: {
             disable_intermediate_results: true,  // Disable intermediate results
             disable_preview: true  // Turn off preview generation
           }
         },
         16: {
           inputs: {
             sampler_name: "euler"
           },
           class_type: "KSamplerSelect",
           _meta: { title: "KSamplerSelect" }
         },
         17: {
           inputs: {
             scheduler: "sgm_uniform",
             steps: 4,
             denoise: selectedImages.length > 0 ? 0.6 : 1,  // Use lower denoise for img2img
             model: ["12", 0]
           },
           class_type: "BasicScheduler",
           _meta: { title: "BasicScheduler" }
         },
         22: {
           inputs: {
             model: ["12", 0],
             conditioning: ["6", 0]
           },
           class_type: "BasicGuider",
           _meta: { title: "BasicGuider" }
         },
         25: {
           inputs: {
             noise_seed: randomSeed
           },
           class_type: "RandomNoise",
           _meta: { title: "RandomNoise" }
         }
       }
     }
   };
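Two details in the snippet above may explain the error. First, when `selectedImages.length > 0`, node `5` is set to the result of `selectedImages.map(...)`, i.e. an array of node objects rather than a single node, so ComfyUI sees an entry without a top-level `class_type` ("Node ID '#5'"). Second, `ImageToLatent` does not appear to be a built-in ComfyUI node; encoding an image into latent space is normally done with `VAEEncode`, which takes `pixels` and `vae`. A minimal sketch of just the img2img portion, assuming a single image and standard ComfyUI nodes (node IDs here are illustrative):

```javascript
// Minimal img2img node sketch (assumptions: one image, built-in ComfyUI
// LoadImage and VAEEncode nodes; "10" is an existing VAELoader node).
const img2imgNodes = {
  "5": {
    inputs: { image: "image_0.png", upload: "image" }, // a single node object, never an array
    class_type: "LoadImage",
    _meta: { title: "Load Image" },
  },
  "6": {
    inputs: { pixels: ["5", 0], vae: ["10", 0] }, // VAEEncode takes `pixels`, not `samples`
    class_type: "VAEEncode",
    _meta: { title: "VAE Encode" },
  },
  // The sampler's latent_image input would then reference ["6", 0],
  // with denoise below 1 so the source image is preserved.
};
```

With this shape, `latent_image: ["6", 0]` in the sampler node replaces the `EmptyLatentImage` path entirely when an image is supplied.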
