
[bug]: Image generation fails with a server error when starting InvokeAI offline. #6623

Open
1 task done
StellarBeing25 opened this issue Jul 15, 2024 · 17 comments · May be fixed by #6740
Labels
bug Something isn't working

Comments

@StellarBeing25 commented Jul 15, 2024

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

GTX 1660

GPU VRAM

6GB

Version number

4.2.6

Browser

Firefox

Python dependencies

No response

What happened


I am using SD 1.5 models in safetensor format without converting them to diffusers.

Image generation fails with a server error when starting InvokeAI offline.
Image generation works only when connected to the internet for the first generation; subsequent generations can run offline until I switch models.
The error reappears when switching models while offline.

What you expected to happen

InvokeAI should work offline.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

StellarBeing25 added the bug label on Jul 15, 2024
@psychedelicious (Collaborator)

With single-file (checkpoint) loading, diffusers still needs access to the models' configuration files. Previously, when we converted models, we used a local copy of these config files. With single-file loading, we no longer reference the local config files, so diffusers downloads them.

The latest diffusers release revises the single-file loading logic. I think we'll need to upgrade to the latest version, then review the new API to see what our options are.
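For illustration only, a minimal sketch of what offline single-file loading might look like with the revised diffusers API; the checkpoint path and config location are hypothetical, and this is not InvokeAI's actual loading code:

```python
# Sketch: load an SD 1.5 checkpoint without touching the network, assuming a
# local copy of the diffusers config is already on disk. Paths are hypothetical.
def load_sd15_offline(checkpoint_path: str, config_dir: str):
    from diffusers import StableDiffusionPipeline  # lazy import; requires diffusers

    return StableDiffusionPipeline.from_single_file(
        checkpoint_path,
        config=config_dir,       # use the on-disk config instead of the Hub
        local_files_only=True,   # fail fast rather than download anything
    )
```

Whether the `config` and `local_files_only` arguments behave this way for a given checkpoint would need to be verified against the diffusers version in use.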

@someaccount1234

This makes it completely ONLINE ONLY!
The configs folder is right there local, ready to be used lol!
Wasted a good hour+ trying to fix it.
Please fix this, unusable until then!

@psychedelicious (Collaborator)

@lstein Forgot to tag you - I think we should be able to fix this up pretty easily.

@someaccount1234 commented Jul 28, 2024

Still the same error!
Cannot use offline at all!

InvokeAI demands an internet connection to download config files that are already local, every time you change the model!

Setting 'legacy_config_dir' in 'invokeai.yaml' doesn't help; it still demands internet.

This bug should be retitled to 'redundant yaml downloads, internet required'.
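For reference, the setting mentioned above would look something like this in invokeai.yaml; the path here is hypothetical, and recent InvokeAI versions spell the key `legacy_conf_dir`, so check the configuration docs for your version:

```yaml
# Hypothetical invokeai.yaml fragment; the path must point at your install's
# copy of the legacy model config files. Key name may vary by version.
legacy_conf_dir: C:\InvokeAI\configs\stable-diffusion
```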

@jameswan

This comment was marked as off-topic.

@psychedelicious (Collaborator)

@jameswan That's an entirely different problem. Please create your own issue. Note that Invoke v2 is ancient.

@psychedelicious (Collaborator)

@someaccount1234 Yes, this is still a problem. We will close this issue when it is resolved.

@TobiasReich

If more of the stack trace is needed, I provided it in my (duplicate) issue:
#6702

@psychedelicious (Collaborator)

Thanks @TobiasReich I saw that.

This isn't a mysterious issue, the cause is very clear.

I experimented the other day with providing the config files we already have on disk, but diffusers couldn't load the tokenizer or text encoder. It's not obvious to me why.

It doesn't matter anyway, because diffusers just refactored the API we use to load models, so whatever issue I was running into may no longer exist. We need to update the diffusers dependency (one of our core deps), adapt to the other changes they have made, and then figure out how to provide the required config files.

@MOzhi327 commented Aug 7, 2024

Now, in the infinite canvas, every time a new image is uploaded, an internet connection is required. Maybe it's time to go back to the old version.

@psychedelicious (Collaborator)

@MOzhi327 No, that's not how it works. There's no internet connection needed when using canvas. What makes you think an internet connection is required?

@MOzhi327 commented Aug 7, 2024

@psychedelicious Thank you for the reply. On my side, if the VPN is turned off, the model cannot be loaded, as shown below.
[screenshot: model load fails with a network connection error]
When I turn the VPN on and generate an image, I can then continue generating without it. Today I turned the VPN off and, without changing any parameters, got a network-connection-failure message again when generating. (I just tested once more and can still generate after turning the VPN off, so the earlier failure may have had another cause. Sorry.)
In any case, an internet connection is required every time a model is loaded, which is very inconvenient for me. The VPN causes problems with my other software, so for now I am considering going back to the old version.

@psychedelicious (Collaborator)

@MOzhi327 Ok, thanks for clarifying. Yes, we know about the internet connectivity issue and will fix it.

@MOzhi327 commented Aug 7, 2024

@psychedelicious Thank you very much

@psychedelicious (Collaborator)

The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. A few large projects are currently taking all contributors' time, and they are both resource and technical blockers to resolving this issue.

You do not need to use single-file loading in the first place. You can convert your checkpoint/safetensors models to diffusers before going offline (there's a button in the model manager), and then no internet connection is needed to generate.
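As an aside, and purely as an assumption on my part (not something the maintainers have confirmed): since the downloads go through huggingface_hub, its HF_HUB_OFFLINE switch may let a previously cached config be reused without a connection:

```python
# Hypothetical mitigation, not an official workaround: huggingface_hub honors
# the HF_HUB_OFFLINE environment variable. If the needed config files are
# already in the local Hugging Face cache, forcing offline mode makes
# downstream libraries reuse the cache instead of trying the network.
import os

# Must be set before huggingface_hub/diffusers are imported by the app.
os.environ["HF_HUB_OFFLINE"] = "1"
# ...then start InvokeAI as usual from this environment.
```

This only helps after the configs have been downloaded at least once.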

@MOzhi327

@psychedelicious Thank you! That is useful, but converting a model stored in an external directory costs additional disk space. It works for someone like me who mainly uses one or two models, but it may not be practical for others who need many models.

@webtv123

> The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. We have a few large projects that are taking all contributors' time and which are both resource and technical blockers to resolving this issue.
>
> You do not need to use single file loading in the first place. You can convert your checkpoint/safetensors models to diffusers before going offline (button in the model manager) and then there's no internet connection needed to generate.

THANK YOU.

7 participants