Now with SDXL support.
- Ubuntu 22.04 LTS
- CUDA 11.8
- Python 3.10.12
- Torch 2.1.2
- xformers 0.0.23.post1
- Jupyter Lab
- Automatic1111 Stable Diffusion Web UI 1.9.3
- Dreambooth extension 1.0.14
- ControlNet extension v1.1.445
- After Detailer extension v24.4.2
- Locon extension
- ReActor extension (replaces roop)
- Deforum extension
- Inpaint Anything extension
- Infinite Image Browsing extension
- CivitAI extension
- CivitAI Browser+ extension
- TensorRT extension
- Kohya_ss v24.1.3
- ComfyUI
- ComfyUI Manager
- InvokeAI v4.2.0
- sd_xl_base_1.0.safetensors
- sd_xl_refiner_1.0.safetensors
- sdxl_vae.safetensors
- inswapper_128.onnx
- runpodctl
- OhMyRunPod
- RunPod File Uploader
- croc
- rclone
- Application Manager
- CivitAI Downloader
This image is designed to work on RunPod. You can use my custom RunPod template to launch it.
**Note:** You will need to edit the `docker-bake.hcl` file and update `REGISTRY_USER` and `RELEASE`. You can obviously edit the other values too, but these are the most important ones.
**Important:** In order to cache the models, you will need at least 32GB of CPU/system memory (not VRAM) due to the large size of the models. If you have less than 32GB of system memory, you can comment out or remove the code in the `Dockerfile` that caches the models.
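The exact caching code lives in the repo's `Dockerfile` and may differ from this, but model caching typically looks like `COPY` instructions along these lines (the destination paths here are hypothetical):

```dockerfile
# Hypothetical sketch -- check the actual Dockerfile for the real lines.
# Comment out (or delete) model-caching steps like these if you have less
# than 32GB of system memory:
#
# COPY sd_xl_base_1.0.safetensors /sd-models/sd_xl_base_1.0.safetensors
# COPY sd_xl_refiner_1.0.safetensors /sd-models/sd_xl_refiner_1.0.safetensors
```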
# Clone the repo
git clone https://github.com/ashleykleynhans/stable-diffusion-docker.git
# Download the models
cd stable-diffusion-docker
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
wget https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors
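As a sanity check after the downloads, a small Python sketch (my own addition, not part of the repo) can derive each expected local filename from its URL and report anything still missing:

```python
from pathlib import Path
from urllib.parse import urlparse

# The model URLs from the wget commands above.
MODEL_URLS = [
    "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors",
    "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors",
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors",
    "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors",
    "https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors",
]

def local_name(url: str) -> str:
    """Filename wget saves the model under (the last path component of the URL)."""
    return Path(urlparse(url).path).name

missing = [local_name(u) for u in MODEL_URLS if not Path(local_name(u)).is_file()]
if missing:
    print("still to download:", ", ".join(missing))
```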
# Log in to Docker Hub
docker login
# Build the image, tag the image, and push the image to Docker Hub
docker buildx bake -f docker-bake.hcl --push
# Same as above but customize registry/user/release:
REGISTRY=ghcr.io REGISTRY_USER=myuser RELEASE=my-release docker buildx \
bake -f docker-bake.hcl --push
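For orientation, the variables being overridden above are ordinary `variable` blocks in `docker-bake.hcl`. The repo's actual file will differ in detail; this is only a sketch of the typical shape:

```hcl
# Hypothetical sketch of docker-bake.hcl variable blocks -- the real file
# in the repo may name and combine these differently.
variable "REGISTRY" {
  default = "docker.io"
}

variable "REGISTRY_USER" {
  default = "myuser"
}

variable "RELEASE" {
  default = "latest"
}

target "default" {
  dockerfile = "Dockerfile"
  tags       = ["${REGISTRY}/${REGISTRY_USER}/stable-diffusion-webui:${RELEASE}"]
}
```

Values passed on the command line (as in the example above) override these defaults.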
docker run -d \
--gpus all \
-v /workspace \
-p 2999:2999 \
-p 3000:3001 \
-p 3010:3011 \
-p 3020:3021 \
-p 6006:6066 \
-p 8000:8000 \
-p 8888:8888 \
-p 9090:9090 \
-e JUPYTER_PASSWORD=Jup1t3R! \
-e ENABLE_TENSORBOARD=1 \
ashleykza/stable-diffusion-webui:latest
You can obviously substitute the image name and tag with your own.
| Connect Port | Internal Port | Description |
|---|---|---|
| 3000 | 3001 | A1111 Stable Diffusion Web UI |
| 3010 | 3011 | Kohya_ss |
| 3020 | 3021 | ComfyUI |
| 9090 | 9090 | InvokeAI |
| 6006 | 6066 | Tensorboard |
| 8000 | 8000 | Application Manager |
| 8888 | 8888 | Jupyter Lab |
| 2999 | 2999 | RunPod File Uploader |
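The same mapping can be expressed programmatically. This small Python sketch (an illustration of mine, not part of the image) rebuilds the `-p` flags used in the `docker run` command above from the port table:

```python
# Host -> container ports, taken from the table above.
PORT_MAP = {
    2999: 2999,  # RunPod File Uploader
    3000: 3001,  # A1111 Stable Diffusion Web UI
    3010: 3011,  # Kohya_ss
    3020: 3021,  # ComfyUI
    6006: 6066,  # Tensorboard
    8000: 8000,  # Application Manager
    8888: 8888,  # Jupyter Lab
    9090: 9090,  # InvokeAI
}

def port_flags(mapping: dict[int, int]) -> list[str]:
    """Return `-p host:container` arguments for `docker run`."""
    return [f"-p {host}:{container}" for host, container in sorted(mapping.items())]

print(" ".join(port_flags(PORT_MAP)))
```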
| Variable | Description | Default |
|---|---|---|
| VENV_PATH | Set the path for the Python venv for the app | /workspace/venvs/stable-diffusion-webui |
| JUPYTER_LAB_PASSWORD | Set a password for Jupyter Lab | not set - no password |
| DISABLE_AUTOLAUNCH | Disable the Web UIs from launching automatically | not set - Web UIs launch automatically |
| ENABLE_TENSORBOARD | Enables Tensorboard on port 6006 | enabled |
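For illustration, here is how a startup script might interpret these variables. The names and defaults come from the table; the parsing logic itself is a hypothetical sketch, not the image's actual startup code:

```python
import os

def read_settings(env=os.environ):
    """Interpret the container's environment variables.

    Variable names and defaults mirror the documentation table above; how
    the image's own scripts parse them is an assumption for illustration.
    """
    return {
        "venv_path": env.get("VENV_PATH", "/workspace/venvs/stable-diffusion-webui"),
        "jupyter_password": env.get("JUPYTER_LAB_PASSWORD"),         # None -> no password
        "autolaunch_disabled": bool(env.get("DISABLE_AUTOLAUNCH")),  # unset -> UIs auto-launch
        "tensorboard_enabled": env.get("ENABLE_TENSORBOARD", "1") == "1",
    }
```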
Stable Diffusion Web UI, Kohya_ss, ComfyUI, and InvokeAI each write their own log file, so you can tail a log to view its output instead of killing the service.
| Application | Log file |
|---|---|
| Stable Diffusion Web UI | /workspace/logs/webui.log |
| Kohya_ss | /workspace/logs/kohya_ss.log |
| ComfyUI | /workspace/logs/comfyui.log |
| InvokeAI | /workspace/logs/invokeai.log |
Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.