
Stable Diffusion web UI

A browser interface for Stable Diffusion, based on the Gradio library.

Check the custom scripts wiki page for extra scripts developed by users.

Features

Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One-click install-and-run script (but you still must install Python and Git)
  • Outpainting
  • Inpainting
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) - will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) - alternative syntax
    • select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
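To illustrate the attention syntax above, here is a minimal sketch (not the web UI's actual parser) of how a prompt could be resolved into (fragment, weight) pairs, using the convention that each pair of parentheses multiplies attention by 1.1, so ((tuxedo)) is roughly equivalent to (tuxedo:1.21):

```python
import re

def parse_attention(prompt: str):
    """Return (fragment, weight) pairs for a simplified attention syntax."""
    pairs = []
    # Alternatives: explicit (text:weight) | nested ((text)) | plain text
    pattern = re.compile(r"\(([^():]+):([\d.]+)\)|(\(+)([^()]+)(\)+)|([^()]+)")
    for m in pattern.finditer(prompt):
        if m.group(1):                          # explicit (text:weight)
            pairs.append((m.group(1), float(m.group(2))))
        elif m.group(4):                        # nested parens: 1.1 per level
            level = min(len(m.group(3)), len(m.group(5)))
            pairs.append((m.group(4), round(1.1 ** level, 2)))
        elif m.group(6).strip():                # unweighted text
            pairs.append((m.group(6).strip(), 1.0))
    return pairs
```

For example, both `parse_attention("a man in a ((tuxedo))")` and `parse_attention("a man in a (tuxedo:1.21)")` end with the pair ("tuxedo", 1.21).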
  • Loopback, run img2img processing multiple times
  • X/Y plot, a way to draw a 2-dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR, neural network upscaler
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Prompt length validation
    • get length of prompt in tokens as you type
    • get a warning after generation if some text was truncated
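A hypothetical sketch of the length check described above (a whitespace split stands in for the real CLIP tokenizer, and 75 is the classic Stable Diffusion token limit):

```python
TOKEN_LIMIT = 75  # classic Stable Diffusion prompt limit

def prompt_status(prompt: str):
    """Return (token_count, was_truncated) for a live prompt-length display.

    Note: splitting on whitespace is only an illustration; the real count
    comes from the model's tokenizer.
    """
    tokens = prompt.split()
    return len(tokens), len(tokens) > TOKEN_LIMIT
```

For example, `prompt_status("a man in a tuxedo")` reports 5 tokens and no truncation.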
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG
    • can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
    • can be disabled in settings
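The PNG side of this works via a text chunk in the image file. The following is a stdlib-only sketch of the chunk mechanics (the web UI itself uses an imaging library; the "parameters" keyword here matches the kind of key shown in the PNG info tab):

```python
import struct
import zlib

def make_text_chunk(keyword: str, text: str) -> bytes:
    """Build a PNG tEXt chunk: 4-byte length, type, payload, 4-byte CRC."""
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk_type = b"tEXt"
    crc = zlib.crc32(chunk_type + payload) & 0xFFFFFFFF
    return (struct.pack(">I", len(payload)) + chunk_type
            + payload + struct.pack(">I", crc))

def read_text_chunks(png: bytes) -> dict:
    """Scan a PNG byte stream and return {keyword: text} from tEXt chunks."""
    chunks, pos = {}, 8                      # skip the 8-byte PNG signature
    while pos + 8 <= len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length                   # 4 len + 4 type + data + 4 CRC
    return chunks
```

Writing a chunk with `make_text_chunk("parameters", "a cat, Steps: 20")` after the PNG signature and reading it back with `read_text_chunks` recovers the parameter string.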
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Random artist button
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
  • Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
  • Styles, a way to save parts of prompts and easily apply them via dropdown later
  • Variations, a way to generate the same image but with tiny differences
  • Seed resizing, a way to generate the same image but at a slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative
  • Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge two checkpoints into one
  • Custom scripts with many extensions from the community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
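The two Composable-Diffusion rules above can be sketched as follows; this is a simplified illustration, not the repository's actual parser, splitting on the uppercase AND keyword and reading an optional ":weight" suffix that defaults to 1.0:

```python
import re

def split_composable(prompt: str):
    """Split a prompt on uppercase AND into (subprompt, weight) pairs."""
    parts = []
    for sub in re.split(r"\bAND\b", prompt):
        sub = sub.strip()
        m = re.match(r"^(.*?)\s*:\s*([\d.]+)$", sub)  # optional ":weight"
        if m:
            parts.append((m.group(1).strip(), float(m.group(2))))
        else:
            parts.append((sub, 1.0))                  # default weight
    return parts
```

For the example above, `split_composable("a cat :1.2 AND a dog AND a penguin :2.2")` yields ("a cat", 1.2), ("a dog", 1.0), and ("a penguin", 2.2).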
  • No token limit for prompts (original Stable Diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts (add --deepdanbooru to commandline args)

Installation and Running

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Alternatively, use Google Colab.

Automatic Installation on Windows

  1. Install Python 3.10.6, checking "Add Python to PATH"
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.
  4. Place model.ckpt in the models directory (see dependencies for where to get it).
  5. (Optional) Place GFPGANv1.4.pth in the base directory, alongside webui.py (see dependencies for where to get it).
  6. Run webui-user.bat from Windows Explorer as normal, non-administrator, user.

Automatic Installation on Linux

  1. Install the dependencies:

     # Debian-based:
     sudo apt install wget git python3 python3-venv
     # Red Hat-based:
     sudo dnf install wget git python3
     # Arch-based:
     sudo pacman -S wget git python3

  2. To install in /home/$(whoami)/stable-diffusion-webui/, run:

     bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)

Installation on Apple Silicon

Find the instructions here.

Contributing

Here's how to add code to this repo: Contributing

Documentation

The documentation was moved from this README over to the project's wiki.

Credits