Checkpoint and Diffusers Models#
The model checkpoint files (`*.ckpt`) are the Stable Diffusion "secret sauce". They are the product of training the AI on millions of captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
that have been fine-tuned to provide particular styles, genres, or
other features. In addition, there are several newer formats that
improve on the original checkpoint format: the `safetensors` format,
which prevents malware from masquerading as a model, and
`diffusers` models, the most recent innovation.
InvokeAI supports all three formats but strongly prefers the
diffusers format. These are distributed as directories containing
multiple subfolders, each of which contains a different aspect of the
model. The advantage of this is that the models load from disk really
fast. Another advantage is that
diffusers models are supported by a
large and active set of open source developers working at and with
the HuggingFace organization, and improvements in both rendering quality
and performance are being made at a rapid pace. Among other features
is the ability to download and install a
diffusers model just by
providing its HuggingFace repository ID.
While InvokeAI will continue to support legacy `.ckpt` and `.safetensors`
models for the near future, these are deprecated and support will
likely be withdrawn at some point in the not-too-distant future.
This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` files into `diffusers` models.
InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
configs/INITIAL_MODELS.yaml in the InvokeAI root directory. The
subset that are currently installed are found in
configs/models.yaml. As of v2.3.1, the list of starter models is:
| Model Name | HuggingFace Repo ID | Description | URL |
|---|---|---|---|
| stable-diffusion-1.5 | runwayml/stable-diffusion-v1-5 | Stable Diffusion version 1.5 diffusers model (4.27 GB) | https://huggingface.co/runwayml/stable-diffusion-v1-5 |
| sd-inpainting-1.5 | runwayml/stable-diffusion-inpainting | RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB) | https://huggingface.co/runwayml/stable-diffusion-inpainting |
| stable-diffusion-2.1 | stabilityai/stable-diffusion-2-1 | Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB) | https://huggingface.co/stabilityai/stable-diffusion-2-1 |
| sd-inpainting-2.0 | stabilityai/stable-diffusion-2-inpainting | Stable Diffusion version 2.0 inpainting model (5.21 GB) | https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
| analog-diffusion-1.0 | wavymulder/Analog-Diffusion | An SD-1.5 model trained on diverse analog photographs (2.13 GB) | https://huggingface.co/wavymulder/Analog-Diffusion |
| deliberate-1.0 | XpucT/Deliberate | Versatile model that produces detailed images up to 768px (4.27 GB) | https://huggingface.co/XpucT/Deliberate |
| d&d-diffusion-1.0 | 0xJustin/Dungeons-and-Diffusion | Dungeons & Dragons characters (2.13 GB) | https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
| dreamlike-photoreal-2.0 | dreamlike-art/dreamlike-photoreal-2.0 | A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB) | https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
| inkpunk-1.0 | Envvi/Inkpunk-Diffusion | Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB) | https://huggingface.co/Envvi/Inkpunk-Diffusion |
| openjourney-4.0 | prompthero/openjourney | An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB) | https://huggingface.co/prompthero/openjourney |
| portrait-plus-1.0 | wavymulder/portraitplus | An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB) | https://huggingface.co/wavymulder/portraitplus |
| seek-art-mega-1.0 | coreco/seek.art_MEGA | A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB) | https://huggingface.co/coreco/seek.art_MEGA |
| trinart-2.0 | naclbit/trinart_stable_diffusion_v2 | An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB) | https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
| waifu-diffusion-1.4 | hakurei/waifu-diffusion | An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB) | https://huggingface.co/hakurei/waifu-diffusion |
Note that these files are covered by an "Ethical AI" license which forbids certain uses. When you initially download them, you are asked to accept the license terms. In addition, some of these models carry additional license terms that limit their use in commercial applications or on public servers. Be sure to familiarize yourself with the model terms by visiting the URLs in the table above.
There are too many community models to list here, and more are being
contributed every day. The HuggingFace site
is a great resource for `diffusers` models, and is also the home of a
growing collection of embedding (`.bin`) models that add subjects and/or styles to your
images. The latter are automatically installed on the fly when you
include the text `<concept-name>` in your prompt. See Concepts
Library for more information.
Another popular site for community-contributed models is
CIVITAI. This extensive site currently supports
.ckpt models, but they can be easily loaded
into InvokeAI and/or converted into optimized
diffusers models. Be
aware that CIVITAI hosts many models that generate NSFW content.
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
There are multiple ways to install and manage models:

1. The `invokeai-configure` script, which will download and install them for you.
2. The command-line tool (CLI), which has commands that allow you to import, configure and modify model files.
3. The web interface (WebUI), which has a GUI for importing and managing models.
Installation via invokeai-configure#
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
Installation via the CLI#
You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.
If the model is already downloaded to your local disk, use
`!import_model /path/to/file.ckpt` to load it.
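For instance, a hypothetical import of a locally downloaded checkpoint (the path and file name here are illustrative) might look like:

```bash
invoke> !import_model /home/fred/models/analog-diffusion-1.0.ckpt
```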
On Windows systems, use forward slashes rather than backslashes
in your file paths. If you do use backslashes, you must double them,
e.g. `C:\\path\\to\\file.ckpt`.
Alternatively you can directly import the file using its URL:
For this to work, the URL must not be password-protected. Otherwise you will receive a 404 error.
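A hypothetical URL import (the URL is illustrative) might look like:

```bash
invoke> !import_model https://example.org/models/mymodel.safetensors
```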
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
The system will then ask you a few other questions about the model, including what size image it was trained on (usually 512x512), what name and description you wish to use for it, and whether you would like to install a custom VAE (variable autoencoder) file for the model. For recent models, the answer to the VAE question is usually "no," but it won't hurt to answer "yes".
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
generating immediately. You'll also be asked if you wish to make this
the default model on startup. You can change this later using the
`!edit_model` command.
Importing a batch of
.safetensors models from a directory#
You may also point
!import_model to a directory containing a set of
.safetensors files. They will be imported en masse.
You will be given the option to import all models found in the directory, or select which ones to import. If there are subfolders within the directory, they will be searched for models to import.
You can install a
diffusers model from the HuggingFace site using
!import_model and the HuggingFace repo_id for the model:
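For example, using the repository ID of one of the starter models listed above:

```bash
invoke> !import_model runwayml/stable-diffusion-v1-5
```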
Alternatively, you can download the model to disk and import it from there. The model may be distributed as a ZIP file, or as a Git repository:
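A sketch of importing a downloaded model directory (the path is illustrative):

```bash
invoke> !import_model /home/fred/downloads/stable-diffusion-v1-5
```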
The CLI supports file path autocompletion
Type a bit of the path name and hit Tab in order to get a choice of possible completions.
On Windows, you can drag model files onto the command-line window.
Once you have typed in
!import_model, you can drag the
model file or directory onto the command-line to insert the model path. This way, you don't need to
type it or copy/paste. However, you will need to reverse or
double backslashes as noted above.
Before installing, the CLI will ask you for a short name and description for the model, whether to make this the default model that is loaded at InvokeAI startup time, and whether to replace its VAE. Generally the answer to the latter question is "no".
Converting legacy models into diffusers#
The `!convert_model` command will convert a `.ckpt` or `.safetensors`
model file into `diffusers` format and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to
!import_model. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL.
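For example, to convert a local checkpoint (the path is illustrative):

```bash
invoke> !convert_model /home/fred/models/analog-diffusion-1.0.ckpt
```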
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
Optimizing a previously-installed model#
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file by using the
`!optimize_model` command. Simply pass the short name of an existing
model:
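For example, assuming a previously installed model registered under the short name `analog-diffusion-1.0` (the name is illustrative):

```bash
invoke> !optimize_model analog-diffusion-1.0
```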
The model will be converted into
diffusers format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
Related CLI Commands#
There are a whole series of additional model management commands in the CLI that you can read about in Command-Line Interface. These include:
- `!models` - List all installed models
- `!switch <model name>` - Switch to the indicated model
- `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
- `!del_model <model name>` - Delete the indicated model
Installation by manually editing models.yaml#
If you are comfortable with a text editor then you may simply edit
`models.yaml` directly.
You will need to download the desired
.ckpt/.safetensors file and
place it somewhere on your machine's filesystem. Alternatively, for a
diffusers model, record the repo_id or download the whole model
directory. Then using a text editor (e.g. the Windows Notepad
application), open the file
configs/models.yaml, and add a new
stanza that follows this model:
A legacy model#
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```
Note that the format is `ckpt` for both `.ckpt` and `.safetensors` files.
A diffusers model#
A stanza for a
diffusers model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See
models.yaml for examples.
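As a sketch of how an external VAE might be attached to a legacy stanza, a `vae` key points at the VAE weights file (the path here is illustrative; verify the exact syntax against the examples in your own `models.yaml`):

```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  vae: ./path/to/vae-ft-mse-840000-ema-pruned.ckpt
  format: ckpt
  width: 512
  height: 512
```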
After you save the modified
`models.yaml` file, relaunch
invokeai. The new model will now be available for your use.
Installation via the WebUI#
To access the WebUI Model Manager, click on the button that looks like a cube in the upper right side of the browser screen. This will bring up a dialogue that lists the models you have already installed, and allows you to load, delete or edit them:
To add a new model, click on + Add New and select either a checkpoint/safetensors model or a diffusers model:
In this example, we chose Add Diffusers. As shown in the figure
below, a new dialogue prompts you to enter the name to use for the
model, its description, and either the location of the
model on disk, or its Repo ID on the HuggingFace web site. If you
choose to enter a path to disk, the system will autocomplete for you
as you type:
Press Add Model at the bottom of the dialogue (scrolled out of
sight in the figure), and the model will be downloaded, imported, and
installed.
The Add Checkpoint/Safetensor Model option is similar, except that
in this case you can choose to scan an entire folder for
checkpoint/safetensors files to import. Simply type in the path of the
directory and press the "Search" icon. This will display the
`.ckpt` and `.safetensors` files found inside the directory and its
subfolders, and allow you to choose which ones to import:
Model Management Startup Options#
The `invoke` launcher and the
invokeai script accept a series of
command-line arguments that modify InvokeAI's behavior when loading
models. These can be provided on the command line, or added to the
InvokeAI root directory's
invokeai.init initialization file.
The arguments are:
- `--model <model name>` -- Start up with the indicated model loaded
- `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
- `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.
Here is an example of providing an argument on the command line using
the `invoke.sh` launch script:
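For example, to scan a checkpoints directory at launch time (the directory path is illustrative):

```bash
./invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
```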
And here is what the same argument looks like in the `invokeai.init`
initialization file:

```
--outdir="/home/fred/invokeai/outputs"
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
Created: October 29, 2022