This is for Advanced Users
Python experience is mandatory
As of InvokeAI v2.3.0, installation using the conda package manager is no longer supported. It will likely still work, but we are not testing this installation method.
On Windows systems, you are encouraged to install and use PowerShell, which provides compatibility with Linux and Mac shells and nice features such as command-line completion.
Before you start, make sure you have the following prerequisites installed. These are described in more detail in Automated Installation, and in many cases will already be installed (if, for example, you have used your system for gaming):
Python version 3.10 through 3.11
For those with NVIDIA GPUs, you will need to install the CUDA toolkit and optionally the xFormers library.
For Linux users with AMD GPUs, you will need to install the ROCm toolkit. Note that InvokeAI does not support AMD GPUs on Windows systems due to lack of a Windows ROCm library.
Visual C++ Libraries
Windows users must install the free Visual C++ libraries from Microsoft
The Xcode command line tools
for Macintosh users. Instructions are available at Free Code Camp
- Macintosh users may also need to run the `Install Certificates` command if model downloads give lots of certificate errors. Run:
/Applications/Python\ 3.10/Install\ Certificates.command
To install InvokeAI with virtual environments and the PIP package manager, please follow these steps:
Please make sure you are using Python 3.10 through 3.11. The rest of the install procedure depends on this and will not work with other versions:
Create a directory to contain your InvokeAI library, configuration files, and models. This is known as the "runtime" or "root" directory, and often lives in your home directory under the name invokeai.

Please keep in mind the disk space requirements - you will need at least 20GB for the models and the virtual environment. From now on we will refer to this directory as `INVOKEAI_ROOT`. For convenience, the steps below create a shell variable of that name which contains the path to it.
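For example, on Linux or macOS the directory and shell variable can be created like this (the path `~/invokeai` is a common choice, not a requirement):

```bash
# Choose a location for the runtime directory and remember it in a variable
export INVOKEAI_ROOT=~/invokeai
mkdir -p $INVOKEAI_ROOT
```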
Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`. If the command `python` doesn't work, try `python3`. Note that while you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This makes it possible for the InvokeAI applications to find the model data and configuration. If you do not choose to install the virtual environment inside the root directory, then you must set the `INVOKEAI_ROOT` environment variable in your shell environment, for example, by editing your `~/.zshrc` file, or by setting the Windows environment variable using the Advanced System Settings dialogue. Refer to your operating system documentation for details.
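A minimal sketch of this step, assuming the `INVOKEAI_ROOT` shell variable set earlier (`--prompt` is optional; it just names the environment in your shell prompt):

```bash
cd $INVOKEAI_ROOT
python -m venv .venv --prompt InvokeAI
```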
Activate the new environment:
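The activation command differs by platform; for example:

```bash
# Linux / macOS
source .venv/bin/activate

# Windows (PowerShell)
# .venv\Scripts\activate
```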
The command-line prompt should change to show `(InvokeAI)` at the beginning of the prompt. Note that all the following steps should be run while inside the `INVOKEAI_ROOT` directory.
Make sure that pip is installed in your virtual environment and up to date:
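One way to do this from within the activated environment:

```bash
python -m pip install --upgrade pip
```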
Install the InvokeAI Package. The `--extra-index-url` option is used to select among CUDA, ROCm and CPU/MPS drivers as shown below:
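As a sketch (the index URLs are examples; check the PyTorch website for the wheel index matching your CUDA or ROCm version):

```bash
# NVIDIA GPU (CUDA)
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118

# CPU only, or macOS (MPS)
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```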
Deactivate and reactivate your virtual environment so that the invokeai-specific commands become available in the environment:
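For example, on Linux or macOS:

```bash
deactivate
source .venv/bin/activate
# Windows (PowerShell): .venv\Scripts\activate
```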
Set up the runtime directory
In this step you will initialize your runtime directory with the downloaded models, model config files, directory for textual inversion embeddings, and your outputs.
Don't miss the dot at the end of the command!
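A sketch of the configure invocation, run from inside `INVOKEAI_ROOT` (the trailing dot tells the script to use the current directory as the root):

```bash
invokeai-configure --root .
```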
`invokeai-configure` will interactively guide you through the process of downloading and installing the weights files needed for InvokeAI. Note that the main Stable Diffusion weights file is protected by a license agreement that you have to agree to. The script will list the steps you need to take to create an account on the site that hosts the weights files, accept the agreement, and provide an access token that allows InvokeAI to legally download and install the weights files.
If you get an error message about a module not being installed, check that the `invokeai` environment is active and if not, repeat step 5.
If you have already downloaded the weights file(s) for another Stable Diffusion distribution, you may skip this step (by selecting "skip" when prompted) and configure InvokeAI to use the previously-downloaded files. The process for this is described in Installing Models.
Run the command-line or the web interface:

From within `INVOKEAI_ROOT`, activate the environment (with `.venv\scripts\activate` on Windows, or `source .venv/bin/activate` on Linux/Mac), and then run the script `invokeai`. If the virtual environment you selected is NOT inside `INVOKEAI_ROOT`, then you must specify the path to the root directory by adding `--root_dir \path\to\invokeai` to the commands below:
Make sure that the virtual environment is activated, which should show `(.venv)` in front of your prompt!
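For example (add `--root_dir` as described above if your virtual environment lives outside the root directory):

```bash
invokeai --web   # web interface
invokeai         # command-line interface
```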
If you choose to run the web interface, point your browser at http://localhost:9090 in order to load the GUI.
You can permanently set the location of the runtime directory by setting the environment variable `INVOKEAI_ROOT` to the path of the directory. As mentioned previously, this is highly recommended if your virtual environment is located outside of your runtime directory.
On Linux, it is recommended to run invokeai with the following environment variable: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This environment variable is set automatically when running via the launcher script.
Browse the features section to learn about all the things you can do with InvokeAI.
Subsequently, to relaunch the script, activate the virtual environment, and then run the `invokeai` command. If you forget to activate the virtual environment you will most likely receive a `command not found` error.
Do not move the runtime directory after installation. The virtual environment will get confused if the directory is moved.
The Textual Inversion script can be launched with the command:
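Assuming the `invokeai-ti` entry point was installed with the package, a sketch of the launch command is:

```bash
invokeai-ti --gui
```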
Similarly, the Model Merging script can be launched with the command:
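Again assuming the package installed its `invokeai-merge` entry point:

```bash
invokeai-merge --gui
```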
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument to get usage instructions.
If you have an interest in how InvokeAI works, or you would like to add features or bugfixes, you are encouraged to install the source code for InvokeAI. For this to work, you will need to install the git source code management program. If it is not already installed on your system, please see the Git Installation guide.
You will also need to install the frontend development toolchain.
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, otherwise the two may interfere.
Why do I need the frontend toolchain?
The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).
- Create a fork of the InvokeAI repository through the GitHub UI or this link
From the command line, run this command:
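A sketch of the clone command, assuming the standard GitHub fork layout (replace `<your-github-username>` with your own account name):

```bash
git clone https://github.com/<your-github-username>/InvokeAI.git
```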
This will create a directory named `InvokeAI` and populate it with the full source code from your fork of the InvokeAI repository.
Activate the InvokeAI virtual environment as per step (4) of the manual installation protocol (important!)
Enter the InvokeAI repository directory and run one of these commands, based on your GPU:
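For example (the index URLs are illustrative; see step (6) above for the full per-platform list):

```bash
# NVIDIA GPU (CUDA)
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118

# CPU only, or macOS (MPS)
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```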
Be sure to pass `-e` (for an editable install) and don't forget the dot ("."). It is part of the command.
Install the frontend toolchain and do a production build of the UI as described.
You can now run `invokeai` and its related commands. The code will be read from the repository, so that you can edit the .py source files and watch the code's behavior change.
When you pull in new changes to the repo, be sure to re-build the UI.
If you wish to contribute to the InvokeAI project, you are encouraged to establish a GitHub account and "fork" https://github.com/invoke-ai/InvokeAI into your own copy of the repository. You can then use GitHub functions to create and submit pull requests to contribute improvements to the project.
Please see Contributing for hints on getting started.
Unsupported Conda Install
Congratulations, you found the "secret" Conda installation instructions. If you really, really want to use Conda with InvokeAI, you can do so using this unsupported recipe:
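A sketch of such a recipe (the Python version, runtime path, and CUDA index URL are assumptions; adjust them to your platform):

```bash
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```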
The `pip install` command shown in this recipe is for Linux/Windows systems with an NVIDIA GPU. See step (6) above for the command to use with other platforms/GPU combinations. If you don't wish to pass the `--root` argument to `invokeai` with each launch, you may set the environment variable `INVOKEAI_ROOT` to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI staff will not be able to help you out. Caveat Emptor!
Created: November 12, 2022