InvokeAI Configuration#


Runtime settings, including the location of files and directories, memory usage, and performance, are managed via the invokeai.yaml config file or environment variables. A subset of settings may be set via commandline arguments.

Settings sources are used in this order:

  • CLI args
  • Environment variables
  • invokeai.yaml settings
  • Fallback: defaults
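If the same setting appears in more than one source, the higher-priority source wins. For example (the values here are illustrative):

```yaml
# invokeai.yaml
port: 9090
# If the environment also sets INVOKEAI_PORT=9091, the app binds to
# port 9091: the environment variable overrides the file. A CLI arg,
# where one exists for the setting, would override both.
```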

InvokeAI Root Directory#

On startup, InvokeAI searches for its "root" directory. This is the directory that contains models, images, the database, and so on. It also contains a configuration file called invokeai.yaml.

InvokeAI searches for the root directory in this order:

  1. The --root <path> CLI arg.
  2. The environment variable INVOKEAI_ROOT.
  3. The directory containing the currently active virtual environment.
  4. Fallback: a directory in the current user's home directory named invokeai.

InvokeAI Configuration File#

Inside the root directory, we read settings from the invokeai.yaml file.

It has two sections - one for internal use and one for user settings:

# Internal metadata - do not edit:
schema_version: 4

# Put user settings here:
host: 0.0.0.0 # serve the app on your local network
models_dir: D:\invokeai\models # store models on an external drive
precision: float16 # always use fp16 precision

The settings in this file will override the defaults. You only need to change this file if the default for a particular setting doesn't work for you.

Some settings, like Model Marketplace API Keys, require the YAML to be formatted correctly. Here is a basic guide to YAML files.

You can fix a broken invokeai.yaml by deleting it and running the configuration script again -- option [6] in the launcher, "Re-run the configure script".

Custom Config File Location#

You can use any config file with the --config CLI arg. Pass in the path to the invokeai.yaml file you want to use.

Note that environment variables take precedence over any settings in the config file.

Environment Variables#

All settings may be set via environment variables by prefixing INVOKEAI_ to the variable name. For example, INVOKEAI_HOST would set the host setting.

For non-primitive values, pass a JSON-encoded string:

export INVOKEAI_REMOTE_API_TOKENS='[{"url_regex":"modelmarketplace", "token": "12345"}]'

We suggest using invokeai.yaml, as it is more user-friendly.
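The environment variable example above corresponds to this invokeai.yaml fragment:

```yaml
remote_api_tokens:
  - url_regex: modelmarketplace
    token: "12345"
```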

CLI Args#

A subset of settings may be specified using CLI args:

  • --root: specify the root directory
  • --config: override the default invokeai.yaml file location

All Settings#

Following the table are additional explanations for certain settings.

InvokeAIAppConfig#


Name Type Description
host str

IP address to bind to. Use to serve to your local network.

port int

Port to bind to.

allow_origins list[str]

Allowed CORS origins.

allow_credentials bool

Allow CORS credentials.

allow_methods list[str]

Methods allowed for CORS.

allow_headers list[str]

Headers allowed for CORS.

ssl_certfile Optional[Path]

SSL certificate file for HTTPS.

ssl_keyfile Optional[Path]

SSL key file for HTTPS.

log_tokenization bool

Enable logging of parsed prompt tokens.

patchmatch bool

Enable patchmatch inpaint code.

models_dir Path

Path to the models directory.

convert_cache_dir Path

Path to the converted models cache directory. When loading a non-diffusers model, it is converted and stored on disk at this location.

legacy_conf_dir Path

Path to directory of legacy checkpoint config files.

db_dir Path

Path to InvokeAI databases directory.

outputs_dir Path

Path to directory for outputs.

custom_nodes_dir Path

Path to directory for custom nodes.

log_handlers list[str]

Log handler. Valid options are "console", "file=&lt;path&gt;", "syslog=path|address:host:port", "http=&lt;url&gt;".

log_format LOG_FORMAT

Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.
Valid values: plain, color, syslog, legacy

log_level LOG_LEVEL

Emit logging messages at this level or higher.
Valid values: debug, info, warning, error, critical

log_sql bool

Log SQL queries. log_level must be debug for this to do anything. Extremely verbose.

use_memory_db bool

Use in-memory database. Useful for development.

dev_reload bool

Automatically reload when Python sources are changed. Does not reload node definitions.

profile_graphs bool

Enable graph profiling using cProfile.

profile_prefix Optional[str]

An optional prefix for profile output files.

profiles_dir Path

Path to profiles output directory.

ram float

Maximum amount of RAM used by the model cache for rapid switching (GB).

vram float

Amount of VRAM reserved for model storage (GB).

convert_cache float

Maximum size of on-disk converted models cache (GB).

lazy_offload bool

Keep models in VRAM until their space is needed.

log_memory_usage bool

If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.

device DEVICE

Preferred execution device. auto will choose the device depending on the hardware platform and the installed torch capabilities.
Valid values: auto, cpu, cuda, cuda:1, mps

precision PRECISION

Floating point precision. float16 will consume half the memory of float32 but produce slightly lower-quality images. The auto setting will guess the proper precision based on your video card and operating system.
Valid values: auto, float16, bfloat16, float32, autocast

sequential_guidance bool

Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.

attention_type ATTENTION_TYPE

Attention type.
Valid values: auto, normal, xformers, sliced, torch-sdp

attention_slice_size ATTENTION_SLICE_SIZE

Slice size, valid when attention_type=="sliced".
Valid values: auto, balanced, max, 1, 2, 3, 4, 5, 6, 7, 8

force_tiled_decode bool

Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).

pil_compress_level int

The compress_level setting of PIL, used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.

max_queue_size int

Maximum number of items in the session queue.

allow_nodes Optional[list[str]]

List of nodes to allow. Omit to allow all.

deny_nodes Optional[list[str]]

List of nodes to deny. Omit to deny none.

node_cache_size int

How many cached nodes to keep in memory.

hashing_algorithm HASHING_ALGORITHMS

Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.
Valid values: blake3_multi, blake3_single, random, md5, sha1, sha224, sha256, sha384, sha512, blake2b, blake2s, sha3_224, sha3_256, sha3_384, sha3_512, shake_128, shake_256

remote_api_tokens Optional[list[URLRegexTokenPair]]

List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.

scan_models_on_startup bool

Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with use_memory_db for testing purposes.
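As a sketch, several of the performance-related settings from the table above might be combined in invokeai.yaml like this (the values are illustrative; tune them to your hardware):

```yaml
device: cuda          # prefer the first CUDA GPU
precision: float16    # halve model memory use
attention_type: sliced
attention_slice_size: auto
ram: 12.0             # up to 12 GB of RAM for the model cache
vram: 4.0             # reserve 4 GB of VRAM for model storage
```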

Model Marketplace API Keys#

Some model marketplaces require an API key to download models. You can provide a URL pattern and appropriate token in your invokeai.yaml file to provide that API key.

The pattern can be any valid regex (you may need to surround the pattern with quotes):

  remote_api_tokens:
    # Any URL containing `models.com` will automatically use `your_models_com_token`
    - url_regex: models.com
      token: your_models_com_token
    # Any URL matching this contrived regex will use `some_other_token`
    - url_regex: '^[a-z]{3}whatever.*\.com$'
      token: some_other_token

The provided token will be added as a Bearer token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.

Model Hashing#

Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.

hashing_algorithm: blake3_single # default value

You might want to change this setting, depending on your system:

  • blake3_single (default): Single-threaded - best for spinning HDDs, still OK for SSDs
  • blake3_multi: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
  • random: Skip hashing entirely - fastest but of course no hash

During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.

Most common algorithms are supported, like md5, sha256, and sha512. These are typically much, much slower than either of the BLAKE3 variants.

Path Settings#

These options set the paths of various directories and files used by InvokeAI. Any user-defined paths should be absolute paths.
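For example, to relocate models and outputs to a larger drive (the paths below are illustrative):

```yaml
models_dir: /mnt/storage/invokeai/models
outputs_dir: /mnt/storage/invokeai/outputs
```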


Logging#

Several different log handler destinations are available, and multiple destinations are supported by providing a list:

  log_handlers:
    - console
    - syslog=localhost
    - file=/var/log/invokeai.log
  • console is the default. It prints log messages to the command-line window from which InvokeAI was launched.

  • syslog is only available on Linux and Macintosh systems. It uses the operating system's "syslog" facility to write log file entries locally or to a remote logging machine. syslog offers a variety of configuration options:

  syslog=/dev/log      - log to the /dev/log device
  syslog=localhost     - log to the network logger running on the local machine
  syslog=localhost:512 - same as above, but using a non-standard port
  syslog=fredserver,facility:LOG_USER,socktype:UDP - log to LAN-connected server "fredserver" using the facility LOG_USER and datagram packets
  • http can be used to log to a remote web server. The server must be properly configured to receive and act on log messages. The option accepts the URL to the web server, and a method argument indicating whether the message should be submitted using the GET or POST method.

The log_format option provides several alternative formats:

  • color - default format providing time, date and a message, using text colors to distinguish different log severities
  • plain - same as above, but monochrome text only
  • syslog - the log level and error message only, allowing the syslog system to attach the time and date
  • legacy - a format similar to the one used by the legacy 2.3 InvokeAI releases.
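Putting the logging options together, a typical invokeai.yaml logging block might look like this sketch:

```yaml
log_handlers:
  - console
  - file=/var/log/invokeai.log
log_format: color
log_level: info
```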