SdTextualInversionConfig
seed
A randomization seed for reproducible training. Set to any constant integer for consistent training results. If set to null, training will be non-deterministic.
base_output_dir
The output directory where the training outputs (model checkpoints, logs, intermediate predictions) will be written. A subdirectory will be created with a timestamp for each new training run.
report_to
The integration to report results and logs to. This value is passed to Hugging Face Accelerate. See accelerate.Accelerator.log_with for more details.
max_train_steps
Total number of training steps to perform. One training step is one gradient update.
One of max_train_steps or max_train_epochs should be set.
max_train_epochs
Total number of training epochs to perform. One epoch is one pass over the entire dataset.
One of max_train_steps or max_train_epochs should be set.
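For intuition, the two limits relate through the dataset size and batch size. A rough sketch (all numbers below are hypothetical, not defaults of this config):

```python
# Hypothetical numbers; one training step is one gradient update.
dataset_size = 1000      # images in the training dataset
train_batch_size = 4     # batch size used by the data loader
steps_per_epoch = dataset_size // train_batch_size  # 250

max_train_epochs = 20
equivalent_max_train_steps = max_train_epochs * steps_per_epoch  # 5000
```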
save_every_n_epochs
The interval (in epochs) at which to save checkpoints.
One of save_every_n_epochs or save_every_n_steps should be set.
save_every_n_steps
The interval (in steps) at which to save checkpoints.
One of save_every_n_epochs or save_every_n_steps should be set.
validate_every_n_epochs
The interval (in epochs) at which validation images will be generated.
One of validate_every_n_epochs or validate_every_n_steps should be set.
validate_every_n_steps
The interval (in steps) at which validation images will be generated.
One of validate_every_n_epochs or validate_every_n_steps should be set.
model
Name or path of the base model to train. Can be in diffusers format, or a single Stable Diffusion checkpoint file (e.g. "runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-xl-base-1.0", "/path/to/local/model.safetensors").
The model architecture must match the training pipeline being run. For example, if running a Textual Inversion SDXL pipeline, then model must refer to an SDXL model.
hf_variant
The Hugging Face Hub model variant to use. Only applies if model is a Hugging Face Hub model name.
num_vectors
Note: num_vectors can be overridden by initial_phrase.
The number of textual inversion embedding vectors that will be used to learn the concept.
Increasing num_vectors enables the model to learn more complex concepts, but has the following drawbacks:
- greater risk of overfitting
- increased size of the resulting output file
- consumes more of the prompt capacity at inference time
Typical values for num_vectors are in the range [1, 16].
As a rule of thumb, num_vectors can be increased as the size of the dataset increases (without overfitting).
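As a rough illustration of the output-size drawback, each vector is one row of the text encoder's embedding table. A sketch assuming an SD 1.x text encoder (hidden size 768) and float32 weights:

```python
num_vectors = 8
hidden_size = 768        # CLIP text encoder hidden size for SD 1.x
bytes_per_param = 4      # float32
embedding_bytes = num_vectors * hidden_size * bytes_per_param  # 24576 bytes (~24 KiB)
```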
placeholder_token
The special word to associate the learned embeddings with. Choose a unique token that is unlikely to already exist in the tokenizer's vocabulary.
initializer_token
Note: Exactly one of initializer_token, initial_embedding_file, or initial_phrase should be set.
A vocabulary token to use as an initializer for the placeholder token. It should be a single word that roughly describes the object or style that you're trying to train on. Must map to a single tokenizer token.
For example, if you are training on a dataset of images of your pet dog, a good choice would be dog.
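One way to verify that a candidate initializer maps to a single tokenizer token is to encode it with the base model's tokenizer. A minimal sketch using Hugging Face transformers (the model name is just an example):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)
token_ids = tokenizer.encode("dog", add_special_tokens=False)
assert len(token_ids) == 1, "initializer_token must map to a single tokenizer token"
```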
initial_embedding_file
Note: Exactly one of initializer_token, initial_embedding_file, or initial_phrase should be set.
Path to an existing TI embedding that will be used to initialize the embedding being trained. The placeholder token in the file must match the placeholder_token field.
initial_phrase
Note: Exactly one of initializer_token, initial_embedding_file, or initial_phrase should be set.
A phrase that will be used to initialize the placeholder token embedding. The phrase will be tokenized, and the corresponding embeddings will be used to initialize the placeholder tokens. The number of embedding vectors will be inferred from the length of the tokenized phrase, so keep the phrase short. The consequences of training a large number of embedding vectors are discussed in the num_vectors field documentation.
For example, if you are training on a dataset of images of pokemon, you might use "pokemon sketch white background".
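To estimate how many embedding vectors a phrase would produce, count its tokens. A minimal sketch (same example tokenizer as above):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)
phrase = "pokemon sketch white background"
num_vectors = len(tokenizer.encode(phrase, add_special_tokens=False))
print(num_vectors)  # number of embedding vectors that would be trained
```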
lr_scheduler
The learning rate scheduler to use.
lr_scheduler: Literal['linear', 'cosine', 'cosine_with_restarts', 'polynomial', 'constant', 'constant_with_warmup'] = 'constant'
lr_warmup_steps
The number of warmup steps in the learning rate scheduler. Only applied to schedulers that support warmup. See lr_scheduler.
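For reference, these two fields map roughly onto the diffusers scheduler helper (a sketch; the optimizer and step counts are placeholders):

```python
import torch
from diffusers.optimization import get_scheduler

params = [torch.nn.Parameter(torch.zeros(768))]  # placeholder trainable embedding
optimizer = torch.optim.AdamW(params, lr=1e-3)

lr_scheduler = get_scheduler(
    "constant_with_warmup",   # lr_scheduler
    optimizer=optimizer,
    num_warmup_steps=100,     # lr_warmup_steps
    num_training_steps=5000,  # e.g. max_train_steps
)
```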
min_snr_gamma
Min-SNR weighting for diffusion training was introduced in https://arxiv.org/abs/2303.09556. This strategy improves the speed of training convergence by adjusting the weight of each sample.
min_snr_gamma acts as an upper bound on the weight of samples with low noise levels.
If None, then Min-SNR weighting will not be applied. If enabled, the recommended value is min_snr_gamma = 5.0.
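For epsilon prediction, the Min-SNR loss weight for timestep t is min(SNR(t), γ) / SNR(t). A minimal sketch of computing the weights from a scheduler's alphas_cumprod (the scheduler source is just an example):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
min_snr_gamma = 5.0

alphas_cumprod = scheduler.alphas_cumprod      # shape: [num_train_timesteps]
snr = alphas_cumprod / (1.0 - alphas_cumprod)  # SNR(t)
# Cap the weight of low-noise (high-SNR) samples at min_snr_gamma.
loss_weights = torch.clamp(snr, max=min_snr_gamma) / snr
```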
cache_vae_outputs
If True, the VAE will be applied to all of the images in the dataset before starting training and the results will be cached to disk. This reduces the VRAM requirements during training (don't have to keep the VAE in VRAM), and speeds up training (don't have to run the VAE encoding step).
This option can only be enabled if all non-deterministic image augmentations are disabled (i.e. center_crop=True, random_flip=False, etc.).
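A sketch of what the cached computation amounts to, using a diffusers VAE (model name, input, and output path are illustrative):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.eval()

image = torch.zeros(1, 3, 512, 512)  # a preprocessed image tensor in [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
torch.save(latents, "latents_0000.pt")  # cached; the VAE can then stay out of VRAM
```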
enable_cpu_offload_during_validation
If True, models will be kept in CPU memory and loaded into GPU memory one-by-one while generating validation images. This reduces VRAM requirements at the cost of slower generation of validation images.
gradient_accumulation_steps
The number of gradient steps to accumulate before each weight update. This is an alternative to increasing train_batch_size when training with limited VRAM.
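The effective batch size is the product of the two (hypothetical numbers):

```python
train_batch_size = 2              # fits in VRAM
gradient_accumulation_steps = 8
effective_batch_size = train_batch_size * gradient_accumulation_steps  # 16
```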
weight_dtype
All weights (trainable and fixed) will be cast to this precision. Lower precision dtypes require less VRAM, and result in faster training, but are more prone to issues with numerical stability.
Recommendations:
- "float32": Use this mode if you have plenty of VRAM available.
- "bfloat16": Use this mode if you have limited VRAM and a GPU that supports bfloat16.
- "float16": Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
See also mixed_precision.
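A small sketch of how one might choose between the three options on a CUDA machine (the helper function is hypothetical, not part of this config):

```python
import torch

def pick_weight_dtype(low_vram: bool) -> str:
    """Hypothetical helper that follows the recommendations above."""
    if not low_vram:
        return "float32"
    # bfloat16 requires hardware support (e.g. Ampere or newer NVIDIA GPUs).
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return "bfloat16"
    return "float16"
```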
mixed_precision
The mixed precision mode to use.
If mixed precision is enabled, then all non-trainable parameters will be cast to the specified weight_dtype, and trainable parameters are kept in float32 precision to avoid issues with numerical stability.
This value is passed to Hugging Face Accelerate. See accelerate.Accelerator.mixed_precision for more details.
gradient_checkpointing
Whether or not to use gradient checkpointing to save memory at the expense of a slower backward pass. Enabling gradient checkpointing slows down training by ~20%.
max_checkpoints
The maximum number of checkpoints to keep. New checkpoints will replace earlier checkpoints to stay under this limit. Note that this limit is applied to 'step' and 'epoch' checkpoints separately.
prediction_type
The prediction type that will be used for training. If None, the prediction type will be inferred from the scheduler.
max_grad_norm
Maximum gradient norm for gradient clipping. Set to None for no clipping.
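This corresponds to standard gradient-norm clipping, applied after the backward pass and before the optimizer step (a sketch; params is a placeholder):

```python
import torch

max_grad_norm = 1.0
params = [torch.nn.Parameter(torch.randn(768))]
# ... loss.backward() happens here ...
torch.nn.utils.clip_grad_norm_(params, max_grad_norm)
# ... optimizer.step() follows ...
```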
validation_prompts
A list of prompts that will be used to generate images throughout training for the purpose of tracking progress.
negative_validation_prompts
A list of negative prompts that will be applied when generating validation images. If set, this list should have the same length as validation_prompts.
num_validation_images_per_prompt
The number of validation images to generate for each prompt in validation_prompts. Careful: validation can become very slow if this number is too large.
use_masks
If True, image masks will be applied to weight the loss during training. The dataset must contain masks for this feature to be used.
data_loader
The data configuration. See TextualInversionSDDataLoaderConfig for details.