Adding Conditional Control to Text-to-Image Diffusion Models
L. Zhang, A. Rao, and M. Agrawala (2023). arXiv:2302.05543. Code and supplementary material: https://github.com/lllyasviel/ControlNet
Abstract
We present ControlNet, a neural network architecture to add spatial
conditioning controls to large, pretrained text-to-image diffusion models.
ControlNet locks the production-ready large diffusion models, and reuses their
deep and robust encoding layers pretrained with billions of images as a strong
backbone to learn a diverse set of conditional controls. The neural
architecture is connected with "zero convolutions" (zero-initialized
convolution layers) that progressively grow the parameters from zero and ensure
that no harmful noise could affect the finetuning. We test various conditioning
controls, e.g., edges, depth, segmentation, human pose, etc., with Stable
Diffusion, using single or multiple conditions, with or without prompts. We
show that the training of ControlNets is robust with small (<50k) and large
(>1m) datasets. Extensive results show that ControlNet may facilitate wider
applications to control image diffusion models.
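The "zero convolution" idea above can be illustrated with a small sketch: a 1x1 convolution whose weight and bias are initialized to zero, so that at the start of finetuning the conditional branch contributes exactly nothing and the locked backbone's features pass through unchanged. This is a pure-Python toy illustrating the principle, not the authors' implementation (which uses zero-initialized layers inside a copy of the Stable Diffusion encoder).

```python
# Toy "zero convolution": a 1x1 conv (pure channel mixing) with weight and
# bias initialized to zero. At init, its output is all zeros, so adding it
# as a residual to the locked backbone leaves the backbone's output intact.

def zero_conv_1x1(x, weight, bias):
    """Apply a 1x1 convolution to a feature map.
    x: [C_in][H][W] nested lists, weight: [C_out][C_in], bias: [C_out]."""
    c_out = len(weight)
    h, w = len(x[0]), len(x[0][0])
    out = [[[bias[o] for _ in range(w)] for _ in range(h)] for o in range(c_out)]
    for o in range(c_out):
        for i in range(len(x)):
            for r in range(h):
                for c in range(w):
                    out[o][r][c] += weight[o][i] * x[i][r][c]
    return out

# Zero-initialized parameters: this is the key trick.
c_in, c_out = 2, 2
weight = [[0.0] * c_in for _ in range(c_out)]
bias = [0.0] * c_out

# A toy 2-channel, 2x2 backbone feature map.
feature = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
control = zero_conv_1x1(feature, weight, bias)

# At init, the residual added to the backbone is exactly zero, so the
# combined output equals the original backbone features: no harmful noise.
backbone_plus_control = [
    [[feature[ch][r][c] + control[ch][r][c] for c in range(2)] for r in range(2)]
    for ch in range(2)
]
assert backbone_plus_control == feature
```

During training, gradients still flow through the zero weights (they depend on the input features, not only on the weights themselves), so the parameters "progressively grow from zero" as the abstract describes.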
@misc{zhang2023adding,
abstract = {We present ControlNet, a neural network architecture to add spatial
conditioning controls to large, pretrained text-to-image diffusion models.
ControlNet locks the production-ready large diffusion models, and reuses their
deep and robust encoding layers pretrained with billions of images as a strong
backbone to learn a diverse set of conditional controls. The neural
architecture is connected with "zero convolutions" (zero-initialized
convolution layers) that progressively grow the parameters from zero and ensure
that no harmful noise could affect the finetuning. We test various conditioning
controls, e.g., edges, depth, segmentation, human pose, etc., with Stable
Diffusion, using single or multiple conditions, with or without prompts. We
show that the training of ControlNets is robust with small (<50k) and large
(>1m) datasets. Extensive results show that ControlNet may facilitate wider
applications to control image diffusion models.},
added-at = {2023-11-08T12:37:18.000+0100},
author = {Zhang, Lvmin and Rao, Anyi and Agrawala, Maneesh},
biburl = {https://www.bibsonomy.org/bibtex/29806c048cb8f31aac1ddcaed49194024/jasha10},
keywords = {CFG-resolution-weighting ControlNet LoRA U-net abrupt-success classifier-free-guidance finetuning low-rank-adaptation stable-diffusion zero-convolutions},
  note = {arXiv:2302.05543. Code and supplementary material: https://github.com/lllyasviel/ControlNet},
timestamp = {2023-11-08T13:43:06.000+0100},
title = {Adding Conditional Control to Text-to-Image Diffusion Models},
url = {http://arxiv.org/abs/2302.05543},
year = 2023
}