Abstract
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any images? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that shifts a generative model to new domains without requiring the collection of even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or infeasible to reach with existing methods. We conduct an extensive set of experiments across a wide range of domains. These demonstrate the effectiveness of our approach and show that our models preserve the latent-space structure that makes generative models appealing for downstream tasks. Code and videos are available at: stylegan-nada.github.io/
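The sketch below illustrates the general idea of text-guided domain shift with a "directional" CLIP loss: two copies of a pretrained generator produce images from the same latents, and the frozen copy anchors the source domain while the trainable copy is pushed so that the change in CLIP image embeddings aligns with the change between a source and a target text prompt. This is a minimal illustration, not the paper's reference implementation; names such as `frozen_generator`, `generator`, and `train_step` are placeholders, it assumes OpenAI's `clip` package and a generator mapping latent codes to images, and details like CLIP input normalization and data augmentation are omitted. The published code at stylegan-nada.github.io/ is the authoritative reference.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():  # CLIP stays frozen; only the generator is trained.
    p.requires_grad_(False)

def encode_text(prompt: str) -> torch.Tensor:
    tokens = clip.tokenize([prompt]).to(device)
    return F.normalize(clip_model.encode_text(tokens).float(), dim=-1)

def encode_image(images: torch.Tensor) -> torch.Tensor:
    # Resize generator output to CLIP's input resolution
    # (proper CLIP normalization is omitted here for brevity).
    images = F.interpolate(images, size=224, mode="bilinear", align_corners=False)
    return F.normalize(clip_model.encode_image(images).float(), dim=-1)

def directional_clip_loss(frozen_imgs, trained_imgs, source_text, target_text):
    # Direction between prompts in text space (e.g. "photo" -> "sketch") ...
    text_dir = F.normalize(encode_text(target_text) - encode_text(source_text), dim=-1)
    # ... should match the direction the generated images moved in image space.
    img_dir = F.normalize(encode_image(trained_imgs) - encode_image(frozen_imgs), dim=-1)
    return (1.0 - (img_dir * text_dir).sum(dim=-1)).mean()

def train_step(generator, frozen_generator, optimizer, z,
               source_text="photo", target_text="sketch"):
    # `generator` is a trainable copy of `frozen_generator`; both map latent
    # codes z to images in [-1, 1] (placeholder interface, not the paper's API).
    with torch.no_grad():
        frozen_imgs = frozen_generator(z)
    trained_imgs = generator(z)
    loss = directional_clip_loss(frozen_imgs, trained_imgs, source_text, target_text)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Comparing a difference of image embeddings against a difference of text embeddings, rather than maximizing similarity to the target prompt alone, is one common way to keep the shifted generator aligned with its source instead of collapsing toward a single prompt-matching image.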
| Field | Value |
|---|---|
| Original language | English |
| Article number | 3530164 |
| Journal | ACM Transactions on Graphics |
| Volume | 41 |
| Issue number | 4 |
| DOIs | |
| State | Published - 22 Jul 2022 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2022 ACM.
Funding
This work was partially supported by Len Blavatnik and the Blavatnik Family Foundation, the Deutsch Foundation, the Yandex Initiative in Machine Learning, BSF (grant 2020280), and ISF (grants 2492/20 and 3441/21).
| Funders | Funder number |
|---|---|
| Deutsch Foundation | |
| Yandex Initiative in Machine Learning | |
| Blavatnik Family Foundation | |
| United States-Israel Binational Science Foundation | 2020280 |
| Israel Science Foundation | 2492/20, 3441/21 |
Keywords
- Generator domain adaptation
- Text-guided content generation
- Zero-shot training