Unsupervised K-modal styled content generation

Omry Sendik, Dani Lischinski, Daniel Cohen-Or

Research output: Contribution to journal › Article › peer-review


Abstract

The emergence of deep generative models has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D. Generative Adversarial Networks (GANs) and style control mechanisms, such as Adaptive Instance Normalization (AdaIN), have proved particularly effective in this context, culminating in the state-of-the-art StyleGAN architecture. While such models are able to learn diverse distributions, provided a sufficiently large training set, they are not well-suited for scenarios where the distribution of the training data exhibits multi-modal behavior. In such cases, reshaping a uniform or normal distribution over the latent space into a complex multi-modal distribution in the data domain is challenging, and the generator might fail to sample the target distribution well. Furthermore, existing unsupervised generative models are not able to control the mode of the generated samples independently of the other visual attributes, despite the fact that they are typically disentangled in the training data. In this paper, we introduce uMM-GAN, a novel architecture designed to better model multi-modal distributions, in an unsupervised fashion. Building upon the StyleGAN architecture, our network learns multiple modes, in a completely unsupervised manner, and combines them using a set of learned weights. We demonstrate that this approach is capable of effectively approximating a complex distribution as a superposition of multiple simple ones. We further show that uMM-GAN effectively disentangles between modes and style, thereby providing an independent degree of control over the generated content.
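
Below is a minimal, hypothetical sketch (in PyTorch; not the authors' released code) of the core idea described in the abstract: forming a latent code as a learned, convex combination (a "superposition") of K mode vectors before feeding it to a StyleGAN-like generator. The names (KModalLatent, n_modes, weight_net) and the exact combination rule are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class KModalLatent(nn.Module):
        """Maps a random latent z to a convex combination of K learned mode vectors.
        Hypothetical sketch: the actual uMM-GAN architecture may differ."""
        def __init__(self, z_dim: int = 512, n_modes: int = 4):
            super().__init__()
            # One learnable embedding per mode (the centers of the "simple" distributions).
            self.modes = nn.Parameter(torch.randn(n_modes, z_dim))
            # Small network predicting per-sample mixing weights over the K modes.
            self.weight_net = nn.Linear(z_dim, n_modes)

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            w = F.softmax(self.weight_net(z), dim=-1)  # (batch, K) learned weights
            return w @ self.modes                      # superposition of the K modes

    # Usage: the resulting mode code would drive a StyleGAN-style generator, with
    # style injected separately (e.g. via AdaIN), which is where the mode/style
    # disentanglement described in the abstract would come from.
    latent = KModalLatent()
    z = torch.randn(8, 512)
    mode_code = latent(z)  # shape (8, 512)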

Original language: English
Article number: 100
Journal: ACM Transactions on Graphics
Volume: 39
Issue number: 4
DOIs
State: Published - 8 Jul 2020
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2020 ACM.

Funding

We thank the anonymous reviewers for their constructive comments. This work was supported by the Israel Science Foundation (grant no. 2366/16).

Funders: Israel Science Foundation (funder number 2366/16)

Keywords

• StyleGAN
• generative adversarial networks
• multi-modal distributions
