Structural Analogy from a Single Image Pair
Abstract
The task of unsupervised image-to-image translation has seen substantial advancements in recent years through the use of deep neural networks. Typically, the proposed solutions learn the characterizing distribution of two large, unpaired collections of images, and are able to alter the appearance of a given image while keeping its geometry intact. In this paper, we explore the capabilities of neural networks to understand image structure given only a single pair of images, A and B. We seek to generate images that are structurally aligned: that is, to generate an image that keeps the appearance and style of B, but has a structural arrangement that corresponds to A. The key idea is to map between image patches at different scales. This enables controlling the granularity at which analogies are produced, which determines the conceptual distinction between style and content. In addition to structural alignment, our method can be used to generate high-quality imagery in other conditional generation tasks using only images A and B: guided image synthesis, style and texture transfer, text translation, as well as video translation. Our code and additional results are available online.
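The key idea above, mapping between image patches at different scales, can be illustrated with a small sketch. The code below is not the authors' implementation (their method trains per-scale generative networks; see the code released with the paper); it is a minimal PyTorch approximation of the coarse-to-fine patch-mapping idea using nearest-neighbour patch replacement, in which coarse scales fix the structural arrangement of A and finer scales pull appearance from B. All function names and parameter values are illustrative assumptions.

# Illustrative sketch only: coarse-to-fine patch mapping between a
# "structure" image A and an "appearance" image B (assumed names).
# At each scale, every patch of the current estimate is replaced by its
# nearest (L2) patch from B at that scale, so coarse scales keep A's
# arrangement while finer scales inject B's appearance.
import torch
import torch.nn.functional as F


def extract_patches(img, size, stride):
    """All size x size patches of img (1, C, H, W), flattened to (N, C*size*size)."""
    patches = F.unfold(img, kernel_size=size, stride=stride)  # (1, C*size*size, N)
    return patches.squeeze(0).t()


def replace_with_nearest_patches(x, ref, size=7, stride=3):
    """Replace every patch of x by its nearest patch from ref, then fold the
    patches back, averaging overlaps; uncovered border pixels keep x."""
    px = extract_patches(x, size, stride)                # (Nx, D)
    pr = extract_patches(ref, size, stride)              # (Nr, D)
    nearest = pr[torch.cdist(px, pr).argmin(dim=1)]      # (Nx, D) matched ref patches
    out = F.fold(nearest.t().unsqueeze(0), output_size=x.shape[-2:],
                 kernel_size=size, stride=stride)
    counts = F.fold(F.unfold(torch.ones_like(x), kernel_size=size, stride=stride),
                    output_size=x.shape[-2:], kernel_size=size, stride=stride)
    return torch.where(counts > 0, out / counts.clamp(min=1.0), x)


def structural_analogy_sketch(a, b, num_scales=5, min_size=32):
    """Coarse-to-fine loop: start from a downsampled A (structure only) and,
    scale by scale, pull appearance from B via nearest-neighbour patches."""
    h, w = a.shape[-2:]
    base = min_size / min(h, w)  # assumes images are at least min_size pixels
    scales = [base ** (1 - i / (num_scales - 1)) for i in range(num_scales)]
    out = None
    for s in scales:
        size = (max(int(round(h * s)), 1), max(int(round(w * s)), 1))
        a_s = F.interpolate(a, size=size, mode='bilinear', align_corners=False)
        b_s = F.interpolate(b, size=size, mode='bilinear', align_corners=False)
        if out is None:
            out = a_s                                 # coarsest scale: pure structure of A
        else:
            out = F.interpolate(out, size=size, mode='bilinear', align_corners=False)
            out = 0.5 * out + 0.5 * a_s               # re-anchor to A's arrangement
        out = replace_with_nearest_patches(out, b_s)  # inject B's appearance
    return out


if __name__ == "__main__":
    # Stand-in tensors; in practice load image A (structure) and image B (appearance).
    a = torch.rand(1, 3, 128, 128)
    b = torch.rand(1, 3, 128, 128)
    print(structural_analogy_sketch(a, b).shape)  # torch.Size([1, 3, 128, 128])

In this toy version, the patch size and scale schedule play the role of the granularity control mentioned in the abstract: larger patches and coarser scales preserve more of A's layout, while smaller patches at finer scales transfer more of B's texture.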
BibTeX
@article {10.1111:cgf.14186,
journal = {Computer Graphics Forum},
title = {{Structural Analogy from a Single Image Pair}},
author = {Benaim, S. and Mokady, R. and Bermano, A. and Wolf, L.},
year = {2021},
publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14186}
}