CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks
Abstract
We propose a framework to create projectively-correct and seam-free cube-map images using generative adversarial learning. Deep generation of cube-maps that contain the correct projection of the environment onto their faces is not straightforward, as has been recognized in prior work. Our approach extends an existing framework, StyleGAN3, to produce cube-maps instead of planar images. In addition to reshaping the output, we include a cube-specific volumetric initialization component, a projective resampling component, and a modification of augmentation operations to the spherical domain. Our results demonstrate the network's generation capabilities when trained on imagery from various 3D environments. Additionally, we show the power and quality of our GAN design in an inversion task, combined with navigation capabilities, to perform novel view synthesis.
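To make the "projectively-correct" requirement concrete: each cube face parameterizes one sixth of the sphere, and any per-face operation (such as the projective resampling the abstract mentions) must respect the mapping from face-local coordinates to directions on the unit sphere. The following is a minimal, hedged sketch of that standard cube-map parameterization (the usual OpenGL-style face convention); it is illustrative and not the paper's implementation, and the function and face names are our own.

```python
import numpy as np

# Standard cube-map convention (assumption: OpenGL-style face layout):
# each face maps (u, v) in [-1, 1]^2 to a direction on the unit sphere.
# Any projectively-correct resampling between faces must go through
# such a face-to-direction mapping.
FACE_AXES = {
    "+x": lambda u, v: ( 1.0,   -v,   -u),
    "-x": lambda u, v: (-1.0,   -v,    u),
    "+y": lambda u, v: (  u,   1.0,    v),
    "-y": lambda u, v: (  u,  -1.0,   -v),
    "+z": lambda u, v: (  u,    -v,  1.0),
    "-z": lambda u, v: ( -u,    -v, -1.0),
}

def face_uv_to_direction(face, u, v):
    """Map face-local coordinates (u, v) in [-1, 1] to a unit direction."""
    d = np.array(FACE_AXES[face](u, v), dtype=np.float64)
    return d / np.linalg.norm(d)
```

For example, the center of the `+z` face maps to the direction `(0, 0, 1)`; sampling a spherical signal (or applying a spherical augmentation) at these directions rather than in flat per-face pixel space is what avoids seams at face boundaries.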
BibTeX
@article{10.1111:cgf.14755,
journal = {Computer Graphics Forum},
title = {{CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks}},
author = {May, Christopher and Aliaga, Daniel},
year = {2023},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14755}
}