Unsupervised Learning of Disentangled 3D Representation from a Single Image
Date
2021

Abstract
Learning a 3D representation from a single image is challenging because of the ambiguity, occlusion, and perspective projection of an object in an image. Previous works either rely on image annotations or 3D supervision to learn meaningful factors of an object, or employ a StyleGAN-like framework for image synthesis. While the former require tedious annotation or even dense geometry ground truth, the latter usually cannot guarantee shape consistency across images rendered from different views. In this paper, we combine the advantages of both frameworks and propose an image disentanglement method based on a 3D representation. Results show our method enables unsupervised 3D representation learning while preserving consistency between images.
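To make the idea concrete, below is a minimal sketch, assuming a typical PyTorch setup, of the kind of disentangle-and-reconstruct pipeline the abstract describes: an encoder splits an image into separate shape, texture, and viewpoint codes, and the image is re-synthesized from those codes under a self-supervised reconstruction loss. The module names, code dimensions, and the simple decoder used as a stand-in for a differentiable renderer are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only; not the method from the poster.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Encodes an image into separate shape, texture, and viewpoint codes."""
    def __init__(self, shape_dim=64, texture_dim=64, view_dim=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads keep the factors in distinct code vectors.
        self.shape_head = nn.Linear(64, shape_dim)
        self.texture_head = nn.Linear(64, texture_dim)
        self.view_head = nn.Linear(64, view_dim)  # e.g. rotation + translation

    def forward(self, img):
        feat = self.backbone(img)
        return self.shape_head(feat), self.texture_head(feat), self.view_head(feat)

class Reconstructor(nn.Module):
    """Stand-in for a differentiable renderer: maps the codes back to an image."""
    def __init__(self, shape_dim=64, texture_dim=64, view_dim=6, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(shape_dim + texture_dim + view_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * size * size), nn.Sigmoid(),
        )

    def forward(self, shape, texture, view):
        out = self.net(torch.cat([shape, texture, view], dim=1))
        return out.view(-1, 3, self.size, self.size)

encoder, reconstructor = DisentangledEncoder(), Reconstructor()
img = torch.rand(4, 3, 64, 64)             # a batch of unlabeled images
shape, texture, view = encoder(img)
recon = reconstructor(shape, texture, view)
loss = nn.functional.mse_loss(recon, img)  # self-supervised reconstruction loss
loss.backward()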
BibTeX
@inproceedings{10.2312:egp.20211030,
booktitle = {Eurographics 2021 - Posters},
editor = {Bittner, Jiří and Waldner, Manuela},
title = {{Unsupervised Learning of Disentangled 3D Representation from a Single Image}},
author = {Lv, Junliang and Jiang, Haiyong and Xiao, Jun},
year = {2021},
publisher = {The Eurographics Association},
ISSN = {1017-4656},
ISBN = {978-3-03868-134-2},
DOI = {10.2312/egp.20211030}
}