Show simple item record

dc.contributor.author: Lv, Junliang [en_US]
dc.contributor.author: Jiang, Haiyong [en_US]
dc.contributor.author: Xiao, Jun [en_US]
dc.contributor.editor: Bittner, Jiří and Waldner, Manuela [en_US]
dc.date.accessioned: 2021-04-09T19:18:48Z
dc.date.available: 2021-04-09T19:18:48Z
dc.date.issued: 2021
dc.identifier.isbn: 978-3-03868-134-2
dc.identifier.issn: 1017-4656
dc.identifier.uri: https://doi.org/10.2312/egp.20211030
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egp20211030
dc.description.abstract: Learning a 3D representation from a single image is challenging given the ambiguity, occlusion, and perspective projection of an object in an image. Previous works either rely on image annotations or 3D supervision to learn meaningful factors of an object, or employ a StyleGAN-like framework for image synthesis. The former require tedious annotation and even dense geometry ground truth, while the latter usually cannot guarantee that shapes stay consistent between images of different views. In this paper, we combine the advantages of both frameworks and propose an image disentanglement method based on a 3D representation. Results show that our method enables unsupervised 3D representation learning while preserving consistency between images. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: Computing methodologies
dc.subject: Image representations
dc.subject: Reconstruction
dc.subject: Mesh models
dc.title: Unsupervised Learning of Disentangled 3D Representation from a Single Image [en_US]
dc.description.seriesinformation: Eurographics 2021 - Posters
dc.description.sectionheaders: Posters
dc.identifier.doi: 10.2312/egp.20211030
dc.identifier.pages: 11-12
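The abstract describes disentangling a single image into 3D factors that are recombined by a renderer. The poster itself gives no code, so the following is only an illustrative pipeline skeleton, not the authors' implementation: the factor names (shape, viewpoint, texture), all dimensions, and the stand-in random projections used in place of learned networks are assumptions made for this sketch.

```python
# Illustrative skeleton (NOT the paper's method): encode an image into three
# disentangled factor codes -- shape, viewpoint, texture -- then recombine
# them with a stand-in "renderer". Real systems would use a learned CNN
# encoder and a differentiable renderer; here fixed random projections only
# demonstrate the data flow and the factor split.
import numpy as np

rng = np.random.default_rng(0)

def encode(image, d_shape=64, d_view=6, d_tex=32):
    """Split a (hypothetical) encoder output into three factor codes."""
    flat = image.reshape(-1)
    # Stand-in for a learned encoder: a fixed random projection.
    w = rng.standard_normal((d_shape + d_view + d_tex, flat.size))
    z = w @ flat
    return z[:d_shape], z[d_shape:d_shape + d_view], z[d_shape + d_view:]

def render(z_shape, z_view, z_tex, hw=(32, 32)):
    """Stand-in for a renderer that recombines the factors into an image."""
    z = np.concatenate([z_shape, z_view, z_tex])
    w = rng.standard_normal((hw[0] * hw[1], z.size))
    return (w @ z).reshape(hw)

image = rng.standard_normal((32, 32))
z_s, z_v, z_t = encode(image)
recon = render(z_s, z_v, z_t)
print(z_s.shape, z_v.shape, z_t.shape, recon.shape)
```

Because the viewpoint code is separate from the shape code, re-rendering with a modified viewpoint while holding the shape code fixed is what would enforce the cross-view shape consistency the abstract mentions.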

