The Shape Variational Autoencoder: A Deep Generative Model of Part-segmented 3D Objects
Abstract
We introduce a generative model of part-segmented 3D objects: the shape variational autoencoder (ShapeVAE). The ShapeVAE describes a joint distribution over the existence of object parts, the locations of a dense set of surface points, and the surface normals associated with these points. Our model makes use of a deep encoder-decoder architecture that leverages the part-decomposability of 3D objects to embed high-dimensional shape representations and sample novel instances. Given an input collection of part-segmented objects with dense point correspondences, the ShapeVAE is capable of synthesizing novel, realistic shapes, and by performing conditional inference it enables imputation of missing parts or surface normals. In addition, by generating both points and surface normals, our model allows for the use of powerful surface-reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape-completion and test-set log-likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples and that it captures a semantically meaningful shape embedding. In addition, we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.
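To make the encoder-decoder idea in the abstract concrete, the following is a minimal VAE sketch in PyTorch. It is not the authors' ShapeVAE architecture: the flattened point-plus-normal input, the layer sizes, and the ToyShapeVAE / vae_loss names are illustrative assumptions only.

import torch
import torch.nn as nn

class ToyShapeVAE(nn.Module):
    # Hypothetical sketch: a shape is flattened to a single vector of point
    # coordinates and normals; the real ShapeVAE is part-structured.
    def __init__(self, input_dim=6000, hidden_dim=512, latent_dim=64):
        super().__init__()
        # Encoder maps the flattened shape to a Gaussian posterior over the latent code.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample back to points and normals.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Toy usage: a batch of 8 "shapes", each flattened to 1000 points x (xyz + normal).
x = torch.randn(8, 6000)
model = ToyShapeVAE()
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)

Sampling novel shapes in this sketch amounts to decoding draws from the standard normal prior; conditional inference over missing parts, as described in the abstract, would require the part-structured model from the paper.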
BibTeX
@article {10.1111:cgf.13240,
journal = {Computer Graphics Forum},
title = {{The Shape Variational Autoencoder: A Deep Generative Model of Part-segmented 3D Objects}},
author = {Nash, Charlie and Williams, Chris K. I.},
year = {2017},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.13240}
}