PERGAMO: Personalized 3D Garments from Monocular Video
Date
2022

Abstract
Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: a high computational run-time cost, which hinders their deployment, and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learning a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and that it generalizes to unseen body motions extracted from a motion capture dataset.
BibTeX
@article{10.1111:cgf.14644,
journal = {Computer Graphics Forum},
title = {{PERGAMO: Personalized 3D Garments from Monocular Video}},
author = {Casado-Elvira, Andrés and Comino Trinidad, Marc and Casas, Dan},
year = {2022},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14644}
}