dc.contributor.author | Habermann, Marc | en_US |
dc.contributor.author | Liu, Lingjie | en_US |
dc.contributor.author | Xu, Weipeng | en_US |
dc.contributor.author | Pons-Moll, Gerard | en_US |
dc.contributor.author | Zollhoefer, Michael | en_US |
dc.contributor.author | Theobalt, Christian | en_US |
dc.contributor.editor | Wang, Huamin | en_US |
dc.contributor.editor | Ye, Yuting | en_US |
dc.contributor.editor | Zordan, Victor | en_US |
dc.date.accessioned | 2023-10-16T12:32:59Z | |
dc.date.available | 2023-10-16T12:32:59Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 2577-6193 | |
dc.identifier.uri | https://doi.org/10.1145/3606927 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1145/3606927 | |
dc.description.abstract | Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel view synthesis, generalization to novel motions, reproduction of loose clothing, or they cannot render characters at the high resolution offered by modern displays. To this end, we propose HDHumans, which is the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images of arbitrary novel views and of motions not seen at training time. At the technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF). Our method is carefully designed to achieve a synergy between classical surface deformation and a NeRF. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic and articulated character and even enables the synthesis of novel motions. Second, we also leverage the dense point clouds resulting from the NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in terms of synthesis quality and resolution, as well as the quality of 3D surface reconstruction. | en_US |
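The abstract above outlines the two directions of the hybrid coupling: the deforming template guides the NeRF, and a dense point cloud extracted from the NeRF supervises the deforming surface in 3D. The following minimal Python sketch (not the authors' released code) illustrates one plausible form of such 3D-to-3D supervision, a symmetric Chamfer distance between a NeRF-derived point cloud and the deformed template vertices; the function and variable names and the plain NumPy formulation are illustrative assumptions.

# Minimal sketch of 3D-to-3D supervision as a symmetric Chamfer distance.
# Not the HDHumans implementation; names and NumPy formulation are assumptions.
import numpy as np

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = points_a[:, None, :] - points_b[None, :, :]
    dist_sq = np.sum(diff ** 2, axis=-1)
    # Mean squared distance from each point to its nearest neighbour in the other set.
    a_to_b = dist_sq.min(axis=1).mean()
    b_to_a = dist_sq.min(axis=0).mean()
    return float(a_to_b + b_to_a)

if __name__ == "__main__":
    # Hypothetical stand-ins: a NeRF-derived point cloud and deformed template vertices.
    rng = np.random.default_rng(0)
    nerf_points = rng.normal(size=(2048, 3))
    template_vertices = nerf_points[:1024] + rng.normal(scale=0.01, size=(1024, 3))
    print(f"Chamfer distance: {chamfer_distance(nerf_points, template_vertices):.6f}")

Minimizing such a distance with respect to the template's deformation parameters would pull the deformed surface toward the NeRF geometry, which is the role the abstract ascribes to the 3D-to-3D supervision.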
dc.publisher | ACM Association for Computing Machinery | en_US |
dc.subject | CCS Concepts: Computing methodologies -> Computer vision; Rendering; human synthesis, neural synthesis, human modeling, human performance capture | |
dc.subject | Computing methodologies | |
dc.subject | Computer vision | |
dc.subject | Rendering | |
dc.subject | human synthesis | |
dc.subject | neural synthesis | |
dc.subject | human modeling | |
dc.subject | human performance capture | |
dc.title | HDHumans: A Hybrid Approach for High-fidelity Digital Humans | en_US |
dc.description.seriesinformation | Proceedings of the ACM on Computer Graphics and Interactive Techniques | |
dc.description.sectionheaders | Character Synthesis | |
dc.description.volume | 6 | |
dc.description.number | 3 | |
dc.identifier.doi | 10.1145/3606927 | |