Show simple item record

dc.contributor.author    Yang, Dongseok    en_US
dc.contributor.author    Kang, Jiho    en_US
dc.contributor.author    Ma, Lingni    en_US
dc.contributor.author    Greer, Joseph    en_US
dc.contributor.author    Ye, Yuting    en_US
dc.contributor.author    Lee, Sung-Hee    en_US
dc.contributor.editor    Bermano, Amit H.    en_US
dc.contributor.editor    Kalogerakis, Evangelos    en_US
dc.date.accessioned    2024-04-16T14:43:26Z
dc.date.available    2024-04-16T14:43:26Z
dc.date.issued    2024
dc.identifier.issn    1467-8659
dc.identifier.uri    https://doi.org/10.1111/cgf.15057
dc.identifier.uri    https://diglib.eg.org:443/handle/10.1111/cgf15057
dc.description.abstract    Full-body avatar presence is important for immersive social and environmental interactions in digital reality. However, current devices provide only three six-degree-of-freedom (6DOF) poses, from the headset and two controllers (i.e., three-point trackers). Because the problem is highly under-constrained, inferring full-body pose from these inputs is challenging, especially when supporting the full range of body proportions and use cases represented by the general population. In this paper, we propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities. We augment the sparse three-point inputs with linear accelerations from inertial measurement units (IMUs) to improve foot contact prediction. We then condition the otherwise ambiguous lower-body pose on the predicted foot contacts and upper-body pose in a two-stage model. We further stabilize the inferred full-body pose across a wide range of configurations by learning to blend predictions computed in two reference frames, each designed for a different type of motion. We demonstrate the effectiveness of our design on a large dataset of 22 subjects performing locomotion that is challenging for three-point tracking, including lunges, hula-hooping, and sitting. As shown in a live demo using a Meta VR headset and Xsens IMUs, our method runs in real time while accurately tracking a user's motion across a diverse set of movements.    en_US
dc.publisher    The Eurographics Association and John Wiley & Sons Ltd.    en_US
dc.rights    Attribution 4.0 International License
dc.rights.uri    https://creativecommons.org/licenses/by/4.0/
dc.subject    CCS Concepts: Computing methodologies -> Motion capture
dc.subject    Computing methodologies
dc.subject    Motion capture
dc.title    DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers    en_US
dc.description.seriesinformation    Computer Graphics Forum
dc.description.sectionheaders    Camera Paths and Motion Tracking
dc.description.volume    43
dc.description.number    2
dc.identifier.doi    10.1111/cgf.15057
dc.identifier.pages    13 pages
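
The abstract describes a two-stage architecture: stage one predicts foot contacts and the upper-body pose from the three 6DOF trackers plus IMU linear accelerations, stage two predicts the lower-body pose conditioned on those outputs, and the final pose blends predictions computed in two reference frames. Below is a minimal PyTorch sketch of that idea. It is not the authors' implementation: all module names (TwoStagePoser, DualFrameBlender), feature dimensions, and the choice of plain MLPs with a learned sigmoid blending gate are illustrative assumptions.

    import torch
    import torch.nn as nn

    TRACKER_DIM = 3 * 9   # assumed: 3 trackers x (3D position + 6D rotation)
    ACCEL_DIM = 3 * 3     # assumed: linear accelerations from 3 IMUs
    UPPER_DIM = 10 * 6    # assumed: upper-body joints in 6D rotation form
    LOWER_DIM = 9 * 6     # assumed: lower-body joints in 6D rotation form

    def mlp(n_in, n_out, hidden=256):
        return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                             nn.Linear(hidden, hidden), nn.ReLU(),
                             nn.Linear(hidden, n_out))

    class TwoStagePoser(nn.Module):
        # Stage 1 predicts foot contacts and the upper-body pose from the
        # three 6DOF trackers plus IMU linear accelerations; stage 2 predicts
        # the lower-body pose conditioned on those stage-1 outputs.
        def __init__(self):
            super().__init__()
            self.stage1 = mlp(TRACKER_DIM + ACCEL_DIM, 2 + UPPER_DIM)
            self.stage2 = mlp(TRACKER_DIM + 2 + UPPER_DIM, LOWER_DIM)

        def forward(self, trackers, accel):
            s1 = self.stage1(torch.cat([trackers, accel], dim=-1))
            contacts = torch.sigmoid(s1[..., :2])   # left/right foot contact
            upper = s1[..., 2:]                     # upper-body joint rotations
            lower = self.stage2(torch.cat([trackers, contacts, upper], dim=-1))
            return contacts, upper, lower

    class DualFrameBlender(nn.Module):
        # Runs a poser on inputs expressed in two different reference frames
        # and blends the two full-body predictions with a learned weight.
        def __init__(self):
            super().__init__()
            self.poser_a = TwoStagePoser()  # e.g. a heading-aligned frame
            self.poser_b = TwoStagePoser()  # e.g. a world-aligned frame
            self.gate = mlp(2 * TRACKER_DIM + ACCEL_DIM, 1, hidden=64)

        def forward(self, trackers_a, trackers_b, accel):
            _, upper_a, lower_a = self.poser_a(trackers_a, accel)
            _, upper_b, lower_b = self.poser_b(trackers_b, accel)
            w = torch.sigmoid(
                self.gate(torch.cat([trackers_a, trackers_b, accel], dim=-1)))
            upper = w * upper_a + (1.0 - w) * upper_b
            lower = w * lower_a + (1.0 - w) * lower_b
            return torch.cat([upper, lower], dim=-1)  # full-body pose vector

    # Example: one batch element of random inputs.
    model = DualFrameBlender()
    pose = model(torch.randn(1, TRACKER_DIM), torch.randn(1, TRACKER_DIM),
                 torch.randn(1, ACCEL_DIM))
    print(pose.shape)  # torch.Size([1, 114])

Conditioning stage two on the stage-one outputs is what resolves the ambiguity noted in the abstract: many lower-body poses are consistent with the same three-point input, but far fewer are consistent once foot contacts and the upper body are fixed.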

