dc.contributor.author | Milef, Nicholas | en_US |
dc.contributor.author | Sueda, Shinjiro | en_US |
dc.contributor.author | Kalantari, Nima Khademi | en_US |
dc.contributor.editor | Myszkowski, Karol | en_US |
dc.contributor.editor | Niessner, Matthias | en_US |
dc.date.accessioned | 2023-05-03T06:10:46Z | |
dc.date.available | 2023-05-03T06:10:46Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14767 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14767 | |
dc.description.abstract | We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper body tracking data, obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real time, and is able to produce temporally coherent and realistic motions. | en_US |
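As an illustration only (not the authors' released code), the following Python sketch shows one way the dynamic sample selection described in the abstract could work: an anomaly score is computed for the current generated pose, and when it exceeds a statistically defined threshold, candidate samples are drawn from the decoder and the output is blended smoothly toward the best one. The `decode` callable, the 32-dimensional latent, the scoring rule, and the blend weight are assumptions made for this sketch.

```python
import numpy as np

# Hypothetical sketch of anomaly-driven sample selection; decode() stands in
# for a CVAE-GRU decoder mapping (latent, tracking condition) -> full-body pose.
# The score, threshold rule, and blending schedule are illustrative assumptions.

def anomaly_score(pose, mean, std):
    # Simple statistical score: average normalized deviation from training statistics.
    return float(np.mean(np.abs((pose - mean) / (std + 1e-8))))

def select_pose(decode, cond, current_pose, mean, std, threshold,
                num_candidates=8, blend=0.25, rng=np.random.default_rng()):
    """Return the pose to display this frame.

    If the current pose looks anomalous, draw several candidate samples from
    the decoder, keep the least anomalous one, and blend toward it so the
    transition stays smooth rather than snapping to a new sample.
    """
    if anomaly_score(current_pose, mean, std) <= threshold:
        return current_pose  # pose is plausible, keep it

    # Draw candidate poses from the latent prior (assumed 32-D) and score them.
    candidates = [decode(rng.standard_normal(32), cond) for _ in range(num_candidates)]
    best = min(candidates, key=lambda p: anomaly_score(p, mean, std))

    # Ease toward the better sample instead of replacing the pose outright.
    return (1.0 - blend) * current_pose + blend * best
```

In practice the transition would be spread over several frames; the single blend step above only stands in for the interpolation strategy the abstract describes.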
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Neural networks; Motion processing; Virtual reality | |
dc.subject | Computing methodologies | |
dc.subject | Neural networks | |
dc.subject | Motion processing | |
dc.subject | Virtual reality | |
dc.title | Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Capturing Human Pose and Appearance | |
dc.description.volume | 42 | |
dc.description.number | 2 | |
dc.identifier.doi | 10.1111/cgf.14767 | |
dc.identifier.pages | 359-369 | |
dc.identifier.pages | 11 pages | |