Show simple item record

dc.contributor.author	Ponton, Jose Luis	en_US
dc.contributor.author	Yun, Haoran	en_US
dc.contributor.author	Andujar, Carlos	en_US
dc.contributor.author	Pelechano, Nuria	en_US
dc.contributor.editor	Dominik L. Michels	en_US
dc.contributor.editor	Soeren Pirk	en_US
dc.date.accessioned	2022-08-10T15:19:19Z
dc.date.available	2022-08-10T15:19:19Z
dc.date.issued	2022
dc.identifier.issn	1467-8659
dc.identifier.uri	https://doi.org/10.1111/cgf.14628
dc.identifier.uri	https://diglib.eg.org:443/handle/10.1111/cgf14628
dc.description.abstract	The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Self-avatar animation in immersive VR helps improve the user experience and provides a Sense of Embodiment. However, consumer-grade VR devices typically include at most three trackers: one at the Head-Mounted Display (HMD) and two at the handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games consists of assuming the body orientation matches that of the HMD, and applying animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animate user avatars that is suitable for current mainstream VR devices. First, we use a neural network to estimate the user's body orientation based on the tracking information from the HMD and the hand controllers. Then we use this orientation together with the velocity and rotation of the HMD to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users' avatars. Our results show that our system can provide a large variety of lower body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction.	en_US
dc.publisher	The Eurographics Association and John Wiley & Sons Ltd.	en_US
dc.subject	CCS Concepts: Human-centered computing --> User models; Computing methodologies --> Motion capture; Virtual reality
dc.subject	Human centered computing
dc.subject	User models
dc.subject	Computing methodologies
dc.subject	Motion capture
dc.subject	Virtual reality
dc.title	Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices	en_US
dc.description.seriesinformation	Computer Graphics Forum
dc.description.sectionheaders	Motion I
dc.description.volume	41
dc.description.number	8
dc.identifier.doi	10.1111/cgf.14628
dc.identifier.pages	107-118
dc.identifier.pages	12 pages
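
The abstract describes a two-stage runtime pipeline: a neural network first predicts the user's body orientation from the HMD and hand-controller tracking, and that prediction is then combined with the HMD's velocity and rotation into a feature vector used to query a Motion Matching database. The sketch below illustrates only the generic query step of such a system under assumed conventions; the feature layout, dimensions, weights, and function names (build_query_feature, motion_matching_query) are hypothetical and are not taken from the paper.

    # Minimal, illustrative sketch of a Motion Matching query driven by sparse
    # VR tracking data. All names, the feature layout, and the weights are
    # assumptions for illustration; the paper's actual features and database differ.
    import numpy as np

    # Hypothetical per-frame feature layout:
    #   [0:2]  predicted body orientation (2D forward direction, unit vector)
    #   [2:5]  HMD linear velocity (m/s, in the character's local frame)
    #   [5:6]  HMD yaw angular velocity (rad/s)
    FEATURE_DIM = 6

    def build_query_feature(pred_body_dir, hmd_velocity, hmd_yaw_rate):
        """Assemble the runtime query vector from the orientation predicted by a
        neural network and the current HMD motion (hypothetical layout)."""
        return np.concatenate([
            np.asarray(pred_body_dir, dtype=np.float32),   # 2D unit vector
            np.asarray(hmd_velocity, dtype=np.float32),    # 3D velocity
            np.asarray([hmd_yaw_rate], dtype=np.float32),  # scalar yaw rate
        ])

    def motion_matching_query(database_features, query, weights=None):
        """Brute-force nearest-neighbour search over a precomputed MoCap feature
        database. Returns the index of the best-matching animation frame."""
        if weights is None:
            weights = np.ones(FEATURE_DIM, dtype=np.float32)
        # Scale each feature difference, then take the squared norm per frame.
        diff = (database_features - query) * weights
        cost = np.einsum('ij,ij->i', diff, diff)
        return int(np.argmin(cost))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in for features precomputed from a MoCap database of VR users.
        database = rng.standard_normal((10_000, FEATURE_DIM)).astype(np.float32)

        query = build_query_feature(
            pred_body_dir=[0.0, 1.0],        # facing "forward"
            hmd_velocity=[0.1, 0.0, 1.2],    # roughly forward walking speed
            hmd_yaw_rate=0.05,               # slight turn
        )
        print("Best-matching frame index:", motion_matching_query(database, query))

A brute-force weighted nearest-neighbour search is used here purely for clarity; Motion Matching systems in practice typically rely on acceleration structures and run the query only every few frames.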



This item appears in the following Collection(s)

  • 41-Issue 8
    ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2022
