
dc.contributor.author: Fechteler, P.
dc.contributor.author: Hilsmann, A.
dc.contributor.author: Eisert, P.
dc.contributor.editor: Chen, Min and Benes, Bedrich
dc.date.accessioned: 2019-09-27T14:11:22Z
dc.date.available: 2019-09-27T14:11:22Z
dc.date.issued: 2019
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.13608
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13608
dc.description.abstract: In this paper, we address simultaneous markerless motion and shape capture from 3D input meshes of partial views onto a moving subject. We exploit a computer graphics model based on kinematic skinning as template tracking model. This template model consists of vertices, joints and skinning weights learned a priori from registered full‐body scans, representing true human shape and kinematics‐based shape deformations. Two data‐driven priors are used together with a set of constraints and cues for setting up sufficient correspondences. A Gaussian mixture model‐based pose prior of successive joint configurations is learned to soft‐constrain the attainable pose space to plausible human poses. To make the shape adaptation robust to outliers and non‐visible surface regions and to guide the shape adaptation towards realistically appearing human shapes, we use a mesh‐Laplacian‐based shape prior. Both priors are learned/extracted from the training set of the template model learning phase. The output is a model adapted to the captured subject with respect to shape and kinematic skeleton as well as the animation parameters to resemble the observed movements. With example applications, we demonstrate the benefit of such footage. Experimental evaluations on publicly available datasets show the achieved natural appearance and accuracy.
dc.publisher: © 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd
dc.subject: computer vision ‐ tracking
dc.subject: methods and applications
dc.subject: geometric modelling
dc.subject: modelling
dc.subject: motion capture
dc.subject: animation
dc.subject: • Computing methodologies → Motion capture; Shape analysis
dc.title: Markerless Multiview Motion Capture with 3D Shape Model Adaptation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Articles
dc.description.volume: 38
dc.description.number: 6
dc.identifier.doi: 10.1111/cgf.13608
dc.identifier.pages: 91-109

