dc.contributor.author | Lorenzo, M.S. | en_US |
dc.contributor.author | Edge, J.D. | en_US |
dc.contributor.author | King, S.A. | en_US |
dc.contributor.author | Maddock, S. | en_US |
dc.contributor.editor | Peter Hall and Philip Willis | en_US |
dc.date.accessioned | 2016-02-09T10:27:06Z | |
dc.date.available | 2016-02-09T10:27:06Z | |
dc.date.issued | 2003 | en_US |
dc.identifier.isbn | 3-905673-54-1 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/vvg.20031019 | en_US |
dc.description.abstract | Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid body transformations, but also soft body motion such as the facial movement of an actor. The inherent difficulties of working with facial mocap lie in the application of a discrete sampling of surface points to animate a fine discontinuous mesh. Furthermore, in the general case, where the morphology of the actor's face does not coincide with that of the model we wish to animate, some form of retargeting must be applied. In this paper we discuss methods to animate face meshes from mocap data with minimal user intervention using a surface-oriented deformation paradigm. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | I.3.7 [Computer Graphics] | en_US |
dc.subject | Animation | en_US |
dc.title | Use and Re-use of Facial Motion Capture Data | en_US |
dc.description.seriesinformation | Vision, Video, and Graphics (VVG) 2003 | en_US |
dc.description.sectionheaders | Faces | en_US |
dc.identifier.doi | 10.2312/vvg.20031019 | en_US |