
dc.contributor.author    Ma, Xiaohan    en_US
dc.contributor.author    Le, Binh Huy    en_US
dc.contributor.author    Deng, Zhigang    en_US
dc.contributor.editor    Eitan Grinspun and Jessica Hodgins    en_US
dc.date.accessioned    2016-02-18T11:50:48Z
dc.date.available    2016-02-18T11:50:48Z
dc.date.issued    2009    en_US
dc.identifier.isbn    978-1-60558-610-6    en_US
dc.identifier.issn    1727-5288    en_US
dc.identifier.uri    http://dx.doi.org/10.1145/1599470.1599486    en_US
dc.description.abstract    Most current facial animation editing techniques are frame-based (i.e., one keyframe is manually edited every several frames), which is ineffective, time-consuming, and prone to editing inconsistency. In this paper, we present a novel facial editing style learning framework that learns a constraint-based Gaussian Process model from a small number of facial-editing pairs; the learned model can then be applied to automate the editing of the remaining facial animation frames or to transfer editing styles between different animation sequences. Compared with the state-of-the-art multiresolution-based mesh sequence editing technique, our approach is more flexible, powerful, and adaptive. It can dramatically reduce the manual effort required by most current facial animation editing approaches.    en_US
dc.publisher    ACM SIGGRAPH / Eurographics Association    en_US
dc.subject    Computer Graphics [I.3.7]    en_US
dc.subject    Three Dimensional Graphics and Realism    en_US
dc.subject    Animation    en_US
dc.subject    Artificial Intelligence [I.2.6]    en_US
dc.subject    Learning    en_US
dc.subject    Analogies    en_US
dc.title    Style Learning and Transferring for Facial Animation Editing    en_US
dc.description.seriesinformation    Eurographics/ACM SIGGRAPH Symposium on Computer Animation    en_US
dc.description.sectionheaders    Editing in Style    en_US
dc.identifier.doi    10.1145/1599470.1599486    en_US
dc.identifier.pages    123-132    en_US
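
The abstract above describes learning from a small number of facial-editing pairs and then propagating the learned edits to the remaining frames of a sequence. The following is a minimal, generic sketch of that idea using plain Gaussian Process regression from scikit-learn; the array shapes, kernel choice, and displacement-based formulation are illustrative assumptions and do not reproduce the paper's constraint-based GP model.

# Minimal sketch: learn a mapping from original facial frames to user edits
# from a few example pairs, then apply it to the remaining frames.
# NOTE: generic GP regression only; not the paper's constraint-based formulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: each frame is a vector of facial parameters
# (e.g., blendshape weights or stacked vertex coordinates).
rng = np.random.default_rng(0)
n_frames, n_params = 200, 30
frames = rng.normal(size=(n_frames, n_params))      # full animation sequence
edited_idx = [0, 25, 50, 75, 100]                   # frames the artist edited
edits = frames[edited_idx] + rng.normal(scale=0.1,  # artist-edited versions
                                        size=(len(edited_idx), n_params))

# Learn a per-frame mapping: original frame -> edit displacement.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(frames[edited_idx], edits - frames[edited_idx])

# Propagate the learned editing style to all remaining frames.
edited_sequence = frames + gp.predict(frames)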

