dc.contributor.author | Ma, Xiaohan | en_US |
dc.contributor.author | Le, Binh Huy | en_US |
dc.contributor.author | Deng, Zhigang | en_US |
dc.contributor.editor | Eitan Grinspun and Jessica Hodgins | en_US |
dc.date.accessioned | 2016-02-18T11:50:48Z | |
dc.date.available | 2016-02-18T11:50:48Z | |
dc.date.issued | 2009 | en_US |
dc.identifier.isbn | 978-1-60558-610-6 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1145/1599470.1599486 | en_US |
dc.description.abstract | Most current facial animation editing techniques are frame-based approaches (i.e., an animator manually edits one keyframe every several frames), which are inefficient, time-consuming, and prone to editing inconsistencies. In this paper, we present a novel facial-editing style learning framework that learns a constraint-based Gaussian Process model from a small number of facial-editing pairs; the learned model can then be effectively applied to automate the editing of the remaining facial animation frames or to transfer editing styles between different animation sequences. Compared with the state-of-the-art multiresolution-based mesh sequence editing technique, our approach is more flexible, powerful, and adaptive. It can dramatically reduce the manual effort required by most current facial animation editing approaches. | en_US |
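The abstract's core idea — learn a mapping from a few (original, edited) frame pairs and apply it to the remaining frames — can be illustrated with a minimal Gaussian Process regression sketch. This is not the paper's constraint-based GP formulation; it is a toy illustration using scikit-learn's `GaussianProcessRegressor`, with synthetic pose-parameter data standing in for real facial animation frames (all array shapes and the linear "editing style" are assumptions for demonstration only).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical toy data: each row is a facial-pose parameter vector.
# A handful of (original, edited) pairs stand in for the animator's keyframe edits.
rng = np.random.default_rng(0)
originals = rng.uniform(-1.0, 1.0, size=(8, 4))   # 8 edited keyframes, 4 parameters each
edits = originals * 1.2 + 0.1                     # synthetic "editing style" (assumed)

# Fit a GP regressor mapping original poses to their edited counterparts.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(originals, edits)

# Apply the learned style to the remaining, unedited frames.
remaining = rng.uniform(-1.0, 1.0, size=(20, 4))
predicted_edits = gp.predict(remaining)
print(predicted_edits.shape)  # one edited pose vector per remaining frame
```

With a near-zero noise level (`alpha`), the GP interpolates the example pairs almost exactly, so frames close to an edited keyframe receive a nearly identical edit, while in-between frames receive smoothly blended edits.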
dc.publisher | ACM SIGGRAPH / Eurographics Association | en_US |
dc.subject | Computer Graphics [I.3.7] | en_US |
dc.subject | Three Dimensional Graphics and Realism | en_US |
dc.subject | Animation | en_US |
dc.subject | Artificial Intelligence [I.2.6] | en_US |
dc.subject | Learning | en_US |
dc.subject | Analogies | en_US |
dc.title | Style Learning and Transferring for Facial Animation Editing | en_US |
dc.description.seriesinformation | Eurographics/ ACM SIGGRAPH Symposium on Computer Animation | en_US |
dc.description.sectionheaders | Editing in Style | en_US |
dc.identifier.doi | 10.1145/1599470.1599486 | en_US |
dc.identifier.pages | 123-132 | en_US |