Style Learning and Transferring for Facial Animation Editing
Abstract
Most current facial animation editing techniques are frame-based approaches (i.e., manually editing one keyframe every several frames), which is inefficient, time-consuming, and prone to editing inconsistencies. In this paper, we present a novel facial editing style learning framework that learns a constraint-based Gaussian Process model from a small number of facial-editing pairs; the learned model can then be applied to automate the editing of the remaining facial animation frames or to transfer editing styles between different animation sequences. Compared with the state-of-the-art multiresolution mesh sequence editing technique, our approach is more flexible, powerful, and adaptive, and it can dramatically reduce the manual effort required by most current facial animation editing approaches.
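The core idea of regressing edits from a few user-edited keyframes can be illustrated with plain Gaussian Process regression. The sketch below is not the paper's constraint-based model; it is a minimal, generic GP regressor (RBF kernel, posterior mean only) on toy 1-D data, where a handful of "edited keyframes" are used to predict the edit at an unedited frame. All names and data here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    # Squared-exponential kernel between the row vectors of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    # Standard GP regression posterior mean:
    #   mean = K(X*, X) [K(X, X) + noise*I]^{-1} y
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy data: a 1-D frame parameter -> editing offset, learned from 4
# hand-edited keyframes (here a sine curve stands in for real edits).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X).ravel()

# Predict the edit for an in-between, unedited frame.
pred = gp_predict(X, y, np.array([[1.5]]))
```

With near-zero observation noise the posterior mean interpolates the training keyframes exactly, which matches the editing use case: frames the artist already fixed stay fixed, and in-between frames receive smoothly inferred edits.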
BibTeX
@inproceedings{10.1145:1599470.1599486,
booktitle = {Eurographics/ ACM SIGGRAPH Symposium on Computer Animation},
editor = {Eitan Grinspun and Jessica Hodgins},
title = {{Style Learning and Transferring for Facial Animation Editing}},
author = {Ma, Xiaohan and Le, Binh Huy and Deng, Zhigang},
year = {2009},
publisher = {ACM SIGGRAPH / Eurographics Association},
ISSN = {1727-5288},
ISBN = {978-1-60558-610-6},
DOI = {10.1145/1599470.1599486}
}