Dynamic, Expressive Speech Animation From a Single Mesh
Date
2007
Authors
Wampler, Kevin
Sasaki, Daichi
Zhang, Li
Popovic, Zoran
Abstract
In this work we present a method for human face animation that generates animations for a novel person from just a single mesh of their face. These animations can be of arbitrary text and may include emotional expressions. We build a multilinear model from data which encapsulates the variation in dynamic face motions over changes in identity, in expression, and across different texts. We then describe a synthesis method, consisting of a phoneme-planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics for a novel text and an emotion specified at each point in time.
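The abstract's multilinear model can be pictured as a core tensor whose modes correspond to identity, expression, and phoneme, with a face shape synthesized by contracting each mode against a weight vector. The sketch below is only an illustration of that general Tucker-style construction, not the authors' implementation; all dimensions, weight vectors, and the `mode_n_product` helper are hypothetical.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Contract `tensor` with `matrix` along axis `mode` (mode-n product)."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

# Hypothetical core tensor: vertex coords x identities x expressions x phonemes.
rng = np.random.default_rng(0)
core = rng.random((30, 4, 3, 5))

# Per-mode weight vectors blend identities, pick an expression, blend phonemes.
w_id   = np.array([[0.7, 0.3, 0.0, 0.0]])       # blend of two identities
w_expr = np.array([[0.0, 1.0, 0.0]])            # a single expression
w_phon = np.array([[0.2, 0.8, 0.0, 0.0, 0.0]])  # blend of two phoneme shapes

face = mode_n_product(core, w_id, 1)
face = mode_n_product(face, w_expr, 2)
face = mode_n_product(face, w_phon, 3)
face = face.reshape(-1)  # synthesized vertex coordinates
print(face.shape)  # (30,)
```

Varying the phoneme and expression weights over time, as the paper's planning and blending stages do, would trace out an animation for the fixed identity weights.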
BibTeX
@inproceedings{10.2312:SCA:SCA07:053-062,
  booktitle = {Eurographics/SIGGRAPH Symposium on Computer Animation},
  editor    = {Dimitris Metaxas and Jovan Popovic},
  title     = {{Dynamic, Expressive Speech Animation From a Single Mesh}},
  author    = {Wampler, Kevin and Sasaki, Daichi and Zhang, Li and Popovic, Zoran},
  year      = {2007},
  publisher = {The Eurographics Association},
  ISSN      = {1727-5288},
  ISBN      = {978-3-905673-44-9},
  DOI       = {10.2312/SCA/SCA07/053-062}
}