dc.contributor.author | Chang, Yao-Jen | en_US |
dc.contributor.author | Ezzat, Tony | en_US |
dc.contributor.editor | D. Terzopoulos and V. Zordan and K. Anjyo and P. Faloutsos | en_US |
dc.date.accessioned | 2014-01-29T07:12:27Z | |
dc.date.available | 2014-01-29T07:12:27Z | |
dc.date.issued | 2005 | en_US |
dc.identifier.isbn | 1-59593-198-8 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/SCA/SCA05/143-152 | en_US |
dc.description.abstract | Image-based videorealistic speech animation achieves significant visual realism at the cost of collecting a large 5- to 10-minute video corpus from the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus for a specific person under a controlled recording setup may not be easily obtained. In this paper, we propose a model transfer and adaptation algorithm which allows a novel person to be animated using only a small video corpus. The algorithm starts with a multidimensional morphable model (MMM) previously trained from a different speaker with a large corpus, and transfers it to the novel speaker with a much smaller corpus. The algorithm consists of 1) a novel matching-by-synthesis algorithm which semi-automatically selects new MMM prototype images from the new video corpus and 2) a novel gradient descent linear regression algorithm which adapts the MMM phoneme models to the data in the novel video corpus. Encouraging experimental results are presented in which a morphable model trained from a performer with a 10-minute corpus is transferred to a novel person using a 15-second movie clip of him as the adaptation video corpus. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation | en_US |
dc.title | Transferable Videorealistic Speech Animation | en_US |
dc.description.seriesinformation | Symposium on Computer Animation | en_US |
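The abstract describes the adaptation step only at a high level (a "gradient descent linear regression algorithm which adapts the MMM phoneme models"). As a rough, hedged illustration of that idea, the sketch below fits a single linear map from source-speaker phoneme parameters to the few target-speaker observations by batch gradient descent on squared error; the function name, shapes, and least-squares objective are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def adapt_phoneme_means(src_means, tgt_means, lr=1e-2, iters=500):
    """Hypothetical sketch: learn y ~= W x + b mapping source-speaker MMM
    phoneme parameters (src_means) to the novel speaker's parameters
    (tgt_means), both of shape (num_phonemes, dim), via gradient descent
    on mean squared error. Illustrative only."""
    n, d = src_means.shape
    W = np.eye(d)                     # start from the identity map
    b = np.zeros(d)
    for _ in range(iters):
        pred = src_means @ W.T + b    # predicted target-speaker parameters
        err = pred - tgt_means        # residual on the small adaptation corpus
        grad_W = err.T @ src_means / n
        grad_b = err.mean(axis=0)
        W -= lr * grad_W              # gradient descent update of the linear map
        b -= lr * grad_b
    return W, b

# Usage sketch: map every source phoneme model through the adapted transform.
# W, b = adapt_phoneme_means(src, tgt)
# adapted = src @ W.T + b
```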