dc.contributor.author: Kim, Ig-Jae [en_US]
dc.contributor.author: Ko, Hyeong-Seok [en_US]
dc.date.accessioned: 2015-02-21T15:40:58Z
dc.date.available: 2015-02-21T15:40:58Z
dc.date.issued: 2007 [en_US]
dc.identifier.issn: 1467-8659 [en_US]
dc.identifier.uri: http://dx.doi.org/10.1111/j.1467-8659.2007.01051.x [en_US]
dc.description.abstract: This paper proposes a new technique for generating three-dimensional speech animation. The proposed technique takes advantage of both data-driven and machine learning approaches. It seeks to utilize the most relevant part of the captured utterances for the synthesis of input phoneme sequences. If highly relevant data are missing or lacking, then it utilizes less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation. This hybrid approach produces results that are more faithful to real data than conventional machine learning approaches, while being better able to handle incompleteness or redundancy in the database than conventional data-driven approaches. Experimental results, obtained by applying the proposed technique to the utterance of various words and phrases, show that (1) the proposed technique generates lip-synchs of different qualities depending on the availability of the data, and (2) the new technique produces more realistic results than conventional machine learning approaches. [en_US]
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd [en_US]
dc.title: 3D Lip-Synch Generation with Data-Faithful Machine Learning [en_US]
dc.description.seriesinformation: Computer Graphics Forum [en_US]
dc.description.volume: 26 [en_US]
dc.description.number: 3 [en_US]
dc.identifier.doi: 10.1111/j.1467-8659.2007.01051.x [en_US]
dc.identifier.pages: 295-301 [en_US]
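The fallback strategy described in the abstract can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the capture database, the relevance test (length of the matching phoneme context), the stand-in learned model, and the 0.5 blend weight are all assumptions made for the example.

# Hypothetical sketch of the hybrid data-driven / machine-learning fallback
# described in the abstract. All names and values here (DATABASE,
# learned_prediction, the 0.5 blend weight) are illustrative assumptions.

# Toy capture database: phoneme context -> captured lip-opening value for
# the center phoneme. Longer contexts are more specific, hence more relevant.
DATABASE = {
    ("ah", "b", "ah"): 0.82,  # triphone capture (highly relevant)
    ("b",): 0.70,             # monophone capture (less relevant, abundant)
    ("ah",): 0.21,
}


def learned_prediction(phoneme):
    """Stand-in for the machine-learned lip-shape model."""
    return {"ah": 0.25, "b": 0.75}.get(phoneme, 0.50)


def synthesize(phonemes):
    """Return one lip-opening value per phoneme, preferring captured data."""
    out = []
    for i, p in enumerate(phonemes):
        context = tuple(phonemes[max(0, i - 1):i + 2])  # up to a triphone
        model = learned_prediction(p)
        if context in DATABASE:
            # Highly relevant capture found: stay faithful to the data.
            out.append(DATABASE[context])
        elif (p,) in DATABASE:
            # Only less relevant data: lean more heavily on the learned model.
            out.append(0.5 * DATABASE[(p,)] + 0.5 * model)
        else:
            # No captured data at all: pure machine-learning prediction.
            out.append(model)
    return out


print(synthesize(["ah", "b", "ah"]))  # triphone hit for the middle "b"
print(synthesize(["b", "ee"]))        # monophone fallback, then model only

The key design point the abstract makes is visible in the branching: the synthesizer degrades gracefully, shifting weight from captured data toward the learned model as the available data become less relevant, rather than failing on database gaps.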

