| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Kim, Ig-Jae | en_US |
| dc.contributor.author | Ko, Hyeong-Seok | en_US |
| dc.date.accessioned | 2015-02-21T15:40:58Z | |
| dc.date.available | 2015-02-21T15:40:58Z | |
| dc.date.issued | 2007 | en_US |
| dc.identifier.issn | 1467-8659 | en_US |
| dc.identifier.uri | http://dx.doi.org/10.1111/j.1467-8659.2007.01051.x | en_US |
| dc.description.abstract | This paper proposes a new technique for generating three-dimensional speech animation. The proposed technique combines the strengths of data-driven and machine learning approaches: it uses the most relevant portion of the captured utterances to synthesize the input phoneme sequence, and when highly relevant data are missing or scarce, it falls back on less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation. This hybrid approach produces results that are more faithful to real data than conventional machine learning approaches, while handling incompleteness or redundancy in the database better than conventional data-driven approaches. Experimental results, obtained by applying the proposed technique to the utterance of various words and phrases, show that (1) the technique generates lip-synchs of different qualities depending on the availability of the data, and (2) it produces more realistic results than conventional machine learning approaches. | en_US |
| dc.publisher | The Eurographics Association and Blackwell Publishing Ltd | en_US |
| dc.title | 3D Lip-Synch Generation with Data-Faithful Machine Learning | en_US |
| dc.description.seriesinformation | Computer Graphics Forum | en_US |
| dc.description.volume | 26 | en_US |
| dc.description.number | 3 | en_US |
| dc.identifier.doi | 10.1111/j.1467-8659.2007.01051.x | en_US |
| dc.identifier.pages | 295-301 | en_US |