dc.contributor.author | Lazalde, Oscar M. Martinez | en_US |
dc.contributor.author | Maddock, Steve | en_US |
dc.contributor.editor | John Collomosse and Ian Grimstead | en_US |
dc.date.accessioned | 2014-01-31T20:12:00Z | |
dc.date.available | 2014-01-31T20:12:00Z | |
dc.date.issued | 2010 | en_US |
dc.identifier.isbn | 978-3-905673-75-3 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/LocalChapterEvents/TPCG/TPCG10/199-206 | en_US |
dc.description.abstract | A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where visemes are the visual counterpart of phonemes. A single viseme typically represents a group of phonemes that are visually similar. Often these visemes are based on the static poses used in producing a phoneme. In this paper we investigate alternative representations for visemes, produced using motion-captured data, in conjunction with a constraint-based approach for visual speech production. We show that using visemes which incorporate more contextual information produces better results than using static pose visemes. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Comparison of Different Types of Visemes using a Constraint-based Coarticulation Model | en_US |
dc.description.seriesinformation | Theory and Practice of Computer Graphics | en_US |