dc.contributor.author | Charalambous, Constantinos | en_US |
dc.contributor.author | Yumak, Zerrin | en_US |
dc.contributor.author | Stappen, A. Frank van der | en_US |
dc.contributor.editor | Jain, Eakta and Kosinka, Jiří | en_US |
dc.date.accessioned | 2018-04-14T18:29:56Z | |
dc.date.available | 2018-04-14T18:29:56Z | |
dc.date.issued | 2018 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.uri | http://dx.doi.org/10.2312/egp.20181019 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egp20181019 | |
dc.description.abstract | We propose a procedural audio-driven speech animation method that takes into account emotional variations in speech. Given any audio with its corresponding speech transcript, the method generates speech animation for any 3D character. The expressive speech model matches the pitch and intensity variations in the audio to individual visemes. In addition, we introduce a dynamic co-articulation model that takes into account linguistic rules that vary among emotions. We test our approach against two popular speech animation tools and show in a perceptual experiment that our method surpasses them. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Audio-driven Emotional Speech Animation | en_US |
dc.description.seriesinformation | EG 2018 - Posters | |
dc.description.sectionheaders | Posters | |
dc.identifier.doi | 10.2312/egp.20181019 | |
dc.identifier.pages | 23-24 | |