Show simple item record

dc.contributor.author: Charalambous, Constantinos
dc.contributor.author: Yumak, Zerrin
dc.contributor.author: Stappen, A. Frank van der
dc.contributor.editor: Jain, Eakta and Kosinka, Jiří
dc.date.accessioned: 2018-04-14T18:29:56Z
dc.date.available: 2018-04-14T18:29:56Z
dc.date.issued: 2018
dc.identifier.issn: 1017-4656
dc.identifier.uri: http://dx.doi.org/10.2312/egp.20181019
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egp20181019
dc.description.abstract: We propose a procedural, audio-driven speech animation method that accounts for emotional variations in speech. Given any audio clip with its corresponding transcript, the method generates speech animation for any 3D character. The expressive speech model matches pitch and intensity variations in the audio to individual visemes. In addition, we introduce a dynamic co-articulation model that incorporates linguistic rules that vary across emotions. We test our approach against two popular speech animation tools and show in a perceptual experiment that our method surpasses them.
dc.publisher: The Eurographics Association
dc.title: Audio-driven Emotional Speech Animation
dc.description.seriesinformation: EG 2018 - Posters
dc.description.sectionheaders: Posters
dc.identifier.doi: 10.2312/egp.20181019
dc.identifier.pages: 23-24

