dc.contributor.author | Marsella, Stacy | en_US |
dc.contributor.author | Xu, Yuyu | en_US |
dc.contributor.author | Lhommet, Margaux | en_US |
dc.contributor.author | Feng, Andrew | en_US |
dc.contributor.author | Scherer, Stefan | en_US |
dc.contributor.author | Shapiro, Ari | en_US |
dc.contributor.editor | Theodore Kim and Robert Sumner | en_US |
dc.date.accessioned | 2016-02-18T12:01:20Z | |
dc.date.available | 2016-02-18T12:01:20Z | |
dc.date.issued | 2013 | en_US |
dc.identifier.isbn | 978-1-4503-2132-7 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1145/2485895.2485900 | en_US |
dc.description.abstract | We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we detect stress and pitch, relate them to the spoken words, and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic, and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks, and gazes. Our technique synthesizes the performance and generates novel gesture animations through coarticulation with other closely scheduled animations. Because our method uses semantics in addition to prosody, it produces virtual character performances that are more appropriate than those generated from prosody alone, and a user study confirms that it outperforms prosody-only methods. | en_US |
dc.publisher | ACM SIGGRAPH / Eurographics Association | en_US |
dc.subject | CR Categories | en_US |
dc.subject | I.3.7 [Computer Graphics] | en_US |
dc.subject | Three Dimensional Graphics and Realism | en_US |
dc.subject | Animation | en_US |
dc.subject | I.6.8 [Simulation and Modeling] | en_US |
dc.subject | Types of Simulation | en_US |
dc.subject | Animation | en_US |
dc.subject | Keywords | en_US |
dc.subject | animation | en_US |
dc.subject | gestures | en_US |
dc.subject | behavior | en_US |
dc.subject | conversational agent | en_US |
dc.title | Virtual Character Performance From Speech | en_US |
dc.description.seriesinformation | Eurographics / ACM SIGGRAPH Symposium on Computer Animation | en_US |
dc.description.sectionheaders | Animating the Human Body | en_US |
dc.identifier.doi | 10.1145/2485895.2485900 | en_US |
dc.identifier.pages | 25-36 | en_US |