
dc.contributor.author: Miyawaki, Ryosuke
dc.contributor.author: Perusquia-Hernandez, Monica
dc.contributor.author: Isoyama, Naoya
dc.contributor.author: Uchiyama, Hideaki
dc.contributor.author: Kiyokawa, Kiyoshi
dc.contributor.editor: Hideaki Uchiyama
dc.contributor.editor: Jean-Marie Normand
dc.date.accessioned: 2022-11-29T07:25:17Z
dc.date.available: 2022-11-29T07:25:17Z
dc.date.issued: 2022
dc.identifier.isbn: 978-3-03868-179-3
dc.identifier.issn: 1727-530X
dc.identifier.uri: https://doi.org/10.2312/egve.20221273
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egve20221273
dc.description.abstract: Knowing the relationship between speech-related facial movement and speech is important for avatar animation. Accurate facial displays are necessary to fully convey perceptual speech characteristics. Recently, efforts have been made to infer the relationship between facial movement and speech with data-driven methodologies using computer vision. To this aim, we propose to use blendshape-based facial movement tracking, because it can be easily translated to avatar movement. Furthermore, we present a protocol for audio-visual and behavioral data collection, and a web-based tool that aids in collecting and synchronizing data. As a starting point, we provide a database of six Japanese participants reading emotion-related scripts at different volume levels. Using this methodology, we found a relationship between speech volume and facial movement around the nose, cheek, mouth, and head pitch. We hope that our protocols, web-based tool, and collected data will be useful for other scientists to derive models for avatar animation.
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Human-centered computing -> Visualization toolkits
dc.subject: Human centered computing
dc.subject: Visualization toolkits
dc.title: A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation
dc.description.seriesinformation: ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.description.sectionheaders: Interaction
dc.identifier.doi: 10.2312/egve.20221273
dc.identifier.pages: 27-34
dc.identifier.pages: 8 pages
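
The abstract above describes relating per-frame speech volume to blendshape-tracked facial movement. The sketch below illustrates how such a correlation analysis could look in Python with NumPy; it is not the authors' code, and the array shapes, the 52-channel blendshape count (an ARKit-style convention), the sample rate, and all names are assumptions for illustration only.

    # Minimal sketch: correlate per-frame speech volume (RMS) with
    # blendshape weights. Synthetic stand-ins replace real tracked data.
    import numpy as np

    def frame_rms(audio: np.ndarray, sr: int, fps: float, n_frames: int) -> np.ndarray:
        """RMS volume of the audio slice aligned with each video frame."""
        hop = int(sr / fps)  # audio samples per video frame
        vols = np.empty(n_frames)
        for i in range(n_frames):
            chunk = audio[i * hop:(i + 1) * hop]
            vols[i] = np.sqrt(np.mean(chunk ** 2)) if chunk.size else 0.0
        return vols

    # Stand-ins: 300 frames of 52 blendshape weights (assumed count) and
    # 10 s of mono audio at 16 kHz, for 30 fps video.
    rng = np.random.default_rng(0)
    weights = rng.random((300, 52))            # frames x blendshape channels, in [0, 1)
    audio = rng.standard_normal(16000 * 10)    # placeholder for recorded speech

    vols = frame_rms(audio, sr=16000, fps=30.0, n_frames=weights.shape[0])

    # Pearson correlation between volume and each blendshape channel;
    # channels with high |r| would be candidates such as mouth or cheek movement.
    r = np.array([np.corrcoef(vols, weights[:, k])[0, 1]
                  for k in range(weights.shape[1])])
    print("strongest channels:", np.argsort(-np.abs(r))[:5])

With real data, the synchronized audio and blendshape streams collected by the web-based tool would replace the synthetic arrays; the paper itself reports the resulting relationships around the nose, cheek, mouth, and head pitch.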

