
dc.contributor.author: Saunders, Jack R.
dc.contributor.author: Namboodiri, Vinay P.
dc.contributor.editor: Hu, Ruizhen
dc.contributor.editor: Charalambous, Panayiotis
dc.date.accessioned: 2024-04-16T15:38:59Z
dc.date.available: 2024-04-16T15:38:59Z
dc.date.issued: 2024
dc.identifier.isbn: 978-3-03868-237-0
dc.identifier.issn: 1017-4656
dc.identifier.uri: https://doi.org/10.2312/egs.20241017
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egs20241017
dc.description.abstract: The ability to accurately capture and express emotions is a critical aspect of creating believable characters in video games and other forms of entertainment. Traditionally, such animation has been achieved through artistic effort or performance capture, both of which are costly in time and labor. More recently, audio-driven models have seen success; however, these often lack expressiveness in areas not correlated with the audio signal. In this paper, we present a novel approach to facial animation that takes existing animations and allows their style characteristics to be modified. The method maintains the lip-sync of the animations thanks to a novel viseme-preserving loss. We perform quantitative and qualitative experiments to demonstrate the effectiveness of our work.
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Animation; Machine learning
dc.title: FACTS: Facial Animation Creation using the Transfer of Styles
dc.description.seriesinformation: Eurographics 2024 - Short Papers
dc.description.sectionheaders: Human Simulation
dc.identifier.doi: 10.2312/egs.20241017
dc.identifier.pages: 4 pages
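The abstract's core idea — editing the style of an existing facial animation while a viseme-preserving loss keeps the lip-sync intact — can be sketched in a minimal, hypothetical form. The paper's actual formulation is not given in this record; the lip-vertex indices, array shapes, and plain mean-squared-error penalty below are illustrative assumptions only.

```python
import numpy as np

# Assumed indices of lip-region vertices on the face mesh (hypothetical).
LIP_VERTEX_IDS = np.array([10, 11, 12, 13, 14])

def viseme_preserving_loss(original, stylized, lip_ids=LIP_VERTEX_IDS):
    """Mean squared error over lip-region vertices across all frames.

    original, stylized: float arrays of shape (frames, vertices, 3).
    A low value means the style edit left the mouth motion (and hence
    the visemes driving lip-sync) largely unchanged.
    """
    diff = original[:, lip_ids, :] - stylized[:, lip_ids, :]
    return float(np.mean(diff ** 2))

# Usage: a style edit confined to non-lip vertices incurs zero penalty.
anim = np.zeros((8, 20, 3))          # 8 frames, 20 vertices, xyz
styled = anim.copy()
styled[:, 15:, :] += 0.5             # edit style outside the lip region
print(viseme_preserving_loss(anim, styled))  # 0.0 — lip-sync untouched
```

In a training loop, a term like this would be weighted against a style objective, so the model is free to change expression elsewhere on the face but pays a cost for disturbing mouth shapes.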