Show simple item record

dc.contributor.author  Serra, J.  en_US
dc.contributor.author  Cetinaslan, O.  en_US
dc.contributor.author  Ravikumar, S.  en_US
dc.contributor.author  Orvalho, V.  en_US
dc.contributor.author  Cosker, D.  en_US
dc.contributor.editor  Chen, Min and Benes, Bedrich  en_US
dc.date.accessioned  2018-04-05T12:48:37Z
dc.date.available  2018-04-05T12:48:37Z
dc.date.issued  2018
dc.identifier.issn  1467-8659
dc.identifier.uri  http://dx.doi.org/10.1111/cgf.13218
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1111/cgf13218
dc.description.abstract  Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by optimizing for the desired graph compression instead. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide an intuitive control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.  en_US
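The abstract describes a motion-graph pipeline: poses are merged into shared nodes under a Euclidean similarity threshold, synthesis traverses the graph with Dijkstra's algorithm, and coherent noise swaps path nodes with graph neighbours. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation; the pose data, function names, and the simple frame-sequence graph construction are all illustrative assumptions.

```python
import heapq
import math
import random


def euclidean(a, b):
    # Euclidean distance between two landmark pose vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def build_motion_graph(poses, threshold):
    """Merge poses closer than `threshold` into shared nodes and link
    temporally consecutive nodes with distance-weighted edges."""
    nodes = []     # one representative pose per graph node
    sequence = []  # node index assigned to each original frame
    for pose in poses:
        for i, rep in enumerate(nodes):
            if euclidean(pose, rep) < threshold:
                sequence.append(i)   # common pose: reuse existing node
                break
        else:
            nodes.append(pose)       # novel pose: create a new node
            sequence.append(len(nodes) - 1)
    edges = {i: {} for i in range(len(nodes))}
    for a, b in zip(sequence, sequence[1:]):
        if a != b:
            w = euclidean(nodes[a], nodes[b])
            edges[a][b] = w
            edges[b][a] = w
    return nodes, edges


def dijkstra(edges, start, goal):
    # Standard Dijkstra shortest path over the motion graph
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]


def perturb(path, edges, rng=random):
    # Coherent noise, loosely sketched: replace one interior path node
    # with another graph neighbour of that node, if one exists
    if len(path) < 3:
        return path
    i = rng.randrange(1, len(path) - 1)
    options = [v for v in edges[path[i]] if v not in (path[i - 1], path[i + 1])]
    if options:
        path = path[:i] + [rng.choice(options)] + path[i + 1:]
    return path
```

In this sketch the merge threshold is fixed by hand; the paper instead derives it by optimizing for a desired graph compression ratio, and builds separate graphs per facial region.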
dc.publisher  © 2018 The Eurographics Association and John Wiley & Sons Ltd.  en_US
dc.subject  facial animation
dc.subject  procedural animation
dc.subject  motion synthesis
dc.subject  motion blending
dc.subject  motion graphs
dc.subject  I.3.7 [Computer Graphics]: Three‐Dimensional Graphics and Realism—Animation
dc.subject  I.3.8 [Computer Graphics]: Applications
dc.title  Easy Generation of Facial Animation Using Motion Graphs  en_US
dc.description.seriesinformation  Computer Graphics Forum
dc.description.sectionheaders  Articles
dc.description.volume  37
dc.description.number  1
dc.identifier.doi  10.1111/cgf.13218
dc.identifier.pages  97-111

