Show simple item record

dc.contributor.author: Mason, Ian (en_US)
dc.contributor.author: Starke, Sebastian (en_US)
dc.contributor.author: Zhang, He (en_US)
dc.contributor.author: Bilen, Hakan (en_US)
dc.contributor.author: Komura, Taku (en_US)
dc.contributor.editor: Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes (en_US)
dc.date.accessioned: 2018-10-07T14:58:47Z
dc.date.available: 2018-10-07T14:58:47Z
dc.date.issued: 2018
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.13555
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13555
dc.description.abstract: Using neural networks to learn motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn, and their compactness once trained. Despite these advantages, such systems require large amounts of motion capture data for each new character or style of motion to be generated, and must undergo lengthy retraining, and often reengineering, to achieve acceptable results. This can make their use impractical for animators and designers, and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in styles different from those on which the original network was trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the number of parameters required for learning each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions from a few clips of motion data and synthesizing smooth motions in real-time. (en_US)
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.subject: Computing methodologies
dc.subject: Animation
dc.subject: Neural networks
dc.subject: Motion capture
dc.title: Few-shot Learning of Homogeneous Human Locomotion Styles (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Animation
dc.description.volume: 37
dc.description.number: 7
dc.identifier.doi: 10.1111/cgf.13555
dc.identifier.pages: 143-153
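
The abstract above centers on using a canonical polyadic (CP) decomposition so that each new motion style only adds a small set of trainable parameters on top of a shared, pretrained controller. The following is a minimal, hypothetical sketch of that idea only, not the authors' code: the layer sizes, the rank R, and the function names are illustrative assumptions.

    # Hypothetical sketch of per-style weights via a CP decomposition
    # (illustrative only; shapes, rank and names are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 342, 311   # assumed layer sizes for a PFNN-like controller
    R = 30                   # CP rank: the per-style parameter budget

    # Shared factor matrices, learned once from the base motion data.
    U = rng.standard_normal((n_out, R)) * 0.01   # output-side factors
    V = rng.standard_normal((n_in, R)) * 0.01    # input-side factors

    def style_weights(s):
        """Compose a full weight matrix from the shared factors U, V and a
        per-style coefficient vector s of length R, i.e.
        W = sum_r s[r] * outer(U[:, r], V[:, r])."""
        return (U * s) @ V.T

    # Adapting to a new style then means fitting only R coefficients rather
    # than n_out * n_in weights, which is why a few short clips can suffice.
    s_new_style = rng.standard_normal(R)
    W = style_weights(s_new_style)
    print(W.shape, "full weights;", R, "trainable per-style parameters")

The point of the sketch is the parameter count: the shared factors are reused across styles, so the memory and data requirements quoted in the abstract scale with the rank R per style rather than with the full weight matrix.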


This item appears in the following Collection(s)

  • 37-Issue 7
    Pacific Graphics 2018 - Symposium Proceedings
