Show simple item record

dc.contributor.author   Pan, Junjun   en_US
dc.contributor.author   Wang, Siyuan   en_US
dc.contributor.author   Bai, Junxuan   en_US
dc.contributor.author   Dai, Ju   en_US
dc.contributor.editor   Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan   en_US
dc.date.accessioned   2021-10-14T11:11:19Z
dc.date.available   2021-10-14T11:11:19Z
dc.date.issued   2021
dc.identifier.issn   1467-8659
dc.identifier.uri   https://doi.org/10.1111/cgf.14402
dc.identifier.uri   https://diglib.eg.org:443/handle/10.1111/cgf14402
dc.description.abstract   Existing keyframe-based motion synthesis mainly focuses on the generation of cyclic actions or short-term motion, such as walking, running, and transitions between close postures. However, these methods significantly degrade the naturalness and diversity of the synthesized motion when dealing with complex and impromptu movements, e.g., dance performances and martial arts. In addition, current research lacks fine-grained control over the generated motion, which is essential for intelligent human-computer interaction and animation creation. In this paper, we propose a novel keyframe-based motion generation network based on multiple constraints, which can achieve diverse dance synthesis via learned knowledge. Specifically, the algorithm is built on the recurrent neural network (RNN) and Transformer architectures. The backbone of our network is a hierarchical RNN module composed of two long short-term memory (LSTM) units, in which the first LSTM embeds the posture information of the historical frames into a latent space, and the second predicts the human posture for the next frame. Moreover, our framework contains two Transformer-based controllers, which model the constraints of the root trajectory and the velocity factor, respectively, so as to better exploit the temporal context of the frames and achieve fine-grained motion control. We verify the proposed approach on a dance dataset covering a wide range of contemporary dance. The results of three quantitative analyses validate the superiority of our algorithm. The video and qualitative experimental results demonstrate that the complex motion sequences generated by our algorithm achieve diverse and smooth motion transitions between keyframes, even for long-term synthesis.   en_US
dc.publisher   The Eurographics Association and John Wiley & Sons Ltd.   en_US
dc.subject   Computing methodologies
dc.subject   Motion processing
dc.subject   Motion capture
dc.title   Diverse Dance Synthesis via Keyframes with Transformer Controllers   en_US
dc.description.seriesinformation   Computer Graphics Forum
dc.description.sectionheaders   Animation
dc.description.volume   40
dc.description.number   7
dc.identifier.doi   10.1111/cgf.14402
dc.identifier.pages   71-83
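
The abstract above describes the overall network layout: a hierarchical RNN backbone whose first LSTM embeds the posture history into a latent space and whose second LSTM predicts the next-frame posture, steered by two Transformer-based controllers for the root-trajectory and velocity constraints. Below is a minimal PyTorch sketch of how such a pipeline might be wired together; all class and parameter names, the tensor dimensions, and the concatenation-based fusion step are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DanceSynthesisSketch(nn.Module):
    """Hypothetical sketch of the architecture outlined in the abstract:
    a hierarchical RNN backbone (two LSTMs) plus two Transformer-encoder
    controllers for the root-trajectory and velocity constraints.
    Dimensions and the fusion strategy are assumptions for illustration."""

    def __init__(self, pose_dim=63, latent_dim=256, ctrl_dim=64, n_heads=4):
        super().__init__()
        # First LSTM: embeds the posture history into a latent space.
        self.history_encoder = nn.LSTM(pose_dim, latent_dim, batch_first=True)
        # Per-frame projections of the control signals into the controller width.
        self.traj_proj = nn.Linear(3, ctrl_dim)  # root trajectory, e.g. (x, y, z) per frame
        self.vel_proj = nn.Linear(1, ctrl_dim)   # scalar velocity factor per frame
        # Transformer controllers: model each constraint over the temporal context.
        traj_layer = nn.TransformerEncoderLayer(d_model=ctrl_dim, nhead=n_heads, batch_first=True)
        vel_layer = nn.TransformerEncoderLayer(d_model=ctrl_dim, nhead=n_heads, batch_first=True)
        self.traj_controller = nn.TransformerEncoder(traj_layer, num_layers=2)
        self.vel_controller = nn.TransformerEncoder(vel_layer, num_layers=2)
        # Second LSTM: predicts the next-frame posture from the fused signals.
        self.decoder = nn.LSTM(latent_dim + 2 * ctrl_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, pose_dim)

    def forward(self, pose_history, root_traj, velocity):
        # pose_history: (B, T, pose_dim); root_traj: (B, T, 3); velocity: (B, T, 1)
        h, _ = self.history_encoder(pose_history)                 # (B, T, latent_dim)
        c_traj = self.traj_controller(self.traj_proj(root_traj))  # (B, T, ctrl_dim)
        c_vel = self.vel_controller(self.vel_proj(velocity))      # (B, T, ctrl_dim)
        # Simple concatenation fusion (assumed), then decode the next pose.
        fused = torch.cat([h, c_traj, c_vel], dim=-1)
        d, _ = self.decoder(fused)
        return self.out(d[:, -1])                                 # next-frame pose: (B, pose_dim)

A quick shape check, with made-up sizes (2 clips, 30 history frames, a 63-D pose vector):

model = DanceSynthesisSketch()
poses = torch.randn(2, 30, 63)
traj = torch.randn(2, 30, 3)
vel = torch.rand(2, 30, 1)
next_pose = model(poses, traj, vel)  # -> torch.Size([2, 63])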


This item appears in the following Collection(s)

  • 40-Issue 7
    Pacific Graphics 2021 - Symposium Proceedings
