dc.contributor.author: Park, Soomin (en_US)
dc.contributor.author: Jang, Deok-Kyeong (en_US)
dc.contributor.author: Lee, Sung-Hee (en_US)
dc.contributor.editor: Narain, Rahul and Neff, Michael and Zordan, Victor (en_US)
dc.date.accessioned: 2022-02-07T13:32:34Z
dc.date.available: 2022-02-07T13:32:34Z
dc.date.issued: 2021
dc.identifier.issn: 2577-6193
dc.identifier.uri: https://doi.org/10.1145/3480145
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1145/3480145
dc.description.abstract: This paper presents a novel deep learning-based framework for translating a motion into various styles within multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels to support mapping between multiple style domains. We construct a spatio-temporal graph to model a motion sequence and employ spatial-temporal graph convolutional networks (ST-GCN) to extract stylistic properties along the spatial and temporal dimensions. Through this spatial-temporal modeling, our framework shows improved style translation results between significantly different actions and on long motion sequences containing multiple actions. In addition, we are the first to develop a mapping network for motion stylization that maps random noise to a style, which allows diverse stylization results to be generated without using reference motions. Through various experiments, we demonstrate the ability of our method to generate improved results in terms of visual quality, stylistic diversity, and content preservation. (en_US)
dc.publisher: ACM (en_US)
dc.subject: Computing methodologies
dc.subject: Motion processing
dc.subject: Neural networks
dc.subject: motion synthesis
dc.subject: generative model
dc.subject: graph convolutional networks
dc.subject: character animation
dc.subject: deep learning
dc.title: Diverse Motion Stylization for Multiple Style Domains via Spatial-Temporal Graph-Based Generative Model (en_US)
dc.description.seriesinformation: Proceedings of the ACM on Computer Graphics and Interactive Techniques
dc.description.sectionheaders: papers
dc.description.volume: 4
dc.description.number: 3
dc.identifier.doi: 10.1145/3480145
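The abstract above describes a mapping network that turns random noise and a target style-domain label into a style code, enabling reference-free, diverse stylization. The following is a minimal, hypothetical PyTorch-style sketch of that idea only; the class name, layer sizes, and the per-domain output heads are illustrative assumptions and are not the authors' implementation.

# Hypothetical sketch (not the paper's code): map random noise + a target
# style-domain label to a style code, as described in the abstract.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a latent noise vector z to a style code for a chosen style domain."""
    def __init__(self, latent_dim=16, style_dim=64, num_domains=4, hidden=512):
        super().__init__()
        # Shared trunk processes the noise vector.
        self.shared = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output head per style domain; the domain label selects the head.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, style_dim) for _ in range(num_domains)]
        )

    def forward(self, z, domain):
        h = self.shared(z)                                            # (B, hidden)
        out = torch.stack([head(h) for head in self.heads], dim=1)    # (B, D, style_dim)
        idx = domain.view(-1, 1, 1).expand(-1, 1, out.size(-1))       # (B, 1, style_dim)
        return out.gather(1, idx).squeeze(1)                          # (B, style_dim)

# Usage: sample noise, pick target domains, obtain style codes.
if __name__ == "__main__":
    net = MappingNetwork()
    z = torch.randn(8, 16)                  # random noise
    domain = torch.randint(0, 4, (8,))      # target style-domain labels
    style = net(z, domain)
    print(style.shape)                      # torch.Size([8, 64])

In the paper's pipeline, such a style code would condition the generator, while the ST-GCN-based components extract content and style features from the motion graph; those parts are not sketched here.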

