Show simple item record

dc.contributor.author: Cheema, Noshaba
dc.contributor.author: Hosseini, Somayeh
dc.contributor.author: Sprenger, Janis
dc.contributor.author: Herrmann, Erik
dc.contributor.author: Du, Han
dc.contributor.author: Fischer, Klaus
dc.contributor.author: Slusallek, Philipp
dc.contributor.editor: Cignoni, Paolo and Miguel, Eder [en_US]
dc.date.accessioned: 2019-05-05T17:50:16Z
dc.date.available: 2019-05-05T17:50:16Z
dc.date.issued: 2019
dc.identifier.issn: 1017-4656
dc.identifier.uri: https://doi.org/10.2312/egs.20191017
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egs20191017
dc.description.abstract: Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Most importantly, our method is very robust under noisy and inaccurate training labels and can thus handle human errors during the labeling process. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: Computing methodologies
dc.subject: Motion processing
dc.subject: Motion capture
dc.subject: Image processing
dc.title: Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks [en_US]
dc.description.seriesinformation: Eurographics 2019 - Short Papers
dc.description.sectionheaders: Learning and Networks
dc.identifier.doi: 10.2312/egs.20191017
dc.identifier.pages: 69-72
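The abstract describes treating a motion capture clip as a "motion image" and labeling it per frame with a dilated temporal fully-convolutional network. Below is a minimal sketch of such a segmenter, assuming PyTorch; the channel count, hidden width, dilation schedule, and class count are illustrative assumptions, not the configuration published in the paper.

# A minimal sketch of a dilated temporal fully-convolutional segmenter.
# Assumes a "motion image" of shape (channels, frames), where channels
# stack the per-joint features of each frame. Layer sizes and dilations
# are illustrative assumptions, not the authors' published network.
import torch
import torch.nn as nn

class DilatedTemporalFCN(nn.Module):
    def __init__(self, in_channels: int, num_classes: int, hidden: int = 64):
        super().__init__()
        layers = []
        ch = in_channels
        # Exponentially growing dilations enlarge the temporal receptive
        # field without pooling, so the output keeps one label per frame.
        for dilation in (1, 2, 4, 8):
            layers += [
                nn.Conv1d(ch, hidden, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.ReLU(),
            ]
            ch = hidden
        self.features = nn.Sequential(*layers)
        # 1x1 convolution maps features to per-frame class scores.
        self.classifier = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, frames) -> (batch, num_classes, frames)
        return self.classifier(self.features(x))

# Usage: 63 channels could stack, e.g., 21 joints x 3 coordinates per frame.
model = DilatedTemporalFCN(in_channels=63, num_classes=10)
scores = model(torch.randn(1, 63, 240))  # one hypothetical 240-frame clip
print(scores.shape)  # torch.Size([1, 10, 240])

With kernel size 3 and dilations 1, 2, 4, 8, each output frame sees a receptive field of 31 input frames while the sequence length is preserved, which matches the abstract's point that dilated convolutions gather temporal context without coarsening the per-frame labels.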

