dc.contributor.author | Cheema, Noshaba | |
dc.contributor.author | Hosseini, Somayeh | |
dc.contributor.author | Sprenger, Janis | |
dc.contributor.author | Herrmann, Erik | |
dc.contributor.author | Du, Han | |
dc.contributor.author | Fischer, Klaus | |
dc.contributor.author | Slusallek, Philipp | |
dc.contributor.editor | Cignoni, Paolo and Miguel, Eder | en_US |
dc.date.accessioned | 2019-05-05T17:50:16Z | |
dc.date.available | 2019-05-05T17:50:16Z | |
dc.date.issued | 2019 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.uri | https://doi.org/10.2312/egs.20191017 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egs20191017 | |
dc.description.abstract | Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a ''motion image'' and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is very robust under noisy and inaccurate training labels and can thus handle human errors during the labeling process. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Motion processing | |
dc.subject | Motion capture | |
dc.subject | Image processing | |
dc.title | Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks | en_US |
dc.description.seriesinformation | Eurographics 2019 - Short Papers | |
dc.description.sectionheaders | Learning and Networks | |
dc.identifier.doi | 10.2312/egs.20191017 | |
dc.identifier.pages | 69-72 | |
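The abstract notes that dilated temporal convolutions let the model aggregate temporal context from a large receptive field. As a hedged illustration only (this is not the authors' network; the kernel size and exponential dilation schedule below are assumptions, common in temporal convolutional networks), the receptive field of a stack of 1-D dilated convolutions can be computed as:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in frames) of stacked 1-D dilated convolutions.

    Each layer with kernel size k and dilation d widens the receptive
    field of the stack by (k - 1) * d frames.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# With kernel size 3 and dilations doubling per layer (an assumed
# schedule), five layers already cover 63 consecutive frames:
print(receptive_field(3, [1, 2, 4, 8, 16]))  # 63
```

Doubling the dilation per layer makes the receptive field grow exponentially with depth while the parameter count grows only linearly, which is what makes this construction attractive for long motion sequences.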