dc.contributor.author | Kikuchi, Takazumi | en_US |
dc.contributor.author | Endo, Yuki | en_US |
dc.contributor.author | Kanamori, Yoshihiro | en_US |
dc.contributor.author | Hashimoto, Taisuke | en_US |
dc.contributor.author | Mitani, Jun | en_US |
dc.contributor.editor | Jernej Barbic and Wen-Chieh Lin and Olga Sorkine-Hornung | en_US |
dc.date.accessioned | 2017-10-16T05:26:08Z | |
dc.date.available | 2017-10-16T05:26:08Z | |
dc.date.issued | 2017 | |
dc.identifier.isbn | 978-3-03868-051-2 | |
dc.identifier.uri | http://dx.doi.org/10.2312/pg.20171317 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20171317 | |
dc.description.abstract | Human parsing is a fundamental task that estimates semantic parts of a human image, such as the face, arms, legs, hats, and dresses. Recent deep-learning-based methods have achieved significant improvements, but collecting training datasets with pixel-wise annotations is labor-intensive. In this paper, we propose two solutions to cope with limited datasets. First, to handle various poses, we incorporate a pose estimation network into an end-to-end human parsing network in order to transfer common features across the two domains. The pose estimation network can be trained on rich datasets and feeds valuable features to the human parsing network. Second, to handle complicated backgrounds, we automatically increase the variation of background images by replacing the original backgrounds of human images with images drawn from large-scale scenery datasets. While each of the two solutions is versatile and beneficial to human parsing on its own, their combination yields further improvement. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Image segmentation | |
dc.subject | Image processing | |
dc.title | Transferring Pose and Augmenting Background Variation for Deep Human Image Parsing | en_US |
dc.description.seriesinformation | Pacific Graphics Short Papers | |
dc.description.sectionheaders | Short Papers | |
dc.identifier.doi | 10.2312/pg.20171317 | |
dc.identifier.pages | 7-12 | |
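
The background-replacement augmentation described in the abstract is simple enough to sketch. Below is a minimal illustration in Python; the file names, and the convention that the person segmentation mask is white (255) on the subject and black elsewhere, are assumptions for this example, not details taken from the paper.

```python
# Sketch of background augmentation: composite a segmented human
# foreground onto a new scenery image. The mask convention
# (255 = person) and file names are illustrative assumptions.
import numpy as np
from PIL import Image

def replace_background(human_path, mask_path, scenery_path):
    """Paste the masked human foreground onto a scenery background."""
    human = Image.open(human_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")  # assumed 255 on person
    scenery = Image.open(scenery_path).convert("RGB").resize(human.size)

    alpha = (np.asarray(mask, dtype=np.float32) / 255.0)[..., None]
    human_np = np.asarray(human, dtype=np.float32)
    scenery_np = np.asarray(scenery, dtype=np.float32)

    # Alpha-blend: keep person pixels, swap in scenery elsewhere.
    out = alpha * human_np + (1.0 - alpha) * scenery_np
    return Image.fromarray(out.astype(np.uint8))

# Hypothetical usage: generate one augmented training sample.
# replace_background("person.jpg", "mask.png", "scenery.jpg").save("aug.jpg")
```

The first solution, transferring pose features into the parsing network, can likewise be sketched as a two-branch model. The layer sizes, the simple feature concatenation, and the PyTorch framing below are assumptions made for illustration; the paper's actual architecture differs.

```python
# Sketch of pose-to-parsing feature transfer: a pose branch, trainable
# on rich pose datasets, whose features are injected into the parsing
# branch. All layer choices here are illustrative assumptions.
import torch
import torch.nn as nn

class PoseAidedParser(nn.Module):
    def __init__(self, num_joints=16, num_parts=20):
        super().__init__()
        self.pose_backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.pose_head = nn.Conv2d(64, num_joints, 1)   # joint heatmaps
        self.parse_backbone = nn.Sequential(
            nn.Conv2d(3 + 64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.parse_head = nn.Conv2d(64, num_parts, 1)   # part logits

    def forward(self, x):
        pose_feat = self.pose_backbone(x)
        heatmaps = self.pose_head(pose_feat)
        # Transfer: feed pose features alongside the input image.
        parse_feat = self.parse_backbone(torch.cat([x, pose_feat], dim=1))
        return heatmaps, self.parse_head(parse_feat)
```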