dc.contributor.author | Gonzalez-Sosa, Ester | en_US |
dc.contributor.author | Perez, Pablo | en_US |
dc.contributor.author | Kachach, Redouane | en_US |
dc.contributor.author | Ruiz, Jaime Jesus | en_US |
dc.contributor.author | Villegas, Alvaro | en_US |
dc.contributor.editor | Jain, Eakta and Kosinka, Jiří | en_US |
dc.date.accessioned | 2018-04-14T18:29:52Z | |
dc.date.available | 2018-04-14T18:29:52Z | |
dc.date.issued | 2018 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.uri | http://dx.doi.org/10.2312/egp.20181012 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egp20181012 | |
dc.description.abstract | In this work, we propose the use of deep learning techniques to segment items of interest from the local region to increase self-presence in Virtual Reality (VR) scenarios. Our goal is to segment hand images from the perspective of a user wearing a VR headset. We create the VR Hand Dataset, composed of more than 10,000 images, including variations of hand position, scenario, outfit, sleeve, and person. We also describe the procedure followed to automatically generate ground-truth images and to create synthetic images. Preliminary results look promising. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Towards Self-Perception in Augmented Virtuality: Hand Segmentation with Fully Convolutional Networks | en_US |
dc.description.seriesinformation | EG 2018 - Posters | |
dc.description.sectionheaders | Posters | |
dc.identifier.doi | 10.2312/egp.20181012 | |
dc.identifier.pages | 9-10 | |