
dc.contributor.author: Gonzalez-Sosa, Ester
dc.contributor.author: Perez, Pablo
dc.contributor.author: Kachach, Redouane
dc.contributor.author: Ruiz, Jaime Jesus
dc.contributor.author: Villegas, Alvaro
dc.contributor.editor: Jain, Eakta and Kosinka, Jiří
dc.date.accessioned: 2018-04-14T18:29:52Z
dc.date.available: 2018-04-14T18:29:52Z
dc.date.issued: 2018
dc.identifier.issn: 1017-4656
dc.identifier.uri: http://dx.doi.org/10.2312/egp.20181012
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egp20181012
dc.description.abstract: In this work, we propose the use of deep learning techniques to segment items of interest from the local region to increase self-presence in Virtual Reality (VR) scenarios. Our goal is to segment hand images from the perspective of a user wearing a VR headset. We create the VR Hand Dataset, composed of more than 10,000 images, covering variations in hand position, scenario, outfit, sleeve, and people. We also describe the procedure followed to automatically generate ground-truth images and to create synthetic images. Preliminary results look promising.
dc.publisher: The Eurographics Association
dc.title: Towards Self-Perception in Augmented Virtuality: Hand Segmentation with Fully Convolutional Networks
dc.description.seriesinformation: EG 2018 - Posters
dc.description.sectionheaders: Posters
dc.identifier.doi: 10.2312/egp.20181012
dc.identifier.pages: 9-10

