dc.contributor.author | Asano, Nao | en_US |
dc.contributor.author | Masai, Katsutoshi | en_US |
dc.contributor.author | Sugiura, Yuta | en_US |
dc.contributor.author | Sugimoto, Maki | en_US |
dc.contributor.editor | Robert W. Lindeman and Gerd Bruder and Daisuke Iwai | en_US |
dc.date.accessioned | 2017-11-21T15:42:38Z | |
dc.date.available | 2017-11-21T15:42:38Z | |
dc.date.issued | 2017 | |
dc.identifier.isbn | 978-3-03868-038-3 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.uri | http://dx.doi.org/10.2312/egve.20171334 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20171334 | |
dc.description.abstract | Facial performance capture is used in animation production to project a performer's facial expressions onto a computer graphics model. Retro-reflective markers and cameras are widely used for performance capture. To capture expressions, markers must be placed on the performer's face and the intrinsic and extrinsic parameters of the cameras calibrated in advance; moreover, the measurable space is limited to the calibrated area. In this paper, we propose a system that captures facial performance using a smart eyewear with embedded photo reflective sensors and a machine learning technique. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Hardware | |
dc.subject | Sensor devices and platforms | |
dc.title | Facial Performance Capture by Embedded Photo Reflective Sensors on A Smart Eyewear | en_US |
dc.description.seriesinformation | ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
dc.description.sectionheaders | Tracking | |
dc.identifier.doi | 10.2312/egve.20171334 | |
dc.identifier.pages | 21-28 | |