dc.contributor.author | Fleury, Cedric | en_US |
dc.contributor.author | Popa, Tiberiu | en_US |
dc.contributor.author | Cham, Tat Jen | en_US |
dc.contributor.author | Fuchs, Henry | en_US |
dc.contributor.editor | Eric Galin and Michael Wand | en_US |
dc.date.accessioned | 2014-12-16T07:11:06Z | |
dc.date.available | 2014-12-16T07:11:06Z | |
dc.date.issued | 2014 | en_US |
dc.identifier.issn | 1017-4656 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/egsh.20141002 | en_US |
dc.description.abstract | This paper proposes a 3D head reconstruction method for low-cost 3D telepresence systems that uses only a single consumer-level hybrid sensor (color + depth) located in front of the user. Our method fuses the real-time, noisy, and incomplete output of the hybrid sensor with a set of static, high-resolution textured models acquired in a calibration phase. A complete and fully textured 3D model of the user's head can thus be reconstructed in real time, accurately preserving the user's facial expression. The main features of our method are a mesh interpolation and a fusion of a static and a dynamic texture, which combine, respectively, the higher resolution of the pre-captured models and the dynamic features of the face. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | I.3.7 [Computer Graphics]: 3D Graphics and Realism | en_US |
dc.subject | H.4.3 [Information Systems Applications]: Communications Applications | en_US |
dc.subject | Computer conferencing, teleconferencing, and videoconferencing | en_US |
dc.title | Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence | en_US |
dc.description.seriesinformation | Eurographics 2014 - Short Papers | en_US |