Show simple item record

dc.contributor.author: Fleury, Cedric [en_US]
dc.contributor.author: Popa, Tiberiu [en_US]
dc.contributor.author: Cham, Tat Jen [en_US]
dc.contributor.author: Fuchs, Henry [en_US]
dc.contributor.editor: Eric Galin and Michael Wand [en_US]
dc.date.accessioned: 2014-12-16T07:11:06Z
dc.date.available: 2014-12-16T07:11:06Z
dc.date.issued: 2014 [en_US]
dc.identifier.issn: 1017-4656 [en_US]
dc.identifier.uri: http://dx.doi.org/10.2312/egsh.20141002 [en_US]
dc.description.abstract: This paper proposes a 3D head reconstruction method for low-cost 3D telepresence systems that uses only a single consumer-level hybrid sensor (color + depth) located in front of the user. Our method fuses the real-time, noisy, and incomplete output of a hybrid sensor with a set of static, high-resolution textured models acquired in a calibration phase. A complete and fully textured 3D model of the user's head can thus be reconstructed in real time, accurately preserving the facial expression of the user. The main features of our method are a mesh interpolation and a fusion of static and dynamic textures, combining the higher resolution of the former with the dynamic facial features of the latter. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: I.3.7 [Computer Graphics]: 3D Graphics and Realism [en_US]
dc.subject: H.4.3 [Information Systems Applications]: Communications Applications; Computer conferencing, teleconferencing, videoconferencing [en_US]
dc.title: Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence [en_US]
dc.description.seriesinformation: Eurographics 2014 - Short Papers [en_US]

