dc.contributor.author | Kettern, Markus | en_US |
dc.contributor.author | Hilsmann, Anna | en_US |
dc.contributor.author | Eisert, Peter | en_US |
dc.contributor.editor | David Bommes and Tobias Ritschel and Thomas Schultz | en_US |
dc.date.accessioned | 2015-10-07T05:13:36Z | |
dc.date.available | 2015-10-07T05:13:36Z | |
dc.date.issued | 2015 | en_US |
dc.identifier.isbn | 978-3-905674-95-8 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/vmv.20151263 | en_US |
dc.description.abstract | In this paper, we present a method for detailed, temporally consistent facial performance capture that supports any number of arbitrarily placed video cameras. Using a suitable 3D model as reference geometry, our method tracks facial movement and deformation as well as photometric changes due to illumination and shadows. In an analysis-by-synthesis framework, we warp a single reference image per camera to all frames of the sequence, thereby drastically reducing temporal drift, which is a serious problem for many state-of-the-art approaches. Temporal appearance variations are handled by a photometric estimation component modeling local intensity changes between the reference image and each individual frame. All parameters of the problem are estimated jointly, so we do not require separate estimation steps that might interfere with one another. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Temporally Consistent Wide Baseline Facial Performance Capture via Image Warping | en_US |
dc.description.seriesinformation | Vision, Modeling & Visualization | en_US |
dc.description.sectionheaders | Images and Video | en_US |
dc.identifier.doi | 10.2312/vmv.20151263 | en_US |
dc.identifier.pages | 95-102 | en_US |