dc.contributor.author | Ma, L. | en_US |
dc.contributor.author | Deng, Z. | en_US |
dc.contributor.editor | Chen, Min and Benes, Bedrich | en_US |
dc.date.accessioned | 2019-03-17T09:57:00Z | |
dc.date.available | 2019-03-17T09:57:00Z | |
dc.date.issued | 2019 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13586 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13586 | |
dc.description.abstract | This paper describes a novel real‐time end‐to‐end system for facial expression transformation, without the need for any driving source. Its core idea is to directly generate desired and photo‐realistic facial expressions on top of input monocular RGB video. Specifically, an unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. Then, it automatically transforms the source expression in an input video clip to a specified target expression through the combination of automated 3D face construction, the learned bi‐directional expression mapping and automated lip correction. It can be applied to new users without additional training. Its effectiveness is demonstrated through many experiments on faces from live and online video, with different identities, ages, speech and expressions. | en_US |
dc.publisher | © 2019 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | real‐time face reconstruction | |
dc.subject | expression transformation | |
dc.subject | facial animation | |
dc.subject | Computing methodologies → Animation | |
dc.subject | Image‐based rendering | |
dc.title | Real‐Time Facial Expression Transformation for Monocular RGB Video | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Articles | |
dc.description.volume | 38 | |
dc.description.number | 1 | |
dc.identifier.doi | 10.1111/cgf.13586 | |
dc.identifier.pages | 470-481 | |