Deep Learning-Based Unsupervised Human Facial Retargeting
Date
2021

Abstract
Traditional approaches to retargeting existing facial blendshape animations to other characters rely heavily on manually paired data, including corresponding anchors, expressions, or semantic parametrizations, to preserve the characteristics of the original performance. In this paper, inspired by recent developments in face swapping and reenactment, we propose a novel unsupervised learning method that reformulates the retargeting of 3D facial blendshape-based animations in the image domain. The expressions of a source model are transferred to a target model via the rendered images of the source animation. For this purpose, a reenactment network is trained with the rendered images of various expressions created by the source and target models in a shared latent space. The use of a shared latent space enables an automatic cross-mapping, obviating the need for manual pairing. Next, a blendshape prediction network is used to extract the blendshape weights from the translated image to complete the retargeting of the animation onto a 3D target model. Our method allows for fully unsupervised retargeting of facial expressions between models of different configurations and, once trained, is suitable for automatic real-time applications.
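The abstract describes a two-stage pipeline: a reenactment network built around a shared latent space performs the image-domain cross-mapping between characters, and a blendshape prediction network then regresses the weights that drive the 3D target rig. The code below is a minimal PyTorch-style sketch of that idea, not the authors' implementation: the shared-encoder/per-character-decoder layout, all module names, layer sizes, the 128x128 image resolution, and the 50-blendshape rig are illustrative assumptions.

# Hypothetical sketch of the pipeline described in the abstract: a
# shared-latent-space reenactment autoencoder (one encoder, per-character
# decoders) followed by a blendshape weight predictor. All names and
# sizes are assumptions for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a rendered face image to the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one character's rendered image from a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

class BlendshapePredictor(nn.Module):
    """Regresses blendshape weights from a translated target-character image."""
    def __init__(self, num_blendshapes=50, latent_dim=256):
        super().__init__()
        self.encoder = Encoder(latent_dim)
        self.head = nn.Sequential(
            nn.Linear(latent_dim, num_blendshapes),
            nn.Sigmoid(),  # blendshape weights constrained to [0, 1]
        )

    def forward(self, img):
        return self.head(self.encoder(img))

# Retargeting at inference time: encode a rendered source frame, decode it
# with the *target* character's decoder (the cross-mapping made possible by
# the shared latent space), then read off per-frame blendshape weights.
encoder = Encoder()
decode_target = Decoder()
predict_weights = BlendshapePredictor()

source_frame = torch.rand(1, 3, 128, 128)          # rendered source animation frame
translated = decode_target(encoder(source_frame))  # target character, same expression
weights = predict_weights(translated)              # weights to drive the 3D target rig
print(weights.shape)                               # torch.Size([1, 50])

Under this reading, the shared latent space is what makes the method unsupervised: since both characters' renders pass through the same encoder, decoding a source code with the target decoder yields the corresponding target expression without any manually paired frames.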
BibTeX
@article {10.1111:cgf.14400,
journal = {Computer Graphics Forum},
title = {{Deep Learning-Based Unsupervised Human Facial Retargeting}},
author = {Kim, Seonghyeon and Jung, Sunjin and Seo, Kwanggyoon and Ribera, Roger Blanco i and Noh, Junyong},
year = {2021},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14400}
}