Deep Video-Based Performance Cloning
Date
2019
Abstract
We present a new video-based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator with shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured by the reference video. We demonstrate a variety of promising results in which our method generates temporally coherent videos for challenging scenarios where the reference and driving videos consist of very different dance performances.
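The abstract describes a two-branch training scheme in which both branches update the same space-time conditional generator. The short PyTorch sketch below illustrates one way such a shared-weight, two-branch setup could be wired up. It is an assumption-laden toy, not the paper's method: the generator architecture, the reconstruction loss in the paired branch, and the consecutive-frame penalty standing in for a temporal-coherence term are illustrative placeholders.

# Hypothetical sketch of a shared-weight, two-branch training step.
# Architecture and losses are assumptions for illustration only.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: maps a pose map to an image frame."""
    def __init__(self, pose_channels=3, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, pose):
        return self.net(pose)

# Both branches use the SAME generator instance, so its weights are shared.
G = ConditionalGenerator()
optimizer = torch.optim.Adam(G.parameters(), lr=2e-4)
l1 = nn.L1Loss()

def paired_branch_loss(pose, target_frame):
    # Branch 1: paired data self-generated from the reference video of the
    # target actor; here a plain reconstruction loss (assumption).
    return l1(G(pose), target_frame)

def unpaired_branch_loss(pose_t, pose_t1):
    # Branch 2: unpaired pose sequences without ground-truth frames; here a
    # penalty on differences between consecutive generated frames stands in
    # for a temporal-coherence term (assumption).
    return torch.mean(torch.abs(G(pose_t) - G(pose_t1)))

# One illustrative step on random tensors standing in for real data.
pose, frame = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
pose_t, pose_t1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)

loss = paired_branch_loss(pose, frame) + unpaired_branch_loss(pose_t, pose_t1)
optimizer.zero_grad()
loss.backward()
optimizer.step()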
BibTeX
@article {10.1111:cgf.13632,
journal = {Computer Graphics Forum},
title = {{Deep Video-Based Performance Cloning}},
author = {Aberman, Kfir and Shi, Mingyi and Liao, Jing and Lischinski, Dani and Chen, Baoquan and Cohen-Or, Daniel},
year = {2019},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.13632}
}