dc.contributor.author | Futschik, David | en_US |
dc.contributor.author | Chai, Menglei | en_US |
dc.contributor.author | Cao, Chen | en_US |
dc.contributor.author | Ma, Chongyang | en_US |
dc.contributor.author | Stoliar, Aleksei | en_US |
dc.contributor.author | Korolev, Sergey | en_US |
dc.contributor.author | Tulyakov, Sergey | en_US |
dc.contributor.author | Kučera, Michal | en_US |
dc.contributor.author | Sýkora, Daniel | en_US |
dc.contributor.editor | Kaplan, Craig S. and Forbes, Angus and DiVerdi, Stephen | en_US |
dc.date.accessioned | 2019-05-20T09:49:46Z | |
dc.date.available | 2019-05-20T09:49:46Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-3-03868-078-9 | |
dc.identifier.uri | https://doi.org/10.2312/exp.20191074 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/exp20191074 | |
dc.description.abstract | We present a learning-based style transfer algorithm for human portraits that significantly outperforms the current state of the art in computational overhead while maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of the patch-based method of Fišer et al. [FJS*17], which is slow to compute but delivers state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time, high-quality style transfer for facial videos that runs at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to its ability to generalize. We demonstrate the practical utility of our approach on a variety of styles and target subjects. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Non-photorealistic rendering | |
dc.title | Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network | en_US |
dc.description.seriesinformation | ACM/EG Expressive Symposium | |
dc.description.sectionheaders | Learned Styles | |
dc.identifier.doi | 10.2312/exp.20191074 | |
dc.identifier.pages | 33-42 | |