Show simple item record

dc.contributor.author  Zhao, Yong  en_US
dc.contributor.author  Yang, Le  en_US
dc.contributor.author  Pei, Ercheng  en_US
dc.contributor.author  Oveneke, Meshia Cédric  en_US
dc.contributor.author  Alioscha-Perez, Mitchel  en_US
dc.contributor.author  Li, Longfei  en_US
dc.contributor.author  Jiang, Dongmei  en_US
dc.contributor.author  Sahli, Hichem  en_US
dc.contributor.editor  Benes, Bedrich and Hauser, Helwig  en_US
dc.date.accessioned  2021-10-08T07:38:06Z
dc.date.available  2021-10-08T07:38:06Z
dc.date.issued  2021
dc.identifier.issn  1467-8659
dc.identifier.uri  https://doi.org/10.1111/cgf.14202
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1111/cgf14202
dc.description.abstract  Recent advances in generative adversarial networks (GANs) have shown tremendous success in facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level remains challenging, because automatic facial expression analysis for AU intensity is itself an unsolved, difficult task. In this paper, we propose a novel synthesis-by-analysis approach that leverages the power of the GAN framework and a state-of-the-art AU detection model to achieve better results for AU-driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch-attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced-learning problem in AU synthesis. Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state of the art in the photo-realism and expressiveness of the generated facial expressions, both quantitatively and qualitatively.  en_US
dc.publisher  © 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd  en_US
dc.subject  facial animation
dc.subject  animation
dc.subject  image/video editing
dc.subject  image and video processing
dc.subject  image-based rendering
dc.subject  rendering
dc.title  Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN  en_US
dc.description.seriesinformation  Computer Graphics Forum
dc.description.sectionheaders  Articles
dc.description.volume  40
dc.description.number  6
dc.identifier.doi  10.1111/cgf.14202
dc.identifier.pages  47-61
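The abstract mentions a balanced sampling approach for alleviating imbalanced learning in AU synthesis, but the record does not specify the scheme. A common way to realize this idea is inverse-frequency weighted sampling over discretized AU intensity levels, so that rare high-intensity activations are drawn more often. The sketch below is an illustrative assumption, not the paper's actual method; the function name, the 0-5 intensity scale, and the "summarize each face by its strongest AU" heuristic are all hypothetical.

```python
import numpy as np

def balanced_sample_indices(au_intensities, batch_size, num_levels=6, rng=None):
    """Illustrative balanced sampler: oversample faces whose strongest AU
    intensity level is rare in the dataset (intensities assumed in [0, 5])."""
    rng = rng or np.random.default_rng(0)
    # Summarize each example by its maximum AU activation, rounded to a level.
    levels = np.clip(au_intensities.max(axis=1).round().astype(int), 0, num_levels - 1)
    counts = np.bincount(levels, minlength=num_levels).astype(float)
    # Inverse-frequency weight per level; levels absent from the data get zero.
    level_w = np.where(counts > 0, 1.0 / np.maximum(counts, 1.0), 0.0)
    weights = level_w[levels]
    weights /= weights.sum()
    # Draw a minibatch with replacement according to the balanced weights.
    return rng.choice(len(au_intensities), size=batch_size, replace=True, p=weights)

# Toy dataset: 1000 faces x 12 AUs, skewed toward low intensities.
data = np.random.default_rng(1).gamma(shape=1.0, scale=1.0, size=(1000, 12)).clip(0, 5)
idx = balanced_sample_indices(data, batch_size=64)
```

In a GAN training loop, such indices would select the target AU vectors fed to the generator and discriminator, counteracting the natural dominance of neutral or low-intensity frames in datasets like DISFA.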

