    • Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments
      Alvarado, Eduardo; Rohmer, Damien; Cani, Marie-Paule (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a ...

    • Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding
      Goel, Aman; Men, Qianhui; Ho, Edmond S. L. (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between the characters. In particular, precise spatiotemporal alignment between characters is required in generating ... (A sketch of multi-hot class conditioning follows this list.)

    • Monocular Facial Performance Capture Via Deep Expression Matching
      Bailey, Stephen W.; Riviere, Jérémy; Mikkelsen, Morten; O'Brien, James F. (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive ...

    • Pose Representations for Deep Skeletal Animation
      Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Data-driven skeletal animation relies on the existence of a suitable learning scheme, which can capture the rich context of motion. However, commonly used motion representations often fail to accurately encode the full ...

    • Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs
      Villanueva Aylagas, Monica; Anadon Leon, Hector; Teye, Mattias; Tollmar, Konrad (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      We present Voice2Face: a Deep Learning model that generates face and tongue animations directly from recorded speech. Our approach consists of two steps: a conditional Variational Autoencoder generates mesh animations from ... (A sketch of audio-conditioned cVAE decoding follows this list.)

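The Interaction Mix and Match title names a conditional hierarchical GAN conditioned on a multi-hot class embedding. As a rough illustration only, the sketch below shows one common way such conditioning can be wired: learned per-class embeddings are summed according to a multi-hot vector and concatenated with the generator's noise input. It assumes PyTorch; the class count, layer sizes, and pose dimensionality are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of multi-hot class conditioning for a conditional
# generator; class set, layer sizes, and wiring are assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn

NUM_CLASSES = 8         # assumed number of interaction types
EMBED_DIM = 64
LATENT_DIM = 128
POSE_DIM = 2 * 21 * 3   # e.g. two characters x 21 joints x (x, y, z); illustrative

class MultiHotEmbedding(nn.Module):
    """Embeds a multi-hot class vector by summing per-class embeddings."""
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        self.table = nn.Embedding(num_classes, embed_dim)

    def forward(self, multi_hot: torch.Tensor) -> torch.Tensor:
        # multi_hot: (batch, num_classes) with 1.0 for each active class.
        # The matrix product selects and sums the embeddings of all
        # active classes in one step.
        return multi_hot @ self.table.weight  # (batch, embed_dim)

class ConditionalGenerator(nn.Module):
    """Maps (noise, class embedding) to one frame of interaction pose."""
    def __init__(self):
        super().__init__()
        self.embed = MultiHotEmbedding(NUM_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + EMBED_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, z: torch.Tensor, multi_hot: torch.Tensor) -> torch.Tensor:
        cond = self.embed(multi_hot)
        return self.net(torch.cat([z, cond], dim=-1))

# Usage: request a blend of two interaction classes, e.g. classes 2 and 5.
gen = ConditionalGenerator()
z = torch.randn(1, LATENT_DIM)
classes = torch.zeros(1, NUM_CLASSES)
classes[0, 2] = 1.0
classes[0, 5] = 1.0
pose = gen(z, classes)  # (1, POSE_DIM)
```

The point of the multi-hot formulation is that several class embeddings can be active at once, which is what allows mixing and matching interaction types rather than selecting a single one-hot class.
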
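The Voice2Face abstract describes a conditional Variational Autoencoder that generates mesh animations conditioned on recorded speech. The sketch below is a minimal audio-conditioned cVAE under assumed dimensions; the mel-feature window size, vertex count, and layer widths are hypothetical, and it illustrates the general technique, not the paper's architecture.

```python
# Hypothetical conditional VAE mapping an audio feature window to face
# mesh vertices; dimensions and architecture are illustrative assumptions,
# not the Voice2Face implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

AUDIO_DIM = 80 * 16   # e.g. 16 frames of 80-bin mel features; assumed
MESH_DIM = 5023 * 3   # e.g. a face mesh with 5023 vertices; assumed
LATENT_DIM = 32

class AudioConditionedVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder sees mesh + audio condition, predicts a Gaussian posterior.
        self.encoder = nn.Sequential(
            nn.Linear(MESH_DIM + AUDIO_DIM, 512), nn.ReLU(),
        )
        self.to_mu = nn.Linear(512, LATENT_DIM)
        self.to_logvar = nn.Linear(512, LATENT_DIM)
        # Decoder reconstructs the mesh from latent + the same condition.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + AUDIO_DIM, 512), nn.ReLU(),
            nn.Linear(512, MESH_DIM),
        )

    def forward(self, mesh: torch.Tensor, audio: torch.Tensor):
        h = self.encoder(torch.cat([mesh, audio], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([z, audio], dim=-1))
        return recon, mu, logvar

def vae_loss(recon, mesh, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, mesh)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Inference: sample z from the prior and decode, conditioned on new audio.
model = AudioConditionedVAE()
audio = torch.randn(1, AUDIO_DIM)
z = torch.randn(1, LATENT_DIM)
mesh_frame = model.decoder(torch.cat([z, audio], dim=-1))  # one animation frame
```

At inference time only the decoder is needed: sampling different z values for the same audio yields different plausible facial performances, which is the usual motivation for a cVAE over a deterministic regressor.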