ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation
Date
2023
Author
Teshima, Hitoshi
Wake, Naoki
Thomas, Diego
Nakashima, Yuta
Kawasaki, Hiroshi
Ikeuchi, Katsushi
Abstract
The recent increase in remote work, online meetings, and tele-operation tasks has made people realize that gestures for avatars and communication robots are more important than previously thought. Gesture is one of the key factors in achieving smooth and natural communication between humans and AI systems and has been intensively researched. Current gesture generation methods are mostly based on deep neural networks that take text, audio, and other information as input; however, they generate gestures mainly from audio, producing so-called beat gestures. Although beat gestures account for more than 70% of actual human gestures, content-based gestures sometimes play an important role in making avatars more realistic and human-like. In this paper, we propose attention-based contrastive learning for text-to-gesture generation (ACT2G), in which generated gestures represent the content of the text by estimating an attention weight for each word of the input text. Because the text and gesture features computed with these attention weights are mapped to the same latent space by contrastive learning, once text is given as input, the network outputs a feature vector that can be used to generate gestures related to the content. A user study confirmed that the gestures generated by ACT2G were rated better than those of existing methods. In addition, we demonstrated that a wide variety of gestures can be generated from the same text by letting creators change the attention weights.
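The abstract describes three ingredients: per-word attention weights that pool text features, a gesture encoder that produces a matching feature, and contrastive learning that maps both into a shared latent space so a text feature can later drive gesture generation. As a rough illustration only (not the authors' code), the PyTorch sketch below shows how an attention-pooled text encoder, a gesture encoder, and a symmetric InfoNCE-style contrastive loss could fit together; all module names, dimensions, and hyper-parameters are assumptions and are not taken from the paper.

# Minimal sketch of the contrastive text-gesture embedding idea.
# All module names, dimensions, and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionTextEncoder(nn.Module):
    """Pools per-word embeddings with learned attention weights."""

    def __init__(self, word_dim: int = 300, latent_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(word_dim, 1)       # one attention score per word
        self.proj = nn.Linear(word_dim, latent_dim)

    def forward(self, words: torch.Tensor) -> torch.Tensor:
        # words: (batch, num_words, word_dim)
        attn = torch.softmax(self.score(words), dim=1)   # (batch, num_words, 1)
        pooled = (attn * words).sum(dim=1)               # attention-weighted sum
        return F.normalize(self.proj(pooled), dim=-1)    # unit-length text feature


class GestureEncoder(nn.Module):
    """Encodes a gesture sequence (poses over time) into the same latent space."""

    def __init__(self, pose_dim: int = 63, latent_dim: int = 128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, latent_dim, batch_first=True)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (batch, num_frames, pose_dim)
        _, h = self.gru(poses)
        return F.normalize(h[-1], dim=-1)                # unit-length gesture feature


def contrastive_loss(text_z: torch.Tensor, gesture_z: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched text/gesture pairs are positives,
    all other pairs in the batch act as negatives."""
    logits = text_z @ gesture_z.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(text_z.size(0), device=text_z.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    text_enc, gesture_enc = AttentionTextEncoder(), GestureEncoder()
    words = torch.randn(8, 20, 300)   # dummy batch: 8 sentences, 20 words each
    poses = torch.randn(8, 60, 63)    # dummy batch: 8 gestures, 60 frames each
    loss = contrastive_loss(text_enc(words), gesture_enc(poses))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")

Once such a shared space is trained, a text feature alone can be matched against (or decoded into) gesture features, and editing the per-word attention weights shifts the text feature, which is how varied gestures from the same sentence would arise under this reading of the abstract.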
BibTeX
@inproceedings{10.1145:3606940,
booktitle = {Proceedings of the ACM on Computer Graphics and Interactive Techniques},
editor = {Wang, Huamin and Ye, Yuting and Zordan, Victor},
title = {{ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation}},
author = {Teshima, Hitoshi and Wake, Naoki and Thomas, Diego and Nakashima, Yuta and Kawasaki, Hiroshi and Ikeuchi, Katsushi},
year = {2023},
publisher = {Association for Computing Machinery (ACM)},
ISSN = {2577-6193},
DOI = {10.1145/3606940}
}
Related items
Showing items related by title, author, creator and subject.
- Character-Object Interaction Retrieval Using the Interaction Bisector Surface
  Zhao, Xi; Choi, Myung Geol; Komura, Taku (The Eurographics Association and John Wiley & Sons Ltd., 2017) In this paper, we propose a novel approach for the classification and retrieval of interactions between human characters and objects. We propose to use the interaction bisector surface (IBS) between the body and the object ...
- Synthesizing Two-character Interactions by Merging Captured Interaction Samples with their Spacetime Relationships
  Chan, Jacky C. P.; Tang, Jeff K. T.; Leung, Howard (The Eurographics Association and Blackwell Publishing Ltd., 2013) Existing synthesis methods for closely interacting virtual characters relied on user-specified constraints such as the reaching positions and the distance between body parts. In this paper, we present a novel method for ...
- Interactive configurable virtual environment with Kinect navigation and interaction
  Pinto, João; Dias, Paulo; Eliseu, Sérgio; Santos, Beatriz Sousa (The Eurographics Association, 2020) As a solution to immersive virtual museum visits, we propose an extension upon the platform we previously developed for Setting-up Interactive Virtual Environments (pSIVE) that maintains all of the Virtual Environment ...