
dc.contributor.authorTeshima, Hitoshien_US
dc.contributor.authorWake, Naokien_US
dc.contributor.authorThomas, Diegoen_US
dc.contributor.authorNakashima, Yutaen_US
dc.contributor.authorKawasaki, Hiroshien_US
dc.contributor.authorIkeuchi, Katsushien_US
dc.contributor.editorWang, Huaminen_US
dc.contributor.editorYe, Yutingen_US
dc.contributor.editorZordan, Victoren_US
dc.date.accessioned2023-10-16T12:32:58Z
dc.date.available2023-10-16T12:32:58Z
dc.date.issued2023
dc.identifier.issn2577-6193
dc.identifier.urihttps://doi.org/10.1145/3606940
dc.identifier.urihttps://diglib.eg.org:443/handle/10.1145/3606940
dc.description.abstractThe recent increase in remote work, online meetings, and tele-operation tasks has made people realize that gestures for avatars and communication robots are more important than previously thought. Gesture is one of the key factors in achieving smooth and natural communication between humans and AI systems and has been intensively researched. Current gesture generation methods are mostly based on deep neural networks that take text, audio, and other information as input; however, they generate gestures mainly from audio, producing so-called beat gestures. Although beat gestures account for more than 70% of actual human gestures, content-based gestures sometimes play an important role in making avatars more realistic and human-like. In this paper, we propose attention-based contrastive learning for text-to-gesture generation (ACT2G), in which the generated gestures represent the content of the text by estimating an attention weight for each word of the input text. Because the text and gesture features computed with these attention weights are mapped to the same latent space by contrastive learning, once text is given as input, the network outputs a feature vector that can be used to generate gestures related to the content. A user study confirmed that the gestures generated by ACT2G were rated higher than those of existing methods. In addition, we demonstrated that a wide variety of gestures can be generated from the same text when creators change the attention weights.en_US
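The abstract describes two mechanisms: attention weights that pool per-word text features, and contrastive learning that maps the pooled text feature and the gesture feature into a shared latent space. The PyTorch sketch below illustrates that idea only; the names (AttentionPool, contrastive_loss) and the symmetric InfoNCE objective are assumptions for illustration, not the paper's exact architecture or loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Pools per-word features into one text vector via learned attention weights
    (a common realization of the per-word weighting the abstract describes)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # hypothetical scoring head, one scalar per word

    def forward(self, word_feats):
        # word_feats: (batch, num_words, dim)
        weights = torch.softmax(self.score(word_feats), dim=1)   # (batch, num_words, 1)
        pooled = (weights * word_feats).sum(dim=1)               # (batch, dim)
        return pooled, weights  # weights could be edited to vary the generated gesture

def contrastive_loss(text_z, gesture_z, temperature=0.07):
    """Symmetric InfoNCE over a batch: matching text/gesture pairs are pulled
    together in the shared latent space, non-matching pairs pushed apart."""
    text_z = F.normalize(text_z, dim=-1)
    gesture_z = F.normalize(gesture_z, dim=-1)
    logits = text_z @ gesture_z.t() / temperature                # (batch, batch) similarities
    labels = torch.arange(text_z.size(0), device=text_z.device)  # diagonal = positives
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

In such a setup, each text/gesture pair in a batch acts as a positive and all other combinations as negatives, which is what aligns the two modalities in one latent space; at inference, the text feature alone can then index gestures related to the content.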
dc.publisherACM Association for Computing Machineryen_US
dc.subjectCCS Concepts: Interaction → Multimodal Interaction; Human-Computer Interfaces
dc.subjectgesture generation
dc.subjectmultimodal interaction
dc.subjectcontrastive learning
dc.titleACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generationen_US
dc.description.seriesinformationProceedings of the ACM on Computer Graphics and Interactive Techniques
dc.description.sectionheadersCharacter Synthesis
dc.description.volume6
dc.description.number3
dc.identifier.doi10.1145/3606940

