dc.contributor.author | Katsumata, Yasunobu | en_US |
dc.contributor.author | Ishimoto, Hiroki | en_US |
dc.contributor.author | Inoue, Yasuyuki | en_US |
dc.contributor.author | Kitazaki, Michiteru | en_US |
dc.contributor.editor | Theophilus Teo | en_US |
dc.contributor.editor | Ryota Kondo | en_US |
dc.date.accessioned | 2022-11-29T07:23:47Z | |
dc.date.available | 2022-11-29T07:23:47Z | |
dc.date.issued | 2022 | |
dc.identifier.isbn | 978-3-03868-192-2 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.uri | https://doi.org/10.2312/egve.20221300 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20221300 | |
dc.description.abstract | We aimed to develop a sign language learning system using virtual reality to improve learning motivation. Hand movements for twenty words consisting of three letters were recorded with a hand motion capture system (model hand). In the learning system, the participant was asked to mimic the model hand movement while looking at both the model hand and their "own hand" in a head-mounted display (HMD) with hand motion capture. The "own hand" avatar showed either the participant's real hand motion or a shared hand motion, created by averaging the participant's hand movement and the model hand movement. The model hand was presented facing either the opposite or the same direction as the participant. Participants rated the usability of the system in a 2 x 2 (own/shared hand x opposite/same direction) blocked experimental design. We found that the shared hand avatar and the same-direction presentation were rated better than the own hand and the opposite-direction presentation, respectively. Thus, the proposed shared hand avatar system with the HMD and hand motion capture could improve sign language learning. | en_US |
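The abstract describes the shared hand as an average of the participant's and the model's hand movements. The following is a minimal, hypothetical sketch of such a per-frame blend, assuming each hand pose is given as per-joint 3D positions in a common coordinate frame; the paper's actual joint representation, blend weight, and function names are not specified and are assumed here for illustration only.

import numpy as np

def shared_hand_pose(own_joints: np.ndarray,
                     model_joints: np.ndarray,
                     weight: float = 0.5) -> np.ndarray:
    """Linearly blend participant and model hand poses per joint.

    own_joints, model_joints: arrays of shape (n_joints, 3) in a common frame.
    weight: contribution of the model hand (0.5 = simple average, as in the abstract).
    """
    assert own_joints.shape == model_joints.shape
    return (1.0 - weight) * own_joints + weight * model_joints

# Example: blend a 21-joint hand pose each frame before rendering it in the HMD.
own = np.random.rand(21, 3)    # placeholder for the motion-captured participant hand
model = np.random.rand(21, 3)  # placeholder for the recorded model hand
blended = shared_hand_pose(own, model)  # drives the shared avatar hand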
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing -> Virtual reality; Applied computing -> Computer-assisted instruction | |
dc.subject | Human-centered computing | |
dc.subject | Virtual reality | |
dc.subject | Applied computing | |
dc.subject | Computer-assisted instruction | |
dc.title | Sign Language Learning System with Concurrent Shared Avatar Hand in a Virtual Environment: Psychological Evaluation | en_US |
dc.description.seriesinformation | ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos | |
dc.description.sectionheaders | Posters | |
dc.identifier.doi | 10.2312/egve.20221300 | |
dc.identifier.pages | 27-28 | |
dc.identifier.pages | 2 pages | |