
dc.contributor.author: Krum, David M. (en_US)
dc.contributor.author: Omoteso, Olugbenga (en_US)
dc.contributor.author: Ribarsky, William (en_US)
dc.contributor.author: Starner, Thad (en_US)
dc.contributor.author: Hodges, Larry F. (en_US)
dc.contributor.editor: D. Ebert and P. Brunet and I. Navazo (en_US)
dc.date.accessioned: 2014-01-30T06:50:45Z
dc.date.available: 2014-01-30T06:50:45Z
dc.date.issued: 2002 (en_US)
dc.identifier.isbn: 1-58113-536-X (en_US)
dc.identifier.issn: 1727-5296 (en_US)
dc.identifier.uri: http://dx.doi.org/10.2312/VisSym/VisSym02/195-200 (en_US)
dc.description.abstract: A growing body of research shows several advantages to multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization, which presents navigation interface challenges due to the large magnitude of scale and the extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces. (en_US)
dc.publisher: The Eurographics Association (en_US)
dc.title: Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment (en_US)
dc.description.seriesinformation: Eurographics / IEEE VGTC Symposium on Visualization (en_US)

