| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Duke, D.J. | en_US |
| dc.date.accessioned | 2014-10-21T07:37:54Z | |
| dc.date.available | 2014-10-21T07:37:54Z | |
| dc.date.issued | 1995 | en_US |
| dc.identifier.issn | 1467-8659 | en_US |
| dc.identifier.uri | http://dx.doi.org/10.1111/j.1467-8659.1995.cgf143-0055.x | en_US |
| dc.description.abstract | Many of the reported developments in the design of virtual spaces or visualisation systems are based on improvements in technology, either physical devices or algorithms for achieving realistic renderings within real-time constraints. While this experimental approach produces a wealth of empirical results, it operates largely without a sound underlying theory that can be used to design systems that will effectively support users in real-world domains. One of the main problems is that these sophisticated technologies rely on, but rarely assess, the cognitive abilities of the user. This paper introduces a new approach to modelling human-system interaction. A syndetic model combines a formal expression of system behaviour with an approximate representation of cognitive resources to allow reasoning about the flow and utilisation of information within the combined system. The power of the approach to provide insight into novel interaction techniques is illustrated by developing a syndetic model of a gesture-driven user interface. | en_US |
| dc.publisher | Blackwell Science Ltd and the Eurographics Association | en_US |
| dc.title | Reasoning About Gestural Interaction | en_US |
| dc.description.seriesinformation | Computer Graphics Forum | en_US |
| dc.description.volume | 14 | en_US |
| dc.description.number | 3 | en_US |
| dc.identifier.doi | 10.1111/j.1467-8659.1995.cgf143-0055.x | en_US |
| dc.identifier.pages | 55-66 | en_US |