dc.description.abstract | The proliferation of projection-based systems in recent years has resulted in a variety of specialised interaction techniques that make Virtual Environments a better human-machine interface. However, the success of these techniques and metaphors is directly linked to the interaction and tracking devices used in their implementation. Users find that devices such as the data glove, stylus or joystick can be expensive and cumbersome, especially for the inexperienced. A variety of approaches use computer vision for tracking gestures or for achieving wireless interaction; typically, these involve a two-camera pair or a stereoscopic camera. Our approach uses only one camera and one or more reflective surfaces to calculate 3D information effectively and accurately. Calibration time is minimal, and the method allows very flexible positioning of the camera and reflecting surfaces. Wireless interaction and natural interaction metaphors in the user's physical space can be created using our method. The method combines easily and effectively with projection-based systems as well as with standard and stereoscopic monitors, and can be extended for use in augmented spaces. It is an inexpensive method that uses commonly available hardware; its application areas as an interaction and tracking device therefore include games and the use of virtual environments in education. In this paper, we describe the method and its use as an interaction device in two applications, and conclude with a discussion of its advantages and limitations. | en_US |