dc.contributor.author | Çığ, Çağla | en_US |
dc.contributor.author | Sezgin, Tevfik Metin | en_US |
dc.contributor.editor | Ergun Akleman | en_US |
dc.date.accessioned | 2015-06-22T07:06:52Z | |
dc.date.available | 2015-06-22T07:06:52Z | |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/exp.20151179 | en_US |
dc.description.abstract | Recently there has been a growing interest in sketch recognition technologies for facilitating human-computer interaction. Existing sketch recognition studies mainly focus on recognizing pre-defined symbols and gestures. However, just as there is a need for systems that can automatically recognize symbols and gestures, there is also a pressing need for systems that can automatically recognize pen-based manipulation activities (e.g. dragging, maximizing, minimizing, scrolling). There are two main challenges in classifying manipulation activities. First is the inherent lack of characteristic visual appearances of pen inputs that correspond to manipulation activities. Second is the necessity of real-time classification based upon the principle that users must receive immediate and appropriate visual feedback about the effects of their actions. In this paper (1) an existing activity prediction system for pen-based devices is modified for real-time activity prediction and (2) an alternative time-based activity prediction system is introduced. Both systems use eye gaze movements that naturally accompany pen-based user interaction for activity classification. The results of our comprehensive experiments demonstrate that the newly developed alternative system is a more successful candidate (in terms of prediction accuracy and early prediction speed) than the existing system for real-time activity prediction. More specifically, midway through an activity, the alternative system reaches 66% of its maximum accuracy value (i.e. 66% of 70.34%) whereas the existing system reaches only 36% of its maximum accuracy value (i.e. 36% of 55.69%). | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | H.1.2 [Models and Principles]: User/Machine Systems—Human information processing | en_US |
dc.subject | H.5.2 [Information Interfaces and Presentation (e.g. HCI)]: User Interfaces—Input devices and strategies (e.g. mouse, touchscreen) | en_US |
dc.subject | eager activity recognition | en_US |
dc.subject | sketch recognition | en_US |
dc.subject | proactive interfaces | en_US |
dc.subject | multimodal interaction | en_US |
dc.subject | sketch-based interaction | en_US |
dc.subject | gaze-based interaction | en_US |
dc.subject | feature extraction | en_US |
dc.title | Real-Time Activity Prediction: A Gaze-Based Approach for Early Recognition of Pen-Based Interaction Tasks | en_US |
dc.description.seriesinformation | Sketch-Based Interfaces and Modeling | en_US |
dc.description.sectionheaders | Stylization | en_US |
dc.identifier.doi | 10.2312/exp.20151179 | en_US |
dc.identifier.pages | 59-65 | en_US |