Show simple item record

dc.contributor.author: Öney, Seyda (en_US)
dc.contributor.author: Pathmanathan, Nelusa (en_US)
dc.contributor.author: Becher, Michael (en_US)
dc.contributor.author: Sedlmair, Michael (en_US)
dc.contributor.author: Weiskopf, Daniel (en_US)
dc.contributor.author: Kurzhals, Kuno (en_US)
dc.contributor.editor: Bujack, Roxana (en_US)
dc.contributor.editor: Archambault, Daniel (en_US)
dc.contributor.editor: Schreck, Tobias (en_US)
dc.date.accessioned: 2023-06-10T06:17:19Z
dc.date.available: 2023-06-10T06:17:19Z
dc.date.issued: 2023
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14837
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14837
dc.description.abstract: Augmented Reality (AR) provides new ways for situated visualization and human-computer interaction in physical environments. Current evaluation procedures for AR applications rely primarily on questionnaires and interviews, providing qualitative means to assess usability and task solution strategies. Eye tracking extends these existing evaluation methodologies by providing indicators for visual attention to virtual and real elements in the environment. However, the analysis of viewing behavior, especially the comparison of multiple participants, is difficult to achieve in AR. Specifically, the definition of areas of interest (AOIs), which is often a prerequisite for such analysis, is cumbersome and tedious with existing approaches. To address this issue, we present a new visualization approach to define AOIs, label fixations, and investigate the resulting annotated scanpaths. Our approach utilizes automatic annotation of gaze on virtual objects and an image-based approach that also considers spatial context for the manual annotation of objects in the real world. Our results show that, with our approach, eye tracking data from AR scenes can be annotated and analyzed flexibly with respect to data aspects and annotation strategies. (en_US)
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Human-centered computing -> Visualization
dc.subject: Human centered computing
dc.subject: Visualization
dc.title: Visual Gaze Labeling for Augmented Reality Studies (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Where to Look? AR, VR, and Attention
dc.description.volume: 42
dc.description.number: 3
dc.identifier.doi: 10.1111/cgf.14837
dc.identifier.pages: 373-384
dc.identifier.pages: 12 pages



This item appears in the following Collection(s)

  • 42-Issue 3
    EuroVis 2023 - Conference Proceedings
