dc.contributor.author	Holliman, Nicolas S.	en_US
dc.contributor.editor	Xu, Kai and Turner, Martin	en_US
dc.date.accessioned	2021-09-07T05:45:00Z
dc.date.available	2021-09-07T05:45:00Z
dc.date.issued	2021
dc.identifier.isbn	978-3-03868-158-8
dc.identifier.uri	https://doi.org/10.2312/cgvc.20211316
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/cgvc20211316
dc.description.abstract	We present a case study in the use of machine+human mixed intelligence for visualization quality assessment, applying automated visualization quality metrics to support the human assessment of data visualizations produced as coursework by students taking higher education courses. A set of image informatics algorithms including edge congestion, visual saliency and colour analysis generate machine analysis of student visualizations. The insight from the image informatics outputs has proved helpful for the marker in assessing the work and is also provided to the students as part of a written report on their work. Student and external reviewer comments suggest that the addition of the image informatics outputs to the standard feedback document was a positive step. We review the ethical challenges of working with assessment data and of automating assessment processes.	en_US
dc.publisher	The Eurographics Association	en_US
dc.subject	Human centered computing
dc.subject	Visualization design and evaluation methods
dc.subject	Empirical studies in visualization
dc.title	Automating Visualization Quality Assessment: a Case Study in Higher Education	en_US
dc.description.seriesinformation	Computer Graphics and Visual Computing (CGVC)
dc.description.sectionheaders	Education
dc.identifier.doi	10.2312/cgvc.20211316
dc.identifier.pages	49-57
