Show simple item record

dc.contributor.author          Preim, Bernhard          en_US
dc.contributor.author          Ropinski, Timo          en_US
dc.contributor.author          Isenberg, Petra          en_US
dc.contributor.editor          Puig Puig, Anna and Schultz, Thomas and Vilanova, Anna and Hotz, Ingrid and Kozlikova, Barbora and Vázquez, Pere-Pau          en_US
dc.date.accessioned            2018-09-19T15:19:21Z
dc.date.available              2018-09-19T15:19:21Z
dc.date.issued                 2018
dc.identifier.isbn             978-3-03868-056-7
dc.identifier.issn             2070-5786
dc.identifier.uri              https://diglib.eg.org:443/handle/10.2312/vcbm20181228
dc.identifier.uri              https://doi.org/10.2312/vcbm.20181228
dc.description.abstract        Medical visualization aims to directly support physicians in diagnosis and treatment planning, students and residents in medical education, and medical physicists as well as other medical researchers in answering specific research questions. For assessing whether individual medical visualization techniques or entire medical visualization systems are useful in this respect, empirical evaluations involving participants from the target user group are indispensable. The human-computer interaction field has developed a wide range of evaluation instruments, and the information visualization community has more recently adapted and refined these instruments for evaluating (information) visualization systems. However, medical visualization often lags behind and should pay more attention to evaluation, in particular to evaluations in realistic settings that assess how visualization techniques contribute to cognitive activities, such as deciding on a surgical strategy or other complex treatment decisions. In this vein, evaluations performed over a longer period are a promising way to investigate how techniques are adapted. In this paper, we discuss the evaluation practice in medical visualization based on selected examples and contrast these evaluations with the broad range of existing empirical evaluation techniques. We would like to emphasize that this paper does not serve as a general call for evaluation in medical visualization, but argues that the individual situation must be assessed and that evaluations, when they are carried out, should be done more carefully.          en_US
dc.publisher                   The Eurographics Association          en_US
dc.subject                     Empirical Evaluation
dc.title                       A Critical Analysis of the Evaluation Practice in Medical Visualization          en_US
dc.description.seriesinformation          Eurographics Workshop on Visual Computing for Biology and Medicine
dc.description.sectionheaders  Interaction and Evaluation
dc.identifier.doi              10.2312/vcbm.20181228
dc.identifier.pages            45-56

