A Survey of Human-Centered Evaluations in Human-Centered Machine Learning
Date: 2021
Authors: Sperrle, Fabian; El-Assady, Mennatallah; Guo, Grace; Borgo, Rita; Chau, Duen Horng; Endert, Alex; Keim, Daniel

Abstract
Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.
BibTeX
@article{10.1111:cgf.14329,
journal = {Computer Graphics Forum},
title = {{A Survey of Human-Centered Evaluations in Human-Centered Machine Learning}},
author = {Sperrle, Fabian and El-Assady, Mennatallah and Guo, Grace and Borgo, Rita and Chau, Duen Horng and Endert, Alex and Keim, Daniel},
year = {2021},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14329}
}