
dc.contributor.author	Chatzimparmpas, Angelos	en_US
dc.contributor.author	Martins, Rafael M.	en_US
dc.contributor.author	Jusufi, Ilir	en_US
dc.contributor.author	Kucher, Kostiantyn	en_US
dc.contributor.author	Rossi, Fabrice	en_US
dc.contributor.author	Kerren, Andreas	en_US
dc.contributor.editor	Smit, Noeska and Oeltze-Jafra, Steffen and Wang, Bei	en_US
dc.date.accessioned	2020-05-24T13:53:52Z
dc.date.available	2020-05-24T13:53:52Z
dc.date.issued	2020
dc.identifier.issn	1467-8659
dc.identifier.uri	https://doi.org/10.1111/cgf.14034
dc.identifier.uri	https://diglib.eg.org:443/handle/10.1111/cgf14034
dc.description.abstract	Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data.	en_US
dc.publisher	The Eurographics Association and John Wiley & Sons Ltd.	en_US
dc.rights	Attribution 4.0 International License
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/
dc.subject	trustworthy machine learning
dc.subject	visualization
dc.subject	interpretable machine learning
dc.subject	explainable machine learning
dc.subject	ACM CCS: Information systems → Trust
dc.subject	ACM CCS: Human centered computing → Visual analytics
dc.subject	ACM CCS: Human centered computing → Information visualization
dc.subject	ACM CCS: Human centered computing → Visualization systems and tools
dc.subject	ACM CCS: Machine learning → Supervised learning
dc.subject	ACM CCS: Machine learning → Unsupervised learning
dc.subject	ACM CCS: Machine learning → Semi-supervised learning
dc.subject	ACM CCS: Machine learning → Reinforcement learning
dc.title	The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations	en_US
dc.description.seriesinformation	Computer Graphics Forum
dc.description.sectionheaders	Trust and Provenance
dc.description.volume	39
dc.description.number	3
dc.identifier.doi	10.1111/cgf.14034
dc.identifier.pages	713-756
dc.description.documenttype	star


