dc.contributor.author | Chatzimparmpas, Angelos | en_US |
dc.contributor.author | Martins, Rafael M. | en_US |
dc.contributor.author | Jusufi, Ilir | en_US |
dc.contributor.author | Kucher, Kostiantyn | en_US |
dc.contributor.author | Rossi, Fabrice | en_US |
dc.contributor.author | Kerren, Andreas | en_US |
dc.contributor.editor | Smit, Noeska and Oeltze-Jafra, Steffen and Wang, Bei | en_US |
dc.date.accessioned | 2020-05-24T13:53:52Z | |
dc.date.available | 2020-05-24T13:53:52Z | |
dc.date.issued | 2020 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14034 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14034 | |
dc.description.abstract | Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data. | en_US |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | trustworthy machine learning | |
dc.subject | visualization | |
dc.subject | interpretable machine learning | |
dc.subject | explainable machine learning | |
dc.subject | ACM CCS | |
dc.subject | Information systems | |
dc.subject | Trust | |
dc.subject | Human centered computing | |
dc.subject | Visual analytics | |
dc.subject | Human centered computing | |
dc.subject | Information visualization | |
dc.subject | Human centered computing | |
dc.subject | Visualization systems and tools | |
dc.subject | Machine learning | |
dc.subject | Supervised learning | |
dc.subject | Machine learning | |
dc.subject | Unsupervised learning | |
dc.subject | Machine learning | |
dc.subject | Semi-supervised learning | |
dc.subject | Machine learning | |
dc.subject | Reinforcement learning | |
dc.title | The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Trust and Provenance | |
dc.description.volume | 39 | |
dc.description.number | 3 | |
dc.identifier.doi | 10.1111/cgf.14034 | |
dc.identifier.pages | 713-756 | |
dc.description.documenttype | star | |