dc.contributor.author | Magg, Caroline | en_US |
dc.contributor.author | Raidou, Renata Georgia | en_US |
dc.contributor.editor | Renata G. Raidou | en_US |
dc.contributor.editor | Björn Sommer | en_US |
dc.contributor.editor | Torsten W. Kuhlen | en_US |
dc.contributor.editor | Michael Krone | en_US |
dc.contributor.editor | Thomas Schultz | en_US |
dc.contributor.editor | Hsiang-Yun Wu | en_US |
dc.date.accessioned | 2022-09-19T11:46:33Z | |
dc.date.available | 2022-09-19T11:46:33Z | |
dc.date.issued | 2022 | |
dc.identifier.isbn | 978-3-03868-177-9 | |
dc.identifier.issn | 2070-5786 | |
dc.identifier.uri | https://doi.org/10.2312/vcbm.20221193 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/vcbm20221193 | |
dc.description.abstract | Accurate delineations of anatomically relevant structures are required for cancer treatment planning. Despite its accuracy, manual labeling is time-consuming and tedious; hence, the potential of automatic approaches, such as deep learning models, is being investigated. A promising trend in deep learning tumor segmentation is cross-modal domain adaptation, where knowledge learned on one source distribution (e.g., one modality) is transferred to another distribution. Yet, artificial intelligence (AI) engineers developing such models need to thoroughly assess the robustness of their approaches, which demands a deep understanding of the models' behavior. In this paper, we propose a web-based visual analytics application that supports the visual assessment of the predictive performance of deep learning-based models built for cross-modal brain tumor segmentation. Our application supports the multi-level comparison of multiple models, drilling down from entire cohorts of patients to individual slices, facilitates the analysis of the relationship between image-derived features and model performance, and enables the comparative exploration of the predictive outcomes of the models. All of this is realized in an interactive interface with multiple linked views. We present three use cases, analyzing differences in deep learning segmentation approaches, the influence of tumor size, and the relationship of other data set characteristics to model performance. From these scenarios, we discovered that tumor size, i.e., both volume in 3D data and pixel count in 2D data, highly affects model performance, as samples with small tumors often yield poorer results. Our approach reveals the best algorithms and their optimal configurations, supporting AI engineers in gaining more insights for the development of their segmentation models. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing → Visual Analytics; Applied computing → Life and medical sciences | |
dc.subject | Human-centered computing → Visual Analytics | |
dc.subject | Applied computing → Life and medical sciences | |
dc.title | Visual Analytics to Assess Deep Learning Models for Cross-Modal Brain Tumor Segmentation | en_US |
dc.description.seriesinformation | Eurographics Workshop on Visual Computing for Biology and Medicine | |
dc.description.sectionheaders | Visual Analytics, Artificial Intelligence | |
dc.identifier.doi | 10.2312/vcbm.20221193 | |
dc.identifier.pages | 111-115 | |
dc.identifier.pages | 5 pages | |