Show simple item record

dc.contributor.authorMagg, Carolineen_US
dc.contributor.authorRaidou, Renata Georgiaen_US
dc.contributor.editorRenata G. Raidouen_US
dc.contributor.editorBjörn Sommeren_US
dc.contributor.editorTorsten W. Kuhlenen_US
dc.contributor.editorMichael Kroneen_US
dc.contributor.editorThomas Schultzen_US
dc.contributor.editorHsiang-Yun Wuen_US
dc.date.accessioned2022-09-19T11:46:33Z
dc.date.available2022-09-19T11:46:33Z
dc.date.issued2022
dc.identifier.isbn978-3-03868-177-9
dc.identifier.issn2070-5786
dc.identifier.urihttps://doi.org/10.2312/vcbm.20221193
dc.identifier.urihttps://diglib.eg.org:443/handle/10.2312/vcbm20221193
dc.description.abstractAccurate delineations of anatomically relevant structures are required for cancer treatment planning. Despite its accuracy, manual labeling is time-consuming and tedious; hence, the potential of automatic approaches, such as deep learning models, is being investigated. A promising trend in deep learning tumor segmentation is cross-modal domain adaptation, where knowledge learned on one source distribution (e.g., one modality) is transferred to another distribution. Yet, artificial intelligence (AI) engineers developing such models need to thoroughly assess the robustness of their approaches, which demands a deep understanding of the models' behavior. In this paper, we propose a web-based visual analytics application that supports the visual assessment of the predictive performance of deep learning-based models built for cross-modal brain tumor segmentation. Our application supports the multi-level comparison of multiple models, drilling down from entire cohorts of patients to individual slices, facilitates the analysis of the relationship between image-derived features and model performance, and enables the comparative exploration of the predictive outcomes of the models. All this is realized in an interactive interface with multiple linked views. We present three use cases, analyzing differences in deep learning segmentation approaches, the influence of the tumor size, and the relationship of other data set characteristics to the performance. From these scenarios, we discovered that the tumor size, i.e., both volume in 3D data and pixel count in 2D data, strongly affects model performance, as samples with small tumors often yield poorer results. Our approach is able to reveal the best algorithms and their optimal configurations, supporting AI engineers in obtaining more insights for the development of their segmentation models.en_US
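
A minimal sketch of the kind of analysis the abstract describes, relating tumor size to per-slice segmentation quality: it is not the authors' implementation, and the mask arrays, shapes, and function names are assumptions for illustration only. It computes the Dice coefficient and tumor pixel count per 2D slice and checks their association with a Spearman correlation:

# Illustrative sketch (assumed data layout, not the paper's code):
# quantify how tumor size (pixel count per slice) relates to Dice score.
import numpy as np
from scipy.stats import spearmanr


def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom


def size_vs_performance(pred_slices, truth_slices):
    """Return tumor pixel counts, Dice scores, and their Spearman correlation."""
    sizes = np.array([t.sum() for t in truth_slices])        # tumor size as 2D pixel count
    dices = np.array([dice_score(p, t)
                      for p, t in zip(pred_slices, truth_slices)])
    rho, p_value = spearmanr(sizes, dices)                   # monotonic association
    return sizes, dices, rho, p_value


if __name__ == "__main__":
    # Synthetic example: smaller tumors are given noisier predictions,
    # mimicking the reported tendency of poorer results for small tumors.
    rng = np.random.default_rng(0)
    truths, preds = [], []
    for _ in range(50):
        mask = np.zeros((128, 128), dtype=bool)
        r = rng.integers(4, 20)                               # random tumor "radius"
        yy, xx = np.ogrid[:128, :128]
        mask[(yy - 64) ** 2 + (xx - 64) ** 2 < r ** 2] = True
        noisy = mask & (rng.random((128, 128)) > 4.0 / r)     # smaller r -> more dropout
        truths.append(mask)
        preds.append(noisy)
    _, _, rho, p = size_vs_performance(preds, truths)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
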
dc.publisherThe Eurographics Associationen_US
dc.rightsAttribution 4.0 International License
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/
dc.subjectCCS Concepts: Human-centered computing → Visual Analytics; Applied computing → Life and medical sciences
dc.subjectHuman-centered computing → Visual Analytics
dc.subjectApplied computing → Life and medical sciences
dc.titleVisual Analytics to Assess Deep Learning Models for Cross-Modal Brain Tumor Segmentationen_US
dc.description.seriesinformationEurographics Workshop on Visual Computing for Biology and Medicine
dc.description.sectionheadersVisual Analytics, Artificial Intelligence
dc.identifier.doi10.2312/vcbm.20221193
dc.identifier.pages111-115
dc.identifier.pages5 pages


Files in this item


This item appears in the following Collection(s)

