
dc.contributor.author    Knutsson, Alex    en_US
dc.contributor.author    Unnebäck, Jakob    en_US
dc.contributor.author    Jönsson, Daniel    en_US
dc.contributor.author    Eilertsen, Gabriel    en_US
dc.contributor.editor    Hansen, Christian    en_US
dc.contributor.editor    Procter, James    en_US
dc.contributor.editor    Raidou, Renata G.    en_US
dc.contributor.editor    Jönsson, Daniel    en_US
dc.contributor.editor    Höllt, Thomas    en_US
dc.date.accessioned    2023-09-19T11:31:48Z
dc.date.available    2023-09-19T11:31:48Z
dc.date.issued    2023
dc.identifier.isbn    978-3-03868-216-5
dc.identifier.issn    2070-5786
dc.identifier.uri    https://doi.org/10.2312/vcbm.20231212
dc.identifier.uri    https://diglib.eg.org:443/handle/10.2312/vcbm20231212
dc.description.abstract    Training a deep neural network is computationally expensive, but achieving the same network performance with less computation is possible if the training data is carefully chosen. However, selecting input samples during training is challenging, as their true importance for the optimization is unknown. Furthermore, evaluation of the importance of individual samples must be computationally efficient and unbiased. In this paper, we present a new input data importance sampling strategy for reducing the training time of deep neural networks. We investigate importance metrics that can be retrieved efficiently because they are already available during training, i.e., the training loss and gradient norm. We found that choosing only samples with large loss or gradient norm, which are hard for the network to learn, is not optimal for network performance. Instead, we introduce an importance sampling strategy that selects samples based on the cumulative distribution function of the loss and gradient norm, thereby making it more likely to choose hard samples while still including easy ones. The behavior of the proposed strategy is first analyzed on a synthetic dataset and then evaluated in the application of classifying malignant cancer in digital pathology image patches. As pathology images contain many repetitive patterns, there could be significant gains in focusing on features that contribute more strongly to the optimization. Finally, we show how the importance sampling process can be used to gain insights about the input data through visualization of the samples found most or least useful for training.    en_US
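The abstract describes deriving selection probabilities from the cumulative distribution function of per-sample loss or gradient norm, so that hard samples are favored without excluding easy ones. Below is a minimal Python sketch of one way such a rank-based empirical-CDF weighting could look; it is not the authors' implementation, and the function and variable names (cdf_sampling_weights, losses, batch_idx) are illustrative assumptions.

import numpy as np

def cdf_sampling_weights(losses: np.ndarray) -> np.ndarray:
    """Map each sample's loss to its empirical CDF value, then normalize.

    Samples with large loss get CDF values near 1, easy samples get small
    but nonzero values, so every sample remains selectable.
    """
    ranks = losses.argsort().argsort()      # rank 0..n-1 of each sample's loss
    cdf = (ranks + 1) / len(losses)         # empirical CDF values in (0, 1]
    return cdf / cdf.sum()                  # normalize to selection probabilities

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)         # stand-in for recorded training losses
probs = cdf_sampling_weights(losses)
batch_idx = rng.choice(len(losses), size=64, replace=False, p=probs)

In a training loop, the weights would be recomputed from the most recent per-sample losses (or gradient norms) so that the sampling distribution tracks the current state of the network.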
dc.publisher    The Eurographics Association    en_US
dc.rights    Attribution 4.0 International License
dc.rights.uri    https://creativecommons.org/licenses/by/4.0/
dc.subject    CCS Concepts: Computing methodologies -> Neural networks; Human-centered computing -> Visualization techniques
dc.subject    Computing methodologies
dc.subject    Neural networks
dc.subject    Human-centered computing
dc.subject    Visualization techniques
dc.title    CDF-Based Importance Sampling and Visualization for Neural Network Training    en_US
dc.description.seriesinformation    Eurographics Workshop on Visual Computing for Biology and Medicine
dc.description.sectionheaders    Radiology and Histopathology
dc.identifier.doi    10.2312/vcbm.20231212
dc.identifier.pages    51-55
dc.identifier.pages    5 pages


