
dc.contributor.author: Grósz, Tamás
dc.contributor.author: Kurimo, Mikko
dc.contributor.editor: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
dc.date.accessioned: 2020-05-24T13:27:44Z
dc.date.available: 2020-05-24T13:27:44Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-113-7
dc.identifier.uri: https://doi.org/10.2312/mlvis.20201103
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/mlvis20201103
dc.description.abstract: In the past few years, Deep Neural Networks (DNN) have become the state-of-the-art solution in several areas, including automatic speech recognition (ASR); unfortunately, they are generally viewed as black boxes. Recently, this has started to change, as researchers have dedicated much effort to interpreting their behavior. In this work, we concentrate on visual interpretation by depicting the hidden activation vectors of the DNN, and propose the use of deep Autoencoders (DAE) to transform these hidden representations for inspection. We use multiple metrics to compare our approach with other widely used algorithms, and the results show that our approach is quite competitive. The main advantage of Autoencoders over the existing methods is that, after the training phase, they apply a fixed transformation that can be used to visualize any hidden activation vector without further optimization, which is not true for the other methods.
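The abstract describes training a deep autoencoder whose bottleneck provides a fixed low-dimensional projection of DNN hidden activation vectors, so that new activations can be visualized without re-optimization (unlike, e.g., t-SNE). A minimal numpy sketch of that idea, using a single-bottleneck autoencoder on synthetic activations; all dimensions, hyperparameters, and data here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for DNN hidden activation vectors (the paper uses real
# ASR acoustic-model activations; these dimensions are illustrative).
d_hidden, n_frames, d_vis = 32, 500, 2
X = rng.standard_normal((n_frames, d_hidden))

# Single-bottleneck autoencoder (the paper's DAE is deeper; this is a sketch).
W_enc = rng.standard_normal((d_hidden, d_vis)) * 0.1
W_dec = rng.standard_normal((d_vis, d_hidden)) * 0.1

def encode(x):
    """Fixed 2-D projection once training is done; used for plotting."""
    return np.tanh(x @ W_enc)

lr = 0.01
for epoch in range(200):
    H = np.tanh(X @ W_enc)          # bottleneck code (2-D)
    X_hat = H @ W_dec               # linear reconstruction
    err = X_hat - X                 # reconstruction error
    # Backprop of the mean squared reconstruction loss.
    grad_dec = H.T @ err / n_frames
    grad_H = err @ W_dec.T
    grad_enc = X.T @ (grad_H * (1 - H ** 2)) / n_frames
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# After training, the encoder is a fixed transformation: any new activation
# vector is projected by a single forward pass, with no per-point optimization.
new_vec = rng.standard_normal((1, d_hidden))
print(encode(new_vec).shape)  # (1, 2)
```

The contrast drawn in the abstract is that methods like t-SNE solve an optimization problem per dataset, whereas here `encode` is reused as-is for any later activation vector.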
dc.publisher: The Eurographics Association
dc.subject: Computing methodologies
dc.subject: Dimensionality reduction and manifold learning
dc.subject: Speech recognition
dc.subject: Neural networks
dc.title: Visual Interpretation of DNN-based Acoustic Models using Deep Autoencoders
dc.description.seriesinformation: Machine Learning Methods in Visualisation for Big Data
dc.description.sectionheaders: Papers
dc.identifier.doi: 10.2312/mlvis.20201103
dc.identifier.pages: 25-29

