dc.contributor.author | Wu, Tsung Heng | en_US |
dc.contributor.author | Zhao, Ye | en_US |
dc.contributor.author | Amiruzzaman, Md | en_US |
dc.contributor.editor | Turkay, Cagatay and Vrotsou, Katerina | en_US |
dc.date.accessioned | 2020-05-24T13:31:33Z | |
dc.date.available | 2020-05-24T13:31:33Z | |
dc.date.issued | 2020 | |
dc.identifier.isbn | 978-3-03868-116-8 | |
dc.identifier.issn | 2664-4487 | |
dc.identifier.uri | https://doi.org/10.2312/eurova.20201091 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/eurova20201091 | |
dc.description.abstract | Speech recognition technology has achieved impressive success recently with AI techniques of deep learning networks. Speech-to-text tools are becoming prevalent in many social applications such as field surveys. However, the speech transcription results are far from perfect for direct use in these applications by domain scientists and practitioners, which prevents the users from fully leveraging the AI tools. In this paper, we show that interactive visualization can play important roles in post-AI understanding, editing, and analysis of speech recognition results by presenting specified task characterization and case examples. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.title | Interactive Visualization of AI-based Speech Recognition Texts | en_US |
dc.description.seriesinformation | EuroVis Workshop on Visual Analytics (EuroVA) | |
dc.description.sectionheaders | Intersecting Humans and AI | |
dc.identifier.doi | 10.2312/eurova.20201091 | |
dc.identifier.pages | 79-83 | |