dc.contributor.author | Louis-Alexandre, Judith | en_US |
dc.contributor.author | Waldner, Manuela | en_US |
dc.contributor.editor | Hoellt, Thomas | en_US |
dc.contributor.editor | Aigner, Wolfgang | en_US |
dc.contributor.editor | Wang, Bei | en_US |
dc.date.accessioned | 2023-06-10T06:34:30Z | |
dc.date.available | 2023-06-10T06:34:30Z | |
dc.date.issued | 2023 | |
dc.identifier.isbn | 978-3-03868-219-6 | |
dc.identifier.uri | https://doi.org/10.2312/evs.20231034 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/evs20231034 | |
dc.description.abstract | Language models are trained on large text corpora that often include stereotypes. This can lead to direct or indirect bias in downstream applications. In this work, we present a method for interactive visual exploration of indirect multiclass bias learned by contextual word embeddings. We introduce a new indirect bias quantification score and present two interactive visualizations to explore interactions between multiple non-sensitive concepts (such as sports, occupations, and beverages) and sensitive attributes (such as gender or year of birth) based on this score. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing -> Visual analytics; Computing methodologies -> Natural language processing | |
dc.subject | Human-centered computing | |
dc.subject | Visual analytics | |
dc.subject | Computing methodologies | |
dc.subject | Natural language processing | |
dc.title | Visual Exploration of Indirect Bias in Language Models | en_US |
dc.description.seriesinformation | EuroVis 2023 - Short Papers | |
dc.description.sectionheaders | VA and Perception | |
dc.identifier.doi | 10.2312/evs.20231034 | |
dc.identifier.pages | 1-5 | |
dc.identifier.pages | 5 pages | |