dc.contributor.author | Saldanha, Emily | en_US |
dc.contributor.author | Praggastis, Brenda | en_US |
dc.contributor.author | Billow, Todd | en_US |
dc.contributor.author | Arendt, Dustin L. | en_US |
dc.contributor.editor | Johansson, Jimmy and Sadlo, Filip and Marai, G. Elisabeta | en_US |
dc.date.accessioned | 2019-06-02T18:14:27Z | |
dc.date.available | 2019-06-02T18:14:27Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-3-03868-090-1 | |
dc.identifier.uri | https://doi.org/10.2312/evs.20191168 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/evs20191168 | |
dc.description.abstract | Reinforcement learning (RL) is a branch of machine learning where an agent learns to maximize reward through trial and error. RL is challenging and data- and compute-intensive, leading practitioners to become overwhelmed and make poor modeling decisions. Our contribution is a Visual Analytics tool designed to help data scientists maintain situation awareness during RL experimentation. Our tool allows users to understand which hyper-parameter values lead to better or worse outcomes, what behaviors are associated with high and low reward, and how behaviors evolve throughout training. We evaluated our tool through three use cases using state-of-the-art deep RL models, demonstrating how our tool leads to RL situation awareness. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Human-centered computing | |
dc.subject | Visualization systems and tools | |
dc.subject | Computing methodologies | |
dc.subject | Reinforcement learning | |
dc.subject | Computational control theory | |
dc.title | ReLVis: Visual Analytics for Situational Awareness During Reinforcement Learning Experimentation | en_US |
dc.description.seriesinformation | EuroVis 2019 - Short Papers | |
dc.description.sectionheaders | Volume, Simulation, and Data Reduction | |
dc.identifier.doi | 10.2312/evs.20191168 | |
dc.identifier.pages | 43-47 | |