Deep Learning Interpretability with Visual Analytics: Exploring Reasoning and Bias Exploitation
Date: 2022-05-16
Author: Jaunet, Theo
Abstract
In the last couple of years, Artificial Intelligence (AI) and Machine Learning have evolved from research domains addressed in laboratories far from the public eye into technologies deployed at industrial scale, widely impacting our daily lives. This trend has started to raise legitimate concerns, as these technologies are also applied to critical domains such as finance and autonomous driving, in which decisions can have life-threatening consequences. Since a large part of the underlying complexity of the decision process is learned from massive amounts of data, how models make their decisions remains unknown both to the builders of those models and to the people impacted by them. This led to the new field of eXplainable AI (XAI) and to the problem of analyzing the behavior of trained models to shed light on their reasoning modes and the underlying biases they are subject to. This thesis contributes to this emerging field with the design of novel visual analytics systems tailored to studying and improving the interpretability of Deep Neural Networks. Our goal was to empower experts with tools that help them better interpret the decisions of their models. We also contributed explorable applications designed to introduce Deep Learning methods to non-expert audiences. Our focus was on the under-explored challenge of interpreting and improving models for applications such as robotics, where important decisions must be made from high-dimensional, low-level inputs such as images.