Title:
Interactive and Explainable Methods in Machine Learning with Humans
Author(s)
Silva, Andrew
Advisor(s)
Gombolay, Matthew
Abstract
This dissertation introduces and evaluates new mechanisms for interactivity and explainability within machine learning, specifically targeting human-in-the-loop learning systems. I also introduce and evaluate new approaches to personalization for heterogeneous populations (i.e., populations in which users have diverse, potentially conflicting, preferences). The contributions of this dissertation aim to substantiate the thesis statement:
Interactive and explainable machine learning yields improved experiences for human users of intelligent systems. Specifically:
1. Machine learning with human expertise improves task performance as measured by success rates and reward.
2. Personalized machine learning improves task performance for a large heterogeneous population of users.
3. Machine learning with explainability improves human perceptions of intelligent agents and enhances user compliance with agent suggestions.
I offer both novel technical methods for interactivity and explainability within machine learning and user studies that empirically validate these technical contributions. Along the way, I provide guidelines for future work in interactive and explainable machine learning, drawing on insights from empirical experimentation and user studies.
This dissertation begins with an overview of relevant background knowledge and related work (Chapter 2) and then presents novel technical contributions in interactive and personalized learning (Chapters 3-6). I then transition to contributions in explainable machine learning (Chapters 7-8) and a large-scale user study across various explainability approaches (Chapter 9). I present a project applying personalization techniques to explanation modalities in an in-person user study (Chapter 10), which serves to unify prior work on interactivity and explainability into a single framework for learning personalized explainability profiles across a population of heterogeneous end-users. This dissertation then closes with limitations of work presented herein and directions for future work in Chapter 11, and concluding remarks in Chapter 12.
Date Issued
2023-06-29
Resource Type
Text
Resource Subtype
Dissertation