Human-AI Partnerships in Gesture-Controlled Interactive Music Systems
Author(s)
Smith, Jason Brent
Abstract
This dissertation explores the use of artificial intelligence (AI) in interactive music systems that create music from users' gestural input. It presents three AI-based interactive music systems that collaborate with a performer by analyzing their gestures and motion to generate changes in audio. The first system, Captune, uses machine learning models of varying depth to automate changes in musical parameters for looping audio. The second system, PoseFX, communicates its decision-making to the user through visualizations and musical output. The third system, GestAlt, uses online machine learning and reinforcement learning to adapt to a user's hand-motion patterns and lets the user communicate their musical goals to the system.
Each system was evaluated in a study that measured how participants perceived the system as a creatively autonomous partner, how their understanding of the system affected their relationship with the AI, and how their perceptions evolved as they learned to perform with it. Participants reported greater creativity and expressiveness with the version of Captune that used a deeper neural network. Visualizations that supported participants' understanding of PoseFX improved their ability to perform with it in ways that better matched their performance goals. Across repeated performances with GestAlt, participants' ability to communicate with the agent increased their trust over time, and they developed a sense of shared goals and motion with the system. The dissertation distills these findings into design principles for AI-based interactive music systems that support human-AI collaboration.
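The abstract names the techniques but not their implementations. As a rough illustration only, the sketch below shows the general shape of a gesture-to-parameter mapping like the one Captune's description suggests: a small feed-forward network translating gesture features into normalized audio effect parameters. All names here (GestureMapper, n_features, n_params, the layer sizes) are hypothetical assumptions for illustration, not the dissertation's actual architecture or code.

```python
import numpy as np

class GestureMapper:
    """Illustrative only: maps a vector of gesture features
    (e.g., hand-landmark coordinates) to normalized audio effect
    parameters. Not the dissertation's actual model."""

    def __init__(self, n_features: int, n_params: int,
                 hidden: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        # A two-layer perceptron; depth could be varied by stacking
        # more hidden layers, echoing the abstract's "machine learning
        # models of varying depth".
        self.w1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_params))
        self.b2 = np.zeros(n_params)

    def __call__(self, features: np.ndarray) -> np.ndarray:
        h = np.tanh(features @ self.w1 + self.b1)
        # Sigmoid squashes outputs into [0, 1], a common range for
        # normalized parameters such as filter cutoff or reverb mix.
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))

if __name__ == "__main__":
    # Hypothetical input size: 21 hand landmarks in 2D = 42 features.
    mapper = GestureMapper(n_features=42, n_params=4)
    gesture = np.random.default_rng(1).uniform(size=42)
    print(mapper(gesture))  # four normalized effect parameters
```

In a live setting, such a mapper would run once per motion-capture frame, with its outputs smoothed before being applied to the audio engine; the online-learning and reinforcement-learning components described for GestAlt would additionally update the mapping between performances.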
Date
2024-08-15
Resource Type
Text
Resource Subtype
Dissertation