Organizational Unit:
School of Music

Publication Search Results

Now showing 1 - 10 of 17
  • Item
    Rhythm Recreation Study To Inform Intelligent Pedagogy Systems
    (Georgia Institute of Technology, 2023-08-28) Alben, Noel
    Web-based intelligent pedagogy systems have great potential to provide interactive music lessons to those unable to access conventional, face-to-face music instruction from human experts. A key component of any effective pedagogy system is the expert domain knowledge used to generate, present, and evaluate the teachable content that makes up the "syllabus" of the system (Brusilovskiy, 1994). In this work, we investigate the application of computational musicology algorithms to devise the syllabus of intelligent rhythm pedagogy software. Many computational metrics that quantify and characterize rhythmic patterns have been proposed (Toussaint). We employ Cao et al.'s (2012) family theory of rhythms as a metric of rhythmic similarity and an entropy-based coded-element metric of rhythmic complexity (Thul, 2008). Both metrics have been shown to correlate with human judgments of rhythmic similarity and complexity. We hypothesize that a rhythmic syllabus that uses these metrics to determine the order in which rhythmic patterns are learned will be easier for musicians to progress through. We test this hypothesis in a rhythm reproduction study hosted on a custom-designed, web-based experimental interface. Our experiment consists of six blocks: in each block, a participant listens to five unique rhythmic patterns, which they must then reproduce by clapping into their computer's microphone. Each rhythmic pattern is two measures long on an eighth-note grid, presented at 105 BPM, and looped four times. The order and content of the rhythmic patterns within each block are determined using our chosen complexity and similarity metrics. A participant completes a block when they reproduce all of its rhythmic patterns within the performance constraints defined by the automatic performance assessment built into the experimental interface. The six blocks cover the key experimental factors: the order of the stimuli determined by our prescribed metrics, melodic information added to the rhythmic stimuli, and the presence of a visual representation of the rhythmic pattern. We also include control blocks whose patterns are selected randomly, without any theoretically informed metrics. The dependent variable used to measure the effectiveness of the syllabus is the number of trials taken to reproduce a given rhythmic stimulus accurately. Participant reproductions are stored to afford future analyses, and the designed interface helps automate data collection, making it more accessible for future rhythm reproduction studies. We conducted the rhythm recreation study with 28 participants across the United States, who accessed the experiment through a web-based portal. The data gathered from our experiment suggest that computational music theory algorithms can contribute to creating syllabi that align with human perception, although these results deviate from our initial predictions. Furthermore, it appears that while incorporating visual stimuli aided the learning of rhythmic patterns, the introduction of pitched onsets negatively affected reproduction performance.
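    The abstract above orders patterns into a syllabus using an entropy-based complexity metric and a similarity metric. As a rough illustration only (the actual metrics of Thul, 2008 and Cao et al., 2012 differ in their details), the sketch below scores binary onset patterns on an eighth-note grid by inter-onset-interval entropy and orders them greedily by similarity to the previously learned pattern; the function names and ordering rule are hypothetical.

```python
# Illustrative sketch only: approximates the kind of entropy-based complexity
# and similarity scoring described above, not the exact thesis metrics.
from collections import Counter
from math import log2

def ioi_entropy(pattern):
    """Shannon entropy of inter-onset intervals for a binary onset pattern
    on an eighth-note grid (1 = onset, 0 = rest)."""
    onsets = [i for i, x in enumerate(pattern) if x]
    if len(onsets) < 2:
        return 0.0
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    counts = Counter(iois)
    total = len(iois)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def hamming_similarity(p, q):
    """Crude rhythmic similarity: fraction of grid positions that agree."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

def order_syllabus(patterns):
    """Toy syllabus: start with the lowest-entropy pattern, then repeatedly
    pick the remaining pattern most similar to the last one learned."""
    remaining = sorted(patterns, key=ioi_entropy)
    syllabus = [remaining.pop(0)]
    while remaining:
        nxt = max(remaining, key=lambda p: hamming_similarity(syllabus[-1], p))
        remaining.remove(nxt)
        syllabus.append(nxt)
    return syllabus

patterns = [
    [1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0],   # straight eighth-note pulse
    [1,0,0,1,0,0,1,0,1,0,0,1,0,0,1,0],   # tresillo-like pattern over two bars
    [1,0,0,0,1,0,1,0,0,1,0,0,1,0,0,0],
]
for p in order_syllabus(patterns):
    print(p, round(ioi_entropy(p), 2))
```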
  • Item
    Toward Natural Singing Via External Prosthesis
    (Georgia Institute of Technology, 2022-12-15) Irvin, Bryce
    The accessibility of expressive singing is limited by the physical mechanisms that produce speech and singing. For individuals without these physical mechanisms, singing is difficult or impossible. Through this work, we propose the development of an external electronic prosthesis capable of inducing a natural singing voice in a performer without the need for traditional singing mechanisms. This prosthesis will offer performers of any background and ability a new way to express themselves and participate in social music activities. Specifically, we first aim to resolve issues with common prosthesis transducers. We then aim to discover methods for inducing the most natural singing voice in users, focusing on the nature of the excitation waveform used to drive the transducer of the prosthesis.
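    Since the abstract centers on the excitation waveform used to drive the transducer, a minimal sketch follows; it is not the thesis implementation, and the waveform shapes and parameters (e.g. open_quotient) are assumptions chosen only to contrast a plain impulse train with a smoother, glottal-like pulse train at a target singing pitch.

```python
# Minimal sketch (not the thesis implementation): two candidate excitation
# waveforms one might feed to a prosthesis transducer at a singing pitch.
import numpy as np

def impulse_train(f0, dur, sr=16000):
    """Plain periodic impulse train at fundamental frequency f0 (Hz)."""
    n = int(dur * sr)
    x = np.zeros(n)
    period = int(sr / f0)
    x[::period] = 1.0
    return x

def glottal_like_train(f0, dur, sr=16000, open_quotient=0.6):
    """Repeat a smooth raised-cosine 'open phase' each period; the rest of
    the period is silent. open_quotient is the open fraction of the period."""
    period = int(sr / f0)
    open_len = max(2, int(open_quotient * period))
    pulse = 0.5 * (1 - np.cos(2 * np.pi * np.arange(open_len) / open_len))
    one_period = np.concatenate([pulse, np.zeros(period - open_len)])
    reps = int(np.ceil(dur * sr / period))
    return np.tile(one_period, reps)[: int(dur * sr)]

excitation = glottal_like_train(f0=220.0, dur=1.0)  # one second at A3
```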
  • Item
    Using music to modulate emotional memory
    (Georgia Institute of Technology, 2021-12-14) Mehdizadeh, Sophia Kaltsouni
    Music is powerful in both affecting emotion and evoking memory. This thesis explores whether music can modulate, or change, aspects of our emotional episodic memories. We present a behavioral, human-subjects experiment with a cognitive memory task targeting the reconsolidation mechanism. Memory reconsolidation allows a previous experience to be relived and simultaneously reframed in memory. Moreover, reconsolidation of emotional, potentially maladaptive, autobiographical episodic memories has become a research focus in the development of new affective psychotherapy protocols. To this end, we propose that music may be a useful tool in driving and reshaping our memories and their associated emotions. This thesis additionally focuses on the roles that affect and preference may play in these memory processes. Through this research, we provide evidence supporting music's ability to serve as a context for emotional autobiographical episodic memories. Overall, our results suggest that affective characteristics of the music and the emotions induced in the listener significantly influence memory creation and retrieval, and furthermore, that the musical emotion may be as powerful as the musical structure in contextualizing and cueing memories. We also find support for individual differences and the personal relevance of the musical context playing a determining role in these processes. This thesis establishes a foundation for subsequent neuroimaging work and future clinical research directions.
  • Item
    The sound within: Learning audio features from electroencephalogram recordings of music listening
    (Georgia Institute of Technology, 2020-04-28) Vinay, Ashvala
    We look at the intersection of music, machine learning, and neuroscience. Specifically, we are interested in understanding how we can predict audio onset events using the electroencephalogram responses of subjects listening to the same music segment. We present models and approaches to this problem derived from deep learning. We worked with a highly imbalanced dataset and present methods to address the imbalance: tolerance windows and aggregations. Our presented models are a feed-forward network, a convolutional neural network (CNN), a recurrent neural network (RNN), and an RNN with a custom unrolling method. Our results show that at a tolerance window of 40 ms, a feed-forward network performed well. We also found that an aggregation window of 200 ms gave promising results, aggregation being a simple way to reduce model complexity.
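    The tolerance windows and aggregations mentioned above can be pictured with a small sketch. The details below (greedy matching, a 40 ms window, 200 ms aggregation bins, and the function names) are assumptions for illustration, not the thesis code.

```python
# Sketch of the tolerance-window idea: a predicted onset counts as correct if
# it falls within +/- tol seconds of a ground-truth onset; aggregation pools
# frame-level labels into coarser bins.
def onset_f1(pred_times, true_times, tol=0.040):
    """F1 score with greedy matching of predicted to true onsets."""
    true = list(true_times)
    tp = 0
    for p in pred_times:
        match = next((t for t in true if abs(t - p) <= tol), None)
        if match is not None:
            tp += 1
            true.remove(match)
    precision = tp / len(pred_times) if pred_times else 0.0
    recall = tp / len(true_times) if true_times else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def aggregate_labels(frame_labels, frame_rate, window=0.200):
    """Pool binary frame labels into windows (e.g. 200 ms): a window is
    positive if any frame inside it contains an onset."""
    frames_per_win = max(1, int(round(window * frame_rate)))
    return [int(any(frame_labels[i:i + frames_per_win]))
            for i in range(0, len(frame_labels), frames_per_win)]

pred = [0.51, 1.02, 1.49]
truth = [0.50, 1.00, 1.55]
print("F1 within 40 ms:", round(onset_f1(pred, truth), 2))
```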
  • Item
    Regressing dexterous finger flexions using machine learning and multi-channel single element ultrasound transducers
    (Georgia Institute of Technology, 2018-04-27) Hantrakul, Lamtharn
    Human Machine Interfaces, or "HMIs", come in many shapes and sizes. The mouse and keyboard are a typical and familiar HMI. In applications such as virtual reality or music performance, a precise HMI for tracking finger movement is often required. Ultrasound, a safe and non-invasive imaging technique, has shown great promise as an alternative HMI that addresses the shortcomings of vision-based and glove-based sensors. This thesis develops a first-in-class system enabling real-time regression of individual and simultaneous finger flexions using single element ultrasound transducers. A comprehensive dataset of ultrasound signals is collected from a study of 10 users. A series of machine learning experiments using this dataset demonstrate promising results supporting the use of single element transducers as an HMI device.
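    A hypothetical sketch of the kind of regression pipeline described above follows: per-channel energy features from several single-element ultrasound channels are mapped to continuous flexion values for five fingers. The feature choice, the ridge-regression model, and the stand-in random data are assumptions for illustration only, not the thesis setup.

```python
# Illustrative regression sketch on stand-in data (real data would come from
# the ultrasound transducers and a motion-capture or glove ground truth).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_channels, samples_per_frame = 2000, 4, 128

raw = rng.normal(size=(n_frames, n_channels, samples_per_frame))
X = np.sqrt((raw ** 2).mean(axis=2))            # per-channel RMS energy feature
y = rng.uniform(0.0, 1.0, size=(n_frames, 5))   # flexion of 5 fingers in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)        # multi-output linear regression
print("R^2 on held-out frames:", model.score(X_te, y_te))  # near 0 on random data
```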
  • Item
    Enhancing stroke generation and expressivity in robotic drummers - A generative physics model approach
    (Georgia Institute of Technology, 2015-04-24) Edakkattil Gopinath, Deepak
    The goal of this master's thesis research is to enhance the stroke generation capabilities and musical expressivity of robotic drummers. The approach adopted is to understand the physics of the human finger-drumstick-drumhead interaction and to replicate the same behavior in a robotic drumming system with the minimum number of degrees of freedom. The model that is developed is agnostic to the exact specifications of the robotic drummer that will attempt to emulate human-like drum strokes, and can therefore be used in any robotic drummer whose actuators provide complete control over the motor position angle. Initial approaches based on exploiting the instability of a PID control system to generate multiple bounces, and the limitations of this approach, are also discussed in depth. To assess the success of the model and its implementation on the robotic platform, a subjective evaluation was conducted. The evaluation results showed that the observed data were statistically equivalent to the subjects resorting to a blind guess when distinguishing between a human playing a multiple-bounce stroke and a robot playing a similar kind of stroke.
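    The multiple-bounce behavior discussed above can be illustrated with a toy damped-bounce schedule; this is not the thesis's physics model, and the restitution factor, timing constants, and function names are assumptions. Each rebound is weaker and arrives sooner than the last, and the resulting (time, velocity) pairs could in principle drive motor position commands.

```python
# Illustrative sketch, not the thesis model: a damped-bounce schedule for a
# multiple-bounce (buzz) stroke, where each rebound loses energy by a
# restitution factor and follows sooner after the previous contact.
def bounce_schedule(v0=2.0, restitution=0.6, first_gap=0.080, min_velocity=0.05):
    """Return (onset_time, strike_velocity) pairs for one bounce stroke."""
    events = []
    t, v, gap = 0.0, v0, first_gap
    while v >= min_velocity:
        events.append((round(t, 4), round(v, 4)))
        t += gap
        v *= restitution          # each bounce is weaker ...
        gap *= restitution        # ... and arrives sooner after the last
    return events

for onset, velocity in bounce_schedule():
    print(f"strike at {onset:.3f} s with velocity {velocity:.2f}")
```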
  • Item
    Supervised feature learning via sparse coding for music information retrieval
    (Georgia Institute of Technology, 2015-04-24) O'Brien, Cian John
    This thesis explores the ideas of feature learning and sparse coding for Music Information Retrieval (MIR). Sparse coding is an algorithm which aims to learn new feature representations from data automatically. In contrast to previous work that uses sparse coding in an MIR context, the concept of supervised sparse coding, which makes explicit use of the ground-truth labels during the learning process, is also investigated. Here, sparse coding and supervised sparse coding are applied to two MIR problems: classification of musical genre and recognition of the emotional content of music. A variation of Label Consistent K-SVD is used to add supervision during the dictionary learning process. In the case of Music Genre Recognition (MGR), an additional discriminative term is added to encourage tracks from the same genre to have similar sparse codes. For Music Emotion Recognition (MER), a linear regression term is added to learn an optimal classifier and dictionary pair. The results indicate that while sparse coding performs well for MGR, the additional supervision fails to improve performance. In the case of MER, supervised coding significantly outperforms both standard sparse coding and commonly used designed features, namely MFCCs and pitch chroma.
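    For orientation, the unsupervised sparse-coding baseline referred to above can be sketched with standard dictionary-learning tools; the supervised Label Consistent K-SVD variants add label-consistency and regression terms that are not shown here. The feature dimensions, dictionary size, and stand-in data below are assumptions.

```python
# Sketch of an unsupervised sparse-coding baseline: encode feature frames
# against a learned dictionary and use the sparse codes as classifier input.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 20))      # stand-in for MFCC/chroma frames
labels = rng.integers(0, 4, size=500)    # stand-in genre labels per frame

dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit_transform(frames)       # sparse codes, shape (500, 32)

clf = LogisticRegression(max_iter=1000).fit(codes, labels)

# Encode unseen frames with the learned dictionary and classify them.
new_codes = sparse_encode(rng.normal(size=(10, 20)), dico.components_,
                          algorithm="omp", n_nonzero_coefs=5)
print(clf.predict(new_codes))
```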
  • Item
    Analog synthesizers in the classroom: How creative play, musical composition, and project-based learning can enhance STEM standard literacy and self-efficacy
    (Georgia Institute of Technology, 2015-04-24) Howe, Christopher David
    The state of STEM education in America's high schools is currently in flux, with billions of dollars poured annually into the NSF to increase national STEM literacy. Hands-on, project-based learning interventions in the STEM classroom are ubiquitous but tend to focus on robotics or competition-based curricula. These curricula do not address musical creativity or cultural relevancy to reach under-represented or disinterested groups. By utilizing an analog synthesizer to teach STEM learning standards, this research aims to engage students who may otherwise lack confidence in the field. By incorporating the Maker Movement, a STEAM architecture, and culturally relevant musical examples, this study's goal is to build both self-efficacy and literacy in STEM within under-represented groups through hands-on exercises with a Moog analog synthesizer, specifically the Moog Werkstatt. A quasi-experimental, one-group pre-test/post-test design was crafted to determine study validity and has been implemented in three separate studies. Several age demographics were selected across a variety of classroom models and teaching styles. The purpose of this wide net was to explore where a tool like the Werkstatt and its accompanying curriculum would have the biggest impact. Results show that this curriculum and technique are largely ineffective in an inverted music elective classroom. However, in the STEM classroom, literacy and confidence were built across genders, with females showing greater increases in engineering confidence and music technology interest than their male counterparts.
  • Item
    Audience participation using mobile phones as musical instruments
    (Georgia Institute of Technology, 2012-05-21) Lee, Sang Won
    This research develops a music piece for audience participation that uses mobile phones as musical instruments in a music concert setting. Inspired by the ubiquity of smartphones, I attempted to accomplish audience engagement in a music performance by crafting an accessible musical instrument with which the audience can become part of the performance. The research begins by reviewing related work in two areas, mobile music and audience participation at music performances, builds a charted map of the two areas and their intersection to seek an innovation, and defines requisites for successful audience participation in which audience members can take part in music making as musicians with their mobile phones. To make audience participation accessible, the concept of a networked multi-user instrument is applied to the system. With the lessons learnt, I developed echobo, a mobile musical instrument application for iOS devices (iPhone, iPad, and iPod Touch). With this system, audience members can download the app at the concert, play the instrument instantly, interact with other audience members, and contribute to the music with the sound generated from their mobile phones. A music piece for echobo and clarinet was presented in a series of performances, and the application was found to work reliably and to accomplish audience engagement. The post-survey results indicate that the system was accessible and helped the audience connect to the music and the other musicians.
  • Item
    Musical swarm robot simulation strategies
    (Georgia Institute of Technology, 2011-11-16) Albin, Aaron Thomas
    Swarm robotics for music is a relatively new way to explore algorithmic composition as well as new modes of human robot interaction. This work outlines a strategy for making music with a robotic swarm constrained by acoustic sound, rhythmic music using sequencers, motion causing changes in the music, and finally human and swarm interaction. Two novel simulation programs are created in this thesis: the first is a multi-agent simulation designed to explore suitable parameters for motion to music mappings as well as parameters for real time interaction. The second is a boid-based robotic swarm simulation that adheres to the constraints established, using derived parameters from the multi-agent simulation: orientation, number of neighbors, and speed. In addition, five interaction modes are created that vary along an axis of direct and indirect forms of human control over the swarm motion. The mappings and interaction modes of the swarm robot simulation are evaluated in a user study involving music technology students. The purpose of the study is to determine the legibility of the motion to musical mappings and evaluate user preferences for the mappings and modes of interaction in problem solving and in open-ended contexts. The findings suggest that typical users of a swarm robot system do not necessarily prefer more inherently legible mappings in open-ended contexts. Users prefer direct and intermediate modes of interaction in problem solving scenarios, but favor intermediate modes of interaction in open-ended ones. The results from this study will be used in the design and development of a new swarm robotic system for music that can be used in both contexts.