Organizational Unit:
School of Music


Publication Search Results

  • Item
    Using music to modulate emotional memory
    (Georgia Institute of Technology, 2021-12-14) Mehdizadeh, Sophia Kaltsouni
    Music is powerful in both affecting emotion and evoking memory. This thesis explores if music might be able to modulate, or change, aspects of our emotional episodic memories. We present a behavioral, human-subjects experiment with a cognitive memory task targeting the reconsolidation mechanism. Memory reconsolidation allows for a previous experience to be relived and simultaneously reframed in memory. Moreover, reconsolidation of emotional, potentially maladaptive, autobiographical episodic memories has become a research focus in the development of new affective psychotherapy protocols. To this end, we propose that music may be a useful tool in driving and reshaping our memories and their associated emotions. This thesis additionally focuses on the roles that affect and preference may play in these memory processes. Through this research, we provide evidence supporting music’s ability to serve as a context for emotional autobiographical episodic memories. Overall, our results suggest that affective characteristics of the music and the emotions induced in the listener significantly influence memory creation and retrieval, and that furthermore, the musical emotion may be equally as powerful as the musical structure in contextualizing and cueing memories. We also find support for individual differences and personal relevance of the musical context playing a determining role in these processes. This thesis establishes a foundation for subsequent neuroimaging work and future clinical research directions.
  • Item
    The sound within: Learning audio features from electroencephalogram recordings of music listening
    (Georgia Institute of Technology, 2020-04-28) Vinay, Ashvala
    We look at the intersection of music, machine learning, and neuroscience. Specifically, we are interested in whether we can predict audio onset events from the electroencephalogram (EEG) responses of subjects listening to the same music segment. We present deep learning models and approaches to this problem. Because the dataset is highly imbalanced, we introduce two methods to address this: tolerance windows and aggregations. The models we evaluate are a feed-forward network, a convolutional neural network (CNN), a recurrent neural network (RNN), and an RNN with a custom unrolling method. Our results show that with a tolerance window of 40 ms, the feed-forward network performed well. We also found that an aggregation of 200 ms yielded promising results, with aggregation serving as a simple way to reduce model complexity.