Organizational Unit: Sonification Lab

Publication Search Results

Now showing 1 - 10 of 20
  • Item
    Preliminary guidelines on the sonification of visual artworks: Linking music, sonification & visual arts
    (Georgia Institute of Technology, 2019-06) Nadri, Chihab ; Anaya, Chairunisa ; Yuan, Shan ; Jeon, Myounghoon
    Sonification and data processing algorithms have advanced over the years to reach practical applications in our everyday lives. Image processing techniques have similarly improved over time. While a number of image sonification methods have already been developed, few have explored potential synergies through the combined use of multiple data and image processing techniques. Additionally, little has been done on the use of image sonification for artworks, as most research has focused on transcribing visual data for people with visual impairments. Our goal is to sonify paintings in a way that reflects their art style and genre, to improve the experience of both sighted and visually impaired individuals. To this end, we designed initial sonifications for abstractionist and realist paintings and conducted interviews with visual and auditory experts to improve our mappings. We believe the recommendations and design directions we received will help us develop a multidimensional sonification algorithm that can better transcribe visual art into appropriate music. (A hypothetical sketch of such an image-to-sound mapping appears after this listing.)
  • Item
    Examining the learnability of auditory displays: Music, earcons, spearcons, and lyricons
    (Georgia Institute of Technology, 2018-06) Tislar, Kay ; Duford, Zackery ; Nelson, Brittany ; Peabody, Madeline ; Jeon, Myounghoon
    Auditory displays are a useful platform to convey information to users for a variety of reasons. The present study examined different types of sounds that can be used in auditory displays (music, earcons, spearcons, and lyricons) to determine which have the highest learnability when presented in sequences. Participants were self-trained on sound meanings and then asked to recall the meanings after listening to sequences of varying lengths. The relatedness of the sounds and their attributed meanings, or the intuitiveness of the sounds, was also examined. The results show that participants learned and recalled lyricons and spearcons best, and that related meaning is an important contributing variable to the learnability and memorability of all sound types. This should open the door for future research on and experimentation with lyricons and spearcons presented in auditory streams.
  • Item
    “Musical Exercise” for people with visual impairments: A preliminary study with the blindfolded
    (Georgia Institute of Technology, 2018-06) Khan, Ridwan Ahmed ; Jeon, Myounghoon ; Yoon, Tejin
    Performing independent physical exercise is critical to maintaining good health, but it is especially hard for people with visual impairments. To address this problem, we have developed a Musical Exercise platform for people with visual impairments so that they can consistently perform exercise in good form. We designed six conditions: blindfolded and sighted conditions without audio, and blindfolded and sighted conditions with each of two types of audio feedback (continuous vs. discrete). Eighteen sighted participants completed two exercises, squat and wall sit, under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. The results also show that, with a specific sound design (i.e., discrete feedback), participants in the blindfolded condition can exercise as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well, and thus care is required in refining auditory displays. The potential and limitations of Musical Exercise and future work are discussed with the results. (A hypothetical sketch contrasting continuous and discrete feedback appears after this listing.)
  • Item
    Multisensory Cue Congruency In The Lane Change Test
    (Georgia Institute of Technology, 2017-06) Sun, Yuanjing ; Barnes, Jaclyn ; Jeon, Myounghoon
    Drivers interact with a number of systems while driving. Taking advantage of multiple modalities can reduce the cognitive effort of information processing and facilitate multitasking. The present study investigates how and when auditory cues improve driver responses to a visual target. We manipulated three dimensions (spatial, semantic, and temporal) of verbal and nonverbal cues to interact with visual spatial instructions. Multimodal displays were compared with unimodal (visual-only) displays to see whether they would facilitate or degrade a vehicle control task. Twenty-six drivers participated in the Auditory-Spatial Stroop experiment using a lane change test (LCT). Preceding auditory cues improved response time over the visual-only condition. When dimensions conflicted, spatial (location) congruency had a stronger impact than semantic (meaning) congruency. The effect on accuracy was minimal, but there was a trend toward speed-accuracy trade-offs. Results are discussed along with theoretical issues and future work.
  • Item
    Participatory Design Research Methodologies: A Case Study in Dancer Sonification
    (Georgia Institute of Technology, 2017-06) Landry, Steven ; Jeon, Myounghoon
    Given that embodied interaction is widespread in Human-Computer Interaction, interest in the importance of body movements and emotions is gradually increasing. The present paper describes our process of designing and testing a dancer sonification system using a participatory design research methodology. The end goal of the dancer sonification project is to have dancers generate aesthetically pleasing music in real time based on their dance gestures, instead of dancing to pre-recorded music. The generated music should reflect both the kinetic activity and affective content of the dancer’s movement. To accomplish these goals, expert dancers and musicians were recruited as domain experts in affective gesture and auditory communication. Much of the dancer sonification literature focuses exclusively on describing the final performance piece or the techniques used to process motion data into auditory control parameters. This paper focuses on the methods we used to identify, select, and test the most appropriate motion-to-sound mappings for a dancer sonification system.
  • Item
    Influences of Visual and Auditory Displays on Aimed Movements Using Air Gesture Controls
    (Georgia Institute of Technology, 2017-06) Sterkenburg, Jason ; Landry, Steven ; Jeon, Myounghoon
    With the proliferation of technologies operated via in-air hand movements, e.g., virtual/augmented reality, in-vehicle infotainment systems, and large public information displays, there remains an open question about whether and how auditory displays can be used effectively to facilitate eyes-free aimed movements. We conducted a within-subjects study, similar to a Fitts paradigm study, in which 24 participants completed simple aimed movements to acquire targets of varying sizes and distances. Participants completed these aimed movements under six conditions, each presenting a unique combination of visual and auditory displays. Results showed that participants were generally faster to make selections with visual displays than with displays without visuals. However, selection accuracy for auditory-only displays was similar to that of displays with visual components. These results highlight the potential for auditory displays to aid aimed movements using air gestures in conditions where visual displays are impractical, impossible, or unhelpful.
  • Item
    Musical Robots For Children With ASD Using A Client-Server Architecture
    (Georgia Institute of Technology, 2016-07) Zhang, Ruimin ; Barnes, Jaclyn ; Ryan, Joseph ; Jeon, Myounghoon ; Park, Chung Hyuk ; Howard, Ayanna M.
    People with Autism Spectrum Disorders (ASD) are known to have difficulty recognizing and expressing emotions, which affects their social integration. Leveraging recent advances in interactive robot and music therapy approaches, and integrating both, we have designed musical robots that can facilitate social and emotional interactions of children with ASD. The robots communicate with children with ASD while detecting their emotional states and physical activities, and then generate real-time sonification based on the interaction data. Given that we envision the use of multiple robots with children, we have adopted a client-server architecture: each robot and sensing device acts as a client terminal, while the sonification server processes all the data and generates harmonized sonification. After describing our goals for the use of sonification, we detail the system architecture and ongoing research scenarios. We believe that the present paper offers a new perspective on sonification applications for assistive technologies. (A minimal client-server sketch of this kind of architecture appears after this listing.)
  • Item
    Towards An In-Vehicle Sonically-Enhanced Gesture Control Interface: A Pilot Study
    (Georgia Institute of Technology, 2016-07) Sterkenburg, Jason ; Landry, Steven ; Jeon, Myounghoon ; Johnson, Joshua
    A pilot study was conducted to explore the potential of sonically-enhanced gestures as controls for future in-vehicle information systems (IVIS). Four concept menu systems were developed using a LEAP Motion sensor and Pure Data: (1) 2x2 with auditory feedback, (2) 2x2 without auditory feedback, (3) 4x4 with auditory feedback, and (4) 4x4 without auditory feedback. Seven participants drove in a simulator while completing simple target-acquisition tasks with each of the four prototype systems. Driving performance and eye glance behavior were collected, as well as subjective ratings of workload and system preference. Results from driving performance and eye-tracking measures strongly indicate that the 2x2 grids yield better driving safety outcomes than the 4x4 grids. Subjective ratings show similar patterns for driver workload and preferences. Compared with the visual-only displays, auditory feedback yielded similar improvements in driving performance and eye glance behavior, as well as in subjective ratings of workload and preference. (A hypothetical sketch of the gesture-to-grid auditory feedback loop appears after this listing.)
  • Item
    Listen To Your Drive: Sonification Architecture and Strategies for Driver State and Performance
    (Georgia Institute of Technology, 2016-07) Landry, Steven ; Tascarella, David ; Jeon, Myounghoon ; FakhrHosseini, S. Maryam
    Driving is mainly a visual task, leaving other sensory channels open for communicating additional information. As the level of automation in vehicles increases, monitoring the state and performance of the driver and vehicle shifts from a secondary to a primary task. Auditory channels provide the flexibility to display a wide variety of information to the driver without increasing the workload of the driving task. It is important to identify the types of auditory displays and sonification strategies that provide information integral to the driving task without overloading the driver with unnecessary or intrusive data. To this end, we have developed an in-vehicle interactive sonification system using a medium-fidelity simulator and neurophysiological devices. The system is intended to integrate driving performance data and driver affective state data in real time. The present paper introduces the architecture of our in-vehicle interactive sonification system and potential sonification strategies for providing feedback to the driver in an intuitive and non-intrusive manner. (A hypothetical sketch of this kind of data fusion appears after this listing.)
  • Item
    LifeMusic: Reflection Of Life Memories By Data Sonification
    (Georgia Institute of Technology, 2016-07) Khan, Ridwan A. ; Avvari, Ram K. ; Wiykovics, Katherine ; Ranay, Pooja ; Jeon, Myounghoon
    Memorable life events are important in forming one's present self-image. Looking back on these memories provides an opportunity to ruminate on the meaning of life and envision the future. Integrating the life-log concept with auditory graphs, we have implemented a mobile application, "LifeMusic", which helps people reflect on their memories by listening to sonifications of their life events that are synchronized with those memories. Reflecting on life events through LifeMusic can relieve users of the present and let them journey back to past moments, helping them keep their emotions in balance in their present lives. In the current paper, we describe the implementation and workflow of LifeMusic and briefly discuss focus group results, improvements, and future work. (A hypothetical sketch of such an event-to-pitch auditory graph appears after this listing.)
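
The artwork sonification abstract ("Preliminary guidelines on the sonification of visual artworks") describes mapping paintings to sound by art style and genre but does not specify the mappings. The following is a minimal illustrative sketch, not the authors' algorithm: it scans a painting column by column and maps assumed image statistics (brightness to pitch, color warmth to loudness) to note events. All function names and parameter ranges are hypothetical.

```python
# Hypothetical sketch: scan a painting column by column and map simple image
# statistics to note events. The specific mappings (brightness -> pitch,
# warmth -> loudness) are illustrative assumptions, not the authors' algorithm.

from statistics import mean

def column_to_note(column_pixels):
    """Map one column of RGB pixels to a (midi_pitch, velocity) pair."""
    brightness = mean(sum(p) / 3 for p in column_pixels)       # 0..255
    warmth = mean(p[0] - p[2] for p in column_pixels)          # red minus blue
    midi_pitch = 36 + round(brightness / 255 * 48)             # C2..C6 range
    velocity = max(30, min(127, 64 + round(warmth / 2)))       # warmer = louder
    return midi_pitch, velocity

def sonify_image(pixels):
    """pixels: list of rows, each a list of (r, g, b) tuples."""
    width = len(pixels[0])
    columns = ([row[x] for row in pixels] for x in range(width))
    return [column_to_note(col) for col in columns]

# Tiny synthetic "painting": dark, cool left half and bright, warm right half.
demo = [[(20, 30, 90)] * 4 + [(230, 180, 60)] * 4 for _ in range(6)]
print(sonify_image(demo))
```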
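
The "Musical Exercise" abstract compares continuous and discrete audio feedback for exercise form. The sketch below illustrates one plausible way to realize the two feedback styles from a stream of knee-angle readings; the target angle, tolerance, and tone mapping are assumptions, not the published design.

```python
# Hypothetical sketch of the two feedback styles compared in the paper:
# continuous feedback tracks the knee angle the whole time, discrete feedback
# only marks threshold crossings. Target angle and tone ranges are assumptions.

TARGET_ANGLE = 90.0      # ideal knee angle (degrees) for a squat, assumed
TOLERANCE = 10.0         # degrees of acceptable deviation, assumed

def continuous_feedback(angle_deg):
    """Return a tone frequency (Hz) that drifts with deviation from target."""
    deviation = angle_deg - TARGET_ANGLE
    return 440.0 * 2 ** (deviation / 24.0)   # about half an octave per 12 degrees

def discrete_feedback(angle_deg, was_in_form):
    """Return ('enter'|'exit'|None, now_in_form): cue only on state changes."""
    now_in_form = abs(angle_deg - TARGET_ANGLE) <= TOLERANCE
    if now_in_form and not was_in_form:
        return "enter", now_in_form    # play a confirmation earcon
    if was_in_form and not now_in_form:
        return "exit", now_in_form     # play a warning earcon
    return None, now_in_form

# Simulated angle stream from one squat repetition.
angles = [140, 120, 105, 95, 90, 88, 95, 110, 130]
in_form = False
for a in angles:
    event, in_form = discrete_feedback(a, in_form)
    print(f"{a:5.1f} deg  continuous {continuous_feedback(a):6.1f} Hz  discrete {event}")
```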
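
The musical robots abstract describes a client-server architecture in which robots and sensing devices act as terminals and a sonification server fuses their data. The sketch below shows a minimal version of that split using JSON over UDP; the message fields, port number, and tempo/mode mapping are illustrative assumptions rather than the project's actual protocol.

```python
# Minimal sketch of the client-server split described in the abstract: each
# robot or sensing device sends its readings to a central sonification server,
# which fuses them into sound parameters. All fields and mappings are assumed.

import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999   # assumed local port for this demo

def map_to_sound(msg):
    """Fuse one client's report into coarse sonification parameters."""
    tempo = 60 + int(60 * msg["activity"])               # busier child -> faster
    mode = "major" if msg["valence"] >= 0 else "minor"   # mood -> tonality
    return {"robot": msg["robot_id"], "tempo_bpm": tempo, "mode": mode}

def sonification_server(stop):
    """Receive JSON reports over UDP and print the mapped sound parameters."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        sock.settimeout(0.5)
        while not stop.is_set():
            try:
                data, _ = sock.recvfrom(4096)
            except socket.timeout:
                continue
            print("server ->", map_to_sound(json.loads(data)))

def robot_client(robot_id, valence, activity):
    """One robot/sensor terminal sending a single interaction report."""
    report = {"robot_id": robot_id, "valence": valence, "activity": activity}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(report).encode(), (HOST, PORT))

if __name__ == "__main__":
    stop = threading.Event()
    server = threading.Thread(target=sonification_server, args=(stop,))
    server.start()
    time.sleep(0.2)                                        # let the server bind first
    robot_client("romo-1", valence=0.6, activity=0.8)      # happy, very active
    robot_client("darwin-2", valence=-0.3, activity=0.2)   # subdued
    time.sleep(0.5)
    stop.set()
    server.join()
```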
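
The sonically-enhanced gesture control pilot study used a LEAP Motion sensor and Pure Data. The sketch below approximates the feedback loop with placeholder hand positions and an assumed OSC address and port: a normalized position is quantized to a 2x2 or 4x4 grid, and a message is sent to a Pd patch whenever the highlighted cell changes. It is not the authors' implementation.

```python
# Sketch of the sonically-enhanced gesture menu concept: a normalized hand
# position is quantized to a grid, and an OSC message is sent to a Pure Data
# patch whenever the highlighted cell changes. The OSC address, port, and the
# hand-position source are assumptions; the real prototype used a LEAP Motion.

from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

PD_CLIENT = SimpleUDPClient("127.0.0.1", 5005)     # assumed Pd listening port

def cell_for_position(x, y, grid_size):
    """Quantize a hand position in [0, 1) x [0, 1) to a (row, col) cell."""
    col = min(int(x * grid_size), grid_size - 1)
    row = min(int(y * grid_size), grid_size - 1)
    return row, col

def track_hand(positions, grid_size=2, audio_feedback=True):
    """Emit one auditory cue per cell change, mirroring the 2x2/4x4 conditions."""
    current = None
    for x, y in positions:
        cell = cell_for_position(x, y, grid_size)
        if cell != current:
            current = cell
            if audio_feedback:
                # The Pd patch is assumed to map /menu/cell to a short earcon.
                PD_CLIENT.send_message("/menu/cell", list(cell))
            print(f"hand at ({x:.2f}, {y:.2f}) -> cell {cell}")

# Simulated hand trajectory sweeping across the grid.
track_hand([(0.1, 0.1), (0.4, 0.2), (0.6, 0.2), (0.8, 0.7)], grid_size=2)
```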
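
The "Listen To Your Drive" abstract describes integrating driving performance data and driver affective state in real time. The sketch below shows one assumed way such a fusion step could map the two streams to ambient sound parameters; the data fields, thresholds, and mapping are hypothetical.

```python
# Hypothetical sketch of the kind of real-time fusion the architecture
# describes: one stream of driving-performance data and one stream of
# driver-state data are combined into sonification parameters. Field names,
# thresholds, and the mapping itself are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VehicleSample:
    lane_deviation_m: float    # lateral offset from lane center
    speed_kph: float

@dataclass
class DriverSample:
    arousal: float             # 0 (calm) .. 1 (agitated), e.g. from EEG/GSR
    valence: float             # -1 (negative) .. 1 (positive)

def sonification_params(vehicle, driver):
    """Map the fused state to non-intrusive ambient-sound parameters."""
    urgency = min(1.0, abs(vehicle.lane_deviation_m) / 0.5)   # saturate at 0.5 m
    return {
        "volume": 0.2 + 0.6 * urgency,             # louder only when drifting
        "tempo_bpm": 60 + 40 * driver.arousal,     # mirror driver arousal
        "brightness": 0.5 + 0.5 * driver.valence,  # timbre tracks mood
    }

print(sonification_params(VehicleSample(0.35, 95.0), DriverSample(0.7, -0.2)))
```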
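
The LifeMusic abstract combines life-logging with auditory graphs. The sketch below illustrates the general idea under assumed data fields: events are ordered by date and an emotional rating is mapped to pitch, so playing the sequence traces the emotional contour of a period. It is not the app's actual implementation.

```python
# Sketch of the life-log-to-auditory-graph idea behind LifeMusic: events are
# ordered by date and their emotional rating is mapped to pitch, so a playback
# of the sequence traces the ups and downs of the period. The event fields and
# pitch range are assumptions.

from datetime import date

def event_to_note(rating, low_midi=48, high_midi=84):
    """Map an emotional rating in [-1, 1] to a MIDI pitch (low = sad, high = happy)."""
    span = high_midi - low_midi
    return round(low_midi + (rating + 1) / 2 * span)

def life_music(events):
    """events: list of (date, label, rating). Returns time-ordered note events."""
    timeline = sorted(events, key=lambda e: e[0])
    return [(d.isoformat(), label, event_to_note(r)) for d, label, r in timeline]

# Hypothetical life-log entries.
memories = [
    (date(2015, 6, 12), "graduation", 0.9),
    (date(2015, 9, 3), "moved to a new city", 0.2),
    (date(2016, 1, 20), "lost a pet", -0.7),
]
for when, label, pitch in life_music(memories):
    print(when, label, "-> MIDI", pitch)
```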