Series
International Conference on Auditory Display (ICAD)

Series Type
Event Series
Publication Search Results

Now showing 1 - 10 of 24
  • Item
    Neurodivergence in Sound: Sonification as a Tool for Mental Health Awareness
    (Georgia Institute of Technology, 2023-06) Nadri, Chihab ; Al Mater, Hamza ; Morrison, Spencer ; Tiemann, Allison ; Song, Inuk ; Lee, Tae Ho ; Jeon, Myounghoon
    Building greater mental health awareness is an important factor in decreasing the stigma surrounding individuals with neurodivergent conditions, and has motivated the development of programs and activities that promote such awareness. Sonification of neural activity can convey an individual's psychological and mental characteristics in a simple and intuitive manner. In this study, we developed a sonification algorithm that alters existing music clips according to fMRI data of salience network activity from neurotypical individuals and from neurodivergent individuals with schizophrenia. We evaluated these sonifications with 24 participants. Results indicate that participants were able to differentiate between sound clips stemming from different neurological conditions, and that they gained increased awareness of schizophrenia through this brief intervention. These findings suggest that sonification can be an effective tool for raising mental health awareness and for relating neurodivergence to a neurotypical audience.
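
    The abstract does not detail the mapping itself; below is a minimal illustrative sketch (not the authors' algorithm), assuming a hypothetical linear mapping from a normalized salience-network activity trace to per-segment tempo scaling and filter cutoff:

        # Illustrative sketch only: the ranges and the linear mapping are assumptions.
        import numpy as np

        def activity_to_params(activity, tempo_range=(0.85, 1.15), cutoff_range=(500.0, 8000.0)):
            """Normalize an fMRI activity trace to [0, 1] and map it linearly to
            per-segment tempo scaling and low-pass filter cutoff (Hz)."""
            a = np.asarray(activity, dtype=float)
            a = (a - a.min()) / (np.ptp(a) + 1e-9)
            tempo = tempo_range[0] + a * (tempo_range[1] - tempo_range[0])
            cutoff = cutoff_range[0] + a * (cutoff_range[1] - cutoff_range[0])
            return tempo, cutoff

        tempo, cutoff = activity_to_params([0.2, 0.9, 0.4, 1.3])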
  • Item
    Preliminary evaluation of lead time variation for rail crossing in-vehicle alerts
    (Georgia Institute of Technology, 2022-06) Nadri, Chihab ; Zieger, Scott ; Lautala, Pasi ; Nelson, David ; Jeon, Myounghoon
    In-Vehicle Auditory Alert (IVAA) effectiveness depends on several auditory factors. Lead time has been shown to significantly influence IVAA effectiveness in automotive displays, but an appropriate lead time for Highway-Rail Grade Crossings (HRGCs) has yet to be determined. To address this research gap, we conducted a small-scale driving simulator study investigating the effect of lead time variation on driving performance and gaze behavior at rail crossings. Eleven participants completed three experimental drives with different alert conditions. Preliminary results show that a seven-second lead time led to statistically higher temporal demand, a slower approach speed to crossings, and better gaze behavior than the no-IVAA condition. The seven-second condition also showed higher values than the advanced-warning condition, although these differences were not statistically significant. These findings offer insight into auditory display guidance for HRGCs, although future work with a larger participant pool is needed to confirm them.
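
    In implementation terms, a lead-time policy reduces to triggering the alert when the estimated time to reach the crossing drops to the chosen value. A minimal sketch (our illustration, not the study's software), assuming constant vehicle speed:

        # Hypothetical trigger logic; a real system would smooth speed estimates.
        def should_alert(distance_m, speed_mps, lead_time_s=7.0):
            """Fire once time-to-crossing is at or below the lead time."""
            if speed_mps <= 0:
                return False
            return distance_m / speed_mps <= lead_time_s

        # At 20 m/s (~72 km/h), a 7 s lead time fires the alert 140 m from the crossing.
        assert should_alert(140, 20, 7.0)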
  • Item
    Investigating the effect of earcon and speech variables on hybrid auditory alerts at rail crossings
    (Georgia Institute of Technology, 2021-06) Nadri, Chihab ; Lee, Seul Chan ; Kekal, Siddhant ; Li, Yinjia ; Li, Xuan ; Lautala, Pasi ; Nelson, David ; Jeon, Myounghoon
    Despite rail industry advances in reducing accidents at Highway-Rail Grade Crossings (HRGCs), train-vehicle collisions continue to happen. Auditory displays have been suggested as a countermeasure to improve driver behavior at HRGCs, with prior research recommending hybrid sound alerts consisting of earcons and speech messages. In this study, we sought to further investigate the effect of auditory variables in hybrid sound alerts. Nine participants evaluated 18 variations of a hybrid In-Vehicle Auditory Alert (IVAA) on 11 subjective rating scales. Results showed that earcon speed and pitch contour design can change user perception of the hybrid IVAA, and further indicated the influence of speech gender and other semantic variables on user assessment of HRGC IVAAs. These findings can inform the design of appropriate hybrid IVAAs for HRGCs.
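
    As a rough illustration of the earcon side of such a hybrid alert, the sketch below synthesizes a tone sequence whose speed and pitch contour are the manipulated variables; the frequencies and durations are our own placeholders, not the study's stimuli, and the speech portion would be concatenated separately:

        import numpy as np

        def make_earcon(freqs, note_s=0.15, speed=1.0, contour=0, fs=44100):
            """Render a tone sequence; `speed` scales note duration and
            `contour` shifts each successive note by `contour` semitones."""
            notes = []
            for i, f in enumerate(freqs):
                f_i = f * 2 ** (contour * i / 12)  # rising (+) or falling (-) contour
                t = np.arange(int(fs * note_s / speed)) / fs
                notes.append(0.5 * np.sin(2 * np.pi * f_i * t))
            return np.concatenate(notes)

        earcon = make_earcon([440, 554, 659], speed=1.25, contour=1)
        # hybrid = np.concatenate([earcon, speech_samples])  # speech loaded elsewhere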
  • Item
    Introduction of a computational modelling approach to auditory display research: Case studies using the QN-MHP framework
    (Georgia Institute of Technology, 2021-06) Jeon, Myounghoon ; Nadri, Chihab ; Zhang, Yiqi
    For more than two decades, a myriad of design and research methods have been proposed in the ICAD community. Neurological methods have been presented since the inception of ICAD, and psychological human-subjects research has become a legitimate approach to auditory display design and evaluation. However, little research has been conducted on modelling approaches that formalize human behavior in response to auditory displays. To bridge this gap, the present paper introduces computational modelling of auditory displays using the Queuing Network-Model Human Processor (QN-MHP) framework. After delineating the advantages of computational modelling and the QN-MHP framework, the paper presents four case studies that modelled drivers' behavior in response to in-vehicle auditory warnings, followed by implications and future work. We hope this paper sparks lively discussion of computational modelling in the ICAD community, so that more researchers can benefit from this method in future research.
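
    QN-MHP itself is a full queueing-network model; as a deliberately toy illustration of the underlying serial-stage idea (placeholder stage times, not model estimates), a response-time prediction can be composed from perceptual, cognitive, and motor stage latencies:

        # Toy serial-stage sketch, far simpler than QN-MHP; all numbers are placeholders.
        STAGE_MS = {"perceptual": 100, "cognitive": 70, "motor": 70}

        def predicted_rt(n_cognitive_cycles=1):
            """One perceptual pass, n cognitive cycles (e.g., choosing among
            alerts), and one motor execution, summed serially."""
            return (STAGE_MS["perceptual"]
                    + n_cognitive_cycles * STAGE_MS["cognitive"]
                    + STAGE_MS["motor"])

        print(predicted_rt(2))  # 310 ms for a two-cycle decision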
  • Item
    Preliminary guidelines on the sonification of visual artworks: Linking music, sonification & visual arts
    (Georgia Institute of Technology, 2019-06) Nadri, Chihab ; Anaya, Chairunisa ; Yuan, Shan ; Jeon, Myounghoon
    Sonification and data processing algorithms have advanced over the years to reach practical applications in everyday life. Image processing techniques have similarly improved over time. While a number of image sonification methods have been developed, few have explored the potential synergies of combining multiple data and image processing techniques. Additionally, little work has applied image sonification to artworks, as most research has focused on transcribing visual data for people with visual impairments. Our goal is to sonify paintings in a way that reflects their art style and genre, to improve the experience of both sighted and visually impaired individuals. To this end, we designed initial sonifications for abstractionist and realist paintings, and conducted interviews with visual and auditory experts to improve our mappings. We believe the recommendations and design directions we received will help us develop a multidimensional sonification algorithm that better transcribes visual art into appropriate music.
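
    One common image-sonification scheme scans the image left to right, mapping pixel row to pitch and brightness to loudness; the sketch below illustrates that general technique, with mapping choices that are ours rather than the paper's final algorithm:

        import numpy as np

        def sonify_column(column, f_lo=200.0, f_hi=2000.0, dur_s=0.05, fs=22050):
            """Mix one sinusoid per pixel row; higher rows get higher pitch."""
            t = np.arange(int(fs * dur_s)) / fs
            rows = len(column)
            out = np.zeros_like(t)
            for r, brightness in enumerate(column):
                f = f_lo * (f_hi / f_lo) ** (1 - r / max(rows - 1, 1))
                out += brightness * np.sin(2 * np.pi * f * t)
            return out / max(rows, 1)

        # 16 columns of 8 grayscale pixels -> a short left-to-right audio sweep
        audio = np.concatenate([sonify_column(col) for col in np.random.rand(16, 8)])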
  • Item
    Examining the learnability of auditory displays: Music, earcons, spearcons, and lyricons
    (Georgia Institute of Technology, 2018-06) Tislar, Kay ; Duford, Zackery ; Nelson, Brittany ; Peabody, Madeline ; Jeon, Myounghoon
    Auditory displays are a useful platform for conveying information to users for a variety of reasons. The present study examined four types of sounds used in auditory displays—music, earcons, spearcons, and lyricons—to determine which have the highest learnability when presented in sequences. Participants self-trained on sound meanings and were then asked to recall those meanings after listening to sequences of varying lengths. We also examined the relatedness of sounds and their attributed meanings, i.e., the intuitiveness of the sounds. The results show that participants learned and recalled lyricons and spearcons best, and that related meaning is an important contributing variable to the learnability and memorability of all sound types. This opens the door for future research on lyricons and spearcons presented in auditory streams.
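
    Of the sound types compared, spearcons are the most directly algorithmic: a speech clip time-compressed until it is no longer recognized as speech. The naive sketch below compresses by resampling, which also raises pitch as a side effect; production spearcons typically use pitch-preserving time-scale modification instead:

        import numpy as np

        def naive_spearcon(speech, ratio=0.4):
            """Compress a mono speech signal to `ratio` of its original length
            by linear-interpolation resampling (pitch shifts up as a side effect)."""
            n_out = max(int(len(speech) * ratio), 1)
            x_old = np.linspace(0.0, 1.0, num=len(speech))
            x_new = np.linspace(0.0, 1.0, num=n_out)
            return np.interp(x_new, x_old, speech)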
  • Item
    “Musical Exercise” for people with visual impairments: A preliminary study with the blindfolded
    (Georgia Institute of Technology, 2018-06) Khan, Ridwan Ahmed ; Jeon, Myounghoon ; Yoon, Tejin
    Performing independent physical exercise is critical to maintaining good health, but it is especially hard for people with visual impairments. To address this problem, we developed a Musical Exercise platform that helps people with visual impairments exercise with consistently good form. We designed six conditions: blindfolded or sighted without audio, and blindfolded or sighted with one of two types of audio feedback (continuous vs. discrete). Eighteen sighted participants performed two exercises—squat and wall sit—under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. They also show that with a specific sound design (i.e., discrete feedback), participants in the blindfolded condition exercised as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well, so care is required to refine auditory displays. The potential and limitations of Musical Exercise and future work are discussed in light of the results.
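
    The continuous-versus-discrete distinction can be made concrete with a small sketch (our illustration; the platform's actual mappings and thresholds are not given in the abstract):

        def continuous_feedback(knee_angle_deg):
            """Map knee flexion (180 = standing, 90 = deep squat) to a tone
            frequency, updated every motion-capture frame."""
            return 220.0 + (180.0 - knee_angle_deg) * 4.0  # Hz, rises as user descends

        def discrete_feedback(knee_angle_deg, target_deg=100.0):
            """Emit a single confirmation event only when target depth is reached."""
            return "play_confirmation_tone" if knee_angle_deg <= target_deg else None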
  • Item
    Multisensory Cue Congruency In The Lane Change Test
    (Georgia Institute of Technology, 2017-06) Sun, Yuanjing ; Barnes, Jaclyn ; Jeon, Myounghoon
    Drivers interact with a number of systems while driving. Taking advantage of multiple modalities can reduce the cognitive effort of information processing and facilitate multitasking. The present study investigates how and when auditory cues improve driver responses to a visual target. We manipulated three dimensions (spatial, semantic, and temporal) of verbal and nonverbal cues that interacted with visual spatial instructions. Multimodal displays were compared with unimodal (visual-only) displays to see whether they facilitate or degrade a vehicle control task. Twenty-six drivers participated in an Auditory-Spatial Stroop experiment using the Lane Change Test (LCT). Preceding auditory cues improved response time over the visual-only condition. When dimensions conflicted, spatial (location) congruency had a stronger impact than semantic (meaning) congruency. The effect on accuracy was minimal, but there was a trend toward a speed-accuracy trade-off. Results are discussed along with theoretical issues and future work.
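
    The congruency manipulation can be coded by crossing the cue's spatial channel with its semantic content against the visually instructed direction; a minimal sketch of the trial coding (our illustration, not the study's materials):

        from itertools import product

        def trial_conditions(instructed="left"):
            """Label every cue-location x cue-meaning combination relative to
            the visual lane-change instruction."""
            return [{
                "cue_location": loc,
                "cue_meaning": meaning,
                "spatially_congruent": loc == instructed,
                "semantically_congruent": meaning == instructed,
            } for loc, meaning in product(["left", "right"], repeat=2)]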
  • Item
    Participatory Design Research Methodologies: A Case Study in Dancer Sonification
    (Georgia Institute of Technology, 2017-06) Landry, Steven ; Jeon, Myounghoon
    Given that embodied interaction is widespread in Human-Computer Interaction, interest in the importance of body movements and emotions is gradually increasing. The present paper describes our process of designing and testing a dancer sonification system using a participatory design research methodology. The end goal of the dancer sonification project is for dancers to generate aesthetically pleasing music in real time from their dance gestures, instead of dancing to pre-recorded music. The generated music should reflect both the kinetic activity and the affective content of the dancer's movement. To accomplish these goals, we recruited expert dancers and musicians as domain experts in affective gesture and auditory communication. Much of the dancer sonification literature focuses exclusively on describing the final performance piece or the techniques used to process motion data into auditory control parameters. This paper instead focuses on the methods we used to identify, select, and test the most appropriate motion-to-sound mappings for a dancer sonification system.
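
    A candidate motion-to-sound mapping of the kind such participatory sessions are designed to vet might look like the following sketch; the feature names and ranges are hypothetical:

        def map_gesture(speed_mps, hand_height_norm):
            """Return synthesis parameters from two gesture features;
            hand_height_norm runs from 0 (floor) to 1 (full reach)."""
            return {
                "tempo_bpm": 60 + 80 * min(speed_mps / 2.0, 1.0),  # clamp at 2 m/s
                "pitch_midi": 48 + round(24 * hand_height_norm),   # C3..C5
            }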
  • Item
    Influences of Visual and Auditory Displays on Aimed Movements Using Air Gesture Controls
    (Georgia Institute of Technology, 2017-06) Sterkenburg, Jason ; Landry, Steven ; Jeon, Myounghoon
    With the proliferation of technologies operated via in-air hand movements (e.g., virtual/augmented reality, in-vehicle infotainment systems, and large public information displays), it remains an open question whether and how auditory displays can effectively support eyes-free aimed movements. We conducted a within-subjects study, similar to a Fitts paradigm study, in which 24 participants completed simple aimed movements to acquire targets of varying sizes and distances. Participants completed these movements under six conditions, each presenting a unique combination of visual and auditory displays. Participants were generally faster to make selections with visual displays than without them; however, selection accuracy with auditory-only displays was similar to that of displays with visual components. These results highlight the potential for auditory displays to support aimed air-gesture movements in conditions where visual displays are impractical, impossible, or unhelpful.
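
    The study follows a Fitts-style paradigm, in which movement time is classically predicted from target distance D and width W; a worked sketch using the Shannon formulation (the a and b constants below are placeholders one would fit per display condition):

        import math

        def fitts_mt(d, w, a=0.2, b=0.15):
            """MT = a + b * log2(D/W + 1); the log term is the index of difficulty."""
            return a + b * math.log2(d / w + 1)

        print(fitts_mt(d=0.30, w=0.05))  # ~0.62 s for a 30 cm reach to a 5 cm target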