Series
International Conference on Auditory Display (ICAD)

Series Type
Event Series
Publication Search Results

Now showing 1 - 10 of 461
  • Item
    Congruent audio-visual alarms for supervision tasks
    (Georgia Institute of Technology, 2019-06) Audry, Eliott ; Garcia, Jérémie
    Operators in surveillance activities face cognitive overload due to the fragmentation of information across several screens, the dynamic nature of the task, and the multiple visual or audible alarms. This paper presents our ongoing efforts to design efficient audio-visual alarms for surveillance activities such as traffic management or air traffic control. We motivate the use of congruent cross-modal animations to design alarms and describe audio-visual mappings based on this paradigm. We ran a preference experiment with 24 participants to assess our designs and found that specific polarities between visual and audio parameters were preferred. We conclude with future research directions to validate the efficiency of our alarms under different cognitive load levels.
  • Item
    Sonifigrapher: Sonified light curve synthesizer
    (Georgia Institute of Technology, 2019-06) García Riber, Adrian
    In an attempt to contribute to the constant feedback between science and music, this work describes the design strategies used in the development of the virtual synthesizer prototype called Sonifigrapher. Seeking new ways of creating experimental music through the exploration of exoplanet data sonifications, this software provides an easy-to-use graph-to-sound quadraphonic converter, designed for the sonification of light curves from NASA's publicly available exoplanet archive. Based on some features of the first analog tape recorder samplers, the prototype allows end-users to load a light curve from the archive and create controlled audio spectra using additive synthesis sonification. It is expected to be useful in creative, educational and informational contexts as part of an experimental, interdisciplinary development project for sonification tools oriented to both non-specialized and specialized audiences.
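The graph-to-sound conversion described in this abstract can be illustrated with a minimal, generic sketch of additive-synthesis sonification of a light curve. This is not the Sonifigrapher implementation; the function name, mapping, and parameters are illustrative assumptions.

```python
import numpy as np

def sonify_light_curve(flux, duration=5.0, sr=44100, base_freq=110.0, n_partials=16):
    """Map normalized light-curve flux samples onto the amplitudes of
    harmonic partials (additive synthesis). Purely illustrative; the
    actual Sonifigrapher mapping may differ."""
    flux = np.asarray(flux, dtype=float)
    # Normalize flux to [0, 1] so it can drive partial amplitudes.
    flux = (flux - flux.min()) / (np.ptp(flux) or 1.0)
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    # Envelope: interpolate the light curve over the output duration.
    env = np.interp(t, np.linspace(0.0, duration, len(flux)), flux)
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        # Higher partials fade in as brightness increases, brightening the timbre.
        out += (env ** k / k) * np.sin(2 * np.pi * base_freq * k * t)
    return out / np.max(np.abs(out))
```

A transit dip in the light curve then audibly darkens the spectrum as the higher partials drop away.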
  • Item
    An investigation into customisable automatically generated auditory route overviews for pre-navigation
    (Georgia Institute of Technology, 2019-06) Aziz, Nida ; Stockman, Tony ; Stewart, Rebecca
    While travelling to new places, maps are often used to determine the specifics of the route to follow. This helps prepare for the journey by forming a cognitive model of the route in our minds. However, the process is predominantly visual and thus inaccessible to people who are blind or visually impaired (BVI) or engaged in an activity where their eyes are otherwise occupied. This work explores effective methods of generating route overviews using audio, which can create a cognitive model similar to that formed from visual routes. The overviews thus generated can help users plan their journey according to their preferences and prepare for it in advance. This paper explores the usefulness and usability of auditory route overviews for the BVI and draws design implications for such a system following a two-stage study with audio and sound designers and users. The findings underline that auditory route overviews are an important tool that can assist BVI users in making more informed travel choices. A properly designed auditory display might integrate different sonification methods with interaction and customisation capabilities. The findings also show that such a system would benefit from the application of a participatory design approach.
  • Item
    A psychoacoustic sound design for pulse oximetry
    (Georgia Institute of Technology, 2019-06) Schwarz, Sebastian ; Ziemer, Tim
    Oxygen saturation monitoring of neonates is a demanding task, as oxygen saturation (SpO2) has to be maintained within a particular range. However, the auditory displays of conventional pulse oximeters are not suitable for informing a clinician about deviations from a target range. A psychoacoustic sonification for neonatal oxygen saturation monitoring is presented, with a continuous Shepard tone at its core. In a laboratory study, we tested whether participants (N = 6) could differentiate between seven ranges of oxygen saturation using the proposed sonification. On average, participants identified the correct SpO2 range in 84% of all cases. Moreover, detection rates differed significantly between the seven ranges and as a function of the magnitude of SpO2 change between two consecutive values. Possible explanations for these findings are discussed and implications for further improvements of the presented sonification are proposed.
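A continuous Shepard tone, the core of the sonification above, can be sketched generically: octave-spaced partials under a fixed log-frequency loudness envelope, so that shifting all partials within the octave produces a seemingly endless pitch change. This is a textbook illustration, not the paper's clinical mapping; all parameter values are assumptions.

```python
import numpy as np

def shepard_tone(shift, duration=1.0, sr=44100, n_octaves=8, f0=27.5):
    """Shepard tone: octave-spaced partials under a fixed Gaussian
    envelope over log-frequency. `shift` in [0, 1) moves every partial
    up within its octave; sweeping `shift` cyclically yields an
    endlessly rising (or falling) pitch. Generic sketch only."""
    t = np.arange(int(sr * duration)) / sr
    center = np.log2(f0) + n_octaves / 2.0  # envelope center in log2(Hz)
    out = np.zeros_like(t)
    for k in range(n_octaves):
        f = f0 * 2.0 ** (k + shift)
        # The Gaussian envelope keeps the lowest and highest partials
        # near-inaudible, hiding the octave wrap-around.
        amp = np.exp(-0.5 * ((np.log2(f) - center) / 1.5) ** 2)
        out += amp * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))
```

Mapping SpO2 deviation to `shift` would then let the tone drift perceptually up or down without ever leaving a bounded spectral region.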
  • Item
    Designing auditory color space for color sonification systems
    (Georgia Institute of Technology, 2019-06) Osinski, Dominik ; Bizon, Patrycja ; Midtfjord, Helene ; Wierzchon, Michal ; Hjelme, Dag Roar
    The design of color sonification systems can contribute to fields ranging from rehabilitation of the visually impaired and color perception research, through multisensory art experience, to consciousness studies. The design process itself requires understanding and integrating knowledge from many difficult and inherently different branches of science, and the resulting sonification method will be highly dependent on the purpose of the system. We present work in progress on the design and experimental verification of a color sonification method that will be implemented in Colorophone, a wearable assistive device for the visually impaired that enables perception of color information through sound. Although our system shows promising results in color and object recognition, we would like to enhance the existing color sonification method by designing a framework for the experimental verification of our color sonification algorithm. The goal of this paper is therefore to briefly describe our way of thinking in order to provide a basis for discussion.
  • Item
    Hearing artificial intelligence: Sonification guidelines & results from a case-study in melanoma diagnosis
    (Georgia Institute of Technology, 2019-06) Winters, R. Michael ; Kalra, Ankur ; Walker, Bruce N.
    The applications of artificial intelligence are becoming more and more prevalent in everyday life. Although many AI systems can operate autonomously, their goal is often assisting humans. Knowledge from the AI system must somehow be perceptualized. Towards this goal, we present a case-study in the application of data-driven non-speech audio for melanoma diagnosis. A physician photographs a suspicious skin lesion, triggering a sonification of the system's penultimate classification layer. We iterated on sonification strategies and coalesced around designs representing three general approaches. We tested each in a group of novice listeners (n=7) for mean sensitivity, specificity, and learning effects. The mean accuracy was greatest for a simple model, but a trained dermatologist preferred a perceptually compressed model of the full classification layer. We discovered that training the AI on sonifications from this model improved accuracy further. We argue for perceptual compression as a general technique and for a comprehensible number of simultaneous streams.
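The idea of perceptually compressing a classification layer into a comprehensible number of simultaneous streams can be sketched in a generic way. The grouping and mapping below are hypothetical assumptions for illustration, not the design tested in the paper.

```python
import numpy as np

def sonify_activations(acts, n_streams=3, duration=1.0, sr=44100):
    """Perceptually compress a network's penultimate-layer activations
    into a few simultaneous streams: activations are grouped into
    `n_streams` bins, and each bin's mean magnitude drives the loudness
    of one fixed-pitch tone. Hypothetical sketch, not the paper's design."""
    acts = np.asarray(acts, dtype=float)
    acts = acts / (np.max(np.abs(acts)) or 1.0)
    groups = np.array_split(acts, n_streams)
    freqs = 220.0 * 2.0 ** np.arange(n_streams)  # streams one octave apart
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros_like(t)
    for g, f in zip(groups, freqs):
        # Each stream's loudness reflects its group's mean activation.
        out += float(np.mean(np.abs(g))) * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Fewer streams trade fidelity to the full layer for listener comprehensibility, mirroring the trade-off the abstract reports between the simple model and the compressed full-layer model.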
  • Item
    Traces of modal synergy: studying interactive musical sonification of images in general-audience use
    (Georgia Institute of Technology, 2019-06) Rönnberg, Niklas ; Lowgren, Jonas
    Photone is an interactive installation combining color images with musical sonification. The musical expression is generated from the syntactic (as opposed to semantic) features of an image as it is explored with the user's pointing device, intending to catalyze a holistic user experience we refer to as modal synergy, where visual and auditory modalities multiply rather than add. We collected and analyzed two months' worth of data from visitors' interactions with Photone in a public exhibition at a science center. Our results show that a small proportion of visitors engaged in sustained interaction with Photone, as indicated by session times. Among the most deeply engaged visitors, a majority of the interaction was devoted to visually salient objects, i.e., semantic features of the images. However, the data also contain instances of interactive behavior that are best explained by exploration of the syntactic features of an image, and thus may suggest the emergence of modal synergy.
  • Item
    Direct segmented sonification of characteristic features of the data domain
    (Georgia Institute of Technology, 2019-06) Vickers, Paul ; Höldrich, Robert
    Like audification, auditory graphs maintain the temporal relationships of data while using parameter mappings to represent the ordinate values. Such direct approaches have the advantage of presenting the data stream 'as is', without the imposed interpretations or accentuation of particular features found in indirect approaches. However, datasets can often be subdivided into short, non-overlapping, variable-length segments that each encapsulate a discrete unit of domain-specific significant information, and current direct approaches cannot represent these. We present Direct Segmented Sonification (DSSon) for highlighting the segments' data distributions as individual sonic events. Using domain knowledge, DSSon presents segments as discrete auditory gestalts while retaining the overall temporal regime and relationships of the dataset. Because the method's structure is decoupled from the sound stream's formation, playback speed is independent of the individual sonic event durations, offering highly flexible time compression/stretching to allow zooming into or out of the data. DSSon displays high directness, letting the data 'speak' for themselves.
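The decoupling of playback speed from event duration that this abstract describes can be sketched minimally: each segment becomes a short pitch-contour event whose onset scales with a speed factor while its duration stays fixed. Function names and mappings below are illustrative assumptions, not the published DSSon formulation.

```python
import numpy as np

def dsson_sketch(segments, onsets, speed=1.0, sr=44100, event_dur=0.25):
    """Render each data segment as a discrete sonic event (a pitch sweep
    tracing the segment's values). Event onsets scale with `speed`, but
    each event keeps its own duration, so time compression does not
    shorten the individual auditory gestalts. Illustrative only."""
    total = max(onsets) / speed + event_dur
    out = np.zeros(int(sr * total) + 1)
    n = int(sr * event_dur)
    for seg, onset in zip(segments, onsets):
        seg = np.asarray(seg, dtype=float)
        # Map the segment's values to a 200-800 Hz frequency contour.
        norm = (seg - seg.min()) / (np.ptp(seg) or 1.0)
        freq = 200.0 + 600.0 * np.interp(
            np.linspace(0, 1, n), np.linspace(0, 1, len(seg)), norm)
        phase = 2 * np.pi * np.cumsum(freq) / sr
        event = np.sin(phase) * np.hanning(n)  # smooth onset/offset window
        start = int(sr * onset / speed)
        out[start:start + n] += event
    return out
```

Doubling `speed` halves the gaps between events while leaving every gestalt intact, which is the zooming behavior the abstract claims.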
  • Item
    Disclosing cyber attacks on water distribution systems: an experimental approach to the sonification of threats and anomalous data
    (Georgia Institute of Technology, 2019-06) Lenzi, Sara ; Terenghi, Ginevra ; Taormina, Riccardo ; Galelli, Stefano ; Ciuccarelli, Paolo
    Water distribution systems are undergoing a process of intensive digitalization, adopting networked devices for monitoring and control. While this transition improves efficiency and reliability, these infrastructures are increasingly exposed to cyber-attacks. Cyber-attacks engender anomalous system behaviors which can be detected by data-driven algorithms that monitor sensor readings to disclose the presence of potential threats. At the same time, the use of sonification in real-time process monitoring has grown in importance as a valid alternative that avoids information overload and allows peripheral monitoring. Our project aims to design a sonification system allowing human operators to make better decisions about anomalous behavior while occupied with other (mainly visual) tasks. Using a state-of-the-art detection algorithm and data sets from the Battle of the Attack Detection Algorithms, a series of sonification prototypes were designed and tested in the real world. This paper illustrates the design process and the experimental data collected, as well as results and plans for future steps.
  • Item
    Audio guidance for optimal placement of an auditory brainstem implant with magnetic navigation and maximum clinical application accuracy
    (Georgia Institute of Technology, 2019-06) Miljic, Ognjen ; Bardosi, Zoltan ; Freysinger, Wolfgang
    For patients with an ineffective auditory nerve and complete hearing loss, an Auditory Brainstem Implant (ABI) provides a diversity of hearing sensations to help with sound awareness and communication. At present, during the surgical intervention, surgeons use preoperative patient images to determine the optimal position of an ABI on the cochlear nucleus of the brainstem. When found, the optimal position is marked and mentally mapped by the surgeon; next, the surgeon tries to locate the optimal position in the patient's head again and places the ABI. The aim of this project is to provide the surgeon with maximum clinical application accuracy guidance for storing the optimal position of the implant, and to provide intuitive audio guidance for positioning the implant at the stored optimal position. By using three audio methods in combination with visual information in Image-Guided Surgery (IGS), the surgeon should spend less time looking at the screen and more time focused on the patient.