Organizational Unit:
Sonification Lab

Publication Search Results

Now showing 1 - 8 of 8
  • Item
    Cross-modal collaborative interaction between visually-impaired and sighted users in the workplace
    (Georgia Institute of Technology, 2012-06) Metatla, Oussama ; Bryan-Kinns, Nick ; Stockman, Tony ; Martin, Fiore
    We present a detailed description of the design and integration of auditory and haptic displays in a collaborative diagram editing tool to allow simultaneous visual and non-visual interaction. The tool was deployed in various workplaces where visually-impaired and sighted coworkers access and edit diagrams as part of their daily jobs. We use our initial observations and analyses of the recorded interactions to outline preliminary design recommendations for supporting cross-modal collaboration in the workplace.
  • Item
    Perceptual effects of auditory information about own and other movements
    (Georgia Institute of Technology, 2012-06) Schmitz, Gerd ; Effenberg, Alfred O.
    In sport, accurate predictions of other persons’ movements are essential. Previous studies have shown that predictions can be enhanced by mapping movements onto sound (sonification) and providing audiovisual feedback [1]. The present study investigated behavioral mechanisms of movement sonification and examined whether the effects of one’s own movements and those of other persons can be predicted just by listening to them. Eight athletes heard sonifications of an indoor rower and quantified the resulting velocities of a virtual boat. Although boat velocity was not mapped onto sound directly, regression analysis showed that it explained subjects’ quantifications (R² = 0.80) significantly better than the directly sonified amplitude and force parameters did; a hypothetical sketch of this kind of R² comparison appears after this list. Perception of boat velocity might thus have emerged from those sonifications. Predictions of the effects of unknown movements were above chance level and as good as predictions of one’s own movements. Furthermore, athletes were able to identify their own technique among others (d’ = 0.47 ± 0.43). The results confirm large perceptual effects of auditory feedback and, most importantly, suggest that movement sonification can address central motor representations through listening alone. Sonification might therefore support not only prediction of, but also synchronization with, other persons’ movements.
  • Item
    CircoSonic: a sonification of Circos, a circular graph of pairwise table data
    (Georgia Institute of Technology, 2012-06) Nguyen, Vinh Xuan
    This paper presents, applies and evaluates “CircoSonic,” an interactive sonification of “Circos.” It describes how a gaming engine was modified to replicate Circos, a circular graph for comparing pairwise relationships in a 2D data table, with the added capability of sonification through interaction. The developed prototype is evaluated using an insight-based methodology and a static dataset of historic, current and projected water availability in the Murray-Darling Basin. A muted version of CircoSonic is used in the evaluation to establish a baseline visualization-to-visualization comparison, from which a projected comparison between visualization alone and visualization with sonification can be made. In general, Circos was found to outperform CircoSonic, except in generating complex insights that drew on contextual knowledge. It is concluded that a move from static to dynamic data may yield different results and that further investigation into the sonification of novel visualizations is needed. (A sketch of one way to map a table cell to sound appears after this list.)
  • Item
    A Modular Computer Vision Sonification Model For The Visually Impaired
    (Georgia Institute of Technology, 2012-06) Banf, Michael ; Blanz, Volker
    This paper presents a Modular Computer Vision Sonification Model, a general framework for the acquisition, exploration and sonification of visual information to support visually impaired people. The model exploits techniques from Computer Vision and aims to convey as much information as possible about the image to the user, including color, edges and what we refer to as Orientation maps and Micro-Textures. We deliberately focus on low-level features to provide a very general image analysis tool. Our sonification approach relies on MIDI, using “real-world” rather than synthetic instruments; a sketch of a simple hue-to-MIDI mapping appears after this list. The goal is to provide direct perceptual access to images or environments, actively and in real time. Our system is already in use, at an experimental stage, at a local residential school, helping congenitally blind children develop various cognitive abilities such as geometric understanding and spatial sense, as well as offering an intuitive approach to colors and textures.
  • Item
    Acoustic interface for tremor analysis
    (Georgia Institute of Technology, 2012-06) Pirrò, David ; Wankhammer, Alexander ; Schwingenschuh, Petra ; Höldrich, Robert ; Sontacchi, Alois
    In this paper we introduce new methods for real-time acoustic tremor diagnosis. We outline the problems of tremor diagnosis in the clinical context and discuss how sonification can complement and expand the existing tools neurologists have at their disposal. Based on three preliminary sonification experiments on recorded tremor movement data, we show how the temporal as well as spectral characteristics of a tremor can be made audible in real time; a sketch of one such spectral mapping appears after this list. Our first observations indicate that differences among tremor types can be made recognizable via sonification. We therefore suggest that the proposed methods could allow for the formulation of more confident diagnoses. At the end of the paper, we also briefly outline the central topics of future research.
  • Item
    Auditory Support for Situation Awareness in Video Surveillance
    (Georgia Institute of Technology, 2012-06) Höferlin, Benjamin ; Höferlin, Markus ; Goloubets, Boris ; Heidemann, Gunther ; Weiskopf, Daniel
    We introduce a parameter-mapping sonification to support the situational awareness of surveillance operators during their task of monitoring video data. The presented auditory display produces a continuous ambient soundscape reflecting the changes in the video data. For this purpose, we use low-level computer vision techniques, such as optical-flow extraction and background subtraction, and rely on the capabilities of the human auditory system for high-level recognition. Special focus is put on the mapping between video features and sound parameters. We optimize this mapping to provide good interpretability of the sound pattern as well as an aesthetic, non-obtrusive sonification: the precision of the conveyed information, the psychoacoustic capabilities of the auditory system, and aesthetic guidelines of sound design are balanced by tuning the mapping parameters with gradient descent (a sketch of such a balancing step appears after this list). A user study evaluates the capabilities and limitations of the presented sonification, as well as its applicability to supporting situational awareness in surveillance scenarios.
  • Item
    Improving the efficacy of auditory alarms in medical devices by exploring the effect of amplitude envelope on learning and retention
    (Georgia Institute of Technology, 2012-06) Gillard, Jessica ; Schutz, Michael
    Despite strong interest in designing auditory alarms for medical devices, learning and retention of these alarms remain problematic. Based on our previous work exploring the learning and retention of associations between sounds and objects, we suspect that some of the problems might in fact stem from the types of sounds used in medical auditory alarms. Several of our previous studies demonstrate improved memory for associations when using sounds with “percussive” (i.e., decaying) envelopes rather than “flat” (i.e., artificial-sounding) envelopes, the standard structure used in many current alarms; both envelope types are illustrated in a sketch after this list. Here, we attempt to extend our previous findings on the effects of temporal structure on learning and memory. Unfortunately, we did not find evidence of any such benefit in the current study. However, several interesting patterns are emerging with respect to “confusions”, the occasions on which one alarm was mistaken for another. We believe this paradigm and way of thinking about alarms (i.e., attention to temporal structure) could provide insight into ways to improve auditory alarms, thereby preventing injuries and saving lives in hospitals. We welcome the chance to gather feedback on our approach and on why our current attempts, which we believe rest on a solid theoretical foundation, have not yet led to the hoped-for improvements.
  • Item
    Multi-dimensional synchronization for rhythmic sonification
    (Georgia Institute of Technology, 2012-06) Boyd, Jeffrey E. ; Godbout, Andrew
    Human locomotion is fundamentally periodic, so when sonifying gait it is desirable to exploit this periodicity to produce rhythmic sonification synchronized to the motion. To achieve this, some mechanism is required to synchronize an oscillator to the period of the motion. This paper presents a method for synchronizing to multidimensional signals like those produced by a motion capture system. Using a subset of the joint-angle signals produced by motion capture, the method estimates the phase of a periodic, multidimensional model that matches the data observed from a moving subject, using an optimization algorithm applied to a suitable objective function; a sketch of this phase-estimation idea appears after this list. We demonstrate the synchronization with data from a publicly available motion capture database, producing sonifications of drum beats synchronized to the footfalls of subjects. The method is robust and shares features with the phase-locked loops used to synchronize one-dimensional sinusoidal signals. We foresee applications to sonification for athletics and to the clinical treatment of gait disorders.
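
Illustrative Code Sketches

The Python sketches below are hypothetical illustrations of techniques described in the abstracts above. None is taken from the papers themselves; all data, parameter values, function names, and mappings are assumptions made for illustration only.

For “Perceptual effects of auditory information about own and other movements”: a minimal sketch of the kind of R² comparison the abstract reports, assuming synthetic stand-ins for the rowing measurements and listeners’ velocity quantifications.

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """R^2 of a simple least-squares regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
n = 64
boat_velocity = rng.uniform(2.0, 6.0, n)                  # not sonified directly
force = boat_velocity ** 2 + rng.normal(0.0, 8.0, n)      # directly sonified parameter (noisy)
quantification = boat_velocity + rng.normal(0.0, 0.4, n)  # listeners track velocity closely

print("R^2 for velocity:", round(r_squared(boat_velocity, quantification), 2))
print("R^2 for force:   ", round(r_squared(force, quantification), 2))
```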
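For “CircoSonic”: a minimal sketch, not the authors’ engine, of one way an interactive sonification could map a selected cell of a pairwise 2D table to pitch and stereo pan. The pitch range and pan rule are assumptions.

```python
import numpy as np

# Stand-in pairwise table (e.g., relationships among three regions).
table = np.array([[0.0, 0.4, 0.9],
                  [0.4, 0.0, 0.2],
                  [0.9, 0.2, 0.0]])

def sonify_cell(row: int, col: int, low_hz: float = 220.0, high_hz: float = 880.0):
    """Map the cell value to pitch and the column position to stereo pan."""
    norm = (table[row, col] - table.min()) / (table.max() - table.min())
    freq = low_hz * (high_hz / low_hz) ** norm  # exponential (musical) pitch scale
    pan = col / (table.shape[1] - 1)            # 0.0 = hard left, 1.0 = hard right
    return freq, pan

print(sonify_cell(0, 2))  # largest value -> highest pitch, panned right
```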
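For “A Modular Computer Vision Sonification Model”: a minimal sketch in the spirit of the paper’s low-level focus, not the authors’ system, mapping the mean hue of image patches along a scan line to MIDI note numbers. The hue-to-pitch scale and patch size are assumptions.

```python
import colorsys
import numpy as np

def hue_to_midi_note(patch: np.ndarray, low: int = 48, high: int = 84) -> int:
    """Map the mean hue of an RGB patch (values in [0, 1]) to a MIDI note number."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return int(round(low + hue * (high - low)))

# Synthetic 8x8 test image: left half red, right half blue.
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 1.0
img[:, 4:, 2] = 1.0

for x in range(0, 8, 4):  # two patches along a horizontal scan line
    print(f"patch at x={x}: MIDI note {hue_to_midi_note(img[:, x:x + 4, :])}")
```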
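For “Acoustic interface for tremor analysis”: a minimal sketch of one way to make a tremor’s spectral character audible, not the authors’ method: estimate the dominant frequency of a recorded movement signal and transpose it into the audible range. The sampling rate, signal, and x400 transposition factor are assumptions.

```python
import numpy as np

fs = 200.0                        # assumed movement-sensor sampling rate, Hz
t = np.arange(0.0, 5.0, 1.0 / fs)
# Synthetic stand-in for a recorded tremor: ~5 Hz oscillation plus noise.
tremor = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(tremor))
freqs = np.fft.rfftfreq(tremor.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(f"dominant tremor frequency: {dominant:.2f} Hz")
print(f"sonified pitch (x400):     {dominant * 400:.0f} Hz")
```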
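For “Auditory Support for Situation Awareness in Video Surveillance”: a minimal sketch of balancing mapping parameters with gradient descent. The two cost terms are toy stand-ins, not the paper’s actual precision, psychoacoustic, and aesthetic criteria, and the two mappings are assumptions.

```python
import numpy as np

def cost(w: np.ndarray) -> float:
    """Toy trade-off: weak mappings lose information, strong ones obtrude."""
    precision_loss = (1.0 - w).sum() ** 2
    obtrusiveness = 0.5 * np.sum(w ** 2)
    return precision_loss + obtrusiveness

# Weights for two assumed mappings: optical flow -> pitch, motion area -> loudness.
w = np.array([0.1, 0.1])
lr, eps = 0.05, 1e-6
for _ in range(500):  # finite-difference gradient descent
    grad = np.array([(cost(w + eps * np.eye(2)[i]) - cost(w)) / eps
                     for i in range(2)])
    w -= lr * grad

print("balanced mapping weights:", np.round(w, 3))  # converges near [0.8, 0.8]
```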
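For “Improving the efficacy of auditory alarms in medical devices...”: a minimal sketch of the two envelope types the abstract contrasts: a “flat” tone with constant amplitude and a “percussive” tone with an exponential decay. The carrier frequency, duration, and decay rate are assumptions.

```python
import numpy as np

fs = 44100                         # audio sampling rate, Hz
t = np.arange(0.0, 0.5, 1.0 / fs)  # 500 ms alarm tone
carrier = np.sin(2 * np.pi * 440.0 * t)

flat_env = np.ones_like(t)         # constant amplitude ("artificial" sounding)
percussive_env = np.exp(-6.0 * t)  # decaying, like a struck physical object

flat_tone = carrier * flat_env
percussive_tone = carrier * percussive_env
print("final amplitude, flat:      ", flat_env[-1])
print("final amplitude, percussive:", round(float(percussive_env[-1]), 3))
```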
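For “Multi-dimensional synchronization for rhythmic sonification”: a minimal sketch of the core idea, not the authors’ algorithm: given a periodic multidimensional model of joint angles, estimate the phase that best matches an observed sample by minimizing a squared-error objective. The two-joint sinusoidal model and the grid-search optimizer are assumptions.

```python
import numpy as np

def model(phase: float) -> np.ndarray:
    """Toy periodic gait model: two joint angles with a fixed phase offset."""
    return np.array([np.sin(phase), np.sin(phase + np.pi / 3)])

def estimate_phase(observation: np.ndarray, n_grid: int = 2048) -> float:
    """Grid-search the phase that minimizes the squared model-data error."""
    phases = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    errors = [np.sum((model(p) - observation) ** 2) for p in phases]
    return float(phases[int(np.argmin(errors))])

true_phase = 1.3
obs = model(true_phase) + np.random.default_rng(2).normal(0.0, 0.05, 2)
print(f"estimated phase: {estimate_phase(obs):.3f} rad (true: {true_phase} rad)")
```

An oscillator driving a drum sample could then be advanced to the estimated phase on each frame, yielding beats locked to the gait cycle.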