Organizational Unit:
Sonification Lab


Publication Search Results

Now showing 1 - 10 of 17
  • Item
    Evaluation of a non-visual auditory choropleth and travel map viewer
    (Georgia Institute of Technology, 2022-06) Biggs, Brandon ; Toth, Christopher ; Stockman, Tony ; Coughlan, James M. ; Walker, Bruce N.
    The auditory virtual reality interface of Audiom, a web-based map viewer, was evaluated by thirteen blind participants. In Audiom, the user is an avatar who navigates, using the arrow keys, through geographic data, as if playing a first-person, egocentric game. The research questions were: What will make blind users want to use Audiom maps? And can participants demonstrate basic acquisition of spatial knowledge after viewing an auditory map? A dynamic choropleth map of state-level US COVID-19 data and a detailed OpenStreetMap-powered travel map were evaluated. All participants agreed that they wanted more maps of all kinds, in particular county-level COVID data, and that they would use Audiom once some bugs were fixed and their few recommended features were added. Everyone wanted to see Audiom embedded in their existing travel and mapping applications. All participants were able to answer a question evaluating spatial knowledge. Participants also agreed this spatial information was not available in existing applications.
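As an illustrative sketch only (this is not Audiom's actual code, and the class and value mapping here are hypothetical), egocentric arrow-key navigation over a choropleth-style data grid, with each cell's value mapped to a tone pitch, might look like this:

```python
# Hypothetical sketch: an avatar moves through a 2-D grid of numeric
# values (e.g. case rates) with arrow keys, and the current cell's
# value is mapped linearly onto a pitch range for auditory display.

class GridExplorer:
    """Egocentric navigation over a choropleth-style value grid."""

    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, grid, start=(0, 0)):
        self.grid = grid                  # 2-D list of numeric values
        self.row, self.col = start

    def move(self, direction):
        """Step one cell; ignore moves that would leave the map."""
        dr, dc = self.MOVES[direction]
        r, c = self.row + dr, self.col + dc
        if 0 <= r < len(self.grid) and 0 <= c < len(self.grid[0]):
            self.row, self.col = r, c
        return self.sonify()

    def sonify(self, low_hz=220.0, high_hz=880.0):
        """Map the current cell's value linearly onto a pitch in Hz."""
        values = [v for row in self.grid for v in row]
        lo, hi = min(values), max(values)
        v = self.grid[self.row][self.col]
        frac = 0.0 if hi == lo else (v - lo) / (hi - lo)
        return low_hz + frac * (high_hz - low_hz)
```

For example, on the grid `[[10, 50], [90, 2]]`, moving right from the origin lands on the value 50 and yields a mid-range pitch; a real system would synthesize the tone rather than return a frequency.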
  • Item
    An investigation into customisable automatically generated auditory route overviews for pre-navigation
    (Georgia Institute of Technology, 2019-06) Aziz, Nida ; Stockman, Tony ; Stewart, Rebecca
    While travelling to new places, maps are often used to determine the specifics of the route to follow. This helps prepare for the journey by forming a cognitive model of the route in our minds. However, the process is predominantly visual and thus inaccessible to people who are blind or visually impaired (BVI) or doing an activity where their eyes are otherwise engaged. This work explores effective methods of generating route overviews using audio that can create a cognitive model similar to that formed from visual routes. The overviews thus generated can help users plan their journey according to their preferences and prepare for it in advance. This paper explores the usefulness and usability of auditory route overviews for BVI users and draws design implications for such a system following a two-stage study with audio and sound designers and users. The findings underline that auditory route overviews are an important tool that can help BVI users make more informed travel choices. A properly designed auditory display might integrate different sonification methods with interaction and customisation capabilities. Findings also show that such a system would benefit from the application of a participatory design approach.
  • Item
    SoundTrAD, a method and tool for prototyping auditory displays: Can we apply it to an autonomous driving scenario?
    (Georgia Institute of Technology, 2018-06) MacDonald, Doon ; Stockman, Tony
    This paper presents SoundTrAD, a method and tool for designing auditory displays for the user interface. SoundTrAD brings together ideas from user interface design and soundtrack composition and supports novice auditory display designers in building an auditory user interface. The paper argues for the need for such a method before going on to describe the fundamental structure of the method and the construction of the supporting tools. The second half of the paper applies SoundTrAD to an autonomous driving scenario and demonstrates its use in prototyping auditory displays for a wide range of scenarios.
  • Item
    Accessible Spectrum Analyser
    (Georgia Institute of Technology, 2016-07) Martin, Fiore ; Metatla, Oussama ; Bryan-Kinns, Nick ; Stockman, Tony
    This paper presents the Accessible Spectrum Analyser (ASA) developed as part of the DePic project (Design Patterns for Inclusive collaboration) at Queen Mary University of London. The ASA uses sonification to provide an accessible representation of frequency spectra to visually impaired audio engineers. The software is free and open source and is distributed as a VST plug-in under OSX and Windows. The aim of reporting this work at the ICAD 2016 conference is to solicit feedback about the design of the present tool and its more generalized counterpart, as well as to invite ideas for other possible applications where it is thought that auditory spectral analysis may be useful, for example in situations where line of sight is not always possible.
  • Item
    Sonifications for digital audio workstations: Reflections on a participatory design approach
    (Georgia Institute of Technology, 2015-07) Metatla, Oussama ; Bryan-Kinns, Nick ; Stockman, Tony ; Martin, Fiore
    Methods to engage users in the design process rely predominantly on visual techniques, such as paper prototypes, to facilitate the expression and communication of design ideas. The visual nature of these tools makes them inaccessible to people living with visual impairments. Additionally, while using visual means to express ideas for designing graphical interfaces is appropriate, it is harder to use them to articulate the design of non-visual displays. We applied a user-centred approach that incorporates various participatory design techniques to help make the design process accessible to visually impaired musicians and audio production specialists to examine how auditory displays, sonification and haptic interaction can support some of their activities. We describe this approach together with the resulting designs, and reflect on the benefits and challenges that we encountered when applying these techniques in the context of designing sonifications to support audio editing.
  • Item
    The Development Of A Method For Designing Auditory Displays Based On Soundtrack Composition
    (Georgia Institute of Technology, 2013-07) MacDonald, Doon ; Stockman, Tony
    This paper details work toward the design of a method for creating auditory displays for the human-computer interface, based on soundtrack composition. We begin with the benefits of this approach before discussing methods for auditory display design and the need for a unification of different design techniques. We then outline our ongoing investigation into the tools and techniques employed within the working practices of sound designers and soundtrack composers. Following this, we report our observations of the main priorities that influence how composers create soundtracks and propose ways in which our method may support these. We argue that basing the first steps of the method on a ‘cue sheet’ could enable designers to identify actions, objects and events within an HCI scenario whilst taking into account the user and the context of use. This is followed by some initial observations from a preliminary study into whether a participant can successfully use this cue sheet methodology. We conclude by identifying that certain elements of the methodology need to be changed: further investigation and subsequent design need to be carried out into ways participants can successfully comprehend and systematically use the cue sheet to identify seen and unseen events, actions and objects within the human-computer interface. Additionally, we need to investigate how best to categorize and map these elements to sound. We close with our plans for future work.
  • Item
    Cross-modal collaborative interaction between visually-impaired and sighted users in the workplace
    (Georgia Institute of Technology, 2012-06) Metatla, Oussama ; Bryan-Kinns, Nick ; Stockman, Tony ; Martin, Fiore
    We present a detailed description of the design and integration of auditory and haptic displays in a collaborative diagram editing tool to allow simultaneous visual and non-visual interaction. The tool was deployed in various workplaces where visually-impaired and sighted coworkers access and edit diagrams as part of their daily jobs. We use our initial observations and analyses of the recorded interactions to outline preliminary design recommendations for supporting cross-modal collaboration in the workplace.
  • Item
    Auditory Cues for Gestural Control of Multi-Track Audio
    (Georgia Institute of Technology, 2011-06) Morrell, Martin J ; Reiss, Joshua D ; Stockman, Tony
    This paper presents a study undertaken to evaluate user ratings of auditory feedback for sound source selection within a multi-track auditory environment where sound placement is controlled by a gesture control system. Selection confirmation is presented to the participants via changes to the audio mixture over the stereo loudspeakers or via feedback over a single-ear Bluetooth headset. Overall, five different methods are compared and the results of our study are presented. A second task in the study evaluated a pre-selection method to help find sound sources before selection: the participant altered a width control of the pre-selection that was heard in the Bluetooth headset. Results indicate a specific value, irrespective of genre, to which the pre-selection should be set, whilst the selection confirmation can be perceived to be dependent on genre and instrumentation.
  • Item
    On the Road to Design: Developing a Sonified Route Navigator for Cyclists
    (Georgia Institute of Technology, 2011-06) Moulster, Andrew ; Stockman, Tony
    This paper describes the development of a system that uses sonification to assist cyclists’ navigation. The focus of the paper is on the design process, the decisions made and the methods used to make design choices. An important aspect was the use of listening tests undertaken by potential users of the system while cycling, in order to obtain data used to underpin key design decisions regarding the timing and representation of route elements. The architecture and usage of the developed system are described, as well as details of on- and off-road evaluations of the system.
  • Item
    Development and Evaluation of a Cross-Modal XML Schema Browser
    (Georgia Institute of Technology, 2010-06) Stockman, Tony ; Al-Thanki, Dena
    We describe the development and evaluation of a cross-modal XML (Extensible Markup Language) schema browser. The aim of developing the system is to investigate cross-modal collaboration between users. The browser provides an audio representation of XML schema documents in a way that preserves the structure of documents and supports multi-level navigation. The project has two principal objectives: 1) to overcome the difficulties faced by visually impaired users and sighted people using small-screen devices when browsing XML schema files; 2) to explore usability issues when users collaborate using the auditory and visual interfaces of the system. The paper also examines differences between sighted and visually impaired users of the developed auditory interface. The overall results of the usability evaluations demonstrate that both sighted and visually impaired users were able to perform tasks using the audio modality efficiently and accurately, and the same was true of sighted users’ interactions with the GUI. The use of the system to support collaboration where each user employs a different mode (audio or visual) of the system clearly demonstrated that cross-modal collaboration is effectively supported, enabling users to collaborate and successfully complete a complex shared task.