Person:
Walker, Bruce N.


Publication Search Results

Now showing 1 - 10 of 23
  • Item
    Hearing artificial intelligence: Sonification guidelines & results from a case-study in melanoma diagnosis
    (Georgia Institute of Technology, 2019-06) Winters, R. Michael ; Kalra, Ankur ; Walker, Bruce N.
    The applications of artificial intelligence are becoming more and more prevalent in everyday life. Although many AI systems can operate autonomously, their goal is often assisting humans. Knowledge from the AI system must somehow be perceptualized. Towards this goal, we present a case-study in the application of data-driven non-speech audio for melanoma diagnosis. A physician photographs a suspicious skin lesion, triggering a sonification of the system's penultimate classification layer. We iterated on sonification strategies and coalesced around designs representing three general approaches. We tested each in a group of novice listeners (n=7) for mean sensitivity, specificity, and learning effects. The mean accuracy was greatest for a simple model, but a trained dermatologist preferred a perceptually compressed model of the full classification layer. We discovered that training the AI on sonifications from this model improved accuracy further. We argue for perceptual compression as a general technique and for a comprehensible number of simultaneous streams.
  • Item
    Auditory displays to facilitate object targeting in 3D space
    (Georgia Institute of Technology, 2019-06) May, Keenan R. ; Sobel, Briana ; Wilson, Jeff ; Walker, Bruce N.
    In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to display information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room, in either low or no visibility conditions: (1) a one-time height-denoting "area cue," (2) ongoing "proximity feedback," or (3) both. All three led to improvements in performance and subjective workload compared to no sound. Displays (2) and (3) led to the largest improvements. This pattern was smaller, but still present, when visibility was low, compared to no visibility. These results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here.
  • Item
    Mixed speech and non-speech auditory displays: impacts of design, learning, and individual differences in musical engagement
    (Georgia Institute of Technology, 2019-06) Li, Grace ; Walker, Bruce N.
    Information presented in auditory displays is often spread across multiple streams to make it easier for listeners to distinguish between different sounds and changes in multiple cues. Because auditory attention is a limited resource, and listeners are typically less practiced with it than with visual perception, studies have tried to determine how many auditory streams listeners can monitor without compromising performance in using the displays. This study investigates the differences among non-speech auditory displays, speech auditory displays, and mixed displays, as well as the effects of display design and individual differences on performance and learnability. Results showed that practice with feedback significantly improves performance regardless of the display design, and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays. Findings of this study contribute to understanding how musical experience can be linked to the usability of auditory displays, as well as the capability of humans to learn to use their auditory senses to overcome visual workload and receive important information.
  • Item
    Soccer sonification: Enhancing viewer experience
    (Georgia Institute of Technology, 2019-06) Savery, Richard ; Ayyagari, Madhukesh ; May, Keenan ; Walker, Bruce N.
    We present multiple approaches to soccer sonification, focusing on enhancing the experience for a general audience. For this work, we developed our own soccer data set through computer vision analysis of footage from a tactical overhead camera. This data set included X and Y coordinates for the ball and players throughout, as well as passes, steals, and goals. After a divergent creation process, we developed four main methods of sports sonification for entertainment. For the Tempo Variation and Pitch Variation methods, tempo or pitch is operationalized to convey ball and player movement data. The Key Moments method features only pass, steal, and goal data, while the Musical Moments method takes existing music and attempts to align the track with important data points. Evaluation was done using a combination of qualitative focus groups and quantitative surveys, with 36 participants completing hour-long sessions. Results indicated an overall preference for the Pitch Variation and Musical Moments methods, and revealed a robust trade-off between usability and enjoyability.
  • Item
    Auditory and Head-Up Displays for Eco-Driving Interfaces
    (Georgia Institute of Technology, 2017-06) Shortridge, Woodbury ; Gable, Thomas M. ; Noah, Brittany E. ; Walker, Bruce N.
    Eco-driving describes a strategy for operating a vehicle in a fuel-efficient manner. Current research shows that visual eco-driving interfaces can reduce fuel consumption by shaping motorists’ driving behavior but may hinder safe driving performance. The present study aimed to generate insights and direction for design iterations of auditory eco-driving displays and a potential matching head-up visual display to minimize the negative effects of using purely visual head-down eco-driving displays. Experiment 1 used a sound card-sorting task to establish mapping, scaling, and polarity of acoustic parameters for auditory eco-driving interfaces. Surveys following each sorting task determined preferences for the auditory display types. Experiment 2 was a sorting task to investigate design parameters of visual icons that are to be paired with these auditory displays. Surveys following each task revealed preferences for the displays. The results facilitated the design of intuitive interface prototypes for an auditory and matching head-up eco-driving display that can be compared to each other.
  • Item
    Solar System Sonification: Exploring Earth and its Neighbors Through Sound
    (Georgia Institute of Technology, 2017-06) Tomlinson, Brianna J. ; Winters, R. Michael ; Latina, Christopher ; Bhat, Smruthi ; Rane, Milap ; Walker, Bruce N.
    Informal learning environments (ILEs) like museums incorporate multi-modal displays into their exhibits as a way to engage a wider group of visitors, often relying on tactile, audio, and visual means to accomplish this. Planetariums, however, represent one type of ILE where a single, highly visual presentation modality is used to entertain, inform, and engage a large group of users in a passive viewing experience. Recently, auditory displays have been used as a supplement or even an alternative to visual presentation of astronomy concepts, though there has been little evaluation of those displays. Here, we designed an auditory model of the solar system and created a planetarium show, which was later presented at a local science center. Attendees evaluated the show on the helpfulness, interest, pleasantness, understandability, and relatability of the sound mappings. Overall, attendees rated the solar system and planetary details very highly, in addition to providing open-ended responses about their entire experience.
  • Item
    Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in a Tonal Language
    (Georgia Institute of Technology, 2017-06) Gable, Thomas M. ; Tomlinson, Brianna ; Cantrell, Stanley ; Walker, Bruce N.
    Auditory displays have been used extensively to enhance visual menus across diverse settings for various reasons. While standard auditory displays can be effective and help users across these settings, they often consist of text-to-speech cues, which can be time-intensive to use. Advanced auditory cues, including spindex and spearcon cues, have been developed to help address this slow feedback issue. While these cues are most often used in English, they have also been applied to other languages; however, research on their use in tonal languages, which may affect their effectiveness, is lacking. The current research investigated the use of spindex and spearcon cues in Mandarin, to determine their effectiveness in a tonal language. The results suggest that the cues can be effectively applied and used in a tonal language by untrained novices. This opens the door to future use of the cues in languages that reach a large portion of the world’s population.
  • Item
    Introducing Multimodal Sliding Index: Qualitative Feedback, Perceived Workload, and Driving Performance with an Auditory Enhanced Menu Navigation Method
    (Georgia Institute of Technology, 2017-06) Sardesai, Ruta R. ; Gable, Thomas M. ; Walker, Bruce N.
    Using auditory menus on a mobile device has been studied in depth with standard flicking, as well as wheeling and tapping interactions. Here, we introduce and evaluate a new type of interaction with auditory menus, intended to speed up movement through a list. This multimodal “sliding index” was compared to use of the standard flicking interaction on a phone, while the user was also engaged in a driving task. The sliding index was found to require less mental workload than flicking. What’s more, the way participants used the sliding index technique modulated their preferences, including their reactions to the presence of audio cues. Follow-on work should study how sliding index use evolves with practice.
  • Item
    The Mwangaza Project: A Comprehensive Report on the Nationwide Baseline Survey of Technology Skills for Learners with Vision Impairment in Kenya
    (Georgia Institute of Technology, 2016-12) Walker, Bruce N. ; Mbari-Kirika, Irene ; Miheso-O’Connor, Marguerite
    This document presents the results of a major portion of the PEER-funded collaborative research project called the Mwangaza Project. The project is a shared effort between: the Sonification Lab at the Georgia Institute of Technology (“Georgia Tech”) in Atlanta, USA; inABLE, a non-profit organization based in Nairobi, Kenya, and Washington DC, USA; and Kenyatta University, in Nairobi. This research team has completed a two-phase project including (1) a nation-wide survey of the interests, needs, skills, and opinions of blind students and their teachers, with respect to information and communications technology (ICT, aka “technology”); and (2) initial development, deployment, and evaluation of some novel assistive technologies that represent potential new approaches to STEM education for students with vision loss. This report describes the baseline survey of students and teachers.
  • Item
    What's the Weather: Making Weather Data Accessible for Visually Impaired Students
    (Georgia Institute of Technology, 2016) Tomlinson, Brianna J. ; Bruce, Carrie M. ; Schuett, Jonathan H. ; Walker, Bruce N.
    We determined during a collaboration project in Kenya that students with visual impairments were interested in learning about weather data as part of their Science, Technology, Engineering, and Mathematics (STEM) education. Unfortunately, much of this data is not accessible to the students due to lack of integration with assistive technologies, as well as limited access to landline internet. Therefore, we created the Accessible Weather App to run on Android and integrate with the TalkBack accessibility feature that is already available on the operating system. This paper discusses the process for determining what features users would require, and our methodology for evaluating the beta version of the app. User feedback was positive, and suggestions have helped advance the interface design. The overall goal of our project is to develop, evaluate, and integrate the Accessible Weather App into weather and meteorology learning activities for students with visual impairments.