Person:
Walker, Bruce N.

Publication Search Results

Now showing 1 - 10 of 15
  • Item
    Hearing artificial intelligence: Sonification guidelines & results from a case-study in melanoma diagnosis
    (Georgia Institute of Technology, 2019-06) Winters, R. Michael ; Kalra, Ankur ; Walker, Bruce N.
    The applications of artificial intelligence are becoming more and more prevalent in everyday life. Although many AI systems can operate autonomously, their goal is often assisting humans. Knowledge from the AI system must somehow be perceptualized. Towards this goal, we present a case-study in the application of data-driven non-speech audio for melanoma diagnosis. A physician photographs a suspicious skin lesion, triggering a sonification of the system's penultimate classification layer. We iterated on sonification strategies and coalesced around designs representing three general approaches. We tested each in a group of novice listeners (n=7) for mean sensitivity, specificity, and learning effects. The mean accuracy was greatest for a simple model, but a trained dermatologist preferred a perceptually compressed model of the full classification layer. We discovered that training the AI on sonifications from this model improved accuracy further. We argue for perceptual compression as a general technique and for a comprehensible number of simultaneous streams.
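
As a rough illustration of the perceptual-compression idea above (a sketch under stated assumptions, not the authors' implementation), a classification-layer vector can be grouped into a few bands, each driving one simultaneous tone stream; the stream count, pitch choices, and normalization here are all hypothetical.

```python
# Hypothetical sketch: perceptually "compress" a classification-layer
# vector into a small number of simultaneous tone streams.
import numpy as np

SR = 44100   # sample rate (Hz)
DUR = 1.0    # sonification length (s)

def sonify_activations(activations, n_streams=3):
    """Group activations into n_streams bands; each band's mean value
    sets the amplitude of one sine-tone stream."""
    acts = np.asarray(activations, dtype=float)
    bands = np.array_split(acts, n_streams)        # crude compression step
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    freqs = 220.0 * 2.0 ** np.arange(n_streams)    # assumed pitch choices
    out = np.zeros_like(t)
    for band, f in zip(bands, freqs):
        out += band.mean() * np.sin(2 * np.pi * f * t)
    return out / max(np.abs(out).max(), 1e-9)      # normalize to [-1, 1]

# e.g., a 10-class penultimate layer (softmax-like values)
signal = sonify_activations(np.random.dirichlet(np.ones(10)))
```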
  • Item
    Auditory displays to facilitate object targeting in 3D space
    (Georgia Institute of Technology, 2019-06) May, Keenan R. ; Sobel, Briana ; Wilson, Jeff ; Walker, Bruce N.
    In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to display information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room, in either low or no visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both. All three led to improvements in performance and subjective workload compared to no sound. Displays (2) and (3) led to the largest improvements. This pattern was smaller, but still present, when visibility was low, compared to no visibility. These results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here.
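
A minimal sketch of how an ongoing “proximity feedback” display of this kind might sound, with the distance range, pulse rates, and pitch all assumed rather than taken from the paper:

```python
# Hypothetical proximity feedback: pulses speed up as the target nears.
import numpy as np

SR = 44100  # sample rate (Hz)

def proximity_pulses(distance_m, max_dist=5.0, dur=1.0, freq=880.0):
    """Map distance to a pulse train: closer -> faster (2..12 pulses/s)."""
    closeness = 1.0 - min(max(distance_m / max_dist, 0.0), 1.0)
    rate = 2.0 + 10.0 * closeness                   # pulses per second
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    gate = (np.sin(2 * np.pi * rate * t) > 0).astype(float)  # on/off gating
    return gate * np.sin(2 * np.pi * freq * t)

near = proximity_pulses(0.5)   # rapid pulses
far = proximity_pulses(4.5)    # sparse pulses
```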
  • Item
    Mixed speech and non-speech auditory displays: impacts of design, learning, and individual differences in musical engagement
    (Georgia Institute of Technology, 2019-06) Li, Grace ; Walker, Bruce N.
    Information presented in auditory displays is often spread across multiple streams to make it easier for listeners to distinguish between different sounds and changes in multiple cues. Because auditory attention is a limited resource, and because listening skills are typically less trained than visual ones, studies have tried to determine how many auditory streams listeners can monitor without compromising performance in using the displays. This study investigates differences among non-speech auditory displays, speech auditory displays, and mixed displays, as well as the effects of display design and individual differences on performance and learnability. Results showed that practice with feedback significantly improves performance regardless of the display design, and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays. Findings of this study contribute to understanding how musical experience can be linked to the usability of auditory displays, as well as the capability of humans to learn to use their auditory senses to offset visual workload and receive important information.
  • Item
    Soccer sonification: Enhancing viewer experience
    (Georgia Institute of Technology, 2019-06) Savery, Richard ; Ayyagari, Madhukesh ; May, Keenan ; Walker, Bruce N.
    We present multiple approaches to soccer sonification, focusing on enhancing the experience for a general audience. For this work, we developed our own soccer data set through computer-vision analysis of footage from a tactical overhead camera. This data set included X, Y coordinates for the ball and players throughout, as well as passes, steals, and goals. After a divergent creation process, we developed four main methods of sports sonification for entertainment. For the Tempo Variation and Pitch Variation methods, tempo or pitch is operationalized to convey ball and player movement data. The Key Moments method features only pass, steal, and goal data, while the Musical Moments method takes existing music and attempts to align the track with important data points. Evaluation was done using a combination of qualitative focus groups and quantitative surveys, with 36 participants completing hour-long sessions. Results indicated an overall preference for the Pitch Variation and Musical Moments methods, and revealed a robust trade-off between usability and enjoyability.
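
To make the Pitch Variation idea concrete, a sketch assuming a standard 105 m x 68 m pitch, an exponential position-to-frequency mapping, and Y-position panning (all illustrative choices, not the authors' values):

```python
# Hypothetical mapping: ball X -> tone frequency, ball Y -> stereo pan.
import numpy as np

SR = 44100  # sample rate (Hz)

def pitch_variation(x, y, field_w=105.0, field_h=68.0, dur=0.25):
    """Return a short stereo frame for one (x, y) ball position."""
    f = 220.0 * 4.0 ** (x / field_w)   # 220 Hz at one goal, 880 Hz at the other
    pan = y / field_h                  # 0 = left touchline, 1 = right
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    tone = np.sin(2 * np.pi * f * t)
    return np.stack([tone * (1 - pan), tone * pan])   # [left, right]

# a short run of positions, e.g. from the computer-vision tracker
frames = [pitch_variation(x, y) for x, y in [(10, 30), (60, 10), (100, 50)]]
```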
  • Item
    Auditory and Head-Up Displays for Eco-Driving Interfaces
    (Georgia Institute of Technology, 2017-06) Shortridge, Woodbury ; Gable, Thomas M. ; Noah, Brittany E. ; Walker, Bruce N.
    Eco-driving describes a strategy for operating a vehicle in a fuel-efficient manner. Current research shows that visual eco-driving interfaces can reduce fuel consumption by shaping motorists’ driving behavior but may hinder safe driving performance. The present study aimed to generate insights and direction for design iterations of auditory eco-driving displays and a potential matching head-up visual display, to minimize the negative effects of using purely visual head-down eco-driving displays. Experiment 1 used a sound card-sorting task to establish the mapping, scaling, and polarity of acoustic parameters for auditory eco-driving interfaces. Surveys following each sorting task determined preferences for the auditory display types. Experiment 2 was a sorting task to investigate design parameters of visual icons to be paired with these auditory displays. Surveys following each task revealed preferences for the displays. The results facilitated the design of intuitive prototypes for an auditory display and a matching head-up eco-driving display that can be compared with each other.
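
To illustrate the mapping, scaling, and polarity choices the card-sorting task probes, a sketch with entirely hypothetical values (not results from the study):

```python
# Hypothetical eco-driving tone: fuel flow -> pitch, with selectable polarity.
import numpy as np

SR = 44100  # sample rate (Hz)

def eco_tone(fuel_lph, max_lph=12.0, polarity=+1, dur=0.5):
    """polarity=+1: more fuel -> higher pitch; polarity=-1: the reverse."""
    level = min(max(fuel_lph / max_lph, 0.0), 1.0)   # linear scaling choice
    if polarity < 0:
        level = 1.0 - level
    f = 200.0 + 600.0 * level                        # assumed 200-800 Hz range
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * f * t)

efficient = eco_tone(3.0)    # low pitch under positive polarity
wasteful = eco_tone(11.0)    # high pitch under positive polarity
```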
  • Item
    Solar System Sonification: Exploring Earth and its Neighbors Through Sound
    (Georgia Institute of Technology, 2017-06) Tomlinson, Brianna J. ; Winters, R. Michael ; Latina, Christopher ; Bhat, Smruthi ; Rane, Milap ; Walker, Bruce N.
    Informal learning environments (ILEs) like museums incorporate multi-modal displays into their exhibits as a way to engage a wider group of visitors, often relying on tactile, audio, and visual means to accomplish this. Planetariums, however, represent one type of ILE where a single, highly visual presentation modality is used to entertain, inform, and engage a large group of users in a passive viewing experience. Recently, auditory displays have been used as a supplement or even an alternative to visual presentation of astronomy concepts, though there has been little evaluation of those displays. Here, we designed an auditory model of the solar system and created a planetarium show, which was later presented at a local science center. Attendees evaluated the show on the helpfulness, interest, pleasantness, understandability, and relatability of the sound mappings. Overall, attendees rated the solar system and planetary details very highly, in addition to providing open-ended responses about their entire experience.
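
One way planetary data could be mapped to sound, as a sketch only; the distance-to-pitch and size-to-loudness mappings below are assumptions, not the show's actual design:

```python
# Hypothetical solar-system mapping: farther planets -> lower pitch,
# larger planets -> louder tones; all planets mixed into one texture.
import numpy as np

SR = 44100  # sample rate (Hz)

PLANETS = {  # name: (mean distance in AU, radius in Earth radii)
    "Mercury": (0.39, 0.38),
    "Earth": (1.0, 1.0),
    "Jupiter": (5.2, 11.2),
}

def planet_tone(au, radius, dur=1.0):
    f = 880.0 / (1.0 + au)           # farther out -> lower pitch
    amp = min(radius / 11.2, 1.0)    # larger -> louder (Jupiter = 1.0)
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return amp * np.sin(2 * np.pi * f * t)

mix = sum(planet_tone(au, r) for au, r in PLANETS.values())
```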
  • Item
    Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in a Tonal Language
    (Georgia Institute of Technology, 2017-06) Gable, Thomas M. ; Tomlinson, Brianna ; Cantrell, Stanley ; Walker, Bruce N.
    Auditory displays have been used extensively to enhance visual menus across diverse settings. While standard auditory displays can be effective and help users across these settings, they often consist of text-to-speech cues, which can be time-intensive to use. Advanced auditory cues, including spindex and spearcon cues, have been developed to help address this slow feedback. While these cues have most often been used in English and have also been applied to other languages, research on their use in tonal languages, whose tonality may affect the cues' usability, is lacking. The current research investigated the use of spindex and spearcon cues in Mandarin to determine their effectiveness in a tonal language. The results suggest that the cues can be effectively applied and used in a tonal language by untrained novices. This opens the door to future use of the cues in languages that reach a large portion of the world’s population.
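
For readers unfamiliar with spearcons (menu phrases rendered by text-to-speech and time-compressed into brief acoustic "logos"), a sketch of the compression step; the ratio is assumed, and naive resampling stands in for the pitch-preserving compression normally used:

```python
# Hypothetical spearcon construction: time-compress a TTS waveform.
import numpy as np

def make_spearcon(tts_samples, ratio=0.4):
    """Compress a mono waveform to `ratio` of its length by resampling.
    Note: naive resampling also shifts pitch; real spearcons typically
    use pitch-preserving compression (e.g., a phase vocoder)."""
    n_out = int(len(tts_samples) * ratio)
    idx = np.linspace(0, len(tts_samples) - 1, n_out)
    return np.interp(idx, np.arange(len(tts_samples)), tts_samples)

# stand-in for a synthesized 0.8 s phrase such as "联系人" ("Contacts")
phrase = np.sin(2 * np.pi * 220 * np.linspace(0, 0.8, 35280))
spearcon = make_spearcon(phrase)   # ~0.32 s acoustic logo
```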
  • Item
    Introducing Multimodal Sliding Index: Qualitative Feedback, Perceived Workload, and Driving Performance with an Auditory Enhanced Menu Navigation Method
    (Georgia Institute of Technology, 2017-06) Sardesai, Ruta R. ; Gable, Thomas M. ; Walker, Bruce N.
    Using auditory menus on a mobile device has been studied in depth with standard flicking, as well as wheeling and tapping interactions. Here, we introduce and evaluate a new type of interaction with auditory menus, intended to speed up movement through a list. This multimodal “sliding index” was compared to use of the standard flicking interaction on a phone, while the user was also engaged in a driving task. The sliding index was found to require less mental workload than flicking. What’s more, the way participants used the sliding index technique modulated their preferences, including their reactions to the presence of audio cues. Follow-on work should study how sliding index use evolves with practice.
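
A sketch of how a sliding-index interaction might map finger position to alphabetical sections and announce a cue only when the section changes; the granularity and cue policy are assumptions, not the study's design:

```python
# Hypothetical sliding index: continuous position -> index section.
import string

def sliding_index(position, sections=string.ascii_uppercase):
    """Map a slider position in [0, 1] to an index section letter."""
    i = min(int(position * len(sections)), len(sections) - 1)
    return sections[i]

last = None
for pos in (0.05, 0.07, 0.40, 0.41, 0.90):   # simulated finger slide
    letter = sliding_index(pos)
    if letter != last:                       # cue only on section change
        print(f"audio cue: '{letter}'")
        last = letter
```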
  • Item
    Comprehension of Sonified Weather Data Across Multiple Auditory Streams
    (Georgia Institute of Technology, 2014-06) Schuett, Jonathan H. ; Winton, Riley J. ; Walker, Bruce N.
    Weather data has been one of the mainstays of sonification research. It is readily available, and every listener has presumably had some experience with meteorological events to draw from. When we want to use this type of complex data in a scenario such as a classroom, we need to be sure that listeners are able to correctly comprehend the intended information. The current study proposes a method for evaluating the usability of complex sonifications that contain multiple data sets, especially for tasks that require inferences to be made through comparisons across multiple data streams. This extended abstract outlines a study that will address this issue by asking participants to listen to sonifications and then respond with a description of their general understanding of which variables changed and how those changes would manifest in real weather conditions.
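
An illustrative two-stream weather sonification of the kind such a study might evaluate; the particular mappings (temperature to pitch, wind speed to noise-burst density) are assumptions:

```python
# Hypothetical weather streams: temperature -> tone pitch,
# wind speed -> density of noise bursts, mixed for comparison tasks.
import numpy as np

SR = 44100  # sample rate (Hz)

def weather_streams(temps_c, winds_ms, step=0.5):
    rng = np.random.default_rng(0)
    chunks = []
    for temp, wind in zip(temps_c, winds_ms):
        t = np.linspace(0, step, int(SR * step), endpoint=False)
        f = 220.0 + 10.0 * temp                       # warmer -> higher pitch
        temp_stream = np.sin(2 * np.pi * f * t)
        rate = 1.0 + wind                             # windier -> denser bursts
        gate = (np.sin(2 * np.pi * rate * t) > 0.9).astype(float)
        wind_stream = gate * rng.normal(0.0, 0.3, t.size)
        chunks.append(0.6 * temp_stream + 0.4 * wind_stream)
    return np.concatenate(chunks)

audio = weather_streams([10, 15, 22, 18], [2, 5, 12, 7])
```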
  • Item
    Prototype Auditory Displays for a Fuel Efficiency Driver Interface
    (Georgia Institute of Technology, 2014-06) Nees, Michael A. ; Gable, Thomas M. ; Jeon, Myounghoon ; Walker, Bruce N.
    We describe work-in-progress prototypes of auditory displays for fuel efficiency driver interfaces (FEDIs). Although research has established that feedback from FEDIs can have a positive impact on driver behaviors associated with fuel economy, the impact of FEDIs on driver distraction has not been established. Visual displays may be problematic for providing this feedback; it is precisely during fuel-consuming behaviors that drivers should not divert attention away from the driving task. Auditory displays offer a viable alternative to visual displays for communicating information about fuel economy to the driver without introducing visual distraction.