Walker, Bruce N.

Publication Search Results

Now showing 1 - 10 of 56
  • Item
    Evaluation of a non-visual auditory choropleth and travel map viewer
    (Georgia Institute of Technology, 2022-06) Biggs, Brandon ; Toth, Christopher ; Stockman, Tony ; Coughlan, James M. ; Walker, Bruce N.
    The auditory virtual reality interface of Audiom, a web-based map viewer, was evaluated by thirteen blind participants. In Audiom, the user is an avatar that navigates, using the arrow keys, through geographic data, as if they are playing a first-person, egocentric game. The research questions were: What will make blind users want to use Audiom maps? And can participants demonstrate basic acquisition of spatial knowledge after viewing an auditory map? A dynamic choropleth map of state-level US COVID-19 data and a detailed OpenStreetMap-powered travel map were evaluated. All participants agreed they wanted more maps of all kinds, in particular county-level COVID data, and that they would use Audiom once some bugs were fixed and their few recommended features were added. Everyone wanted to see Audiom embedded in their existing travel and mapping applications. All participants were able to answer a question evaluating spatial knowledge. Participants also agreed this spatial information was not available in existing applications.
  • Item
    The Development of a Measurement Tool for Mastery of Assistive Technology
    (Georgia Institute of Technology, 2021-06-30) Satterfield, Richard (Ben) ; Walker, Bruce N. ; Milchus, Karen
    This report describes the development of a survey tool used to measure and assess “mastery of assistive technology”. A Delphi Panel comprised of experts in the area of Assistive Technology (AT) was gathered to explore the question of “What is mastery of AT?” For the purposes of this study, mastery was defined as becoming a “power user” of AT. Panelists were asked to identify what characteristics are associated with being a power user of AT. The panel gave these characteristics Likert Scale rankings as to their applicability as a predictor of becoming a power user and as an indicator of having become a power user. The rankings were compared, and the panel was asked to revisit the rankings in order to identify the most important factors. The panel identified 12 predictors and 14 indicators that they felt were highly predictive of becoming a power user or indicative of being one. These factors were analyzed and found to coalesce around four constructs or areas of mastery: (1) Experience (Usage) with AT; (2) Proficiency with AT; (3) Knowledge of AT; and (4) Personal Connection with AT. An online survey-based tool for measuring AT mastery was developed based on these constructs and presented to the panel for feedback and critique.
  • Item
    Highcharts Sonification Studio: an online, open-source, extensible, and accessible data sonification tool
    (Georgia Institute of Technology, 2021-06) Cantrell, Stanley J. ; Walker, Bruce N. ; Moseng, Øystein
    The Highcharts Sonification Studio is the culmination of a multi-year collaboration between Highsoft — the creators of Highcharts — and the Georgia Tech Sonification Lab to develop an extensible, accessible, online spreadsheet and multimodal graphing platform for the auditory display, assistive technology, and STEM education communities. The Highcharts Sonification Studio leverages advances in auditory display and sonification research, as well as over 20 years of experience gained through research and development of the original Sonification Sandbox. We discuss the iterative design and evaluation process used to ensure the usability and accessibility of the Highcharts Sonification Studio, highlight opportunities for growth of the tool, and describe its use for research, art, and education within the ICAD community and beyond.
  • Item
    To sonify or not to sonify? Educator perceptions of auditory display in interactive simulations
    (Georgia Institute of Technology, 2021-06) Fiedler, Brett L. ; Walker, Bruce N. ; Moore, Emily B.
    With the growing presence of auditory display in popular learning tools, it is beneficial for researchers to consider not only the perceptions of the students who use the tools, but also those of the educators who include the tools in their curriculum. We surveyed over 4000 educators to investigate educator perceptions and preferences across four interactive physics simulations for the presence and qualities of non-speech auditory display, as well as surveying users' self-rated musical sophistication as potentially predictive of auditory display preference. We find that the majority of teachers preferred the simulations with auditory display and consistently rated aspects of the experience using simulations with sound positively over the without-sound variants. We also identify simulation design features that align with trends in educator ratings. We did not find the measured musical sophistication to be a predictor of auditory display preference.
  • Item
    Can You Hear My Heartbeat?: Hearing an Expressive Biosignal Elicits Empathy - Supplementary Data
    (Georgia Institute of Technology, 2021-05-07) Winters, R. Michael ; Leslie, Grace ; Walker, Bruce N.
    Interfaces designed to elicit empathy provide an opportunity for HCI with important pro-social outcomes. Recent research has demonstrated that perceiving expressive biosignals can facilitate emotional understanding and connection with others, but this work has been largely limited to visual approaches. We propose that hearing these signals will also elicit empathy, and test this hypothesis with sounding heartbeats. In a lab-based within-subjects study, participants (N = 27) completed an emotion recognition task in different heartbeat conditions. We found that hearing heartbeats changed participants’ emotional perspective and increased their reported ability to “feel what the other was feeling.” From these results, we argue that auditory heartbeats are well-suited as an empathic intervention, and might be particularly useful for certain groups and use-contexts because of their musical and non-visual nature. This work establishes a baseline for empathic auditory interfaces, and offers a method to evaluate the effects of future designs.
  • Item
    Perceived Relational Risk and Perceived Situational Risk Scales
    (Georgia Institute of Technology, 2020-10-10) Stuck, Rachel E. ; Walker, Bruce N.
    This technical report provides an overview of how to use scales that were developed for perceived relational risk and perceived situational risk.
  • Item
    Hearing artificial intelligence: Sonification guidelines & results from a case-study in melanoma diagnosis
    (Georgia Institute of Technology, 2019-06) Winters, R. Michael ; Kalra, Ankur ; Walker, Bruce N.
    The applications of artificial intelligence are becoming more and more prevalent in everyday life. Although many AI systems can operate autonomously, their goal is often assisting humans. Knowledge from the AI system must somehow be perceptualized. Towards this goal, we present a case-study in the application of data-driven non-speech audio for melanoma diagnosis. A physician photographs a suspicious skin lesion, triggering a sonification of the system's penultimate classification layer. We iterated on sonification strategies and coalesced around designs representing three general approaches. We tested each in a group of novice listeners (n=7) for mean sensitivity, specificity, and learning effects. The mean accuracy was greatest for a simple model, but a trained dermatologist preferred a perceptually compressed model of the full classification layer. We discovered that training the AI on sonifications from this model improved accuracy further. We argue for perceptual compression as a general technique and for a comprehensible number of simultaneous streams.
  • Item
    Auditory displays to facilitate object targeting in 3D space
    (Georgia Institute of Technology, 2019-06) May, Keenan R. ; Sobel, Briana ; Wilson, Jeff ; Walker, Bruce N.
    In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to display information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room, in either low or no visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both. All three led to improvements in performance and subjective workload compared to no sound. Displays (2) and (3) led to the largest improvements. This pattern was smaller, but still present, when visibility was low, compared to no visibility. These results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here.
  • Item
    Mixed speech and non-speech auditory displays: impacts of design, learning, and individual differences in musical engagement
    (Georgia Institute of Technology, 2019-06) Li, Grace ; Walker, Bruce N.
    Information presented in auditory displays is often spread across multiple streams to make it easier for listeners to distinguish between different sounds and changes in multiple cues. Because auditory attention is a limited resource, and because listening skills are often untrained compared to visual skills, studies have tried to determine the limit to which listeners are able to monitor different auditory streams without compromising performance in using the displays. This study investigates the differences between non-speech auditory displays, speech auditory displays, and mixed displays, and the effects of the different display designs and individual differences on performance and learnability. Results showed that practice with feedback significantly improves performance regardless of the display design, and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays. Findings of this study contribute to understanding how musical experience can be linked to the usability of auditory displays, as well as the capability of humans to learn to use their auditory senses to overcome visual workload and receive important information.
  • Item
    Soccer sonification: Enhancing viewer experience
    (Georgia Institute of Technology, 2019-06) Savery, Richard ; Ayyagari, Madhukesh ; May, Keenan ; Walker, Bruce N.
    We present multiple approaches to soccer sonification, focusing on enhancing the experience for a general audience. For this work, we developed our own soccer data set through computer vision analysis of footage from a tactical overhead camera. This data set included X, Y coordinates for the ball and players throughout, as well as passes, steals, and goals. After a divergent creation process, we developed four main methods of sports sonification for entertainment. For the Tempo Variation and Pitch Variation methods, tempo or pitch is operationalized to demonstrate ball and player movement data. The Key Moments method features only pass, steal, and goal data, while the Musical Moments method takes existing music and attempts to align the track with important data points. Evaluation was done using a combination of qualitative focus groups and quantitative surveys, with 36 participants completing hour-long sessions. Results indicated an overall preference for the Pitch Variation and Musical Moments methods, and revealed a robust trade-off between usability and enjoyability.