Person: Walker, Bruce N.

Publication Search Results

Now showing 1 - 10 of 33
  • Item
    A system for wearable audio navigation integrating advanced localization and auditory display
    (Georgia Institute of Technology, 2009-12-06) Walker, Bruce N. ; Dellaert, Frank
  • Item
    Measuring the use of sound in everyday software
    (Georgia Institute of Technology, 2009-05) Walker, Bruce N. ; Davison, Benjamin K.
    Members of the ICAD community might contend that auditory interfaces and even just well-designed sound in computer interfaces could be used more often than is currently the case. However, it is not entirely clear where, when, and how sound is actually being employed in everyday software. We discuss the development of a long-term research project aimed at identifying and categorizing sound use in software. Our mixed-methods approach explores software artifacts from three perspectives: detailed program behavior, source code word count of audio terms, and audio infrastructure. These complementary approaches could provide a deeper understanding of sound use today and, we hope, lead to predicting, guiding, and improving the future trajectory of its use.
  • Item
    The GT Accessible Aquarium Project
    (Georgia Institute of Technology, 2009-04-07) Walker, Bruce N.
    Zoos and aquaria are in the business of educating and entertaining the visiting public. However, as the number of people with disabilities living in the community has grown, and as public environments have become more accessible to them, such informal learning environments (ILEs) are faced with accommodating an increasingly diverse visitor population with varying physical and sensory needs. This is even more challenging for ILEs with dynamic exhibits, where the movements, changes, and interactions are extremely difficult to describe to individuals who lack vision. The GT Accessible Aquarium Project is an interdisciplinary team effort to make dynamic exhibits such as those at museums, science centers, zoos and aquaria more engaging and accessible for visitors with vision impairments by providing real-time interpretations of the exhibits using innovative tracking, music, narrations, and adaptive sonification.
  • Item
    Efficiency of Spearcon-Enhanced Navigation of One Dimensional Electronic Menus
    (International Community for Auditory Display, 2008-06) Palladino, Dianne K. ; Walker, Bruce N.
    This study investigated navigation through a cell phone menu in the presence of auditory cues (text-to-speech and spearcons), visual cues, or both. A total of 127 undergraduates navigated through a 50-item alphabetically listed menu to find a target name. Participants using visual cues (either alone or combined with auditory cues) responded faster than those using only auditory cues. Performance did not differ significantly between the two auditory-only conditions. Although the effect was not significant, spearcons combined with visual cues improved navigational efficiency more than either text-to-speech cues or menus using no sound, providing evidence that sound can enhance visual menus. These results are applicable to the creation of efficient auditory menus.
  • Item
    Learnability of Sound Cues for Environmental Features: Auditory Icons, Earcons, Spearcons, and Speech
    (International Community for Auditory Display, 2008-06) Dingler, Tilman ; Lindsay, Jeffrey ; Walker, Bruce N.
    Awareness of features in our environment is essential for many daily activities. While often awareness of such features comes from vision, this modality is sometimes unavailable or undesirable. In these instances, auditory cues can be an excellent method of representing environmental features. The study reported here investigated the learnability of well-known (auditory icons, earcons, and speech) and more novel (spearcons, earcon-icon hybrids, and sized hybrids) sonification techniques for representing common environmental features. Spearcons, which are speech stimuli that have been greatly sped up, were found to be as learnable as speech, while earcons unsurprisingly were much more difficult to learn. Practical implications are discussed.
  • Item
    AudioPlusWidgets: Bringing Sound to Software Widgets and Interface Components
    (International Community for Auditory Display, 2008-06) Davison, Benjamin K. ; Walker, Bruce N.
    Using sound as part of the user interface in a typical software application is still extremely rare, despite the technical capabilities of computers to support such usage. The ICAD community has developed several interface concepts, patterns, and toolkits, and yet the overall software scene has remained dominated by the visual-only user interface. AudioPlusWidgets is a software library offering scientifically grounded audio enhancements to the standard Java Swing API. Through metaphors and transparency, AudioPlusWidgets can be inserted into existing code with minimal changes, easily adding auditory capabilities to the interface components in the system. This library uses an event-based model and an audio manager to render speech, MIDI, and prerecorded sounds. (An illustrative sketch of this kind of event-based audio hook follows this listing.)
  • Item
    Encoding and Representation of Information in Auditory Graphs: Descriptive Reports of Listener Strategies for Understanding Data
    (International Community for Auditory Display, 2008-06) Nees, Michael A. ; Walker, Bruce N.
    While a growing wealth of data have offered insights into the best practices for auditory display design and application, little is known about how listeners internally represent and use the information presented in auditory displays. At the conclusion of three separate studies, participants responded to an open-ended question about the strategies they used to perform auditory graphing tasks. We report a descriptive analysis of these qualitative responses. Participants' comments were coded by two raters along a number of dimensions that were chosen to represent a comprehensive set of encoding and task strategy possibilities. These descriptive analyses suggest that auditory graph listeners use a variety of strategies to cognitively represent the data in the display. Furthermore, these qualitative data offer a number of insights and questions for future research on information representation for auditory displays.
  • Item
    The Use of Different Technologies During a Medical Interview: Effects on Perceived Quality of Care
    (Georgia Institute of Technology, 2007-10) Caldwell, Britt ; DeBlasio, Julia M. ; Jacko, Julie A. ; Kintz, Erin ; Lyons, Kent ; Mauney, Lisa M. ; Starner, Thad ; Walker, Bruce N.
    This two-phase study examines a physician’s use of one of five different types of technology to note a patient’s symptoms during the medical interview. In this between-subjects design, 342 undergraduates viewed one of several videos that demonstrated one condition of the doctor/patient interaction. After viewing the interaction, each participant completed a series of questionnaires that evaluated their general satisfaction with the quality of care demonstrated in the medical interview. A main effect of technology condition was present in both phases. Further, in Phase 2 we found that drawing the participant’s attention to the type of technology used has a divergent effect on their general satisfaction with the doctor/patient interaction depending on the technology condition. These findings have implications for healthcare providers such as how to address technology and which type of technology to use.
  • Item
    Individual Differences and the Field of Auditory Display: Past Research, A Present Study, and an Agenda for the Future
    (Georgia Institute of Technology, 2007-06) Mauney, Lisa M. ; Walker, Bruce N.
    There has been some interest in the study of individual differences in the field of auditory displays, but we argue that there is much greater potential than has been realized to date. Relevant types of individual differences that may be applicable to interpreting auditory information include perceptual abilities, cognitive abilities, musical abilities, and learning styles. There are many measures of these individual differences available; however, they have not been thoroughly utilized in the auditory display arena. We discuss several types of individual differences relevant to auditory displays. We then present some examples of past research, along with the results of a current investigation of individual differences in auditory displays. Finally, we propose an agenda as to what research and tests should be used to further study this area.
  • Item
    Learning Rates for Auditory Menus Enhanced with Spearcons Versus Earcons
    (Georgia Institute of Technology, 2007-06) Palladino, Dianne K. ; Walker, Bruce N.
    Increasing the usability of menus on small electronic devices is essential given their growing proliferation and shrinking physical size in the marketplace. Auditory menus are being studied as an enhancement to the menus on these devices. This study compared the learning rates for earcons (hierarchical representations of menu locations using musical tones) and spearcons (compressed speech) as potential candidates for auditory menu enhancement. We found that spearcons were learned significantly faster than earcons. We also found evidence that spearcon comprehension was enhanced by a brief training cycle, and that participants considered learning spearcons much easier than learning earcons. Since efficient learning and perceived ease of use will increase the likelihood that auditory menus are embraced by those who need them, this paper presents compelling evidence that spearcons may be the superior choice for such applications. (A brief illustrative sketch of the speed-up idea behind spearcons follows this listing.)
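
Several of the entries above describe spearcons as speech stimuli that have been greatly sped up (compressed speech). As a rough, hypothetical illustration of that speed-up idea only, the Java sketch below shortens a text-to-speech recording by raising the declared sample rate of a WAV file. The file names and speed-up factor are assumptions, and actual spearcon generation uses pitch-preserving time-scale modification rather than this naive rate change, which also raises the pitch.

    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import java.io.File;

    // Illustrative only: a naive "speed up the speech" transform.
    // Real spearcon generation compresses speech in time while preserving pitch;
    // simply raising the declared sample rate (as done here) also raises the pitch.
    public class NaiveSpearconSketch {
        public static void main(String[] args) throws Exception {
            File ttsClip = new File("settings_tts.wav");   // hypothetical TTS recording of a menu item
            AudioInputStream in = AudioSystem.getAudioInputStream(ttsClip);
            AudioFormat f = in.getFormat();

            float factor = 2.5f;                            // illustrative speed-up factor
            AudioFormat fast = new AudioFormat(
                    f.getEncoding(),
                    f.getSampleRate() * factor,
                    f.getSampleSizeInBits(),
                    f.getChannels(),
                    f.getFrameSize(),
                    f.getFrameRate() * factor,
                    f.isBigEndian());

            // Same samples, faster declared playback rate: the clip plays in 1/factor of the time.
            AudioInputStream spedUp = new AudioInputStream(in, fast, in.getFrameLength());
            AudioSystem.write(spedUp, AudioFileFormat.Type.WAVE, new File("settings_spearcon.wav"));
        }
    }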
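
The AudioPlusWidgets entry above describes an event-based model that adds auditory feedback to standard Java Swing components. The sketch below is not the AudioPlusWidgets API; it is a minimal, hypothetical illustration of the general idea, attaching a short MIDI cue to a JButton through the ordinary Swing event mechanism so the widget's existing visual behavior is left untouched.

    import javax.sound.midi.MidiChannel;
    import javax.sound.midi.MidiSystem;
    import javax.sound.midi.Synthesizer;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.Timer;

    // Hypothetical sketch (not the AudioPlusWidgets API): attach a short MIDI cue
    // to a standard Swing widget through the normal event model.
    public class AudibleButtonSketch {
        public static void main(String[] args) throws Exception {
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();
            MidiChannel channel = synth.getChannels()[0];

            JButton saveButton = new JButton("Save");
            saveButton.addActionListener(e -> {
                channel.noteOn(72, 90);                     // brief confirmation tone (C5)
                Timer off = new Timer(150, t -> channel.noteOff(72));
                off.setRepeats(false);                      // one-shot timer ends the note
                off.start();
            });

            JFrame frame = new JFrame("Event-based audio sketch");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(saveButton);
            frame.pack();
            frame.setVisible(true);
        }
    }

Because the cue rides on the same ActionListener interface every Swing widget already exposes, this style of enhancement can be dropped into existing code with very small changes, which is the property the abstract emphasizes.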