Title:
Response Techniques and Auditory Localization Accuracy

Author(s)
Iyer, Nandini
Thompson, Eric R.
Simpson, Brian D.
Abstract
Auditory cues, when coupled with visual objects, have led to reduced response times in visual search tasks, suggesting that adding auditory information can potentially aid Air Force operators in complex scenarios. These benefits are substantial when the spatial transformations that one has to make are relatively simple, i.e., mapping a 3-D auditory space to a 3-D visual scene. The current study focused on listeners' abilities to map sounds surrounding a listener onto a 2-D visual space, by measuring performance in localization tasks that required the following responses: 1) Head pointing: turn and face the loudspeaker from which a sound emanated, 2) Tablet: point to an icon representing a loudspeaker displayed in an array on a 2-D GUI, or 3) Hybrid: turn and face the loudspeaker from which a sound emanated and then indicate that location on a 2-D GUI. Results indicated that listeners' localization errors were small when the response modality was head pointing, and localization errors roughly doubled when listeners were asked to make a complex transformation of auditory-visual space (i.e., when using a hybrid response); surprisingly, the hybrid response technique reduced errors compared to the tablet response condition. These results have significant implications for the design of auditory displays that require listeners to make complex, non-intuitive transformations of auditory-visual space.
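The abstract does not specify how localization error was computed across response modalities. A common metric for azimuthal localization tasks is the smallest signed angular difference between the target loudspeaker direction and the indicated direction, wrapped so that, e.g., a response at 10° to a target at 350° counts as a 20° error rather than 340°. The function names and the degree-based convention below are illustrative assumptions, not taken from the study:

```python
def angular_error(target_deg, response_deg):
    """Smallest signed difference between two azimuths in degrees,
    wrapped to the range (-180, 180]. Positive values indicate a
    response clockwise of the target (illustrative convention)."""
    diff = (response_deg - target_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

def mean_abs_error(targets_deg, responses_deg):
    """Mean absolute angular error over a block of trials."""
    errors = [abs(angular_error(t, r))
              for t, r in zip(targets_deg, responses_deg)]
    return sum(errors) / len(errors)

# A response of 10 deg to a target at 350 deg is a 20 deg error,
# not a 340 deg error, because the scale wraps around.
print(angular_error(350.0, 10.0))   # 20.0
print(angular_error(10.0, 350.0))   # -20.0
```

Aggregating absolute errors this way would let the three response techniques (head pointing, tablet, hybrid) be compared on a common scale, which is the comparison the reported doubling of errors implies.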
Date Issued
2016-07
Resource Type
Text
Resource Subtype
Proceedings
Rights Statement
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.