Title:
Distance-based speech segregation in near-field virtual audio displays

dc.contributor.author Brungart, Douglas S
dc.contributor.author Simpson, Brian D
dc.contributor.corporatename International Community for Auditory Display
dc.contributor.corporatename Air Force Research Laboratory
dc.contributor.corporatename Veridian
dc.date.accessioned 2014-01-21T16:54:27Z
dc.date.available 2014-01-21T16:54:27Z
dc.date.issued 2001-07
dc.description Presented at the 7th International Conference on Auditory Display (ICAD), Espoo, Finland, July 29-August 1, 2001. en_US
dc.description.abstract In tasks that require listeners to monitor two or more simultaneous talkers, substantial performance benefits can be achieved by spatially separating the competing speech messages with a virtual audio display. Although the advantages of spatial separation in azimuth are well documented, little is known about the performance benefits that can be achieved when competing speech signals are presented at different distances in the near field. In this experiment, head-related transfer functions (HRTFs) measured with a KEMAR manikin were used to simulate competing sound sources at distances ranging from 12 cm to 1 m along the interaural axis of the listener. One of the sound sources (the target) was a phrase from the Coordinate Response Measure (CRM) speech corpus, and the other sound source (the masker) was either a competing speech phrase from the CRM speech corpus or a speech-shaped noise signal. When speech-shaped noise was used as the masker, the intelligibility of the target phrase increased substantially only when the spatial separation in distance resulted in an improvement in signal-to-noise ratio (SNR) at one of the two ears. When a competing speech phrase was used as the masker, spatial separation in distance resulted in substantial improvements in the intelligibility of the target phrase even when the overall levels of the signals were normalized to eliminate any SNR advantages in the better ear, suggesting that binaural processing plays an important role in the segregation of competing speech messages in the near field. The results have important implications for the design of audio displays with multiple speech communication channels. en_US
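[Illustrative note, not part of the original record: the abstract describes rendering sources at different near-field distances via HRTF filtering and then level-normalizing to remove better-ear SNR advantages. The sketch below is a minimal Python illustration of that general pipeline under stated assumptions; the HRIR arrays and variable names are hypothetical placeholders, which in practice would come from a measured set such as the paper's KEMAR HRTFs.]

import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    # Convolve the mono source with each ear's head-related impulse
    # response; equal-length HRIRs yield equal-length ear signals.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

def normalize_rms(stereo, target_rms=0.05):
    # Scale the binaural signal to a fixed overall RMS level, analogous
    # to the study's normalization that eliminated SNR advantages at the
    # better ear.
    rms = np.sqrt(np.mean(stereo ** 2))
    return stereo * (target_rms / rms)

# Hypothetical usage: target rendered at 12 cm, masker at 1 m along the
# interaural axis, then mixed at equal overall levels.
# target_bin = normalize_rms(render_binaural(target, hrirL_12cm, hrirR_12cm))
# masker_bin = normalize_rms(render_binaural(masker, hrirL_1m, hrirR_1m))
# n = min(len(target_bin), len(masker_bin))
# mix = target_bin[:n] + masker_bin[:n]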
dc.identifier.citation Proceedings of the 7th International Conference on Auditory Display (ICAD2001), Espoo, Finland, July 29-August 1, 2001. Eds.: J. Hiipakka, N. Zacharov, and T. Takala. International Community for Auditory Display, 2001. en_US
dc.identifier.uri http://hdl.handle.net/1853/50613
dc.language.iso en_US en_US
dc.publisher Georgia Institute of Technology en_US
dc.publisher.original International Community for Auditory Display (ICAD) en_US
dc.relation.ispartofseries International Conference on Auditory Display (ICAD)
dc.subject Auditory display en_US
dc.subject HRTF en_US
dc.subject Spatial separation en_US
dc.title Distance-based speech segregation in near-field virtual audio displays en_US
dc.type Text
dc.type.genre Proceedings
dspace.entity.type Publication
local.contributor.corporatename Sonification Lab
local.relation.ispartofseries International Conference on Auditory Display (ICAD)
relation.isOrgUnitOfPublication 2727c3e6-abb7-4df0-877f-9f218987b22a
relation.isSeriesOfPublication 6cb90d00-3311-4767-954d-415c9341a358
Files

Original bundle
Name: BrungartSimpson2001.pdf
Size: 143.2 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 3.13 KB
Description: Item-specific license agreed upon to submission