Title:
Data-driven auditory contrast enhancement for everyday sounds and sonifications

Author(s)
Hermann, Thomas
Weger, Marian
Abstract
We introduce Auditory Contrast Enhancement (ACE) as a technique to enhance sounds based on a given collection of sound or sonification examples that belong to different classes, such as sounds of machines with and without a certain malfunction, or medical data sonifications for different pathologies/conditions. A frequent use case in inductive data mining is the discovery of patterns by which such groups can be discerned, in order to guide subsequent paths for modelling and feature extraction. ACE provides researchers with a set of methods to render focussed auditory perspectives that accentuate inter-group differences and in turn also enhance intra-group similarity, i.e. it warps sounds so that our built-in human metrics for assessing differences between sounds are better aligned with the systematic differences between sounds belonging to different classes. We unfold and detail the concept along three different lines: temporal, spectral, and spectrotemporal auditory contrast enhancement, and we demonstrate their performance on given sound and sonification collections.
Date Issued
2019-06
Resource Type
Text
Resource Subtype
Proceedings
Rights Statement
Licensed under Creative Commons Attribution Non-Commercial 4.0 International License.