A method for developing an improved mapping model for data sonification

Author(s)
Worrall, David
Abstract
The unreliable detection of information in sonifications of multivariate data that employ parameter mapping is generally thought to result from the co-dependency of psychoacoustic dimensions. The method described here aims to discover whether the perceptual accuracy of such information can be improved by rendering the sonification of the data with a mapping model influenced by the gestural metrics of performing musicians playing notated versions of the data. Conceptually, the Gesture-Encoded Sound Model (GESM) is a means of transducing multivariate datasets to sound synthesis and control parameters in such a way as to make the information in those datasets available to general listeners in a more perceptually coherent and stable way than is currently the case. The approach renders a datastream to sound using not only observable quantities (inverse transforms of known psychoacoustic principles), but also latent variables of a Dynamic Bayesian Network trained with gestures of the physical body movements of performing musicians and hypotheses concerning other observable quantities of their coincident acoustic spectra. If successful, such a model should significantly broaden the applicability of data sonification as a perceptualisation technique.
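To make the baseline concrete, the following is a minimal, hypothetical sketch of the direct parameter mapping the abstract contrasts with the GESM: each row of a multivariate dataset becomes a short tone, with one column mapped to pitch and another to amplitude. All function and parameter names here are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def parameter_map_sonify(data, sr=44100, note_dur=0.25,
                         pitch_range=(220.0, 880.0)):
    """Render each row of a multivariate dataset as a short tone.

    Column 0 -> pitch, column 1 -> amplitude: the kind of direct
    parameter mapping whose perceptual co-dependencies the abstract
    describes. Purely illustrative; not the GESM.
    """
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    # Normalise each column to 0..1 (guarding constant columns).
    norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)

    n = int(sr * note_dur)
    t = np.arange(n) / sr
    tones = []
    for row in norm:
        # Exponential pitch mapping keeps equal data steps
        # roughly equal in perceived pitch interval.
        freq = pitch_range[0] * (pitch_range[1] / pitch_range[0]) ** row[0]
        amp = 0.2 + 0.8 * row[1]   # floor keeps quiet notes audible
        env = np.hanning(n)        # smooth attack/decay, no clicks
        tones.append(amp * env * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

audio = parameter_map_sonify([[0.0, 1.0], [0.5, 0.2], [1.0, 0.8]])
```

Even in this toy mapping, the amplitude and pitch dimensions interact perceptually (loudness varies with frequency), which is the co-dependency problem the proposed gesture-encoded model sets out to mitigate.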
Date
2011-06
Resource Type
Text
Resource Subtype
Proceedings