Title:
FACIAL BEHAVIOR SONIFICATION WITH THE INTERACTIVE SONIFICATION FRAMEWORK PANSON

Author(s)
Nalli, Michele
Johnson, David
Hermann, Thomas
Abstract
Facial behavior occupies a central role in social interaction. Its auditory representation is useful for various applications, such as supporting the visually impaired, helping actors train emotional expression, and assisting the annotation of multi-modal behavioral corpora. In this paper we present a prototype system for interactive sonification of facial behavior that works both in real-time mode, using a webcam, and in offline mode, analyzing a video file. The system is based on Python and Jupyter notebooks, and relies on the Python module sc3nb for sonification-related functionality. Facial feature extraction is realized using OpenFace 2.0. Designing the system led to the development of Panson, a framework of reusable components for building interactive sonification applications, which can be used to easily design and adapt sonifications for different use cases. We present the main concepts behind the facial behavior sonification system and the Panson framework. Furthermore, we introduce and discuss novel sonifications developed using Panson, and demonstrate them with a set of sonified videos. The sonifications and Panson are available as open-source reproducible research on GitHub.
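The abstract describes a pipeline in which facial features extracted by OpenFace 2.0 (e.g. action-unit intensities) are mapped to sound parameters. As a hedged illustration of such a parameter mapping, not the paper's actual implementation, the sketch below linearly maps a hypothetical action-unit intensity (OpenFace reports AU intensities on a 0–5 scale) to a synthesis frequency; the function name and frequency range are assumptions for demonstration only:

```python
def au_to_freq(au_intensity, f_min=200.0, f_max=800.0, au_max=5.0):
    """Hypothetical parameter mapping: clip an action-unit intensity
    to [0, au_max], normalize it, and map it linearly to a frequency
    range [f_min, f_max] in Hz for a sonification synth."""
    x = max(0.0, min(au_intensity, au_max)) / au_max
    return f_min + x * (f_max - f_min)
```

In a real-time setting, a mapping like this would be evaluated per video frame and the resulting value sent to a synthesis server (e.g. via sc3nb) as a synth control parameter.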
Date Issued
2023-06
Resource Type
Text
Resource Subtype
Proceedings
Rights Statement
Licensed under Creative Commons Attribution Non-Commercial 4.0 International License.