Organizational Unit:
Sonification Lab


Publication Search Results

  • Item
    Facial Behavior Sonification with the Interactive Sonification Framework Panson
    (Georgia Institute of Technology, 2023-06) Nalli, Michele; Johnson, David; Hermann, Thomas
    Facial behavior occupies a central role in social interaction. Its auditory representation is useful for various applications, such as supporting the visually impaired, helping actors train emotional expression, and supporting annotation of multi-modal behavioral corpora. In this paper we present a prototype system for interactive sonification of facial behavior that works both in real-time mode, using a webcam, and in offline mode, analyzing a video file. The system is based on Python and Jupyter notebooks, and relies on the Python module sc3nb for sonification-related functionality. Facial feature extraction is realized using OpenFace 2.0. Designing the system led to the development of Panson, a framework of reusable components for building interactive sonification applications, which can be used to easily design and adapt sonifications for different use cases. We present the main concepts behind the facial behavior sonification system and the Panson framework. Furthermore, we introduce and discuss novel sonifications developed using Panson, and demonstrate them with a set of sonified videos. The sonifications and Panson are open-source, reproducible research available on GitHub.
  • Item
    Sonecules: A Python Sonification Architecture
    (Georgia Institute of Technology, 2023-06) Reinsch, Dennis; Hermann, Thomas
    This paper introduces sonecules, a flexible, extensible, end-user-friendly, and open-source Python sonification toolkit to bring 'sonification to the masses'. The package comes with a basic set of what we define as sonecules: sonification designs tailored to a given class of data, with a selected internal logic for sonification and a set of functions to interact with data and sonification controls. This is a design-once-use-many approach, as each sonecule can be reused on similarly structured data. The primary goal of sonecules is to enable novice users to rapidly get their data audible by scaffolding their first steps into auditory display. All sonecules offer a description for the user as well as controls that can be adjusted easily and interactively for the selected data. Users are supported in getting started as fast as possible using different sonification designs, and they can even mix and match sonecules to create more complex aggregated sonecules. Advanced users can extend or modify any sonification design and thereby create new sonecules. The sonecules Python package is provided as open-source software, which enables others to contribute their own sonification designs as sonecules, thus seeding a growing library of well-documented and easy-to-reuse sonification designs. Sonecules is implemented in Python using mesonic as the sonification framework, which provides the path to rendering-platform-agnostic sonifications.
  • Item
    AltAR/table: A Platform for Plausible Auditory Augmentation
    (Georgia Institute of Technology, 2022-06) Weger, Marian; Hermann, Thomas; Höldrich, Robert
    Auditory feedback from everyday interactions can be augmented to project digital information into the physical world. For that purpose, auditory augmentation modulates irrelevant aspects of already existing sounds while preserving relevant ones. One strategy for maintaining a certain level of plausibility is to metaphorically modulate the physical object itself. By mapping information to physical parameters instead of arbitrary sound parameters, it is assumed that even untrained users can draw on prior knowledge. Here we present AltAR/table, a hardware and software platform for plausible auditory augmentation of flat surfaces. It renders accurate augmentations of rectangular plates by capturing the structure-borne sound, feeding it through a physical sound model, and playing it back through the same object in real time. The implementation solves basic problems of equalization, active feedback control, spatialization, hand tracking, and low-latency signal processing. AltAR/table provides the technical foundations of object-centered auditory augmentations, for embedding sonifications into everyday objects such as tables, walls, or floors.
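The parameter-mapping idea behind the facial behavior sonification paper above can be sketched in plain Python: OpenFace 2.0 reports facial action unit (AU) intensities on a 0-5 scale per frame, and a continuous mapping turns each intensity into synthesis parameters. The helper names, mapping ranges, and the AU12 example below are hypothetical illustrations and do not reflect Panson's or sc3nb's actual API.

```python
# Hypothetical sketch of parameter-mapping sonification for facial
# features; the real Panson API differs. OpenFace 2.0 reports facial
# action unit (AU) intensities on a 0-5 scale per video frame.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def au_to_synth_params(au_intensity):
    """Map one AU intensity (0-5) to synthesis parameters.

    Higher intensity -> higher pitch and louder amplitude, a common
    choice for continuous parameter-mapping sonification.
    """
    freq = map_range(au_intensity, 0.0, 5.0, 220.0, 880.0)  # Hz
    amp = map_range(au_intensity, 0.0, 5.0, 0.0, 0.5)       # linear gain
    return {"freq": freq, "amp": amp}

# One frame of (hypothetical) AU intensities; AU12 is the lip-corner
# puller, commonly associated with smiling:
frame = {"AU12": 2.5}
params = au_to_synth_params(frame["AU12"])
```

In an interactive setting, such a mapping would run once per webcam frame and continuously update the parameters of a running synth.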
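The design-once-use-many approach described in the sonecules abstract can be illustrated as a class that bundles a sonification design with rebindable data: the design is written once, then reused on any similarly structured data set. The class name, methods, and controls below are an assumed sketch, not the actual sonecules API.

```python
# Hypothetical illustration of the "design-once-use-many" idea behind
# sonecules; the real package's class names and API differ.

class TimeSeriesSonecule:
    """A reusable sonification design for 1-D time series."""

    def __init__(self, fmin=200.0, fmax=800.0):
        self.fmin = fmin  # user-adjustable controls of the design
        self.fmax = fmax
        self.data = None

    def bind(self, data):
        """Attach a new data set; the design itself is unchanged."""
        self.data = list(data)
        return self

    def schedule(self, duration=2.0):
        """Return (onset_s, frequency_Hz) events for a parameter mapping."""
        lo, hi = min(self.data), max(self.data)
        span = (hi - lo) or 1.0
        n = len(self.data)
        events = []
        for i, x in enumerate(self.data):
            onset = duration * i / n
            freq = self.fmin + (x - lo) / span * (self.fmax - self.fmin)
            events.append((onset, freq))
        return events

# The same design reused on two differently valued data sets:
s = TimeSeriesSonecule()
ev_a = s.bind([0, 1, 2]).schedule(duration=3.0)
ev_b = s.bind([10, 30, 20]).schedule(duration=3.0)
```

The point of the pattern is that `bind` swaps the data while the design and its controls stay put, which is what lets novice users reuse a well-documented design without touching its internals.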
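The physical-sound-model idea behind AltAR/table can be illustrated with classical Kirchhoff plate theory: a simply supported rectangular plate has bending modes at f_mn = (pi/2) * sqrt(D/(rho*h)) * ((m/a)^2 + (n/b)^2), with bending stiffness D = E*h^3 / (12*(1-nu^2)). The abstract does not specify the model actually used, so the function and the material values below are an assumed sketch of how physical parameters (size, thickness, material) determine the modes a model-based augmentation would excite.

```python
import math

# Sketch of a physical sound model: modal frequencies of a simply
# supported rectangular plate (classical Kirchhoff theory). The actual
# AltAR/table model is not specified in the abstract; material values
# below are assumed, roughly MDF-like.

def plate_modes(a, b, h, E, rho, nu, max_m=3, max_n=3):
    """Return {(m, n): frequency_Hz} for the bending modes of a plate.

    a, b: side lengths (m); h: thickness (m);
    E: Young's modulus (Pa); rho: density (kg/m^3); nu: Poisson ratio.
    """
    D = E * h**3 / (12.0 * (1.0 - nu**2))        # bending stiffness (N*m)
    c = (math.pi / 2.0) * math.sqrt(D / (rho * h))
    return {
        (m, n): c * ((m / a) ** 2 + (n / b) ** 2)
        for m in range(1, max_m + 1)
        for n in range(1, max_n + 1)
    }

# Example: a 60 x 40 cm, 12 mm thick plate with assumed material values.
modes = plate_modes(a=0.6, b=0.4, h=0.012, E=3.0e9, rho=700.0, nu=0.3)
```

Mapping data to parameters like `h` or `E` shifts all modes coherently, which is why modulating the model, rather than arbitrary sound parameters, tends to stay plausible: the result still sounds like the same kind of plate.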