Person: Freeman, Jason

Publication Search Results

  • Item
    A study of exploratory analysis in melodic sonification with structural and durational time scales
    (Georgia Institute of Technology, 2018-06) Tsuchiya, Takahiko ; Freeman, Jason
    Melodic sonification is one of the most common sonification methods: data modulates the pitch of an audio synthesizer over time. This simple design, however, still raises questions about how we listen to such a melody and perceive the motions and patterns characterized by the underlying data. We argue that analytical listening may focus on different ranges of the melody at different times, discovering the pitch (and data) relationships gradually and over repeated listening. To examine these behaviors during real-time listening to a melodic sonification, we conducted a user study in which participants had interactive control over the time and pitch resolution of the sonification. The study also examines how these changing resolutions relate to perceived musicality. The results indicate a general relationship between time progression and the use of the time-resolution control to analyze data characteristics, while the pitch-resolution control correlates more strongly with subjective perceptions of musicality. (A minimal sketch of the basic pitch mapping follows this list.)
  • Item
    Spectral Parameter Encoding: Towards a Framework for Functional-Aesthetic Sonification
    (Georgia Institute of Technology, 2017-06) Tsuchiya, Takahiko ; Freeman, Jason
    Auditory-display research faces a largely unsolved challenge: balancing functional and aesthetic considerations. While functional designs tend to sacrifice musical expressivity for data fidelity, aesthetic or musical sound organization arguably has the potential to represent multi-dimensional or hierarchical data structures with enhanced perceptibility. Existing musical designs, however, generally employ nonlinear or interpretive mappings that hinder the assessment of functionality. The authors propose a framework for designing expressive and complex sonifications using small-timescale musical hierarchies, such as harmonic and timbral structures, while maintaining data integrity: descriptive analysis by a machine listener verifies a close-to-the-original recovery of the encoded data. (A sketch of this encoding idea follows this list.)
  • Item
    Data-to-music API: Real-time data-agnostic sonification with musical structure models
    (Georgia Institute of Technology, 2015-07) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    In sonification methodologies that aim to represent the underlying data accurately, musical or artistic approaches are often dismissed as not transparent, likely to distort the data, not generalizable, or not reusable for different data types. Scientific applications of sonification have therefore been hesitant to use approaches guided by artistic aesthetics and musical expressivity. All sonifications, however, may have musical effects on listeners, as ears trained by daily exposure to music naturally distinguish musical and non-musical sound relationships, such as harmony, rhythmic stability, or timbral balance. This study proposes to take advantage of the musical effects of sonification in a systematic manner: data may be mapped to high-level musical parameters rather than one-to-one to low-level audio parameters. An approach to creating models that encapsulate modulatable musical structures is proposed in the context of the new Data-to-Music JavaScript API. The API provides an environment for the rapid development of data-agnostic sonification applications in a web browser, with a model-based, modular musical-structure system. The proposed model system is compared to existing sonification frameworks as well as to music theory and composition models, and issues regarding distortion of the original data, transparency, and the reusability of musical models are discussed. (A sketch contrasting low- and high-level mappings follows this list.)
  • Item
    Sonification for the Installation Drawn Together
    (Georgia Institute of Technology, 2012-06) Bretan, Mason ; Weinberg, Gil ; Freeman, Jason
    This extended abstract describes Drawn Together, an interactive art installation in which a person takes turns drawing with a computer. We describe the process of the interaction and the methods used to creatively sonify the process and the animations. There are three main states in the interactive process that are sonically represented using audio samples in a mix of background and foreground sounds. The lines drawn by the computer are sonified using a set of features describing length, rate of time drawn, location, and curviness.