Person:
Freeman, Jason


Publication Search Results

Now showing 1 - 5 of 5
  • Item
    A study of exploratory analysis in melodic sonification with structural and durational time scales
    (Georgia Institute of Technology, 2018-06) Tsuchiya, Takahiko ; Freeman, Jason
    Melodic sonification is one of the most common methods of sonification: data modulates the pitch of an audio synthesizer over time. This simple sonification, however, still raises questions about how we listen to a melody and perceive the motions and patterns characterized by the underlying data. We argue that analytical listening to such melodies may focus on different ranges of the melody at different times and discover the pitch (and data) relationships gradually over time and after repeated listening. To examine such behaviors in real-time listening to a melodic sonification, we conducted a user study employing interactive time and pitch resolution controls for the user. The study also examines the relationships of these changing time and pitch resolutions to perceived musicality. The results indicate a stronger general relationship between the time progression and the use of the time-resolution control to analyze data characteristics, while the pitch-resolution controls tend to correlate more with subjective perceptions of musicality.
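The melodic sonification described above (data modulating pitch over time) can be sketched minimally. The function names, the MIDI range, and the semitone-grid quantization standing in for a pitch-resolution control are illustrative assumptions, not the study's actual implementation:

```python
def melodic_sonification(data, low_midi=48, high_midi=84):
    """Linearly map each data point to a MIDI note in [low_midi, high_midi]."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # guard against constant data
    return [round(low_midi + (v - lo) / span * (high_midi - low_midi))
            for v in data]

def quantize(note, step):
    """Coarsen pitch resolution by snapping a note to a grid of `step` semitones."""
    return round(note / step) * step

def midi_to_hz(note):
    """Equal-tempered conversion; A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

notes = melodic_sonification([0.0, 0.5, 1.0])  # -> [48, 66, 84]
```

Passing the notes through `quantize` with a larger `step` mimics the coarser pitch-resolution settings the study let listeners explore interactively.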
  • Item
    Spectral Parameter Encoding: Towards a Framework for Functional-Aesthetic Sonification
    (Georgia Institute of Technology, 2017-06) Tsuchiya, Takahiko ; Freeman, Jason
    Auditory-display research has had a largely unsolved challenge of balancing functional and aesthetic considerations. While functional designs tend to reduce musical expressivity for the fidelity of data, aesthetic or musical sound organization arguably has a potential for representing multi-dimensional or hierarchical data structure with enhanced perceptibility. Existing musical designs, however, generally employ nonlinear or interpretive mappings that hinder the assessment of functionality. The authors propose a framework for designing expressive and complex sonification using small timescale musical hierarchies, such as the harmony and timbral structures, while maintaining data integrity by ensuring a close-to-the-original recovery of the encoded data utilizing descriptive analysis by a machine listener.
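A hedged sketch of the encode/recover loop the abstract alludes to: data values become amplitudes of harmonic partials, and a simple "machine listener" recovers them from FFT magnitudes. The base frequency, sample rate, and function names are assumptions for illustration, not the authors' framework:

```python
import numpy as np

def encode(values, base_hz=200.0, sr=8000, dur=1.0):
    """Encode each value in [0, 1] as the amplitude of one harmonic partial."""
    t = np.arange(int(sr * dur)) / sr
    sig = np.zeros_like(t)
    for k, v in enumerate(values, start=1):
        sig = sig + v * np.sin(2 * np.pi * base_hz * k * t)
    return sig

def decode(sig, n, base_hz=200.0, sr=8000):
    """'Machine listener': recover partial amplitudes from FFT magnitude bins."""
    mags = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
    return [float(mags[round(base_hz * k * len(sig) / sr)])
            for k in range(1, n + 1)]
```

Because each partial falls exactly on an FFT bin in this toy setup, `decode(encode(values), len(values))` returns the original values to within floating-point error, illustrating the "close-to-the-original recovery" the framework aims for.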
  • Item
    Data-Driven Live Coding with DataToMusic API
    (Georgia Institute of Technology, 2016-04) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints and format-dependent mapping of data to synthesis parameters. In this paper, we describe a unique approach for these issues with a data-driven symbolic music application programming interface (API) for rapid and interactive development. We introduce DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under a redesign. We implemented an audio event system in the clock and synthesizer classes in the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample rate) processing in the context of real-time data sonification.
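As a rough illustration of connecting symbolic timing to sample-level timing (not the DTM API's actual clock class; the names and defaults here are invented), beat positions can be resolved to audio-sample indices:

```python
class SampleClock:
    """Hypothetical sketch: resolve symbolic beat positions to sample indices."""

    def __init__(self, sample_rate=44100, bpm=120):
        self.sample_rate = sample_rate
        self.bpm = bpm

    def beats_to_samples(self, beat):
        """One beat lasts 60/bpm seconds; convert to the nearest sample index."""
        return round(beat * 60.0 / self.bpm * self.sample_rate)

clock = SampleClock()
onsets = [clock.beats_to_samples(b) for b in (0, 1, 2.5)]
# At 120 BPM and 44.1 kHz, one beat = 0.5 s = 22050 samples
```

Scheduling events by sample index rather than wall-clock time is what allows the sample-accurate sequencing the paper describes, independent of browser timer jitter.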
  • Item
    Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring
    (Georgia Institute of Technology, 2016-04) Winters, R. Michael ; Tsuchiya, Takahiko ; Lerner, Lee W. ; Freeman, Jason
    This paper describes ongoing research in the presentation of geo-located, real-time data using web-based audio and visualization technologies. Due to both the increase of devices and diversity of information being accumulated in real-time, there is a need for cohesive techniques to render this information in a useable and functional way for a variety of audiences. We situate web-sonification (sonification of web-based information using web-based technologies) as a particularly valuable avenue for display. When combined with visualizations, it can increase engagement and allow users to profit from the additional affordances of human hearing. This theme is developed in the description of two multi-modal dashboards designed for data in the context of the Internet of Things (IoT) and Smart Cities. In both cases, Web Audio provided the back-end for sonification, but a new API called DataToMusic (DTM) was used to make common sonification operations easier to implement. DTM provides a valuable framework for web-sonification and we highlight its use in the two dashboards. Following our description of the implementations, the dashboards are compared and evaluated, contributing to general conclusions on the use of web-audio for sonification, and suggestions for future dashboards.
  • Item
    Data-to-music API: Real-time data-agnostic sonification with musical structure models
    (Georgia Institute of Technology, 2015-07) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    In sonification methodologies that aim to represent the underlying data accurately, musical or artistic approaches are often dismissed as being not transparent, likely to distort the data, not generalizable, or not reusable for different data types. Scientific applications for sonification have been, therefore, hesitant to use approaches guided by artistic aesthetics and musical expressivity. All sonifications, however, may have musical effects on listeners, as our trained ears with daily exposure to music tend to naturally distinguish musical and non-musical sound relationships, such as harmony, rhythmic stability, or timbral balance. This study proposes to take advantage of the musical effects of sonification in a systematic manner. Data may be mapped to high-level musical parameters rather than to one-to-one low-level audio parameters. An approach to create models that encapsulate modulatable musical structures is proposed in the context of the new DataTo- Music JavaScript API. The API provides an environment for rapid development of data-agnostic sonification applications in a web browser, with a model-based modular musical structure system. The proposed model system is compared to existing sonification frameworks as well as music theory and composition models. Also, issues regarding the distortion of original data, transparency, and reusability of musical models are discussed.