Person: Freeman, Jason

Publication Search Results

Now showing 1 - 10 of 12

Directed Evolution in Live Coding Music Performance

2020-10-24; Dasari, Sandeep; Freeman, Jason

Genetic algorithms are extensively used to understand, simulate, and create works of art and music. In this paper, a similar approach is taken, applying basic evolutionary algorithms to performing music live with code. Often considered improvisational or experimental, live coding music comes with its own set of challenges, and genetic algorithms offer the potential to address these long-standing problems. Traditional evolutionary applications in music have focused on novelty search to create new sounds, sequences of notes or chords, and effects. In contrast, this paper focuses on live performance to create directed, evolving musical pieces. The paper also details key design decisions, the implementation, and the usage of a novel genetic algorithm API created for a popular live coding language.
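
A minimal, generic sketch of the kind of loop such a system runs, assuming a simple selection/crossover/mutation scheme with a hand-written fitness function standing in for the performer's direction; none of the names below come from the paper's API:

    // One generation of a basic genetic algorithm over note patterns.
    // The "directed" part of directed evolution lives in fitness(),
    // which here rewards notes drawn from a target scale.
    const SCALE = [0, 2, 4, 5, 7, 9, 11]; // C-major pitch classes

    function randomPattern(len) {
      return Array.from({ length: len }, () => Math.floor(Math.random() * 12));
    }

    function fitness(pattern) {
      return pattern.filter(n => SCALE.includes(n % 12)).length;
    }

    function crossover(a, b) {
      const cut = Math.floor(Math.random() * a.length);
      return a.slice(0, cut).concat(b.slice(cut));
    }

    function mutate(pattern, rate = 0.1) {
      return pattern.map(n =>
        Math.random() < rate ? Math.floor(Math.random() * 12) : n);
    }

    function nextGeneration(pop) {
      const ranked = [...pop].sort((x, y) => fitness(y) - fitness(x));
      const parents = ranked.slice(0, pop.length / 2);
      return parents.concat(parents.map((p, i) =>
        mutate(crossover(p, parents[(i + 1) % parents.length]))));
    }

    let population = Array.from({ length: 8 }, () => randomPattern(16));
    population = nextGeneration(population); // evolve once per musical cycle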

Spectral Parameter Encoding: Towards a Framework for Functional-Aesthetic Sonification

2017-06; Tsuchiya, Takahiko; Freeman, Jason

Auditory-display research has long faced the unsolved challenge of balancing functional and aesthetic considerations. While functional designs tend to reduce musical expressivity for the sake of data fidelity, aesthetic or musical sound organization arguably has the potential to represent multi-dimensional or hierarchical data structures with enhanced perceptibility. Existing musical designs, however, generally employ nonlinear or interpretive mappings that hinder the assessment of functionality. The authors propose a framework for designing expressive and complex sonifications that use small-timescale musical hierarchies, such as harmonic and timbral structures, while maintaining data integrity by ensuring a close-to-the-original recovery of the encoded data through descriptive analysis by a machine listener.
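
A hypothetical sketch of the idea with the Web Audio API: a normalized datum sets the amplitude rolloff of an additive tone's partials, so the value could in principle be recovered from the spectrum by a machine listener. The parameter choices are illustrative, not the paper's:

    // Encode d in [0, 1] as timbral brightness: larger d -> slower
    // rolloff of partial amplitudes -> brighter spectrum.
    const ctx = new AudioContext();

    function playEncoded(d, fundamental = 220, partials = 8) {
      for (let k = 1; k <= partials; k++) {
        const osc = ctx.createOscillator();
        const gain = ctx.createGain();
        osc.frequency.value = fundamental * k;       // k-th harmonic
        gain.gain.value = (1 / k) * Math.pow(d, k - 1);
        osc.connect(gain).connect(ctx.destination);
        osc.start();
        osc.stop(ctx.currentTime + 1);
      }
    }

    playEncoded(0.7); // the datum 0.7, rendered as one harmonic timbre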

Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring

2016-04; Winters, R. Michael; Tsuchiya, Takahiko; Lerner, Lee W.; Freeman, Jason

This paper describes ongoing research in the presentation of geo-located, real-time data using web-based audio and visualization technologies. Due both to the increase in devices and to the diversity of information being accumulated in real time, there is a need for cohesive techniques to render this information in a usable and functional way for a variety of audiences. We situate web-sonification, the sonification of web-based information using web-based technologies, as a particularly valuable avenue for display. When combined with visualizations, it can increase engagement and allow users to profit from the additional affordances of human hearing. This theme is developed in the description of two multi-modal dashboards designed for data in the context of the Internet of Things (IoT) and Smart Cities. In both cases, Web Audio provided the back end for sonification, but a new API called DataToMusic (DTM) was used to make common sonification operations easier to implement. DTM provides a valuable framework for web-sonification, and we highlight its use in the two dashboards. Following our description of the implementations, the dashboards are compared and evaluated, contributing to general conclusions on the use of Web Audio for sonification and suggestions for future dashboards.
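
The shared pattern behind both dashboards can be sketched in a few lines: each incoming reading updates a visual element and an audio parameter in tandem. The stream URL, message format, and mapping range below are placeholders, not the dashboards' actual configuration:

    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    osc.connect(ctx.destination);
    osc.start();

    const socket = new WebSocket('wss://example.org/sensor-stream');
    socket.onmessage = (event) => {
      const reading = JSON.parse(event.data); // e.g. { value: 0.42 }
      // Visual channel: update a chart here (omitted).
      // Auditory channel: map the reading onto pitch, 200-800 Hz.
      osc.frequency.setTargetAtTime(
        200 + 600 * reading.value, ctx.currentTime, 0.05);
    };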

Data-to-music API: Real-time data-agnostic sonification with musical structure models

2015-07; Tsuchiya, Takahiko; Freeman, Jason; Lerner, Lee W.

In sonification methodologies that aim to represent the underlying data accurately, musical or artistic approaches are often dismissed as not transparent, likely to distort the data, not generalizable, or not reusable for different data types. Scientific applications of sonification have therefore been hesitant to use approaches guided by artistic aesthetics and musical expressivity. All sonifications, however, may have musical effects on listeners, as ears trained by daily exposure to music naturally distinguish musical and non-musical sound relationships, such as harmony, rhythmic stability, or timbral balance. This study proposes to take advantage of the musical effects of sonification in a systematic manner: data may be mapped to high-level musical parameters rather than one-to-one to low-level audio parameters. An approach to creating models that encapsulate modulatable musical structures is proposed in the context of the new DataToMusic JavaScript API. The API provides an environment for the rapid development of data-agnostic sonification applications in a web browser, with a model-based, modular musical structure system. The proposed model system is compared to existing sonification frameworks as well as to music theory and composition models, and issues regarding the distortion of original data, transparency, and the reusability of musical models are discussed.
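
The contrast between one-to-one low-level mapping and high-level musical mapping can be illustrated as follows; this is our sketch, not the DataToMusic API itself:

    const MAJOR = [0, 2, 4, 5, 7, 9, 11]; // major-scale intervals in semitones

    // Low-level, one-to-one: d in [0, 1] -> raw frequency in Hz.
    function toFrequency(d) {
      return 200 + 600 * d;
    }

    // High-level, musical: d -> scale degree -> equal-tempered pitch,
    // so outputs stay harmonically related regardless of the data.
    function toScalePitch(d, base = 60 /* MIDI middle C */) {
      const degree = Math.floor(d * (MAJOR.length - 1));
      const midi = base + MAJOR[degree];
      return 440 * Math.pow(2, (midi - 69) / 12);
    }

    console.log(toFrequency(0.5), toScalePitch(0.5)); // 500 Hz vs. ~349.2 Hz (F4)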

Promoting Intentions to Persist in Computing: An Examination of Six Years of the EarSketch Program

2020-01-21; Wanzer, Dana Linnell; McKlin, Thomas (Tom); Freeman, Jason; Magerko, Brian; Lee, Taneisha

Background and Context: EarSketch was developed as a program to foster persistence in computer science among diverse student populations. Objective: To test the effectiveness of EarSketch in promoting intentions to persist, particularly among female students and under-represented minority students. Method: Meta-analyses, structural equation modeling, multi-level modeling, and qualitative analyses were performed to examine how participation in EarSketch and other factors affect students' intentions to persist in computing. Findings: Students significantly increased their intentions to persist in computing, g = .40 [.25, .54], but examination within just the five quasi-experimental studies did not reveal a significant difference between students in EarSketch and students not in EarSketch, g = .08 [-.07, .23]. Student attitudes toward computing and the perceived authenticity of the EarSketch environment significantly predicted intentions to persist in computing. Implications: Participation in computer science education can increase students' intentions to persist in computing, and EarSketch is one program that can aid these intentions.

An interactive, graphical coding environment for EarSketch online using Blockly and Web Audio API

2016-04; Mahadevan, Anand; Freeman, Jason; Magerko, Brian

This paper presents an interactive graphical programming environment for EarSketch using Blockly and the Web Audio API. This visual programming element sidesteps the syntactical challenges common to learning text-based languages, thereby targeting a wider range of users in both informal and academic settings. The implementation allows seamless integration with the existing EarSketch web environment, saving block-based code to the cloud as well as exporting it to Python and JavaScript.
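
For a sense of the export target, a simple block arrangement might produce EarSketch JavaScript along these lines; this is a sketch, and the clip constant is a placeholder rather than a confirmed name from the EarSketch sound library:

    init();                                  // start a new EarSketch project
    setTempo(120);                           // beats per minute
    fitMedia(HOUSE_BREAKBEAT_001, 1, 1, 5);  // clip, track, start, end (measures)
    finish();                                // render the result to the DAW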

Live Coding With EarSketch

2016-04; Freeman, Jason

EarSketch combines a Python/JavaScript API, a digital audio workstation (DAW) visualization, an audio loop library, and an educational curriculum into a web-based music programming environment. Though originally designed as a classroom tool for teaching music technology and computer science, it has recently been expanded to support live coding in concert performance. This live coding performance explores the artistic potential of algorithmically manipulating audio loops in a multi-track DAW paradigm, as well as the potential of DAW-driven visualizations to demystify live coding and algorithms for a concert audience.
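
As a flavor of what such a performance script can look like, here is a sketch in EarSketch's JavaScript API that places loops algorithmically across measures; the clip constants are placeholders, not necessarily names from the EarSketch library:

    init();
    setTempo(128);
    for (var measure = 1; measure <= 16; measure++) {
      // Alternate two clips, and rest every fourth measure to thin the texture.
      var clip = (measure % 2 === 0) ? DUBSTEP_BASS_WOBBLE_002
                                     : HOUSE_BREAKBEAT_003;
      if (measure % 4 !== 0) {
        fitMedia(clip, 1, measure, measure + 1);
      }
    }
    finish();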

A study of exploratory analysis in melodic sonification with structural and durational time scales

2018-06; Tsuchiya, Takahiko; Freeman, Jason

Melodic sonification is one of the most common sonification methods: data modulate the pitch of an audio synthesizer over time. This simple sonification, however, still raises questions about how we listen to a melody and perceive the motions and patterns characterized by the underlying data. We argue that analytical listening to such melodies may focus on different ranges of the melody at different times, discovering the pitch (and data) relationships gradually and over repeated listenings. To examine these behaviors in real-time listening to a melodic sonification, we conducted a user study that gave participants interactive control over time and pitch resolution. The study also examines how these changing time and pitch resolutions relate to perceived musicality. The results indicate a strong general relationship between time progression and use of the time-resolution control to analyze data characteristics, while the pitch-resolution control correlates more with subjective perceptions of musicality.
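
A sketch of the basic stimulus, assuming Web Audio and a quantization step as the pitch-resolution control; this is illustrative, not the study's actual implementation:

    const ctx = new AudioContext();

    // Play one pitch per datum; `levels` quantizes the pitch axis,
    // mirroring the resolution control given to participants.
    function sonify(data, levels = 12, noteDur = 0.25) {
      const osc = ctx.createOscillator();
      osc.connect(ctx.destination);
      const t0 = ctx.currentTime;
      data.forEach((d, i) => {
        const step = Math.round(d * (levels - 1));
        const midi = 60 + Math.round(step * (24 / (levels - 1))); // 2-octave span
        osc.frequency.setValueAtTime(
          440 * Math.pow(2, (midi - 69) / 12), t0 + i * noteDur);
      });
      osc.start(t0);
      osc.stop(t0 + data.length * noteDur);
    }

    sonify([0.1, 0.4, 0.9, 0.6, 0.3], 5); // coarse 5-level pitch resolution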

Data-Driven Live Coding with DataToMusic API

2016-04; Tsuchiya, Takahiko; Freeman, Jason; Lerner, Lee W.

Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints, and format-dependent mapping of data to synthesis parameters. In this paper, we describe a unique approach to these issues with a data-driven, symbolic music application programming interface (API) for rapid and interactive development. We introduce the DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under redesign. We implemented an audio event system in the clock and synthesizer classes of the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample-rate) processing in the context of real-time data sonification.
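
The general idiom behind such a clock, sketched here rather than DTM's actual implementation, is a timer loop that schedules events a short window ahead on the sample-accurate AudioContext clock, so no ScriptProcessorNode is needed:

    const ctx = new AudioContext();
    const LOOKAHEAD = 0.1; // seconds of events scheduled ahead
    const INTERVAL = 25;   // ms between scheduler wake-ups
    let nextBeat = ctx.currentTime;

    function playClick(time) {
      const osc = ctx.createOscillator();
      osc.frequency.value = 880;
      osc.connect(ctx.destination);
      osc.start(time);   // sample-accurate start
      osc.stop(time + 0.05);
    }

    function scheduler() {
      while (nextBeat < ctx.currentTime + LOOKAHEAD) {
        playClick(nextBeat);
        nextBeat += 0.5; // 120 BPM
      }
      setTimeout(scheduler, INTERVAL);
    }
    scheduler();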

A JavaScript Pitch Shifting Library for EarSketch with Asm.js

2016-04; Martinez, Juan Carlos; Freeman, Jason

A JavaScript pitch-shifting library based on asm.js was developed for the EarSketch website. EarSketch is a Web Audio API-based educational website that teaches computer science principles through music technology and composition. Students write code in Python and JavaScript to manipulate and transform audio loops in a multi-track digital audio workstation paradigm. The pitch-shifting library provides a cross-platform, client-side pitch-shifting service for EarSketch, changing the pitch of audio loop files without modifying their playback speed. It replaces a previous server-side pitch-shifting service with a noticeable increase in performance. This paper describes the implementation and performance of the library, transpiled from a set of basic DSP routines written in C to asm.js using Emscripten.
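
The calling convention for such a library typically looks like the following; `_pitch_shift` and its signature are hypothetical stand-ins, not the library's documented export, but the heap copy-in/copy-out pattern is standard Emscripten usage:

    // Run a C pitch-shift routine compiled to asm.js on a Float32Array.
    function pitchShift(Module, samples, semitones) {
      const bytes = samples.length * Float32Array.BYTES_PER_ELEMENT;
      const ptr = Module._malloc(bytes);             // allocate in the asm.js heap
      Module.HEAPF32.set(samples, ptr >> 2);         // copy samples in
      Module._pitch_shift(ptr, samples.length, semitones); // process in place
      const out = Module.HEAPF32.slice(ptr >> 2, (ptr >> 2) + samples.length);
      Module._free(ptr);
      return out;                                    // shifted samples
    }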