Person:
Freeman, Jason

Publication Search Results

Now showing 1 - 7 of 7
  • Item
    An interactive, graphical coding environment for EarSketch online using Blockly and Web Audio API
    (Georgia Institute of Technology, 2016-04) Mahadevan, Anand ; Freeman, Jason ; Magerko, Brian
    This paper presents an interactive graphical programming environment for EarSketch, using Blockly and Web Audio API. This visual programming element sidesteps syntactical challenges common to learning text-based languages, thereby targeting a wider range of users in both informal and academic settings. The implementation allows seamless integration with the existing EarSketch web environment, saving block-based code to the cloud as well as exporting it to Python and JavaScript.
  • Item
    Data-Driven Live Coding with DataToMusic API
    (Georgia Institute of Technology, 2016-04) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints and format-dependent mapping of data to synthesis parameters. In this paper, we describe an approach to these issues with a data-driven symbolic music application programming interface (API) for rapid and interactive development. We introduce DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under a redesign. We implemented an audio event system in the clock and synthesizer classes in the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample rate) processing in the context of real-time data sonification.
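    The data-to-parameter mapping the abstract describes can be sketched in a few lines. This is an illustrative example only, not the actual DataToMusic (DTM) API: it scales an arbitrary numeric series into a MIDI pitch range so each data point can drive a synthesis parameter.

    ```javascript
    // Hypothetical sketch of data-to-parameter mapping for sonification.
    // Linearly rescales a numeric series into a MIDI pitch range.
    function mapToPitches(data, loPitch = 48, hiPitch = 84) {
      const min = Math.min(...data);
      const max = Math.max(...data);
      const span = max - min || 1; // avoid division by zero for constant data
      return data.map(v =>
        Math.round(loPitch + ((v - min) / span) * (hiPitch - loPitch))
      );
    }

    // Example: sensor readings mapped into a one-octave range.
    console.log(mapToPitches([0, 5, 10], 60, 72)); // → [60, 66, 72]
    ```

    In a browser, each resulting pitch could then set the frequency of a Web Audio oscillator at a scheduled clock tick; the mapping itself stays data-agnostic, in the spirit of the DTM design.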
  • Item
    Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring
    (Georgia Institute of Technology, 2016-04) Winters, R. Michael ; Tsuchiya, Takahiko ; Lerner, Lee W. ; Freeman, Jason
    This paper describes ongoing research in the presentation of geo-located, real-time data using web-based audio and visualization technologies. Due to both the increase of devices and diversity of information being accumulated in real-time, there is a need for cohesive techniques to render this information in a usable and functional way for a variety of audiences. We situate web-sonification (sonification of web-based information using web-based technologies) as a particularly valuable avenue for display. When combined with visualizations, it can increase engagement and allow users to profit from the additional affordances of human hearing. This theme is developed in the description of two multi-modal dashboards designed for data in the context of the Internet of Things (IoT) and Smart Cities. In both cases, Web Audio provided the back-end for sonification, but a new API called DataToMusic (DTM) was used to make common sonification operations easier to implement. DTM provides a valuable framework for web-sonification, and we highlight its use in the two dashboards. Following our description of the implementations, the dashboards are compared and evaluated, contributing to general conclusions on the use of Web Audio for sonification and suggestions for future dashboards.
  • Item
    Live Coding With EarSketch
    (Georgia Institute of Technology, 2016-04) Freeman, Jason
    EarSketch combines a Python / JavaScript API, a digital audio workstation (DAW) visualization, an audio loop library, and an educational curriculum into a web-based music programming environment. While it was designed originally as a classroom educational tool for music technology and computer science, it has recently been expanded to support live coding in concert performance. This live coding performance explores the artistic potential of algorithmic manipulations of audio loops in a multi-track DAW paradigm and explores the potential of DAW-driven visualizations to demystify live coding and algorithms for a concert audience.
  • Item
    A JavaScript Pitch Shifting Library for EarSketch with Asm.js
    (Georgia Institute of Technology, 2016-04) Martinez, Juan Carlos ; Freeman, Jason
    A JavaScript pitch-shifting library based on asm.js was developed for the EarSketch website. EarSketch is a Web Audio API-based educational website that teaches computer science principles through music technology and composition. Students write code in Python and JavaScript to manipulate and transform audio loops in a multi-track digital audio workstation paradigm. The pitch-shifting library provides a cross-platform, client-side pitch-shifting service to EarSketch to change the pitch of audio loop files without modifying their playback speed. It replaces a previous server-side pitch-shifting service with a noticeable increase in performance. This paper describes the implementation and performance of the library, transpiled from a set of basic DSP routines written in C and converted to asm.js using Emscripten.
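    A hedged sketch of the arithmetic behind the problem the library solves, not the library itself: a shift of n semitones corresponds to a playback-rate ratio of 2^(n/12), and naive resampling by that ratio also changes duration, which is why a dedicated pitch shifter is needed to alter pitch while keeping playback speed fixed.

    ```javascript
    // Semitone shift → playback-rate ratio (equal temperament).
    function semitonesToRatio(n) {
      return Math.pow(2, n / 12);
    }

    // Naive linear-interpolation resampler: raises or lowers pitch by
    // `ratio`, but also shortens or lengthens the buffer, illustrating
    // why resampling alone cannot pitch-shift at constant tempo.
    function resample(samples, ratio) {
      const outLen = Math.floor(samples.length / ratio);
      const out = new Float32Array(outLen);
      for (let i = 0; i < outLen; i++) {
        const pos = i * ratio;
        const i0 = Math.floor(pos);
        const i1 = Math.min(i0 + 1, samples.length - 1);
        const frac = pos - i0;
        out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
      }
      return out;
    }

    semitonesToRatio(12); // one octave up → ratio of 2
    ```

    Resampling an N-sample buffer one octave up yields only N/2 samples; a true pitch shifter (such as the C routines compiled to asm.js here) combines time stretching with resampling so the output duration matches the input.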
  • Item
    Composer, Performer, Listener
    (Georgia Institute of Technology, 2010-11-18) Freeman, Jason
    Jason Freeman’s works break down conventional barriers between composers, performers, and listeners, using cutting-edge technology and unconventional notation to turn audiences and musicians into compositional collaborators. His music has been performed by the American Composers Orchestra, Speculum Musicae, the So Percussion Group, the Rova Saxophone Quartet, the Nieuw Ensemble, Le Nouvel Ensemble Moderne, and Evan Ziporyn; and his works have been featured at the Lincoln Center Festival, the Boston CyberArt Festival, 01SJ, and the Transmediale Festival and featured in the New York Times and on National Public Radio. N.A.G. (Network Auralization for Gnutella) (2003), a commission from Turbulence.org, was described by Billboard as “…an example of the web’s mind-expanding possibilities.”
  • Item
    Storage in Collaborative Networked Art
    (Georgia Institute of Technology, 2009) Freeman, Jason
    This chapter outlines some of the challenges and opportunities associated with storage in networked art. Using comparative analyses of collaborative networked music as a starting point, this chapter explores how networked storage can transform the relationship between composition and improvisation; how it can influence network designs focused on shared material or shared control; how it can actively and autonomously manipulate its own contents; how it can circumvent problems of network latency and facilitate asynchronous collaboration; and how it can exist as a core component of a work’s design without being at the core of every user’s experience.