Organizational Unit:
College of Design


Publication Search Results

Now showing 1 - 10 of 12
  • Item
    BPMTimeline: JavaScript Tempo Functions and Time Mappings using an Analytical Solution
    (Georgia Institute of Technology, 2016-04) Dias, Bruno ; Pinto, H. Sofia ; Matos, David M.
    Time mapping is a common feature in many (commercial and/or open-source) Digital Audio Workstations, allowing the musician to automate tempo changes of a musical performance or work, as well as to visualize the relation between score time (beats) and real/performance time (seconds). Unfortunately, available music production, performance and remixing tools implemented with web technologies like JavaScript and the Web Audio API do not offer any mechanism for flexible, seamless tempo manipulation and automation. In this paper, we present BPMTimeline, a time mapping library providing a seamless mapping between score and performance time. To achieve this, we model tempo changes as tempo functions (a well documented subject in the literature) and realize the mappings through the integral, and the inverse of the integral, of tempo functions.
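The integral-based mapping the abstract describes can be illustrated for the simplest case of piecewise-constant tempo, where the integral of 60/BPM over beats reduces to a running sum. This is a minimal sketch under that assumption, not the BPMTimeline API; all names are illustrative.

```javascript
// Piecewise-constant tempo map: each segment starts at a beat position with a
// tempo in BPM. Score time (beats) maps to performance time (seconds) via the
// integral of 60/BPM over beats, which here is a cumulative sum per segment.
function makeTempoMap(segments) {
  // segments: [{beat: 0, bpm: 120}, {beat: 8, bpm: 60}, ...] sorted by beat
  const starts = [0]; // performance time at the start of each segment
  for (let i = 1; i < segments.length; i++) {
    const prev = segments[i - 1];
    const dBeats = segments[i].beat - prev.beat;
    starts.push(starts[i - 1] + dBeats * 60 / prev.bpm);
  }
  return {
    beatsToSeconds(beat) {
      let i = segments.length - 1;
      while (i > 0 && segments[i].beat > beat) i--;
      return starts[i] + (beat - segments[i].beat) * 60 / segments[i].bpm;
    },
    // The inverse mapping: performance time back to score time.
    secondsToBeats(t) {
      let i = starts.length - 1;
      while (i > 0 && starts[i] > t) i--;
      return segments[i].beat + (t - starts[i]) * segments[i].bpm / 60;
    },
  };
}
```

Continuous tempo functions (e.g. linear ramps) would replace the per-segment sum with a closed-form integral, but the two-way structure stays the same.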
  • Item
    Tune.js: A Microtonal Web Audio Library
    (Georgia Institute of Technology, 2016-04) Taylor, Benjamin ; Bernstein, Andrew
    The authors share Tune.js, a JavaScript library of over 3,000 microtonal tunings and historical temperaments for use with web audio. The current state of tuning in web audio is reviewed, followed by an explication of the library's creation and an overview of its potential applications. Finally, the authors share several small projects made with Tune.js and ponder future development opportunities.
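The core idea of a microtonal tuning library can be sketched as a list of frequency ratios per scale degree, mapped onto a root frequency. This is a hypothetical illustration, not the actual Tune.js API; `makeTuning` and the octave-repetition assumption (2/1) are inventions for the example.

```javascript
// Illustrative sketch: a tuning is a list of frequency ratios per scale
// degree; a degree index is converted to Hz relative to a root frequency,
// repeating at the octave (2/1) after the last degree.
function makeTuning(ratios, rootHz) {
  const span = ratios.length; // degrees per octave repetition
  return function freq(degree) {
    const oct = Math.floor(degree / span);
    const idx = ((degree % span) + span) % span; // handles negative degrees
    return rootHz * ratios[idx] * Math.pow(2, oct);
  };
}

// Example: a 5-limit just-intonation major scale relative to the tonic.
const justMajor = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8];
```

With a root of 440 Hz, degree 4 (the just fifth, ratio 3/2) yields 660 Hz, and degree 7 wraps to the octave at 880 Hz.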
  • Item
    Improving time travel experience by combining annotations
    (Georgia Institute of Technology, 2016-04) Vieilleribière, Adrien
    Whenever recorded audio material is played back, navigating through it in a relevant way is a key expectation. This paper provides a formalism to introduce flexible navigation systems based on sets of annotations applying to the same audio object. It aims to build web interfaces for exploring audio in time that are robust for large data-sets and long files. Introducing the concept of weights applied to annotations, it specifies a parameterized version of the next/previous functionality and presents an effective implementation.
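The weighted next/previous idea described above can be sketched as follows; the names and data shape are illustrative assumptions, not the paper's formalism.

```javascript
// Sketch: annotations carry a time and a weight; next(t, minWeight) jumps to
// the first annotation after t whose weight passes the threshold, so the same
// annotation set supports coarse or fine navigation depending on the parameter.
function makeNavigator(annotations) {
  // annotations: [{time: seconds, weight: number, label: string}, ...] sorted by time
  return {
    next(t, minWeight) {
      for (const a of annotations) {
        if (a.time > t && a.weight >= minWeight) return a;
      }
      return null; // no qualifying annotation after t
    },
    previous(t, minWeight) {
      for (let i = annotations.length - 1; i >= 0; i--) {
        const a = annotations[i];
        if (a.time < t && a.weight >= minWeight) return a;
      }
      return null;
    },
  };
}
```

Raising `minWeight` skips low-importance annotations, which is one way the parameterization keeps navigation usable on long files with dense annotation sets.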
  • Item
    Data-Driven Live Coding with DataToMusic API
    (Georgia Institute of Technology, 2016-04) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints and format-dependent mapping of data to synthesis parameters. In this paper, we describe a unique approach to these issues with a data-driven symbolic music application programming interface (API) for rapid and interactive development. We introduce DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under a redesign. We implemented an audio event system in the clock and synthesizer classes in the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample rate) processing in the context of real-time data sonification.
  • Item
    Geolocation Adaptive Music Player
    (Georgia Institute of Technology, 2016-04) Perez-Carrillo, Alfonso ; Thalmann, Florian ; Fazekas, György ; Sandler, Mark
    We present a web-based cross-platform adaptive music player that combines music information retrieval (MIR) and audio processing technologies with the interaction capabilities offered by GPS-equipped mobile devices. The application plays back a list of music tracks, which are linked to geographic paths in a map. The music player has two main enhanced features that adjust to the location of the user, namely, adaptable length of the songs and automatic transitions between tracks. Music tracks are represented as data packages containing audio and metadata (descriptive and behavioral), building on the concept of a Digital Music Object (DMO). This representation, in line with next-generation web technologies, allows for flexible production and consumption of novel musical experiences. A content provider assembles a data pack with music, descriptive analysis and action parameters that users can experience and control within the restrictions and templates defined by the provider.
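A location-triggered transition of the kind described can be sketched with a great-circle distance check against a track's anchor point. This is an illustrative assumption about the mechanism, not the player's actual code; `shouldTransition` and the anchor-radius model are inventions for the example.

```javascript
// Haversine great-circle distance between two lat/lon points, in metres.
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Hypothetical trigger: start the transition to the next track once the
// listener is within radiusMeters of that track's anchor point on the path.
function shouldTransition(userPos, trackAnchor, radiusMeters) {
  return haversineMeters(userPos.lat, userPos.lon,
                         trackAnchor.lat, trackAnchor.lon) <= radiusMeters;
}
```

In a browser, `userPos` would come from `navigator.geolocation.watchPosition`; adapting song length would additionally depend on the listener's estimated walking speed along the path.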
  • Item
    Time Stretching & Pitch Shifting with the Web Audio API: Where are we at?
    (Georgia Institute of Technology, 2016-04) Dias, Bruno ; Matos, David M. ; Davies, Matthew E. P. ; Pinto, H. Sofia
    Audio time stretching and pitch shifting are operations that all major commercial and/or open source Digital Audio Workstations, DJ Mixing Software and Live Coding Suites offer. These operations allow users to change the duration of audio files while maintaining the pitch and vice-versa. Such operations enable DJs to speed up or slow down songs in order to mix them by aligning the beats. Unfortunately, there are few (and experimental) client-side JavaScript implementations of these two operations. In this paper, we review the current state of the art for client-side implementations of time stretching and pitch shifting, their limitations, and describe new implementations for two well-known algorithms: (1) Phase Vocoder with Identity Phase Lock and (2) a modified version of Overlap & Add. Additionally, we discuss some issues related to the Web Audio API (WAA) and frequency-based audio processing regarding latency and audio quality in pitch shifting and time stretching towards raising awareness about possible changes in the WAA.
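The frame bookkeeping behind overlap-add time stretching can be sketched independently of the windowing and phase handling: with analysis hop Ha and synthesis hop Hs, the stretch ratio is Hs/Ha. This is an illustration of that relationship, not the paper's implementation.

```javascript
// Plan the analysis/synthesis frame positions for overlap-add stretching.
// Frames are read every analysisHop samples and written every synthesisHop
// samples; stretch > 1 lengthens the output, stretch < 1 shortens it.
function olaPlan(inputLength, frameSize, analysisHop, stretch) {
  const synthesisHop = Math.round(analysisHop * stretch);
  const frames = [];
  for (let aPos = 0, sPos = 0; aPos + frameSize <= inputLength;
       aPos += analysisHop, sPos += synthesisHop) {
    frames.push({ analysisPos: aPos, synthesisPos: sPos });
  }
  const outputLength = frames.length
    ? frames[frames.length - 1].synthesisPos + frameSize
    : 0;
  return { synthesisHop, frames, outputLength };
}
```

A real implementation then windows each frame, aligns it (by phase adjustment in a phase vocoder, or waveform similarity in modified overlap-add) and sums the overlapping frames into the output buffer.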
  • Item
    Crowd in C[loud] : Audience Participation Music with Online Dating Metaphor using Cloud Service
    (Georgia Institute of Technology, 2016-04) Lee, Sang Won ; de Carvalho, Antonio Deusany Jr. ; Essl, Georg
    In this paper, we introduce Crowd in C[loud], a networked music piece designed for audience participation at a music concert. We developed a networked musical instrument for the web browser where a casual smartphone user can play music as well as interact with other audience members. A participant composes a short tune with five notes, which serves as a personal profile picture for that individual throughout the piece. The notion of musical profiles is used to form a social network that mimics an online-dating website. People browse the profiles of others, choose someone they like, and initiate interaction online and offline. We utilize a cloud service that helps build, without any server-side programming, a large-scale networked music ensemble on the web. This paper introduces the design choices for this distributed musical instrument. It describes details on how the crowd is orchestrated through the cloud service. We discuss how it facilitates audience members mingling with one another. Finally, we show how live coding is incorporated while maintaining the coherence of the piece. From rehearsal to actual performance, the crowd takes part in the process of producing the piece.
  • Item
    myMoodplay: An interactive mood-based music discovery app
    (Georgia Institute of Technology, 2016-04) Allik, Alo ; Fazekas, György ; Barthet, Mathieu ; Swire, Mark
    myMoodplay is a web app that allows users to interactively discover music by selecting desired emotions. The application uses the Web Audio API, JavaScript animation for visualisation, linked data formats and affective computing technologies. We explore how artificial intelligence, the Semantic Web and audio synthesis can be combined to provide new personalised online musical experiences. Users can choose degrees of energy and pleasantness to shape the desired musical mood trajectory. Semantic Web technologies have been embedded in the system to query mood coordinates from a triple store using a SPARQL endpoint and to connect to external linked data sources for metadata.
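Selecting music from a two-dimensional mood trajectory can be illustrated as a nearest-neighbour lookup in the energy/pleasantness plane. This sketch is an assumption about the matching step, not myMoodplay's code; in the actual system the coordinates are queried from a triple store via SPARQL.

```javascript
// Illustrative mood matching: pick the track whose (energy, pleasantness)
// point lies closest to the user's selection in the 2-D mood plane.
function nearestTrack(target, tracks) {
  // target: {energy, pleasantness} in [0, 1]; tracks carry the same fields.
  let best = null;
  let bestDist = Infinity;
  for (const t of tracks) {
    const d = Math.hypot(t.energy - target.energy,
                         t.pleasantness - target.pleasantness);
    if (d < bestDist) { bestDist = d; best = t; }
  }
  return best;
}
```

A mood trajectory is then just a sequence of target points, each resolved to a track as the user moves through the plane.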
  • Item
    Using Empirical Analysis of Music Corpora to Optimize Web Audio Playback
    (Georgia Institute of Technology, 2016-04) Collins, Tom ; Coulon, Christian
    Due to feasibility issues and musical preferences, Web audio applications have tended to emphasize the use of synthesized instruments and short samples (e.g., drums) over large banks of longer files that sample other acoustic instruments such as a violin or piano. As the sounds generated by sampled acoustic instruments are quite realistic, they are likely to be of interest to many users of Web audio applications. Using the Tone.js Web Audio framework, this paper describes an initial investigation into load times when rendering music with such sampled instruments. A method is proposed for reducing load times, and hence optimizing Web Audio playback, based on empirical analysis of the note durations used across different music corpora. Experimental results for 400 randomly selected short music excerpts indicate that the proposed method does lead to significant load time reductions, from 3.87 s to 1.72 s. Researchers interested in replicating the results of these experiments or downloading and exploring our playback solution are pointed to http://tomcollinsresearch.net/research/wac/2016/
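The corpus-driven idea can be sketched as follows: measure note durations across a corpus and cap sampled-note file lengths at a high percentile, so rarely needed long tails are not downloaded. The function below is an illustrative assumption about that step, not the paper's code.

```javascript
// Given observed note durations (seconds) from a corpus, return the duration
// at the given percentile (0..1). Sample files longer than this cap can be
// trimmed before deployment to cut download and decode time.
function durationCap(durationsSec, percentile) {
  const sorted = [...durationsSec].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1,
                       Math.floor(percentile * sorted.length));
  return sorted[idx];
}
```

For example, if 75% of corpus notes last 2 s or less, capping violin samples at 2 s leaves most renderings unaffected while substantially shrinking the sample bank.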
  • Item
    Musique Concrète Choir: An Interactive Performance Environment for Any Number of People
    (Georgia Institute of Technology, 2016-04) Walker, William ; Belet, Brian
    Using the Web Audio API, a roomful of smartphones becomes a platform on which to create novel musical experiences. As seen at WAC 2015, composers and performers are using this platform to create clouds of sound distributed in space through dozens of loudspeakers. This new platform offers an opportunity to reinvent the roles of audience, composer, and performer. It also presents new technology challenges; at WAC 2015 some servers crashed under load. We also saw difficulties creating and joining private WiFi networks. In this piece, building on the lessons of WAC 2015, we load all our sound resources onto each phone at the beginning of the piece from a stable, well-known web host. Where possible, we use the new Service Worker API to cache our resources locally on the phone. We also replace real-time streaming control of a roomful of phones with real-time engagement of the audience members as performers.