Organizational Unit:
School of Music


Publication Search Results

  • Item
    reNotate: The Crowdsourcing and Gamification of Symbolic Music Encoding
    (Georgia Institute of Technology, 2016-04) Taylor, Benjamin ; Shanahan, Daniel ; Wolf, Matthew ; Allison, Jesse ; Baker, David John
    Musicologists and music theorists have, for quite some time, hoped to use computational methods to examine large corpora of music. As far back as the 1940s, an IBM card-sorter was used to implement pattern finding in traditional British folk songs (Bronson 1949, 1959). Alan Lomax famously implemented statistical methods in his Cantometrics project (Lomax, 1968), which sought to collate a large corpus of folk music from across many cultures. In the 1980s and 90s, a number of encoding projects were launched in an attempt to make music notation searchable on a large scale. The Essen Folksong Collection (Schaffrath, 1995) collected ethnographic transcriptions, whereas projects at the Center for Computer Assisted Research in the Humanities (CCARH) focused on scores in the Western Art Music tradition (Bach chorales, Mozart sonatas, instrumental themes, etc.). Recently, scholars have focused on improving Optical Music Recognition in the hope of facilitating the acquisition of large numbers of musical scores (Fujinaga et al., 2014), but non-notated music, such as improvisational jazz, is often overlooked. While there have been many advances in music information retrieval in recent years, parameters that would facilitate in-depth musicological analysis are still out of reach (for example, stream segregation to examine specific melodic lines, or the analysis of harmony at a resolution that would allow for an analysis of specific chord voicings). Our project seeks to implement methods similar to those used in CAPTCHA and reCAPTCHA technology to crowdsource the symbolic encoding of musical information through a web-based gaming interface. The introductory levels ask participants to tap along with an audio recording's tempo, giving us an approximate BPM, while the second level asks participants to tap along with note onsets. The third level asks them to match the contour of a three-note segment, and the final stage asks for specific note matching within that contour. A social-gaming interface allows users to compete against one another. It is our hope that this work can be generalized to many types of musical genres, and that a web-based framework might facilitate the encoding of musicological and music-theoretic datasets that are underrepresented by current MIR work.
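To make the tap-based levels concrete, here is a minimal sketch of how tap timestamps might be reduced to an approximate BPM, as in the game's introductory level. The function name and the median-based smoothing are illustrative assumptions, not taken from the reNotate codebase.

```javascript
// Estimate BPM from a series of tap timestamps (in milliseconds),
// collected as a listener taps along with a recording's pulse.
function estimateBPM(tapTimesMs) {
  if (tapTimesMs.length < 2) return null;
  // Intervals between successive taps
  const intervals = [];
  for (let i = 1; i < tapTimesMs.length; i++) {
    intervals.push(tapTimesMs[i] - tapTimesMs[i - 1]);
  }
  // Use the median interval to resist the occasional stray tap
  intervals.sort((a, b) => a - b);
  const mid = Math.floor(intervals.length / 2);
  const median = intervals.length % 2
    ? intervals[mid]
    : (intervals[mid - 1] + intervals[mid]) / 2;
  return 60000 / median; // beats per minute
}

// Example: taps roughly 500 ms apart -> approximately 120 BPM
console.log(estimateBPM([0, 498, 1003, 1502, 2001]));
```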
  • Item
    Live Looping Electronic Music Performance with MIDI Hardware
    (Georgia Institute of Technology, 2016-04) McKegg, Matt
    I have spent much of the last three years building a Web Audio based desktop application for live electronic music performance called Loop Drop. It was inspired by my own frustration with existing tools for live performance. I wanted a tool that would give me, as a performer, the level of control and expression I desired, but still feel like playing a musical instrument instead of programming a computer. My application was built using web technologies such as JavaScript and HTML5, leveraging my existing experience as a web developer and providing an excellent workflow for quick prototyping and user interface design. In combination with Electron, a single developer can build a desktop audio application very efficiently. Loop Drop uses Web MIDI to interface with hardware such as the Novation Launchpad. The software allows creation of sounds using synthesis and sampling, and arranges these into “chunks” which may be placed in any configuration across the MIDI controller’s button grid. These sounds may be triggered directly, or played quantised to the current tempo at a given rate using “beat repeat”. Everything the performer plays is collected in a buffer that at any time may be turned into a loop. This allows the performer to avoid recording anxiety — a common problem with most live looping systems. They can jam out ideas, then, once happy with the sequence, press the loop button to lock it in. In my performance, I will use Loop Drop in conjunction with multiple Novation Launchpad MIDI controllers to improvise 15 minutes of electronic music using sounds that I have organised ahead of time. The user interface will be visible to the audience as a projection. I will also demonstrate the power of hosting Web Audio in Electron by interfacing with an external LED array connected over Serial Peripheral Interface Bus (SPI) to be used as an audio visualiser light show.
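As a rough illustration of the Web MIDI wiring such an instrument depends on, the sketch below requests MIDI access and reacts to button presses from a grid controller. The `triggerChunk` handler is a hypothetical stand-in for mapping a pad to a sound "chunk", not Loop Drop's actual code.

```javascript
// Request MIDI access and listen for note-on messages from a grid
// controller such as a Novation Launchpad.
navigator.requestMIDIAccess().then((midi) => {
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (event) => {
      const [status, note, velocity] = event.data;
      const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
      if (isNoteOn) {
        triggerChunk(note); // map the button's note number to a sound "chunk"
      }
    };
  }
});

// Hypothetical handler standing in for Loop Drop's chunk playback
function triggerChunk(note) {
  console.log(`pad ${note} pressed`);
}
```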
  • Item
    BPMTimeline: JavaScript Tempo Functions and Time Mappings using an Analytical Solution
    (Georgia Institute of Technology, 2016-04) Dias, Bruno ; Pinto, H. Sofia ; Matos, David M.
    Time mapping is a common feature in many (commercial and/or open-source) Digital Audio Workstations, allowing the musician to automate tempo changes of a musical performance or work, as well as to visualize the relation between score time (beats) and real/performance time (seconds). Unfortunately, available music production, performance and remixing tools implemented with web technologies like JavaScript and the Web Audio API do not offer any mechanism for flexible, seamless tempo manipulation and automation. In this paper, we present BPMTimeline, a time mapping library providing a seamless mapping between score and performance time. To achieve this, we model tempo changes as tempo functions (a well documented subject in the literature) and realize the mappings through the integral, and the inverse of the integral, of the tempo functions.
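As an illustration of the integral-based mapping, the sketch below implements a single linear tempo ramp with closed-form beat-to-time and time-to-beat functions. The names and API shape are assumptions made for this example, not BPMTimeline's interface. For a linear ramp T(b) = t0 + k·b, the elapsed time at beat b is the integral of 60/T(x) dx, which has a closed form, so both directions stay exact instead of accumulating error from numerical integration.

```javascript
// A linear tempo ramp T(b) = t0 + k*b (BPM as a function of beat position).
// Seconds at beat b is the integral of 60 / T(x) dx from 0 to b; the
// inverse of that integral maps seconds back to beats.
function makeLinearTempoMap(t0, t1, beats) {
  const k = (t1 - t0) / beats; // BPM change per beat
  return {
    beatToTime(b) {
      if (k === 0) return (60 * b) / t0;             // constant tempo
      return (60 / k) * Math.log((t0 + k * b) / t0); // closed-form integral
    },
    timeToBeat(t) {
      if (k === 0) return (t * t0) / 60;
      return (t0 / k) * Math.expm1((k * t) / 60);    // inverse of the integral
    },
  };
}

// Ramp from 120 to 140 BPM over 16 beats
const map = makeLinearTempoMap(120, 140, 16);
const t = map.beatToTime(8);       // seconds elapsed at beat 8
console.log(t, map.timeToBeat(t)); // round-trips back to beat 8
```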
  • Item
    rMIXr: how we learned to stop worrying and love the graph
    (Georgia Institute of Technology, 2016-04) Fields, Ben ; Phippen, Sam
    In this talk we present a case study in the use of the Web Audio API: specifically, our use of it to create a rapidly developed prototype application. The app, called rMIXr (http://rmixr.com), is a simple digital audio workstation (DAW) for fan remix contests. We created rMIXr in 48 hours at the Midem Hack Day in June 2015. We'll give a brief demo of the app and show multi-channel sync, along with various effects and cutting/time-slicing.
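A minimal sketch of the kind of multi-channel sync a remix DAW needs: every stem is scheduled against the same AudioContext clock so all channels start sample-accurately together. The names are illustrative, not rMIXr's source.

```javascript
// Start several decoded stems in sample-accurate sync by scheduling
// them all against a shared AudioContext start time.
function playStems(ctx, buffers) {
  const startAt = ctx.currentTime + 0.1; // small lead time for scheduling
  return buffers.map((buffer) => {
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    const gain = ctx.createGain(); // per-stem fader for remixing
    src.connect(gain).connect(ctx.destination);
    src.start(startAt);            // same deadline for every stem
    return { src, gain };
  });
}
```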
  • Item
    Tune.js: A Microtonal Web Audio Library
    (Georgia Institute of Technology, 2016-04) Taylor, Benjamin ; Bernstein, Andrew
    The authors share Tune.js, a JavaScript library of over 3,000 microtonal tunings and historical temperaments for use with web audio. The current state of tuning in web audio is reviewed, followed by an explication of the library's creation and an overview of its potential applications. Finally, the authors share several small projects made with Tune.js and ponder future development opportunities.
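The kind of mapping the library packages can be illustrated without its API: the sketch below derives frequencies from just-intonation ratios and plays one scale degree with Web Audio. The ratio table and function are assumptions made for this example; see Tune.js itself for its actual interface.

```javascript
// Frequencies for a just-intonation major scale, derived from ratios —
// the sort of tuning data Tune.js collects for thousands of temperaments.
const justMajor = [1 / 1, 9 / 8, 5 / 4, 4 / 3, 3 / 2, 5 / 3, 15 / 8, 2 / 1];

function scaleFrequencies(tonicHz, ratios) {
  return ratios.map((r) => tonicHz * r);
}

// Play scale degree 2 (a just major third above A440, i.e. 550 Hz).
// Assumes a prior user gesture has allowed the context to start.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.frequency.value = scaleFrequencies(440, justMajor)[2];
osc.connect(ctx.destination);
osc.start();
```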
  • Item
    A Novel Approach to Streaming and Client Side Rendering of Multichannel Audio with Synchronised Metadata
    (Georgia Institute of Technology, 2016-04) Paradis, Matthew ; Pike, Chris ; Day, Richard ; Melchior, Frank
    Object-based audio broadcasting is an approach which combines audio with metadata that describes how the audio should be rendered. This metadata can include spatial positioning, mixing parameters, and descriptors that define the type of audio represented by each object. In this talk we show an approach to streaming multichannel audio and synchronised metadata to the browser. Audio is rendered in the browser to multiple formats based on the information contained in the synchronised metadata channel, allowing adaptive mixing and rendering of content as well as user interaction. Based on the MPEG-DASH standard, this approach allows an arbitrary number of audio channels to be presented as discrete inputs to the Web Audio API (subject to any channel limit imposed by the browser). Binaural, 5.1 and stereo renders can be generated and selected for output by the user in real time without any change to the source media stream. Channels marked as interactive can have their properties exposed for the user to adjust based on their preferences. The audio and metadata originate from a single BWF file compliant with ITU-R BS.2076 (Audio Definition Model), with the audio encoded using AAC (as per the MPEG-DASH standard) and the metadata presented to the browser in JSON format. This approach provides a flexible framework for prototyping and presenting new audio experiences to online audiences, and a platform for delivering object-based audio to online users.
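A simplified sketch of metadata-driven rendering as described: a frame of object metadata is applied to per-object gain nodes at its timestamp. The JSON shape here is an illustrative assumption, not the ADM/ITU-R BS.2076 schema.

```javascript
// Per-object gain nodes, one per decoded channel (decoding elided)
const ctx = new AudioContext();
const gains = [ctx.createGain(), ctx.createGain()];
gains.forEach((g) => g.connect(ctx.destination));

// One frame of synchronised metadata (illustrative shape only)
const frame = {
  time: 12.5, // seconds on the AudioContext clock
  objects: [
    { channel: 0, gain: 0.8, interactive: false },
    { channel: 1, gain: 0.5, interactive: true }, // exposed to the user
  ],
};

// Ramp each object's gain to its new value at the frame's timestamp,
// avoiding clicks from abrupt changes
for (const obj of frame.objects) {
  gains[obj.channel].gain.linearRampToValueAtTime(obj.gain, frame.time);
}
```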
  • Item
    Improving time travel experience by combining annotations
    (Georgia Institute of Technology, 2016-04) Vieilleribière, Adrien
    Ever since recorded audio material could be played back, relevant navigation through it has been a key expectation. This paper provides a formalism for building flexible navigation systems based on sets of annotations applying to the same audio object. It aims to build web interfaces for exploring audio in time that remain robust for large data-sets and long files. Introducing the concept of weights applied to annotations, it specifies a parameterized version of the next/previous functionality and presents an effective implementation.
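A minimal sketch of the weighted next/previous idea, under the assumption that an annotation carries a time, a weight, and a label (names invented for this example): the weight threshold parameterizes how coarse the navigation is.

```javascript
// Jump to the earliest annotation after the playhead whose weight
// meets the threshold; a higher threshold gives coarser navigation.
function nextAnnotation(annotations, currentTime, minWeight) {
  return annotations
    .filter((a) => a.time > currentTime && a.weight >= minWeight)
    .reduce((best, a) => (!best || a.time < best.time ? a : best), null);
}

const annotations = [
  { time: 3.0, weight: 0.2, label: "breath" },
  { time: 7.5, weight: 0.9, label: "verse 2" },
  { time: 9.1, weight: 0.4, label: "fill" },
];

// Skipping low-weight events, coarse navigation lands on "verse 2"
console.log(nextAnnotation(annotations, 4.0, 0.5)); // { time: 7.5, ... }
```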
  • Item
    Data-Driven Live Coding with DataToMusic API
    (Georgia Institute of Technology, 2016-04) Tsuchiya, Takahiko ; Freeman, Jason ; Lerner, Lee W.
    Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints, and format-dependent mapping of data to synthesis parameters. In this paper, we describe a unique approach to these issues with a data-driven symbolic music application programming interface (API) for rapid and interactive development. We introduce the DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under a redesign. We implemented an audio event system in the clock and synthesizer classes of the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample rate) processing in the context of real-time data sonification.
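Scheduling sample-accurate events without ScriptProcessorNode is commonly done with a lookahead clock that books events slightly ahead of time on the AudioContext clock. The sketch below shows that general pattern; it is not the DTM API's actual clock class.

```javascript
// A lookahead clock: a coarse timer wakes up often and schedules any
// beats falling inside the lookahead window on the audio clock.
function makeClock(ctx, bpm, onBeat) {
  const lookahead = 0.1;  // schedule 100 ms ahead
  const tickMs = 25;      // wake up every 25 ms
  const beatDur = 60 / bpm;
  let nextBeatTime = ctx.currentTime;
  return setInterval(() => {
    while (nextBeatTime < ctx.currentTime + lookahead) {
      onBeat(nextBeatTime); // receives a sample-accurate timestamp
      nextBeatTime += beatDur;
    }
  }, tickMs);
}

// Usage: a short blip on every beat at 120 BPM
// (assumes a prior user gesture has allowed the context to start)
const ctx = new AudioContext();
makeClock(ctx, 120, (when) => {
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start(when);
  osc.stop(when + 0.05);
});
```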
  • Item
    Geolocation Adaptive Music Player
    (Georgia Institute of Technology, 2016-04) Perez-Carrillo, Alfonso ; Thalmann, Florian ; Fazekas, György ; Sandler, Mark
    We present a web-based cross-platform adaptive music player that combines music information retrieval (MIR) and audio processing technologies with the interaction capabilities offered by GPS-equipped mobile devices. The application plays back a list of music tracks, which are linked to geographic paths on a map. The music player has two main enhanced features that adjust to the location of the user, namely, adaptable length of the songs and automatic transitions between tracks. Music tracks are represented as data packages containing audio and metadata (descriptive and behavioral), a representation that builds on the concept of the Digital Music Object (DMO). This representation, in line with next-generation web technologies, allows for flexible production and consumption of novel musical experiences. A content provider assembles a data pack with music, descriptive analysis, and action parameters that users can experience and control within the restrictions and templates defined by the provider.
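A rough sketch of the interaction loop such a player needs, using the standard Geolocation API. The three helper functions are hypothetical stand-ins for the player's adaptation logic, not the authors' implementation.

```javascript
// Hypothetical stand-ins for the player's adaptation logic
function distanceToPathEnd(lat, lon) { /* geometry vs. the track's path */ return 100; }
function crossfadeToNextTrack() { console.log("transitioning"); }
function adjustSongLength(metersLeft) { console.log(`fit ${metersLeft} m`); }

// Watch the listener's position and adapt playback as they move
navigator.geolocation.watchPosition(
  (pos) => {
    const { latitude, longitude } = pos.coords;
    const d = distanceToPathEnd(latitude, longitude);
    if (d < 30) crossfadeToNextTrack(); // near a waypoint: start transition
    else adjustSongLength(d);           // otherwise fit the song to the walk
  },
  (err) => console.error(err),
  { enableHighAccuracy: true }
);
```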
  • Item
    Time Stretching & Pitch Shifting with the Web Audio API: Where are we at?
    (Georgia Institute of Technology, 2016-04) Dias, Bruno ; Matos, David M. ; Davies, Matthew E. P. ; Pinto, H. Sofia
    Audio time stretching and pitch shifting are operations offered by all major commercial and/or open-source Digital Audio Workstations, DJ mixing software and live coding suites. These operations allow users to change the duration of audio files while maintaining the pitch, and vice-versa. Such operations enable DJs to speed up or slow down songs in order to mix them by aligning the beats. Unfortunately, there are few (and experimental) client-side JavaScript implementations of these two operations. In this paper, we review the current state of the art in client-side implementations of time stretching and pitch shifting and their limitations, and describe new implementations of two well-known algorithms: (1) a Phase Vocoder with Identity Phase Locking and (2) a modified version of Overlap & Add. Additionally, we discuss some issues related to the Web Audio API (WAA) and frequency-based audio processing, regarding latency and audio quality in pitch shifting and time stretching, towards raising awareness about possible changes in the WAA.
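Of the two algorithms, Overlap & Add is the simpler to sketch. The following naive OLA stretch (a toy version, without the phase handling a phase vocoder adds, and with fixed, illustrative frame sizes) shows the core idea: read analysis frames at one hop, write them at another, so that the stretch factor equals the ratio of the two hops.

```javascript
// Naive overlap-add (OLA) time stretch over a Float32Array of samples.
// stretch > 1 lengthens the audio; stretch < 1 shortens it.
function olaStretch(input, stretch, frameSize = 2048) {
  const synthesisHop = frameSize / 4;
  const analysisHop = Math.round(synthesisHop / stretch);
  const output = new Float32Array(Math.ceil(input.length * stretch) + frameSize);
  // Hann window for smooth crossfades between overlapping frames
  const window = new Float32Array(frameSize);
  for (let i = 0; i < frameSize; i++) {
    window[i] = 0.5 - 0.5 * Math.cos((2 * Math.PI * i) / frameSize);
  }
  let readPos = 0, writePos = 0;
  while (readPos + frameSize < input.length) {
    for (let i = 0; i < frameSize; i++) {
      // Hann windows at 1/4-frame hop sum to 2, so scale by 0.5
      output[writePos + i] += input[readPos + i] * window[i] * 0.5;
    }
    readPos += analysisHop;   // advance the read head at one rate...
    writePos += synthesisHop; // ...and the write head at another
  }
  return output.subarray(0, writePos);
}
```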