Organizational Unit: Sonification Lab

Publication Search Results

Now showing 1 - 10 of 114
  • Item
    Towards some organising principles for musical program auralisations
    (Georgia Institute of Technology, 1998-11) Vickers, Paul ; Alty, James L
    Early studies have shown that musical program auralisations can convey structural and run-time information about Turbo Pascal programs to listeners [3, 4, 10]. Auralisations were effected by mapping program events and structures to musical signature tunes, known as motifs. The design of the motifs was based around the taxonomical nature of the Turbo Pascal language constructs [3]. However, it became clear that as the musical complexity and grammatical rigour of the motifs increased, their discernibility by the average user decreased. Therefore, from the lessons learnt from our work we propose a set of organising principles for the design and construction of musically-based program auralisations. These organising principles are aimed at providing accessible auralisations to the average programmer who has no formal musical training. (A brief illustrative sketch of such an event-to-motif mapping appears after this listing.)
  • Item
    Exploration of non-seen diagrams
    (Georgia Institute of Technology, 1998-11) Bennett, David J ; Edwards, Alistair D.N
    This paper describes an exploratory experiment investigating access to non-seen diagrams, with a view to presenting such diagrams through an auditory interface. Sighted individuals asked questions of a human experimenter about diagrams they could not see, in order to learn about them. The dialogue was recorded and analysed. The analysis gave an insight into the strategies the participants used and into their information requirements. Results showed that participants could understand and internalise the simpler diagrams, though not with complete success, but faltered on the more complex diagram. Several strategies and points for further investigation emerged.
  • Item
    'The sound of silence': A preliminary experiment investigating non-verbal auditory representations in telephone-based automated spoken dialogues
    (Georgia Institute of Technology, 1998-11) Williams, David
    At the lexical level, a typical human-computer dialogue in an aural-only spoken language system consists of two stages, system output and user input. As with human-human conversation, a good proportion of turn-taking cues are given by lapses in talk. Unfortunately, in telephone-based automated spoken dialogues, silences on the system's part may not be so easily resolved. A pilot experiment examined the recogniser's listening and processing states and showed that auditory icons representing these caused fewer incorrect user responses than the control condition. However, where system prompts explicitly requested a response, icons were not necessary if talkover was provided. The effectiveness of the auditory representations also interacted strongly with caller expertise, suggesting that expert users may need a period of acclimatisation to the sounds, since their novelty initially draws attention; novice users, by contrast, responded correctly.
  • Item
    Sonically-enhanced drag and drop
    (Georgia Institute of Technology, 1998-11) Brewster, Stephen A
    This paper describes an experiment to investigate whether the addition of non-speech sounds to the drag and drop operation would increase usability. There are several problems with drag and drop that can result in the user not dropping a source icon over the target correctly. These occur because the source can visually obscure the target, making it hard to see whether the target is highlighted. Structured non-speech sounds called earcons were added to indicate when the source was over the target, when it had been dropped on the target and when it had not. Results from the experiment showed that subjective workload was significantly reduced, and overall preference significantly increased, without sonically-enhanced drag and drop being more annoying to use. Results also showed that the time taken to do drag and drop was significantly reduced. Therefore, sonic enhancement can significantly improve the usability of drag and drop. (A brief illustrative sketch of dispatching earcons for these drag-and-drop states appears after this listing.)
  • Item
    An investigation of using music to provide navigation cues
    (Georgia Institute of Technology, 1998-11) Leplatre, Gregory ; Brewster, Stephen A
    This paper describes an experiment that investigates new principles for representing hierarchical menus, such as telephone-based interface menus, with non-speech audio. A hierarchy of 25 nodes with a sound for each node was used. The sounds were designed to test the efficiency of using specific features of a musical language to provide navigation cues. Participants (half musicians and half non-musicians) were asked to identify the position of the sounds in the hierarchy. The overall recall rate of 86% suggests that syntactic features of a musical language of representation can be used as meaningful navigation cues. More generally, these results show that the specific meaning of musical motives can be used to provide ways to navigate in a hierarchical structure such as telephone-based interface menus. (A brief illustrative sketch of deriving a motif from a node's position in a hierarchy appears after this listing.)
  • Item
    Using sonic hyperlinks in WebTV
    (Georgia Institute of Technology, 1998-11) Braun, Norbert ; Dorner, Ralf
    The transfer of hypermedia features to audio in an audio-visual environment is discussed, introducing sonic hyperlinks. Sonic hyperlinks are links, annotated using sound within an audio stream, that lead to arbitrary multimedia content. As an example application, sonic hyperlinks have been integrated into interactive Web-TV that is broadcast over the Internet. A system architecture and implementation relying on commercial WWW technology such as RealMedia is presented. The system includes an authoring tool, as well as the necessary presentation plugin for an Internet browser. (A brief illustrative sketch of a timed sonic-hyperlink data structure appears after this listing.)
  • Item
    After direct manipulation - direct sonification
    (Georgia Institute of Technology, 1998-11) Fernstrom, Mikael ; McNamara, Caolan
    The effectiveness of providing multiple-stream audio to support browsing on a computer was investigated through the iterative development and evaluation of a series of sonic browser prototypes. The data set used was a database containing music. Interactive sonification was provided in conjunction with simplified human-computer interaction sequences. We investigated to what extent interactive sonification with multiple-stream audio could enhance browsing tasks, compared to interactive sonification with single-stream audio support. In a study with ten users, those given interactive multiple-stream audio completed the browsing tasks accurately and significantly faster than those who had single-stream audio support.
  • Item
    Making progress with sounds - the design & evaluation of an audio progress bar
    (Georgia Institute of Technology, 1998-11) Crease, Murray ; Brewster, Stephen
    This paper describes an experiment to investigate the effectiveness of adding sound to progress bars. Progress bars have usability problems because they present temporal information graphically, so if users want to keep abreast of this information they must constantly visually scan the progress bar. The addition of sounds to a progress bar allows users to monitor the state of the progress bar without using their visual focus. Non-speech sounds called earcons were used to indicate the current state of the task as well as the completion of the download. Results showed a significant reduction in the time taken to perform the task in the audio condition. The participants were aware of the state of the progress bar without having to remove their visual focus from their foreground task.
  • Item
    Sound traffic control: An interactive 3-D audio system for live musical performance
    (Georgia Institute of Technology, 1998-11) Humon, Naut ; Thibault, Bill ; Galloway, Vance ; Willis, Garnet ; Wing, Jessica Grace
    Sound Traffic Control (STC) is a system for interactively controlled 3-D audio, displayed using a loudspeaker array. The intended application is live musical performance. Goals of the system include flexibility, ease of use, fault tolerance, audio quality, and synchronization with external media sources such as MIDI, audio feeds from musicians, and video. It uses a collection of both commercial and custom components. The development and design of the current system, which embodies ideas developed during over a decade of experimentation, are described and evaluated based on the experiences of users and developers. (A brief illustrative sketch of pair-wise amplitude panning over a loudspeaker ring appears after this listing.)
  • Item
    Data collection and analysis techniques for evaluating the perceptual qualities of auditory stimuli
    (Georgia Institute of Technology, 1998-11) Bonebright, Terri L ; Miner, Nadine E ; Goldsmith, Timothy E ; Caudell, Thomas P
    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
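
Illustrative code sketches

The last abstract above mentions Multidimensional Scaling (MDS) as one way to examine perceptual relations among auditory stimuli. Below is a minimal sketch of metric MDS applied to a precomputed dissimilarity matrix, assuming pairwise dissimilarity ratings have already been averaged across listeners; the stimulus names and rating values are invented for illustration and are not data from the paper.

```python
# Minimal MDS sketch for perceptual dissimilarity data (illustrative values only).
import numpy as np
from sklearn.manifold import MDS

# Hypothetical averaged dissimilarity ratings (0 = identical, 1 = maximally different)
# for four auditory stimuli; a real study would derive these from listener judgements.
stimuli = ["beep", "chirp", "click", "buzz"]
dissimilarity = np.array([
    [0.0, 0.3, 0.8, 0.9],
    [0.3, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.4],
    [0.9, 0.8, 0.4, 0.0],
])

# Metric MDS on the precomputed dissimilarities; two dimensions for easy plotting.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y) in zip(stimuli, coords):
    print(f"{name:>6}: ({x:+.2f}, {y:+.2f})")
print("stress:", round(mds.stress_, 3))
```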
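
The Vickers and Alty abstract ("Towards some organising principles for musical program auralisations") describes mapping program events and structures to musical signature tunes, or motifs. The sketch below shows one way such an event-to-motif mapping might look; the construct names, MIDI pitches, and example trace are assumptions for illustration, not the motifs designed in that work.

```python
# Sketch: map program constructs/events to short MIDI-pitch motifs (illustrative only).
from typing import Dict, List

# Hypothetical motif table: each construct gets a short, distinctive pitch sequence.
MOTIFS: Dict[str, List[int]] = {
    "FOR_ENTER":   [60, 64, 67],      # rising triad when a FOR loop starts
    "FOR_EXIT":    [67, 64, 60],      # falling triad when it ends
    "IF_TRUE":     [72, 76],          # brighter interval for a true branch
    "IF_FALSE":    [72, 69],          # darker interval for a false branch
    "WHILE_ENTER": [55, 59, 62, 65],  # longer figure for WHILE loops
}

def auralise(trace: List[str]) -> List[int]:
    """Concatenate the motif pitches for each event in a program trace."""
    notes: List[int] = []
    for event in trace:
        notes.extend(MOTIFS.get(event, []))  # unknown events stay silent
    return notes

# Example run-time trace of a small program.
trace = ["FOR_ENTER", "IF_TRUE", "IF_FALSE", "FOR_EXIT"]
print(auralise(trace))  # -> [60, 64, 67, 72, 76, 72, 69, 67, 64, 60]
```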
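
The Brewster abstract ("Sonically-enhanced drag and drop") adds earcons for three situations: the source being over the target, a successful drop, and an unsuccessful one. The sketch below shows one way an interface might dispatch earcons for those states; the earcon parameters and event names are hypothetical, not the stimuli used in the experiment.

```python
# Sketch: dispatch simple earcons for drag-and-drop states (parameters are illustrative).
from dataclasses import dataclass
from typing import List

@dataclass
class Earcon:
    pitches: List[int]   # MIDI pitches played in sequence
    duration_ms: int     # length of each note
    timbre: str          # instrument-style label

# Hypothetical earcon set for the three drag-and-drop situations in the abstract.
EARCONS = {
    "over_target": Earcon([72], 80, "marimba"),         # short tick while hovering
    "drop_hit":    Earcon([60, 64, 67], 120, "piano"),  # rising figure: successful drop
    "drop_miss":   Earcon([52, 47], 200, "tuba"),       # falling figure: missed drop
}

def on_drag_event(event: str) -> None:
    """Look up and 'play' the earcon for a drag-and-drop event."""
    earcon = EARCONS.get(event)
    if earcon is None:
        return  # events without an earcon stay silent
    # A real interface would hand this to an audio engine; here we just report it.
    print(f"{event}: {earcon.timbre} {earcon.pitches} @ {earcon.duration_ms} ms/note")

for e in ["over_target", "drop_miss", "over_target", "drop_hit"]:
    on_drag_event(e)
```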
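
The Leplatre and Brewster abstract ("An investigation of using music to provide navigation cues") uses a sound per node of a 25-node hierarchy so that musical structure conveys a node's position. One simple way to make position audible is to build each node's motif from its path, so deeper nodes get longer figures and siblings end on different pitches; this encoding is an assumption for illustration and is not the scheme evaluated in the study.

```python
# Sketch: derive a motif from a node's path in a menu hierarchy (encoding is illustrative).
from typing import List, Tuple

ROOT_PITCH = 60    # MIDI middle C for the root menu
BRANCH_STEP = 4    # pitch offset between sibling branches at each level

def motif_for_path(path: Tuple[int, ...]) -> List[int]:
    """One pitch per level: deeper nodes yield longer motifs, siblings end differently."""
    notes = [ROOT_PITCH]
    for depth, branch in enumerate(path, start=1):
        notes.append(ROOT_PITCH + depth * 2 + branch * BRANCH_STEP)
    return notes

# A few nodes of a telephone-style menu: () is the root, (0,) its first child, and so on.
for path in [(), (0,), (1,), (0, 0), (0, 2)]:
    print(path, "->", motif_for_path(path))
```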
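
The Braun and Dorner abstract ("Using sonic hyperlinks in WebTV") describes links that are annotated with sound inside an audio stream and lead to other multimedia content. Below is a minimal sketch of the kind of timed-link data structure and playback-time lookup a player might use; the field names and URLs are hypothetical and do not reflect the RealMedia-based implementation described in the paper.

```python
# Sketch: timed hyperlinks inside an audio stream and a playback-time lookup (illustrative).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SonicLink:
    start_s: float   # when the link becomes active in the audio stream
    end_s: float     # when it stops being active
    cue_sound: str   # short sound mixed in to signal that a link is present
    target_url: str  # multimedia content the link leads to

# Hypothetical link annotations authored for one programme's audio track.
links: List[SonicLink] = [
    SonicLink(12.0, 18.0, "cue_chime.wav", "https://example.org/interview.rm"),
    SonicLink(45.5, 52.0, "cue_bell.wav",  "https://example.org/backstage.html"),
]

def active_link(position_s: float) -> Optional[SonicLink]:
    """Return the link under the current playback position, if any."""
    for link in links:
        if link.start_s <= position_s < link.end_s:
            return link
    return None

# When the listener selects a link while its cue sound is audible, follow it.
hit = active_link(47.0)
print(hit.target_url if hit else "no link active here")
```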
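
The Sound Traffic Control abstract describes interactively controlled 3-D audio displayed over a loudspeaker array. A common building block for such displays is pair-wise amplitude panning, in which a source direction is rendered as gains on the two nearest loudspeakers; the sketch below shows that idea in the horizontal plane. The eight-speaker ring and the constant-power normalisation are assumptions for illustration; the abstract itself only says the system combines commercial and custom components.

```python
# Sketch: 2-D pair-wise amplitude panning over a ring of loudspeakers (illustrative).
import math
from typing import List, Tuple

def unit(angle_deg: float) -> Tuple[float, float]:
    a = math.radians(angle_deg)
    return (math.cos(a), math.sin(a))

# Hypothetical eight-speaker ring, azimuths in degrees.
SPEAKERS = [unit(a) for a in range(0, 360, 45)]

def pan(source_deg: float) -> List[float]:
    """Return per-speaker gains: the two speakers nearest the source share the signal."""
    src = unit(source_deg)
    gains = [0.0] * len(SPEAKERS)
    n = len(SPEAKERS)
    for i in range(n):
        j = (i + 1) % n
        l1, l2 = SPEAKERS[i], SPEAKERS[j]
        det = l1[0] * l2[1] - l1[1] * l2[0]
        # Solve src = g1*l1 + g2*l2; both gains non-negative means this pair brackets the source.
        g1 = (src[0] * l2[1] - src[1] * l2[0]) / det
        g2 = (l1[0] * src[1] - l1[1] * src[0]) / det
        if g1 >= 0 and g2 >= 0:
            norm = math.hypot(g1, g2)  # keep roughly constant power across directions
            gains[i], gains[j] = g1 / norm, g2 / norm
            return gains
    return gains

print([round(g, 2) for g in pan(20.0)])  # most energy on the 0-degree and 45-degree speakers
```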