Person:
Starner, Thad

Publication Search Results

Now showing 1 - 3 of 3
  • Item
    Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    A growing body of research shows several advantages of multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization environment, which presents navigation interface challenges due to the large range of scales and extended spaces available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.
  • Item
    Evaluation of a Multimodal Interface for 3D Terrain Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    This paper describes an evaluation of various interfaces for visual navigation of a whole Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech and gesture interface were used to navigate to targets placed at various points on the Earth. Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This study measured each participant's recall of target identity, order, and location as a measure of cognitive load. Timing information as well as a variety of subjective measures, including discomfort and user preferences, were taken. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal interfaces are identified, and areas for improvement are discussed.
  • Item
    The Perceptive Workbench: Towards Spontaneous and Natural Interaction in Semi-Immersive Virtual Environments
    (Georgia Institute of Technology, 1999) Leibe, Bastian ; Starner, Thad ; Ribarsky, William ; Wartell, Zachary Justin ; Krum, David Michael ; Singletary, Bradley Allen ; Hodges, Larry F.
    The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual worlds. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.