Series: GVU Technical Report Series
Series Type: Publication Series

Publication Search Results

Now showing 1 - 5 of 5
  • Item
    Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    A growing body of research shows several advantages to multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or stands at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole earth 3D visualization environment, which presents navigation interface challenges due to the large range of scales and extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.
  • Item
    Evaluation of a Multimodal Interface for 3D Terrain Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    This paper describes an evaluation of various interfaces for visual navigation of a whole Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech and gesture interface were used to navigate to targets placed at various points on the Earth. Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This study measured each participant's recall of target identity, order, and location as a measure of cognitive load. Timing information as well as a variety of subjective measures, including discomfort and user preferences, were taken. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal modalities are identified and areas for improvement are discussed.
  • Item
    Situational Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Ribarsky, William ; Shaw, Christopher D. ; Hodges, Larry F. ; Faust, Nick L. (Nickolas Lea)
    In this paper, we introduce a new style of visualization called Situational Visualization, in which the user of a robust, mobile visualization system uses mobile computing resources to enhance the experience and understanding of the surrounding world. Additionally, a Situational Visualization system allows the user to add to the visualization and any underlying simulation by inputting the user's observations of the phenomena of interest, thus improving the quality of visualization for the user and for any other users that may be connected to the same database. Situational Visualization allows many users to collaborate on a common set of data with real-time acquisition and insertion of data. In this paper, we present a Situational Visualization system we are developing called Mobile VGIS, and present two sample applications of Situational Visualization.
  • Item
    Direct Manipulation on the Virtual Workbench: Two Hands Aren't Always Better Than One
    (Georgia Institute of Technology, 2000) Seay, A. Fleming ; Krum, David Michael ; Hodges, Larry F. ; Ribarsky, William
    This paper reports on the investigation of the differential levels of effectiveness of various interaction techniques on a simple rotation and translation task on the virtual workbench. Manipulation time and number of collisions were measured for subjects using four device sets (unimanual glove, bimanual glove, unimanual stick, and bimanual stick). Participants were also asked to subjectively judge each device's effectiveness. Performance results indicated a main effect for device (better performance for users of the stick(s)), but not for number of hands. Subjective results supported these findings, as users expressed a preference for the stick(s).
  • Item
    The Perceptive Workbench: Towards Spontaneous and Natural Interaction in Semi-Immersive Virtual Environments
    (Georgia Institute of Technology, 1999) Leibe, Bastian ; Starner, Thad ; Ribarsky, William ; Wartell, Zachary Justin ; Krum, David Michael ; Singletary, Bradley Allen ; Hodges, Larry F.
    The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual world. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.