GVU Technical Report Series
Series Type: Publication Series

Publication Search Results

Now showing 1 - 6 of 6
  • Item
    Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
A growing body of research shows several advantages to multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user, so it can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole earth 3D visualization environment, which presents navigation interface challenges due to the large magnitudes of scale and extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.
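As a rough illustration of the late-fusion idea behind a speech-and-gesture interface like the one this abstract describes, the sketch below pairs a spoken command with a co-occurring pointing gesture. The event types, field names, and 0.5-second fusion window are assumptions for illustration only, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical event types; the paper does not publish this API.
@dataclass
class SpeechEvent:
    command: str       # recognized word, e.g. "fly" or "stop"
    timestamp: float   # seconds

@dataclass
class GestureEvent:
    direction: Tuple[float, float, float]  # unit vector relative to the user
    timestamp: float

FUSION_WINDOW = 0.5  # seconds; assumed value, not from the paper

def fuse(speech: SpeechEvent, gesture: GestureEvent) -> Optional[tuple]:
    """Late fusion: bind a spoken command to a gesture only when the
    two events arrive within the fusion window of each other."""
    if abs(speech.timestamp - gesture.timestamp) <= FUSION_WINDOW:
        return (speech.command, gesture.direction)
    return None  # otherwise each modality is interpreted on its own

# Example: "fly" spoken at t=10.2 s, pointing gesture at t=10.4 s
print(fuse(SpeechEvent("fly", 10.2), GestureEvent((0.0, 0.0, -1.0), 10.4)))
```

Because the fused command is defined relative to the user's body rather than the display, a combiner along these lines can keep working while the user moves about, which is the property the abstract emphasizes.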
  • Item
    Evaluation of a Multimodal Interface for 3D Terrain Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
This paper describes an evaluation of various interfaces for visual navigation of a whole Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech and gesture interface were used to navigate to targets placed at various points on the Earth. Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This study measured each participant's recall of target identity, order, and location as a measure of cognitive load. Timing information was recorded, along with a variety of subjective measures including discomfort and user preferences. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal interfaces are identified and areas for improvement are discussed.
  • Item
    Situational Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Ribarsky, William ; Shaw, Christopher D. ; Hodges, Larry F. ; Faust, Nick L. (Nickolas Lea)
In this paper, we introduce a new style of visualization called Situational Visualization, in which the user of a robust, mobile visualization system uses mobile computing resources to enhance the experience and understanding of the surrounding world. Additionally, a Situational Visualization system allows the user to add to the visualization and any underlying simulation by inputting the user's observations of the phenomena of interest, thus improving the quality of visualization for the user and for any other users connected to the same database. Situational Visualization allows many users to collaborate on a common set of data with real-time acquisition and insertion of data. In this paper, we present Mobile VGIS, a Situational Visualization system we are developing, along with two sample applications of Situational Visualization.
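A minimal sketch of the data-sharing pattern the abstract describes: users insert timestamped, geolocated observations into a common store, and every connected user is notified in real time. The record fields and class names here are assumptions for illustration; Mobile VGIS's actual schema and protocol are not given in the abstract.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical observation record; field names are assumptions.
@dataclass
class Observation:
    latitude: float
    longitude: float
    phenomenon: str        # e.g. "road flooding"
    reported_by: str
    timestamp: float = field(default_factory=time.time)

class SharedStore:
    """Stand-in for the common database that connected users share."""
    def __init__(self) -> None:
        self.observations: List[Observation] = []
        self.subscribers: List[Callable[[Observation], None]] = []

    def insert(self, obs: Observation) -> None:
        self.observations.append(obs)
        for notify in self.subscribers:  # push the update to every connected user
            notify(obs)

store = SharedStore()
store.subscribers.append(lambda o: print("update:", o.phenomenon))
store.insert(Observation(33.78, -84.40, "road flooding", "user1"))
```

The point of the publish-on-insert design is the abstract's core claim: one user's field observation immediately improves the shared visualization for everyone connected to the same data.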
  • Item
    A Geometric Comparison of Algorithms for Fusion Control in Stereoscopic HTDs
    (Georgia Institute of Technology, 2001) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
This paper concerns stereoscopic virtual reality displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. These are called stereoscopic HTDs (head-tracked displays). Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally the user's natural visual system combines the stereo image pair into a single, 3D perceived image. Unfortunately users often have difficulty fusing the stereo image pair. Researchers use a number of software techniques to reduce fusion problems. This paper geometrically examines and compares a number of these techniques and reaches the following conclusions. In interactive stereoscopic applications, the combination of view placement, scale, and either false eye separation or α-false eye separation can provide fusion control geometrically similar to image shifting and image scaling. However, in stereo HTDs image shifting and image scaling also generate additional geometric artifacts not generated by the other methods. We anecdotally link some of these artifacts to exceeding perceptual limitations of human vision. While formal perceptual studies are still needed, geometric analysis suggests that image shifting and image scaling may be less appropriate for interactive, stereo HTDs than the other methods.
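As a back-of-envelope aid to the geometric comparison above, the textbook screen-parallax model below shows how the three families of techniques act on rendered parallax. The notation is ours, not the paper's, and false eye separation is shown only in its plain scaled-separation form; the paper's α-false variant differs in detail.

```latex
% Parallax p of a point at depth z, for screen distance d and
% eye separation e (standard stereo projection geometry):
\[ p = e \,\frac{z - d}{z} \]
% Effect of each indirect fusion-control family on the rendered parallax:
\[ \text{false eye separation: } p' = \alpha e \,\frac{z - d}{z}, \qquad
   \text{image shifting: } p' = p + s, \qquad
   \text{image scaling: } p' = k\,p \]
```

In this simplified model all three families compress parallax toward the screen plane, which is why, as the abstract notes, they can be made geometrically similar; the differences the paper identifies arise from the additional image-space artifacts of shifting and scaling under head tracking.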
  • Item
    Direct Manipulation on the Virtual Workbench: Two Hands Aren't Always Better Than One
    (Georgia Institute of Technology, 2000) Seay, A. Fleming ; Krum, David Michael ; Hodges, Larry F. ; Ribarsky, William
This paper reports on an investigation of the relative effectiveness of various interaction techniques for a simple rotation and translation task on the virtual workbench. Manipulation time and number of collisions were measured for subjects using four device sets (unimanual glove, bimanual glove, unimanual stick, and bimanual stick). Participants were also asked to subjectively judge each device's effectiveness. Performance results indicated a main effect for device (better performance for users of the stick(s)), but not for number of hands. Subjective results supported these findings, as users expressed a preference for the stick(s).
  • Item
    An Analytic Comparison of Alpha-False Eye Separation, Image Scaling and Image Shifting in Stereoscopic Displays
    (Georgia Institute of Technology, 2000) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
Stereoscopic display is a fundamental part of many virtual reality systems. Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally the user's natural visual system combines the stereo image pairs and the user perceives a single 3D image. In practice, however, users can have difficulty fusing the stereo image pairs into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. Some fusion algorithms act directly on the 3D geometry while others act indirectly on the projected 2D images or the view parameters. Compared to the direct techniques, the indirect techniques tend to alter the projected 2D images to a lesser degree. However, while the 3D image effects of the direct techniques are algorithmically specified, the 3D effects of the indirect techniques require further analysis. This is important because fusion techniques were developed in non-head-tracked displays, which have distortion properties not found in the modern head-tracked variety. In non-head-tracked displays, these distortions can mask the stereoscopic image artifacts induced by fusion techniques, but in head-tracked displays the distracting effects of a fusion technique may become apparent. This paper is concerned with stereoscopic displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. This paper rigorously and analytically compares the distortion artifacts of three indirect fusion techniques: alpha-false eye separation, image scaling, and image shifting. We show that the latter two methods have additional artifacts not found in alpha-false eye separation and we conclude that alpha-false eye separation is the best indirect method for these displays.
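To make the "further analysis" point concrete, here is a textbook-style sketch in our own notation, not the paper's derivation. A viewer with true eye separation e and screen distance d fuses a point rendered with screen parallax p at depth z' = ed/(e - p); substituting each indirect technique's modified parallax then shows how that technique warps perceived depth. Plain scaled eye separation (factor α) is used below as a stand-in for the paper's alpha-false variant, which differs in detail.

```latex
% Depth at which a viewer (true eye separation e, screen distance d)
% fuses a point rendered with screen parallax p:
\[ z' = \frac{e\,d}{e - p} \]
% Substituting each technique's modified parallax (point at true depth z):
\[ \text{scaled eye separation: } z' = \frac{d\,z}{z - \alpha\,(z - d)}, \qquad
   \text{image shifting: } z' = \frac{e\,d}{e - p - s} \]
% Check: alpha = 1 gives z' = z, i.e. undistorted depth.
```

Note that in this model the eye-separation distortion depends only on z, d, and α, whereas the shifted-image depth depends on the viewer's actual eye separation e, which head tracking no longer averages away; this is one way to see why shifting and scaling can introduce artifacts in HTDs that the eye-separation methods avoid.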