Series: GVU Technical Report Series
Series Type: Publication Series

Publication Search Results

Now showing 1 - 10 of 56
  • Item
    Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    A growing body of research shows several advantages to multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole earth 3D visualization environment, which presents navigation interface challenges due to the large range of scales and extended spaces available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.
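    A rough sketch of the late-fusion pattern such an interface typically uses: a spoken command is paired with a temporally aligned pointing gesture. This is an illustration only, not the paper's implementation; the event types, the one-second fusion window, and the fuse() helper are all hypothetical.

    ```python
    # Hypothetical sketch of speech + gesture late fusion (not the paper's code).
    from dataclasses import dataclass

    @dataclass
    class SpeechEvent:
        word: str          # recognized command word, e.g. "fly" or "stop"
        timestamp: float   # seconds

    @dataclass
    class GestureEvent:
        direction: tuple   # unit vector from the hand tracker
        timestamp: float

    FUSION_WINDOW = 1.0    # assumed: events within 1 s form one command

    def fuse(speech, gesture):
        """Pair a spoken verb with a temporally aligned gesture."""
        if abs(speech.timestamp - gesture.timestamp) <= FUSION_WINDOW:
            return (speech.word, gesture.direction)
        return None  # modalities too far apart; wait for a better match

    # Example: "fly" spoken half a second after pointing roughly north.
    print(fuse(SpeechEvent("fly", 10.2), GestureEvent((0.0, 1.0, 0.0), 10.7)))
    ```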
  • Item
    Evaluation of a Multimodal Interface for 3D Terrain Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
    This paper describes an evaluation of various interfaces for visual navigation of a whole Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech and gesture interface were used to navigate to targets placed at various points on the Earth. Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This study measured each participant's recall of target identity, order, and location as a measure of cognitive load. Timing information as well as a variety of subjective measures, including discomfort and user preferences, were taken. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal interfaces are identified and areas for improvement are discussed.
  • Item
    Situational Visualization
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Ribarsky, William ; Shaw, Christopher D. ; Hodges, Larry F. ; Faust, Nick L. (Nickolas Lea)
    In this paper, we introduce a new style of visualization called Situational Visualization, in which the user of a robust, mobile visualization system uses mobile computing resources to enhance the experience and understanding of the surrounding world. Additionally, a Situational Visualization system allows the user to add to the visualization and any underlying simulation by inputting the user's observations of the phenomena of interest, thus improving the quality of visualization for the user and for any other users that may be connected to the same database. Situational Visualization allows many users to collaborate on a common set of data with real-time acquisition and insertion of data. In this paper, we present a Situational Visualization system we are developing called Mobile VGIS, and describe two sample applications of Situational Visualization.
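    As a rough illustration of the shared-data idea (not the actual Mobile VGIS code), the sketch below inserts a user-entered field observation into a shared store that other connected clients can poll. The record fields and the SharedObservationStore class are hypothetical.

    ```python
    # Hypothetical sketch of real-time observation sharing (not Mobile VGIS).
    import time

    class SharedObservationStore:
        """Stand-in for a server-side store queried by all connected clients."""
        def __init__(self):
            self._records = []

        def insert(self, lat, lon, kind, value):
            """Add one user observation, stamped with acquisition time."""
            rec = {"t": time.time(), "lat": lat, "lon": lon,
                   "kind": kind, "value": value}
            self._records.append(rec)
            return rec

        def query_since(self, t0):
            """Let clients fetch observations newer than their last sync."""
            return [r for r in self._records if r["t"] > t0]

    store = SharedObservationStore()
    store.insert(33.7756, -84.3963, "wind_speed", 12.5)  # a field observation
    print(store.query_since(0.0))  # other users see it on their next poll
    ```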
  • Item
    A Geometric Comparison of Algorithms for Fusion Control in Stereoscopic HTDs
    (Georgia Institute of Technology, 2001) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
    This paper concerns stereoscopic virtual reality displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. These are called stereoscopic HTDs (head-tracked displays). Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally, the user's natural visual system combines the stereo image pair into a single, 3D perceived image. Unfortunately, users often have difficulty fusing the stereo image pair. Researchers use a number of software techniques to reduce fusion problems. This paper geometrically examines and compares a number of these techniques and reaches the following conclusions. In interactive stereoscopic applications, the combination of view placement, scale, and either false eye separation or alpha-false eye separation can provide fusion control geometrically similar to image shifting and image scaling. However, in stereo HTDs, image shifting and image scaling also generate additional geometric artifacts not generated by the other methods. We anecdotally link some of these artifacts to exceeding perceptual limitations of human vision. While formal perceptual studies are still needed, geometric analysis suggests that image shifting and image scaling may be less appropriate for interactive stereo HTDs than the other methods.
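    For readers unfamiliar with the false eye separation technique named above, the sketch below shows the core idea: render the stereo pair from eye positions computed with a deliberately underestimated interocular distance. The 0.8 scale factor and function names are illustrative assumptions, not values from the paper.

    ```python
    # Illustrative sketch of false eye separation (values are assumptions).
    import numpy as np

    TRUE_EYE_SEP = 0.065   # meters, a typical interocular distance
    FALSE_SCALE = 0.8      # < 1 underestimates separation, easing fusion

    def eye_positions(head_pos, right_dir, scale=FALSE_SCALE):
        """Left/right centers of projection about the tracked head position."""
        right_dir = right_dir / np.linalg.norm(right_dir)
        half = 0.5 * TRUE_EYE_SEP * scale
        return head_pos - half * right_dir, head_pos + half * right_dir

    head = np.array([0.0, 1.6, 0.5])  # tracked head position (meters)
    left, right = eye_positions(head, np.array([1.0, 0.0, 0.0]))
    print(left, right)  # render the left/right views from these two points
    ```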
  • Item
    Direct Manipulation on the Virtual Workbench: Two Hands Aren't Always Better Than One
    (Georgia Institute of Technology, 2000) Seay, A. Fleming ; Krum, David Michael ; Hodges, Larry F. ; Ribarsky, William
    This paper reports on the investigation of the differential levels of effectiveness of various interaction techniques on a simple rotation and translation task on the virtual workbench. Manipulation time and number of collisions were measured for subjects using four device sets (unimanual glove, bimanual glove, unimanual stick, and bimanual stick). Participants were also asked to subjectively judge each device's effectiveness. Performance results indicated a main effect for device (better performance for users of the stick(s)), but not for number of hands. Subjective results supported these findings, as users expressed a preference for the stick(s).
  • Item
    An Analytic Comparison of Alpha-False Eye Separation, Image Scaling and Image Shifting in Stereoscopic Displays
    (Georgia Institute of Technology, 2000) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
    Stereoscopic display is a fundamental part of many virtual reality systems. Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally, the user's natural visual system combines the stereo image pairs and the user perceives a single 3D image. In practice, however, users can have difficulty fusing the stereo image pairs into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. Some fusion algorithms act directly on the 3D geometry while others act indirectly on the projected 2D images or the view parameters. Compared to the direct techniques, the indirect techniques tend to alter the projected 2D images to a lesser degree. However, while the 3D image effects of the direct techniques are algorithmically specified, the 3D effects of the indirect techniques require further analysis. This is important because fusion techniques were developed for non-head-tracked displays, which have distortion properties not found in the modern head-tracked variety. In non-head-tracked displays, those distortions can mask the stereoscopic image artifacts induced by fusion techniques, but in head-tracked displays the distracting effects of a fusion technique may become apparent. This paper is concerned with stereoscopic displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. This paper rigorously and analytically compares the distortion artifacts of three indirect fusion techniques: alpha-false eye separation, image scaling, and image shifting. We show that the latter two methods have additional artifacts not found in alpha-false eye separation, and we conclude that alpha-false eye separation is the best indirect method for these displays.
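    The two image-space techniques compared above can be stated compactly: image shifting translates each projected image horizontally, and image scaling scales it about the screen center, both reducing screen parallax. The sketch below is illustrative; the shift magnitude and sample points are assumptions.

    ```python
    # Illustrative sketch of image shifting and image scaling (values assumed).
    def shift_image(points_2d, dx):
        """Shift a projected image horizontally; eyes get opposite signs."""
        return [(x + dx, y) for (x, y) in points_2d]

    def scale_image(points_2d, s, cx=0.0, cy=0.0):
        """Scale a projected image about the screen center (cx, cy)."""
        return [(cx + s * (x - cx), cy + s * (y - cy)) for (x, y) in points_2d]

    left_img = [(-0.04, 0.10), (-0.02, 0.05)]   # projected points, left view
    right_img = [(0.04, 0.10), (0.02, 0.05)]    # same points, right view

    # Shifting the images toward each other reduces parallax and eases fusion.
    print(shift_image(left_img, +0.01), shift_image(right_img, -0.01))
    ```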
  • Item
    Efficient Ray Intersection for Visualization and Navigation of Global Terrain using Spheroidal Height-Augmented Quadtrees
    (Georgia Institute of Technology, 1999) Wartell, Zachary Justin ; Ribarsky, William ; Hodges, Larry F.
    We present an algorithm for efficiently computing ray intersections with multi-resolution global terrain partitioned by spheroidal height-augmented quadtrees. While previous methods support terrain defined on a Cartesian coordinate system, our methods support terrain defined on a two-parameter ellipsoidal coordinate system. This curvilinear system is necessary for an accurate model of global terrain. Supporting multi-resolution terrain and quadtrees on this curvilinear coordinate system raises a surprising number of complications. We describe the complexities and present solutions. The final algorithm is suited for interactive terrain selection, collision detection, and simple LOS (line-of-sight) queries on global terrain.
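    The paper's algorithm handles the ellipsoidal geometry in full; the sketch below only outlines the recursive descent through a height-augmented quadtree that such a ray intersector is built on, with the bounding-volume and triangle tests left as stubs. Class and function names are hypothetical.

    ```python
    # Hypothetical outline of ray descent through a height-augmented quadtree.
    class QuadCell:
        def __init__(self, bounds, children=None, triangles=None):
            self.bounds = bounds              # (lon0, lon1, lat0, lat1, hmin, hmax)
            self.children = children or []    # four sub-cells, empty at a leaf
            self.triangles = triangles or []  # leaf terrain geometry

    def ray_hits_bounds(ray, bounds):
        return True   # stub: ray vs. ellipsoidal shell segment test goes here

    def ray_hits_triangle(ray, tri):
        return None   # stub: standard ray-triangle test, returns hit distance

    def intersect(ray, cell):
        """Return the nearest hit distance in this subtree, or None."""
        if not ray_hits_bounds(ray, cell.bounds):
            return None                       # prune: ray misses this cell
        if not cell.children:                 # leaf: test the actual terrain
            hits = [h for t in cell.triangles if (h := ray_hits_triangle(ray, t))]
            return min(hits, default=None)
        hits = [h for c in cell.children if (h := intersect(ray, c))]
        return min(hits, default=None)
    ```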
  • Item
    Balancing Fusion, Image Depth and Distortion in Stereoscopic Head-Tracked Displays
    (Georgia Institute of Technology, 1999) Wartell, Zachary Justin ; Ribarsky, William ; Hodges, Larry F.
    Stereoscopic display is a fundamental part of virtual reality HMD systems and HTD (head-tracked display) systems such as the virtual workbench and the CAVE. A common practice in stereoscopic systems is deliberately incorrect modeling of user eye separation. Underestimating eye separation is frequently necessary for the human visual system to fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. This paper makes three fundamental contributions to understanding and controlling this stereo distortion. (1) We analyze the distortion using a new analytic description. This analysis shows that even with perfect head tracking, a user will perceive virtual objects to warp and shift as she moves her head. (2) We present a new technique for counteracting the shearing component of the distortion. (3) We present improved methods for managing image fusion problems for distant objects and for enhancing the depth of flat scenes.
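    The depth component of that distortion can be made concrete with a short worked example: for a point straight ahead of the viewer, render with eye separation scaled by a factor s, view with the true separation, and solve the parallax equations for the perceived depth. This derivation is a standard geometric one, not the paper's analytic description, and the numbers are illustrative.

    ```python
    # Worked example of depth distortion from false eye separation.
    # Rendered parallax: p = s*e*(1 - d/Z); perceived depth Z* solves
    # p = e*(1 - d/Z*), so Z* = d / (1 - s*(1 - d/Z)); e cancels out.
    def perceived_depth(Z, d, s):
        """Perceived depth of a point at true depth Z, screen at distance d,
        rendered with eye separation scaled by s, viewed with the true one."""
        return d / (1.0 - s * (1.0 - d / Z))

    d = 1.0  # screen distance in meters (illustrative)
    for Z in (2.0, 4.0, 8.0):
        print(Z, round(perceived_depth(Z, d, s=0.5), 3))
    # 2.0 -> 1.333, 4.0 -> 1.6, 8.0 -> 1.778: underestimating separation
    # (s < 1) eases fusion but compresses perceived depth toward the screen.
    ```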
  • Item
    The Perceptive Workbench: Towards Spontaneous and Natural Interaction in Semi-Immersive Virtual Environments
    (Georgia Institute of Technology, 1999) Leibe, Bastian ; Starner, Thad ; Ribarsky, William ; Wartell, Zachary Justin ; Krum, David Michael ; Singletary, Bradley Allen ; Hodges, Larry F.
    The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual world. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper, the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.
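    As one concrete example on the gesture side (an illustration, not the paper's implementation), the sketch below converts a tracked 3D hand position into a pointing target by casting a ray from an assumed body anchor point through the hand and intersecting it with the display surface. The anchor point and plane are illustrative assumptions.

    ```python
    # Hypothetical sketch of pointing-target computation on a tabletop display.
    import numpy as np

    def pointing_target(anchor, hand, plane_z=0.0):
        """Intersect the anchor->hand ray with the display plane z = plane_z."""
        direction = hand - anchor
        if abs(direction[2]) < 1e-9:
            return None                      # ray parallel to the surface
        t = (plane_z - anchor[2]) / direction[2]
        return anchor + t * direction if t > 0 else None

    anchor = np.array([0.0, -0.5, 0.8])      # assumed shoulder/eye anchor (m)
    hand = np.array([0.1, -0.2, 0.5])        # tracked hand position (m)
    print(pointing_target(anchor, hand))     # where the user points on the table
    ```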
  • Item
    Can Audio Enhance Visual Perception and Performance in a Virtual Environment?
    (Georgia Institute of Technology, 1999) Davis, Elizabeth T. ; Scott, Kevin ; Pair, Jarrell ; Hodges, Larry F. ; Oliverio, James
    Does the addition of audio enhance visual perception and performance within a virtual environment? To address this issue we used both a questionnaire and an experimental test of the effect of audio on recall and recognition of visual objects within different rooms of a virtual environment. We tested 60 college-aged students who had normal visual acuity, color vision, and hearing. The between-participants factor was audio condition (none, low fidelity, and high fidelity). The questionnaire results showed that ambient sounds enhanced the sense of presence (or "being there") and the subjective 3D quality of the visual display, but not the subjective dynamic interaction with the display. We also showed that audio can enhance recall and recognition of visual objects and their spatial locations within the virtual environment. These results have implications for the design and use of virtual environments, where audio sometimes can be used to compensate for the quality of the visual display.