Series
GVU Technical Report Series

Series Type
Publication Series

Publication Search Results

Now showing 1 - 10 of 11
  • Item
    Precision Markup Modeling and Display in a Global Geospatial Environment
    (Georgia Institute of Technology, 2003) Wartell, Zachary Justin ; Ribarsky, William ; Faust, Nick L. (Nickolas Lea)
    A data organization, scalable structure, and multiresolution visualization approach is described for precision markup modeling in a global geospatial environment. The global environment supports interactive visual navigation from global overviews to details on the ground at the resolution of inches or less. This is a difference in scale of 10 orders of magnitude or more. To efficiently handle details over this range of scales while providing accurate placement of objects, a set of nested coordinate systems is used, which always refers, through a series of transformations, to the fundamental world coordinate system (with its origin at the center of the earth). This coordinate structure supports multi-resolution models of imagery, terrain, vector data, buildings, moving objects, and other geospatial data. Thus objects that are static or moving on the terrain can be displayed without inaccurate positioning or jumping due to coordinate round-off. Examples of high resolution images, 3D objects, and terrain-following annotations are shown.
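The nested-coordinate idea in the abstract above can be sketched numerically: at earth-radius magnitudes, single-precision coordinates cannot resolve centimetre offsets, so positions are kept as small offsets in a local frame whose origin is stored in higher precision relative to the earth-centred world frame. This is an illustrative sketch with made-up numbers, not the paper's implementation.

```python
import numpy as np

EARTH_RADIUS = 6_378_137.0  # WGS84 equatorial radius, metres

# Naive approach: single-precision world coordinates. An object placed
# 1 cm from a reference point on the surface loses its offset to
# round-off, because the float32 spacing near 6.4e6 is 0.5 m.
p_world = np.float32(EARTH_RADIUS + 0.01)
ref_world = np.float32(EARTH_RADIUS)
naive_offset = p_world - ref_world  # 0.0 — the centimetre offset is lost

# Nested frame: the local origin is stored once in double precision
# (transforming back to the earth-centred frame), while offsets within
# the frame stay small enough for single precision to resolve.
local_origin = np.float64(EARTH_RADIUS)
p_local = np.float32(0.01)  # centimetre-scale offset survives intact
```

This is why objects rendered directly in earth-centred single precision appear to jump as the viewpoint moves, while the nested scheme keeps them stable.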
  • Item
    Rendering Vector Data Over Global, Multi-resolution 3D Terrain
    (Georgia Institute of Technology, 2003) Wartell, Zachary Justin ; Kang, Eunjung ; Wasilewski, Anthony A. ; Ribarsky, William ; Faust, Nick L. (Nickolas Lea)
    Modern desktop PCs are capable of taking 2D Geographic Information System (GIS) applications into the realm of interactive 3D virtual worlds. In prior work we developed and presented graphics algorithms and data management methods for interactive viewing of a 3D global terrain system for desktop and virtual reality systems. In this paper we present a key data structure and associated render-time algorithm for the combined display of multi-resolution 3D terrain and traditional GIS polyline vector data. Such vector data is traditionally used for representing geographic entities such as political borders, roads, rivers and cadastral information.
  • Item
    Visual Query of Time-Dependent 3D Weather in a Global Geospatial Environment
    (Georgia Institute of Technology, 2002) Ribarsky, William ; Faust, Nick L. (Nickolas Lea) ; Wartell, Zachary Justin ; Shaw, Christopher D. ; Jang, Justin
    A multi-key data organization is developed for handling a continuous stream of large scale, time-dependent, 3D weather data in a global environment. The structure supports inserting the data in real-time as they arrive or retrieving weather events at desired times and locations from archived weather histories. In either case data are organized for interactive visualization and visual query.
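A multi-key organization of this kind can be sketched as an index keyed jointly by time bucket and spatial cell, supporting both real-time insertion and retrieval by time and location. Class name, bucket sizes, and cell sizes here are illustrative assumptions, not the paper's structure.

```python
from collections import defaultdict

class SpatioTemporalIndex:
    """Hedged sketch: records keyed by (time bucket, lat cell, lon cell)
    so they can be inserted as they arrive and queried by time/place."""

    def __init__(self, time_bucket_s=600, cell_deg=1.0):
        self.time_bucket_s = time_bucket_s  # e.g. 10-minute buckets
        self.cell_deg = cell_deg            # e.g. 1-degree spatial cells
        self.buckets = defaultdict(list)

    def _key(self, t, lat, lon):
        return (int(t // self.time_bucket_s),
                int(lat // self.cell_deg),
                int(lon // self.cell_deg))

    def insert(self, t, lat, lon, record):
        self.buckets[self._key(t, lat, lon)].append((t, lat, lon, record))

    def query(self, t, lat, lon):
        """All records in the bucket covering time t and (lat, lon)."""
        return self.buckets.get(self._key(t, lat, lon), [])
```

Queries over a time/space window would union the buckets the window overlaps; only the single-bucket lookup is shown here.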
  • Item
    A Geometric Comparison of Algorithms for Fusion Control in Stereoscopic HTDs
    (Georgia Institute of Technology, 2001) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
This paper concerns stereoscopic virtual reality displays in which the head is tracked and the display is stationary, attached to a desk, tabletop or wall. These are called stereoscopic HTDs (Head-Tracked Displays). Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally the user's natural visual system combines the stereo image pair into a single, 3D perceived image. Unfortunately users often have difficulty fusing the stereo image pair. Researchers use a number of software techniques to reduce fusion problems. This paper geometrically examines and compares a number of these techniques and reaches the following conclusions. In interactive stereoscopic applications, the combination of view placement, scale and either false eye separation or alpha-false eye separation can provide fusion control geometrically similar to image shifting and image scaling. However, in stereo HTDs image shifting and image scaling also generate additional geometric artifacts not generated by the other methods. We anecdotally link some of these artifacts to exceeding perceptual limitations of human vision. While formal perceptual studies are still needed, geometric analysis suggests that image shifting and image scaling may be less appropriate for interactive, stereo HTDs than the other methods.
  • Item
    An Analytic Comparison of Alpha-False Eye Separation, Image Scaling and Image Shifting in Stereoscopic Displays
    (Georgia Institute of Technology, 2000) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
Stereoscopic display is a fundamental part of many virtual reality systems. Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally the user's natural visual system combines the stereo image pairs and the user perceives a single 3D image. In practice, however, users can have difficulty fusing the stereo image pairs into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. Some fusion algorithms act directly on the 3D geometry while others act indirectly on the projected 2D images or the view parameters. Compared to the direct techniques, the indirect techniques tend to alter the projected 2D images to a lesser degree. However, while the 3D image effects of the direct techniques are algorithmically specified, the 3D effects of the indirect techniques require further analysis. This is important because fusion techniques were developed in non-head-tracked displays that have distortion properties not found in the modern head-tracked variety. In non-head-tracked displays, the non-head-tracked distortions can mask the stereoscopic image artifacts induced by fusion techniques, but in head-tracked displays the distracting effects of a fusion technique may become apparent. This paper is concerned with stereoscopic displays in which the head is tracked and the display is stationary, attached to a desk, tabletop or wall. This paper rigorously and analytically compares the distortion artifacts of three indirect fusion techniques: alpha-false eye separation, image scaling and image shifting. We show that the latter two methods have additional artifacts not found in alpha-false eye separation, and we conclude that alpha-false eye separation is the best indirect method for these displays.
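The geometric core of false eye separation can be sketched with the standard parallax relation: on-screen parallax of a point is linear in the modeled eye separation, so scaling the separation down scales distant-point parallax into the fusible range. The numbers below are illustrative, not taken from these papers.

```python
def screen_parallax(eye_sep, screen_dist, point_dist):
    """On-screen parallax (metres) of a point at point_dist from the
    eyes, projected onto a screen plane at screen_dist (similar
    triangles; eyes symmetric about the screen normal)."""
    return eye_sep * (point_dist - screen_dist) / point_dist

e = 0.065   # nominal interocular distance, metres (illustrative)
d = 0.7     # viewer-to-screen distance, metres (illustrative)
far = 50.0  # a distant virtual point

p_true = screen_parallax(e, d, far)
# False eye separation: render with a reduced separation so distant
# points stay within the fusible parallax range.
alpha = 0.5
p_false = screen_parallax(alpha * e, d, far)
assert abs(p_false - alpha * p_true) < 1e-12  # parallax scales linearly
```

Image shifting and image scaling instead manipulate the projected 2D images after the fact, which is what introduces the additional artifacts the papers analyze.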
  • Item
    Efficient Ray Intersection for Visualization and Navigation of Global Terrain using Spheroidal Height-Augmented Quadtrees
    (Georgia Institute of Technology, 1999) Wartell, Zachary Justin ; Ribarsky, William ; Hodges, Larry F.
We present an algorithm for efficiently computing ray intersections with multi-resolution global terrain partitioned by spheroidal height-augmented quadtrees. While previous methods support terrain defined on a Cartesian coordinate system, our methods support terrain defined on a two-parameter ellipsoidal coordinate system. This curvilinear system is necessary for an accurate model of global terrain. Supporting multi-resolution terrain and quadtrees on this curvilinear coordinate system raises a surprising number of complications. We describe the complexities and present solutions. The final algorithm is suited for interactive terrain selection, collision detection and simple LOS (line-of-sight) queries on global terrain.
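The hierarchical culling that underlies such a query can be sketched in its Cartesian form: descend the quadtree, pruning any subtree whose bounding volume the ray misses. The paper's contribution is carrying this over to ellipsoidal (curvilinear) patch bounds; the names and structure below are illustrative only.

```python
class QuadNode:
    """Height-augmented quadtree node over an axis-aligned box
    (xmin, xmax, ymin, ymax, zmin, zmax)."""
    def __init__(self, bounds, children=None, hit_value=None):
        self.bounds = bounds
        self.children = children or []
        self.hit_value = hit_value  # leaf payload, e.g. a terrain patch id

def ray_aabb(origin, direction, bounds):
    """Slab test: entry distance t along the ray, or None on a miss."""
    t0, t1 = 0.0, float("inf")
    for i in range(3):
        lo, hi = bounds[2 * i], bounds[2 * i + 1]
        if abs(direction[i]) < 1e-12:
            if not (lo <= origin[i] <= hi):
                return None
        else:
            a = (lo - origin[i]) / direction[i]
            b = (hi - origin[i]) / direction[i]
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
            if t0 > t1:
                return None
    return t0

def first_hit(node, origin, direction):
    """Descend the tree, pruning subtrees the ray misses, and return
    the nearest leaf hit as (entry distance, payload)."""
    t = ray_aabb(origin, direction, node.bounds)
    if t is None:
        return None
    if not node.children:
        return (t, node.hit_value)
    hits = (first_hit(c, origin, direction) for c in node.children)
    return min((h for h in hits if h is not None), default=None)
```

On the ellipsoid, the slab test would be replaced by an intersection test against curvilinear patch bounds in (latitude, longitude, height), which is where the complications the abstract mentions arise.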
  • Item
    Balancing Fusion, Image Depth and Distortion in Stereoscopic Head-Tracked Displays
    (Georgia Institute of Technology, 1999) Wartell, Zachary Justin ; Ribarsky, William ; Hodges, Larry F.
    Stereoscopic display is a fundamental part of virtual reality HMD systems and HTD (head-tracked display) systems such as the virtual workbench and the CAVE. A common practice in stereoscopic systems is deliberate incorrect modeling of user eye separation. Underestimating eye separation is frequently necessary for the human visual system to fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. This paper makes three fundamental contributions to understanding and controlling this stereo distortion. (1) We analyze the distortion using a new analytic description. This analysis shows that even with perfect head tracking, a user will perceive virtual objects to warp and shift as she moves her head. (2) We present a new technique for counteracting the shearing component of the distortion. (3) We present improved methods for managing image fusion problems for distant objects and for enhancing the depth of flat scenes.
  • Item
    The Perceptive Workbench: Towards Spontaneous and Natural Interaction in Semi-Immersive Virtual Environments
    (Georgia Institute of Technology, 1999) Leibe, Bastian ; Starner, Thad ; Ribarsky, William ; Wartell, Zachary Justin ; Krum, David Michael ; Singletary, Bradley Allen ; Hodges, Larry F.
The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual world. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.
  • Item
    The Analytic Distortion Induced by False-Eye Separation in Head-Tracked Stereoscopic Displays
    (Georgia Institute of Technology, 1999) Wartell, Zachary Justin ; Hodges, Larry F. ; Ribarsky, William
Stereoscopic display is a fundamental part of virtual reality systems such as the virtual workbench, the CAVE and HMD systems. A common practice in stereoscopic systems is deliberate incorrect modeling of user eye separation. Underestimating eye separation can help the human visual system fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. We present a novel analytic expression and quantitative analysis of this distortion for eyes at an arbitrary location and orientation.
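The distortion the abstract refers to can be illustrated in the simplest on-axis case: triangulate where the viewer's true eyes re-intersect a point that was rendered with a (possibly false) eye separation. This one-line model is a hedged sketch of the effect, not the paper's general analytic expression, which covers arbitrary eye location and orientation.

```python
def perceived_depth(z, screen_dist, render_sep, view_sep):
    """Depth at which the viewer's eyes (separation view_sep) re-fuse a
    point at depth z rendered with separation render_sep, for a point on
    the screen-normal axis. Undefined when the rendered parallax equals
    view_sep (point pushed to infinity)."""
    p = render_sep * (z - screen_dist) / z          # rendered parallax
    return view_sep * screen_dist / (view_sep - p)  # triangulated depth

# Correct modeling reproduces the true depth; underestimating the eye
# separation compresses perceived depth toward the screen.
true_depth = perceived_depth(5.0, 0.7, 0.065, 0.065)
compressed = perceived_depth(5.0, 0.7, 0.5 * 0.065, 0.065)
```

Because the compression factor depends on z, the distortion is nonuniform: objects warp rather than simply shrink, which is why head motion makes the artifact visible in head-tracked displays.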
  • Item
    Third-Person Navigation of Whole-Planet Terrain in a Head-tracked Stereoscopic Environment
    (Georgia Institute of Technology, 1998) Wartell, Zachary Justin ; Ribarsky, William ; Hodges, Larry F.
Navigation and interaction in stereoscopic virtual environments with head-tracking for very large data sets present several challenges beyond those encountered with smaller or monoscopic data sets. First, zooming by approaching or retreating from a target must be augmented by integrating scale as a seventh degree of freedom. Second, in order to maintain good stereoscopic imagery, the interface must: maintain stereo image pairs that the user perceives as a single 3D image, minimize loss of perceived depth since stereoscopic imagery cannot properly occlude the screen's frame, provide maximum depth information, and place objects at distances where they are best manipulated. Finally, the navigation interface must work when the environment is displayed at any scale. This paper addresses these problems for god's-eye-view or third-person navigation of a specific large-scale virtual environment: a high-resolution terrain database covering an entire planet.
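Treating scale as a seventh degree of freedom typically means composing a uniform scale about a fixed target point onto the environment's transform, so the target stays put while everything else zooms toward or away from it. The helper below is an illustrative sketch of that composition, not the paper's interface code.

```python
import numpy as np

def scale_about_point(transform, target, k):
    """Compose a uniform scale of factor k about `target` (world
    coordinates) onto an existing 4x4 model-to-world transform:
    T(target) @ S(k) @ T(-target) @ transform. The target point is a
    fixed point of the added scale."""
    t = np.asarray(target, dtype=float)
    S = np.diag([k, k, k, 1.0])
    T_fwd = np.eye(4)
    T_fwd[:3, 3] = t
    T_back = np.eye(4)
    T_back[:3, 3] = -t
    return T_fwd @ S @ T_back @ transform
```

Zooming then becomes repeated application of small scale factors about the point under the cursor or gaze, which is what lets the same interface work from whole-planet overviews down to ground level.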