GVU Technical Report Series
Series Type: Publication Series

Publication Search Results
Now showing 1 - 10 of 16

Direct Manipulation on the Virtual Workbench: Two Hands Aren't Always Better Than One

2000. Seay, A. Fleming; Krum, David Michael; Hodges, Larry F.; Ribarsky, William

This paper investigates the relative effectiveness of several interaction techniques on a simple rotation and translation task on the virtual workbench. Manipulation time and number of collisions were measured for subjects using four device sets (unimanual glove, bimanual glove, unimanual stick, and bimanual stick). Participants were also asked to subjectively judge each device's effectiveness. Performance results indicated a main effect for device (better performance for users of the stick(s)), but not for number of hands. Subjective results supported these findings, as users expressed a preference for the stick(s).


Discovery Visualization and Visual Data Mining

1999. Ribarsky, William; Katz, Jochen; Jiang, Frank; Holland, Aubrey

This paper describes discovery visualization, a new visual data mining approach whose key element is the machine's heightened awareness of the user. Discovery visualization promotes continuous interaction, with constant feedback between user and machine and constant unfolding of the data. It does this by combining automated response with user selection to achieve and sustain animated action while the user explores time-dependent data. The process begins by automatically generating an overview using a fast clustering approach; the resulting clusters are then followed as time-dependent features. Discovery visualization is applied to both test data and real application data. The results show that the method is accurate and scalable, and it offers a straightforward, error-based process for improving accuracy.
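
Following clusters as time-dependent features implies matching each cluster at one time step to its predecessor at the previous step. A minimal sketch of one plausible matching rule (nearest previous centroid — a hypothetical criterion; the paper's own rule may differ, and `track_clusters` is not its API):

```python
def track_clusters(prev_centroids, new_centroids):
    # Match each new cluster centroid to the nearest centroid from the
    # previous time step, so clusters can be followed over time.
    # (Hypothetical nearest-neighbour rule for illustration only.)
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    matches = {}
    for j, c in enumerate(new_centroids):
        matches[j] = min(range(len(prev_centroids)),
                         key=lambda i: dist2(prev_centroids[i], c))
    return matches

prev = [(0.0, 0.0), (10.0, 10.0)]
new = [(9.5, 10.5), (0.2, -0.1)]
print(track_clusters(prev, new))  # → {0: 1, 1: 0}
```

A greedy nearest-centroid match is cheap enough to run per frame, which matters when the goal is sustained animated action during exploration.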


The Perceptive Workbench: Towards Spontaneous and Natural Interaction in Semi-Immersive Virtual Environments

1999. Leibe, Bastian; Starner, Thad; Ribarsky, William; Wartell, Zachary Justin; Krum, David Michael; Singletary, Bradley Allen; Hodges, Larry F.

The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual world. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.


Intent, Perception, and Out-of-Core Visualization Applied to Terrain

1998. Davis, Douglass; Jiang, Tian-Yue; Ribarsky, William; Faust, Nick L. (Nickolas Lea)

This paper considers how out-of-core visualization applies to terrain datasets, which are among the largest now presented for interactive visualization and can range to sizes of 20 GB and more. It is found that a combination of out-of-core visualization, which tends to focus on 3D data, and visual simulation, which places an emphasis on visual perception and real-time display of multiresolution data, results in interactive terrain visualization with significantly improved data access and quality of presentation. Further, the visual simulation approach provides qualities that are useful for general data, not just terrain.


An Analytic Comparison of Alpha-False Eye Separation, Image Scaling and Image Shifting in Stereoscopic Displays

2000. Wartell, Zachary Justin; Hodges, Larry F.; Ribarsky, William

Stereoscopic display is a fundamental part of many virtual reality systems. Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally the user's natural visual system combines the stereo image pairs and the user perceives a single 3D image. In practice, however, users can have difficulty fusing the stereo image pairs into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. Some fusion algorithms act directly on the 3D geometry while others act indirectly on the projected 2D images or the view parameters. Compared to the direct techniques, the indirect techniques tend to alter the projected 2D images to a lesser degree. However, while the 3D image effects of the direct techniques are algorithmically specified, the 3D effects of the indirect techniques require further analysis. This is important because fusion techniques were developed in non-head-tracked displays that have distortion properties not found in the modern head-tracked variety. In non-head-tracked displays, the non-head-tracked distortions can mask the stereoscopic image artifacts induced by fusion techniques, but in head-tracked displays the distracting effects of a fusion technique may become apparent. This paper is concerned with stereoscopic displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. This paper rigorously and analytically compares the distortion artifacts of three indirect fusion techniques: alpha-false eye separation, image scaling, and image shifting. We show that the latter two methods have additional artifacts not found in alpha-false eye separation, and we conclude that alpha-false eye separation is the best indirect method for these displays.


Balancing Fusion, Image Depth and Distortion in Stereoscopic Head-Tracked Displays

1999. Wartell, Zachary Justin; Ribarsky, William; Hodges, Larry F.

Stereoscopic display is a fundamental part of virtual reality HMD systems and HTD (head-tracked display) systems such as the virtual workbench and the CAVE. A common practice in stereoscopic systems is deliberate incorrect modeling of user eye separation. Underestimating eye separation is frequently necessary for the human visual system to fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. This paper makes three fundamental contributions to understanding and controlling this stereo distortion. (1) We analyze the distortion using a new analytic description. This analysis shows that even with perfect head tracking, a user will perceive virtual objects to warp and shift as she moves her head. (2) We present a new technique for counteracting the shearing component of the distortion. (3) We present improved methods for managing image fusion problems for distant objects and for enhancing the depth of flat scenes.
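The core trade-off — underestimating eye separation shrinks disparity and eases fusion, but also compresses perceived depth — can be made concrete with a simple on-axis geometric model. This is a minimal sketch, not the paper's analytic framework; the function names and the on-axis simplification are assumptions for illustration:

```python
def screen_disparity(depth_behind, eye_sep, view_dist):
    # Screen-plane disparity for an on-axis point 'depth_behind' the
    # screen, rendered with the modeled eye separation (similar triangles).
    return eye_sep * depth_behind / (view_dist + depth_behind)

def perceived_depth(disparity, true_eye_sep, view_dist):
    # Depth behind the screen at which rays from the viewer's true eye
    # positions, cast through the disparate image points, intersect.
    return disparity * view_dist / (true_eye_sep - disparity)

true_sep = 0.065   # metres, typical interocular distance
view_dist = 0.6    # metres from eyes to screen
depth = 0.5        # intended depth behind the screen

# Correct modeling reproduces the intended depth.
d = screen_disparity(depth, true_sep, view_dist)
assert abs(perceived_depth(d, true_sep, view_dist) - depth) < 1e-9

# Underestimating eye separation (here by half) compresses perceived
# depth toward the screen, which is why it eases fusion.
d_under = screen_disparity(depth, 0.5 * true_sep, view_dist)
print(perceived_depth(d_under, true_sep, view_dist))  # ≈ 0.176, well under 0.5
```

The same model run in reverse (overestimated separation) stretches depth, matching the abstract's observation that overestimation enhances image depth at the cost of distortion.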


Semi-Automated and Interactive Construction of 3D Urban Terrains

1999. Wasilewski, Anthony A.; Faust, Nick L. (Nickolas Lea); Ribarsky, William

We have developed a set of tools that attack the problem of rapid construction of 3D urban terrains containing buildings, roads, trees, and other features. Heretofore, the process of creating such databases has been painstaking, with no integrated set of tools to model individual buildings, apply textures, place objects accurately with respect to other objects, and insert them into a database structure appropriate for real-time display. Since fully automated techniques for routinely building 3D urban environments using machine vision have not yet been entirely successful, our approach has been to build a set of semi-automated tools, running on a PC under Windows NT, that support a human interpreter and make the process efficient. The tools use remote sensing technologies and thus are applicable to the general case of not having close access to urban data (e.g., collections of buildings may be in foreign or hostile environments), but can use close-up image data if provided. Once we have the 3D urban model, we face the problems of final precise alignment of objects and real-time visualization. We attack both problems by providing an interface to VGIS, our high-resolution global terrain visualization system. Typically data from different sources, such as phototextures, building models, maps, and terrain elevations, do not register precisely when put together. VGIS provides accurate, real-time display of all these data products. Our tools provide a porting mechanism for bringing the urban data into VGIS where it can be interactively aligned. The data are then organized into a VGIS database for real-time display.


Efficient Ray Intersection for Visualization and Navigation of Global Terrain using Spheroidal Height-Augmented Quadtrees

1999. Wartell, Zachary Justin; Ribarsky, William; Hodges, Larry F.

We present an algorithm for efficiently computing ray intersections with multi-resolution global terrain partitioned by spheroidal height-augmented quadtrees. While previous methods support terrain defined on a Cartesian coordinate system, our methods support terrain defined on a two-parameter ellipsoidal coordinate system. This curvilinear system is necessary for an accurate model of global terrain. Supporting multi-resolution terrain and quadtrees on this curvilinear coordinate system raises a surprising number of complications. We describe the complexities and present solutions. The final algorithm is suited for interactive terrain selection, collision detection, and simple LOS (line-of-sight) queries on global terrain.
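Before descending a spheroidal quadtree, a global ray query typically needs a coarse intersection with the underlying reference ellipsoid itself. A minimal sketch of that first step, assuming WGS-84-style semi-axes (the quadtree descent and the paper's actual algorithm are not reproduced here, and `ray_ellipsoid_intersect` is an illustrative name, not the paper's API):

```python
import math

def ray_ellipsoid_intersect(origin, direction, a=6378137.0, b=6356752.3):
    # Nearest forward intersection of a ray with an Earth-like ellipsoid
    # x^2/a^2 + y^2/a^2 + z^2/b^2 = 1 centred at the origin.
    # Solves the quadratic A t^2 + B t + C = 0 in the ray parameter t.
    ox, oy, oz = origin
    dx, dy, dz = direction
    A = (dx * dx + dy * dy) / a**2 + dz * dz / b**2
    B = 2.0 * ((ox * dx + oy * dy) / a**2 + oz * dz / b**2)
    C = (ox * ox + oy * oy) / a**2 + oz * oz / b**2 - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None          # ray misses the ellipsoid entirely
    t = (-B - math.sqrt(disc)) / (2.0 * A)
    return t if t >= 0.0 else None

# Ray pointing straight down at the pole from 1000 km altitude:
t = ray_ellipsoid_intersect((0.0, 0.0, 6356752.3 + 1.0e6), (0.0, 0.0, -1.0))
print(t)  # → 1000000.0 (metres to the surface)
```

In a full system this coarse hit would only seed the search; the terrain surface is height-augmented, so the precise answer requires walking the quadtree cells the ray crosses in the ellipsoidal parameterization.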


Real-Time Visualization of Scalably Large Collections of Heterogeneous Objects

1999. Davis, Douglass; Ribarsky, William; Jiang, Tian-Yue; Faust, Nick L. (Nickolas Lea); Ho, Sean

This paper presents results for real-time visualization of out-of-core collections of 3D objects. This is a significant extension of previous methods and shows the generality of hierarchical paging procedures applied both to global terrain and to any objects that reside on it. Applied to buildings, the procedure shows the effectiveness of using a screen-based paging and display criterion within a hierarchical framework. The results demonstrate that the method is scalable, since it is able to handle multiple collections of buildings (e.g., cities) placed around the earth with full interactivity and without extensive memory load. Further, the method shows efficient handling of culling and is applicable to larger, extended collections of buildings. Finally, the method shows that levels of detail can be incorporated to provide improved detail management.
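A common way to realize a screen-based paging and display criterion is to project each hierarchy node's geometric error onto the screen and refine only where that projection exceeds a pixel tolerance. A minimal sketch under that assumption (the formula and function names are illustrative, not the paper's implementation):

```python
import math

def screen_space_error(geometric_error, distance, fov_y, viewport_height):
    # Size in pixels of a world-space error seen at 'distance' metres,
    # for a perspective projection with vertical field of view 'fov_y'.
    return geometric_error * viewport_height / (
        2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(errors_per_level, distance, fov_y=math.radians(45.0),
               viewport_height=1080, pixel_tolerance=2.0):
    # errors_per_level: geometric error of each level, coarse -> fine.
    # Return the coarsest level whose projected error is within tolerance,
    # so finer data is paged in only when the screen actually demands it.
    for level, err in enumerate(errors_per_level):
        if screen_space_error(err, distance, fov_y,
                              viewport_height) <= pixel_tolerance:
            return level
    return len(errors_per_level) - 1  # fall back to the finest level

errors = [64.0, 16.0, 4.0, 1.0]  # metres of error per level
print(select_lod(errors, distance=50000.0))  # → 0 (distant: coarse suffices)
print(select_lod(errors, distance=200.0))    # → 3 (close up: finest level)
```

Because the criterion is driven by screen-space error rather than raw distance, it degrades gracefully for collections scattered around the globe: distant cities stay at coarse levels and consume little memory.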


The Analytic Distortion Induced by False-Eye Separation in Head-Tracked Stereoscopic Displays

1999. Wartell, Zachary Justin; Hodges, Larry F.; Ribarsky, William

Stereoscopic display is a fundamental part of virtual reality systems such as the virtual workbench, the CAVE, and HMD systems. A common practice in stereoscopic systems is deliberate incorrect modeling of user eye separation. Underestimating eye separation can help the human visual system fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. We present a novel analytic expression and quantitative analysis of this distortion for eyes at an arbitrary location and orientation.