Series
GVU Technical Report Series

Series Type
Publication Series
Publication Search Results

Now showing 1 - 10 of 13

Subdomain Aware Contour Trees and Contour Evolution in Time-Dependent Scalar Fields

2005 , Szymczak, Andrzej

For time-dependent scalar fields, one is often interested in how the topology of contours changes over time. In this paper, we focus on describing how contours split and merge over a certain time interval. Rather than attempting to describe all individual contour splitting and merging events, we focus on a simpler and therefore more tractable problem: describing and querying the cumulative effect of the splitting and merging events over a user-specified time interval. Using our system one can, for example, find all contours at time t₀ that continue to two contours at time t₁ without hitting the boundary of the domain. For any such contour, a bifurcation must happen somewhere between the two times, but many other events may also happen without changing the cumulative outcome (e.g. merging with several contours born after t₀ or splitting off several contours that disappear before t₁). Our approach is flexible enough to support other types of queries, as long as they can be cast as counting queries for the number of connected components of intersections of contours with certain simply connected domains. Examples of such queries include finding contours with long life spans, contours avoiding a certain subset of the domain over a given time interval, or contours that continue to two contours at a later time and merge back to one still later. Experimental results show that our method can handle large 3D (two space dimensions plus time) and 4D (3D+time) datasets. Both the preprocessing and query algorithms are easy to parallelize.
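All of the queries above reduce to one primitive: counting connected components of a contour restricted to a simply connected subdomain. The sketch below (an illustration written for this listing, not code from the paper; all names are made up) shows that primitive on a 2D slice, counting components of a superlevel set with union-find:

```python
# Minimal sketch (not the paper's algorithm): count connected components
# of the superlevel set {f >= isovalue} on a 2D grid with union-find.
# This is the counting primitive the queries above reduce to.

def count_components(field, isovalue):
    rows, cols = len(field), len(field[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(rows):
        for j in range(cols):
            if field[i][j] >= isovalue:
                parent[(i, j)] = (i, j)
    for i in range(rows):
        for j in range(cols):
            if (i, j) not in parent:
                continue
            for di, dj in ((1, 0), (0, 1)):  # 4-connectivity
                n = (i + di, j + dj)
                if n in parent:
                    union((i, j), n)
    return len({find(p) for p in parent})

grid = [[0, 0, 1, 0],
        [1, 0, 1, 0],
        [1, 0, 0, 1]]
print(count_components(grid, 1))  # -> 3
```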


Extraction of Topologically Simple Isosurfaces from Volume Datasets

2003 , Szymczak, Andrzej , Vanderhyde, James

There are numerous algorithms in graphics and visualization whose performance is known to decay as the topological complexity of the input increases. On the other hand, the standard pipeline for 3D geometry acquisition often produces 3D models that are topologically more complex than their real forms. We present a simple and efficient algorithm that allows us to simplify the topology of an isosurface by altering the values of some number of voxels. Its utility and performance are demonstrated on several examples, including signed distance functions from polygonal models and CT scans.
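As a hedged illustration of the general idea of trading voxel edits for simpler topology (a simplification, not the paper's algorithm), the sketch below flood-fills the "outside" region from the volume boundary and raises any below-isovalue voxels the fill cannot reach, which removes enclosed voids from the isosurface:

```python
from collections import deque

# Hedged sketch of one topology-simplifying edit (not the paper's algorithm):
# flood-fill the region {f < isovalue} from the volume boundary; voxels below
# the isovalue that the fill cannot reach are enclosed voids, so raising them
# to the isovalue removes interior cavities from the isosurface.

def fill_voids(vol, isovalue):
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    seen = set()
    q = deque()
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                boundary = x in (0, nx - 1) or y in (0, ny - 1) or z in (0, nz - 1)
                if boundary and vol[x][y][z] < isovalue:
                    seen.add((x, y, z))
                    q.append((x, y, z))
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if (0 <= n[0] < nx and 0 <= n[1] < ny and 0 <= n[2] < nz
                    and n not in seen and vol[n[0]][n[1]][n[2]] < isovalue):
                seen.add(n)
                q.append(n)
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if vol[x][y][z] < isovalue and (x, y, z) not in seen:
                    vol[x][y][z] = isovalue  # alter the voxel: void removed
    return vol
```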


Edgebreaker: A Simple Compression for Surfaces with Handles

2002 , Rossignac, Jarek , Lopes, Helio , Safanova, Alla , Tavares, Geovan , Szymczak, Andrzej

Edgebreaker is an efficient scheme for compressing triangulated surfaces. A surprisingly simple implementation of Edgebreaker has been proposed for surfaces homeomorphic to a sphere. It uses the Corner-Table data structure, which represents the connectivity of a triangulated surface by two tables of integers, and encodes the connectivity with fewer than 2 bits per triangle. We extend this simple formulation to deal with triangulated surfaces with handles and present detailed pseudocode for the encoding and decoding algorithms (which take one page each). We justify the validity of the proposed approach using the mathematical formulation of handlebody theory for surfaces, which explains the topological changes that occur when two boundary edges of a portion of a surface are identified.
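The Corner-Table structure referenced above is compact enough to sketch. The fragment below is reconstructed from the one-sentence description here, not from the paper's pseudocode; V holds the vertex of each corner, O the opposite corner across the shared edge, and the remaining operators are integer arithmetic:

```python
# Sketch of the Corner Table (reconstructed from the description above, not
# the paper's pseudocode). Corners 3t, 3t+1, 3t+2 belong to triangle t.
#   V[c]: index of the vertex at corner c
#   O[c]: opposite corner, facing the same edge from the adjacent triangle
#         (-1 on a boundary edge)

V = [0, 1, 2,  2, 1, 3]        # two triangles sharing the edge (1, 2)
O = [5, -1, -1, -1, -1, 0]     # corner 0 faces corner 5 across that edge

def tri(c):    return c // 3                   # triangle containing corner c
def next_c(c): return 3 * tri(c) + (c + 1) % 3
def prev_c(c): return 3 * tri(c) + (c + 2) % 3
def right(c):  return O[next_c(c)]             # corner across the right edge
def left(c):   return O[prev_c(c)]             # corner across the left edge

assert tri(4) == 1
assert V[O[0]] == 3            # vertex 3 faces the shared edge from triangle 1
assert next_c(5) == 3 and prev_c(5) == 4
```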


Wrap&zip: Linear decoding of planar triangle graphs

1999 , Rossignac, Jarek , Szymczak, Andrzej

The Edgebreaker compression technique, introduced by Rossignac, encodes any unlabeled triangulated planar graph of t triangles using a string of 2t bits. The string contains a sequence of t letters from the set {C, L, E, R, S}, and 50% of these letters are C. Exploiting constraints on the sequence, we show that the string may in practice be further compressed to 1.6t bits using model-independent codes, and even further using model-specific entropy codes. These results improve over the 2.3t bits needed by Keeler and Westbrook and over the various 3D triangle mesh compression techniques published recently, which all exhibit larger constants or non-linear worst-case storage costs. As in Edgebreaker, we compress the mesh using a spiraling triangle-spanning tree and generate the same sequence of letters. Edgebreaker's decompression uses a look-ahead procedure to identify the third vertex of split triangles (S letter) by counting letter occurrences in the remaining part of the sequence. We introduce here a new decompression technique, which eliminates this look-ahead and thus achieves linear asymptotic time complexity. Wrap&zip converts the string into the corresponding triangle-spanning tree and assigns an orientation to each of its free edges. During that "wrapping" process, whenever two consecutive edges point to the same vertex, it glues them together, possibly continuing the "zip" along the next pair of edges that have just become adjacent. By labeling the vertices according to the order in which they first appear in the triangle-spanning tree, this compression approach may be used to encode the connectivity (incidence of labeled graphs) of three-dimensional triangle meshes that are homeomorphic to a sphere. Being able to decompress connectivity prior to vertex locations is essential for the most advanced geometry compression schemes, which use connectivity to predict the location of a vertex from the locations of its previously decoded neighbors.
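The 2t-bit figure follows from a simple variable-length code: since half the letters are C, giving C a 1-bit codeword and the other four letters 3-bit codewords averages 0.5·1 + 0.5·3 = 2 bits per triangle. A minimal sketch with illustrative (not the paper's) codewords:

```python
# Sketch of a model-independent CLERS code (the exact codewords here are
# illustrative). C gets 1 bit because half the letters are C; L, E, R, S
# share 3-bit codes, so the average is 0.5*1 + 0.5*3 = 2 bits per triangle.
CODE = {'C': '0', 'L': '110', 'E': '111', 'R': '101', 'S': '100'}

def encode(clers):
    return ''.join(CODE[letter] for letter in clers)

def decode(bits):
    inverse = {v: k for k, v in CODE.items()}  # the code is prefix-free
    out, word = [], ''
    for b in bits:
        word += b
        if word in inverse:
            out.append(inverse[word])
            word = ''
    return ''.join(out)

s = 'CCRCCRRSCLE'   # a made-up CLERS string, not from a real mesh
assert decode(encode(s)) == s
print(len(encode(s)) / len(s), 'bits per triangle')
```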


Coronary Vessel Cores From 3D Imagery: A Topological Approach

2005 , Mischaikow, Konstantin , Tannenbaum, Allen R. , Szymczak, Andrzej

We propose a simple method for reconstructing thin, low-contrast blood vessels from three-dimensional greyscale images. Our algorithm first extracts persistent maxima of the intensity on all axis-aligned two-dimensional slices through the input volume. Those maxima tend to concentrate along one-dimensional intensity ridges, in particular along blood vessels. Persistence (which can be viewed as a measure of robustness of a local maximum with respect to perturbations of the data) allows us to filter out the 'unimportant' maxima due to noise or inaccuracy in the input volume. We then build a minimum spanning forest on the persistent maxima, using only edges shorter than a certain threshold. Because of the distribution of the robust maxima, the structure of this forest already reflects the structure of the blood vessels. We apply three simple geometric filters to the forest in order to improve its quality. The first filter removes short branches from the forest's trees. The second filter adds edges longer than the edge-length threshold used earlier that join what appear (based on geometric criteria) to be pieces of the same blood vessel. Such disconnected pieces often result from non-uniformity of contrast along a blood vessel. Finally, we let the user select the tree of interest by clicking near its root (the point from which blood would flow out into the tree). We compute the blood flow direction assuming that the tree has the correct structure and cut it in places where the vessel's geometry would force the blood flow direction to change abruptly. Experiments on clinical CT scans show that our technique can be a useful tool for segmentation of thin, low-contrast blood vessels. In particular, we successfully applied it to extract coronary arteries from heart CT scans. Volumetric 3D models of blood vessels can be obtained from the graph described above by adaptive thresholding.
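A hedged sketch of the forest-building step (illustrative only; the paper's implementation may differ): Kruskal's algorithm over the persistent maxima, accepting only edges shorter than the length threshold, so each resulting tree groups maxima that plausibly lie along one vessel:

```python
import math
from itertools import combinations

# Hedged sketch of the forest-building step (not the paper's implementation):
# Kruskal's algorithm over the persistent maxima, keeping only edges shorter
# than a length threshold. Each resulting tree groups nearby maxima.

def vessel_forest(points, max_edge_len):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2))
    forest = []
    for length, i, j in edges:
        if length >= max_edge_len:
            break                      # remaining edges are even longer
        ri, rj = find(i), find(j)
        if ri != rj:                   # joins two different trees
            parent[ri] = rj
            forest.append((i, j))
    return forest

maxima = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (9, 9, 9)]  # toy persistent maxima
print(vessel_forest(maxima, max_edge_len=3.0))  # (9, 9, 9) stays isolated
```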


Out-of-Core Compression and Decompression of Large n-dimensional Scalar Fields

2003 , Ibarria, Lorenzo (Lawrence) , Lindstrom, Peter , Rossignac, Jarek , Szymczak, Andrzej

We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for R³ and R⁴ data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n−1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n−1)-dimensional slice of the data.
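The predictor itself is compact. In n dimensions it estimates the sample at one corner of a unit cube from the other 2ⁿ−1 corners, with signs alternating by the parity of the offset; in 2D this reduces to the parallelogram rule f(x−1,y) + f(x,y−1) − f(x−1,y−1). A minimal sketch reconstructed from that description (function names are made up):

```python
from itertools import product

# Sketch of the Lorenzo predictor, reconstructed from the description above.
# The sample at one corner of a unit n-cube is predicted from the other
# 2^n - 1 corners, with a sign that alternates with the number of axes along
# which the neighbor is offset. In 2D this is the parallelogram rule
# f(x-1, y) + f(x, y-1) - f(x-1, y-1).

def lorenzo_predict(get_sample, index):
    n = len(index)
    pred = 0
    for offset in product((0, 1), repeat=n):
        k = sum(offset)
        if k == 0:
            continue  # that corner is the sample being predicted
        neighbor = tuple(i - o for i, o in zip(index, offset))
        pred += (-1) ** (k + 1) * get_sample(neighbor)
    return pred

# The prediction is exact for polynomials of degree n - 1, e.g. a plane in 2D:
f = lambda p: 3 * p[0] + 2 * p[1] + 7
assert lorenzo_predict(f, (5, 9)) == f((5, 9))
```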


Edgebreaker on a Corner Table: A Simple Technique for Representing and Compressing Triangulated Surfaces

2001 , Rossignac, Jarek , Safanova, Alla , Szymczak, Andrzej

A triangulated surface S with V vertices is sometimes stored as a list of T independent triangles, each described by the 3 floating-point coordinates of its vertices. This representation requires about 576V bits and provides no explicit information regarding the adjacency between neighboring triangles or vertices. A variety of boundary-graph data structures may be derived from such a representation in order to make explicit the various adjacency and incidence relations between triangles, edges, and vertices. These relations are stored to accelerate algorithms that visit the surface in a systematic manner and access the neighbors of each vertex or triangle. Instead of these complex data structures, we advocate a simple Corner Table, which explicitly represents the triangle/vertex incidence and the triangle/triangle adjacency of any manifold or pseudo-manifold triangle mesh as two tables of integers. The Corner Table requires about 12V log₂V bits and must be accompanied by a vertex table, which requires 96V bits if floats are used. The Corner Table may be derived from the list of independent triangles. For meshes homeomorphic to a sphere, it may be compressed to less than 4V bits by storing the "clers" sequence of triangle labels from the set {C, L, E, R, S}. Further compression to 3.6V bits may be guaranteed by using context-based codes for the clers symbols. Entropy codes reduce the storage for large meshes to less than 2V bits. Meshes with more complex topologies may require O(log₂V) additional bits per handle or hole. We present here a publicly available, simple, state-machine implementation of the Edgebreaker compression, which traverses the corner table, computes the CLERS symbols, and constructs an ordered list of vertex references. Vertices are encoded, in the order in which they appear on the list, as corrective displacements between their predicted and actual locations. Quantizing vertex coordinates to 12 bits and predicting each vertex as a linear combination of its previously encoded neighbors leads to short displacements, for which entropy codes drop the total vertex-location storage for typical, heavily sampled meshes below 16V bits.
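The last sentence describes the geometry stage, which can be sketched as follows (a hedged illustration, not the released implementation; constants and names are made up): quantize coordinates to 12 bits, predict each new vertex by the parallelogram rule from three previously decoded vertices, and store only the small corrective residual:

```python
# Hedged sketch of the vertex-coding stage (not the released Edgebreaker
# implementation). Coordinates are quantized to 12 bits and each new vertex
# is predicted from three previously decoded vertices by the parallelogram
# rule; only the (typically small) residual would be entropy coded.

BITS = 12

def quantize(p, lo, hi):
    scale = (2 ** BITS - 1) / (hi - lo)
    return tuple(round((c - lo) * scale) for c in p)

def parallelogram_predict(a, b, opposite):
    # Complete triangle (a, b, opposite) to a parallelogram: a + b - opposite.
    return tuple(x + y - z for x, y, z in zip(a, b, opposite))

def residual(actual, predicted):
    return tuple(x - y for x, y in zip(actual, predicted))

lo, hi = 0.0, 1.0                                  # bounding range per axis
a   = quantize((0.20, 0.40, 0.10), lo, hi)
b   = quantize((0.30, 0.40, 0.10), lo, hi)
opp = quantize((0.25, 0.30, 0.10), lo, hi)
new = quantize((0.26, 0.51, 0.11), lo, hi)
print(residual(new, parallelogram_predict(a, b, opp)))  # small integers
```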


Simplifying the Topology of Volume Datasets: An Opportunistic Approach

2005 , Vanderhyde, James , Szymczak, Andrzej

Understanding isosurfaces and contours (their connected components) is important for the analysis as well as effective visualization of 3D scalar fields. The topological changes that the contours undergo as the isovalue varies are typically represented using the contour tree, which can be obtained from the input scalar field by collapsing every contour to a single point. Contour trees are known to provide useful information, allowing one to find interesting isovalues and contours, speed up computations involving isosurfaces or contours, or analyze or visualize the scalar field's qualitative structure. However, the applicability of contour trees can, in many cases, be problematic because of their large size. Morse theory relates the contour topology changes to critical points in the underlying scalar field. We describe a simple algorithm that can decrease the number of critical points in a regularly sampled volume dataset. The procedure produces a perturbed version of the input volume that has fewer critical points but, at the same time, is guaranteed to be less than a user-specified threshold away from the input volume (in the supremum norm). Because the input and output volumes are close, the algorithm preserves the most stable topological features of the scalar field. Although we do not guarantee that the number of critical points in the output volume is minimal among all volumes within the threshold of the input dataset, our experiments demonstrate that the procedure is quite effective for a variety of input data types. Apart from reducing the size of the contour tree, it also reduces the topological complexity of individual isosurfaces.
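A greatly simplified, hypothetical version of the idea (not the paper's algorithm) is sketched below: a strict local maximum is lowered to its highest neighbor, but only when the perturbation stays within the user threshold, so the supremum-norm guarantee is preserved while a critical point disappears:

```python
# Hedged, greatly simplified sketch (not the paper's algorithm): remove a
# critical point by lowering a strict local maximum to its highest neighbor,
# but only if the perturbed value stays within eps of the input, preserving
# the sup-norm guarantee.

def flatten_maxima(values, neighbors, eps):
    out = dict(values)
    for v, val in values.items():
        nb = [values[u] for u in neighbors[v]]
        if nb and val > max(nb):          # strict local maximum
            target = max(nb)              # lowering this far removes it
            if val - target <= eps:       # stays within eps of the input
                out[v] = target
    return out

# Toy 1D example: a bump of height 0.2 disappears when eps >= 0.2,
# while the larger feature at vertex 3 is preserved.
vals = {0: 0.0, 1: 0.2, 2: 0.0, 3: 1.0, 4: 0.0}
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(flatten_maxima(vals, nbrs, eps=0.25))  # vertex 1 flattened, 3 kept
```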


On Coherent Rotation Angles for As-Rigid-As-Possible Shape Interpolation

2003 , Choi, Jaeil , Szymczak, Andrzej

Morphing algorithms, which attempt to construct visually pleasing interpolations between shapes, have numerous applications in computer graphics. One of the desirable properties of such interpolations is the avoidance of self-intersections in the deforming shapes. A local variant of this property precludes the triangles from becoming degenerate. We discuss a topological invariant of such non-degenerating morphs in the plane and describe a simple improvement of the as-rigid-as-possible shape interpolation algorithm that it motivates.
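One way to make rotation angles coherent (a hypothetical sketch motivated by the title, not the paper's construction) is to propagate angles over the triangle adjacency graph, snapping each per-triangle angle to the representative modulo 2π nearest its already-assigned neighbor:

```python
import math
from collections import deque

# Hedged sketch of choosing coherent rotation angles (illustrative, not the
# paper's construction). Each triangle's rotation angle is only determined
# modulo 2*pi; propagating over the triangle adjacency graph and snapping
# each angle to the representative nearest its already-assigned neighbor
# keeps adjacent rotations coherent.

def coherent_angles(raw_angles, adjacency, root=0):
    two_pi = 2 * math.pi
    angles = {root: raw_angles[root]}
    queue = deque([root])
    while queue:
        t = queue.popleft()
        for u in adjacency[t]:
            if u in angles:
                continue
            # Representative of raw_angles[u] (mod 2*pi) closest to angles[t].
            k = round((angles[t] - raw_angles[u]) / two_pi)
            angles[u] = raw_angles[u] + k * two_pi
            queue.append(u)
    return angles

raw = {0: 0.1, 1: 6.2, 2: 0.3}      # 6.2 is about -0.08 modulo 2*pi
adj = {0: [1], 1: [0, 2], 2: [1]}
print(coherent_angles(raw, adj))    # triangle 1 snaps to 6.2 - 2*pi
```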


Connectivity Compression for Irregular Quadrilateral Meshes

1999 , King, Davis , Szymczak, Andrzej , Rossignac, Jarek

Many 3D models used in engineering, scientific, and visualization applications are represented by an irregular mesh of bounding quadrilaterals. We propose a scheme for compressing the connectivity of irregular quadrilateral meshes at 0.26-1.7 bits per quad, a 25-45% savings over randomly splitting quads into triangles and applying triangle mesh compression. Our approach is an extension of the Edgebreaker compression approach and of the Wrap&Zip decompression technique.
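The baseline in the comparison above is easy to make concrete. The sketch below (illustrative; the actual experimental baseline may differ) randomly splits each quad along one of its diagonals before handing the result to a triangle-mesh coder:

```python
import random

# Sketch of the baseline the savings are measured against (illustrative):
# split each quadrilateral into two triangles along a randomly chosen
# diagonal, then apply triangle mesh connectivity compression.

def quads_to_triangles(quads, rng=random.Random(0)):
    tris = []
    for a, b, c, d in quads:
        if rng.random() < 0.5:               # "randomly splitting quads"
            tris += [(a, b, c), (a, c, d)]   # diagonal a-c
        else:
            tris += [(a, b, d), (b, c, d)]   # diagonal b-d
    return tris

print(quads_to_triangles([(0, 1, 2, 3), (1, 4, 5, 2)]))
```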