Out-of-Core Compression and Decompression of Large n-dimensional Scalar Fields

Ibarria, Lorenzo (Lawrence)
Lindstrom, Peter
Rossignac, Jarek
Szymczak, Andrzej
We present a simple method for compressing very large, regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for R³ and R⁴ data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at previously processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n - 1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used for both lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n - 1)-dimensional slice of the data.
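The abstract describes the Lorenzo predictor as estimating each sample from its previously processed neighbors, with exact predictions for polynomials of degree n - 1. As commonly described, the predictor is an inclusion-exclusion sum over the other corners of the unit hypercube trailing the current sample: neighbors reached by decrementing an odd number of coordinates are added, those reached by decrementing an even number are subtracted. A minimal sketch follows; the function name and the dict-based field interface are illustrative, not from the report.

```python
from itertools import product

def lorenzo_predict(field, index):
    """Predict field[index] from already-processed neighbors.

    Inclusion-exclusion over the corners of the unit hypercube whose
    "upper" corner is `index`: decrement each subset of coordinates by 1,
    add the value when an odd number of coordinates is decremented,
    subtract it when an even (nonzero) number is decremented.
    """
    n = len(index)
    pred = 0
    for offset in product((0, 1), repeat=n):
        k = sum(offset)          # number of decremented coordinates
        if k == 0:
            continue             # skip the sample being predicted
        neighbor = tuple(i - o for i, o in zip(index, offset))
        pred += field[neighbor] if k % 2 == 1 else -field[neighbor]
    return pred

# In 2D this reduces to the familiar parallelogram rule:
#   pred(i, j) = f(i-1, j) + f(i, j-1) - f(i-1, j-1)
# For a degree-1 field such as f(i, j) = i + j, the prediction is exact,
# so the residual to be arithmetic-coded is zero.
```

Because each prediction only reads neighbors in the current and previous (n - 1)-dimensional slice, a sequential sweep can compress or decompress while buffering little more than one slice, which is the out-of-core property the abstract highlights.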
Technical Report