Seismic exploration has entered the 3-D age. With the help of high-performance computers, we can image the structure of the subsurface more and more accurately. Meanwhile, the size of 3-D seismic datasets is also increasing dramatically; the typical volume of a 3-D seismic dataset has reached terabyte levels. The huge volume not only taxes the storage and I/O capacities of computer systems, but also makes managing large datasets a formidable problem. However, the large difference between the volume of a prestack dataset and that of the corresponding poststack dataset implies that there is considerable redundancy in the prestack data. This redundancy makes it possible to compress prestack datasets at high compression ratios.

At SEP, we also face the problems of handling large datasets. One is that we usually do not have enough disk space to store an entire 3-D prestack dataset. Another is that retrieving a dataset from tape to disk takes a long time. Moreover, since we have begun to issue a CD-ROM version of the SEP reports, compressing the datasets means we can store more on one compact disc.

Ergas, Donoho, and others at Chevron (1995a, 1995b) have developed a new seismic data compression technique based on the wavelet transform. This compression algorithm performs a wavelet decomposition of the seismic data and represents them in the wavelet domain, characterized by a number of subbands consisting of different temporal and spatial frequency components. The coherency present in multidimensional seismic datasets, ranging from prestack to poststack data, allows efficient quantization of the data in each wavelet-transform subband, such that the original data can be accurately represented by a very small average number of bits per sample. Many tests show that this technique can compress 3-D prestack datasets at compression ratios as high as 100:1.
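The idea of transforming coherent data into subbands and then quantizing the coefficients coarsely can be sketched in a few lines. The following is a hypothetical, drastically simplified illustration, not the Chevron algorithm itself: it uses a single-level 1-D Haar wavelet and uniform scalar quantization, whereas the real scheme applies multi-level, multidimensional transforms followed by entropy coding.

```python
# Sketch of subband compression: wavelet transform + coarse quantization.
# Hypothetical simplification of the idea described in the text; the actual
# algorithm uses multi-level 3-D wavelets and entropy coding of the indices.

def haar_forward(x):
    """One level of the Haar transform: averages (low band), details (high band)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward."""
    x = []
    for a, d in zip(avg, det):
        x.extend([a + d, a - d])
    return x

def quantize(coeffs, step):
    """Uniform scalar quantization: each coefficient becomes a small integer."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    return [i * step for i in indices]

# Coherent (slowly varying) data: the high band is small, so a coarse
# quantization step maps most detail coefficients to zero -- the redundancy
# that makes high compression ratios possible.
signal = [10.0, 10.5, 11.0, 11.2, 11.1, 10.8, 10.4, 10.0]
avg, det = haar_forward(signal)
step = 0.5
q_det = quantize(det, step)
reconstructed = haar_inverse(avg, dequantize(q_det, step))
```

Because each quantized detail coefficient is off by at most half the quantization step, the reconstruction error per sample is bounded by `step / 2`; the compression comes from storing the small integer indices (mostly zeros here) instead of full-precision coefficients.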

11/12/1997