
Overview

The growing need for detailed reservoir descriptions has led to careful handling of amplitudes from 3D reflection data. Seismic reflections are characterized by their arrival times and amplitudes. Though both are functions of the medium velocity, they are observed over different spectral windows of good-quality reflection data. Amplitudes are interpreted through reflectivities, which describe the high-frequency part of the ``real velocity'' in the subsurface. Therefore, processing seismic data for amplitude inversion has found wide application in fluid imaging, reservoir monitoring, AVO analysis, anisotropy detection, and related studies.

However, acquiring, processing, and interpreting seismic data for true amplitude is a complex process. The difficulties arise from the very definition of ``true amplitude'' and from the sensitivity of the process to acquisition geometry, processing algorithms, and interpretation goals. Over the past decade, ``true-amplitude'' processing has emerged as a new field of research and of re-evaluation of conventional techniques. Most amplitude studies have focused on the development of amplitude-preserving operators such as DMO (Beasley and Mobley, 1988; Black et al., 1993a) and migration (Bleistein, 1987; Schleicher et al., 1993; Sullivan and Cohen, 1987). These algorithms are derived so that amplitude variations as a function of offset are not distorted by the process. However, focusing solely on algorithmic accuracy ignores an issue that is often more detrimental to amplitude preservation: the effect of sparse and irregular sampling (Beasley and Klotz, 1992; Canning and Gardner, 1995).

Many important and sometimes conflicting issues influence 3D survey design; we need to be concerned about adequate midpoint sampling, sufficient offset distribution, and high fold for the best image quality. Recently, azimuth has also become an important consideration in the design of 3D surveys, leading to the controversial question of whether to collect data with a wide or narrow range of azimuths. Wide-azimuth surveys are often considered challenging, but the challenge is not the azimuth range itself; it is the way in which wide-azimuth surveys are acquired. Most often, each azimuth is not sufficiently sampled in offsets and midpoints. In reality, there is no unique answer to the question of azimuth; at best, there are only factors that must be weighed against one another during the design of each survey for an optimum balance of cost and quality.

With current recording geometries, it is difficult to sample the five-dimensional wavefield completely. In addition, during acquisition, obstructions, cable feathering, environmental objectives, economic constraints, and many other factors cause seismic data to be sampled in a sparse and irregular fashion. The irregularities are observed as variations in fold coverage, which can manifest themselves as an acquisition footprint on prestack data or even on the stacked image. If not accounted for, irregular sampling can affect data analysis and introduce noise, amplitude distortions, and even structural distortions in the final image (Beasley, 1994; Black and Schleicher, 1989; Gardner and Canning, 1994).

Many techniques of varying cost and accuracy have been proposed for processing irregularly sampled data; among them are equalized DMO (Beasley and Klotz, 1992), geometrically calibrated DMO (Ronen et al., 1995), and spatial dealiasing (Ronen and Liner, 1987). The goal is to avoid aliasing by interpolating missing data and to equalize the imaging process for the effects of fold variations. In this context, the term fold refers to the variation of coverage at the recording surface. In areas of complex structure, velocity variations significantly distort seismic wave propagation and result in variations of coverage at target zones; this depth-varying coverage is referred to as illumination. Depth migration in complex structures is generally expected to produce an optimally stacked image positioned at the correct depth. Given the size of modern 3D surveys, however, the process is computationally challenging. It also fails quite often because of the difficulty of determining the velocity field, the irregular sampling of seismic data, and, in many cases, the lack of illumination of target zones beneath complex structures.
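To make the notion of fold concrete, the short Python sketch below bins source-receiver midpoints into common-midpoint (CMP) cells and counts the traces falling in each cell; variations in this count across the survey are the fold irregularities discussed above. The function name, the 25 m bin size, and the array layout are illustrative assumptions, not part of any particular processing system.

import numpy as np

def fold_map(source_xy, receiver_xy, bin_size=25.0):
    """Count traces per CMP bin -- a simple surface-fold map.

    source_xy, receiver_xy: (ntraces, 2) arrays of x,y coordinates in meters.
    bin_size: CMP bin dimension in meters (hypothetical value).
    """
    midpoints = 0.5 * (source_xy + receiver_xy)      # CMP coordinates
    ix = np.floor(midpoints[:, 0] / bin_size).astype(int)
    iy = np.floor(midpoints[:, 1] / bin_size).astype(int)
    ix -= ix.min()
    iy -= iy.min()
    fold = np.zeros((ix.max() + 1, iy.max() + 1), dtype=int)
    np.add.at(fold, (ix, iy), 1)                     # traces falling in each bin
    return fold

A map of this count over the survey area is often the first diagnostic of an acquisition footprint: empty or low-fold bins mark where interpolation or equalization is needed before imaging.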

Therefore, assessing the quality of survey designs, reducing the size of the prestack data volume, and regularizing its coverage are essential requirements for practical and efficient prestack imaging of 3D data.

In this dissertation, I present a new approach for imaging irregularly sampled 3D prestack data that uniquely addresses the issues of economic efficiency, accuracy of algorithms, and proper handling of irregular geometry. The strategy is to regularize the coverage of 3D surveys and to reduce the size of 3D prestack data by partial stacking. Both objectives make use of a new partial prestack imaging operator, azimuth moveout (AMO), first introduced by Biondi and Chemingui (1994). For adequately sampled 3D data, I propose a new imaging sequence that applies AMO to regularize the data into common-azimuth, common-offset cubes before imaging. For spatially aliased data, I pose partial stacking as an optimization process to improve the quality of partial stacks and regularize the geometry of the data before imaging.
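As a rough illustration of what partial stacking into common-azimuth, common-offset cubes involves, the sketch below groups NMO-corrected traces by offset and azimuth bin and averages the traces in each bin. It deliberately omits the AMO moveout correction itself, which is introduced in the next chapter, and the bin edges and function names are hypothetical; the sketch only shows the bookkeeping of mapping a five-dimensional prestack volume onto a small number of output cubes.

import numpy as np

def partial_stack(traces, offsets, azimuths, offset_bins, azimuth_bins):
    """Average NMO-corrected traces into common-azimuth, common-offset cubes.

    traces:   (ntraces, nt) array of NMO-corrected traces (same midpoint bin).
    offsets:  (ntraces,) absolute source-receiver offsets in meters.
    azimuths: (ntraces,) source-receiver azimuths in degrees.
    offset_bins, azimuth_bins: bin edges defining the output cubes.

    A faithful implementation would apply the AMO operator to move each
    trace to the bin-center offset and azimuth before summing; here the
    traces are simply averaged to show the geometry of the reduction.
    """
    ih = np.digitize(offsets, offset_bins) - 1
    ia = np.digitize(azimuths, azimuth_bins) - 1
    nh, na = len(offset_bins) - 1, len(azimuth_bins) - 1
    nt = traces.shape[1]
    stack = np.zeros((nh, na, nt))
    count = np.zeros((nh, na))
    for trace, h, a in zip(traces, ih, ia):
        if 0 <= h < nh and 0 <= a < na:
            stack[h, a] += trace
            count[h, a] += 1
    count[count == 0] = 1                    # avoid division by zero in empty bins
    return stack / count[:, :, None]         # fold-normalized partial stacks

Posing this reduction as an optimization, rather than a plain average, is what allows the partial stacks to remain accurate when the input data are aliased or irregularly distributed over offset and azimuth.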

The second chapter of the dissertation introduces the AMO operator and discusses its application to partial stacking for reducing the cost of 3D imaging. It illustrates the approach with an example of applying AMO to reduce the size of a real 3D marine survey from the North Sea. In the third chapter, I show how to derive an amplitude-preserving function for AMO so that the offset and azimuth information are preserved by the transformation. The fourth chapter focuses on Kirchhoff theory and the application of AMO as an integral operator to regularize 3D geometries for common-azimuth processing. In the last chapter of the dissertation, I introduce a new wave-equation inversion technique that is suitable for data with irregular geometry and takes advantage of the abundance of seismic data in multichannel recording to interpolate beyond aliasing. I demonstrate the application of the inversion to regularize the coverage of 3D surveys and to reduce the cost of 3D acquisition in general.

