
Normalized partial stacking and inverse theory

  Our goal is to create uniformly sampled common offset/azimuth cubes that can be migrated using an amplitude-preserving algorithm. The main tool for creating these common offset/azimuth cubes is the partial stacking of data recorded with irregular geometries within offset and azimuth ranges.

Stacking is the operation of averaging seismic traces by summation. It is an effective way to reduce the size of data sets and to enhance reflections while attenuating noise. To avoid attenuating the signal together with the noise, the reflections need to be coherent among the traces that are being stacked. A common method to increase trace coherency is to apply Normal Moveout (NMO). NMO is a first-order correction for the differences in timing among the reflections in traces recorded at different offsets. Global stacking of all the traces recorded at the same midpoint location, regardless of their offset and azimuth, is the most common type of stacking. Partial stacking averages only those traces with offset and azimuth within a given range.
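As a reminder (the formula is not spelled out in the text; this is the standard hyperbolic form of the correction), NMO maps a reflection recorded at offset $x$ and time $t$ to its zero-offset time $t_0$ according to
 \begin{displaymath}
t(x) = \sqrt{t_0^{2} + \frac{x^{2}}{v_{\rm NMO}^{2}}},
\end{displaymath}
where $v_{\rm NMO}$ is the moveout velocity; the correction flattens reflections across offsets before they are summed.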

The first problem that we encounter when stacking 3-D prestack data is that, because of acquisition-geometry irregularities, data traces do not share the same exact midpoint location. Stacking 3-D prestack data is thus the combination of two processes: spatial interpolation followed by averaging. To start our analysis we define a simple linear model that links the recorded traces (at arbitrary midpoint locations) to the stacked volume (defined on a regular grid). Each data trace is the result of interpolating the stacked traces, and is equal to the weighted sum of the neighboring stacked traces. The interpolation weights are functions of the distance between the midpoint location of the model trace and the midpoint location of the data trace. The sum of all the weights corresponding to one data trace is usually equal to one. Because the weights are independent of time along the seismic traces, for the sake of notational simplicity we collapse the time axis and consider each element $d_i$ of the data space (recorded data) ${\bf d}$, and each element $m_j$ of the model space ${\bf m}$ (stacked volume), as representing a whole trace. The relationship between data and model is linear and can be expressed as
 \begin{displaymath}
d_{i} = \sum_{j} \; a_{ij} \; m_{j};
\;\; {\rm subject\;\;to\;\;the\;\;constraint} \;\;
\sum_{j} \; a_{ij} = 1.
\end{displaymath} (1)
In matrix notation, equation (1) becomes  
 \begin{displaymath}
{\bf d}={\bf A}{\bf m},\end{displaymath} (2)
where the model vector ${\bf m}$ is the composite of all the offset/azimuth cubes ${\bf m}_{{\bf h}}$, that is
 \begin{displaymath}
{\bf m}=
\left[ \matrix{
{\bf m}_{{\bf h}_1} \cr
\vdots \cr
{\bf m}_{{\bf h}_i} \cr
\vdots \cr
{\bf m}_{{\bf h}_n} \cr
} \right].
\end{displaymath} (3)
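To make the linear model of equations (1)-(3) concrete, the following sketch builds a sparse interpolation operator ${\bf A}$ for a single offset/azimuth cube, using a 1-D midpoint axis and two-point linear interpolation; the grid, the random midpoints, and the use of scipy.sparse are illustrative assumptions, not part of the original formulation.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def build_interp_operator(cmp_x, grid_x):
    # Sparse operator A that maps a regularly gridded stack (model) to
    # irregularly located data midpoints with two-point linear interpolation.
    # The weights of each row sum to one, as in equation (1).
    dx = grid_x[1] - grid_x[0]
    frac = (cmp_x - grid_x[0]) / dx               # fractional grid position
    j0 = np.clip(np.floor(frac).astype(int), 0, len(grid_x) - 2)
    w1 = frac - j0                                # weight of right neighbor
    w0 = 1.0 - w1                                 # weight of left neighbor
    rows = np.repeat(np.arange(len(cmp_x)), 2)
    cols = np.column_stack([j0, j0 + 1]).ravel()
    vals = np.column_stack([w0, w1]).ravel()
    return sp.coo_matrix((vals, (rows, cols)),
                         shape=(len(cmp_x), len(grid_x))).tocsr()

# Hypothetical irregular midpoints (data) and regular stack grid (model).
cmp_x  = np.sort(np.random.uniform(0.0, 100.0, size=500))
grid_x = np.linspace(0.0, 100.0, 51)
A = build_interp_operator(cmp_x, grid_x)
assert np.allclose(A.sum(axis=1), 1.0)            # constraint of equation (1)
\end{verbatim}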

Stacking sums the data traces into the model traces, weighted by the interpolation weights. In operator notation, stacking can be represented as the application of the adjoint operator ${\bf A^{'}}$ to the data traces (Claerbout, 1998); that is,
\begin{displaymath}
{\bf m}={\bf A^{'}}{\bf d}.\end{displaymath} (4)
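Continuing the hypothetical sketch above (same A and numpy imports), the raw stack of equation (4) is simply the transpose of the sparse operator applied to the data:
\begin{verbatim}
# Raw (unnormalized) stack, equation (4): spray every data trace back into
# the model grid with its interpolation weights.  'd' is a stand-in for one
# collapsed time sample of the recorded data.
d = np.random.randn(A.shape[0])
m_stack = A.T @ d
\end{verbatim}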
The application of simple stacking as an adjoint operator does not yield satisfactory results when the fold is unevenly distributed among midpoint bins. In the stack, the amplitudes of the bins with higher fold will be artificially higher than the amplitudes of the bins with lower fold. To compensate for this unevenness in the fold, it is common practice to divide the stacked traces by the fold. This fold normalization can also be expressed in operator notation by adding a diagonal operator ${\bf W}_{m}$ in front of the adjoint operator in the computation of the stack:
 \begin{displaymath}
{\bf m}={\bf W}_{m}{\bf A^{'}}{\bf d}.
\end{displaymath} (5)
The weights $w_{j}^{m}$ are given by the inverse of the fold, which can be computed simply by summing the elements in each column of ${\bf A}$; that is,
 \begin{displaymath}
w_{j}^{m} =
\left(\sum_{i} \; a_{ij}\right)^{-1}.
\end{displaymath} (6)
Data gaps in the fold present a problem for fold normalization because they make the weights in equation (6) diverge to infinity. To avoid instability, it is common practice to add a small number $\epsilon_w$ to the actual fold, or to set the weights to zero when the fold is smaller than $\epsilon_w$, as in the following revised expression for the weights:
 \begin{displaymath}
w_{j}^{m} =
\left\{
\begin{array}{ll}
\left(\sum_{i} \; a_{ij}\right)^{-1} & {\rm if} \;\; \sum_{i} \; a_{ij} \geq \epsilon_w \\
 & \\
0 & {\rm elsewhere}. \\
\end{array}
\right.
\end{displaymath} (7)
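Under the same illustrative assumptions as the sketch above, the fold normalization of equations (5)-(7) amounts to scaling each model bin by the reciprocal of the corresponding column sum of ${\bf A}$, with $\epsilon_w$ guarding empty bins; the value of the threshold is an assumption for illustration.
\begin{verbatim}
def normalized_stack(A, d, eps_w=0.5):
    # Fold-normalized stack m = W_m A' d, equations (5)-(7).
    # eps_w is an illustrative threshold below which a bin is treated as empty.
    fold = np.asarray(A.sum(axis=0)).ravel()      # column sums of A = fold
    w_m = np.where(fold >= eps_w, 1.0 / np.maximum(fold, eps_w), 0.0)
    return w_m * (A.T @ d)

m_norm = normalized_stack(A, d)
\end{verbatim}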

We derived the fold normalization from a simple heuristic, and it may seem an ad hoc solution to the problem of normalizing the output of stacking. However, it can be shown that the weights used in fold normalization can be derived by applying the general theory of least-squares inversion to the stacking normalization problem (Biondi, 1999). The least-squares problem is
 \begin{displaymath}
0 \approx {\bf d}- {\bf A}{\bf m},\end{displaymath} (8)
and its formal solution can be written as  
 \begin{displaymath}
{\bf m}= {\bf A^{\ddagger}}_m {\bf d}= \left({\bf A^{'}}{\bf A}\right)^{-1} {\bf A^{'}}{\bf d},\end{displaymath} (9)
where the operator ${\bf A^{\ddagger}}_m$ is often referred to as the pseudoinverse (Strang, 1986).

Applying the least-squares inverse is equivalent to applying the adjoint operator ${\bf A^{'}}$ followed by a spatial filtering of the model space given by the inverse of ${\bf A^{'}}{\bf A}$. The fold normalization can be seen as a particular approximation of the inverse of ${\bf A^{'}}{\bf A}$ with a diagonal operator. Because of the size of the problem, computing the exact inverse of ${\bf A^{'}}{\bf A}$ is not straightforward. We thus have two choices: 1) compute an analytical approximation to the inverse; 2) use an iterative method to compute a numerical approximation to the inverse. Even if we follow the second strategy, the availability of an analytical approximation to the inverse is useful, because the approximate inverse can be used as a preconditioner to accelerate the convergence of the iterative inversion.
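As a sketch of the second strategy (still continuing the hypothetical A and d above), the snippet below approximates the least-squares stack of equation (9) by running conjugate gradients on the normal equations, with the diagonal fold weights of equation (7) used as a preconditioner; the solver choice, the small damping term, and the parameter values are illustrative assumptions rather than the paper's implementation.
\begin{verbatim}
import scipy.sparse.linalg as spla

def ls_stack(A, d, eps_w=0.5, damp=1.0e-6):
    # Iterative approximation of m = (A'A)^{-1} A' d (equation (9)),
    # preconditioned by the diagonal fold-normalization weights.
    fold = np.asarray(A.sum(axis=0)).ravel()
    w_m = np.where(fold >= eps_w, 1.0 / np.maximum(fold, eps_w), 0.0)
    n = A.shape[1]
    # Normal-equations operator A'A, with a tiny damping for empty bins.
    AtA = spla.LinearOperator((n, n),
                              matvec=lambda x: A.T @ (A @ x) + damp * x)
    M = spla.LinearOperator((n, n), matvec=lambda x: w_m * x)
    m, info = spla.cg(AtA, A.T @ d, M=M)
    return m

m_ls = ls_stack(A, d)
\end{verbatim}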

We will discuss two methods for approximating the inverse of ${\bf A^{'}}{\bf A}$. The first method is algebraic and is based on the direct manipulation of the elements of ${\bf A^{'}}{\bf A}$, such as extracting its diagonal or summing the elements in its columns (or rows). The second method is based on the idea of capturing the most significant properties of ${\bf A^{'}}{\bf A}$ by measuring its effects when applied to a reference model (${\bf m}_{\rm ref}$) (Claerbout and Nichols, 1994; Rickett, 2001).

Although these two methods seem unrelated, they yield equivalent results for specific choices of ${\bf m}_{\rm ref}$. Therefore, the second method can be used to analyze the assumptions that underlie the possible choices of approximations.
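A minimal sketch of the reference-model idea, continuing the code above: a diagonal estimate of $({\bf A^{'}}{\bf A})^{-1}$ is chosen so that it maps ${\bf A^{'}}{\bf A}\,{\bf m}_{\rm ref}$ back to ${\bf m}_{\rm ref}$. The constant reference model, the elementwise division, and the threshold are illustrative assumptions, not the specific construction developed here.
\begin{verbatim}
def diag_inverse_from_reference(A, m_ref, eps=1.0e-6):
    # Diagonal estimate of (A'A)^{-1}: choose weights so that
    # w * (A'A m_ref) ~= m_ref, elementwise.
    response = A.T @ (A @ m_ref)
    return np.where(np.abs(response) > eps, m_ref / response, 0.0)

m_ref = np.ones(A.shape[1])      # hypothetical constant reference model
w_ref = diag_inverse_from_reference(A, m_ref)
\end{verbatim}
With a constant reference model, ${\bf A}{\bf m}_{\rm ref}$ has unit entries (the rows of ${\bf A}$ sum to one), so the response reduces to the fold and the weights reduce to the fold normalization of equation (6), illustrating the equivalence noted above.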



 