
AMO regularization overview

Partial stacking of the data recorded with irregular geometries within offset and azimuth ranges yields uniformly sampled common offset/azimuth cubes. In order to enhance the signal and reduce the noise, the reflections should be coherent among the traces to be stacked. Normal Moveout (NMO) is a common method to create this coherency among the traces.

Let us define a simple linear model that links the recorded traces (at arbitrary midpoint locations) to the stacked volume (defined on a regular grid). Each data trace is the result of interpolating the stacked traces; that is, it is equal to the weighted sum of the neighboring stacked traces. In matrix notation, this relationship is
\begin{displaymath}
{\bold d} = {\bold A}{\bold m},\end{displaymath} (96)
where ${\bold d}$ is the data space, ${\bold m}$ is the model space, and ${\bold A}$ is the linear interpolation operator. Stacking can be represented as the application of the adjoint operator ${\bold A'}$ to the data traces,
\begin{displaymath}
{\bold m} = {\bold A'}{\bold d}.\end{displaymath} (97)
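
As a concrete illustration of equations (96) and (97), the following is a minimal sketch of a linear interpolation operator and its adjoint, restricted to a single time slice and a one-dimensional spatial axis. The NumPy implementation, the grid parameters, and the function names are illustrative assumptions, not part of the original formulation.
\begin{verbatim}
# Hypothetical sketch of the interpolation operator A (eq. 96) and its
# adjoint A' (eq. 97), for one time slice along one spatial axis.
import numpy as np

def forward(model, x_data, x0, dx):
    """d = A m: interpolate regularly gridded model traces to data locations."""
    f = (x_data - x0) / dx              # fractional grid coordinate of each trace
    i = np.floor(f).astype(int)         # index of the left neighboring bin
    w = f - i                           # linear-interpolation weight
    return (1.0 - w) * model[i] + w * model[i + 1]

def adjoint(data, x_data, x0, dx, nx):
    """m = A' d: spread (stack) the data traces back onto the regular grid."""
    f = (x_data - x0) / dx
    i = np.floor(f).astype(int)
    w = f - i
    model = np.zeros(nx)
    np.add.at(model, i, (1.0 - w) * data)     # accumulate left contributions
    np.add.at(model, i + 1, w * data)         # accumulate right contributions
    return model
\end{verbatim}
A dot-product test, comparing ${\bold d'}({\bold A}{\bold m})$ with $({\bold A'}{\bold d})'{\bold m}$ for random vectors, is the standard check that the two routines are truly adjoint.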

This simple operation does not yield satisfactory results for an uneven fold distribution. To compensate for this unevenness, it is common practice to normalize the stacked traces by the inverse of the fold (${\bold W_m}$), thus:
\begin{displaymath}
{\bold m} = {\bold W_m} {\bold A'} {\bold d}.
\end{displaymath} (98)
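
Continuing the sketch above, equation (98) can be exercised numerically by applying the adjoint to a vector of ones on the data side, which counts the traces falling into each bin. The synthetic geometry and variable names below are illustrative assumptions.
\begin{verbatim}
# Continuing the sketch: hypothetical irregular geometry and the
# fold-normalized stack of eq. (98).
nx, nd = 101, 500                   # number of model bins and data traces
x0, dx = 0.0, 1.0                   # origin and spacing of the regular grid
rng = np.random.default_rng(0)
x_data = rng.uniform(x0, x0 + (nx - 2) * dx, nd)   # irregular trace positions
data = rng.standard_normal(nd)                     # stand-in for NMO-corrected traces

fold = adjoint(np.ones(nd), x_data, x0, dx, nx)    # traces contributing to each bin
Wm = np.where(fold > 0.0, 1.0 / fold, 0.0)         # diagonal of W_m (zero for empty bins)
m_stack = Wm * adjoint(data, x_data, x0, dx, nx)   # normalized stack, eq. (98)
\end{verbatim}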

Alternatively, it is possible to apply the general theory of least-squares inversion to the stacking normalization problem. The formal least-squares solution takes the form:
\begin{displaymath}
{\bold m} = \left( {\bold A'}{\bold A} \right)^{-1} {\bold A'}{\bold d}.\end{displaymath} (99)
() show that the fold normalization ${\bold W_m}$ approximates the inverse of ${\bold {A'A}}$.
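
A hedged sketch of how equation (99) might be evaluated without forming $\left({\bold A'}{\bold A}\right)^{-1}$ explicitly: the normal operator is applied matrix-free and inverted iteratively with conjugate gradients. SciPy is assumed to be available, and the small diagonal damping that keeps empty bins well posed is an assumption of this sketch, continuing the running example.
\begin{verbatim}
# Iterative evaluation of eq. (99) with a matrix-free normal operator A'A
# (plus a small assumed damping so empty bins do not make it singular).
from scipy.sparse.linalg import LinearOperator, cg

damp = 1e-3                          # illustrative damping for empty bins
normal_op = LinearOperator(
    (nx, nx),
    matvec=lambda m: adjoint(forward(m, x_data, x0, dx), x_data, x0, dx, nx)
                     + damp * m)
m_ls, info = cg(normal_op, adjoint(data, x_data, x0, dx, nx), maxiter=50)
\end{verbatim}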

Drawing on model regularization in least-squares inversion theory, it is possible to introduce smoothing along offset/azimuth in the model space. The simple least-squares problem becomes:
\begin{eqnarray}
0 & \approx & {\bold {d - Am}} \nonumber \\
0 & \approx & \epsilon_D {\bold D'_h}{\bold D_h}{\bold m},
\end{eqnarray}
(100)
where the roughener operator ${\bold D_h}$ can be a leaky integration operator. However, the use of a leaky integration operator may cause a loss of resolution when geological dips are present. Substituting the identity matrix in the lower diagonal of ${\bold D_h}$ with the AMO operator correctly transforms a common offset/azimuth cube into an equivalent cube with a different offset and azimuth, while preserving the geological dip.
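
The sketch below sets up the regularized system of equation (100) as an augmented operator and solves it with SciPy's LSQR, continuing the running example. A damped first difference along the single spatial axis stands in for ${\bold D_h}$ here; in the actual scheme the lower diagonal of ${\bold D_h}$ would carry the AMO operator acting across offset and azimuth, which is not reproduced in this sketch. The values of the damping weight and leak factor are illustrative assumptions.
\begin{verbatim}
# Hypothetical stand-in for the regularized problem of eq. (100): the
# roughener D_h is modeled as identity minus a leaked copy of the previous
# bin (a damped first difference); eps and rho are illustrative values.
from scipy.sparse.linalg import LinearOperator, lsqr

eps, rho = 0.1, 0.9

def Dh(m):
    """Roughener: identity on the diagonal, -rho on the lower diagonal."""
    out = m.copy()
    out[1:] -= rho * m[:-1]
    return out

def Dh_adj(r):
    """Adjoint of the roughener."""
    out = r.copy()
    out[:-1] -= rho * r[1:]
    return out

def aug_matvec(m):                   # [ A ; eps * D_h' D_h ] m
    return np.concatenate([forward(m, x_data, x0, dx), eps * Dh_adj(Dh(m))])

def aug_rmatvec(r):                  # adjoint: A' r_d + eps * D_h' D_h r_m
    return adjoint(r[:nd], x_data, x0, dx, nx) + eps * Dh_adj(Dh(r[nd:]))

G = LinearOperator((nd + nx, nx), matvec=aug_matvec, rmatvec=aug_rmatvec)
m_reg = lsqr(G, np.concatenate([data, np.zeros(nx)]), iter_lim=100)[0]
\end{verbatim}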

The fold, which normalizes the data according to the trace distribution, is introduced as a diagonal scaling factor. The weights for the regularized and preconditioned problem are thus computed as:
\begin{displaymath}
{\bold W_I}^{-1} = \frac{{\bold {diag}} \left\{ \left[ \left( {\bold A'}{\bold A} \right) + \epsilon_D {\bold I} \right] \bold {p_{ref}} \right\}}{\bold{diag(p_{ref})}},
\end{displaymath} (101)
where $\bold {p_{ref}}={\bold {D'_h D_h m}}$. This fold calculation can be further simplified to:
\begin{displaymath}
{\bold {W_I}}^{-1} = \frac{\bold {diag} \left\{ \left[ \left( {\bold A'}{\bold A} \right) + \epsilon_D {\bold I} \right] {\bold 1} \right\}}{\bold {diag(1)}}.
\end{displaymath} (102)
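
Continuing the running sketch, the simplified weight of equation (102) amounts to applying the bracketed operator to a vector of ones and reading off the result bin by bin. The operator used below follows the equation as reconstructed above, with eps standing in for $\epsilon_D$; it is an illustrative assumption rather than the original implementation.
\begin{verbatim}
# Hypothetical evaluation of the simplified weights of eq. (102): apply
# [A'A + eps*I] to a vector of ones and invert the result elementwise.
ones = np.ones(nx)
diag_est = adjoint(forward(ones, x_data, x0, dx), x_data, x0, dx, nx) + eps * ones
WI = 1.0 / diag_est        # eps keeps the estimate positive even in empty bins
\end{verbatim}
For the form of equation (101), the vector of ones would be replaced by the reference vector $\bold {p_{ref}}={\bold {D'_h D_h m}}$ and the result divided elementwise by $\bold {p_{ref}}$.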

