Next: PS regularization Up: Data Regularization Previous: Data Regularization

AMO regularization overview

Partial stacking of the data recorded with irregular geometries within offset and azimuth ranges yields uniformly sampled common offset/azimuth cubes. In order to enhance the signal and reduce the noise, the reflections should be coherent among the traces to be stacked. Normal Moveout (NMO) correction is a common method to create this coherency among the traces.

Let us define a simple linear model that links the recorded traces (at arbitrary midpoint locations) to the stacked volume (defined on a regular grid): each data trace is obtained by interpolating the stacked traces, i.e., it is a weighted sum of the neighboring stacked traces. In matrix notation,
\begin{displaymath}
{\bold d} = {\bold A}{\bold m},
\end{displaymath} (1)
where ${\bold d}$ is the data space, ${\bold m}$ is the model space, and ${\bold A}$ is the linear interpolation operator. Stacking can be represented as the application of the adjoint operator ${\bold A'}$ to the data traces,
\begin{displaymath}
{\bold m} = {\bold A'}{\bold d}.
\end{displaymath} (2)
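As a concrete illustration of equations (1) and (2), the interpolation operator ${\bold A}$ and its adjoint (stacking) can be sketched for a single spatial axis. This is a minimal sketch; the function names and the unit grid spacing are illustrative, not part of the original formulation:

```python
import numpy as np

def interp_forward(m, x):
    """Forward operator A: linearly interpolate model traces m (defined on
    a unit grid 0..len(m)-1) to irregular data locations x.  Computes d = A m."""
    k = np.clip(np.floor(x).astype(int), 0, len(m) - 2)  # left grid index
    w = x - k                                            # interpolation weight
    return (1.0 - w) * m[k] + w * m[k + 1]

def interp_adjoint(d, x, n):
    """Adjoint operator A': spray each data value onto its two neighboring
    grid points with the same weights (i.e., stacking).  Computes m = A' d."""
    k = np.clip(np.floor(x).astype(int), 0, n - 2)
    w = x - k
    m = np.zeros(n)
    np.add.at(m, k, (1.0 - w) * d)   # accumulate, handling repeated bins
    np.add.at(m, k + 1, w * d)
    return m
```

Because the adjoint sprays with exactly the same weights the forward uses to gather, the pair passes the dot-product test $\langle {\bold A}{\bold m}, {\bold d} \rangle = \langle {\bold m}, {\bold A'}{\bold d} \rangle$.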

This simple operation does not yield satisfactory results when the fold distribution is uneven. To compensate for this unevenness, it is common practice to normalize the stacked traces by the inverse of the fold (${\bold W_m}$):
\begin{displaymath}
{\bold m} = {\bold W_m} {\bold A'} {\bold d}.
\end{displaymath} (3)
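A minimal numerical sketch of equation (3), using a hypothetical 3-trace, 4-bin interpolation matrix: the fold is obtained by applying ${\bold A'}$ to a vector of ones, and ${\bold W_m}$ is its (guarded) inverse:

```python
import numpy as np

# Toy interpolation matrix A (3 data traces, 4 model bins): each data
# row holds the two linear-interpolation weights of its neighbors.
A = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.0, 0.4, 0.6, 0.0],
              [0.0, 0.9, 0.1, 0.0]])

d = np.array([1.0, 2.0, 1.5])      # recorded data traces (one sample each)

raw_stack = A.T @ d                # m = A' d  -- biased by the uneven fold
fold = A.T @ np.ones(len(d))       # column sums: how much data hits each bin
eps = 1e-10                        # guard against empty bins
m = raw_stack / np.maximum(fold, eps)   # m = W_m A' d, with W_m = diag(fold)^-1
```

Note that the last bin receives no data at all (zero fold), which is exactly the situation the least-squares formulation below handles more gracefully.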

Alternatively, it is possible to apply the general theory of least-squares inversion to the stacking normalization problem. The formal least-squares solution takes the form:
\begin{displaymath}
{\bold m} = \left( {\bold A'}{\bold A} \right)^{-1} {\bold A'}{\bold d}.
\end{displaymath} (4)
Biondi and Vlad (2001) show that the fold normalization (${\bold W_m}$) can be approximated as the inverse of ${\bold {A'A}}$.
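The least-squares solution of equation (4) and its diagonal (fold-like) approximation can be compared on a toy example; the interpolation matrix below is illustrative, and the data are built from a known model so the exact solve can be checked:

```python
import numpy as np

# Toy interpolation matrix (4 data traces, 3 model bins).
A = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.1, 0.9]])
m_true = np.array([1.0, 2.0, 3.0])
d = A @ m_true                             # noise-free data

AtA = A.T @ A
m_ls = np.linalg.solve(AtA, A.T @ d)       # m = (A'A)^-1 A' d, equation (4)
m_approx = (A.T @ d) / np.diag(AtA)        # diagonal (fold-like) approximation
```

The full inverse recovers the model exactly here; the diagonal approximation ignores the off-diagonal coupling between bins, which is the sense in which the fold normalization approximates $({\bold A'}{\bold A})^{-1}$.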

Using model regularization from least-squares inversion theory, it is possible to introduce smoothing along offset/azimuth in the model space. The simple least-squares problem becomes the pair of fitting goals:
\begin{eqnarray}
0 & \approx & {\bold d} - {\bold A}{\bold m} \nonumber \\
0 & \approx & \epsilon_D {\bold D_h} {\bold m},
\end{eqnarray} (5)
where the roughening operator ${\bold D_h}$ can be a leaky-integration operator. However, a leaky-integration operator may cause a loss of resolution when geological dips are present. Substituting the AMO operator for the identity matrix in the lower diagonal of ${\bold D_h}$ correctly transforms a common offset/azimuth cube into an equivalent cube with a different offset and azimuth, while preserving the geological dip.
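The fitting goals in equation (5) can be sketched with a simple first-difference roughener standing in for ${\bold D_h}$ (rather than the leaky-integration or AMO-based operator described above); the sampling pattern below is hypothetical:

```python
import numpy as np

nm, eps = 6, 0.1
# Uneven sampling: only bin pairs (0,1), (1,2) and (4,5) are hit by data;
# bin 3 is empty and must be filled by the regularization.
A = np.array([[0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.3, 0.7, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.6, 0.4]])
d = A @ np.linspace(1.0, 2.0, nm)      # data from a smooth ramp model

# Roughener D_h: first differences along the model (offset) axis.
D = np.diff(np.eye(nm), axis=0)

# Stack the two fitting goals  0 ~ d - A m  and  0 ~ eps D m
G = np.vstack([A, eps * D])
rhs = np.concatenate([d, np.zeros(nm - 1)])
m = np.linalg.lstsq(G, rhs, rcond=None)[0]   # bin 3 filled in by smoothing
```

The unsampled bin ends up at the average of its neighbors, i.e., the roughener interpolates across the acquisition gap instead of leaving a zero-fold hole.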

The fold, which normalizes the data based on the trace distribution, is introduced by a diagonal scaling factor. The weights, for the regularized and preconditioned problem, are thus computed as:
\begin{displaymath}
{\bold W_I}^{-1} = \frac{{\bold {diag}} \left\{ \left[ {\bold A'}{\bold A} + \epsilon_D {\bold I} \right] {\bold p_{ref}} \right\}}{{\bold {diag}}({\bold p_{ref}})},
\end{displaymath} (6)
where ${\bold p_{ref}}={\bold {D'_h D_h m}}$. This fold calculation can be simplified further as:
\begin{displaymath}
{\bold W_I}^{-1} = \frac{{\bold {diag}} \left\{ \left[ {\bold A'}{\bold A} + \epsilon_D {\bold I} \right] {\bold 1} \right\}}{{\bold {diag}}({\bold 1})}.
\end{displaymath} (7)
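Assuming the bracketed operator in equation (7) is the damped normal operator ${\bold A'}{\bold A} + \epsilon_D {\bold I}$ (the extracted equation is truncated, so this is a reconstruction), the weight computation amounts to applying that operator to a reference model of ones:

```python
import numpy as np

# Hypothetical small interpolation matrix (3 data traces, 3 model bins).
A = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.5, 0.5]])
eps_D = 0.01

# Apply the damped normal operator to a reference model of ones: each
# entry of w_inv then measures the effective fold seen by that model bin.
ones = np.ones(A.shape[1])
w_inv = (A.T @ A + eps_D * np.eye(A.shape[1])) @ ones
W_I = 1.0 / w_inv          # diagonal weights W_I of equation (7)
```

The damping term $\epsilon_D$ keeps the weights finite even for model bins that receive no data, which is exactly what the plain inverse-fold normalization of equation (3) cannot guarantee.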

Stanford Exploration Project