
Model regularization and preconditioning

In the previous section we saw that data gaps are a challenge for simple fold normalization. To fill the gaps, we want to use the information from traces recorded with a geometry similar to that of the missing ones. The challenge is to devise a method that maximizes image resolution while minimizing artifacts.

Given no a priori knowledge of the reflector geometry, using the information from traces at surrounding midpoints in the same offset-azimuth range can cause a loss of resolution, because it may smooth true reflectivity changes. On the other hand, because of physical constraints on the reflection mechanism, the reflection amplitudes can be assumed to be a smooth function of the reflection angle and azimuth. This observation leads to the idea that the data could be smoothed over offset and azimuth without losing resolution. Ideally, such smoothing should be done over aperture angles (dip and azimuth) at the reflection location, not over offset and azimuth at the surface. However, smoothing at the reflectors would require a full migration of the data, and the migration step would make the method dependent on accurate knowledge of the interval velocity model. This reliance on the velocity model is inescapable when the imaging problems are caused by complexities in the velocity model itself (e.g., subsalt illumination; Prucha et al., 2001), but it ought to be avoided when the imaging problems are caused by irregularities in the acquisition geometries.

In the context of least-squares inversion, smoothing along offset/azimuth in the model space (e.g., uniformly sampled offset/azimuth cubes) can be accomplished by introducing a model-regularization term that penalizes variations of the seismic traces between adjacent cubes. The simple least-squares problem of equation (8) then becomes
\begin{eqnarray}
0 & \approx & {\bf d}- {\bf A}{\bf m}, \nonumber \\
0 & \approx & \epsilon_D {\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}{\bf m},
\end{eqnarray} (11)
where the roughener operator ${\bf D}_{{\bf h}}$ is
\begin{eqnarray}
{\bf D}_{{\bf h}}=
\frac{1}{1-\rho_D}
\left[ { \matrix{
(1-\rho_D){\bf I} & 0 & \ldots & \ldots & 0 \cr
-\rho_D{\bf I} & {\bf I} & 0 & \ldots & 0 \cr
0 & -\rho_D{\bf I} & {\bf I} & \ldots & 0 \cr
\vdots & & \ddots & \ddots & \vdots \cr
0 & \ldots & \ldots & -\rho_D{\bf I} & {\bf I} \cr
} } \right].
\end{eqnarray} (12)
The coefficient $\rho_D$ must be between 0 and 1; it determines the range over which we smooth the offset/azimuth cubes. The smaller the value we set for $\rho_D$, the narrower the smoothing range.
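As a concrete illustration, the following sketch builds the roughener of equation (12) with scalar ($1 \times 1$) blocks and shows how $\rho_D$ controls the smoothing range; the name build_roughener and the use of small dense NumPy matrices are illustrative choices, not part of the method.

\begin{verbatim}
import numpy as np

def build_roughener(n_cubes, rho_d):
    # Block form of D_h from equation (12), with 1x1 identity blocks
    # to keep the sketch small; in practice each block spans all the
    # samples of one offset/azimuth cube.
    D = np.zeros((n_cubes, n_cubes))
    D[0, 0] = 1.0 - rho_d          # first diagonal block: (1 - rho_D) I
    for i in range(1, n_cubes):
        D[i, i] = 1.0              # diagonal blocks: I
        D[i, i - 1] = -rho_d       # subdiagonal blocks: -rho_D I
    return D / (1.0 - rho_d)

# Smaller rho_D -> narrower smoothing: (D'D)^{-1} applied to a spike
# spreads it over fewer neighboring cubes.
for rho in (0.3, 0.9):
    D = build_roughener(8, rho)
    spike = np.zeros(8)
    spike[4] = 1.0
    print(rho, np.round(np.linalg.solve(D.T @ D, spike), 3))
\end{verbatim}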

Regularization with a roughener operator such as ${\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}$ has the computational drawback that it substantially worsens the conditioning of the problem, making the solution more expensive. However, the problem is easy to precondition, because ${\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}$ is easy to invert: it is already factored into a lower block-bidiagonal operator ${\bf D}_{{\bf h}}$ and an upper block-bidiagonal operator ${\bf D}_{{\bf h}}^{'}$, both of which can be inverted by recursion. Therefore, we can write the preconditioned least-squares problem
\begin{eqnarray}
0 & \approx & {\bf d}- {\bf A}\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}{\bf p}, \nonumber \\
0 & \approx & \epsilon_D {\bf I} {\bf p},
\end{eqnarray} (13)
where ${\bf p}={\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}{\bf m}$ is the preconditioned model vector.
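For instance, with the scalar-block roughener of the previous sketch, inverting ${\bf D}_{{\bf h}}$ is a leaky integration, and $\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}$ amounts to two recursive triangular solves; leaky_integrate and apply_inv_DtD are our names for these illustrative helpers, not part of the original method.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

def leaky_integrate(rho_d, x):
    # Invert D_h by forward recursion: with the scaling of equation
    # (12), D_h y = x gives y[0] = x[0] and
    # y[i] = rho_d * y[i-1] + (1 - rho_d) * x[i].
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = rho_d * y[i - 1] + (1.0 - rho_d) * x[i]
    return y

def apply_inv_DtD(D, p):
    # (D_h' D_h)^{-1} = D_h^{-1} D_h'^{-1}: invert the upper factor
    # D_h' by backward recursion, then the lower factor D_h by the
    # forward recursion above.
    tmp = solve_triangular(D.T, p, lower=False)
    return solve_triangular(D, tmp, lower=True)

# Consistency check against the roughener built above:
D = build_roughener(6, 0.5)
x = np.random.default_rng(0).normal(size=6)
assert np.allclose(leaky_integrate(0.5, x),
                   solve_triangular(D, x, lower=True))
\end{verbatim}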

To take fold variations into account, we can introduce a diagonal scaling factor by applying the same theory discussed in the previous section. The weights for the regularized and preconditioned problem are thus computed as
\begin{displaymath}
{\bf W}_{\bf I}^{-1}=
\frac
{ {\rm\bf diag} \left\{\left[\left({\bf D}_{{\bf h}}{\bf D}_{{\bf h}}^{'}\right)^{-1}{\bf A}^{'}{\bf A}\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}\right]{\bf p}_{\rm ref}\right\} }
{ {\rm\bf diag} \left({\bf p}_{\rm ref}\right) }.
\end{displaymath} (14)
Notice that, because of the nature of ${\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}$, ${\bf p}_{\rm ref}=
{\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}{\bf m}_{\rm ref}=
{\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}{\bf 1}=
{\bf 1}$ and $\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}{\bf 1} = {\bf 1}$, so some computation can be saved by computing the weights as
\begin{displaymath}
{\bf W}_{\bf I}^{-1}=
\frac
{ {\rm\bf diag} \left\{\left[\left({\bf D}_{{\bf h}}{\bf D}_{{\bf h}}^{'}\right)^{-1}{\bf A}^{'}{\bf A}\right] \; {\bf 1}\right\} }
{ {\rm\bf diag} \left({\bf 1} \right) }.
\end{displaymath} (15)
In this case, the computational cost of applying the leaky integrator $\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}$ twice to the model vector is small, so the computational saving is trivial. However, when we introduce more expensive operators to smooth the data over offset/azimuth, replacing equation (14) with equation (15) halves the computational cost of evaluating the weights.
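To make equation (15) concrete, the sketch below assumes that ${\bf A}^{'}{\bf A}$ acts diagonally with the fold of each offset/azimuth cube, as in the simple fold normalization of the previous section; the fold values themselves are invented for illustration.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

# Invented fold values, one per cube; the second cube is empty.
fold = np.array([4.0, 0.0, 1.0, 3.0, 2.0, 5.0])

D = build_roughener(len(fold), 0.5)  # roughener from the first sketch
# Equation (15): W_I^{-1} = diag{ [ (D_h D_h')^{-1} A'A ] 1 }.
# Under the diagonal assumption, A'A 1 is just the fold vector, and
# (D_h D_h')^{-1} = D_h'^{-1} D_h^{-1}: lower solve, then upper solve.
tmp = solve_triangular(D, fold, lower=True)
w_inv = solve_triangular(D.T, tmp, lower=False)
# The smoothing spreads fold into the empty cube, so the weights stay
# finite where plain fold normalization would divide by zero.
W = 1.0 / w_inv
\end{verbatim}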

The solution of the problem obtained by normalizing the preconditioned adjoint is  
\begin{displaymath}
{\widetilde {\bf m}} =
\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1} {\bf W}_{\bf I}
\left({\bf D}_{{\bf h}}{\bf D}_{{\bf h}}^{'}\right)^{-1} {\bf A^{'}}{\bf d}.
\end{displaymath} (16)
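Under the same illustrative assumptions, this non-iterative solution costs four recursive triangular solves and one diagonal scaling; Atd stands for ${\bf A^{'}}{\bf d}$, the adjoint applied to the data, and the function name is ours.

\begin{verbatim}
from scipy.linalg import solve_triangular

def normalized_preconditioned_adjoint(D, W, Atd):
    # Equation (16): m~ = (D_h' D_h)^{-1} W_I (D_h D_h')^{-1} A'd.
    tmp = solve_triangular(D, Atd, lower=True)     # D_h^{-1} A'd
    tmp = solve_triangular(D.T, tmp, lower=False)  # (D_h D_h')^{-1} A'd
    tmp = W * tmp                                  # apply W_I
    tmp = solve_triangular(D.T, tmp, lower=False)  # D_h'^{-1} ...
    return solve_triangular(D, tmp, lower=True)    # ... then D_h^{-1}

# e.g., m_tilde = normalized_preconditioned_adjoint(D, W, Atd)
\end{verbatim}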
The normalization weights could be used for another preconditioning step, through a further change of variables ${\bf q}={\bf W}_{\bf I}^{-\frac{1}{2}}{\bf p}$. This would yield the following least-squares problem,
\begin{eqnarray}
0 & \approx & {\bf d}- {\bf A}\left({\bf D}_{{\bf h}}^{'}{\bf D}_{{\bf h}}\right)^{-1}{\bf W}_{\bf I}^{\frac{1}{2}}{\bf q}, \nonumber \\
0 & \approx & \epsilon_D {\bf I} {\bf W}_{\bf I}^{\frac{1}{2}} {\bf q}.
\end{eqnarray} (17)
If the problem expressed in equations (17) were solved iteratively, it would likely converge faster than either the original regularized problem [equations (11)] or the preconditioned problem [equations (13)]. However, we prefer a non-iterative solution, both because it is cheaper and because it is somewhat more predictable.


