
Approximating the Hessian

In equation (4), I define ${\bf L'd}$ as the migrated image ${\bf m_1}$ after a first migration, such that
\begin{displaymath}
{\bf \hat{m} = (L'L)^{-1}m_1}.
\end{displaymath} (5)
In equation (5), ${\bf \hat{m}}$ and ${\bf L'L}$ are unknown. Since I am looking for an approximation of the Hessian, I need to find two known images that are related by the same expression as in equation (5). This is easily achieved by remodeling the data from ${\bf m_1}$ with ${\bf L}$:
\begin{displaymath}
{\bf d_1 = Lm_1},
\end{displaymath} (6)
and remigrating them with ${\bf L'}$ as follows:
\begin{displaymath}
{\bf m_2 = L'd_1 = L'Lm_1}.
\end{displaymath} (7)
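As a sanity check on the remodel-remigrate relation ${\bf m_2 = L'Lm_1}$, here is a minimal numerical sketch; a small random matrix stands in for the actual wave-equation modeling operator ${\bf L}$ (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in modeling operator L (data space x model space); a random
# matrix replaces the actual wave-equation modeling operator here.
n_data, n_model = 12, 8
L = rng.standard_normal((n_data, n_model))

# Synthetic data from a reference model, then a first migration m1 = L'd.
m_true = rng.standard_normal(n_model)
d = L @ m_true
m1 = L.T @ d

# Remodel the data from m1 (equation 6) and remigrate (equation 7).
d1 = L @ m1
m2 = L.T @ d1

# m2 is the Hessian L'L applied to m1, so m1 and m2 are two known
# images related by the same expression as equation (5).
assert np.allclose(m2, L.T @ L @ m1)
```

The assertion confirms that ${\bf m_1}$ and ${\bf m_2}$ are both computable from the data and are linked exactly by the (unknown) Hessian.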
Notice the similarity between equations (5) and (7), except that in equation (7) only ${\bf L'L}$ is unknown. Notice also that ${\bf m_2}$ has a mathematical significance: it is a vector of the Krylov subspace for the model ${\bf \hat{m}}$. Now, I assume that we can write the inverse Hessian as a linear operator ${\bf B}$ such that
\begin{displaymath}
{\bf \hat{m} = Bm_1},
\end{displaymath} (8)
\begin{displaymath}
{\bf m_1 = Bm_2}.
\end{displaymath} (9)
Equation (9) can be posed as a fitting goal for a matching-filter estimation problem in which ${\bf B}$ is the convolution matrix of a bank of non-stationary filters (Rickett et al., 2001). This choice is rather arbitrary, but it reflects the general idea that the Hessian is a transform operator between two similar images. My hope is not to ``perfectly'' represent the Hessian, but to improve the migrated image at a lower cost than least-squares migration. In addition, in equations (8) and (9) the deconvolution process becomes a convolution, which makes it much more stable and easier to apply. Hence, I can rewrite equation (9) so that the matrix ${\bf B}$ becomes a vector ${\bf b}$ and ${\bf m_2}$ becomes a convolution matrix ${\bf M_2}$ (Robinson and Treitel, 1980):
\begin{displaymath}
{\bf m_1 = M_2b}.
\end{displaymath} (10)
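To make equation (10) concrete, here is a hypothetical sketch of how a convolution matrix ${\bf M_2}$ can be built from a short trace, so that the matrix-vector product ${\bf M_2b}$ reproduces the convolution of ${\bf m_2}$ with the filter ${\bf b}$ (sizes and values are illustrative only):

```python
import numpy as np

# Toy trace standing in for the remigrated image m2, and a filter length.
m2 = np.array([1.0, 2.0, -1.0, 0.5])
nf = 3

# Full (linear) convolution matrix: column j holds m2 shifted down by j,
# so that M2 @ b equals np.convolve(m2, b) for any filter b of length nf.
n_out = len(m2) + nf - 1
M2 = np.zeros((n_out, nf))
for j in range(nf):
    M2[j:j + len(m2), j] = m2

b = np.array([0.5, -0.25, 0.1])
assert np.allclose(M2 @ b, np.convolve(m2, b))
```

Rearranging the roles of filter and trace this way is what turns the unknown operator ${\bf B}$ into an ordinary vector of filter coefficients.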
The goal now is to minimize the residual
\begin{displaymath}
{\bf 0 \approx r_{m_1} = m_1 - M_2 b}
\end{displaymath} (11)
in a least-squares sense. Because there are many unknown filter coefficients in ${\bf b}$, I introduce a regularization term that penalizes differences between filters as follows:
\begin{displaymath}\begin{array}{lllll}
{\bf 0} & \approx & {\bf r_{m_1}} & = & {\bf m_1 - M_2 b} \\
{\bf 0} & \approx & {\bf r_b} & = & {\bf Rb},
\end{array}\end{displaymath} (12)
where ${\bf R}$ is the helix derivative (Claerbout, 1998). The objective function for equation (12) becomes
\begin{displaymath}
f({\bf b}) = \Vert{\bf r_{m_1}}\Vert^2+\epsilon^2 \Vert{\bf r_b}\Vert^2,
\end{displaymath} (13)
where $\epsilon$ is a constant. The least-squares inverse is thus given by
\begin{displaymath}
{\bf \hat{b}}=({\bf M_2'M_2}+\epsilon^2 {\bf R'R})^{-1}{\bf M_2'm_1}.
\end{displaymath} (14)
Once ${\bf \hat{b}}$ is estimated, the final image is obtained by computing
\begin{displaymath}
{\bf \hat{m} = m_1 * \hat{b}},
\end{displaymath} (15)
where $*$ denotes the convolution operator.
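Putting equations (12) through (15) together, a minimal 1-D sketch follows. It makes two simplifying assumptions for illustration: a single stationary filter replaces the bank of non-stationary filters, and a first-difference matrix stands in for the helix derivative ${\bf R}$; the Hessian is mimicked by a short smoothing convolution:

```python
import numpy as np

rng = np.random.default_rng(1)

# m1: a first migrated image; the Hessian L'L is mimicked by a short
# smoothing filter h, so that m2 = (L'L) m1 = h * m1.
n, nf = 200, 7
m1 = rng.standard_normal(n)
h = np.array([0.2, 0.6, 0.2])
m2 = np.convolve(m1, h, mode="same")

# Same-length convolution matrix M2 (circular shifts, for simplicity)
# such that M2 @ b approximates the centered convolution m2 * b.
M2 = np.column_stack([np.roll(m2, j - nf // 2) for j in range(nf)])

# First-difference regularization standing in for the helix derivative.
R = np.eye(nf) - np.eye(nf, k=1)
eps = 0.1

# Regularized normal equations (equation 14).
b_hat = np.linalg.solve(M2.T @ M2 + eps**2 * R.T @ R, M2.T @ m1)

# Final image (equation 15): hat{m} = m1 * hat{b} approximately undoes
# the Hessian blur applied by h.
m_hat = np.convolve(m1, b_hat, mode="same")

# Relative fitting residual of equation (12); small values mean the
# filter captures the (inverse) Hessian action well.
fit = np.linalg.norm(m1 - M2 @ b_hat) / np.linalg.norm(m1)
print(f"relative fitting residual: {fit:.3f}")
```

In the non-stationary case, the solve is done for many local filters at once, with ${\bf R}$ tying neighboring filters together exactly as the regularization term in equation (12) intends.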

Therefore, I propose first computing a migrated image ${\bf m_1}$, then computing a remigrated image ${\bf m_2}$ (equation (7)), and finally estimating a bank of non-stationary matching filters ${\bf b}$ (equation (12)). The final improved image is obtained by applying the matching filters to the first image ${\bf m_1}$ (equation (8)). In the next section, I illustrate this idea with the Marmousi dataset and show that an image similar to the least-squares migration image can be obtained effectively.

Stanford Exploration Project