
FINDING A CONVINCING DIRECTION

It remains to be shown that I can bootstrap the covariance matrix. I feel that this is possible because it is effectively what I did in PVI (Claerbout, 1992b), Figures 8.8 to 8.10.

Given some initial data $d_0$ with missing traces filled by gapfill(), I use lomoplan() to solve for the array of prediction-error filters $A$ in
\begin{eqnarray}
0 &\approx& A_m K d_0 \qquad (3) \\
0 &\approx& \lambda A_m \qquad (4)
\end{eqnarray}
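A minimal sketch may make the damped fitting of equations (3) and (4) concrete. It is a toy 1-D stand-in written in NumPy, not the actual lomoplan() code, which estimates an array of local two-dimensional filters; here a single prediction-error filter is fit to one trace of $K d_0$, with its leading coefficient pinned at 1 and the damping $\lambda$ applied to the free coefficients.
\begin{verbatim}
import numpy as np

def estimate_pef(x, nlag, lam):
    """Fit a 1-D prediction-error filter a = (1, a_1, ..., a_nlag) by
    least squares: minimize |a * x|^2 with damping lam on the free
    coefficients (a toy stand-in for equations (3)-(4))."""
    n = len(x)
    # Fitting equations (3):  x[t] + sum_k a_k x[t-k] ~ 0
    F = np.array([[x[t - k] for k in range(1, nlag + 1)]
                  for t in range(nlag, n)])
    rhs = -x[nlag:n]
    # Damping equations (4):  0 ~ lam * a_k
    F = np.vstack([F, lam * np.eye(nlag)])
    rhs = np.concatenate([rhs, np.zeros(nlag)])
    a_free, *_ = np.linalg.lstsq(F, rhs, rcond=None)
    return np.concatenate([[1.0], a_free])

# Example: a five-coefficient damped PEF on a random trace.
trace = np.random.default_rng(0).standard_normal(200)
a = estimate_pef(trace, nlag=5, lam=0.1)
\end{verbatim}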
I tried this with various values of $\lambda$. At first I tried $\lambda = 0$, but when I looked at the gradient $\Delta d = M K' A_m' A_m K d_0$ and compared it to the desired difference, I despaired of iterative migration ever reaching its goal. I understand the small-valued regions of $A$ as controlling the ultimate solution ($d$ can be large where $A$ is small). I fear the large-valued regions of $A$ (particularly if they are at strange frequencies, strange dips, or strange locations in data space) because they are probably impediments to rapid convergence of the iterative migration.
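One way the gradient might be evaluated is sketched below, with $K$ and $A_m$ treated as abstract linear operators supplied as functions along with their adjoints, and $M$ taken to be the diagonal selector of missing samples; these operator choices are assumptions standing in for the DEFINITIONS section, not the actual codes.
\begin{verbatim}
import numpy as np

def gradient_delta_d(d0, missing, K, K_adj, A, A_adj):
    """Delta d = M K' A' A K d0, with M the selector of missing
    samples, so the gradient lives only where data are missing."""
    r = A(K(d0))              # A K d0 : prediction-error residual
    b = K_adj(A_adj(r))       # K' A' A K d0 : back-projected residual
    return missing * b        # M K' A' A K d0

# Toy usage: K = identity, A = convolution with a short PEF.
rng = np.random.default_rng(0)
d0 = rng.standard_normal(200)
missing = np.zeros(200)
missing[80:120] = 1.0                               # pretend these samples are missing
a = np.array([1.0, -0.5, 0.1])
A     = lambda x: np.convolve(x, a)                 # full convolution
A_adj = lambda r: np.correlate(r, a, mode="valid")  # its exact adjoint
ident = lambda x: x
dd = gradient_delta_d(d0, missing, ident, ident, A, A_adj)
\end{verbatim}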

I will not have the courage to begin the iterative migrations until I am able to define everything so that the $\Delta d$-planes seem to have the general character of the removed data. A problem above is that $\lambda$ beats down all the coefficients in the model $A$, and I wish it were more selective. Below is the filter I began with.
\begin{displaymath}
\begin{array}{cc}
 a_{ 3, 0} & a_{ 3, 1} \\
 a_{ 2, 0} & a_{ 2, 1} \\
 a_{ 1, 0} & a_{ 1, 1} \\
 1         & a_{ 0, 1} \\
 \cdot     & a_{-1, 1} \\
 \cdot     & a_{-2, 1} \\
 \cdot     & a_{-3, 1}
\end{array}
\end{displaymath} (5)

There was much too much high frequency in $\Delta d$ because, as you see, the prediction-error filter enters $\Delta d = M K' A_m' A_m K d_0$ twice, through the inverse covariance $A_m' A_m$. In ordinary deconvolution, we may gap the filter to avoid an output dominated by near-Nyquist frequencies. Thus I experimented with gapping this filter, and I decided to work with
\begin{displaymath}
\begin{array}{cc}
 \cdot & a_{ 3, 1} \\
 \cdot & a_{ 2, 1} \\
 \cdot & a_{ 1, 1} \\
 1     & a_{ 0, 1} \\
 \cdot & a_{-1, 1} \\
 \cdot & a_{-2, 1} \\
 \cdot & a_{-3, 1}
\end{array}
\end{displaymath} (6)
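One concrete reading of the two templates above, interpreting the dots as coefficients frozen at zero and assuming the constrained 1 of the prediction-error filter sits at the zero lag of the left column, writes them as masks over the coefficient array:
\begin{verbatim}
import numpy as np

lags = np.arange(3, -4, -1)              # row lags 3, 2, ..., -3

# Equation (5): left column free at positive lags, right column all free.
full_mask = np.zeros((7, 2), dtype=int)  # 1 = estimated, 0 = frozen at zero
full_mask[lags > 0, 0] = 1               # a_{3,0}, a_{2,0}, a_{1,0}
full_mask[:, 1] = 1                      # a_{3,1} ... a_{-3,1}

# Equation (6): the gapped filter drops the left-column coefficients,
# leaving only the constrained 1 and the right column.
gapped_mask = full_mask.copy()
gapped_mask[:, 0] = 0
\end{verbatim}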

Inspecting the $\Delta d$-plane, I saw too much unrealistically steep dip. At first I thought maybe the semicircular smiles need it. Then I realized that our concern is defects in the data plane, not the model plane. So I decided to cut back on the number of filter coefficients to match that in Claerbout (1992c). Thus, finally, I am working with the filter
\begin{displaymath}
\begin{array}{cc}
 \cdot & a_{ 2, 1} \\
 \cdot & a_{ 1, 1} \\
 1     & a_{ 0, 1} \\
 \cdot & a_{-1, 1} \\
 \cdot & a_{-2, 1}
\end{array}
\end{displaymath} (7)
But then I saw little change, leading me to wonder about the interactions of everything I had done up till now. Inspecting some more results, I was disappointed that $\Delta d$ was so small near the ``bow tie'', so I turned off AGC and that seemed to help. I am still uncertain that was the right thing to do, and I may go back and turn it on again.
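For reference, a generic running-window AGC looks roughly like the sketch below; it is a common form of automatic gain control, not necessarily the one used for these figures. Turning it off means plotting raw amplitudes, so genuinely weak zones such as the one near the bow tie stay weak.
\begin{verbatim}
import numpy as np

def agc(trace, window=50, eps=1e-10):
    """Automatic gain control: divide each sample by the RMS
    amplitude in a running window of `window` samples."""
    power = np.convolve(trace**2, np.ones(window) / window, mode="same")
    return trace / (np.sqrt(power) + eps)
\end{verbatim}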

Figures 1 and 2 show the ingredients of $\Delta d$. Since figure captions do not seem tolerant of mathematics these days, I will explain here that the left column is without missing data. The six panels, in vertical order, are $Md$, $d+g$ (where $g$ is the unknown data estimated by the gapfill program), $K(d+g)$, $AK(d+g)$, $A'AK(d+g)$, and $\Delta d = K'A'AK(d+g)$.
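The chain of panels can be written out compactly; the sketch below reuses the hypothetical operator arguments of the gradient sketch above and simply evaluates each stage in turn.
\begin{verbatim}
def delta_d_panels(d, g, missing, K, K_adj, A, A_adj):
    """The six panels of Figures 1 and 2, in the vertical order listed
    in the text; operators are the same hypothetical ones used in the
    gradient sketch above."""
    dg = d + g
    return {
        "M d":        missing * d,
        "d + g":      dg,
        "K(d+g)":     K(dg),
        "A K(d+g)":   A(K(dg)),
        "A'A K(d+g)": A_adj(A(K(dg))),
        "Delta d":    K_adj(A_adj(A(K(dg)))),
    }
\end{verbatim}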

 
Figures 1 and 2 ("delta"): images not reproduced here.
Figure 2 caption: Top res. Middle dmod. Bottom ddat.

