In Chapter 3, I present a new method which attempts to take advantage of the fact that the dips of events in prestack seismic data are not distributed independently over regions of any particular size. Rather, events tend to be large, curving features, with dips that change gradually as you observe them at different offsets and times.
Instead of dividing the data into assumed-stationary patches, I assume the data have gradually varying slopes, and formulate the problem so that PEFs calculated from adjacent portions of the data look similar to each other. I estimate a PEF for every data sample, or on very small patches, so that an individual patch is too small to determine all the coefficients of a single PEF; the problem becomes underdetermined. To control the null space, I use a smoothing operator as a preconditioner to encourage a smoothly varying collection of PEFs. The smoothly varying PEFs suit the smoothly varying dip spectrum of the data.
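The null-space control described above can be illustrated in one dimension. The sketch below is my own toy example, not the thesis code: it estimates a one-coefficient prediction filter at every sample of a signal, an underdetermined system, and uses a smoothing operator as a preconditioner so that the minimum-norm solution fills the null space with smoothly varying filter coefficients. The signal, the filter size, and the particular roughening filter (1, -1.8, 0.81) are all assumptions for illustration.

```python
import numpy as np

# Toy 1-D sketch: estimate a one-coefficient prediction filter a[i] at
# EVERY sample of a signal d, so that d[i+1] ~= a[i] * d[i].  There are
# n unknowns but only n-1 equations: the problem is underdetermined.
# Preconditioning a = S m with a smoothing operator S and taking the
# minimum-norm solution in m picks a smoothly varying filter.

n = 60
t = np.linspace(0.0, 1.0, n)
true_a = 1.0 + 0.3 * np.sin(2.0 * np.pi * t)   # smoothly varying "dip"

d = np.empty(n)
d[0] = 1.0
for i in range(n - 1):                          # data consistent with true_a
    d[i + 1] = true_a[i] * d[i]

# Smoothing preconditioner S built as the inverse of a tiny three-point
# roughening filter r = (1 - 0.9 z)^2 = (1.0, -1.8, 0.81), mimicking the
# smoothing-by-deconvolution idea (here as a dense matrix for clarity).
R = np.eye(n)
for i in range(1, n):
    R[i, i - 1] = -1.8
for i in range(2, n):
    R[i, i - 2] = 0.81
S = np.linalg.inv(R)                            # long, smooth impulse response

# Forward operator in the preconditioned variable m:  d[1:] = diag(d[:-1]) S m
G = np.diag(d[:-1]) @ S[:-1, :]
m, *_ = np.linalg.lstsq(G, d[1:], rcond=None)   # minimum-norm solution in m
a = S @ m                                       # smoothly varying filters

# The per-sample filters fit the data exactly and recover the smooth dips.
max_err = np.max(np.abs(a[:-1] - true_a[:-1]))
```

In this toy problem the fit is exact because each sample carries its own coefficient; the smoothing preconditioner is what selects, from the infinitely many exact solutions, the one whose coefficients vary smoothly.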
I estimate a large number of PEFs, so the model space is enormous. Nonetheless, the filter estimation step converges quickly, in about ten iterations. In the tests I have run, it takes longer to estimate a smaller number of PEFs in independent patches. The cost of convolving the PEFs across the data does not depend on the number of filters, but only on their size, so the convolution operator is no more expensive for many filters than for a single filter. The preconditioning operator is a smoother, which does add some cost. However, the cost of the smoother is small, because it is implemented as division (deconvolution) by a tiny three-point roughening filter, rather than convolution with a larger smoothing filter. Claerbout's helical coordinate system (Claerbout, 1998) allows such a tiny operator to have a large, multidimensional impulse response. The large set of smoothly varying PEFs produces more accurate interpolation results than the other methods I tested, particularly in data with complicated moveouts.
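The economy of the divide-by-a-roughener smoother is easy to see in one dimension. The sketch below is my own illustration, with an assumed three-point roughening filter (1, -1.8, 0.81) = (1 - 0.9z)^2; the helix construction in the chapter extends this same recursion to multidimensional impulse responses.

```python
import numpy as np

# Deconvolution by a three-point roughening filter r = (1 - 0.9 z)^2,
# i.e. r = (1.0, -1.8, 0.81).  Polynomial division 1/r is a recursion
# costing only two multiply-adds per output sample:
#     h[k] = 1.8 * h[k-1] - 0.81 * h[k-2] + delta[k]
nh = 50
h = np.zeros(nh)
for k in range(nh):
    h[k] = (1.0 if k == 0 else 0.0) \
         + 1.8 * (h[k - 1] if k >= 1 else 0.0) \
         - 0.81 * (h[k - 2] if k >= 2 else 0.0)

# Analytically, 1/(1 - 0.9 z)^2 = sum_k (k + 1) * 0.9^k * z^k: a long,
# smooth, positive impulse response from only three stored coefficients,
# whereas convolution with an equally long smoothing filter would cost
# ~nh multiply-adds per sample.
k = np.arange(nh)
expected = (k + 1) * 0.9 ** k
```

The same trade-off drives the thesis implementation: the smoother's cost stays proportional to the size of the tiny roughening filter, not to the extent of its impulse response.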