
APPLICATION: AN ALGORITHM FOR INTERPOLATION

In seismology we often deal with instruments spaced more widely than they should be: more widely than is suitable for typical data processing such as the Fourier transform, and more widely than is suitable for data display. Fundamentally something is lost, but that does not detract from our goal of regularly spaced data on a dense enough mesh. Given regular data, finding a PEF is a linear problem. Given a PEF, interpolating data is a linear problem (demonstrated by many examples in Claerbout's free on-line book, ``Image Estimation by Example''). In both cases we are minimizing the energy of filtered data. A more general approach minimizes the energy of the filtered data when some of the unknowns are PEF coefficients while others are missing data values among the known ones. This minimization is nonlinear (because the PEF multiplies the missing data).

The main difficulty in using Ronen's pyramid in practice is bringing the $ x$-space to the $ u$-space. At first glance this seems to require interpolation. In practice, this interpolation turns out to need extreme care. An alternate approach, which we take here, is to sample the $ u$-space very densely. This, of course, introduces many locations not touched directly by the data. We have traded the interpolation problem for a missing-data problem, nonlinear because we must estimate this new missing data at the same time we estimate the PEF. We would like a reliable pyramid method of interpolating aliased data that is devoid of low-frequency information. An attractive feature is that the pyramid concept does not require the original data to be on a regular mesh in $ x$-space. Our method builds a dense regular mesh in the model $ u$-space.

The problem is nonlinear because of the product of unknowns, the PEF multiplying the missing data. Here we approach the problem by multistage linear least squares, which we iterate to solve the nonlinear problem. In any nonlinear problem the initial guess must be ``near enough.'' We hope the proposed method will not demand unaliased low-frequency information.

Imagine five unevenly spaced traces on the $ x$-axis. The data space is defined as $ d(\omega, x)$ at five known values of the coordinate $ x$. Define a model space $ m(\omega, u=\omega x)$ that is dense (many uniformly spaced points) on the $ u$-axis. We are interested in fitting

$\displaystyle \boldsymbol{0} \ \approx\ \boldsymbol{L} \boldsymbol{m} - \boldsymbol{d} $
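To make $ \boldsymbol{L}$ concrete, here is a minimal numpy sketch of one way to build it (the name build_interp, the grid size, and the trace positions are our illustrative choices, not the authors' code): each row carries the two linear-interpolation weights that pull the value at one trace location off the dense $ u$-grid.

```python
import numpy as np

def build_interp(u_grid, u_locs):
    """Linear interpolation matrix L: dense uniform u-grid -> sparse locations.

    Row i extracts the value at u_locs[i] from a model m(u) using
    two-point linear-interpolation weights on the neighboring grid nodes.
    """
    nu = len(u_grid)
    du = u_grid[1] - u_grid[0]              # uniform grid spacing
    L = np.zeros((len(u_locs), nu))
    for i, u in enumerate(u_locs):
        f = (u - u_grid[0]) / du            # fractional grid position
        j = int(np.clip(np.floor(f), 0, nu - 2))
        w = f - j                           # weight on the right neighbor
        L[i, j] = 1.0 - w
        L[i, j + 1] = w
    return L

# Five unevenly spaced traces; at a given frequency omega, trace
# position x lands on the u-axis at u = omega * x.
u_grid = np.linspace(0.0, 1.0, 101)
x_locs = np.array([0.03, 0.21, 0.40, 0.76, 0.97])
omega = 1.0
L = build_interp(u_grid, omega * x_locs)
```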

The operator $ \boldsymbol{L}$ linearly interpolates from the dense $ u$-axis to the sparse $ x$-space (which need not be regularly sampled in $ x$). In the limit of an extremely dense $ u$-space we might choose $ \boldsymbol{L}$ to be ``extraction,'' essentially nearest-neighbor inverse binning. A zeroth-order model space is $ \boldsymbol{m_0}= \boldsymbol{L}' \boldsymbol{d}$. This model simply drops the several data traces onto radial traces in pyramid space. Because $ u$-space has very many points, $ m_0(\omega, u)$ has many empty regions (triangularly shaped). Thus we use preconditioning. Let us precondition with the convolutional roughener $ \boldsymbol{A_0} = (1,-1)$ on the $ u$-axis. This particular $ \boldsymbol{A_0}$ leads to a solution $ m(u)$ that linearly interpolates the given data. Such preconditioning should be good at large $ \omega $, where the radial traces are far apart. Thus $ \boldsymbol{p} = \boldsymbol{A_0} \boldsymbol{m}$, or $ \boldsymbol{m}= \boldsymbol{A_0}^{-1}\boldsymbol{p}$. To find $ \boldsymbol{p}$ and $ \boldsymbol{m}$ (independently for each $ \omega $) we iterate on the regression
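Continuing the sketch above (the data values are stand-ins): the adjoint $ \boldsymbol{L}'$ drops the five data values onto the dense $ u$-axis, leaving most points empty, and the roughener is a first-difference matrix along $ u$ whose inverse is causal integration, a running sum.

```python
# L and u_grid as in the sketch above; d is one frequency slice d(omega, x).
d = np.array([1.0, 2.0, 3.0, 2.0, 1.0])    # stand-in values at the 5 traces
m0 = L.T @ d                                # zeroth-order model m0 = L' d
print(np.count_nonzero(m0), "of", m0.size, "u-points touched by data")

nu = m0.size
A0 = np.eye(nu) - np.eye(nu, k=-1)          # roughener (1, -1) along u
A0_inv = np.linalg.inv(A0)                  # inverse: causal running sum
```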

$\displaystyle \boldsymbol{0} \ \approx\ \boldsymbol{L} \boldsymbol{A_0}^{-1}\boldsymbol{p} - \boldsymbol{d} $
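In a dense-matrix sketch (ours; a production code would apply $ \boldsymbol{L} \boldsymbol{A_0}^{-1}$ as an operator inside an iterative solver such as conjugate gradients), numpy's least-squares routine returns the minimum-norm $ \boldsymbol{p}$ for this underdetermined system, which is precisely the fit of smallest roughness:

```python
# Solve 0 ~ L A0^{-1} p - d.  With 5 data values and 101 model points the
# system is underdetermined; lstsq returns the minimum-norm p, i.e. the
# model of smallest first-difference energy, which (apart from end
# effects) linearly interpolates between the known traces.
p, *_ = np.linalg.lstsq(L @ A0_inv, d, rcond=None)
m1 = A0_inv @ p                             # first model estimate m1
```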

The resulting $ \boldsymbol{m}$ we call $ \boldsymbol{m_1}$ (because we will eventually improve on it, obtaining an $ \boldsymbol{m_2}$).

Next let us upgrade $ \boldsymbol{A_0}$. At each $ \omega $, from the model space $ \boldsymbol{m_1}$, make an operator $ \boldsymbol{M_1}$ for convolution over the $ u$-axis. Simultaneously for all $ \omega $ we solve the regression for an upgraded PEF $ \boldsymbol{a}$ (which is constant over $ \omega $):

$\displaystyle \boldsymbol{0} \ \approx\ \boldsymbol{W} \boldsymbol{M_1} \boldsymbol{a} $

Here $ \boldsymbol{W}$ is a diagonal weighting matrix, defined as follows. As mentioned earlier, there are large empty spaces in the zeroth-order model space $ \boldsymbol{m_0}= \boldsymbol{L}' \boldsymbol{d}$. Although our improved model space $ m_1(\omega, u)$ has filled the holes with artificial data, we do not want to use regression equations except where the model space points directly to real data, namely where $ m_0(\omega, u)$ is nonzero. Thus we define $ \boldsymbol{W}$ to be a diagonal matrix of ones and zeros, with zeros where $ m_0(\omega, u)$ is zero.
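Here is a sketch of this masked PEF fit (our construction: the helper names and the filter length are illustrative, and we pin the leading PEF coefficient to one, the usual normalization that rules out the trivial zero solution, though the text does not spell this out):

```python
import numpy as np

def conv_matrix(m, na):
    """Convolution along u: column j of M holds m delayed by j samples,
    so (M a)[k] = sum_j a[j] * m[k - j]."""
    nu = len(m)
    M = np.zeros((nu, na))
    for j in range(na):
        M[j:, j] = m[:nu - j]
    return M

def estimate_pef(m1_panel, m0_panel, na=3):
    """Fit one PEF a (constant over omega, with a[0] = 1) to the panel
    m1_panel[omega, u], keeping only rows where m0_panel is nonzero (W)."""
    rows, rhs = [], []
    for w in range(m1_panel.shape[0]):      # all frequencies at once
        M = conv_matrix(m1_panel[w], na)
        keep = np.abs(m0_panel[w]) > 0      # diagonal of W: 1 at real data
        rows.append(M[keep, 1:])
        rhs.append(-M[keep, 0])             # a[0] = 1 column moves to RHS
    tail, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                               rcond=None)
    return np.concatenate(([1.0], tail))
```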

Although we plan to iterate, we never change $ \boldsymbol{W}$. From the solution of the regression above we have the vector $ \boldsymbol{a}$, which we use to make the filter operator $ \boldsymbol{A_1}$. Use it in place of $ \boldsymbol{A_0}$ in the regression for $ \boldsymbol{p}$ above. Iterate to get an $ \boldsymbol{m}$ that improves on $ \boldsymbol{m_1}$; call it $ \boldsymbol{m_2}$. Then iterate again.
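Putting the pieces together, the multistage scheme is a short alternation of the two linear solves. A skeleton under the same illustrative assumptions (our toy carries a single frequency slice, so we wrap it as a one-row panel; a real code pools every $ \omega $ in the PEF fit and solves for $ \boldsymbol{p}$ at each $ \omega $ separately):

```python
def filt_matrix(a, nu):
    """Dense convolution matrix for filter a along u: lower triangular
    with unit diagonal (since a[0] = 1), hence always invertible."""
    A = np.zeros((nu, nu))
    for j, aj in enumerate(a):
        A += aj * np.eye(nu, k=-j)
    return A

a = np.array([1.0, -1.0])                   # stage 0: the roughener A0
for stage in range(5):                      # a handful of outer stages
    A_inv = np.linalg.inv(filt_matrix(a, nu))
    p, *_ = np.linalg.lstsq(L @ A_inv, d, rcond=None)     # regression for p
    m = A_inv @ p                           # m1, m2, ...: improved models
    a = estimate_pef(m[np.newaxis, :], m0[np.newaxis, :]) # W never changes
```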

