
Discussion

The result of this work can be interpreted in a broader context of geophysical estimation (often called inversion). The basic formulation of a geophysical estimation problem consists of setting up two goals, one for data fitting, and the other for model smoothing. These two goals may be written as:
\begin{eqnarray}
\bold 0 &\approx& \bold L \bold m - \bold d \quad (7) \\
\bold 0 &\approx& \bold A \bold m \quad (8)
\end{eqnarray}
which defines two residuals, the so-called ``data residual'' and ``model residual,'' that are usually minimized by conjugate-gradient least-squares methods.
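For concreteness, here is a minimal numerical sketch (our own illustration, not part of the original formulation): the two goals stack into a single least-squares system, which a conjugate-gradient-style solver such as LSQR minimizes. The function name, the weight eps balancing the two goals, and the use of scipy are all assumptions of this sketch.

\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix, vstack
from scipy.sparse.linalg import lsqr

def solve_goals(L, A, d, eps=0.1, niter=100):
    # Stack 0 ~ L m - d on top of 0 ~ eps A m and minimize the
    # combined residual |L m - d|^2 + eps^2 |A m|^2 with LSQR.
    G = vstack([csr_matrix(L), eps * csr_matrix(A)])
    rhs = np.concatenate([d, np.zeros(A.shape[0])])
    return lsqr(G, rhs, iter_lim=niter)[0]
\end{verbatim}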

Perhaps the most straightforward application is geophysical mapping. Then $\bold d$ is data sprinkled randomly around in space, $\bold L$ is the linear interpolation operator, and $\bold m$ is the vector of unknown map values on a Cartesian mesh. Many map pixels have no data values; they are determined by the model-residual goal (damping), which is generally specified by a ``roughening operator'' $\bold A$. Our experience shows that binning is often a useful approximation to interpolation $\bold L$. With binning, our fitting goals look formally the same, but they are a little easier to understand:
\begin{eqnarray}
\bold 0 &\approx& \bold K \bold m - \bold b \quad (9) \\
\bold 0 &\approx& \bold A \bold m \quad = \quad \bold x \quad (10)
\end{eqnarray}
where $\bold b$ denotes binned data values and $\bold K$ is an identity matrix for nonempty bins and zero for empty ones. Also, we introduce the roughened model $\bold x = \bold A \bold m$. Claerbout (1992, 1997) shows how to estimate the prediction-error filter (PEF) $\bold A$ and shows that the roughened model $\bold x$ is indeed a ``spectrally whitened'' model. It is white in the multidimensional space of the model and white in the space of the unwound helix. In other words, the autocorrelation of $\bold x$ is an impulse function both in one-dimensional ``unwound'' space and in multidimensional physical space.
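As an illustration of the binning goal, here is a sketch under our own conventions; the nearest-bin rule, the averaging of multiple hits, and all names are assumptions of this sketch, not the paper's.

\begin{verbatim}
import numpy as np

def bin_data(xy, values, nx, ny, extent):
    # Drop scattered data into the nearest cell of an nx-by-ny map.
    # Returns binned values b and a 0/1 mask acting as K: one on
    # nonempty bins, zero on empty ones.
    x0, x1, y0, y1 = extent
    ix = np.clip(((xy[:, 0] - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((xy[:, 1] - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)
    b = np.zeros((nx, ny))
    hits = np.zeros((nx, ny))
    np.add.at(b, (ix, iy), values)
    np.add.at(hits, (ix, iy), 1.0)
    nonempty = hits > 0
    b[nonempty] /= hits[nonempty]   # average bins hit more than once
    return b, nonempty.astype(float)
\end{verbatim}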

A good preconditioner is one that allows iterative solvers to reach their solutions in fewer iterations. It is easy to guess a preconditioner and try it to see if it helps. Start from the fitting goals (9) and (10) for finding the model $\bold m$, and choose any transformation $\bold B$. Implicitly define a new variable $\bold y$ by $\bold m = \bold B \bold y$; insert it into the goals (9) and (10); iteratively solve for $\bold y$; and finally convert $\bold y$ back to $\bold m$ with $\bold m = \bold B \bold y$. You have found a good preconditioner if you have solved the problem in fewer iterations.
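In code, the recipe is only a change of variables. The following is a sketch: the operator composition via scipy is our choice, and the function names are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import aslinearoperator, lsqr

def precondition_and_solve(L, B, d, niter=20):
    # Solve 0 ~ (L B) y - d for the new variable y, then map the
    # answer back to model space with m = B y.
    Lop, Bop = aslinearoperator(L), aslinearoperator(B)
    y = lsqr(Lop @ Bop, d, iter_lim=niter)[0]
    return Bop @ y
\end{verbatim}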

The helix enters the picture because it offers us another guess for the operator $\bold B$. As we have shown in this paper, the guess $\bold B = \bold A^{-1}$ is an outstanding choice, speeding the solution of the first problem we tried by an order of magnitude (or more).

The spectacularly successful guess is this: instead of iteratively fitting the goals (9) and (10) for the model $\bold m$, we recast those goals for the whitened model $\bold x$. Substituting $\bold m = \bold A^{-1} \bold x$ into the fitting goals, we get
\begin{eqnarray}
\bold 0 &\approx& \bold K \bold A^{-1} \bold x - \bold b \quad (11) \\
\bold 0 &\approx& \bold x \quad (12)
\end{eqnarray}
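A sketch for a one-dimensional model follows. It is our construction, not the paper's code: it assumes a causal PEF with leading coefficient one, so that $\bold A^{-1}$ is recursive filtering (here via scipy's lfilter), takes $\bold K$ as a 0/1 bin mask, and lets lsqr's damp term supply goal (12).

\begin{verbatim}
import numpy as np
from scipy.signal import lfilter
from scipy.sparse.linalg import LinearOperator, lsqr

def solve_whitened(b, mask, pef, niter=20):
    # Fit 0 ~ K A^{-1} x - b with damping 0 ~ x, then m = A^{-1} x.
    n = b.size
    fwd = lambda x: mask * lfilter([1.0], pef, x)                # K A^{-1} x
    adj = lambda r: lfilter([1.0], pef, (mask * r)[::-1])[::-1]  # its adjoint
    op = LinearOperator((n, n), matvec=fwd, rmatvec=adj)
    x = lsqr(op, b, damp=1.0, iter_lim=niter)[0]
    return lfilter([1.0], pef, x)                                # m = A^{-1} x
\end{verbatim}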

When a fitting task is large and the iterations cannot go to completion, it is often suggested that we simply omit the damping (12) and regard the result of each iteration as the result of decreasing the amount of model damping. We find this idea to have merit when the model goal is cast as $\bold 0 \approx \bold x$, but not when the model smoothing goal is cast in the equivalent form $\bold 0 \approx \bold A \bold m$. Starting from $\bold x_0 = \bold 0$, as below, a truncated iteration leaves $\bold x$ small, which is just what the damping goal (12) asks for; a truncated iteration for $\bold m$ admits no such interpretation.

To move towards the fitting goal (11), we start at $\bold x = \bold x_0$, where often $\bold x_0 = \bold 0$. For each iteration, we apply polynomial division by the PEF on the helix, $\bold A^{-1}$. It is very quick. At the end, we can plot the deconvolved map $\bold x$, the map itself $\bold m = \bold A^{-1} \bold x$, or the known bin values with the empties replaced by their prediction, $\bold K \bold b + (\bold I - \bold K) \bold A^{-1} \bold x$. Our first results are exciting because they solve the problem so rapidly that we anticipate success with problems of industrial scale.
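The division itself is a short recursion. Here is a plain 1-D sketch of our own; on a helix, the multidimensional PEF is unwound into exactly such a 1-D filter, so the same loop applies.

\begin{verbatim}
import numpy as np

def polydiv(x, a):
    # Solve a * y = x for y by forward recursion, i.e. apply A^{-1}
    # for a causal filter a with a[0] = 1.  Cost: one multiply-add
    # per filter coefficient per output sample.
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k in range(1, min(len(a), n + 1)):
            acc -= a[k] * y[n - k]
        y[n] = acc / a[0]
    return y
\end{verbatim}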

This example suggests that the philosophy of image creation by optimization has a dual orthonormality: first, Gauss (and common sense) tells us that the data residuals should be roughly equal in size. Likewise, in Fourier space they should be roughly equal in size, which means they should be roughly white, i.e., orthonormal. (I use the word ``orthonormal'' because white means the autocorrelation is an impulse, which means the signal is statistically orthogonal to shifted versions of itself.) Second, to speed convergence of iterative methods, we need a whiteness, another orthonormality, in the solution. The map image, the physical function that we seek, might not itself be white, so we should solve first for another variable, the whitened map image, and as a final step, transform it to the ``natural colored'' map.

Often geophysicists create a preconditioning matrix $\bold B$ by inventing columns that ``look like'' the solutions that they seek. Then the space of $\bold x$ has many fewer components than the space of $\bold m$. This approach is touted as a way of introducing geological and geophysical prior information into the solution. Indeed, it strongly imposes the form of the solution. Perhaps this approach deserves the diminutive term ``curve fitting'' instead of the grandiloquent ``geophysical inverse theory.'' Our preferred approach is not to invent the columns of the preconditioning matrix, but to estimate the prediction-error filter of the model and use its inverse.

