The result of this work can be interpreted in a broader context of geophysical estimation (often called inversion). The basic formulation of a geophysical estimation problem consists of setting up two goals, one for data fitting, and the other for model smoothing. These two goals may be written as:
$$\mathbf{0} \approx \mathbf{L}\,\mathbf{m} - \mathbf{d} \tag{7}$$
$$\mathbf{0} \approx \epsilon\,\mathbf{A}\,\mathbf{m} \tag{8}$$
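As a concrete illustration of this two-goal formulation, the following sketch stacks the data-fitting rows on top of weighted damping rows and solves the combined least-squares system. The operators $\mathbf{L}$ and $\mathbf{A}$, the data $\mathbf{d}$, and the weight $\epsilon$ below are small made-up stand-ins, not the operators of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50                          # number of model points (map pixels)
k = 20                          # number of data points
L = rng.normal(size=(k, n))     # placeholder data-fitting operator (illustrative)
A = np.eye(n) - np.eye(n, k=1)  # simple first-difference roughener (stand-in)
d = rng.normal(size=k)          # synthetic data
eps = 0.1                       # damping weight

# Stacked least-squares system for the two goals:
#   [   L   ] m  ~  [ d ]
#   [ eps*A ]       [ 0 ]
G = np.vstack([L, eps * A])
rhs = np.concatenate([d, np.zeros(n)])
m, *_ = np.linalg.lstsq(G, rhs, rcond=None)

print("data residual :", np.linalg.norm(L @ m - d))
print("model residual:", np.linalg.norm(A @ m))
```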
Perhaps the most straightforward application is geophysical mapping. Then $\mathbf{d}$ is data sprinkled randomly around in space, $\mathbf{L}$ is the linear interpolation operator, and $\mathbf{m}$ is the vector of unknown map values on a Cartesian mesh. Many map pixels have no data values; those pixels are determined by the model-residual goal (damping), which is generally specified by a ``roughening operator'' $\mathbf{A}$. Our experience shows that binning is often a useful approximation to interpolation. With binning, the fitting goals look formally the same (with a binning operator $\mathbf{B}$ in place of $\mathbf{L}$), but they are a little easier to understand:
$$\mathbf{0} \approx \mathbf{B}\,\mathbf{m} - \mathbf{d} \tag{9}$$
$$\mathbf{0} \approx \epsilon\,\mathbf{A}\,\mathbf{m} \tag{10}$$
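The sketch below illustrates, on a made-up one-dimensional example, what a binning operator looks like: each row of $\mathbf{B}$ selects the single mesh cell nearest its datum, and columns with no data correspond to the empty bins that only the damping goal (10) can determine. The names and sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

nbins = 10                          # 1-D model mesh (illustrative size)
x = rng.uniform(0.0, 1.0, size=25)  # random data locations in [0, 1)
d = np.sin(2 * np.pi * x)           # data values at those locations

# Binning operator B: row i puts a single 1 in the bin containing datum i.
ibin = np.minimum((x * nbins).astype(int), nbins - 1)
B = np.zeros((x.size, nbins))
B[np.arange(x.size), ibin] = 1.0

# Bins with no data give all-zero columns of B; those map values are fixed
# only by the damping goal (10).
empty = np.flatnonzero(B.sum(axis=0) == 0)
print("empty bins:", empty)
```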
A good preconditioner is one that allows iterative solvers to reach their solutions in fewer iterations. It is easy to guess a preconditioner and try it to see whether it helps. Start from the fitting goals (9) and (10) for finding the model $\mathbf{m}$, and choose any transformation $\mathbf{S}$. Implicitly define a new variable $\mathbf{p}$ by $\mathbf{m} = \mathbf{S}\,\mathbf{p}$; insert it into the goals (9) and (10); iteratively solve for $\mathbf{p}$; and finally convert $\mathbf{p}$ back to $\mathbf{m}$ with $\mathbf{m} = \mathbf{S}\,\mathbf{p}$. You have found a good preconditioner if you have solved the problem in fewer iterations.
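A minimal sketch of that recipe, again with placeholder operators: the helper below forms the transformed goals, solves them for $\mathbf{p}$ with an iterative solver, and reports the iteration count, so two candidate transforms $\mathbf{S}$ can be compared directly.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def solve_for_p(B, A, S, d, eps):
    """Solve 0 ~ B S p - d and 0 ~ eps A S p; return m = S p and the iteration count."""
    G = np.vstack([B @ S, eps * (A @ S)])
    rhs = np.concatenate([d, np.zeros(A.shape[0])])
    result = lsqr(G, rhs, atol=1e-10, btol=1e-10)
    p, niter = result[0], result[2]
    return S @ p, niter
```

Calling this with $\mathbf{S} = \mathbf{I}$ reproduces the unpreconditioned problem, so comparing the two iteration counts is exactly the test described above.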
The helix enters the picture because it offers another guess for the operator $\mathbf{S}$. As we have shown in this paper, the guess $\mathbf{S} = \mathbf{A}^{-1}$, the inverse of the prediction-error filter (PEF) applied by polynomial division on the helix, is an outstanding choice, speeding the solution of the first problem we tried by an order of magnitude (or more). The spectacularly successful guess is this: instead of iteratively fitting the goals (9) and (10) for the model $\mathbf{m}$, we recast those goals for the whitened model $\mathbf{p} = \mathbf{A}\,\mathbf{m}$. Substituting $\mathbf{m} = \mathbf{A}^{-1}\mathbf{p}$ into the fitting goals, we get
$$\mathbf{0} \approx \mathbf{B}\,\mathbf{A}^{-1}\mathbf{p} - \mathbf{d} \tag{11}$$
$$\mathbf{0} \approx \epsilon\,\mathbf{p} \tag{12}$$
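On a helix, a multidimensional PEF acts as a one-dimensional causal filter, so applying $\mathbf{A}^{-1}$ amounts to polynomial division (recursive filtering). The sketch below illustrates that operation in one dimension with a made-up three-term PEF; it is not the paper's filter or code.

```python
import numpy as np
from scipy.signal import lfilter

a = np.array([1.0, -0.8, 0.3])   # hypothetical causal PEF (leading coefficient 1)

def pef_apply(m):
    """Whiten: p = A m, a causal convolution with the PEF."""
    return lfilter(a, [1.0], m)

def pef_divide(p):
    """Precondition: m = A^{-1} p, polynomial division (recursive filtering)."""
    return lfilter([1.0], a, p)

# Round trip: convolving after dividing recovers the input (up to roundoff),
# confirming that division really applies the inverse operator.
p = np.random.default_rng(2).normal(size=200)
m = pef_divide(p)
assert np.allclose(pef_apply(m), p)
```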
When a fitting task is large and the iterations cannot go to completion, it is often suggested that we simply omit the damping goal (12) and regard the result of each successive iteration as the result of a decreasing amount of model damping. We find this idea to have merit when the model goal is cast as $\mathbf{0} \approx \epsilon\,\mathbf{p}$, but not when the model smoothing goal is cast in the equivalent form $\mathbf{0} \approx \epsilon\,\mathbf{A}\,\mathbf{m}$.
To move towards the fitting goal (11), we start at $\mathbf{p} = \mathbf{0}$, which corresponds to $\mathbf{m} = \mathbf{0}$. At each iteration we apply polynomial division by the PEF on the helix, $\mathbf{m} = \mathbf{A}^{-1}\mathbf{p}$, which is very quick. At the end, we can plot the deconvolved map $\mathbf{p}$, the map itself $\mathbf{m} = \mathbf{A}^{-1}\mathbf{p}$, or the known bin values with the empties replaced by their predictions. Our first results are exciting because they solve the problem so rapidly that we anticipate success with problems of industrial scale.
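Putting the pieces together, here is a minimal end-to-end sketch of the procedure on the made-up one-dimensional example used above: solve goal (11) for the whitened map $\mathbf{p}$ (the solver starts from $\mathbf{p} = \mathbf{0}$), then form the map $\mathbf{m} = \mathbf{A}^{-1}\mathbf{p}$ and the known bin values with the empties replaced by their predictions. All operators and sizes are illustrative stand-ins.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)

# Made-up 1-D mapping problem: scattered data binned onto a 10-point mesh.
nbins = 10
x = rng.uniform(0.0, 1.0, size=25)
d = np.sin(2 * np.pi * x)
ibin = np.minimum((x * nbins).astype(int), nbins - 1)
B = np.zeros((x.size, nbins))
B[np.arange(x.size), ibin] = 1.0

# Hypothetical PEF; dense A^{-1} built column by column from polynomial division.
a = np.array([1.0, -0.8, 0.3])
Ainv = np.column_stack([lfilter([1.0], a, e) for e in np.eye(nbins)])

# Goal (11): 0 ~ B A^{-1} p - d, solved iteratively from the start p = 0.
p = lsqr(B @ Ainv, d)[0]       # the deconvolved (whitened) map
m = Ainv @ p                   # the map itself, m = A^{-1} p

# Known bin values (bin averages of the data) with empties replaced by prediction.
counts = B.sum(axis=0)
binned = np.divide(B.T @ d, counts, out=np.zeros(nbins), where=counts > 0)
merged = np.where(counts > 0, binned, m)
print(np.round(merged, 3))
```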
This example suggests that the philosophy of image creation by optimization has a dual orthonormality: First, Gauss (and common sense) tells us that the data residuals should be roughly equal in size. Likewise in Fourier space they should be roughly equal in size, which means they should be roughly white, i.e., orthonormal. (I use the word ``orthonormal'' because white means the autocorrelation is an impulse, which means the signal is statistically orthogonal to shifted versions of itself.) Second, to speed convergence of iterative methods, we need a whiteness, another orthonormality, in the solution. The map image, the physical function that we seek, might not be itself white, so we should solve first for another variable, the whitened map image, and as a final step, transform it to the ``natural colored'' map.
Often geophysicists create a preconditioning matrix by
inventing columns that ``look like'' the solutions that they seek.
Then the space of $\mathbf{p}$ has many fewer components than the space of $\mathbf{m}$. This approach is touted as a way of introducing geological
and geophysical prior information into the solution. Indeed, it
strongly imposes the form of the solution. Perhaps this approach
deserves the diminutive term ``curve fitting'' instead of the
grandiloquent ``geophysical inverse theory.'' Our preferred approach
is not to invent the columns of the preconditioning matrix, but to
estimate the prediction-error filter of the model and use its inverse.