Figure 5: The input data are irregularly sampled.
The first example is a simple synthetic test for 1-D inverse
interpolation. The input data were randomly subsampled (with
decreasing density) from a sinusoid (Figure 5). The
forward operator
in this case is linear interpolation. We seek
a regularly sampled model that could predict the data through
forward linear interpolation. The sparse, irregular distribution of the
input data makes regularization a necessity.
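To make the forward operator concrete, here is a minimal NumPy sketch of two-point linear interpolation from a regular grid to irregular coordinates, together with its adjoint (spraying data values back onto the grid). The function names and grid conventions are illustrative assumptions, not the operators used by the module discussed below.

```python
import numpy as np

def lin_interp_forward(model, x0, dx, coords):
    """Forward operator L: regular-grid model -> data at irregular coords,
    using two-point (linear) interpolation."""
    t = (coords - x0) / dx                   # fractional grid position
    j = np.clip(t.astype(int), 0, len(model) - 2)
    w = t - j                                # weight of the right neighbor
    return (1.0 - w) * model[j] + w * model[j + 1]

def lin_interp_adjoint(data, x0, dx, coords, nm):
    """Adjoint operator L': spray each data value onto its two grid neighbors."""
    t = (coords - x0) / dx
    j = np.clip(t.astype(int), 0, nm - 2)
    w = t - j
    model = np.zeros(nm)
    np.add.at(model, j, (1.0 - w) * data)
    np.add.at(model, j + 1, w * data)
    return model
```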
I applied convolution with the simple (1,-1)
difference filter as the operator
that forces model continuity
(the first-order spline).
An appropriate preconditioner
in this
case is recursive causal integration.
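To see why causal integration is the natural preconditioner for the (1,-1) difference filter, note that the running sum is exactly the inverse of the first difference. A short NumPy sketch (a toy illustration, not the helix-based helicon and polydiv operators invoked by the module discussed below):

```python
import numpy as np

def difference(m):
    """Regularization operator D: convolution with the (1, -1) filter,
    i.e., the first difference, which penalizes model roughness."""
    out = m.copy()
    out[1:] -= m[:-1]
    return out

def causal_integration(p):
    """Preconditioner P = inverse of D: recursive causal integration (running sum)."""
    return np.cumsum(p)

# Sanity check: applying D after P recovers the input exactly.
p = np.random.randn(10)
assert np.allclose(difference(causal_integration(p)), p)
```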
As expected,
preconditioning provides a much faster rate of convergence.
Since iteration to the exact solution
is never achieved in large-scale problems,
the results of iterative optimization may turn out quite differently.
Bill Harlan points out that the two goals
in (8) conflict with each other:
the first one enforces ``details'' in the model,
while the second one tries to smooth them out.
Typically, regularized optimization creates
a complicated model at early iterations.
At first, the data-fitting goal in (8) plays a more important role.
Later, the regularization goal in (8) comes into play
and simplifies (smooths) the model as much as needed.
Preconditioning acts differently.
The very first iterations create a simplified (smooth) model.
Later, the data-fitting goal adds more details to the model.
If we stop the iterative process early,
we end up with an insufficiently complex model,
not with an insufficiently simplified one.
Figure 6 provides a clear illustration of Harlan's observation.
Figure 7
measures the rate of convergence by the model residual,
which is the distance from the current model to the final solution.
It shows that preconditioning saves many iterations.
Since the cost of each iteration for each method is roughly equal,
the efficiency of preconditioning is evident.
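A self-contained sketch of such a comparison follows. It uses SciPy's LSQR in place of the solvers described in this paper, builds the operators as small dense matrices, and chooses the grid size, data size, and scaling parameter eps arbitrarily, so it mimics the experiment only in spirit:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

nm, nd, eps = 100, 30, 1.0                      # assumed sizes and scaling
x = np.sort(np.random.rand(nd)) * (nm - 1)      # irregular data coordinates
d = np.sin(2 * np.pi * x / nm)                  # "observed" data from a sinusoid

# Forward operator L as a dense matrix: two-point linear interpolation.
L = np.zeros((nd, nm))
j = np.clip(x.astype(int), 0, nm - 2)
w = x - j
L[np.arange(nd), j] = 1.0 - w
L[np.arange(nd), j + 1] = w

D = np.eye(nm) - np.eye(nm, k=-1)               # (1,-1) difference filter
P = np.linalg.inv(D)                            # causal integration, inverse of D

# Regularized system:    [L; eps*D] m ~ [d; 0]
# Preconditioned system: [L*P; eps*I] p ~ [d; 0], with m = P p
A_reg = np.vstack([L, eps * D])
A_pre = np.vstack([L @ P, eps * np.eye(nm)])
rhs = np.concatenate([d, np.zeros(nm)])

m_reg_final = lsqr(A_reg, rhs, iter_lim=1000)[0]
m_pre_final = P @ lsqr(A_pre, rhs, iter_lim=1000)[0]

for k in (1, 2, 5, 10, 20, 50):
    m_reg = lsqr(A_reg, rhs, iter_lim=k)[0]
    m_pre = P @ lsqr(A_pre, rhs, iter_lim=k)[0]
    print(k,
          np.linalg.norm(m_reg - m_reg_final),  # model residual, regularization
          np.linalg.norm(m_pre - m_pre_final))  # model residual, preconditioning
```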
Figure 7: Convergence of the iterative optimization, measured in terms of the model residual. The ``p'' points stand for preconditioning; the ``r'' points, for regularization.
The module invint2
invokes the solvers to make
Figures 6 and 7.
We use convolution with
helicon
for the regularization
and deconvolution with
polydiv
for the preconditioning.
The code looks fairly straightforward except for
the oxymoron
known=aa%mis.
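For intuition only, here is a toy 1-D analogue of that convolution/deconvolution pair. The real helicon and polydiv operate on multidimensional helix filters; the filter and helper names below are illustrative assumptions:

```python
import numpy as np

def convolve(filt, x):
    """Causal 1-D convolution with filt (filt[0] must be nonzero),
    a toy analogue of helicon."""
    y = np.zeros_like(x)
    for lag, coef in enumerate(filt):
        y[lag:] += coef * x[:len(x) - lag]
    return y

def poly_divide(filt, y):
    """Recursive inverse filtering (polynomial division),
    a toy analogue of polydiv: solves convolve(filt, x) = y for x."""
    x = np.zeros_like(y)
    for i in range(len(y)):
        acc = y[i]
        for lag in range(1, min(i, len(filt) - 1) + 1):
            acc -= filt[lag] * x[i - lag]
        x[i] = acc / filt[0]
    return x

filt = np.array([1.0, -0.5])   # an arbitrary minimum-phase filter
y = np.random.randn(20)
assert np.allclose(convolve(filt, poly_divide(filt, y)), y)
```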
invint2: Inverse linear interpolation