Interpolation seems to call for as spiky a model as possible.
One way to a spikier model is a change of variables in the optimization,
or preconditioning (Claerbout and Nichols, 1994).
Define a new model n, such that

  m = W n .    (2)
The earlier optimization can be rewritten as

  0 ≈ L W n - d ,    (3)

and after solving for n the original model m is regained by
applying equation 2.
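As a sketch of the mechanics, the change of variables can be tried on a toy least-squares problem. The operator L, the data d, and the particular diagonal W below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interpolation-style problem: fewer data than model points.
# L, d, and W are hypothetical, for illustration only.
L = rng.standard_normal((5, 8))         # forward operator (made up)
d = rng.standard_normal(5)              # observed data (made up)
W = np.diag(rng.uniform(0.5, 2.0, 8))   # some diagonal preconditioner

# Solve the preconditioned problem 0 ~ L W n - d for n ...
n = np.linalg.lstsq(L @ W, d, rcond=None)[0]

# ... then recover the original model by equation 2: m = W n.
m = W @ n

# The recovered model fits the data.
print(np.allclose(L @ m, d))
```

The change of variables leaves the data fit untouched; what changes is which model, among all that fit, the solver is drawn toward.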
The question is what operator W makes this added transformation
worthwhile.
Following Nichols (1994) and Ji (1994),
I choose
W to be a diagonal weighting matrix.
The desire to reduce smoothness
suggests that the weights be chosen to raise the model to some power.
In fact it turns out that choosing the power 0.5
corresponds to minimizing the L1 norm of the model;
more generally, one can choose an Lp norm by raising the model to
the (2-p)/2 power (Nichols, 1994).
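As a quick numerical check of the (2-p)/2 rule (the model values here are made up):

```python
import numpy as np

m = np.array([0.1, 1.0, 4.0])  # hypothetical model values

def lp_weights(m, p):
    # Diagonal weights |m|**((2 - p) / 2): p = 2 gives uniform weights
    # (ordinary L2), while p = 1 gives |m|**0.5, favoring a spikier model.
    return np.abs(m) ** ((2.0 - p) / 2.0)

print(lp_weights(m, 2))  # all ones: the unweighted L2 case
print(lp_weights(m, 1))  # sqrt of |m|: approximates an L1 penalty on m
```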
With W chosen, the gradient direction of the solution in n
is

  Δn = W' L' r ,    (4)

where r is the residual and
prime notation denotes an adjoint operator.
Of course, since W is a diagonal matrix, W = W'.
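This makes applying the adjoint weighting cheap: for a diagonal W, multiplying by W' is just an elementwise scaling. A small check, with operator, residual, and weights invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((5, 8))  # hypothetical forward operator
r = rng.standard_normal(5)       # hypothetical residual
w = rng.uniform(0.5, 2.0, 8)     # diagonal of W (hypothetical)

# Gradient of equation 4: dn = W' L' r.  Because W is real and diagonal,
# W' = W, so applying it reduces to an elementwise multiply by w.
grad_full = np.diag(w).T @ (L.T @ r)
grad_fast = w * (L.T @ r)
print(np.allclose(grad_full, grad_fast))
```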
The use of a power of the model as its own weighting function
introduces nonlinearity into the problem.
The problem can be solved in steps, using a given weighting function
for several iterations, or the weighting function can be updated
with every iteration.
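The solve-in-steps variant can be sketched as an iteratively reweighted loop: freeze the weights computed from the current model, solve the resulting linear problem, and repeat. The operator, data, starting model, and iteration count below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((5, 8))   # hypothetical forward operator
d = rng.standard_normal(5)        # hypothetical data
m = np.full(8, 0.1)               # starting model; nonzero so weights are nonzero
p = 1.0                           # target an (approximate) L1 norm on m

# Outer loop: fix W from the current model, solve the linear problem, update m.
for _ in range(10):
    w = np.abs(m) ** ((2.0 - p) / 2.0)            # weights from the current model
    n = np.linalg.lstsq(L * w, d, rcond=None)[0]  # solve 0 ~ (L W) n - d for n
    m = w * n                                      # equation 2: m = W n

# The reweighted model still fits the data.
print(np.linalg.norm(L @ m - d))
```

Note that `L * w` broadcasts the weights across the columns of L, which is the same as multiplying by the diagonal matrix W but avoids forming it explicitly.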
Stanford Exploration Project
11/12/1997