
## Coding nonlinear fitting problems

We can solve nonlinear least-squares problems in much the same way as we do iteratively reweighted ones: a simple adaptation of a linear method gives us a nonlinear solver if the residual is recomputed at each iteration. Omitting the weighting function for simplicity, the template is:

iterate {
    r ← f(m) − d
    Define F = ∂f/∂m.
    Δm ← F′ r
    Δr ← F Δm
    (m, r) ← step(m, r, Δm, Δr)
}
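A minimal sketch of this template in Python with NumPy, using a toy operator f(m) = m² and plain steepest-descent steps with an exact line search in the linearization (the names `f`, `jacobian`, and `solve_nonlinear` are illustrative choices, not from SEP code):

```python
import numpy as np

def f(m):
    # Toy nonlinear forward operator: f(m) = m^2 componentwise
    return m ** 2

def jacobian(m):
    # F = df/dm for the toy operator: a diagonal matrix with 2m
    return np.diag(2.0 * m)

def solve_nonlinear(m, d, niter=50):
    # Template: the residual is recomputed nonlinearly at each iteration
    for _ in range(niter):
        r = f(m) - d                   # nonlinear residual
        F = jacobian(m)                # linearize at the current model
        dm = F.T @ r                   # gradient: delta m = F' r
        dr = F @ dm                    # delta r = F delta m
        if dr @ dr == 0.0:             # converged exactly
            break
        alpha = (dr @ r) / (dr @ dr)   # exact line search in the linearization
        m = m - alpha * dm             # step(m, r, dm, dr)
    return m

d = np.array([4.0, 9.0])
m = solve_nonlinear(np.array([1.0, 1.0]), d)   # approaches [2, 3]
```

Each pass linearizes afresh, so the "linear" machinery is reused while the residual stays honest to the nonlinear operator.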


A formal theory for this optimization exists, but we are not using it here. We assume that the step size will be small, so that the familiar line-search and plane-search approximations should succeed in reducing the residual. Unfortunately, this assumption is not reliable. What we should do is test that the residual really does decrease, and if it does not, revert to steepest descent with a smaller step size. Perhaps we should test an incremental variation on the status quo: inside the solver, check whether the residual diminished in the previous step, and if it did not, restart the iteration, choosing the current step to be steepest descent instead of CD. I plan to work with some mathematicians to gain experience with other solvers.
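The safeguard can be sketched as follows, reusing the toy operator f(m) = m²: a trial step is accepted only if the residual really does decrease; otherwise it is rejected and the steepest-descent step size is halved (the function names and the step-adaptation constants here are illustrative choices, not from SEP code):

```python
import numpy as np

def f(m):
    return m ** 2                         # toy nonlinear operator

def grad(m, d):
    return 2.0 * m * (f(m) - d)           # gradient F' r of 0.5 ||f(m) - d||^2

def safeguarded_descent(m, d, step=0.1, niter=200):
    # Accept a step only if the residual really does decrease;
    # otherwise revert to a smaller steepest-descent step.
    prev = np.linalg.norm(f(m) - d)
    for _ in range(niter):
        trial = m - step * grad(m, d)
        norm = np.linalg.norm(f(trial) - d)
        if norm < prev:                   # residual decreased: accept
            m, prev = trial, norm
            step *= 1.2                   # cautiously enlarge the step
        else:                             # reject: retry with a smaller step
            step *= 0.5
    return m

d = np.array([4.0, 9.0])
m = safeguarded_descent(np.array([1.0, 1.0]), d)   # approaches [2, 3]
```

Because rejected trials leave the model untouched, the accepted residual norm is monotone nonincreasing, which is exactly the property the unguarded template fails to guarantee.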

Experience shows that nonlinear problems have many pitfalls. Start with a linear problem, add a minor physical improvement or non-Gaussian noise, and the problem becomes nonlinear, probably with other solutions far from anything reasonable. When solving such a nonlinear problem, we cannot arbitrarily begin from zero as we do with linear problems. We must choose a reasonable starting guess, and then move in a stable and controlled manner. A simple strategy is to begin with several steps of steepest descent and then switch over to some more steps of CD. Avoiding CD in the earliest iterations can avoid instability. The "strong linear regularization" discussed later can also reduce the effect of nonlinearity.
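A sketch of this strategy, again with the toy operator f(m) = m²: a few steepest-descent iterations first, then Fletcher-Reeves conjugate directions, with a backtracking line search and a restart to steepest descent whenever the conjugate direction fails to point downhill (all names and constants are illustrative choices, not SEP code):

```python
import numpy as np

def f(m):
    return m ** 2                                # toy nonlinear operator

def cost(m, d):
    return 0.5 * np.sum((f(m) - d) ** 2)

def grad(m, d):
    return 2.0 * m * (f(m) - d)                  # gradient F' r

def sd_then_cd(m, d, nsd=5, niter=60):
    # Several stabilizing steepest-descent steps, then switch to CD.
    g = grad(m, d)
    s = -g                                       # search direction
    for it in range(niter):
        step = 1.0                               # backtracking line search
        while cost(m + step * s, d) > cost(m, d) and step > 1e-12:
            step *= 0.5
        m = m + step * s
        g_new = grad(m, d)
        if g_new @ g_new < 1e-24:                # converged
            break
        if it < nsd:
            s = -g_new                           # steepest descent at first
        else:
            beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves CD update
            s = -g_new + beta * s
            if s @ g_new >= 0:                   # not downhill: restart with SD
                s = -g_new
        g = g_new
    return m

d = np.array([4.0, 9.0])
m = sd_then_cd(np.array([1.0, 1.0]), d)          # approaches [2, 3]
```

The early steepest-descent steps keep the iteration from trusting conjugacy while the linearization is still changing rapidly; once the model settles near a solution, the CD updates take over and converge much faster.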

Stanford Exploration Project
4/27/2004