Given the fitting goals (4), the direct solution for our model would be
\begin{equation}
\hat{\mathbf{p}} = \left( \mathbf{P}^T \mathbf{F}^T \mathbf{F} \mathbf{P} + \epsilon^2 \mathbf{I} \right)^{-1} \mathbf{P}^T \mathbf{F}^T \mathbf{d} .
\end{equation}
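For a toy problem small enough to form the normal equations explicitly, the direct solution can be computed in a few lines. Here $\mathbf{F}$, $\mathbf{P}$, and $\mathbf{d}$ are random placeholders standing in for the modeling operator, preconditioner, and data; they are illustrative only, not our seismic operators:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((20, 10))   # placeholder modeling operator
P = rng.standard_normal((10, 10))   # placeholder preconditioner
d = rng.standard_normal(20)         # placeholder data
eps = 0.1                           # regularization weight

FP = F @ P
# Direct (normal-equations) solution: p = (P'F'FP + eps^2 I)^{-1} P'F'd
p = np.linalg.solve(FP.T @ FP + eps**2 * np.eye(10), FP.T @ d)
m = P @ p                           # model recovered from preconditioned variable
```

For realistically sized seismic problems the operators are never formed as matrices, which is why the iterative schemes discussed below are needed.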
It is common to set $\epsilon = 0$ for preconditioned problems. The inversion will still be affected by the regularization operator because it is present, through the preconditioner, in the data fitting goal. However, it is known that after a sufficient number of iterations, a small $\epsilon$ will cause the model styling goal not to have an effect on the data fitting goal. Additionally, Crawley (2000) showed that a non-zero $\epsilon$ can improve the convergence rate of an iterative inversion problem. For these reasons, we would like to find an inexpensive way to choose a non-zero $\epsilon$.
Much work has been done on the idea of selecting $\epsilon$ based on L-curve analysis (Calvetti et al., 1999, 2000). Essentially, an L-curve is created by plotting the residual of the model styling goal against the residual of the data fitting goal for different values of $\epsilon$. This curve is shaped like an L. The vertex of the L is the point that comes closest to minimizing both residuals simultaneously; it tells us what the ideal $\epsilon$ is, assuming that we iterate the problem to convergence. This brings us to the question of the number of iterations (niter).
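The L-curve scan described above can be sketched for a small Tikhonov-style problem. The operator and data below are random placeholders; the point is only the shape of the trade-off between the two residuals as $\epsilon$ varies:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))      # placeholder for the combined operator FP
d = rng.standard_normal(30)            # placeholder data

data_res, model_res = [], []
for eps in np.logspace(-3, 1, 9):      # scan of regularization weights
    # Solve min ||A p - d||^2 + eps^2 ||p||^2 directly for each eps
    p = np.linalg.solve(A.T @ A + eps**2 * np.eye(15), A.T @ d)
    data_res.append(np.linalg.norm(A @ p - d))   # data fitting residual
    model_res.append(np.linalg.norm(p))          # model styling residual
# Plotting model_res against data_res (usually on log-log axes) traces out
# the L; its corner marks the eps that balances the two goals.
```

As $\epsilon$ grows, the data residual rises monotonically while the model norm falls, which is what produces the characteristic L shape.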
Ideally, an iterative least-squares inversion should be allowed to iterate to convergence, that is, until the error residual stops getting smaller. Paige and Saunders (1982) show that when we use a conjugate-gradient scheme to minimize our problem, it is guaranteed to converge in no more iterations than there are parameters being inverted for. Preconditioning the problem can reduce this number, but even then our seismic inversions are too large to allow them to iterate to convergence. Therefore, we have to decide how many iterations are feasible given our available computer resources; we are forced to decide when a model is ``good enough''. Naturally, since we are not iterating to convergence, our L-curve will be different, so selecting an $\epsilon$ is now dependent on the choice of niter. In the next section, I will demonstrate the effects of $\epsilon$ and niter for a synthetic imaging problem.