
Preconditioning

We can precondition equation (3) by making a simple change of variables:  
 \begin{displaymath}
\bf m = Sx.
 \end{displaymath} (5)
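To see where the preconditioned inversion comes from, assume equation (3) corresponds to the regularized objective $\min_{\bf m} \Vert {\bf B m - d} \Vert^2 + \epsilon^2 \Vert {\bf A m} \Vert^2$, with $\bf d$ the data (the exact form of equation (3) is not shown in this section, so this is an assumption). Substituting ${\bf m = Sx}$ turns it into an objective in $\bf x$:
 \begin{displaymath}
\min_{\bf x} \; \Vert {\bf B S x - d} \Vert^2
 + \epsilon^2 \Vert {\bf A S x} \Vert^2 ,
 \end{displaymath}
whose normal equations give the least squares inverse below.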
Analogous to equation (4), we can write the least squares inverse for the preconditioned model $\bf x$:  
 \begin{displaymath}
{\bf B}^{\dagger} = ({\bf S}^T {\bf B}^T {\bf BS} 
 + \epsilon^2 {\bf S}^T {\bf A}^T {\bf AS})^{-1} {\bf S}^T {\bf B}^T.
 \end{displaymath} (6)
If $\bf S$ is the left inverse of $\bf A$ (${\bf SA=I}$), then equation (6) reduces to the classic damped least squares problem (Menke, 1989):  
 \begin{displaymath}
{\bf B}^{\dagger} = ({\bf S}^T {\bf B}^T {\bf BS} 
 + \epsilon^2 {\bf I})^{-1} {\bf S}^T {\bf B}^T.
 \end{displaymath} (7)
If $\bf A$ is a differential operator, then $\bf S$ is a smoothing operator, and it follows that the smallest eigenvalues of ${\bf S}^T {\bf B}^T {\bf BS}$ correspond to the rough (high-frequency) model components. In contrast to equation (4), smooth, useful models appear in the early iterations of the preconditioned problem of equation (7), although the rate of convergence to the same final result should not change.
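As a sanity check on this equivalence, the following NumPy sketch (the forward operator $\bf B$ and the data are made up for illustration; they stand in for whatever equation (3) uses) builds a causal first-difference operator $\bf A$, its exact inverse $\bf S$ (causal integration, so ${\bf SA=I}$), and verifies that the damped least squares solve for $\bf x$ followed by ${\bf m = Sx}$ reproduces the directly regularized model:

```python
import numpy as np

n = 50
rng = np.random.default_rng(0)

# A: causal first-difference (roughening) operator.
A = np.eye(n) - np.eye(n, k=-1)
# S = A^{-1}: causal integration (smoothing), so S A = I exactly.
S = np.tril(np.ones((n, n)))
assert np.allclose(S @ A, np.eye(n))

# Hypothetical forward operator B and data d, standing in for equation (3).
B = rng.standard_normal((30, n))
d = rng.standard_normal(30)
eps = 0.1

# Regularized solve in model space (the analogue of equation (4)).
m = np.linalg.solve(B.T @ B + eps**2 * (A.T @ A), B.T @ d)

# Preconditioned solve: damped least squares for x, then m = S x.
BS = B @ S
x = np.linalg.solve(BS.T @ BS + eps**2 * np.eye(n), BS.T @ d)

# Both routes give the same model (exactly, since S is invertible).
assert np.allclose(S @ x, m)
```

With exact operators the two solutions agree to machine precision; the practical difference appears only in how fast an iterative solver reaches smooth models.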

Spectral factorization (Sava et al., 1998) and the helix transform (Claerbout, 1998) permit multidimensional, recursive, approximate inverse filtering, so it is indeed possible to compute ${\bf S \approx A}^{-1}$ for many choices of $\bf A$. One downside of recursive filter preconditioning is that the operator is difficult to parallelize: each output sample depends on previously computed samples. For large problems, where the cost of a single least squares iteration may be considerable, this limitation should be kept in mind.
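A minimal one-dimensional sketch illustrates recursive inverse filtering and why it resists parallelization (the filter coefficients here are hypothetical; the helix transform is what extends this recursion to multiple dimensions). Deconvolution by a causal filter is polynomial division, a loop in which each output sample requires the previous outputs:

```python
import numpy as np

# Hypothetical causal, minimum-phase filter A with leading coefficient 1.
a = np.array([1.0, -0.5])
rng = np.random.default_rng(1)
x = rng.standard_normal(100)

# Forward filtering: y = A x (causal convolution, truncated to len(x)).
y = np.convolve(x, a)[:len(x)]

# Recursive inverse filtering: solve A z = y by polynomial division.
# Each z[t] depends on earlier z values -- an inherently serial loop,
# which is why recursive filter preconditioning is hard to parallelize.
z = np.empty_like(y)
for t in range(len(y)):
    past = sum(a[k] * z[t - k] for k in range(1, len(a)) if t - k >= 0)
    z[t] = y[t] - past          # a[0] == 1, so no division is needed

# The recursion inverts the filter exactly.
assert np.allclose(z, x)
```

In multiple dimensions the helix transform maps an N-D filter onto a long 1-D filter so that the same scalar recursion applies, but the serial data dependence remains.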


Stanford Exploration Project
9/5/2000