
Regularization

One method of solving so-called ``mixed-determined'' problems is to force the problem to be purely overdetermined by applying regularization, in which case Equation (1) becomes
 \begin{displaymath}
 \left[ \begin{array}{c}
 \bf B \\ \epsilon \bf A
 \end{array} \right] \bf m =
 \left[ \begin{array}{c}
 \bf d \\ \bf 0
 \end{array} \right].
 \end{displaymath} (3)
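The equivalence between the stacked system of Equation (3) and the explicit least-squares inverse of Equation (4) can be checked numerically. The following is a minimal sketch; the small random $\bf B$, data $\bf d$, the first-difference (gradient) choice of $\bf A$, and the value of $\epsilon$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical small example: B is a 3x5 underdetermined operator.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))
d = rng.standard_normal(3)
eps = 0.1  # illustrative scaling factor

# A: first-difference (gradient) regularization operator, rows are [-1, 1] stencils.
A = np.diff(np.eye(5), axis=0)

# Augmented system of Equation (3): stack B over eps*A, and d over 0.
G = np.vstack([B, eps * A])
rhs = np.concatenate([d, np.zeros(A.shape[0])])
m, *_ = np.linalg.lstsq(G, rhs, rcond=None)

# Same model via the explicit least-squares inverse of Equation (4).
m_direct = np.linalg.solve(B.T @ B + eps**2 * (A.T @ A), B.T @ d)
```

The two solutions agree because the normal equations of the stacked system are exactly $(\bf B^T \bf B + \epsilon^2 \bf A^T \bf A)\, \bf m = \bf B^T \bf d$.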
$\bf A$ is the regularization operator, usually convolution with a compact differential filter, and $\epsilon$ is a scaling factor that weights the regularization against the data fit. The least-squares inverse is then
 \begin{displaymath}
\bf B^{\dagger} = (\bf B^T \bf B + \epsilon^2 \bf A^T \bf A)^{-1} \bf B^T.
 \end{displaymath} (4)
The regularization term, $\epsilon^2 \bf A^T \bf A$, adds positive eigenvalues that stabilize the singular directions of $\bf B^T \bf B$, but the combined operator remains poorly conditioned for many common choices of $\bf A$, e.g., a gradient or Laplacian filter. The smallest eigenvalues of $\epsilon^2 \bf A^T \bf A$ correspond to smooth (low-frequency) model components, so iterative methods of solving Equation (4), including the conjugate-direction method used in this paper, require many iterations to produce smooth model estimates (Shewchuk, 1994).
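The iterative solution described above can be sketched with a plain conjugate-gradient loop applied to the regularized normal equations. This is a generic CG implementation, not the paper's conjugate-direction code; the operators `B`, `A`, the data `d`, and `eps` are again illustrative assumptions.

```python
import numpy as np

def cg(apply_H, b, niter=200, tol=1e-10):
    """Conjugate gradients for H m = b, H symmetric positive definite."""
    m = np.zeros_like(b)
    r = b - apply_H(m)      # residual
    p = r.copy()            # search direction
    rr = r @ r
    for _ in range(niter):
        Hp = apply_H(p)
        alpha = rr / (p @ Hp)
        m += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return m

# Hypothetical operators, as above: B maps 5 model points to 3 data points,
# A is a first-difference (gradient) regularization.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 5))
d = rng.standard_normal(3)
A = np.diff(np.eye(5), axis=0)
eps = 0.1

# Regularized normal-equations operator (B^T B + eps^2 A^T A).
H = B.T @ B + eps**2 * (A.T @ A)
m = cg(lambda x: H @ x, B.T @ d)
```

In practice the operator `apply_H` would be implemented matrix-free as chained applications of $\bf B$, $\bf A$, and their adjoints; the slow modes of convergence are the smooth components penalized least by $\epsilon^2 \bf A^T \bf A$, which motivates the preconditioning discussed next.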
Stanford Exploration Project
9/5/2000