
Data-space regularization (model preconditioning)

The data-space regularization approach is closely related to the concept of model preconditioning (Nichols, 1994). Regarding the operator $\bold{P}$ from equation ([*]) as a preconditioning operator, we can introduce a new model $\bold{p}$ with the equality
\begin{displaymath}
\bold{m} = \bold{P p}\;.
\end{displaymath} (21)
The residual vector $\bold{r}$ for the data-fitting equation ([*]) can be defined by the relationship  
\begin{displaymath}
\epsilon \bold{r} = \bold{d} - \bold{L m} = \bold{d} - \bold{L P p}\;,
\end{displaymath} (22)
where $\epsilon$ is the scaling parameter from equation ([*]). Let us consider a compound model $\hat{\bold{p}}$, composed of the preconditioned model vector $\bold{p}$ and the residual $\bold{r}$. With respect to the compound model, we can rewrite equation ([*]) as  
\begin{displaymath}
\left[\begin{array}{cc} \bold{L P} & \epsilon \bold{I} \end{array}\right]
\left[\begin{array}{c} \bold{p} \\ \bold{r} \end{array}\right] =
\bold{G_d} \hat{\bold{p}} = \bold{d}\;,
\end{displaymath} (23)
where $\bold{G_d}$ is a row operator:  
\begin{displaymath}
\bold{G_d} = \left[\begin{array}{cc} \bold{L P} & \epsilon \bold{I} \end{array}\right]\;,
\end{displaymath} (24)
and $\bold{I}$ represents the data-space identity operator.
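The construction above can be checked numerically. The following sketch uses small random matrices as stand-ins for $\bold{L}$ and $\bold{P}$ (purely illustrative choices, not operators from the text): it builds the row operator $\bold{G_d}$ and the compound model $\hat{\bold{p}}$, and confirms that equation (23) reproduces the data exactly by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: L maps a 6-point model to 4 data points,
# P is a 6 x 6 preconditioner (both random here for illustration).
L = rng.standard_normal((4, 6))
P = rng.standard_normal((6, 6))
eps = 0.1

p = rng.standard_normal(6)            # preconditioned model
d = rng.standard_normal(4)            # observed data
r = (d - L @ P @ p) / eps             # residual, from equation (22)

# Row operator G_d = [L P, eps I] acting on the compound model [p; r]
G_d = np.hstack([L @ P, eps * np.eye(4)])
p_hat = np.concatenate([p, r])

# By construction, G_d p_hat = L P p + eps r = d, as in equation (23)
print(np.allclose(G_d @ p_hat, d))    # True
```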

System ([*]) is clearly underdetermined with respect to the compound model $\hat{\bold{p}}$. If, among all possible solutions of this system, we seek the one with the minimum power $\hat{\bold{p}}^T \hat{\bold{p}}$, the formal (ideal) result takes the well-known form
\begin{displaymath}
\hat{\bold{p}} = \left[\begin{array}{c} \langle\bold{p}\rangle \\ \langle\bold{r}\rangle \end{array}\right] =
\left[\begin{array}{c}
\bold{P}^T \bold{L}^T \left(\bold{L P P}^T \bold{L}^T + \epsilon^2 \bold{I}\right)^{-1} \bold{d} \\
\epsilon \left(\bold{L P P}^T \bold{L}^T + \epsilon^2 \bold{I}\right)^{-1} \bold{d}
\end{array}\right]\;.
\end{displaymath} (25)
Applying equation ([*]), we obtain the corresponding estimate $\langle\bold{m}\rangle = \bold{P} \langle\bold{p}\rangle$ for the initial model $\bold{m}$, which is precisely equivalent to equation ([*]). This proves the legitimacy of the alternative, data-space approach to data regularization: model estimation reduces to least-squares minimization of the power of the specially constructed compound model $\hat{\bold{p}}$ under the constraint ([*]).
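The claimed equivalence can be verified numerically. The sketch below (again with random toy operators, and assuming $\bold{P}$ is invertible so that the model-space regularization operator can be taken as $\bold{D} = \bold{P}^{-1}$) computes the minimum-power solution of the underdetermined system and compares the resulting model estimate with the standard model-space (Tikhonov-style) estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_d, n_m = 5, 8
L = rng.standard_normal((n_d, n_m))
P = rng.standard_normal((n_m, n_m))   # assumed invertible (true a.s. for random P)
eps = 0.5
d = rng.standard_normal(n_d)

# Minimum-power solution of the underdetermined system:
# p_hat = G_d^T (G_d G_d^T)^{-1} d, as in equation (25)
G_d = np.hstack([L @ P, eps * np.eye(n_d)])
p_hat = G_d.T @ np.linalg.solve(G_d @ G_d.T, d)
p_est = p_hat[:n_m]

# Map back to the original model with m = P p, equation (21)
m_data_space = P @ p_est

# Model-space estimate with regularization operator D = P^{-1}:
# m = (L^T L + eps^2 D^T D)^{-1} L^T d
D = np.linalg.inv(P)
m_model_space = np.linalg.solve(L.T @ L + eps**2 * D.T @ D, L.T @ d)

print(np.allclose(m_data_space, m_model_space))   # True
```

The agreement follows from the push-through identity $(\bold{L}^T \bold{L} + \epsilon^2 \bold{C}^{-1})^{-1} \bold{L}^T = \bold{C} \bold{L}^T (\bold{L C L}^T + \epsilon^2 \bold{I})^{-1}$ with $\bold{C} = \bold{P P}^T$.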

Although the two approaches lead to equivalent theoretical results, they behave quite differently in the process of iterative optimization. In Chapter [*], I illustrate this fact with many examples and show that, in the case of incomplete optimization, the second (preconditioning) approach is generally preferable.
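The difference under incomplete optimization can be sketched with a few conjugate-gradient iterations on each formulation. The toy example below (random $\bold{L}$, a causal-integration matrix as a hypothetical preconditioner $\bold{P}$, and a hand-rolled CGNR loop — none of which are taken from the text) runs a deliberately small number of iterations on the model-space and preconditioned systems; the two incomplete runs pass through different model estimates even though their exact solutions coincide.

```python
import numpy as np

def cg_least_squares(A, b, niter):
    """A few conjugate-gradient (CGNR) steps on A^T A x = A^T b from x = 0.
    A minimal sketch, not a production solver."""
    x = np.zeros(A.shape[1])
    s = A.T @ b                       # gradient of 0.5*||b - A x||^2 at x = 0
    p = s.copy()
    for _ in range(niter):
        Ap = A @ p
        alpha = (s @ s) / (Ap @ Ap)
        x = x + alpha * p
        s_new = s - alpha * (A.T @ Ap)
        p = s_new + ((s_new @ s_new) / (s @ s)) * p
        s = s_new
    return x

rng = np.random.default_rng(2)
n_d, n_m = 10, 20
L = rng.standard_normal((n_d, n_m))
P = np.tril(np.ones((n_m, n_m)))      # causal integration as a toy preconditioner
eps = 0.1
d = rng.standard_normal(n_d)
niter = 3                             # deliberately incomplete optimization
b = np.concatenate([d, np.zeros(n_m)])

# Model-space form: minimize ||d - L m||^2 + eps^2 ||D m||^2 with D = P^{-1}
D = np.linalg.inv(P)
m_model = cg_least_squares(np.vstack([L, eps * D]), b, niter)

# Data-space (preconditioned) form: iterate on p, then map back with m = P p
m_precond = P @ cg_least_squares(np.vstack([L @ P, eps * np.eye(n_m)]), b, niter)

# Both incomplete runs already fit the data better than the zero model,
# but they arrive at different intermediate model estimates.
misfit_model = np.linalg.norm(d - L @ m_model)
misfit_precond = np.linalg.norm(d - L @ m_precond)
print(misfit_model < np.linalg.norm(d), misfit_precond < np.linalg.norm(d))
```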

The next chapter addresses the choice of the forward interpolation operator $\bold{L}$, a necessary ingredient of the iterative data regularization algorithms.


Stanford Exploration Project
12/28/2000