Next: Data-space regularization Up: Fomel: Regularization Previous: Introduction

Model-space regularization

Let us denote the linear forward modeling operator by L. Then the basic matrix equation to be inverted is  
\begin{displaymath}
L m = d \;,
\end{displaymath} (1)
where m stands for the model vector, and d represents the data vector.

Quite often the size of the data space is smaller than the desired size of the model space. This is typical for some interpolation problems (Claerbout, 1992, 1994), but may also be the case in tomographic problems. Even if the data size is larger than the model size, certain components of the model m may not be fully constrained by equation (1). In interpolation applications, this situation corresponds to empty bins in the model space. In tomography applications, it corresponds to shadow zones in the model, which are not illuminated by the tomographic rays.
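A minimal numerical sketch of this situation, with an assumed toy selection (binning) operator, shows why the unregularized problem is ill-posed: the empty bins leave $L^T L$ singular.

```python
import numpy as np

# Hypothetical interpolation setup: 5 model bins, data sampled in only
# 3 of them (bins 0, 2, 3). Bins 1 and 4 are "empty bins": nothing in
# the data constrains them. Sizes and bin choices are illustrative.
L = np.zeros((3, 5))
L[0, 0] = L[1, 2] = L[2, 3] = 1.0   # selection (masking) operator

# L^T L is diagonal with zeros at the unconstrained components,
# so the unregularized normal equations are singular.
LtL = L.T @ L
print(np.diag(LtL))                 # zeros at positions 1 and 4
print(np.linalg.matrix_rank(LtL))   # 3 < 5: not invertible
```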

Model-space regularization suggests adding equations to system (1) to obtain a fully constrained (well-posed) inverse problem. These additional equations are based on prior assumptions about the model and typically take the form  
\begin{displaymath}
D m \approx 0 \;,
\end{displaymath} (2)
where D represents the imposed condition in the form of a linear operator. In many applications, D can be thought of as a filter, enhancing ``bad'' components in the model, or as a differential equation that we assume the model should satisfy.
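As a sketch of the filtering interpretation, assume D is a simple first-difference (roughening) operator on a small model; a smooth model passes through it quietly, while a rough model's "bad" components are enhanced. The operator choice and sizes here are illustrative, not from the original text.

```python
import numpy as np

# Roughening operator D: first-difference filter on a 5-point model.
# The condition D m ~ 0 expresses the prior that the model is smooth
# (its first derivative is small).
n = 5
D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference matrix

m_smooth = np.ones(n)                      # constant model: D m = 0
m_rough = np.array([0., 3., 0., 3., 0.])   # oscillatory model

print(np.linalg.norm(D @ m_smooth))  # 0.0: passes the filter quietly
print(np.linalg.norm(D @ m_rough))   # large: "bad" components enhanced
```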

The full system of equations (1)-(2) can be written in a short notation as  
\begin{displaymath}
G_m m = \left[\begin{array}{c} L \\ \lambda D \end{array}\right] m \approx
\left[\begin{array}{c} d \\ 0 \end{array}\right] = \hat{d}\;,
\end{displaymath} (3)
where $\hat{d}$ is the effective data vector:  
\begin{displaymath}
\hat{d} = \left[\begin{array}{c} d \\ 0 \end{array}\right]\;,
\end{displaymath} (4)
Gm is a column operator:  
\begin{displaymath}
G_m = \left[\begin{array}{c} L \\ \lambda D \end{array}\right]\;,
\end{displaymath} (5)
and $\lambda$ is a scaling parameter. The subscript m stands for model space, to help us distinguish $G_m$ from the analogous data-space operator, introduced in the next section.

Now that the inverse problem (3) is fully constrained, we can solve it by means of unconstrained least-squares optimization, minimizing the squared norm $\hat{r}^T \hat{r}$ of the compound residual vector
\begin{displaymath}
\hat{r} = \hat{d} - G_m m =
\left[\begin{array}{c} d - L m \\ - \lambda D m \end{array}\right]\;.
\end{displaymath} (6)
The formal solution of the regularized optimization problem has the known form  
\begin{displaymath}
\langle m \rangle = \left(G_m^T G_m\right)^{-1} G_m^T \hat{d} =
\left(L^T L + \lambda^2 D^T D\right)^{-1} L^T d\;.
\end{displaymath} (7)
To recall the derivation of formula (7), consider the objective function

\begin{displaymath}
\hat{r}^T \hat{r} = \left(\hat{d} - G_m m\right)^T \left(\hat{d} - G_m m\right)
\end{displaymath}

and take its partial derivative with respect to the model vector m. The derivative is $-2\,G_m^T \left(\hat{d} - G_m m\right)$; setting it equal to zero leads to the normal equations
\begin{displaymath}
G_m^T G_m m = G_m^T \hat{d}\;,
\end{displaymath} (8)
whose solution has the form of formula (7).
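The equivalence between the normal equations (8) and the least-squares solution of the stacked system (3) can be checked numerically. The operator sizes and the value of $\lambda$ below are arbitrary toy choices.

```python
import numpy as np

# Numerical check: solving the normal equations G_m^T G_m m = G_m^T d_hat
# reproduces the least-squares solution of the stacked system
# [L; lam*D] m ~ [d; 0].
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 5))        # underdetermined forward operator
D = np.diff(np.eye(5), axis=0)     # roughening operator
d = rng.normal(size=3)
lam = 0.5

Gm = np.vstack([L, lam * D])       # column operator of equation (5)
d_hat = np.concatenate([d, np.zeros(D.shape[0])])

m_normal = np.linalg.solve(Gm.T @ Gm, Gm.T @ d_hat)   # equation (8)
m_lstsq, *_ = np.linalg.lstsq(Gm, d_hat, rcond=None)  # stacked system (3)

print(np.allclose(m_normal, m_lstsq))  # True
```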

For the sake of simplicity, we will separately consider a ``trivial'' regularization, which seeks the smallest model among all the models satisfying equation (1). For this form of regularization, DT D is an identity operator. If we denote the model-space identity operator by Im, the least-squares estimate in this case takes the form
\begin{displaymath}
\langle m \rangle = \left(L^T L + \lambda^2 I_m\right)^{-1} L^T d\;.
\end{displaymath} (9)
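A sketch of this damped least-squares estimate, with assumed toy sizes: as $\lambda \rightarrow 0$, formula (9) tends to the minimum-norm solution among all models fitting the data, which is what the pseudoinverse computes.

```python
import numpy as np

# "Trivial" regularization of equation (9): damped least-squares
# <m> = (L^T L + lam^2 I)^{-1} L^T d. For small lam this approaches
# the minimum-norm solution of the underdetermined system L m = d.
rng = np.random.default_rng(1)
L = rng.normal(size=(3, 5))
d = rng.normal(size=3)

def damped(lam):
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam**2 * np.eye(n), L.T @ d)

m_min_norm = np.linalg.pinv(L) @ d   # minimum-norm solution
print(np.linalg.norm(damped(1e-4) - m_min_norm))  # small
```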

Stanford Exploration Project