
Inverse operator

A common practical task is to fit a vector of observed data $\bold d_{\rm obs}$ to some theoretical data $\bold d_{\rm theor}$ by the adjustment of components in a vector of model parameters $\bold m$:
\begin{displaymath}
\bold d_{\rm obs} \;\approx\; \bold d_{\rm theor} \;=\; \bold F \bold m
\end{displaymath} (31)
A huge volume of literature establishes theory for two estimates of the model, $\hat {\bold m}_1$ and $\hat {\bold m}_2$, where
\begin{eqnarray}
\hat {\bold m}_1 &=& (\bold F'\bold F)^{-1}\bold F'\bold d \qquad (32) \\
\hat {\bold m}_2 &=& \bold F'(\bold F\bold F')^{-1}\bold d \qquad (33)
\end{eqnarray}
Some reasons for the literature being huge are the many questions about the existence, quality, and cost of the inverse operators. Before summarizing that, let us quickly see why these two solutions are reasonable. Inserting equation (31) into equation (32), and inserting equation (33) into equation (31), we get the reasonable statements:
\begin{eqnarray}
\hat {\bold m}_1 &=& (\bold F'\bold F)^{-1}(\bold F'\bold F)\bold m \;=\; \bold m \qquad (34) \\
\hat {\bold d}_{\rm theor} &=& (\bold F\bold F')(\bold F\bold F')^{-1}\bold d \;=\; \bold d \qquad (35)
\end{eqnarray}
Equation (34) says that the estimate $\hat {\bold m}_1$ gives the correct model $\bold m$ if you start from the theoretical data. Equation (35) says that the model estimate $\hat {\bold m}_2$ gives the theoretical data if we derive $\hat {\bold m}_2$ from the theoretical data. Both of these statements are delightful. Now let us return to the problem of the inverse matrices.
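The statements in equations (34) and (35) can be checked numerically. Below is a minimal NumPy sketch; the matrix shapes and random test operators are illustrative choices of mine, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall F: more data values than model values, so F'F (small and square) is invertible.
F = rng.standard_normal((6, 3))
m = rng.standard_normal(3)
d = F @ m                                  # theoretical data, equation (31)

m1 = np.linalg.solve(F.T @ F, F.T @ d)     # m-hat-1 = (F'F)^{-1} F' d, equation (32)
assert np.allclose(m1, m)                  # equation (34): m-hat-1 recovers m

# Wide F: fewer data values than model values, so FF' (small and square) is invertible.
G = rng.standard_normal((3, 6))
dg = rng.standard_normal(3)

m2 = G.T @ np.linalg.solve(G @ G.T, dg)    # m-hat-2 = F'(FF')^{-1} d, equation (33)
assert np.allclose(G @ m2, dg)             # equation (35): m-hat-2 reproduces the data
```

Note that the code never forms an explicit inverse; `np.linalg.solve` applies $(\bold F'\bold F)^{-1}$ or $(\bold F\bold F')^{-1}$ to a single vector, which is cheaper and numerically safer.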

Strictly speaking, a rectangular matrix does not have an inverse. Surprising things do happen, but commonly, when $\bold F$ is a tall matrix (more data values than model values), the matrix $\bold F'\bold F$ needed for $\hat {\bold m}_1$ is invertible while $\bold F\bold F'$ is not; when the matrix is wide instead of tall (fewer data values than model values), it is the other way around. In many applications neither $\bold F'\bold F$ nor $\bold F\bold F'$ is invertible. This difficulty is overcome by ``damping,'' as we will see in later chapters. The point to notice in this chapter on adjoints is that in any application where $\bold F\bold F'$ or $\bold F'\bold F$ equals $\bold I$ (a unitary operator), the adjoint operator $\bold F'$ is the inverse $\bold F^{-1}$, by either equation (32) or (33).
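The special case $\bold F'\bold F = \bold I$ is easy to demonstrate. In the sketch below I manufacture a tall operator with orthonormal columns via a QR factorization (my own construction, chosen only for illustration), and verify that the adjoint alone undoes the operator, with no matrix inversion at all:

```python
import numpy as np

rng = np.random.default_rng(1)

# QR factorization of a random tall matrix yields Q with orthonormal columns,
# so F'F is exactly the identity.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
F = Q                                      # 5x3 operator with F.T @ F == I (3x3)
assert np.allclose(F.T @ F, np.eye(3))

# With F'F = I, equation (32) collapses to m-hat-1 = F'd:
# the adjoint F' acts as the inverse of F.
m = rng.standard_normal(3)
assert np.allclose(F.T @ (F @ m), m)
```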

Theoreticians like to study inverse problems where $\bold m$ is drawn from the field of continuous functions. This is like the vector $\bold m$ having infinitely many components. Such problems are hopelessly intractable unless we find, or assume, that the operator $\bold F'\bold F$ is an identity or diagonal matrix.
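To see why a diagonal $\bold F'\bold F$ makes even huge problems tractable: inverting a diagonal matrix is componentwise division, so $\hat {\bold m}_1$ costs no more than applying $\bold F'$. A tiny sketch, using a hand-picked $\bold F$ with orthogonal (but not unit-length) columns as an assumed example:

```python
import numpy as np

# Columns of F are orthogonal but not unit length, so F'F is diagonal.
F = np.array([[1., 0.],
              [1., 0.],
              [0., 2.]])
FtF = F.T @ F                        # diag(2, 4)
assert np.allclose(FtF, np.diag(np.diag(FtF)))

# Inverting a diagonal F'F is componentwise division -- tractable no matter
# how many model components there are.
d = np.array([3., 1., 4.])
m1 = (F.T @ d) / np.diag(FtF)        # m-hat-1 = (F'F)^{-1} F' d, no matrix solve
assert np.allclose(m1, np.linalg.lstsq(F, d, rcond=None)[0])
```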

In practice, theoretical considerations may have little bearing on how we proceed. Current computational power limits matrix inversion jobs to about $10^4$ variables. This book specializes in big problems, those with more than about $10^4$ variables, but the methods we learn are also excellent for smaller problems.

Stanford Exploration Project