Formal inversion

We have thought of equation (10) as a formula for finding $\bold y$ from $\bold x$. Now consider the opposite problem, finding $\bold x$ from $\bold y$. Begin by multiplying equation (11) by the transpose matrix to define a new quantity $\tilde\bold x$:
\begin{displaymath}
\left[
 \begin{array}{c}
 \tilde x_1 \\
 \tilde x_2 \\
 \tilde x_3 \\
 \tilde x_4
 \end{array} \right]
\eq
\left[
 \begin{array}{cccccc}
 1 & 0 & 0 & 0 & 0 & 0 \\
 0 & 1 & 1 & 0 & 0 & 0 \\
 0 & 0 & 0 & 1 & 0 & 0 \\
 0 & 0 & 0 & 0 & 1 & 1
 \end{array} \right]
\left[
 \begin{array}{c}
 y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6
 \end{array} \right] \end{displaymath} (12)
Obviously, $\tilde\bold x$ is not the same as $\bold x$, but at least these two vectors have the same dimensionality. This turns out to be the first step in the process of finding $\bold x$ from $\bold y$. Formally, the problem is
 \begin{displaymath}
\bold y \eq \bold B \, \bold x\end{displaymath} (13)
And the formal solution to the problem is  
 \begin{displaymath}
\bold x \eq ({\bf B'\, B})^{-1} \, {\bf B'} \, \bold y\end{displaymath} (14)
Formally, we verify this solution by substituting (13) into (14).
\begin{displaymath}
\bold x \eq ( {\bf B' \, B} )^{-1} \, ({\bf B'} \, \bold B) \,\bold x
 \eq \bold I \bold x \eq \bold x\end{displaymath} (15)
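As a quick numerical check of equations (13)-(15), the following sketch (a minimal illustration assuming NumPy; the particular matrix and vector are arbitrary, not taken from the text) builds a tall, full-column-rank $\bold B$, forms $\bold y = \bold B \, \bold x$, and recovers $\bold x$ through equation (14):

import numpy as np

# Any tall matrix with full column rank makes B'B invertible,
# so the formal solution of equation (14) applies.
rng = np.random.default_rng(seed=0)
B = rng.standard_normal((6, 4))    # six equations, four unknowns
x = rng.standard_normal(4)

y = B @ x                                        # equation (13)
x_recovered = np.linalg.solve(B.T @ B, B.T @ y)  # equation (14)

print(np.allclose(x_recovered, x))               # True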
In applications, the possible nonexistence of an inverse for the matrix $( {\bf B' \, B} )$ is always a topic for discussion. For now we simply examine this matrix for the interpolation problem. We see that it is diagonal:
\begin{displaymath}
\bold B' \, \bold B
\eq
\left[
 \begin{array}{cccc}
 1 & 0 & 0 & 0 \\
 0 & 2 & 0 & 0 \\
 0 & 0 & 1 & 0 \\
 0 & 0 & 0 & 2
 \end{array} \right] \;\end{displaymath} (16)
So $\tilde x_1 = x_1$, but $\tilde x_2 = 2 x_2$. To recover the original data, we need to divide each component of ${\bf \tilde x}$ by the corresponding diagonal element of $\bold B'\,\bold B$. Thus, matrix inversion is easy here.
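To make the diagonal concrete, here is a small sketch (again assuming NumPy; the 6-by-4 matrix below is the nearest-neighbor stretch whose transpose appears in equation (12), duplicating the second and fourth input samples):

import numpy as np

# Nearest-neighbor stretch: y = (x1, x2, x2, x3, x4, x4).
B = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1]], dtype=float)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = B @ x

x_tilde = B.T @ y        # equation (12): (x1, 2 x2, x3, 2 x4)
d = np.diag(B.T @ B)     # equation (16): [1, 2, 1, 2]
print(x_tilde / d)       # recovers x exactly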

Equation (14) has an illustrious reputation arising in the context of ``least squares.'' Least squares is a general method for solving sets of equations that have more equations than unknowns.
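In NumPy terms (an illustrative sketch; the random data are made up), a library least-squares solver reaches the same answer as the normal equations of equation (14), without forming $\bold B'\,\bold B$ explicitly:

import numpy as np

rng = np.random.default_rng(seed=1)
B = rng.standard_normal((6, 4))    # more equations than unknowns
y = rng.standard_normal(6)         # data that no x fits exactly

x_normal = np.linalg.solve(B.T @ B, B.T @ y)     # equation (14)
x_lstsq, *_ = np.linalg.lstsq(B, y, rcond=None)  # library solver

print(np.allclose(x_normal, x_lstsq))            # True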

Recovering $\bold x$ from $\bold y$ using equation (14) presumes the existence of the inverse of $\bold B'\,\bold B$. As you might expect, this matrix is nonsingular when ${\bf B}$ stretches the data, because then a few data values are distributed among a greater number of locations. Where the transformation squeezes the data, $\bold B'\,\bold B$ must become singular, since returning uniquely to the uncompressed condition is impossible.
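The singularity under squeezing is easy to exhibit. In the sketch below (NumPy again; the decimation matrix is a made-up example of a squeeze, not one from the text), three data points are drawn from six model points; model points that touch no data put zeros on the diagonal of $\bold B'\,\bold B$:

import numpy as np

# Squeeze: keep every other sample, y = (x1, x3, x5).
B = np.array([[1, 0, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 0]], dtype=float)

BtB = B.T @ B
print(np.diag(BtB))                # [1. 0. 1. 0. 1. 0.]
print(np.linalg.matrix_rank(BtB))  # 3 < 6: B'B is singular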

We can now understand why an adjoint operator is often an approximate inverse: the approximation is good to the extent that the matrix $\bold B'\,\bold B$ is near an identity matrix. The interpolation example we have just examined is one in which $\bold B'\,\bold B$ differs from an identity matrix merely by a scaling.

