
Matrix versus operator

Here is a short summary of where we have been and where we are going: start from the class of linear operators; add subscripts and you get matrices. Examples of operators without subscripts are routines that solve differential equations and routines that do fast Fourier transforms. What people call ``sparse matrices'' are often not really matrices but operators, because they are defined not by data structures but by routines that apply them to a vector. With such operators you can easily compute $\bold A(\bold B(\bold C\bold x))$ but not $(\bold A\bold B\bold C)\bold x$.
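As a concrete sketch (in Python, with three illustrative routines invented here, not taken from this book's library), an operator can be just a function that maps one vector to another, and a product of operators is just a chain of function calls:

\begin{verbatim}
import numpy as np

# Each operator is a routine, not a stored array of subscripts.
def C(x):  return np.cumsum(x)              # causal integration
def B(x):  return np.diff(x, prepend=0.0)   # first difference
def A(x):  return np.fft.fft(x)             # fast Fourier transform

x = np.random.default_rng(0).standard_normal(8)
y = A(B(C(x)))   # easy: apply the routines in sequence
# Forming the single matrix (A B C) would require materializing
# every column, which defeats the point of a matrix-free operator.
\end{verbatim}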

Although a linear operator does not have defined subscripts, you can determine what its value at any subscript would be: apply the operator to an impulse function, and you get a matrix column. The adjoint operator is one from which we can extract the transpose matrix. For large spaces this extraction is unwieldy, so to test the validity of adjoints we probe them with random vectors, say $\bold x$ and $\bold y$, to see whether $\bold y'(\bold A\bold x)=(\bold A'\bold y)'\bold x$. Mathematicians define adjoints by this test, except that instead of using random vectors they say ``for all functions,'' which includes the continuum.
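A minimal sketch of the dot-product test, using a causal-integration operator chosen only for illustration:

\begin{verbatim}
import numpy as np

def integrate(x):          # A :  y[i] = x[0] + ... + x[i]
    return np.cumsum(x)

def integrate_adjoint(y):  # A':  x[i] = y[i] + ... + y[n-1]
    return np.cumsum(y[::-1])[::-1]

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = rng.standard_normal(100)
lhs = y @ integrate(x)             # y'(A x)
rhs = integrate_adjoint(y) @ x     # (A'y)'x
assert np.isclose(lhs, rhs)        # the dot-product test
\end{verbatim}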

This defining test makes adjoints look mysterious. Careful inspection of operator adjoints, however, generally reveals that they are built up from simple matrices. Given adjoints $\bold A'$, $\bold B'$, and $\bold C'$, the adjoint of $\bold A\bold B\bold C$ is $\bold C'\bold B'\bold A'$. Fourier transforms and linear-differential-equation solvers are chains of matrices, so their adjoints can be assembled by applying the adjoint components in reverse order. The other way we often see complicated operators built from simple ones is when operators are placed into the components of a matrix, typically a $1\times 2$ or $2\times 1$ matrix containing two operators. An example of the adjoint of a two-component column operator is
\begin{displaymath}
\left[
\begin{array}{c}
\bold A \\
\bold B
\end{array}
\right]'
\quad = \quad
\left[
\begin{array}{cc}
\bold A' & \bold B'
\end{array}
\right]
\end{displaymath} (29)
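Equation (29) can be checked with the same random-vector probe; here is a sketch with two small random matrices standing in for $\bold A$ and $\bold B$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((5, 6))

def column_op(x):          # [A; B] x : stack the two outputs
    return np.concatenate([A @ x, B @ x])

def column_op_adjoint(y):  # [A' B'] y : split y, sum the adjoint pieces
    return A.T @ y[:4] + B.T @ y[4:]

x = rng.standard_normal(6)
y = rng.standard_normal(9)
assert np.isclose(y @ column_op(x), column_op_adjoint(y) @ x)
\end{verbatim}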

Although in practice an operator might be built from matrices, fundamentally a matrix is a data structure, whereas an operator is a procedure. A matrix acts as an operator when its subscripts are hidden and it is simply applied to a space, producing another space.

As matrices have inverses, so do linear operators. You don't need subscripts to find an inverse. The conjugate-gradient and conjugate-direction methods explained in the next chapter are attractive methods of finding them. They merely apply $\bold A$ and $\bold A'$ and use inner products to find the coefficients of a polynomial in $\bold A\bold A'$ that represents the inverse operator.
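A minimal sketch of that idea (a plain conjugate-gradient loop on the normal equations, not the code from the next chapter): each iteration applies only $\bold A$ and $\bold A'$ plus a few inner products.

\begin{verbatim}
import numpy as np

def cg_solve(forward, adjoint, d, niter):
    # Minimize |forward(m) - d|^2 using only operator applications.
    s = adjoint(d)             # gradient at m = 0
    m = np.zeros_like(s)
    p = s.copy()
    gamma = s @ s
    for _ in range(niter):
        q = forward(p)
        alpha = gamma / (q @ q)
        m += alpha * p
        s -= alpha * adjoint(q)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

A = np.random.default_rng(2).standard_normal((20, 10))
d = A @ np.ones(10)
m = cg_solve(lambda v: A @ v, lambda v: A.T @ v, d, niter=10)
# m is close to the true model np.ones(10) after 10 steps.
\end{verbatim}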

Whenever we encounter a positive-definite matrix, we should recognize its likely origin as a nonsymmetric matrix $\bold F$ times its adjoint. Workers in the natural sciences often labor over solving simultaneous equations without realizing that they should return to the origin of those equations, which is often a fitting goal: applying an operator to a model should yield data, i.e., $\bold d \approx \bold d_0 + \bold F(\bold m-\bold m_0)$, where the operator $\bold F$ is a matrix of partial derivatives (and there may be underlying nonlinearities). This begins another story with new ingredients: weighting functions and statistics.
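As a sketch of such a fitting goal (with a hypothetical two-parameter nonlinear model invented purely for illustration), one linearized step solves $\bold d - \bold d_0 \approx \bold F(\bold m-\bold m_0)$ by least squares:

\begin{verbatim}
import numpy as np

def g(m):                       # hypothetical nonlinear forward model
    return np.array([m[0]**2 + m[1], m[0]*m[1], m[1]**2])

def F_at(m):                    # its matrix of partial derivatives
    return np.array([[2*m[0], 1.0 ],
                     [m[1],   m[0]],
                     [0.0,  2*m[1]]])

d  = g(np.array([2.0, 3.0]))    # observed data
m0 = np.array([1.8, 3.2])       # starting model
d0 = g(m0)
dm, *_ = np.linalg.lstsq(F_at(m0), d - d0, rcond=None)
m1 = m0 + dm                    # one linearized update toward the data
\end{verbatim}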

