** Next:** About this document ...
** Up:** Preconditioning
** Previous:** SCALING THE ADJOINT

In mathematics, adjoints are defined a little differently than
we have defined them here
(as matrix transposes).
The mathematician begins by telling us that we cannot simply
form any dot product we want.
We are not allowed to take the dot product of any two vectors
in model space $({\bf m}_1 \cdot {\bf m}_2)$
or data space $({\bf d}_1 \cdot {\bf d}_2)$.
Instead, we must first transform them to a preferred coordinate system.
Say $\tilde{\bf m}_1 = {\bf M} {\bf m}_1$
and $\tilde{\bf d}_1 = {\bf D} {\bf d}_1$,
etc. for other vectors.
We complain that we do not know ${\bf M}$ and ${\bf D}$.
They reply that we do not really need to know them,
but we do need to have the inverses (aack!) of
${\bf M}^T {\bf M}$ and ${\bf D}^T {\bf D}$.
A pre-existing common notation is
$\sigma_{\bf m}^{-1} = {\bf M}^T {\bf M}$
and $\sigma_{\bf d}^{-1} = {\bf D}^T {\bf D}$.
Now the mathematician buries the mysterious new positive-definite
matrix inverses in the definition of dot product,
$({\bf m}_1 \cdot {\bf m}_2) = {\bf m}_1^T \sigma_{\bf m}^{-1} {\bf m}_2$,
and likewise with
$({\bf d}_1 \cdot {\bf d}_2) = {\bf d}_1^T \sigma_{\bf d}^{-1} {\bf d}_2$.
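In code, the weighted dot product is just the plain dot product of the
transformed vectors. A minimal NumPy sketch (the matrix values and
variable names here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
m1 = rng.normal(size=n)
m2 = rng.normal(size=n)

# An invertible "preferred coordinates" transform M (hypothetical values);
# adding n*I keeps it well-conditioned for the example.
M = rng.normal(size=(n, n)) + n * np.eye(n)
sigma_m_inv = M.T @ M          # positive-definite weighting sigma_m^{-1}

# The weighted dot product m1^T sigma_m^{-1} m2 equals the plain dot
# product of the transformed vectors (M m1) . (M m2).
weighted = m1 @ sigma_m_inv @ m2
transformed = (M @ m1) @ (M @ m2)
assert np.isclose(weighted, transformed)
```

The assertion is just the algebra
$({\bf M}{\bf m}_1)^T ({\bf M}{\bf m}_2) = {\bf m}_1^T {\bf M}^T {\bf M} \, {\bf m}_2$
checked numerically.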
This suggests a total reorganization of our programs.
Instead of computing ${\bf m}_1^T {\bf m}_2$
we could compute ${\bf m}_1^T \sigma_{\bf m}^{-1} {\bf m}_2$.
Indeed, this is the ``conventional'' approach.
This definition of dot product would be buried in the solver code.
The other thing that would change would be the search direction
$\Delta{\bf m}$.
Instead of being the gradient as we have defined it,
$\Delta{\bf m} = {\bf F}^T {\bf r}$,
it would be
$\Delta{\bf m} = \sigma_{\bf m} {\bf F}^T \sigma_{\bf d}^{-1} {\bf r}$.
A mathematician would
*define* the adjoint of ${\bf F}$ to be
$\sigma_{\bf m} {\bf F}^T \sigma_{\bf d}^{-1}$.
(Here $^T$ remains matrix transpose.)
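Under the weighted dot products this definition really does behave as an
adjoint: the data-space product of ${\bf d}$ with ${\bf F}{\bf m}$ matches
the model-space product of $\sigma_{\bf m} {\bf F}^T \sigma_{\bf d}^{-1} {\bf d}$
with ${\bf m}$. A NumPy sketch with made-up positive-definite weights:

```python
import numpy as np

rng = np.random.default_rng(1)
nm, nd = 4, 6
F = rng.normal(size=(nd, nm))   # some linear operator (hypothetical)
m = rng.normal(size=nm)
d = rng.normal(size=nd)

# Symmetric positive-definite weights sigma_m, sigma_d (example values).
A = rng.normal(size=(nm, nm)); sigma_m = A @ A.T + nm * np.eye(nm)
B = rng.normal(size=(nd, nd)); sigma_d = B @ B.T + nd * np.eye(nd)
sigma_m_inv = np.linalg.inv(sigma_m)
sigma_d_inv = np.linalg.inv(sigma_d)

# The mathematician's adjoint F* = sigma_m F^T sigma_d^{-1} satisfies
# <d, F m>_data = <F* d, m>_model under the weighted dot products.
F_star = sigma_m @ F.T @ sigma_d_inv
lhs = d @ sigma_d_inv @ (F @ m)         # <d, F m> in data space
rhs = (F_star @ d) @ sigma_m_inv @ m    # <F* d, m> in model space
assert np.isclose(lhs, rhs)
```

The $\sigma_{\bf m}$ in the adjoint cancels against the $\sigma_{\bf m}^{-1}$
in the model-space dot product, which is why the two sides agree.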
You might notice that this approach nicely incorporates
both residual weighting and preconditioning
while evading the issue of where we get the matrices
$\sigma_{\bf m}$ and $\sigma_{\bf d}$ or how we invert them.
Fortunately, an upcoming chapter suggests how,
in image estimation problems,
to obtain sensible estimates of these elusive operators.
Parenthetically, modeling calculations in physics and engineering
often use similar mathematics
in which the role of $\sigma^{-1}$ is not so mysterious.
Kinetic energy is one-half mass times velocity squared.
Mass can play the role of $\sigma^{-1}$.
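As a toy illustration of that analogy (all numbers hypothetical), the
kinetic energy of a set of particles is a mass-weighted dot product of the
velocity vector with itself:

```python
import numpy as np

# Kinetic energy 0.5 * sum(m_i * v_i**2) is the weighted dot product
# 0.5 * v^T M v, so the diagonal mass matrix M plays the role of the
# positive-definite weighting sigma^{-1}.
mass = np.array([2.0, 1.0, 3.0])    # particle masses m_i (example values)
vel = np.array([0.5, -1.0, 0.25])   # particle velocities v_i
M = np.diag(mass)                   # the weighting matrix

ke = 0.5 * vel @ M @ vel
assert np.isclose(ke, 0.5 * np.sum(mass * vel**2))
```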
So, should we continue to use the simple adjoint ${\bf F}^T$
or should we take the conventional route and go with
$\sigma_{\bf m} {\bf F}^T \sigma_{\bf d}^{-1}$?
One day while benchmarking a wide variety of computers I was shocked
to see some widely differing numerical results. Now I know why.
Consider adding $10^7$ identical positive floating-point numbers, say 1.0's,
in an arithmetic with precision of $10^{-6}$.
After you have added in the first $10^6$ numbers,
the rest will all truncate in the roundoff
and your sum will be wrong by a factor of ten.
If the numbers were added in pairs,
and then the pairs added, etc., there would be no difficulty.
Precision is scary stuff!
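The anecdote is easy to reproduce in 32-bit arithmetic; here is a small
NumPy sketch (my example, not the original benchmark):

```python
import numpy as np

# Float32 carries a 24-bit mantissa, so a running sum that has reached
# 2**24 = 16777216 can no longer absorb an increment of 1.0: each
# addition below rounds back to the same value.
acc = np.float32(2**24)
for _ in range(1000):
    acc = acc + np.float32(1.0)   # all of these additions are lost
assert acc == np.float32(2**24)

# Pairwise summation keeps the partial sums comparable in size, so the
# same kind of count comes out exactly (np.sum uses pairwise summation).
ones = np.ones(2**25, dtype=np.float32)   # 33,554,432 ones
total = np.sum(ones)
assert total == 2**25
```

A sequential float32 loop over those $2^{25}$ ones would stall at
$2^{24}$, wrong by a factor of two; the pairwise sum is exact.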

It is my understanding and belief that there is nothing wrong
with the approach of this book; in fact,
it seems to have some definite advantages.
While the conventional approach requires one
to compute the adjoint correctly, ours does not.
The method of this book
(which I believe is properly called conjugate directions)
has a robustness that, I'm told,
has been superior in some important geophysical applications.
The conventional approach seems to get in trouble when
transpose operators are computed with insufficient precision.

Stanford Exploration Project

4/27/2004