Although practitioners prefer the summation operators,
*H* and *C*' are only approximately adjoint to each other
(as are *C* and *H*').
This conflicts with the requirements of sparse solvers,
which work with a true operator pair, either (*H*,*H*') or (*C*,*C*').
Paige and Saunders (1982)
say that their well-known LSQR program
is expected to fail if the given adjoint is not truly the adjoint,
such as the ``smooth'' pair (*H*',*C*').
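Whether a claimed adjoint truly is the adjoint can be checked numerically with the standard dot-product test: for random vectors *m* and *d*, the inner product of *Am* with *d* must equal that of *m* with *A*'*d* to machine precision. A minimal sketch in Python (the matrix pair here is illustrative, not the *H* and *C* operators of the text):

```python
import numpy as np

def dot_product_test(fwd, adj, nm, nd, tol=1e-10, seed=0):
    """Check that <fwd(m), d> == <m, adj(d)> for random vectors,
    i.e. that adj truly is the adjoint of fwd."""
    rng = np.random.default_rng(seed)
    m = rng.standard_normal(nm)   # random model vector
    d = rng.standard_normal(nd)   # random data vector
    lhs = np.dot(fwd(m), d)
    rhs = np.dot(m, adj(d))
    return bool(abs(lhs - rhs) <= tol * (abs(lhs) + abs(rhs) + 1.0))

A = np.arange(12.0).reshape(3, 4)
print(dot_product_test(lambda m: A @ m, lambda d: A.T @ d, 4, 3))  # exact transpose passes
B = A.T + 0.01                    # a rough approximation of A'
print(dot_product_test(lambda m: A @ m, lambda d: B @ d, 4, 3))    # not truly adjoint: fails
```

A pair that fails this test is exactly the situation in which LSQR is expected to fail.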

For data processing we prefer the pull operator into model space rather than a push from data space. For inversion, however, we are matching observed data to theoretical data. This matching requires accurate (not rough) theoretical data, which implies we need the pull operator into data space, and the solver needs its adjoint, the push operator. The operator we do not seem to require is the one that sprays a model point into a possibly subsampled hyperbola. In summary:

[Equations (5)-(8) lost in conversion.]
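The push/pull distinction can be made concrete. A push operator loops over its *input*: each model point sprays a hyperbola into data space, and rounded output indices can collide. A pull operator loops over its *output*: each data point reaches back along the hyperbola and reads the model, so every output sample is assigned and the result is never rough. The sketch below uses a nearest-neighbor hyperbola *t* = sqrt(*t0*² + (*x*/*v*)²); the grid, velocity, and function names are illustrative, not the text's *H* and *C*:

```python
import numpy as np

NT, NX = 64, 16                  # illustrative grid: time samples, offsets
DT, DX, V = 0.01, 25.0, 2000.0   # illustrative sampling and velocity

def push(model):
    """Push: loop over INPUT (model) points; each t0 sprays its
    hyperbola t = sqrt(t0^2 + (x/v)^2) into data space.  Where the
    rounded output indices collide, contributions pile up."""
    data = np.zeros((NT, NX))
    for it0 in range(NT):
        t0 = it0 * DT
        for ix in range(NX):
            t = np.sqrt(t0**2 + (ix * DX / V) ** 2)
            it = int(round(t / DT))
            if it < NT:
                data[it, ix] += model[it0]
    return data

def pull(model):
    """Pull: loop over OUTPUT (data) points; each (t, x) reaches back
    to t0 = sqrt(t^2 - (x/v)^2) and reads the model there.  Every
    output sample is assigned, so the result is smooth, not rough."""
    data = np.zeros((NT, NX))
    for it in range(NT):
        t = it * DT
        for ix in range(NX):
            t0sq = t**2 - (ix * DX / V) ** 2
            if t0sq >= 0.0:
                it0 = int(round(np.sqrt(t0sq) / DT))
                if it0 < NT:
                    data[it, ix] = model[it0]
    return data

m = np.random.default_rng(0).standard_normal(NT)
print(np.array_equal(push(m)[:, 0], pull(m)[:, 0]))  # zero offset: identical
print(np.allclose(push(m), pull(m)))                 # nonzero offsets: only approximate
```

At zero offset the two agree exactly; away from it they differ wherever rounding makes the push collide or the pull skip model samples, which is precisely why a summation operator paired with the smooth operator's transpose is only approximately an adjoint pair.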

Although the requirement of a correct adjoint
can be expected of all the standard
conjugate-gradient data fitting algorithms,
I believe
my `cgplus()` conjugate-gradient subroutine
is immune to that requirement.
I believe this because `cgplus()` uses the adjoint only to suggest
a direction of descent,
while the operator itself chooses the distance moved.
In principle, the search line can be random
as long as the correct (signed) distance is chosen.
Perhaps when used in this way, my `cgplus()`
is properly called a ``conjugate-direction'' algorithm.
This conjecture should be resolved.
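The mechanism behind this belief can be illustrated: if the step length along a search direction Δ*m* is chosen by minimizing the residual with the forward operator itself, α = ⟨*F*Δ*m*, *r*⟩/⟨*F*Δ*m*, *F*Δ*m*⟩, then the residual norm cannot increase even when Δ*m* came from an inexact adjoint. A toy Python sketch (a plain descent loop with exact line search, not Claerbout's `cgplus()`; the matrices are illustrative):

```python
import numpy as np

def descent_inexact_adjoint(F, Fadj, d, niter=50):
    """Minimize |F m - d|^2 when Fadj only approximates F'.
    The rough adjoint suggests a direction dm; the exact forward
    operator F chooses the (signed) distance alpha, so the
    residual norm can never increase."""
    m = np.zeros(F.shape[1])
    r = d.copy()                   # residual d - F m, with m = 0
    rnorms = [np.linalg.norm(r)]
    for _ in range(niter):
        dm = Fadj @ r              # search direction from the rough adjoint
        dr = F @ dm                # exact effect of the step on the residual
        denom = dr @ dr
        if denom == 0.0:
            break
        alpha = (dr @ r) / denom   # optimal signed step length along dm
        m += alpha * dm
        r -= alpha * dr
        rnorms.append(np.linalg.norm(r))
    return m, r, rnorms

rng = np.random.default_rng(1)
F = rng.standard_normal((20, 8))
d = rng.standard_normal(20)
Fbad = F.T + 0.05 * rng.standard_normal((8, 20))  # only approximately F'
m, r, rnorms = descent_inexact_adjoint(F, Fbad, d)
# rnorms is non-increasing even though Fbad is not the true adjoint
```

Note that the α formula carries its own sign, so even a direction that points the "wrong way" merely produces a negative step, consistent with the remark that only the correct signed distance matters.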

Do we get better and quicker inversions by using correct adjoints for search directions? Or could we get better results with pull-type adjoint approximations that do not yield rough or aliased output?

11/12/1997