Next: DIP MOVEOUT Up: Claerbout: Ellipsoids versus hyperboloidsEllipsoids Previous: INTRODUCTION


A natural question is which of the possibilities is more correct or better. The question of ``more correct'' applies to modeling and is best answered by theoreticians (who will find more than simply a hyperbola; they will find its waveform, including its amplitude and phase as a function of frequency). The question of ``better'' is something else. An important practical issue is that the transformation should not leave miscellaneous holes in the output. It is typically desirable to write programs that loop over all positions in the output space, ``pulling'' in whatever inputs are required. It is usually less desirable to loop over all positions in the input space, ``pushing'' or ``spraying'' each input value to the appropriate location in the output space. Programs that push the input data to the output space may leave the output too sparsely distributed, and, because of gridding, the output samples may be irregularly positioned. Thus, to produce smooth outputs, we usually prefer the summation operators $\bold H'$ for migration and $\bold C'$ for diffraction modeling. Since one could always force smooth outputs by lowpass filtering, what we really seek is the highest possible resolution.
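The pull/push contrast above can be sketched in code. This is a minimal illustration, not SEP code: the grid sizes, unit spacings, and constant velocity are assumptions chosen for brevity, and the traveltime is the usual hyperbolic $t = 2\sqrt{z^2 + x^2}/v$.

```python
import numpy as np

nt, nx, nz = 100, 50, 100   # time, midpoint, and depth samples (assumed)
dt = dx = dz = 1.0          # unit grid spacings for simplicity
v = 2.0                     # constant velocity (assumed)

def migrate_pull(data):
    """H': loop over every OUTPUT point (z, x), pulling in the input
    samples on its hyperbola.  No hole can appear in the output."""
    model = np.zeros((nz, nx))
    for iz in range(nz):
        for ix in range(nx):
            z = iz * dz
            for iy in range(nx):                  # sum over traces
                x = (iy - ix) * dx
                it = int(round(np.hypot(z, x) * 2.0 / v / dt))
                if it < nt:
                    model[iz, ix] += data[it, iy]
    return model

def model_push(model):
    """H: loop over every INPUT point, spraying it onto a hyperbola in
    the output.  Gridding can leave the output sparse or aliased."""
    data = np.zeros((nt, nx))
    for iz in range(nz):
        for ix in range(nx):
            if model[iz, ix] == 0.0:
                continue
            z = iz * dz
            for iy in range(nx):
                x = (iy - ix) * dx
                it = int(round(np.hypot(z, x) * 2.0 / v / dt))
                if it < nt:
                    data[it, iy] += model[iz, ix]
    return data
```

The pull loop visits every output sample exactly once, whereas the push loop only touches those output samples some input happens to map onto, which is the source of the holes.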

Although practitioners prefer the summation operators $\bold H'$ and $\bold C'$, these two are only approximately adjoint to each other; that is, $\bold C' \approx \bold H$ (equivalently, $\bold C \approx \bold H'$) holds only approximately. This conflicts with the requirements of sparse solvers, which work with an exact operator pair, either (H,H') or (C,C'). Paige and Saunders (1982) say that their well-known LSQR program is expected to fail if the given adjoint is not truly the adjoint, as with the ``smooth'' pair (H',C').
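The failure mode can be seen with the dot-product test that exact-adjoint solvers implicitly rely on: for a true pair, $\langle \bold H \vec{\bold z}, \vec{\bold t}\rangle = \langle \vec{\bold z}, \bold H' \vec{\bold t}\rangle$ to machine precision. A toy sketch, with small random matrices standing in for the operators (the perturbation size is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 5))                   # stand-in forward operator H
H_adj = H.T                                       # its true adjoint H'
C_adj = H.T + 0.01 * rng.standard_normal((5, 8))  # a "smooth" approximate adjoint C'

z = rng.standard_normal(5)
t = rng.standard_normal(8)

lhs = np.dot(H @ z, t)               # <H z, t>
rhs_exact = np.dot(z, H_adj @ t)     # <z, H' t>: agrees with lhs to machine precision
rhs_smooth = np.dot(z, C_adj @ t)    # <z, C' t>: does not agree
```

An operator pair that fails this test by more than roundoff is exactly the situation in which LSQR's convergence guarantees no longer apply.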

For data processing we prefer the pull operator into model space rather than a push from data space. For inversion, however, we are matching observed data to theoretical data. For this matching we need accurate (not rough) theoretical data, which implies we need the pull operator into data space, while the solver needs its adjoint, the push operator. The operator we do not seem to require is $\vec {\bold t} = \bold H\ \vec {\bold z}$, which sprays a model point into a possibly subsampled hyperbola. In summary:
\begin{eqnarray}
\vec {\bold z} &=& \bold H' \; \vec {\bold t} \quad\quad {\rm migrate\ the\ data} \nonumber \\
\vec {\bold t} &=& \bold H  \; \vec {\bold z} \quad\quad {\rm make\ aliased\ data}
\end{eqnarray} (5)

Although a correct adjoint is required by all the standard conjugate-gradient data-fitting algorithms, I believe my cgplus() conjugate-gradient subroutine is immune to that requirement. I believe this because cgplus() uses the adjoint only to suggest a direction of descent, using the operator itself to choose the distance moved. In principle, the search line can even be random, as long as the correct (signed) distance along it is chosen. Perhaps when used in this way, my cgplus() is properly called a ``conjugate-direction'' algorithm. This conjecture should be resolved.
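The claim above can be illustrated with a minimal sketch (this is not cgplus(); it is a single-direction descent with the operators and perturbation invented for the example). The approximate adjoint only proposes a direction; the forward operator then fixes the exact step length, so the residual norm cannot increase along the chosen line:

```python
import numpy as np

def line_step(A, x, r, g):
    """Move x along direction g to minimize ||r - alpha*A g|| exactly.
    r is the current residual d - A x; g may come from a rough adjoint."""
    Ag = A @ g
    alpha = np.dot(r, Ag) / np.dot(Ag, Ag)   # exact one-dimensional minimizer
    return x + alpha * g, r - alpha * Ag

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))            # stand-in forward operator
d = rng.standard_normal(20)                  # synthetic data
x = np.zeros(10)
r = d - A @ x

# Deliberately rough "adjoint": direction is imperfect, step is still exact.
approx_adjoint = A.T + 0.1 * rng.standard_normal((10, 20))
for _ in range(50):
    g = approx_adjoint @ r                   # descent direction from rough adjoint
    x, r = line_step(A, x, r, g)
```

Because alpha = 0 is always a candidate in the line search, each step is guaranteed not to worsen the fit, which is consistent with the conjecture that only the sign and scale of the move, not the exactness of the adjoint, matter here.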

Do we get better and quicker inversions by using correct adjoints for search directions, or could we get better results with pull-type adjoint approximations that do not yield rough or aliased output?

Stanford Exploration Project