The convergence rate of the conjugate gradient algorithm depends
on the condition number of the matrix to be inverted.
For ill-conditioned matrices, a preconditioner is often necessary.
The design of a good preconditioner
depends directly on the structure of the matrix.
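As a minimal illustration of this dependence (a pure-Python sketch, not the operator discussed in this chapter), the conjugate-gradient method can be run on two diagonal matrices whose eigenvalues are spread uniformly between 1 and the condition number; the worse-conditioned system takes noticeably more iterations to reach the same tolerance:

```python
# Minimal conjugate gradients for a symmetric positive-definite matrix
# applied through a function handle.  The test matrices are diagonal, so
# the condition number is simply max(eigenvalue)/min(eigenvalue).

def cg(apply_A, b, tol=1e-8, max_iter=10000):
    """Solve A x = b; return the solution and the iteration count."""
    x = [0.0] * len(b)
    r = list(b)                      # residual r = b - A x  (x starts at 0)
    p = list(r)                      # search direction
    rs = sum(ri * ri for ri in r)
    for it in range(1, max_iter + 1):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:      # converged: residual norm below tol
            return x, it
        beta = rs_new / rs
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_iter

def diag_apply(d):
    """Matrix-vector product for a diagonal matrix stored as a list."""
    return lambda v: [di * vi for di, vi in zip(d, v)]

n = 50
b = [1.0] * n
# Eigenvalues spread uniformly between 1 and kappa -> condition number kappa.
well = [1.0 + 9.0 * i / (n - 1) for i in range(n)]      # kappa = 10
ill = [1.0 + 999.0 * i / (n - 1) for i in range(n)]     # kappa = 1000

_, it_well = cg(diag_apply(well), b)
_, it_ill = cg(diag_apply(ill), b)
print(it_well, it_ill)   # the ill-conditioned system needs more iterations
```

The eigenvalue spreads and tolerance are arbitrary choices for the demonstration; the qualitative behavior (iteration count growing with condition number, roughly like its square root) is the point.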
In the inversion relation equ1, the number of equations equals
the number of traces in the input data,
and the number of unknowns equals the number of output traces or bins.
Due to irregular sampling, the rows and columns of the operator matrix are
badly scaled.
Since the operator is essentially a Kirchhoff-type matrix, its condition
can be improved by the diagonal weighting described in the previous
chapter.
This weighting amounts to pre- and post-multiplying the
operator by diagonal matrices whose entries are
the inverses of the row or column sums of the operator. Similar approaches to diagonal scaling are
discussed in the mathematical literature using different norms of
the rows and columns. They are often referred to
as left and right preconditioners; I prefer to call them
*data-space* and *model-space* preconditioners. The rationale for
this terminology is that the scaled adjoint is the
first step of the inversion. For left preconditioning, the adjoint operator
is applied after the data have been normalized by the diagonal operator.
I therefore refer to this weighting as *data-space* preconditioning.
Right preconditioning is equivalent to applying the adjoint operator
followed by a scaling of the model by the diagonal
operator. Consequently, I refer to this weighting as
*model-space* preconditioning.
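A small sketch may make the distinction concrete. Assuming a toy nonnegative stand-in for the Kirchhoff-type matrix (the values below are invented for illustration), the data-space weight divides each data component by its row sum before the adjoint is applied, while the model-space weight divides each model component by its column sum after the adjoint:

```python
# Toy stand-in for the Kirchhoff-type operator: rows correspond to input
# data traces, columns to output model bins.  Irregular sampling makes the
# row and column sums very uneven.
L = [[2.0, 0.0, 1.0],
     [0.0, 0.5, 0.0],
     [4.0, 1.0, 3.0],
     [0.0, 0.0, 0.25]]

nrow, ncol = len(L), len(L[0])
row_sum = [sum(row) for row in L]
col_sum = [sum(L[i][j] for i in range(nrow)) for j in range(ncol)]

# Data-space (left) preconditioner: diagonal of inverse row sums,
# applied to the data before the adjoint.
Wd = [1.0 / s for s in row_sum]
# Model-space (right) preconditioner: diagonal of inverse column sums,
# applied to the model after the adjoint.
Wm = [1.0 / s for s in col_sum]

# Left scaling Wd L: each row of the scaled operator sums to one
# (up to rounding), so the data equations are evenly balanced.
left = [[Wd[i] * L[i][j] for j in range(ncol)] for i in range(nrow)]
# Right scaling L Wm: each column sums to one (up to rounding),
# so the model unknowns are evenly balanced.
right = [[L[i][j] * Wm[j] for j in range(ncol)] for i in range(nrow)]

print([sum(row) for row in left])
print([sum(right[i][j] for i in range(nrow)) for j in range(ncol)])
```

Either scaling removes the gross imbalance caused by irregular sampling; which one is preferable depends on whether the imbalance is more severe across the data traces or across the model bins.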

1/18/2001