
Preconditioning

 Like steepest descent, CG methods can be accelerated if a nonsingular matrix $\bold M$ with known inverse can be found to approximate $\bold A$. Then, instead of solving $\bold A \bold x \approx \bold y$, we solve $\bold M^{-1} \bold A \bold x \approx \bold M^{-1} \bold y = \bold c$, which should converge much faster since $\bold M^{-1}\bold A \approx \bold I$. This is called ``preconditioning.''
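
The sketch below is not the book's cgmeth(); it is a minimal NumPy illustration of the idea for a symmetric positive-definite $\bold A$, using the standard preconditioned conjugate-gradient recurrence, which applies $\bold M^{-1}$ to the residual rather than forming $\bold M^{-1}\bold A$ explicitly. The test matrix, the diagonal (Jacobi) choice of $\bold M$, and the function names are illustrative assumptions.

import numpy as np

def pcg(A, b, apply_minv, niter=500, tol=1e-8):
    """Conjugate gradients for A x = b, with the preconditioner applied as apply_minv(r)."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual of the starting guess
    z = apply_minv(r)                   # preconditioned residual, M^{-1} r
    p = z.copy()                        # first search direction
    for k in range(niter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)      # step length along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, k + 1
        z_new = apply_minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p            # new direction, conjugate to the previous ones
        r, z = r_new, z_new
    return x, niter

# A badly scaled symmetric positive-definite test matrix; its diagonal plays the role of M.
n = 200
scale = np.logspace(0, 4, n)
A = np.diag(scale) + 0.01 * np.ones((n, n))
b = A @ np.ones(n)
diagA = np.diag(A)

x_plain, it_plain = pcg(A, b, lambda r: r)         # no preconditioning (M = I)
x_prec,  it_prec  = pcg(A, b, lambda r: r / diagA) # Jacobi preconditioning, M = diag(A)
print(it_plain, it_prec)   # the preconditioned run needs far fewer iterations

Because $\bold M$ here is diagonal, $\bold M^{-1}\bold A$ has its eigenvalues clustered near one, and the iteration count drops sharply compared with the unpreconditioned run.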

In my experience the matrix $\bold M$ is rarely available, except in the crude approximation of scaling columns so that the unknowns have about equal magnitude. As with signals and images, spectral balancing should be helpful.
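
As an illustration of the column-scaling idea (not code from the book), here is a sketch on a hypothetical, badly scaled least-squares problem: each column of $\bold A$ is divided by its norm so the scaled unknowns have comparable magnitude, and the scaling is undone after the solve.

import numpy as np

# Hypothetical problem whose columns (and hence unknowns) differ by orders of magnitude.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) * np.array([1.0, 1e3, 1e-3])
x_true = np.array([2.0, 0.002, 2000.0])
y = A @ x_true

# Column scaling as a crude preconditioner: divide each column by its norm,
# solve for the scaled unknowns, then undo the scaling.
d = np.linalg.norm(A, axis=0)      # one scale factor per column
A_scaled = A / d                   # columns now have comparable magnitude
x_scaled, *_ = np.linalg.lstsq(A_scaled, y, rcond=None)
x = x_scaled / d                   # recover the unknowns of the original system

print(np.allclose(x, x_true))      # True: scaling changes the conditioning, not the answer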

EXERCISES:

  1. Remove lines from the conjugate-gradient program to convert it to a program that solves simultaneous equations by the method of steepest descent. Per iteration, how many dot products are saved, and how much is the memory requirement reduced?
  2. A precision problem can arise with the CG method when many iterations are required. What happens is that $\bold R$ drifts away from $\bold A \bold r$ and $\bold X$ drifts away from $\bold A \bold x$. Revise the program cgmeth() to restore consistency every twentieth iteration.
