
CODING

Finally, I introduce the specialized notation I like for optimization manipulations. First, I omit bold on vectors. Second, when a vector is transformed by the operator $\bold A$, I denote the transformed vector by an upper-case letter. Thus $D = \bold A d$ and $G = \bold A g$. Let the scalar $\alpha$ denote the distance moved along the gradient. In this notation, perturbations of $\lambda$ are
\begin{displaymath}
\lambda(\alpha) \;=\;
\frac{(D + \alpha G)'(D + \alpha G)}{(d + \alpha g)'(d + \alpha g)}
\end{displaymath} (6)
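The payoff of forming $D$ and $G$ once is that $\lambda(\alpha)$ can then be scanned over $\alpha$ with dot products alone, without further applications of $\bold A$. Below is a minimal NumPy sketch of equation (6); the names are illustrative, not taken from the original code.

\begin{verbatim}
import numpy as np

def rayleigh_alpha(D, G, d, g, alpha):
    """Equation (6): lambda as a function of the step length alpha.
    D = A d and G = A g are assumed precomputed, so no operator
    application is needed here, only dot products."""
    num = D + alpha * G
    den = d + alpha * g
    return np.dot(num, num) / np.dot(den, den)
\end{verbatim}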
A steepest descent method amounts to the following (a code sketch appears after the list):
1. Find the gradient $g$ using (5).
2. Compute $D = \bold A d$ and $G = \bold A g$.
3. Maximize the ratio of scalars in (6) by any crude method, such as interval division.
4. Repeat.
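Here is a sketch of one pass through steps 1-4, assuming $\bold A$ is available as a matrix (or anything supporting a matrix-vector product) and that the gradient of (5) is supplied by the caller as grad_fn, since its formula is not repeated here. The golden-section interval division and the bracket [0, alpha_max] are illustrative choices, not the original code.

\begin{verbatim}
import numpy as np

def interval_division_max(f, a, b, tol=1.e-6):
    """Crude interval division (golden section) for the alpha that
    maximizes f on [a, b], assuming f is unimodal there."""
    r = (np.sqrt(5.) - 1.) / 2.
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return 0.5 * (a + b)

def steepest_step(A, grad_fn, d, alpha_max=1.0):
    """One pass of steps 1-4: gradient, transform, line search, update."""
    g = grad_fn(d)                  # step 1: gradient from (5), caller-supplied
    D, G = A @ d, A @ g             # step 2: transformed vectors
    lam = lambda alpha: (np.dot(D + alpha * G, D + alpha * G)
                         / np.dot(d + alpha * g, d + alpha * g))  # ratio (6)
    alpha = interval_division_max(lam, 0.0, alpha_max)            # step 3
    return d + alpha * g            # step 4: repeat by calling again
\end{verbatim}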
A conjugate-gradient-like method resembles steepest descent, but it supplements the gradient with another vector: the step taken on the previous iteration. Michael Saunders suggested the Polak-Ribière method and gave me a Fortran subroutine for the line search.
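The sketch below is not the Fortran routine mentioned above; it only records the standard Polak-Ribière weighting of the previous direction, with the line search along the returned direction done as in the steepest-descent sketch.

\begin{verbatim}
import numpy as np

def polak_ribiere_direction(g_new, g_old, s_old):
    """Supplement the new gradient with the previous search direction,
    weighted by the Polak-Ribiere coefficient beta."""
    beta = np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)
    beta = max(beta, 0.0)   # common safeguard: restart when beta < 0
    return g_new + beta * s_old
\end{verbatim}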

