
GRADIENT

The gradient of $\lambda$ will assist us in finding the maximum. First recall the derivative of a quotient:
\begin{eqnarray}
\lambda &=& {n \over d} \\
{d \lambda \over dx} &=& {n_x\, d \ -\ n\, d_x \over d^2}
\ \ \ =\ \ \
{1\over d}\ \left( n_x - \lambda\, d_x \right)
\end{eqnarray} (2) (3)
Using (3) we set to zero the gradient of the quadratic form $\lambda(\bold d) = (\bold d' \bold A'\bold A \bold d)/(\bold d' \bold d)$, getting $\bold A'\bold A \bold d = \lambda \bold d$, which proves the eigenvector assertion I made above.
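To spell out the intermediate step: with numerator $n = \bold d'\bold A'\bold A \bold d$ and denominator $\bold d'\bold d$, the gradients with respect to $\bold d$ are $2\bold A'\bold A \bold d$ and $2\bold d$, so (3) becomes
\begin{displaymath}
{\partial \lambda \over \partial \bold d'}
\ \ \ =\ \ \
{2 \over \bold d'\bold d}
\left( \bold A'\bold A \bold d \ -\ \lambda\, \bold d \right)
\end{displaymath}
whose zero is exactly the eigenvector equation.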

Now restate the gradient while distinguishing missing data from known data, taking the gradient only with respect to the missing data. To exhibit this most clearly, we write the $\bold A$ operator with its columns separated into two parts, one for missing data and one for known, $\bold A = [\, \bold B \ \ \bold C \,]$, with the data vector $\bold d$ split correspondingly into missing values $\bold m$ and known values $\bold k$:
\begin{displaymath}
{ \partial \lambda \over \partial \bold m'}
\ \ \ =\ \ \
\bold B' \left[\, \bold B \ \ \bold C \,\right]
\left[ \begin{array}{c} \bold m \\ \bold k \end{array} \right]
\ \ - \ \ \lambda\, \bold m
\end{displaymath} (4)
Another expression for the gradient offers more insight and is often more practical during coding. Let $\bold M$ be a square diagonal matrix with ones on the main diagonal at locations of missing data and zeros elsewhere. Using $\bold M$, the gradient is
\begin{displaymath}
\bold g \ \ \ =\ \ \ \bold M
\left[ \bold A'\bold A \bold d
\ \ -\ \
{ \parallel \bold A \bold d \parallel^2 \over
  \parallel \bold d \parallel^2 } \ \bold d
\right]
\end{displaymath} (5)
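To make the coding concrete, a minimal NumPy sketch of equation (5) follows. It is an illustration rather than the program of the next section; the names missing_data_gradient, A, d, and missing are hypothetical, with a dense matrix standing in for the operator $\bold A$.

import numpy as np

# Sketch of gradient (5):  g = M [ A'A d - (||A d||^2 / ||d||^2) d ].
# Hypothetical names: A is the operator as a dense matrix, d is the data
# vector (missing slots filled with current guesses), and missing is a
# boolean mask that is True where data are missing.
def missing_data_gradient(A, d, missing):
    Ad = A @ d
    lam = (Ad @ Ad) / (d @ d)   # Rayleigh quotient lambda = ||A d||^2 / ||d||^2
    g = A.T @ Ad - lam * d      # unmasked gradient direction A'A d - lambda d
    g[~missing] = 0.0           # apply M: zero the components at known data
    return g

A steepest-ascent iteration would then update the data by d = d + step * missing_data_gradient(A, d, missing), which leaves the known values untouched because of the mask.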

