
[3] Iterative algorithms

Many algorithms can be chosen to solve the minimization problem; the appropriate choice depends on the properties of the matrix A, which is ideally positive definite. Here only Newton-type iterative algorithms are listed.

[A] Basic Newton approach: Performing a Taylor expansion of $f\left( \delta \textbf{m}\right) $ near the point $\delta \textbf{m}^{\left( k \right) }$ yields
 \begin{eqnarray}
f\left(\delta \textbf{m}\right) \approx \phi\left( \delta \textbf{m}\right) &=& f\left(\delta \textbf{m}^{(k)}\right)+\nabla f\left(\delta \textbf{m}^{(k)}\right)^{T}\left(\delta \textbf{m}-\delta \textbf{m}^{(k)} \right) \nonumber \\
& &+\,\frac{1}{2}\left(\delta \textbf{m}-\delta \textbf{m}^{(k)} \right)^{T}\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)\left(\delta \textbf{m}-\delta \textbf{m}^{(k)} \right).
\end{eqnarray}
(30)
Setting $\partial \phi\left(\delta \textbf{m} \right)/\partial \delta \textbf{m} = 0$ yields $\nabla f\left(\delta \textbf{m}^{(k)}\right)+\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)\left(\delta \textbf{m}-\delta \textbf{m}^{(k)} \right)=0$. If the Hessian $\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)$ is invertible, the Newton iteration is
 \begin{displaymath}
\delta \textbf{m}^{\left( k+1\right) }=\delta \textbf{m}^{\left( k\right) }-\left[\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)\right]^{-1} \nabla f\left(\delta \textbf{m}^{(k)}\right).
\end{displaymath} (31)
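To make equation (31) concrete, here is a minimal sketch of a single Newton update in Python; the callables grad_f and hess_f are hypothetical stand-ins for the gradient and Hessian of the objective. The linear system is solved directly rather than forming the inverse Hessian, which is cheaper and numerically more stable.

import numpy as np

def newton_step(dm_k, grad_f, hess_f):
    # One update of equation (31):
    #   dm_{k+1} = dm_k - [Hessian]^{-1} gradient
    g = grad_f(dm_k)           # gradient of f at dm_k
    H = hess_f(dm_k)           # Hessian of f at dm_k
    # Solve H d = g instead of explicitly inverting H.
    d = np.linalg.solve(H, g)
    return dm_k - d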
Clearly, this simple Newton iteration lacks a 1D line search. The Newton iteration combined with a 1D search is called the damped Newton algorithm. The procedure is as follows:

(a) Assign the initial model $\delta \textbf{m}^{(1)}$ and the acceptable error $\varepsilon > 0$, and set the iteration number k=1.

(b) Calculate $\nabla f\left(\delta \textbf{m}^{(k)}\right)$ and $\left[ \nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)\right]^{-1}$.

(c) If $\Vert \nabla f\left(\delta \textbf{m}^{(k)}\right)\Vert\leq \varepsilon $, then stop the iteration; otherwise, set $\textbf{d}^{(k)}=-\left[\nabla^{2} f\left(\delta \textbf{m}^{(k)} \right) \right] ^{-1} \nabla f \left( \delta \textbf{m}^{(k) }\right)$.

(d) Starting from $\delta \textbf{m}^{\left( k \right) }$, carry out a 1D search along the direction $\textbf{d}^{(k)}$ for a step length $\lambda_{k}$ satisfying $f\left( \delta \textbf{m}^{\left( k\right) } +\lambda_{k}\textbf{d}^{(k)} \right) = \min_{\lambda \geq 0} f\left( \delta \textbf{m}^{\left( k\right) } +\lambda \textbf{d}^{(k)} \right)$.

(e) Let $\delta \textbf{m}^{\left( k+1\right) }= \delta \textbf{m}^{\left( k\right) } +\lambda_{k}\textbf{d}^{\left( k\right)}$, set k:=k+1, and go to step (b).

If the Hessian $\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)$ is not positive definite, the Newton algorithm must be modified further: $\nabla^{2}f\left(\delta \textbf{m}^{(k)} \right)$ is replaced with $\nabla^{2} f\left(\delta \textbf{m}^{(k)} \right)+\varepsilon_{k} I$. If $\varepsilon_{k}$ is chosen suitably, the matrix $\nabla^{2} f\left(\delta \textbf{m}^{(k)} \right)+\varepsilon_{k} I$ is positive definite.
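The procedure above, including the $\varepsilon_{k} I$ modification for an indefinite Hessian, can be sketched compactly in code. This is a minimal illustration, assuming callables f, grad_f, and hess_f for the objective, its gradient, and its Hessian; a crude backtracking rule stands in for the exact 1D search of step (d).

import numpy as np

def damped_newton(f, grad_f, hess_f, dm, tol=1e-8, max_iter=100):
    # Steps (a)-(e) of the damped Newton procedure.
    n = len(dm)
    for k in range(max_iter):
        g = grad_f(dm)
        if np.linalg.norm(g) <= tol:       # step (c): stopping test
            return dm
        H = hess_f(dm)
        # Modified Newton: add eps_k * I until the shifted Hessian is
        # positive definite (detected by a successful Cholesky factorization).
        eps_k = 0.0
        while True:
            try:
                np.linalg.cholesky(H + eps_k * np.eye(n))
                break
            except np.linalg.LinAlgError:
                eps_k = max(2.0 * eps_k, 1e-6)
        d = -np.linalg.solve(H + eps_k * np.eye(n), g)  # search direction
        # Step (d): backtracking stands in for the exact 1D search.
        f0, lam = f(dm), 1.0
        while f(dm + lam * d) > f0 and lam > 1e-12:
            lam *= 0.5
        dm = dm + lam * d                  # step (e): update the model
    return dm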
[B] Quasi-Newton algorithm: The main feature of this algorithm is that the inverse of the Hessian matrix is not calculated explicitly. Differentiating both sides of equation (30), with the expansion point taken at $\delta \textbf{m}^{(k+1)}$, yields

 \begin{displaymath}
\nabla f\left(\delta \textbf{m}^{(k)}\right) \approx \nabla f\left(\delta \textbf{m}^{(k+1)}\right) + \nabla^{2}f\left(\delta \textbf{m}^{(k+1)} \right)\left(\delta \textbf{m}^{(k)}-\delta \textbf{m}^{(k+1)} \right),
\end{displaymath} (32)
and defining $\textbf{p}^{(k)}=\delta \textbf{m}^{(k+1)}-\delta \textbf{m}^{(k)}$ and $\textbf{q}^{(k)}=\nabla f\left(\delta \textbf{m}^{(k+1)}\right) - \nabla f\left(\delta \textbf{m}^{(k)}\right) $ then yields the quasi-Newton condition,
 \begin{displaymath}
\textbf{p}^{(k)}=H_{k+1}\textbf{q}^{(k)},
\end{displaymath} (33)
where $H_{k+1}=\left[\nabla^{2}f\left(\delta \textbf{m}^{(k+1)} \right) \right] ^{-1}$. A series of formulas for approximating $H_{k+1}$ is listed below.

Formula 1: $H_{k+1}=H_{k}+\frac{\left(\textbf{p}^{(k)}-H_{k}\textbf{q}^{(k)} \right)\left(\textbf{p}^{(k)}-H_{k}\textbf{q}^{(k)} \right)^{T}}{\left(\textbf{q}^{(k)}\right)^{T}\left( \textbf{p}^{(k)}-H_{k}\textbf{q}^{(k)}\right) }$, the symmetric rank-one update.

Formula 2: $H^{DFP}_{k+1}=H_{k}+\frac{ \textbf{p}^{(k)}\left(\textbf{p}^{(k)} \right)^{T}}{\left(\textbf{p}^{(k)} \right)^{T}\textbf{q}^{(k)}}-\frac{H_{k}\textbf{q}^{(k)}\left(\textbf{q}^{(k)}\right)^{T}H_{k}}{\left(\textbf{q}^{(k)} \right)^{T}H_{k}\textbf{q}^{(k)}}$, which is the DFP (Davidon-Fletcher-Powell) algorithm.

Formula 3: $H^{BFGS}_{k+1}=H_{k}+\left(1+\frac{\left( \textbf{q}^{(k)}\right) ^{T}H_{k}\textbf{q}^{(k)}}{\left(\textbf{p}^{(k)} \right)^{T}\textbf{q}^{(k)}}\right)\frac{\textbf{p}^{(k)}\left(\textbf{p}^{(k)} \right)^{T}}{\left(\textbf{p}^{(k)} \right)^{T}\textbf{q}^{(k)}}-\frac{\textbf{p}^{(k)}\left(\textbf{q}^{(k)}\right)^{T}H_{k}+H_{k}\textbf{q}^{(k)}\left(\textbf{p}^{(k)} \right)^{T}}{\left(\textbf{p}^{(k)} \right)^{T}\textbf{q}^{(k)}}$, which is the BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm.

Formula 4: $H^{\phi}_{k+1}=\left( 1-\phi \right)H^{DFP}_{k+1}+\phi H^{BFGS}_{k+1} $, where $\phi \in \left[0,1\right]$ is a parameter; this family of updates is known as the Broyden family.

The algorithm procedure is as follows; a code sketch using Formula 3 is given after step (g).

(a) Assign the initial model $\delta \textbf{m}^{(1)}$ and the acceptable error $\varepsilon > 0$.

(b) Setting H1=In and the iteration number k=1, calculate $\textbf{g}_{1}=\nabla f\left(\delta \textbf{m}^{(1)}\right)$.

(c) Let $\textbf{d}^{(k)}=-H_{k}\textbf{g}_{k}$.

(d) Starting from $\delta \textbf{m}^{\left( k \right) }$, carry out a 1D search along the direction $\textbf{d}^{(k)}$ for a step length $\lambda_{k}$ satisfying $f\left( \delta \textbf{m}^{\left( k\right) } +\lambda_{k}\textbf{d}^{(k)} \right) = \min_{\lambda \geq 0} f\left( \delta \textbf{m}^{\left( k\right) } +\lambda \textbf{d}^{(k)} \right)$, and set $\delta \textbf{m}^{\left( k+1\right) }= \delta \textbf{m}^{\left( k\right) } +\lambda_{k}\textbf{d}^{\left( k\right)}$.

(e) If $\Vert \nabla f\left(\delta \textbf{m}^{(k+1)}\right)\Vert\leq \varepsilon $, then stop the iteration; otherwise, go to Step (f).

(f) If k=n, then let $\delta \textbf{m}^{(1)}=\delta \textbf{m}^{(k+1)}$ and go to Step (b); otherwise, go to Step (g).

(g) Letting $ \textbf{g}_{k+1}=\nabla f\left(\delta \textbf{m}^{(k+1)}\right)$, $ \textbf{p}^{(k)}=\delta \textbf{m}^{(k+1)}-\delta \textbf{m}^{(k)}$ and $\textbf{q}^{(k)}=\textbf{g}_{k+1} - \textbf{g}_{k}$, calculate Hk+1 with any of Formulas 1-4. Setting k:=k+1, go to Step (c).
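The whole quasi-Newton loop, using the BFGS update of Formula 3, can be sketched as follows. This is a minimal illustration, assuming callables f and grad_f for the objective and its gradient; backtracking again stands in for the exact 1D search, and the guard requiring $\left(\textbf{p}^{(k)}\right)^{T}\textbf{q}^{(k)} > 0$ is an added safeguard, not part of the procedure above, that keeps Hk positive definite.

import numpy as np

def bfgs(f, grad_f, dm, tol=1e-8, max_iter=200):
    # Quasi-Newton iteration, steps (a)-(g), with the BFGS update.
    n = len(dm)
    H = np.eye(n)                          # step (b): H_1 = I_n
    g = grad_f(dm)
    for k in range(max_iter):
        d = -H @ g                         # step (c): search direction
        # Step (d): backtracking stands in for the exact 1D search.
        f0, lam = f(dm), 1.0
        while f(dm + lam * d) > f0 and lam > 1e-12:
            lam *= 0.5
        dm_new = dm + lam * d
        g_new = grad_f(dm_new)
        if np.linalg.norm(g_new) <= tol:   # step (e): stopping test
            return dm_new
        if (k + 1) % n == 0:               # step (f): restart every n steps
            H = np.eye(n)
        else:                              # step (g): BFGS update (Formula 3)
            p = dm_new - dm
            q = g_new - g
            pq = p @ q
            if pq > 1e-12:                 # safeguard: keep H positive definite
                Hq = H @ q
                H = (H
                     + (1.0 + (q @ Hq) / pq) * np.outer(p, p) / pq
                     - (np.outer(p, Hq) + np.outer(Hq, p)) / pq)
        dm, g = dm_new, g_new
    return dm

For a quadratic objective with exact 1D searches, updates from the Broyden family reach the minimizer in at most n iterations, which motivates the restart of step (f) every n iterations.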

