
An efficient algorithm for solving nonlinear problems

The solver works as follows:
1.
Choose $\bold{m}_0$, $l$, and $\bold{B}_0$. Set $k=0$.
2.
Compute
\begin{eqnarray}
\bold{d}_k&=&-\bold{B}_k\nabla f(\bold{m}_k),\\
\bold{m}_{k+1}&=&\bold{m}_k+\lambda_k\bold{d}_k,
\end{eqnarray}
(79)
(80)
where $\lambda_k$ meets the Wolfe conditions.
3.
Let $\hat{l}=\min\{k,\,l-1\}$. Update $\bold{B}_0$ $\hat{l}+1$ times using the pairs $\{\bold{y}_j,\bold{s}_j\}_{j=k-\hat{l}}^{k}$, i.e., let
\begin{eqnarray}
\bold{B}_{k+1}&=&\bold{v}_k'\bold{v}_{k-1}'\cdots\bold{v}_{k-\hat{l}}'\,\bold{B}_0\,\bold{v}_{k-\hat{l}}\cdots\bold{v}_{k-1}\bold{v}_k \nonumber\\
& &+\rho_{k-\hat{l}}\,\bold{v}_k'\cdots\bold{v}_{k-\hat{l}+1}'\,\bold{s}_{k-\hat{l}}\bold{s}_{k-\hat{l}}'\,\bold{v}_{k-\hat{l}+1}\cdots\bold{v}_k \nonumber\\
& &+\cdots+\rho_k\bold{s}_k\bold{s}_k'.
\end{eqnarray}
(81)
(82)
4.
Set $k=k+1$ and go to step 2 if the residual power is not small enough.
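The steps above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the search direction is obtained with the two-loop recursion attributed below to Nocedal (1980), the initial approximation is scaled as in the next paragraph, and a simple backtracking Armijo search stands in for a full Wolfe line search (a simplification; all function and variable names here are invented for illustration).

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: computes d = -B*grad without ever
    forming the inverse-Hessian approximation B explicitly."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Scaled initial approximation (y_k's_k / ||y_k||^2) * I
    if s_list:
        gamma = (y_list[-1] @ s_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r

def lbfgs(f, grad_f, m0, l=5, tol=1e-8, max_iter=200):
    """Minimal loop following steps 1-4; Armijo backtracking is a
    stand-in for the Wolfe line search of step 2."""
    m = m0.astype(float)
    s_list, y_list = [], []
    g = grad_f(m)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:              # step 4: stopping test
            break
        d = lbfgs_direction(g, s_list, y_list)   # step 2: d_k = -B_k grad f
        lam, fm = 1.0, f(m)
        while f(m + lam * d) > fm + 1e-4 * lam * (g @ d):
            lam *= 0.5                           # backtrack until decrease
        m_new = m + lam * d
        g_new = grad_f(m_new)
        s, y = m_new - m, g_new - g
        if y @ s > 1e-12:                        # keep B positive definite
            s_list.append(s); y_list.append(y)
            if len(s_list) > l:                  # step 3: keep last l pairs
                s_list.pop(0); y_list.pop(0)
        m, g = m_new, g_new
    return m
```

Because only the last $l$ pairs $\{\bold{y}_j,\bold{s}_j\}$ are stored, the memory cost grows linearly with the model size rather than quadratically.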
The update $\bold{B}_{k+1}$ is not formed explicitly; instead, we compute $\bold{d}_k=-\bold{B}_k\nabla f(\bold{m}_k)$ with the iterative formula of Nocedal (1980). Liu and Nocedal (1989) propose scaling the initial symmetric positive-definite $\bold{B}_0$ at each iteration as follows:
 \begin{displaymath}
\bold{B}_k^0=\frac{\bold{y}_k'\bold{s}_k}{\parallel\bold{y}_k\parallel_2^2}\bold{B}_0.\end{displaymath} (83)
This scaling greatly improves the performance of the method. Liu and Nocedal (1989) show that for large-scale problems the choice of the storage limit $l$ has little effect; a common choice is l=5. In practice, the initial guess $\bold{B}_0$ for the Hessian is the identity matrix $\bold{I}$, which can then be scaled as proposed in equation (83). The nonlinear solver detailed in the algorithm above converges to a local minimizer $\bold{m}^*$ of $f(\bold{m})$.
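To see why the scaling of equation (83) helps, consider a quadratic objective $f(\bold{m})=\frac{\lambda}{2}\bold{m}'\bold{m}$, for which $\bold{y}_k=\lambda\bold{s}_k$: the scaling factor then equals $1/\lambda$, so the scaled identity recovers the curvature scale of the true inverse Hessian. A small numerical check of this observation (the step vector below is arbitrary illustrative data):

```python
import numpy as np

lam = 3.0                         # curvature of the quadratic f = (lam/2) m'm
s = np.array([0.5, -1.0, 2.0])    # an arbitrary step s_k (illustrative data)
y = lam * s                       # gradient change y_k for this quadratic
gamma = (y @ s) / (y @ y)         # scaling factor of equation (83)
# gamma equals 1/lam: the scaled identity matches the true inverse Hessian.
```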


Stanford Exploration Project
5/5/2005