
Gradient Projection Algorithm

Identifying the active set $A({\bf x})$ is a minimization problem in itself. An effective way to find $A({\bf x})$ is the gradient projection method (Kelley, 1999). First, define $P$ as the projection of ${\bf x}$ onto $\Omega$, acting componentwise so that for each $x_i$ we have
\begin{displaymath}
P(x_i)= \left \{
\begin{array}{lcl}
 l_i & \mathrm{if} & x_i \leq l_i,\\
 x_i & \mathrm{if} & l_i \leq x_i \leq u_i,\\
 u_i & \mathrm{if} & x_i \geq u_i.
\end{array}\right.\end{displaymath} (3)
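In code, this projection is simply a componentwise clip of ${\bf x}$ to the bounds. A minimal sketch in Python (NumPy assumed; the function name project_onto_box is illustrative, not from the paper):

import numpy as np

def project_onto_box(x, l, u):
    """Componentwise projection of x onto the box l <= x <= u (equation 3)."""
    # Values below l_i are raised to l_i, values above u_i are lowered to u_i.
    return np.minimum(np.maximum(x, l), u)

# Example: project_onto_box(np.array([-2.0, 0.5, 3.0]),
#                           np.zeros(3), np.ones(3)) -> [0.0, 0.5, 1.0]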
From this definition, one can modify the classical steepest-descent algorithm by projecting each iterate onto the feasible region as follows:
\begin{displaymath}
{\bf x_{k+1}}=P({\bf x_k - \lambda g_k}),\end{displaymath} (4)
where $k$ is the iteration number, ${\bf g_k}$ is the gradient of $f({\bf x})$ at iteration $k$, and $\lambda$ is the step length given by a line-search scheme. Given the current position ${\bf x_k}$, finding the local minimizer along the projected path is relatively straightforward. The important property of the projection algorithm is that, after a sufficient number of iterations, its active set coincides with the active set of the solution vector ${\bf x^*}$. Consequently, the projected-gradient subproblem does not need to be solved exactly; an approximate solution suffices. Once the active set is identified, the inactive variables can be optimized as an unconstrained problem while the bounded variables are held fixed. This unconstrained subproblem can be solved by any method for unconstrained optimization. The method in this paper is based on a trust-region method that incorporates two important variations: first, line searches are used; second, the Hessian (matrix of second derivatives) of the objective function is approximated with BFGS matrices.
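As an illustration of iteration (4) and the active-set identification described above, the following Python sketch combines the projection, a simple backtracking line search, and a test for components pinned at a bound. It is a minimal illustration under assumed defaults (step halving, the tolerances, and the name projected_gradient are not from the paper), not the trust-region/BFGS scheme actually used here:

import numpy as np

def projected_gradient(f, grad, x0, l, u, tol=1e-6, max_iter=200):
    """Projected steepest descent (equation 4); illustrative sketch only."""
    x = np.clip(x0, l, u)
    for k in range(max_iter):
        g = grad(x)
        lam = 1.0
        # Backtracking line search: shrink the step until the projected
        # point decreases the objective (or the step becomes negligible).
        while lam > 1e-12 and f(np.clip(x - lam * g, l, u)) >= f(x):
            lam *= 0.5
        x_new = np.clip(x - lam * g, l, u)   # equation (4)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    # Approximate active set A(x): components held at a lower or upper bound.
    active = (x <= l) | (x >= u)
    return x, active

Once active stabilizes, the variables flagged there are frozen and the remaining (inactive) variables are passed to the unconstrained solver, as described in the text.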