** Next:** Conjugate Guided Gradient (CGG) method
** Up:** CG method for LS
** Previous:** CG method for LS

Instead of the *L*^{2}-norm solution obtained by the conventional LS method,
*L*^{p}-norm minimization solutions, with $1 \le p < 2$, are often tried.
Iterative inversion algorithms called IRLS
(Iteratively Reweighted Least Squares)
algorithms have been developed to solve these problems, which lie
between the least-absolute-values problem and the classical least-squares problem.
The main advantage of IRLS is to provide an easy way to compute
the approximate *L*^{1}-norm solution.
*L*^{1}-norm solutions are known to be more robust than *L*^{2}-norm solutions,
being less sensitive to spiky, high-amplitude noise
(Claerbout and Muir, 1973; Scales et al., 1988; Scales and Gersztenkorn, 1987; Taylor et al., 1979).

The problem solved by IRLS is the minimization of the weighted residual
in the least-squares sense:

$$ \min_{\mathbf{m}} \; \| \mathbf{W}\,\mathbf{r} \|_2^2
 = \min_{\mathbf{m}} \; \| \mathbf{W} \left( \mathbf{d} - \mathbf{G}\,\mathbf{m} \right) \|_2^2 , \qquad (5) $$

where $\mathbf{d}$ is the data, $\mathbf{G}$ the modeling operator, $\mathbf{m}$ the model,
$\mathbf{r} = \mathbf{d} - \mathbf{G}\mathbf{m}$ the residual, and $\mathbf{W}$ a diagonal weighting matrix.

The gradient for the weighted residual in the least-squares sense becomes

$$ \mathbf{g} = -\,\mathbf{G}^{T}\mathbf{W}^{T}\mathbf{W}\,\mathbf{r} . \qquad (6) $$

The particular choice for $\mathbf{W}$ is the one
that results in minimizing the *L*^{p} norm of the residual.
Choosing the *i*^{th} diagonal element of $\mathbf{W}$
to be a function of the *i*^{th} component of the residual vector $\mathbf{r}$
as follows:

$$ W_{ii} = |r_i|^{(p-2)/2} , \qquad (7) $$

the norm of the weighted residual is then

$$ \| \mathbf{W}\,\mathbf{r} \|_2^2 = \sum_i |r_i|^{p} . \qquad (8) $$
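As a quick numerical check of the weighting in equations (7) and (8), the following sketch (plain Python, with an arbitrary example residual vector) verifies that the squared weighted *L*^{2} norm of the residual equals its *L*^{p} norm:

```python
# Check that weights W_ii = |r_i|**((p - 2)/2) turn the squared
# weighted L2 norm of the residual into its Lp norm (eqs. 7-8).
p = 1.0                               # target norm; any 1 <= p < 2 works
r = [3.0, -0.5, 2.0, -4.0]            # example residual vector (assumed)
w = [abs(ri) ** ((p - 2) / 2) for ri in r]
weighted = sum((wi * ri) ** 2 for wi, ri in zip(w, r))
lp_norm = sum(abs(ri) ** p for ri in r)
print(weighted, lp_norm)              # both equal 9.5 for p = 1
```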

Therefore, this can be thought of as a method that estimates
the gradient of the *L*^{p} norm of the residual.
This method is valid for norms with $1 \le p < 2$.
When the *L*^{1}-norm is desired, the weighting is as follows:

$$ W_{ii} = |r_i|^{-1/2} . $$

This will reduce the contribution of large residuals
and improve the fit to the data that are already well estimated.
Thus, the *L*^{1}-norm-based minimization is robust,
being less sensitive to noise bursts in the data.
Huber proposed a hybrid *L*^{1}/*L*^{2}-norm (Huber, 1973)
that treats the small residuals in an *L*^{2}-norm sense
and the large residuals in an *L*^{1}-norm sense.
This approach deals with both bursty and Gaussian-type noise,
and can be realized by weighting as follows:

$$ W_{ii} = \begin{cases} 1, & |r_i| \le \epsilon , \\ \left( \epsilon / |r_i| \right)^{1/2}, & |r_i| > \epsilon , \end{cases} $$

where $\epsilon$ is a value that is used as a threshold between the *L*^{1} and *L*^{2} norms.
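This hybrid weighting can be sketched in plain Python; the exact functional form below is an assumption chosen so that the weighted squared residual is quadratic ($r_i^2$) below the threshold and linear ($\epsilon |r_i|$) above it, matching the *L*^{2}/*L*^{1} behavior just described:

```python
import math

def huber_weight(ri, eps):
    """Hybrid L1/L2 weight (assumed form): identity weight for small
    residuals, sqrt(eps/|r|) scaling for large ones."""
    if abs(ri) <= eps:
        return 1.0                       # L2 regime: (W r)^2 = r^2
    return math.sqrt(eps / abs(ri))      # L1 regime: (W r)^2 = eps * |r|

# Weighted squared residuals: quadratic below eps, linear above it,
# and continuous at |r| = eps.
eps = 1.0
small = (huber_weight(0.5, eps) * 0.5) ** 2    # 0.25  (= r^2)
large = (huber_weight(4.0, eps) * 4.0) ** 2    # 4.0   (= eps * |r|)
```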
IRLS can be easily incorporated in CG algorithms by including
a weight $\mathbf{W}$,
such that the forward operator becomes $\mathbf{W}\mathbf{G}$
and the adjoint operator becomes $\mathbf{G}^{T}\mathbf{W}^{T}$
(Claerbout, 2004).
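The incorporation of the weight into the operator pair can be sketched as follows (plain Python, hypothetical helper names; the forward and adjoint operators are represented as plain functions on lists):

```python
def weighted_pair(G, GT, w):
    """Wrap a forward/adjoint operator pair (G, GT) with a diagonal
    residual weight w, returning the pair (W G, G^T W^T) so that an
    unmodified CG routine sees the weighted problem."""
    def fwd(m):
        # weight applied after the forward map: W (G m)
        return [wi * yi for wi, yi in zip(w, G(m))]
    def adj(r):
        # weight applied before the adjoint map: G^T (W^T r)
        return GT([wi * ri for wi, ri in zip(w, r)])
    return fwd, adj

# Example with the identity operator: weighting just scales components.
ident = lambda x: list(x)
fwd, adj = weighted_pair(ident, ident, [2.0, 3.0])
```

Because $\mathbf{W}$ is diagonal (hence symmetric), the wrapped pair remains a valid forward/adjoint pair, which a dot-product test confirms.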
Even though we do not know the true *L*^{p}-norm residual vector
at the beginning of the iteration,
we can approximate it with the residual
of the previous iteration step, and it will converge to
a residual that is very close to the *L*^{p}-norm residual as the iterations continue.
This can be summarized as follows:

`
iterate {
    W = diag( |r|^((p-2)/2) )          (weights from the previous residual r)
    solve  min_m || W (d - G m) ||^2   by CG
    r = d - G m
}
`

Stanford Exploration Project

5/23/2004