** Next:** Stability of the IRLS
** Up:** IRLS ALGORITHM APPLIED TO L
** Previous:** Formulation of the problem

To ensure the convergence of the IRLS algorithm, the numerical result of
each iteration must be close to the numerical limit of the corresponding CG
algorithm, because it is fed back through *W* as input to the next iteration.
Moreover, the convolution matrices *A* and *A*^{T} may not be sparse
(equation (9)), especially when the filter *a* is long, as in predictive
deconvolution. Many operations are then required to compute
the convolutions and correlations in the conjugate-gradient algorithm,
so the influence of round-off errors increases. A study of the convergence
of the conjugate-gradient algorithm in this particular case is therefore necessary.
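As a concrete illustration of this setting, the sketch below builds a small full-convolution matrix *A* and solves the normal equations *A*^{T}*A* *x* = *A*^{T}*y* with a plain conjugate-gradient loop. The filter, the dimensions, and the stopping tolerance are made up for the demo and are not taken from the paper.

```python
import numpy as np

# Made-up deconvolution example: solve A^T A x = A^T y with plain CG.
rng = np.random.default_rng(0)
na, nx = 16, 32
a = 0.8 ** np.arange(na)          # assumed full-band (well-conditioned) filter
ny = na + nx - 1

# Full convolution matrix: column j holds a copy of a shifted down by j.
A = np.zeros((ny, nx))
for j in range(nx):
    A[j:j + na, j] = a

x_true = rng.standard_normal(nx)
y = A @ x_true

def cg(M, b, niter):
    """Conjugate gradients for the symmetric positive-definite system M x = b."""
    x = np.zeros_like(b)
    r = b.copy()                   # residual b - M x for the initial guess x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(niter):
        if rs < 1e-28 * (b @ b):   # residual at round-off level: stop
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_cg = cg(A.T @ A, A.T @ y, niter=nx)
print(np.linalg.norm(x_cg - x_true) / np.linalg.norm(x_true))
```

For this well-conditioned filter the relative error reaches round-off level well before the theoretical bound of *n*_{x} iterations; the ill-conditioned cases discussed below behave very differently.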
The first iteration of the IRLS algorithm is a least-squares inversion, so
we can use some results about *L*^{2} deconvolution problems. According
to Szegö's theorem (Ekstrom, 1973), since *A*^{T}*A* is a symmetric
positive Toeplitz matrix, its eigenvalues (\lambda_i) can
be related to the Fourier transform of the filter *a*.
In particular, if (*a*_{i}) does not suffer from aliasing:

\lambda_i \simeq \left| \hat{a}(f_i) \right|^2 ,
\qquad
\hat{a}(f) = \Delta t \sum_{k} a_k \, e^{-2i\pi f k \Delta t} ,

where \Delta t is the time sampling of the vectors *y*, *a* and *x* (Figure ).
This implies that, if the power spectrum of the filter *a* has
amplitudes near 0, for example if *a* is band-limited, the problem should
be ill-conditioned. This is the case in predictive deconvolution,
where the filter *a* is the seismic trace itself. Moreover, by oversampling the
problem, we would push the highest frequency of the filter below the Nyquist
frequency and create smaller eigenvalues, thus further increasing the
condition number of the problem (Figure ). Intuitively, we come
closer to an infinite-dimensional problem, where *A*^{T}*A* has an infinite set of
positive eigenvalues decreasing to 0 (Hilbert-Schmidt theorem for compact
self-adjoint operators); this limit value causes the ill-conditioning of the
problem in infinite dimension.
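A numerical check of this correspondence can be sketched as follows. For full convolution, *A*^{T}*A* is exactly Toeplitz with entries given by the autocorrelation of *a*, so its eigenvalues can be compared directly with samples of the power spectrum. The band-limited filter and the sizes are made up for the demo.

```python
import numpy as np
from scipy.linalg import toeplitz, eigvalsh

# Made-up illustration of the Szego correspondence: eigenvalues of the
# Toeplitz matrix A^T A behave like samples of the power spectrum |a_hat(f)|^2.
na, nx = 8, 256
a = np.hanning(na)                     # band-limited filter: spectrum near 0 at high f
r = np.correlate(a, a, mode="full")    # autocorrelation of a, lag 0 at index na - 1
col = np.zeros(nx)
col[:na] = r[na - 1:]                  # lags 0 .. na-1 (longer lags vanish)
AtA = toeplitz(col)                    # A^T A for full convolution is exactly Toeplitz

eigs = np.sort(eigvalsh(AtA))
spec = np.sort(np.abs(np.fft.fft(a, nx)) ** 2)   # power spectrum samples (Delta t = 1)

# The eigenvalues stay within the range of the power spectrum; the near-zero
# spectral amplitudes of this band-limited filter make the problem ill-conditioned.
print(eigs[0], eigs[-1], spec[0], spec[-1])
```

The smallest eigenvalues track the near-zero spectral amplitudes, which is exactly the ill-conditioning mechanism described above.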
In fact, we must be careful with Szegö's theorem. Milinazzo et al. (1987)
have shown that, even if the power spectrum of the filter *a* has some zero values,
the minimum eigenvalue of *A*^{T}*A* might not vanish. They use an
asymptotic expansion of the minimum eigenvalue versus *n*_{x}, which effectively
goes to 0 only when *n*_{x} tends to infinity. Small problems might therefore be
well-conditioned even if there are zeros in the power spectrum of the
convolution filter *a*.

Finally, two other remarks. First, adding some white noise increases the
minimum eigenvalue and decreases the condition number, since the maximum
eigenvalue is hardly modified. Secondly, if we want to apply CG algorithms to
least-squares deconvolution, as in the first step of the IRLS algorithm, the
convergence will be accelerated if the eigenvalues are gathered in groups; this
will not be the case with smooth spectra or with very irregular spectra.
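The first remark can be illustrated numerically. The sketch below uses a made-up two-point filter whose spectrum nearly vanishes at the Nyquist frequency, and adds a small multiple of the identity to *A*^{T}*A* (the effect of white noise); the noise level of 1% of the zero-lag autocorrelation is an arbitrary choice for the demo.

```python
import numpy as np
from scipy.linalg import toeplitz

# Made-up example: white noise adds eps to every eigenvalue of A^T A, which
# raises the minimum eigenvalue sharply but barely changes the maximum,
# so the condition number drops.
nx = 128
a = np.array([1.0, 1.0])               # two-point filter: spectrum near 0 at Nyquist
r = np.correlate(a, a, mode="full")    # autocorrelation [1, 2, 1]
col = np.zeros(nx)
col[:2] = r[1:]                        # lags 0 and 1
AtA = toeplitz(col)

eps = 0.01 * AtA[0, 0]                 # noise level: 1% of the zero-lag autocorrelation
w = np.linalg.eigvalsh(AtA)            # eigenvalues, ascending
w_reg = np.linalg.eigvalsh(AtA + eps * np.eye(nx))
print(w[-1] / w[0], w_reg[-1] / w_reg[0])
```

Every eigenvalue is shifted by exactly eps, so the condition number falls by more than an order of magnitude in this example.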

Stanford Exploration Project

1/13/1998