
Ill-conditioning of L2 deconvolution processes

To ensure the convergence of the IRLS algorithm, the numerical result of each iteration must be close to the numerical limit of the corresponding CG algorithm, because it is fed back through W as input to the next iteration. Moreover, the convolution matrices A and $W^{{1\over 2}}A$ may not be sparse (equation (9)), especially when $n_x\ll n_y$, as in predictive deconvolution. Many operations are then required to compute the convolutions and correlations in the conjugate-gradient algorithm, so the influence of round-off errors increases. A study of the convergence of the conjugate-gradient algorithm in this particular case is therefore necessary.
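To fix ideas, here is a minimal numerical sketch (Python/NumPy, not taken from the paper) of the nested structure discussed above: each outer IRLS step rebuilds a diagonal weight matrix W and re-solves the weighted normal equations with an inner conjugate-gradient loop, so any error left by CG propagates into the next weights. The Lp residual reweighting formula used below is an assumption standing in for equation (9).

import numpy as np

def cg_normal_equations(A, W, y, n_iter=200, tol=1e-10):
    """Solve min || W^(1/2) (A x - y) ||_2 by CG on the normal equations."""
    AtWA = A.T @ (W[:, None] * A)           # A^T W A, with W stored as a diagonal vector
    AtWy = A.T @ (W * y)
    x = np.zeros(A.shape[1])
    r = AtWy - AtWA @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = AtWA @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irls(A, y, p=1.0, n_outer=10, eps=1e-6):
    """Hypothetical IRLS loop: every outer step reweights and calls CG again."""
    W = np.ones(len(y))                      # first iteration: plain least squares
    x = cg_normal_equations(A, W, y)
    for _ in range(n_outer):
        r = A @ x - y
        W = (np.abs(r) + eps) ** (p - 2.0)   # assumed Lp residual reweighting, not eq. (9)
        x = cg_normal_equations(A, W, y)
    return x

If the inner CG loop is stopped far from its numerical limit, the residuals A x - y, and hence the next weights, are inaccurate, and the outer iterations may stall.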

The first iteration of the IRLS algorithm is a least-squares inversion, so we can use some results about L2 deconvolution problems. According to Szegö's theorem (Ekstrom, 1973), since $A^TA$ is a symmetric positive Toeplitz matrix, its eigenvalues ($\lambda_1^2,\cdots,\lambda_{n_a}^2$) can be expressed in terms of the Fourier transform $\hat{a}$ of the filter a. In particular, if $(a_i)$ does not suffer from aliasing:

\begin{displaymath}
\lambda_i^2 \approx \vert\hat{a}(\omega_i)\vert^2\:,\qquad
\omega_i = {(i-1)\,\pi \over (n_a-1)\,\Delta t}
\qquad \mbox{($i=1$ to $n_a$)}\:,
\end{displaymath}

where $\Delta t$ is the time sampling of the vectors y, a and x (Figure [*]). This implies that, if the power spectrum of the filter a has some amplitudes near 0, for example if a is band-limited, the problem should be ill-conditioned. This is the case in predictive deconvolution, where the filter a is the seismic trace itself. Moreover, by oversampling the problem, we would push the Nyquist frequency well beyond the highest frequency present in the filter and thereby create smaller eigenvalues; we thus increase the condition number of the problem again (Figure [*]). Intuitively, we come closer to an infinite-dimensional problem, where $A^TA$ has an infinite set of positive eigenvalues decreasing to 0 (Hilbert-Schmidt theorem for compact self-adjoint operators); this limit value causes the ill-conditioning of the problem in infinite dimension.
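As a rough numerical check of Szegö's approximation (a constructed Python/NumPy example; the band-limited filter and its parameters are made up), one can build $A^TA$ from the autocorrelation of the filter and compare its sorted eigenvalues to the sorted samples of $\vert\hat{a}\vert^2$; for a band-limited filter both sets contain many nearly-zero values, so the condition number is large.

import numpy as np

# Made-up band-limited filter: a windowed 30 Hz cosine sampled at dt = 4 ms
dt = 0.004
n_a = 128
t = np.arange(n_a) * dt
a = np.cos(2 * np.pi * 30.0 * (t - 0.25)) * np.exp(-((t - 0.25) / 0.05) ** 2)

# A^T A is Toeplitz and is built from the autocorrelation of a
auto = np.correlate(a, a, mode="full")[n_a - 1:]
ATA = auto[np.abs(np.subtract.outer(np.arange(n_a), np.arange(n_a)))]

eig = np.sort(np.linalg.eigvalsh(ATA))[::-1]
power = np.sort(np.abs(np.fft.fft(a)) ** 2)[::-1]

# Szegö: the sorted eigenvalues roughly track the sorted power-spectrum samples,
# so a band-limited filter produces many nearly-zero eigenvalues.
print("eigenvalues below 1e-6 * max  :", int(np.sum(eig < 1e-6 * eig[0])))
print("power samples below 1e-6 * max:", int(np.sum(power < 1e-6 * power[0])))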

In fact, we must be careful with Szegö's theorem. Milinazzo et al. (1987) have shown that, even if the power spectrum of the filter a has some zero values, the minimum eigenvalue $\lambda_{min}$ of $A^TA$ might not vanish. They use an asymptotic expansion of $\lambda_{min}$ as a function of $n_x$, which indeed goes to 0 when $n_x \rightarrow \infty$. Small problems might therefore be well-conditioned even if there are zeros in the power spectrum of the convolution filter a.
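That cautionary point can be illustrated numerically (again a constructed Python/NumPy example, not from Milinazzo et al.): for the differencing filter a = (1, -1), whose spectrum vanishes exactly at zero frequency, the smallest eigenvalue of $A^TA$ is strictly positive for every finite $n_x$ and only tends to 0 as $n_x$ grows.

import numpy as np

# Differencing filter a = (1, -1): its spectrum hat{a}(omega) vanishes at omega = 0,
# yet A^T A remains positive definite for every finite n_x.
a = np.array([1.0, -1.0])

for n_x in (8, 32, 128, 512):
    A = np.zeros((n_x + len(a) - 1, n_x))   # convolution matrix of a, with n_x columns
    for j in range(n_x):
        A[j:j + len(a), j] = a
    lam_min = np.linalg.eigvalsh(A.T @ A)[0]
    # lam_min is strictly positive but decays roughly like (pi / n_x)^2
    print(f"n_x = {n_x:4d}   lambda_min = {lam_min:.3e}")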

Finally, two other remarks are in order. First, adding some white noise increases the value of the minimum eigenvalue and decreases the condition number, since the maximum eigenvalue is hardly modified. Secondly, if we want to apply CG algorithms to least-squares deconvolution, as in the first step of the IRLS algorithm, convergence will be accelerated if the eigenvalues are clustered in groups; this will not happen with smooth spectra or with very irregular spectra.
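The first remark can be checked directly (another constructed Python/NumPy example; the filter and noise levels are arbitrary): adding a white-noise term $\epsilon^2 I$ to $A^TA$ shifts every eigenvalue up by exactly $\epsilon^2$, which barely changes $\lambda_{max}$ but lifts $\lambda_{min}$ and so lowers the condition number.

import numpy as np

# Prewhitening: A^T A  ->  A^T A + eps^2 I shifts every eigenvalue up by eps^2,
# which hardly changes lambda_max but lifts lambda_min, so the condition number drops.
a = np.array([1.0, -0.95])                  # made-up filter with a weak spectral notch at 0 Hz
n_x = 200
A = np.zeros((n_x + len(a) - 1, n_x))
for j in range(n_x):
    A[j:j + len(a), j] = a
ATA = A.T @ A

for eps2 in (0.0, 1e-3, 1e-1):
    lam = np.linalg.eigvalsh(ATA + eps2 * np.eye(n_x))
    print(f"eps^2 = {eps2:.0e}   lambda_min = {lam[0]:.4f}   "
          f"lambda_max = {lam[-1]:.4f}   cond = {lam[-1] / lam[0]:.1f}")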

