
Introduction

The general prediction problem consists of predicting a data sequence x(t) from the past values of another data sequence y(t). In spectral analysis (or spiking deconvolution) problems, we try to estimate y(t) from its own immediate past values. This problem is usually solved with the Levinson algorithm, in terms of a prediction filter that is the solution of a Toeplitz system involving the autocorrelations of the data y(t) at different lags.
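As a reminder of the classical formulation (a summary of standard background, not the derivation given later in this paper), the Toeplitz system in question is the set of normal (Yule-Walker) equations satisfied by an order-$N$ prediction filter with coefficients $a_k$:
\begin{displaymath}
\sum_{k=1}^{N} a_k \, r(i-k) \;=\; r(i) , \qquad i = 1, \ldots, N ,
\end{displaymath}
where $r(l)$ is the autocorrelation of $y(t)$ at lag $l$. The matrix $[\,r(i-k)\,]$ is Toeplitz because its entries depend only on the lag difference $i-k$, which is the structure the Levinson recursion exploits.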

However, we are most often interested in the residuals of the prediction, not in the filter itself. The alternative proposed in spectral analysis by Burg (1975) was to use a new set of unknowns, the forward and backward residuals, together with auxiliary variables, the reflection coefficients; the prediction filters themselves are never computed.
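To make these unknowns concrete, recall the standard lattice recursions on which Burg-type methods are built (sign conventions vary from one reference to another): starting from $f_0(t) = b_0(t) = y(t)$, the forward residuals $f_m(t)$ and backward residuals $b_m(t)$ of order $m$ are obtained directly from the reflection coefficients $k_m$,
\begin{eqnarray*}
f_m(t) & = & f_{m-1}(t) + k_m \, b_{m-1}(t-1) , \\
b_m(t) & = & b_{m-1}(t-1) + k_m \, f_{m-1}(t) .
\end{eqnarray*}
No prediction filter appears explicitly; the residuals at order $m$ are updated from those at order $m-1$.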

Many improvements to Burg's method have been proposed. One of them, due to Hale (1981), uses a time-adaptive formalism. Lee et al. (1981) and Friedlander (1982) proposed an adaptive algorithm for spectral analysis called the least-squares lattice (LSL) algorithm, which is similar in concept, though not in formulation, to Burg's algorithm. The advantage of adaptive methods is their applicability to data with statistical variations. However, I have not seen applications of these developments to problems other than spectral analysis.

To remove this restriction, I generalize the LSL algorithm and Hale's adaptive version of Burg's algorithm to the general prediction problem, in which a data sequence x(t) is predicted from any other sequence y(t). With this extension, I am better able to solve the prediction problem for non-stationary data.

In the first part of the paper, I describe the basic LSL algorithm used for spectral analysis. The main idea of this algorithm is that the optimal residual at a given time depends only on the past values of the data, not on the future values. The algorithm can also include exponential tapering of the past data. The description of the algorithm is rather long, but it is necessary for understanding the concepts I use throughout the paper. After presenting the algorithm, I extend it to the general prediction problem using simple geometrical considerations.
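As an illustration of the exponential tapering mentioned above (not necessarily the exact weighting used in the algorithm described later), a typical exponentially weighted least-squares criterion at time $t$ is
\begin{displaymath}
E(t) \;=\; \sum_{s=0}^{t} \lambda^{\,t-s} \, e^2(s) , \qquad 0 < \lambda \le 1 ,
\end{displaymath}
where $e(s)$ is the prediction residual and $\lambda$ controls how quickly the influence of old data decays. The criterion involves only data up to time $t$, consistent with the one-sided character of the LSL residuals.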

In the second part of the article, I also describe the principle of the adaptive Burg algorithm. Here, the optimal residuals depend on both future and past values of the data, and a time-varying exponential weighting is used to minimize the energy of the residuals. The extension to the general prediction problem requires a slight transformation of this algorithm, but it too is relatively easy.
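For orientation, and up to sign conventions, the classical Burg estimate chooses each reflection coefficient $k_m$ to minimize the summed forward and backward residual energies; with an exponential weight $\lambda^{\,T-t}$ applied at current time $T$, this minimization gives
\begin{displaymath}
k_m \;=\; \frac{-2 \sum_{t} \lambda^{\,T-t} \, f_{m-1}(t) \, b_{m-1}(t-1)}
{\sum_{t} \lambda^{\,T-t} \left[ f_{m-1}^2(t) + b_{m-1}^2(t-1) \right]} ,
\end{displaymath}
so that the residual at a given time sample can depend on data recorded after it, in contrast to the one-sided LSL criterion. Hale's adaptive update need not take exactly this form; the expression is only meant to illustrate the weighting.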

Finally, I apply these algorithms to the elimination of water-bottom multiples and peg-legs on a marine shot gather. The process is run in the $\tau-p$ domain to make the predictive modeling more reliable. This application shows the superiority of the generalized Burg algorithm, essentially because its use of both future and past values makes it better stabilized.

At the end of this paper, I include sketches of the two algorithms.

