This leads to a technique that is new to me. I'll describe it first in the one-dimensional world. (In real life, multidimensional cases might be more interesting, for example where dip spectra change rapidly.) The basic problem is to define the appropriate regularization for a prediction-error filter (PEF). Regularization is ordinarily regarded as supplying a prior statement about the model, in this case about the autoregression filter. We don't think of PEFs as being ``physical,'' so the correct prior model and its covariance are not immediately obvious. The answer is that the prior PEF is nothing more and nothing less than the solution to the autoregression equations for a prior ``universal'' data set. In practice, this amounts to having a ``training'' data set. I have noticed an efficient way to merge the information of the training data set with that of the ``too-small'' local data set.

Given a data set packed into a convolution operator $\mathbf{Y}$, and likewise a training data set packed into $\mathbf{T}$, we formulate the fitting goals for finding the PEF $\mathbf{a}$ by using a constraint matrix $\mathbf{K}$ (an identity matrix, except that the (1,1) element is zero) to hold the leading coefficient of $\mathbf{a}$ at unity:

$$ \mathbf{0} \;\approx\; \mathbf{Y}\,\mathbf{a} , \qquad \mathbf{0} \;\approx\; \epsilon\, \mathbf{T}\,\mathbf{a} \qquad\qquad (1) $$
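As a concrete sketch of goals (1), here is a small numpy version. The function names are my own, the operator packing follows the usual delayed-columns convention, and the constraint is applied by moving the known first column to the right-hand side; this is an illustration, not the original implementation.

```python
import numpy as np

def conv_matrix(d, na):
    """Pack a 1-D series d into a convolution operator: column j holds
    d delayed by j samples, for a filter of na coefficients."""
    n = len(d)
    C = np.zeros((n, na))
    for j in range(na):
        C[j:, j] = d[:n - j]
    return C

def pef_with_training(y, t, na=4, eps=0.1):
    """Fitting goals (1): 0 ~ Y a and 0 ~ eps T a, with the leading PEF
    coefficient pinned at unity (the role of the constraint matrix), so
    the known first columns move to the right-hand side."""
    Y = conv_matrix(y, na)
    T = conv_matrix(t, na)
    A = np.vstack([Y[:, 1:], eps * T[:, 1:]])      # unknowns: a[1:]
    b = -np.concatenate([Y[:, 0], eps * T[:, 0]])  # known: a[0] = 1
    tail, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate([[1.0], tail])
```

When the local data set is well sampled, the $\epsilon$-weighted training rows barely matter; as the local data shrink, the training rows take over, which is the merging behavior described above.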

In principle, the training data set (and hence the matrix $\mathbf{T}$) is very large. Consider, however, a spectral factorization of the training data. Say $\mathbf{T}'\mathbf{T} = \mathbf{F}'\mathbf{F}$, where $f$ is a minimum-phase spectral factor of the training data (and $\mathbf{F}$ is the packing of $f$ into a convolution operator). For me, this is a new idea: we express the prior information as a ``training wavelet'' $f$ that we find by spectral factorization of a ``universal'' data set. The idea is that we then find our ``local'' PEF by fitting the goals

$$ \mathbf{0} \;\approx\; \mathbf{Y}\,\mathbf{a} , \qquad \mathbf{0} \;\approx\; \epsilon\, \mathbf{F}\,\mathbf{a} \qquad\qquad (2) $$
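One standard way to compute such a minimum-phase spectral factor is the Kolmogoroff (cepstral) method: take the log of the power spectrum, keep the causal part of its inverse Fourier transform, and exponentiate back. A sketch under stated assumptions (a single 1-D power spectrum sampled on a regular FFT grid; the function name is mine):

```python
import numpy as np

def kolmogoroff(spectrum):
    """Minimum-phase wavelet whose power spectrum equals `spectrum`,
    by the Kolmogoroff (cepstral) method."""
    nfft = len(spectrum)
    log_s = np.log(np.maximum(spectrum, 1e-12))  # guard against zeros
    cep = np.fft.ifft(log_s).real                # cepstrum (symmetric)
    causal = np.zeros(nfft)                      # keep the causal part;
    causal[0] = cep[0] / 2.0                     # halve lag 0 and Nyquist
    causal[1:nfft // 2] = cep[1:nfft // 2]
    causal[nfft // 2] = cep[nfft // 2] / 2.0
    return np.fft.ifft(np.exp(np.fft.fft(causal))).real
```

Applied to the spectrum of a wavelet that is already minimum-phase, such as $(1, -0.5)$, the method returns that wavelet itself, which is a convenient sanity check.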

The result of fitting (2) is theoretically equal to that of (1), but computationally (2) is potentially much easier because the training wavelet is much more compact (because it is minimum-phase) than the full training data set.
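The equality holds because a least-squares solution depends on an operator only through its normal equations. Writing $\mathbf{Y}$ for the local-data operator, $\mathbf{T}$ for the training-data operator, and $\mathbf{F}$ for the training-wavelet convolution operator, goals (1) minimize

$$ \|\mathbf{Y}\,\mathbf{a}\|^2 \;+\; \epsilon^2 \|\mathbf{T}\,\mathbf{a}\|^2 , $$

and the spectral factorization $\mathbf{T}'\mathbf{T} = \mathbf{F}'\mathbf{F}$ gives

$$ \|\mathbf{T}\,\mathbf{a}\|^2 \;=\; \mathbf{a}'\,\mathbf{T}'\mathbf{T}\,\mathbf{a} \;=\; \mathbf{a}'\,\mathbf{F}'\mathbf{F}\,\mathbf{a} \;=\; \|\mathbf{F}\,\mathbf{a}\|^2 , $$

so (1) and (2) share the same objective (and the same normal equations, with the same constraint on the leading coefficient), while $\mathbf{F}$ has only as many rows as the wavelet is long.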

4/20/1999