Next: APPLICATION TO A SYNTHETIC Up: Rickett et al.: Adaptive Previous: Introduction

# THEORY

A canonical problem in time series analysis (Robinson and Treitel, 1980) is that of shaping filter estimation: given an input time series, $\mathbf{x}$, and a desired time series, $\mathbf{d}$, we must compute a filter, $\mathbf{f}$, that minimizes the difference between $\mathbf{x} * \mathbf{f}$ and $\mathbf{d}$. Optimal filter theory provides the classical solution to the problem by finding the filter that minimizes the difference in a least-squares sense, i.e. minimizing

$$\| \mathbf{d} - \mathbf{x} * \mathbf{f} \|^2. \tag{1}$$

With the notation that $\mathbf{X}$ is the matrix representing convolution with the time series $\mathbf{x}$, we can rewrite this desired minimization as a fitting goal (e.g., Claerbout, 1998a),

$$\mathbf{0} \approx \mathbf{X} \mathbf{f} - \mathbf{d}, \tag{2}$$

which leads us to the following system of normal equations for the optimal shaping filter:

$$\mathbf{X}^T \mathbf{X}\, \mathbf{f} = \mathbf{X}^T \mathbf{d}. \tag{3}$$
Equation (3) implies that the optimal shaping filter, $\mathbf{f}$, is given by the cross-correlation of $\mathbf{x}$ with $\mathbf{d}$, filtered by the inverse of the auto-correlation of $\mathbf{x}$. The auto-correlation matrix, $\mathbf{X}^T \mathbf{X}$, has Toeplitz structure and can therefore be inverted rapidly by Levinson recursion.
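As a concrete illustration, the Toeplitz system (3) can be solved by Levinson recursion via `scipy.linalg.solve_toeplitz`. The sketch below uses made-up synthetic series and filter length; these particulars are assumptions for illustration, not taken from the text.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
x = rng.standard_normal(200)           # input time series (synthetic)
true_f = np.array([0.5, -0.3, 0.1])    # hidden shaping filter (assumed)
d = np.convolve(x, true_f)             # desired time series

nf = len(true_f)
# First column of the Toeplitz auto-correlation matrix X'X
r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) - 1 + nf]
# Right-hand side X'd: cross-correlation of x with d
g = np.correlate(d, x, mode="full")[len(x) - 1 : len(x) - 1 + nf]

# solve_toeplitz applies Levinson-Durbin recursion internally
f = solve_toeplitz(r, g)
```

Here `f` recovers `true_f` because `d` was constructed by convolution with it; in the multiple-suppression setting below, `d` is instead the raw data and `x` the multiple model.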

For the multiple suppression problem, the vector $\mathbf{d}$ represents the multiple-infested raw data, and the matrix $\mathbf{X}$ represents convolution with the multiple model. Criterion (1) then implies a choice of filter that minimizes the energy in the dataset after multiple removal.

One advantage of working with time-domain filters, as opposed to frequency-domain filters, is that the theory can be adapted relatively easily to address non-stationarity. Following Claerbout (1998a) and Margrave (1998), we extend the concept of a filter to that of a non-stationary filter bank, which in principle contains one filter for every point in the input/output space. For a non-stationary filter bank, $\mathbf{F}$, we identify $\mathbf{f}_i$ with the filter corresponding to the $i$-th location in the input/output vector, and the coefficient, $f_{i,j}$, with the $j$-th coefficient of the filter $\mathbf{f}_i$. The response of non-stationary filtering with $\mathbf{F}$ to an impulse at the $i$-th location in the input is then $\mathbf{f}_i$.

With a non-stationary convolution filter, $\mathbf{F}$, the shaping filter normal equations are massively underdetermined, since there is a potentially unique impulse response associated with every point in the data space. We need additional constraints to reduce the null space of the problem.
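To make the filter-bank notation concrete, here is a minimal sketch of non-stationary convolution in which row $i$ of an array `F` holds the filter used to produce output sample $i$. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def nonstationary_convolve(x, F):
    """Apply a non-stationary filter bank: F[i, :] is the filter
    that produces output sample i (one filter per output point)."""
    n, nf = F.shape
    y = np.zeros(n)
    for i in range(n):
        for j in range(nf):
            if i - j >= 0:
                y[i] += F[i, j] * x[i - j]
    return y

# When every row of F is identical, this reduces to ordinary
# (stationary) convolution -- here a first difference.
x = np.arange(5, dtype=float)
F = np.tile([1.0, -1.0], (5, 1))
y = nonstationary_convolve(x, F)
```

Note that each output sample has its own impulse response, which is exactly why the unconstrained problem has such a large null space.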

For most problems, we do not want the filter impulse responses to vary arbitrarily; we would rather consider only filters whose impulse responses vary smoothly across the output space. This preconception can be expressed mathematically by saying that, simultaneously with expression (1), we would also like to minimize

$$\| \mathbf{R}\, \mathbf{f} \|^2, \tag{4}$$

where the new filter, $\mathbf{R}$, acts to roughen the filter coefficients along the output axis of $\mathbf{F}$.
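One simple choice of roughener (an assumption here; the text does not specify the form of $\mathbf{R}$) is a first difference of each filter coefficient across adjacent output locations, so that smoothly varying filter banks incur a small penalty:

```python
import numpy as np

def roughen(F):
    """First-difference roughener along the output axis:
    measures the change of each coefficient j between
    neighboring output locations i and i + 1."""
    return F[1:, :] - F[:-1, :]

# A slowly varying filter bank is penalized far less than an erratic one.
F_smooth = np.outer(np.linspace(1.0, 1.4, 5), [1.0, 0.5])
F_rough = np.random.default_rng(1).standard_normal((5, 2))
smooth_penalty = np.linalg.norm(roughen(F_smooth)) ** 2
rough_penalty = np.linalg.norm(roughen(F_rough)) ** 2
```

Minimizing expression (4) with this operator drives neighboring filters toward each other without forcing them to be identical.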

Combining expressions (1) and (4) with a parameter, $\epsilon$, that describes their relative importance, we can write a pair of fitting goals:

$$\mathbf{0} \approx \mathbf{X} \mathbf{f} - \mathbf{d}, \tag{5}$$
$$\mathbf{0} \approx \epsilon\, \mathbf{R}\, \mathbf{f}. \tag{6}$$

By making the change of variables $\mathbf{f} = \mathbf{R}^{-1} \mathbf{p}$ (Fomel, 1997), we obtain the following fitting goals:

$$\mathbf{0} \approx \mathbf{X} \mathbf{R}^{-1} \mathbf{p} - \mathbf{d}, \tag{7}$$
$$\mathbf{0} \approx \epsilon\, \mathbf{p}, \tag{8}$$

which are equivalent to the normal equations

$$\left( \mathbf{R}^{-T} \mathbf{X}^T \mathbf{X}\, \mathbf{R}^{-1} + \epsilon^2 \mathbf{I} \right) \mathbf{p} = \mathbf{R}^{-T} \mathbf{X}^T \mathbf{d}. \tag{9}$$
Equation (9) describes a preconditioned linear system of equations, whose solution converges rapidly under an iterative conjugate-gradient solver.
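Putting the pieces together, the preconditioned goals (7)-(8) can be solved with a Krylov least-squares solver such as `scipy.sparse.linalg.lsqr`, which is analytically equivalent to conjugate gradients applied to the normal equations (9). The stationary example below, with a dense convolution matrix and an invertible first-difference roughener, is a sketch; the problem sizes, noise level, and value of `eps` are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n, nf = 100, 4
x = rng.standard_normal(n)
true_f = np.array([0.6, -0.2, 0.1, 0.05])

# Dense convolution matrix X (stationary case, for illustration)
X = toeplitz(np.r_[x, np.zeros(nf - 1)], np.zeros(nf))
d = X @ true_f + 0.01 * rng.standard_normal(n + nf - 1)

# Invertible roughener R: identity minus shift (first difference)
R = np.eye(nf) - np.diag(np.ones(nf - 1), -1)
Rinv = np.linalg.inv(R)
eps = 0.1

# Stacked preconditioned system  [X R^{-1}; eps I] p ~ [d; 0]
A = np.vstack([X @ Rinv, eps * np.eye(nf)])
b = np.r_[d, np.zeros(nf)]
p = lsqr(A, b)[0]          # Krylov solve of the least-squares problem
f = Rinv @ p               # change variables back to filter coefficients
```

Because the data term dominates for this choice of `eps`, the estimated `f` lands close to `true_f`; raising `eps` would trade data fit for smoother filters.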

Stanford Exploration Project
4/29/2001