Interpolation can be posed as a two-stage problem: in the first stage some statistics of the data are gathered, and in the second stage this information is applied to fill in the missing data. In the case of transform-based interpolation methods, the initial transform corresponds to the gathering of information on the existing data, and the second stage is the transform back to the original, more densely-sampled space.
In terms of the prediction-error filter based interpolation used in this paper, the first stage estimates a nonstationary prediction-error filter (PEF) from the data by solving a linear least-squares inverse problem,
$$
\mathbf{0} \approx
\begin{bmatrix} \mathbf{D}\,\mathbf{K} \\ \epsilon\,\mathbf{R} \end{bmatrix} \mathbf{a}
+ \begin{bmatrix} \mathbf{d} \\ \mathbf{0} \end{bmatrix}
\qquad (1)
$$
where $\mathbf{D}$ represents nonstationary convolution with the data, $\mathbf{a}$ is a nonstationary PEF, $\mathbf{K}$ (a selector matrix) constrains the value of the first filter coefficient to 1, $\mathbf{d}$ is a copy of the data, $\mathbf{R}$ is a regularization operator (a Laplacian operating over space), and $\epsilon$ is a trade-off parameter for the regularization. Solving this system creates a smoothly varying nonstationary PEF that, when convolved with the data, ideally removes all coherent energy from the input data.
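As a toy illustration of this first stage (not the paper's code): the sketch below estimates a stationary, one-dimensional PEF by least squares, with the leading coefficient fixed to 1 as the selector matrix enforces in equation (1). A simple ridge penalty stands in for the paper's Laplacian smoothing regularization, and all function and variable names are illustrative.

```python
import numpy as np

def estimate_pef(d, nlag, eps=0.1):
    """Estimate a 1-D stationary PEF (1, -a_1, ..., -a_nlag) by predicting
    d[t] from its nlag preceding samples.  The leading 1 plays the role of
    the constrained first coefficient in equation (1); the ridge term eps
    stands in for the smoothing regularization."""
    n = len(d)
    # Matrix of past samples: row t holds d[t+nlag-1], ..., d[t]
    D = np.column_stack([d[nlag - k: n - k] for k in range(1, nlag + 1)])
    rhs = d[nlag:]
    # Regularized normal equations: (D^T D + eps^2 I) a = D^T rhs
    a = np.linalg.solve(D.T @ D + eps**2 * np.eye(nlag), D.T @ rhs)
    return np.concatenate(([1.0], -a))

# A perfectly predictable signal: applying the estimated PEF should
# suppress nearly all of the coherent energy.
d = np.cos(0.3 * np.arange(200))
pef = estimate_pef(d, nlag=2)
residual = np.convolve(d, pef, mode='valid')
```

A two-lag filter suffices here because a single cosine satisfies a second-order recurrence; field data would need longer, spatially varying filters, which is what the nonstationary formulation provides.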
Once the PEF has been estimated, we can use it to constrain the missing data by solving a second linear least-squares inverse problem,
$$
\begin{bmatrix} \mathbf{J} \\ \epsilon\,\mathbf{A} \end{bmatrix} \mathbf{m}
\approx \begin{bmatrix} \mathbf{d} \\ \mathbf{0} \end{bmatrix}
\qquad (2)
$$
where $\mathbf{J}$ is a selector matrix which is 1 where data is present and 0 where it is not, $\mathbf{A}$ represents convolution with the nonstationary PEF, $\epsilon$ is now a trade-off parameter between the two fitting goals, and $\mathbf{m}$ is the desired model.
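The second stage can be sketched in the same toy setting: a stationary 1-D version of equation (2), fitting the known samples through the selector while the PEF term penalizes unpredictable energy in the gaps. Dense matrices are used purely for clarity; names are again illustrative.

```python
import numpy as np

def interpolate_with_pef(d, known, pef, eps=1.0):
    """Fill in missing samples of d (boolean mask `known`) by solving
    equation (2) in the least-squares sense: J m ~= d at known samples,
    eps * A m ~= 0 elsewhere, where A is convolution with the PEF."""
    n, nf = len(d), len(pef)
    J = np.diag(known.astype(float))        # selector: 1 where data exists
    # Convolution matrix A: each row holds the (reversed) PEF, shifted
    A = np.zeros((n - nf + 1, n))
    for i in range(n - nf + 1):
        A[i, i:i + nf] = pef[::-1]
    # Stack the fitting goals and solve min ||J m - d||^2 + eps^2 ||A m||^2
    G = np.vstack([J, eps * A])
    rhs = np.concatenate([known * d, np.zeros(A.shape[0])])
    m, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return m

# Example: a predictable cosine with a gap of four missing samples.
t = np.arange(60)
truth = np.cos(0.3 * t)
known = np.ones(60, dtype=bool)
known[20:24] = False
pef = np.array([1.0, -2.0 * np.cos(0.3), 1.0])  # annihilates this cosine
m = interpolate_with_pef(np.where(known, truth, 0.0), known, pef)
```

Because the PEF exactly annihilates the cosine, the least-squares solution restores the gap; on real data the reconstruction is only as good as the PEF's description of the coherent energy.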
In order to interpolate by a factor of two, the coefficients of the PEF are
expanded so that the filter coefficients fall on known data. Once the PEF is
estimated, the filter is shrunk down to its original size and then used to interpolate.
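The expand-and-shrink step above can be sketched as a pair of helpers (illustrative names, 1-D for simplicity): stretching the filter lags by the interpolation factor places every nonzero coefficient on a known trace during estimation, and restoring the original lags recovers the filter used for interpolation.

```python
import numpy as np

def expand_filter(pef, factor=2):
    """Stretch the PEF lags by `factor`, inserting zeros, so that every
    nonzero coefficient falls on known (original) data during estimation."""
    out = np.zeros(factor * (len(pef) - 1) + 1)
    out[::factor] = pef
    return out

def shrink_filter(pef_expanded, factor=2):
    """Undo the expansion: keep every factor-th coefficient, restoring
    the filter to its original size for use in interpolation."""
    return pef_expanded[::factor]

pef = np.array([1.0, -1.9, 0.9])
stretched = expand_filter(pef, factor=2)   # [1, 0, -1.9, 0, 0.9]
restored = shrink_filter(stretched, factor=2)
```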
Stanford Exploration Project
4/5/2006