
Implementation

There are two steps. The first calculates a PEF and the second calculates missing data values. Both are linear least squares problems, which I solve using conjugate gradients.
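Both steps reduce to a regression of the form $\bold 0 \approx \bold F \bold x + \bold r_0$ for some linear operator $\bold F$. As a concrete point of reference, here is a minimal conjugate-gradient least-squares sketch in Python; it is illustrative only (the name cgls and the dense-matrix representation of the operator are my assumptions, not the SEP code).

\begin{verbatim}
import numpy as np

def cgls(F, r0, niter=50):
    """Conjugate-gradient least squares: find x minimizing ||F x + r0||^2."""
    x = np.zeros(F.shape[1])
    r = r0.astype(float).copy()      # current residual  r = F x + r0
    g = F.T @ r                      # gradient of 0.5*||r||^2
    s = -g                           # first search direction
    gg = g @ g
    for _ in range(niter):
        if gg == 0.0:                # already at the minimum
            break
        Fs = F @ s
        alpha = gg / (Fs @ Fs)       # exact line search along s
        x += alpha * s
        r += alpha * Fs
        g = F.T @ r
        gg_new = g @ g
        s = -g + (gg_new / gg) * s   # conjugate update of the search direction
        gg = gg_new
    return x
\end{verbatim}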

We can write the first step as
\begin{displaymath}
\bold 0 \approx \bold Y \bold C \bold a + \bold r_0
\end{displaymath} (18)
where $\bold a$ is a vector containing the PEF coefficients, $\bold C$ is a filter coefficient selector matrix, and $\bold Y$ denotes convolution with the input data. The coefficient selector $\bold C$ is like an identity matrix, with a zero on the diagonal placed to prevent the fixed 1 in the zero lag of the PEF from changing. The $\bold r_0$ is a vector that holds the initial value of the residual, $\bold Y \bold a_0$. If the unknown filter coefficients are given initial values of zero, then $\bold r_0$ contains a copy of the input data. $\bold r_0$ makes up for the fact that the 1 in the zero lag of the filter is not included in the convolution (it is knocked out by $\bold C$).
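To make the pieces concrete, here is a minimal sketch of this first step for a 1-D signal, reusing the cgls sketch above; the name pef_estimate, the filter length nf, and the explicit matrices are illustrative assumptions, not the SEP implementation. The fixed 1 is handled by keeping column 0 of $\bold Y$ out of the unknowns, which is exactly the job of $\bold C$, and with zero starting coefficients $\bold r_0$ is indeed a copy of the data.

\begin{verbatim}
import numpy as np

def pef_estimate(data, nf, niter=50):
    """Estimate an nf-point PEF with a fixed 1 at zero lag (illustrative sketch)."""
    n = len(data)
    Y = np.zeros((n, nf))            # convolution with the input data:
    for j in range(nf):              # column j holds the data delayed by j samples
        Y[j:, j] = data[:n - j]
    a0 = np.zeros(nf)
    a0[0] = 1.0                      # the fixed 1 at zero lag
    r0 = Y @ a0                      # initial residual; here just a copy of the data
    YC = Y[:, 1:]                    # Y C: the fixed coefficient is knocked out
    a = a0.copy()
    a[1:] = cgls(YC, r0, niter)      # solve  0 ~ Y C a + r0  for the free coefficients
    return a
\end{verbatim}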

The second step is almost the same, except we solve for data values rather than filter coefficients, so knowns and unknowns are reversed. We use the entire filter, not leaving out the fixed 1, so the $\bold C$ is gone, but we need a different selector matrix, this time to prevent the originally recorded data from changing. Split the data into known and unknown parts with an unknown data selector matrix $\bold U$ and a known data selector matrix $\bold K$. Between the two of them they select every data sample once: $\bold U + \bold K = \bold I$. Now, to fill in the missing data values, solve
\begin{eqnarray}
\bold 0 & \approx & \bold A (\bold U + \bold K) \bold y \\
 & = & \bold A \bold U \bold y + \bold A \bold K \bold y \\
 & = & \bold A \bold U \bold y + \bold r_0
\end{eqnarray} (19) (20) (21)
$\bold A$ now denotes the convolution operator, and the vector $\bold y$ holds the data, with the selector $\bold U$ preventing any change to the known, originally recorded data values. The $\bold r_0$ in the interpolation equation (21) is calculated just like the $\bold r_0$ in the PEF-estimation equation (18), except that now the adjustable filter coefficients are presumably not zero. Where before $\bold r_0$ wound up holding a copy of the data (assuming the adjustable filter coefficients start out zero), here it holds the initial prediction error $\bold A \bold K \bold y_0$.
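A matching sketch of the second step, again illustrative rather than the SEP code (the name fill_missing, the boolean mask known, and the dense matrices are my assumptions): the filter is now fixed, $\bold K$ and $\bold U$ split the data, and only the samples selected by $\bold U$ are adjusted.

\begin{verbatim}
import numpy as np

def fill_missing(data, known, a, niter=50):
    """Fill missing samples (zeros where known is False) using a given PEF a."""
    n, nf = len(data), len(a)
    A = np.zeros((n, n))                  # convolution with the filter a
    for j in range(nf):
        A += np.diag(np.full(n - j, a[j]), -j)
    K = np.diag(known.astype(float))      # known-data selector
    U = np.eye(n) - K                     # unknown-data selector,  U + K = I
    r0 = A @ (K @ data)                   # initial prediction error  A K y
    AU = (A @ U)[:, ~known]               # A U, restricted to the missing samples
    y = data.astype(float).copy()
    y[~known] = cgls(AU, r0, niter)       # solve  0 ~ A U y + r0  for the missing values
    return y
\end{verbatim}

Running pef_estimate on the data (with zeros at the missing samples) and then fill_missing with the resulting filter reproduces the two-step flow described above.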

There is some wasted effort implied here. There is no sense, for instance, in actually running the filter over all those zero-valued missing traces in the PEF calculation step, so they can be left out.
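One illustrative way to leave them out, continuing the sketches above (my assumption, not the author's code), is to keep only the regression rows whose filter footprint lies entirely on recorded samples, i.e. use Y[ok] and r0[ok] inside pef_estimate:

\begin{verbatim}
import numpy as np

def usable_rows(known, nf):
    """Rows of the PEF regression whose nf-sample footprint is all recorded data."""
    n = len(known)
    ok = np.zeros(n, dtype=bool)
    for i in range(nf - 1, n):
        ok[i] = known[i - nf + 1 : i + 1].all()
    return ok
\end{verbatim}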

