
PEF whiteness proof in 1-D

The basic idea of least-squares fitting is that the residual is orthogonal to the fitting functions. Applied to the PE filter, this idea means that the output of a PE filter is orthogonal to lagged inputs. The orthogonality applies only for lags in the past, because prediction knows only the past while it aims at the future. What we want to show is different, namely, that the output is uncorrelated with itself (as opposed to the input) for lags in both directions; hence the output spectrum is white.

In (26) are two separate and independent autoregressions, $\bold 0 \approx \bold Y_a \bold a$ for finding the filter $\bold a$, and $\bold 0 \approx \bold Y_b \bold b$ for finding the filter $\bold b$. Evidently, if the matrices were infinitely tall, or if the values in the columns of the matrices got small towards the top and bottom, we would find that $\bold a = \bold b$.
\begin{displaymath}
\bold 0 \ \approx\ \bold r_a \ =\
\left[
\begin{array}{ccc}
y_3 & y_2 & y_1 \\
y_4 & y_3 & y_2 \\
y_5 & y_4 & y_3 \\
y_6 & y_5 & y_4 \\
y_7 & y_6 & y_5
\end{array}
\right]
\left[
\begin{array}{c}
1 \\
a_1 \\
a_2
\end{array}
\right]
\qquad\qquad
\bold 0 \ \approx\ \bold r_b \ =\
\left[
\begin{array}{ccc}
y_2 & y_1 & y_0 \\
y_3 & y_2 & y_1 \\
y_4 & y_3 & y_2 \\
y_5 & y_4 & y_3 \\
y_6 & y_5 & y_4
\end{array}
\right]
\left[
\begin{array}{c}
1 \\
b_1 \\
b_2
\end{array}
\right]
\end{displaymath} (26)
When the energy $\bold r'\bold r$ of a residual has been minimized, the residual $\bold r$ is orthogonal to the fitting functions. For example, choosing $a_2$ to minimize $\bold r'\bold r$ gives $0=\partial\bold r'\bold r/\partial a_2=2\bold r'\partial\bold r/\partial a_2$. This shows that $\bold r'$ is perpendicular to $\partial \bold r / \partial a_2$, which is the rightmost column of the $\bold Y_a$ matrix. Thus the vector $\bold r_a$ is orthogonal to all the columns in the $\bold Y_a$ matrix except the first (because we do not minimize with respect to $a_0$).
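This orthogonality is easy to check numerically. The sketch below is only an illustration, not code from the text: the synthetic signal, the three-term filter, and the use of numpy are assumptions made for the example. It fits $(1, a_1, a_2)$ by least squares and verifies that the residual is perpendicular to every column of $\bold Y_a$ except the first.

    import numpy as np

    np.random.seed(0)
    n = 1000
    # a synthetic correlated (non-white) signal: white noise smoothed by a short filter
    y = np.convolve(np.random.randn(n), [1.0, 1.5, 0.7], mode="full")[:n]

    # columns of Y_a hold y_t, y_(t-1), y_(t-2); the filter is (1, a_1, a_2)
    na = 3
    Ya = np.column_stack([y[na - 1 - k : n - k] for k in range(na)])

    # least squares for (a_1, a_2): minimize the energy of Ya @ (1, a_1, a_2)
    tail, *_ = np.linalg.lstsq(Ya[:, 1:], -Ya[:, 0], rcond=None)
    a = np.concatenate(([1.0], tail))
    r = Ya @ a                               # the prediction-error output

    # r is orthogonal to the lagged columns, but not to the first column
    print("r . column 0 :", r @ Ya[:, 0])    # equals r.r, not zero
    print("r . column 1 :", r @ Ya[:, 1])    # ~ 0
    print("r . column 2 :", r @ Ya[:, 2])    # ~ 0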

Our goal is a different theorem that is imprecise when applied to the three-coefficient filters displayed in (26), but becomes valid as the filter length tends to infinity, $\bold a = (1, a_1, a_2, a_3, \cdots)$, and the matrices become infinitely wide. Actually, all we require is that $b_n$ tend to zero. This generally happens because as $n$ increases, $y_{t-n}$ becomes a weaker and weaker predictor of $y_t$.
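Both claims, that the coefficients decay with lag and that $\bold a \approx \bold b$, can be checked numerically. The sketch below is mine, not the book's: the moving-average synthetic signal and the ten-term filter are arbitrary illustrative choices. It fits the same regression twice, once delayed by one sample, and prints the coefficient decay.

    import numpy as np

    np.random.seed(1)
    n = 20000
    # a synthetic signal whose exact PEF is infinitely long: noise smoothed by (1, 0.9)
    y = np.convolve(np.random.randn(n), [1.0, 0.9], mode="full")[:n]

    def fit_pef(y, nlags):
        """Least-squares fit of a prediction-error filter (1, a_1, ..., a_nlags)."""
        cols = np.column_stack([y[nlags - k : len(y) - k] for k in range(nlags + 1)])
        tail, *_ = np.linalg.lstsq(cols[:, 1:], -cols[:, 0], rcond=None)
        return np.concatenate(([1.0], tail))

    a = fit_pef(y, 10)         # the "a" regression
    b = fit_pef(y[:-1], 10)    # the same regression shifted by one lag: the "b" regression

    print("max |a - b|  :", np.max(np.abs(a - b)))        # tiny: a ~ b
    print("|a_n|, n>=1  :", np.round(np.abs(a[1:]), 3))   # decays toward zero with n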

The matrix $\bold Y_a$ contains all of the columns that are found in $\bold Y_b$ except the last (and the last one will turn out to be irrelevant). This means that $\bold r_a$ is not only orthogonal to all of $\bold Y_a$'s columns (except the first), but $\bold r_a$ is also orthogonal to all of $\bold Y_b$'s columns except the last. Although $\bold r_a$ isn't really perpendicular to the last column of $\bold Y_b$, it doesn't matter, because that column makes hardly any contribution to $\bold r_b$ since $\vert b_n\vert \ll 1$. Because $\bold r_a$ is (effectively) orthogonal to all the columns that make up $\bold r_b$, $\bold r_a$ is also orthogonal to $\bold r_b$ itself. (For any $\bold u$ and $\bold v$, if $\bold r\cdot \bold u=0$ and $\bold r\cdot \bold v=0$, then $\bold r\cdot (\bold u+ \bold v)=0$.)

In choosing the example of equation (26), I have shifted the two fitting problems by one lag. We could draw the two problems again shifted by two lags, three lags, and more, and we would find that $\bold r_b$ and $\bold r_a$ are always orthogonal. Actually, $\bold r_b$ and $\bold r_a$ both contain the same signal $\bold r$, but time-shifted. The orthogonality at all shifts means that the autocorrelation of $\bold r$ vanishes at all lags. The autocorrelation does not vanish at zero lag, however, because $\bold r_a$ is not orthogonal to the first column of $\bold Y_a$ (we did not minimize with respect to $a_0$).
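As a numerical check (again with a synthetic signal and an arbitrary 20-term filter, not taken from the text), the autocorrelation of the PEF output is close to an impulse, while the autocorrelation of the input is not:

    import numpy as np

    np.random.seed(2)
    n = 50000
    y = np.convolve(np.random.randn(n), [1.0, 0.9, 0.5], mode="full")[:n]

    # fit a 20-term PEF and apply it; the residual r is the PEF output
    nl = 20
    Y = np.column_stack([y[nl - k : n - k] for k in range(nl + 1)])
    tail, *_ = np.linalg.lstsq(Y[:, 1:], -Y[:, 0], rcond=None)
    r = Y @ np.concatenate(([1.0], tail))

    def autocorr(x, maxlag):
        """Normalized autocorrelation of x at lags 0..maxlag."""
        x = x - x.mean()
        return np.array([x[:len(x) - k] @ x[k:] for k in range(maxlag + 1)]) / (x @ x)

    print("input  autocorrelation:", np.round(autocorr(y, 5), 3))  # nonzero at lags > 0
    print("output autocorrelation:", np.round(autocorr(r, 5), 3))  # ~ (1, 0, 0, ...): white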

As we redraw $\bold 0\approx\bold r_b =\bold Y_b\bold b$ for various lags, we may shift the columns only downward, because shifting them upward would bring in the first column of $\bold Y_a$, and the residual $\bold r_a$ is not orthogonal to that column. Thus we have proven only that one side of the autocorrelation of $\bold r$ vanishes. That is enough, however, because autocorrelation functions are symmetric, so if one side vanishes, the other must also.

We also see where the proof would break down if $\bold a$ and $\bold b$ were two-sided filters like $(\cdots, b_{-2}, b_{-1}, 1, b_1, b_2, \cdots)$. If $\bold b$ were two-sided, $\bold Y_b$ would catch the nonorthogonal column of $\bold Y_a$. Not only is $\bold r_a$ not proven to be perpendicular to the first column of $\bold Y_a$; it cannot be orthogonal to that column, because a signal cannot be orthogonal to itself.

The consequence of this theorem is that the convolution of $\bold y$ and $\bold a$ is white noise (its autocorrelation is an impulse function), which means that $\bold y$ and $\bold a$ have mutually inverse spectra. In other words, $\bold a$ captures a fundamental statistical aspect of $\bold y$. Where information is missing, we can use the PEF to guess it.
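A final sketch of the spectral statement (synthetic data again; the segment-averaged spectrum estimate is my own illustrative choice, not the author's method): multiplying the power spectrum of $\bold y$ by $\vert A(\omega)\vert^2$ gives a roughly flat function of frequency, i.e., the PEF spectrum is approximately the inverse of the data spectrum.

    import numpy as np

    np.random.seed(3)
    n = 65536
    y = np.convolve(np.random.randn(n), [1.0, 0.9, 0.5], mode="full")[:n]

    # fit a 20-term PEF a = (1, a_1, ..., a_20) by least squares
    nl = 20
    Y = np.column_stack([y[nl - k : n - k] for k in range(nl + 1)])
    tail, *_ = np.linalg.lstsq(Y[:, 1:], -Y[:, 0], rcond=None)
    a = np.concatenate(([1.0], tail))

    def avg_spectrum(x, nfft=256):
        """Average the periodogram over nonoverlapping segments of x."""
        nseg = len(x) // nfft
        segs = x[:nseg * nfft].reshape(nseg, nfft)
        return np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)

    Sy = avg_spectrum(y)                     # smoothed power spectrum of the data
    Sa = np.abs(np.fft.rfft(a, 256)) ** 2    # power response |A(w)|^2 of the PEF
    flat = Sy * Sa                           # spectrum of the PEF output
    print("std/mean of Sy          :", Sy.std() / Sy.mean())      # far from flat
    print("std/mean of Sy*|A(w)|^2 :", flat.std() / flat.mean())  # nearly flat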

