
Self-fulfilling prophecy?

Pitfalls arise from the popular theoretical analysis explaining $\epsilon$. Since this theory is easily misused, I'd like to show you the trap. First, the model Y=FX is supplemented by an explicit additive noise $N(\omega)$, becoming
 \begin{displaymath}
Y(\omega) {=}N(\omega) + F(\omega)X(\omega)\end{displaymath} (7)
Because we have many frequencies, we can hope to learn more by averaging over frequency. Of particular interest is the variance (described more fully in chapter 11)
 \begin{displaymath}
\sigma_X^2 {=}{1\over n} \ \sum_{j=1}^n \bar X(\omega_j)X(\omega_j)\end{displaymath} (8)
and likewise $\sigma_N^2$.
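As a concrete check on the definition, here is a minimal Python sketch of the spectral variance in equation (8); the function name variance is mine, not from the text, and the spectrum is just a list of complex values $X(\omega_j)$.

```python
def variance(spectrum):
    """Equation (8): sigma^2 = (1/n) sum_j conj(X_j) X_j.

    Each term conj(X_j)*X_j is the squared magnitude |X_j|^2,
    so the result is a real, nonnegative number.
    """
    n = len(spectrum)
    return sum((x.conjugate() * x).real for x in spectrum) / n
```

The same function estimates $\sigma_N^2$ when applied to the noise spectrum.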

The general linear estimation method is to minimize something that looks like a sum of relative errors.  
 \begin{displaymath}
Q(X,N) {=}
{ \bar X\,X \over \sigma_X^2 } \ +\ 
{ \bar N\,N \over \sigma_N^2 }\end{displaymath} (9)
Notice that the variances bring both terms of the sum to the same physical units. Requesting that the noise N be as small as possible seems like a good idea, and asking that X be small too seems to guard against any bad effect of zero division in X=Y/F. By introducing (7) into (9) we can eliminate either N or X. Eliminating N we have
 \begin{displaymath}
Q(X) {=}
{ \bar X\,X \over \sigma_X^2 } \ +\ 
{ (\overline{FX-Y})(FX-Y) \over \sigma_N^2 }\end{displaymath} (10)
Minimizing Q(X) by setting its derivative with respect to $\bar X$ to zero gives
 \begin{eqnarray}
0 &=& {X \over \sigma_X^2} \ +\ {\bar F(FX-Y)\over \sigma_N^2} \\
X &=& {\bar F Y \over \bar F F \ +\ {\sigma_N^2 \over \sigma_X^2}}
\end{eqnarray} (11) (12)
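Equation (12) can be sketched as a frequency-by-frequency damped division, where eps plays the role of the ratio $\sigma_N^2/\sigma_X^2$ (the function name damped_divide is mine):

```python
def damped_divide(F, Y, eps):
    """Equation (12): X = conj(F) Y / (conj(F) F + eps), frequency by frequency.

    F, Y are lists of complex spectral values; eps = sigma_N^2 / sigma_X^2.
    conj(F)*F is real (|F|^2), so eps adds to a real denominator and
    prevents blowup where F is small.
    """
    return [(f.conjugate() * y) / ((f.conjugate() * f).real + eps)
            for f, y in zip(F, Y)]
```

With eps=0 this reduces to the plain division X=Y/F; a positive eps pulls X toward zero wherever $|F|^2$ is small compared with eps.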
This completes the ``logical explanation'' of equation (1). Equation (12) expresses the answer, but it does so in terms of the unknowns $\sigma_N^2$ and $\sigma_X^2$. Given an initial estimate of $\sigma_N^2/\sigma_X^2$, equation (12) gives us X, and then (7) gives N, so we can compute $\sigma_N^2$ and $\sigma_X^2$. Presumably these computed values are better than our initial guesses. In statistics, the variances in (12) are called priors; it makes sense to check them, and even more sense to correct them. From the corrected values we should be able to iterate, further improving the corrections. Equation (12) applies at each of the many frequencies, yet there is only a single unknown, the ratio $\sigma_N^2/\sigma_X^2$, so it seems there is plenty of information and the bootstrapping procedure might work. A pessimist might call this bootstrapping a ``self-fulfilling prophecy.'' What do you think?
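The bootstrapping loop described above can be sketched as follows; the function name bootstrap_ratio and the stopping rule are my own choices, and the code makes no claim about where the iteration ends up:

```python
def bootstrap_ratio(F, Y, eps0, niter=20):
    """Iterate the bootstrap: given eps = sigma_N^2/sigma_X^2,
    estimate X by equation (12), recover N = Y - F X from equation (7),
    recompute the two variances, and update eps."""
    def variance(spectrum):
        # Equation (8): average squared magnitude over frequencies.
        return sum((v.conjugate() * v).real for v in spectrum) / len(spectrum)

    eps = eps0
    for _ in range(niter):
        # Equation (12), frequency by frequency.
        X = [(f.conjugate() * y) / ((f.conjugate() * f).real + eps)
             for f, y in zip(F, Y)]
        # Equation (7) rearranged for the noise.
        N = [y - f * x for f, x, y in zip(F, X, Y)]
        if variance(X) == 0.0:
            return float('inf')  # all signal attributed to noise
        eps = variance(N) / variance(X)
    return eps
```

Running this on data of your own is the quickest way to reproduce the behavior reported in the next paragraph.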

Truth is stranger than fiction. I tried it. With my first starting value for the ratio $\sigma_N^2/\sigma_X^2$, the iteration drove the ratio to infinity. Another starting value drove it to zero. Every starting value led to one or the other of these nonsensical limits. Eventually I deduced there must be a metastable value dividing the two possibilities, but that is hardly reassuring. I conclude we cannot bootstrap these prior assumptions. The solutions produced do not tend to the correct variances, nor is the variance ratio correct, so they cannot be used to bootstrap the variances. Philosophically, we can be thankful the iteration failed to converge, so we did not gain false confidence in the solution. I conclude that linear estimation theory, while beguiling and appearing to be a universal guide to practice, is actually incomplete. The incompleteness will grow in later chapters with multivariate least squares, because there the scalar $\sigma_X^2$ becomes a matrix. We continue our search for ``universal truth'' by studying more examples.


Stanford Exploration Project
1/13/1998