(14)

The filtering approach is identical except that I have

(15)

(16)

So why is it better to use the projection filter instead of the prediction-error filter? First, the projection filter sets to zero every signal component and keeps the rest essentially unchanged (depending on the orthogonality of the noise and signal components). Therefore, I think that the projection filter is a better signal filter than the prediction-error filter and has less impact on the norm (in the same way that inversion is better than filtering). Then, the definition of the projection filter shows that the condition number of the Hessian should be better with it than with the prediction-error filter. This property was also established in Guitton (2002), where I showed that the two approaches are related by preconditioning transformations. Finally, another advantage of the projection filter is that the modeling operator can be anything I want, as long as the condition in equation (2) is satisfied.
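The annihilation property claimed above can be checked numerically. The sketch below is only an illustration, not the filter estimation used in this paper: instead of a data-derived filter, it builds an explicit orthogonal projector P that zeroes a known subspace (standing in for the "signal" components) and leaves the orthogonal complement unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: S spans the subspace the filter should annihilate.
n, k = 50, 5
S = rng.standard_normal((n, k))

# Orthogonal projector onto the complement of span(S):
# P = I - S (S'S)^{-1} S'
P = np.eye(n) - S @ np.linalg.solve(S.T @ S, S.T)

# A projector is idempotent: applying it twice equals applying it once.
assert np.allclose(P @ P, P)

# P sets to zero every component lying in span(S) ...
x_in = S @ rng.standard_normal(k)
print("norm after filtering in-subspace vector:", np.linalg.norm(P @ x_in))

# ... and keeps components orthogonal to span(S) unchanged.
Q, _ = np.linalg.qr(S)
x_out = rng.standard_normal(n)
x_out = x_out - Q @ (Q.T @ x_out)   # remove any span(S) part
print("complement preserved:", np.allclose(P @ x_out, x_out))
```

A prediction-error filter, by contrast, reshapes (whitens) the whole spectrum rather than acting as an identity on one subspace and zero on another, which is the intuition behind the condition-number argument above.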

Therefore, the interesting new idea is that the prediction-error filter might not be the best approximation of the data covariance operator: a projection filter seems to be a better choice. I now illustrate the difference between the two approaches with a 3-D field data example.

7/8/2003