
Why does it work?

Remember that ${\bf L_n m_n - d}$ should resemble the signal. If I only solve
\begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf L_n m_n - d},
\end{displaymath} (14)
I intrinsically assume that the signal has minimum energy. Inverse theory teaches us that I should instead weight the residual so that its spectrum is white (Claerbout and Fomel, 2001; Tarantola, 1987). The operator ${\bf \overline{R_s}}$ in equation (16) below does exactly that, since ${\bf \overline{R_s} s}\approx{\bf 0}$ by definition. Thus, although not obvious from the fitting goal in equation (4) and the pseudo-inverse in equation (6), the subtraction method works very well because it approximates the covariance operators with the projection filters ${\bf \overline{R_s}}$ and ${\bf \overline{R_n}}$.
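To make the projection idea concrete, here is a minimal numerical sketch; it assumes a toy setting in which the signal lives in a known low-frequency subspace spanned by a few sinusoids (the subspace, the NumPy implementation, and all variable names are illustrative assumptions, not the operators used in this paper). The projection ${\bf \overline{R_s}}$ then annihilates the signal while leaving roughly orthogonal noise untouched:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

# Toy assumption: the signal lives in the subspace spanned by
# the columns of S.
S = np.stack([np.sin(2 * np.pi * k * t / n) for k in (1, 2, 3)],
             axis=1)

# Projection onto the orthogonal complement of the signal subspace:
# R_s_bar = I - S (S^T S)^{-1} S^T, so R_s_bar s = 0 by construction.
R_s_bar = np.eye(n) - S @ np.linalg.pinv(S)

s = S @ np.array([1.0, -0.5, 0.3])   # a signal inside the subspace
noise = rng.standard_normal(n)       # broadband noise, nearly
                                     # orthogonal to S

print(np.linalg.norm(R_s_bar @ s))   # ~ 0: the signal is annihilated
print(np.linalg.norm(R_s_bar @ noise)
      / np.linalg.norm(noise))       # ~ 1: the noise passes through
\end{verbatim}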

The filtering approach is identical except that I have  
\begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf A_s}({\bf L_n m_n - d})
\end{displaymath} (15)
instead of
\begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf \overline{R_s}}({\bf L_n m_n - d}),
\end{displaymath} (16)
where ${\bf A_s}$ is usually a prediction-error filter for the signal, such that ${\bf A_s s}={\bf 0}$. So the difference between the two methods is numerically embedded within the weighting operators ${\bf A_s}$ and ${\bf \overline{R_s}}$ in the fitting goals of equations (15) and (16).
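To see how the prediction-error weighting behaves, the following sketch (same toy setting and the same caveats as above; the three-term filter is the textbook PEF for a single sinusoid, not one estimated from field data) applies ${\bf A_s}$ as a convolution. It also annihilates the signal, but unlike the projection it reshapes the spectrum of everything else it touches:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
w = 2 * np.pi * 5 / n
s = np.cos(w * t)               # a perfectly predictable toy signal

# For a single sinusoid of frequency w, the three-term PEF
# (1, -2 cos w, 1) predicts each sample from its two predecessors,
# so convolving the signal with it yields ~0.
a_s = np.array([1.0, -2.0 * np.cos(w), 1.0])

noise = rng.standard_normal(n)
print(np.linalg.norm(np.convolve(s, a_s, mode="valid")))     # ~ 0
print(np.linalg.norm(np.convolve(noise, a_s, mode="valid"))
      / np.linalg.norm(noise))  # far from 1: the noise is colored
\end{verbatim}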

So why is it better to have ${\bf \overline{R_s}}$ instead of ${\bf A_s}$? First, ${\bf \overline{R_s}}$ is a projection filter, meaning that it sets every signal component to zero and leaves the rest essentially unchanged (depending on how orthogonal the noise and signal components are). Therefore, I think that ${\bf \overline{R_s}}$ is a better signal filter than ${\bf A_s}$ and has less impact on the norm (in the same way that inversion is better than filtering). Second, the definition of ${\bf \overline{R_s}}$ shows that the condition number of the Hessian should be better with ${\bf \overline{R_s}}$ than with ${\bf A_s}$. This property was also established in Guitton (2002), where I showed that the two approaches are related by preconditioning transformations. Finally, another advantage of ${\bf \overline{R_s}}$ is that the modeling operator can be anything I want, as long as ${\bf L_s m_s = s}$ in equation (2).
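The conditioning claim can be checked numerically in the same toy setting (again with illustrative operators only, and with the modeling operator taken as the identity for simplicity): the nonzero eigenvalues of the projection are all exactly one, whereas the eigenvalues of ${\bf A_s^T A_s}$ spread over several orders of magnitude, so the weighted Hessian is better conditioned with ${\bf \overline{R_s}}$:
\begin{verbatim}
import numpy as np

n = 200
t = np.arange(n)
w = 2 * np.pi * 5 / n

# Projection weight: its eigenvalues are exactly 0 (on the signal
# subspace) or 1 (on its complement), a perfectly flat spectrum.
S = np.stack([np.cos(w * t), np.sin(w * t)], axis=1)
P = np.eye(n) - S @ np.linalg.pinv(S)
ev_p = np.sort(np.linalg.eigvalsh(P))
print(ev_p[2], ev_p[-1])        # both ~ 1: flat nonzero spectrum

# PEF weight as an explicit banded convolution matrix A, then A^T A.
a_s = np.array([1.0, -2.0 * np.cos(w), 1.0])
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = a_s[::-1]
ev_a = np.sort(np.linalg.eigvalsh(A.T @ A))
print(ev_a[2], ev_a[-1])        # smallest/largest nonzero eigenvalues
                                # differ by orders of magnitude
\end{verbatim}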

The interesting new idea, therefore, is that the prediction-error filter might not be the best approximation of the data covariance operator; a projection filter seems to be a better choice. I now illustrate the difference between the two methods with a 3-D field data example.

