
The subtraction method

In this section, I show that, starting from the least-squares inverse of the subtraction method, I can recover a fitting goal that resembles that of the filtering technique. I first assume that the data ${\bf d}$ is the sum of the signal ${\bf s}$ and the noise ${\bf n}$ as follows:
 \begin{displaymath}
{\bf d} = {\bf s} + {\bf n}\end{displaymath} (1)
Now I assume that I have two modeling operators ${\bf L_s}$ and ${\bf L_n}$ for the signal and noise respectively such that  
\begin{displaymath}
\begin{array}{ccc}
{\bf L_s m_s} &=& {\bf s}, \\
{\bf L_n m_n} &=& {\bf n},
\end{array}
\end{displaymath} (2)
where ${\bf m_n}$ and ${\bf m_s}$ are unknown model parameters. In this paper, I assume that the two modeling operators are orthogonal, meaning that they model different regions of the data space. Inserting the modeling operators into equation (1), I have
\begin{displaymath}
{\bf d} = {\bf L_s m_s} + {\bf L_n m_n}.\end{displaymath} (3)
I then want to estimate ${\bf m_n}$ and ${\bf m_s}$ so that  
 \begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf L_s m_s} + 
 {\bf L_n m_n}-{\bf d}.\end{displaymath} (4)
In a least-squares sense, I have to minimize the following objective function:
\begin{displaymath}
f({\bf m_s},{\bf m_n})=\Vert{\bf r_d}\Vert^2=\Vert{\bf L_s m_s} + 
 {\bf L_n m_n}-{\bf d}\Vert^2.\end{displaymath} (5)
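Minimizing equation (5) is an ordinary least-squares problem on the stacked operator $[{\bf L_s}\;{\bf L_n}]$. A minimal NumPy sketch, with random dense matrices standing in for the modeling operators (shapes and seeds are illustrative, not from the paper):

```python
import numpy as np

# Illustrative shapes: 20 data samples, 3 signal and 4 noise model parameters
rng = np.random.default_rng(0)
Ls = rng.standard_normal((20, 3))   # stand-in for the signal modeling operator L_s
Ln = rng.standard_normal((20, 4))   # stand-in for the noise modeling operator L_n
d = Ls @ rng.standard_normal(3) + Ln @ rng.standard_normal(4)   # d = s + n

# Minimize f(m_s, m_n) = ||L_s m_s + L_n m_n - d||^2 by stacking the operators
A = np.hstack([Ls, Ln])
m, *_ = np.linalg.lstsq(A, d, rcond=None)
ms_hat, mn_hat = m[:3], m[3:]

# Here d was built exactly as L_s m_s + L_n m_n, so the residual r_d vanishes
print(np.linalg.norm(Ls @ ms_hat + Ln @ mn_hat - d) < 1e-8)   # True
```

In practice the two operators are large and sparse, and this solve would be done iteratively, but the small dense version makes the algebra below easy to check.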
The least-squares inverse of the model parameters is given by Guitton (2001):
\begin{displaymath}
\left( \begin{array}{c}
\hat{{\bf m_n}} \\
\hat{{\bf m_s}}
\end{array} \right) =
\left( \begin{array}{c}
({\bf L_n'\overline{R_s}L_n})^{-1}{\bf L_n'\overline{R_s}} \\
({\bf L_s'\overline{R_n}L_s})^{-1}{\bf L_s'\overline{R_n}}
\end{array} \right) {\bf d},
\end{displaymath} (6)
with
\begin{displaymath}
\begin{array}{ccc}
{\bf \overline{R_s}} &=& {\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}, \\
{\bf \overline{R_n}} &=& {\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}.
\end{array}
\end{displaymath}
One can easily recognize in the definition of ${\bf \overline{R_s}}$ the identity matrix minus the signal resolution operator, and in that of ${\bf \overline{R_n}}$, the identity matrix minus the noise resolution operator. Therefore, ${\bf \overline{R_s}}$ and ${\bf \overline{R_n}}$ are signal and noise filtering operators, respectively, with
\begin{displaymath}
\begin{array}{rclr}
{\bf \overline{R_s}s} &\approx& {\bf 0} & \mbox{ and } \\
{\bf \overline{R_n}n} &\approx& {\bf 0}. &
\end{array}
\end{displaymath} (7)
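These properties are easy to verify numerically. In the sketch below (random dense stand-ins for the operators, with arbitrary illustrative shapes), anything modeled by ${\bf L_s}$ is annihilated by ${\bf \overline{R_s}}$, and likewise for ${\bf L_n}$ and ${\bf \overline{R_n}}$; the same matrices also exhibit the projector and symmetry properties discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)
Ls = rng.standard_normal((20, 3))   # stand-in for the signal modeling operator L_s
Ln = rng.standard_normal((20, 4))   # stand-in for the noise modeling operator L_n
I = np.eye(20)

# Filtering operators: identity minus the resolution operators
Rs = I - Ls @ np.linalg.inv(Ls.T @ Ls) @ Ls.T
Rn = I - Ln @ np.linalg.inv(Ln.T @ Ln) @ Ln.T

s = Ls @ rng.standard_normal(3)     # any signal in the range of L_s
n = Ln @ rng.standard_normal(4)     # any noise in the range of L_n
print(np.linalg.norm(Rs @ s) < 1e-10)   # True: the signal is filtered out
print(np.linalg.norm(Rn @ n) < 1e-10)   # True: the noise is filtered out

# They are also symmetric projectors: Rs Rs = Rs and Rs' = Rs
print(np.allclose(Rs @ Rs, Rs), np.allclose(Rs.T, Rs))   # True True
```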
An interesting property of ${\bf \overline{R_s}}$ and ${\bf \overline{R_n}}$ is that ${\bf \overline{R_{n}}}\,{\bf \overline{R_{n}}}={\bf \overline{R_{n}}}$ and ${\bf \overline{R_{s}}}\,{\bf \overline{R_{s}}}={\bf \overline{R_{s}}}$: they are called projectors. It is easy to verify that they are Hermitian operators as well (Guitton, 2001). Now, looking at the first row in equation (6), I have
\begin{displaymath}
\hat{{\bf m_n}}=({\bf L_n'\overline{R_s}L_n})^{-1}{\bf
 L_n'\overline{R_s}}{\bf d}.\end{displaymath} (8)
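Equation (8) follows from eliminating ${\bf m_s}$ in the normal equations of (5) (a Schur-complement step). As a numerical sanity check, with random stand-in operators (not from the paper), this closed form matches the ${\bf m_n}$ obtained by solving the joint problem of equation (4) directly:

```python
import numpy as np

rng = np.random.default_rng(1)
Ls = rng.standard_normal((20, 3))   # stand-in for the signal modeling operator L_s
Ln = rng.standard_normal((20, 4))   # stand-in for the noise modeling operator L_n
d = rng.standard_normal(20)

Rs = np.eye(20) - Ls @ np.linalg.inv(Ls.T @ Ls) @ Ls.T   # signal filtering operator

# Equation (8): m_n = (L_n' Rs L_n)^{-1} L_n' Rs d
mn_hat = np.linalg.solve(Ln.T @ Rs @ Ln, Ln.T @ Rs @ d)

# m_n from minimizing ||L_s m_s + L_n m_n - d||^2 on the stacked operator
m, *_ = np.linalg.lstsq(np.hstack([Ls, Ln]), d, rcond=None)
print(np.allclose(mn_hat, m[3:]))   # True
```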
This expression is the least-squares inverse for the following objective function:
\begin{displaymath}
f({\bf m_n})= ({\bf L_nm_n - d})'{\bf \overline{R_s}}({\bf L_nm_n - d})\end{displaymath} (9)
which can be rewritten as
\begin{displaymath}
f({\bf m_n})= ({\bf L_nm_n - d})'{\bf \overline{R_s}}'{\bf \overline{R_s}}({\bf L_nm_n - d})\end{displaymath} (10)
or  
 \begin{displaymath}
f({\bf m_n})= \Vert{\bf \overline{R_s}}({\bf L_nm_n - d})\Vert^2\end{displaymath} (11)
using the properties of ${\bf \overline{R_s}}$ (it is symmetric and idempotent). Therefore, equation (11) is the objective function for the fitting goal
 \begin{displaymath}
\bf{0} \approx {\bf r_d} = {\bf \overline{R_s}}({\bf L_nm_n - d}).\end{displaymath} (12)
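Solving this fitting goal directly, as an ordinary least-squares problem on the filtered operator ${\bf \overline{R_s} L_n}$, reproduces the ${\bf m_n}$ of equation (8); the random matrices below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
Ls = rng.standard_normal((20, 3))   # stand-in for the signal modeling operator L_s
Ln = rng.standard_normal((20, 4))   # stand-in for the noise modeling operator L_n
d = rng.standard_normal(20)

Rs = np.eye(20) - Ls @ np.linalg.inv(Ls.T @ Ls) @ Ls.T   # signal filtering operator

# Least squares for the fitting goal 0 = Rs (L_n m_n - d)
mn_goal, *_ = np.linalg.lstsq(Rs @ Ln, Rs @ d, rcond=None)

# Closed form of equation (8)
mn_closed = np.linalg.solve(Ln.T @ Rs @ Ln, Ln.T @ Rs @ d)
print(np.allclose(mn_goal, mn_closed))   # True
```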
Similarly, if I look at the second row of equation (6), I end up with  
 \begin{displaymath}
\bf{0} \approx {\bf r_d} = {\bf \overline{R_n}}({\bf L_sm_s - d}).\end{displaymath} (13)
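By the same argument, this second fitting goal can be solved as a least-squares problem on ${\bf \overline{R_n} L_s}$, and in an illustrative random setup it recovers the same ${\bf m_s}$ as the joint inversion:

```python
import numpy as np

rng = np.random.default_rng(2)
Ls = rng.standard_normal((20, 3))   # stand-in for the signal modeling operator L_s
Ln = rng.standard_normal((20, 4))   # stand-in for the noise modeling operator L_n
d = rng.standard_normal(20)

Rn = np.eye(20) - Ln @ np.linalg.inv(Ln.T @ Ln) @ Ln.T   # noise filtering operator

# Least squares for the fitting goal 0 = Rn (L_s m_s - d)
ms_goal, *_ = np.linalg.lstsq(Rn @ Ls, Rn @ d, rcond=None)

# m_s from minimizing ||L_s m_s + L_n m_n - d||^2 on the stacked operator
m, *_ = np.linalg.lstsq(np.hstack([Ls, Ln]), d, rcond=None)
print(np.allclose(ms_goal, m[:3]))   # True
```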

Stanford Exploration Project
7/8/2003