
Multiple attenuation

First, I consider that seismic data are the sum of signal and noise:
\begin{displaymath}
\begin{array}{rcl}
{\bf d} &=& {\bf s}+{\bf n},
\end{array}\end{displaymath} (1)
where ${\bf d}$ are the seismic data, ${\bf s}$ the signal we want to preserve, and ${\bf n}$ the noise we wish to attenuate. In the multiple-elimination problem, the noise is the multiples and the signal the primaries.

Now, assuming that we know the multidimensional PEFs ${\bf N}$ and ${\bf S}$ for the noise and signal components, respectively, we have, by definition of the PEFs,
\begin{displaymath}
\begin{array}{rcl}
{\bf Nn} &\approx& {\bf 0} \\
{\bf Ss} &\approx& {\bf 0}.
\end{array}\end{displaymath} (2)
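As a one-dimensional illustration only (not part of the derivation), if a signal is locally predictable, say $s_t \approx a\,s_{t-1}$, then its two-term PEF is approximately $(1,-a)$, and convolving this PEF with the signal leaves only a small prediction error:
\begin{displaymath}
({\bf Ss})_t = s_t - a\,s_{t-1} \approx 0.
\end{displaymath}
The multidimensional PEFs ${\bf N}$ and ${\bf S}$ play the same annihilating role for the multiples and the primaries, respectively.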
Equations (1) and (2) can be combined into a constrained problem that separates the signal from the noise:
\begin{displaymath}
\begin{array}{rcccl}
{\bf 0} & \approx & {\bf r_n} & = & {\bf Nn} \\
{\bf 0} & \approx & {\bf r_s} & = & {\bf Ss} \\
\mbox{subject to} & \leftrightarrow & {\bf d} & = & {\bf s+n}.
\end{array}\end{displaymath} (3)
We can eliminate ${\bf n}$ by using the constraint in fitting goal (3), ${\bf n}={\bf d}-{\bf s}$, so that convolving with ${\bf N}$ turns ${\bf Nn}\approx{\bf 0}$ into ${\bf N(d-s)}\approx{\bf 0}$. We end up with the following fitting goals:
\begin{displaymath}
\begin{array}{rcccl}
{\bf 0} & \approx & {\bf r_d} & = & {\bf N(d-s)} \\
{\bf 0} & \approx & {\bf r_s} & = & {\bf Ss}.
\end{array}\end{displaymath} (4)
For some field data, it might be useful to apply a masking operator to the data residual ${\bf r_d}$ and the signal residual ${\bf r_s}$ in order to restrict where the noise attenuation is performed. This happens, for example, when the noise appears only after a certain time or offset. I call this masking operator ${\bf M}$, and I weight the fitting goals in equation (4) as follows:
\begin{displaymath}
\begin{array}{rcccl}
{\bf 0} & \approx & {\bf r_d} & = & {\bf MN(d-s)} \\
{\bf 0} & \approx & {\bf r_s} & = & {\bf MSs}.
\end{array}\end{displaymath} (5)
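As an illustration only (the actual mask depends on the data), ${\bf M}$ can be a diagonal operator of zeros and ones that is zero above a chosen time $t_0$, where the data are assumed noise-free, and one below it:
\begin{displaymath}
({\bf Mx})_{t,h} = m_{t,h}\,x_{t,h},
\qquad
m_{t,h}=\left\{\begin{array}{ll} 0 & t < t_0 \\ 1 & t \ge t_0. \end{array}\right.
\end{displaymath}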
Solving for ${\bf s}$ in a least-squares sense leads to the objective function
\begin{displaymath}
f({\bf s})=\Vert{\bf r_d}\Vert^2+\epsilon^2\Vert{\bf r_s}\Vert^2,\end{displaymath} (6)
where ${\epsilon}$ is a constant to be chosen a priori that relates to the signal-to-noise ratio. The least-squares inverse for ${\bf s}$ becomes
 \begin{displaymath}
{\bf \hat{s}} = ({\bf N'MN+}\epsilon^2{\bf S'MS})^{-1}{\bf N'MNd},\end{displaymath} (7)
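Equation (7) is simply the solution of the normal equations obtained by setting the gradient of the objective function (6) to zero; as a worked step, using ${\bf M'M=M}$ (see below),
\begin{displaymath}
\frac{\partial f}{\partial {\bf s}} = -2\,{\bf N'MN}({\bf d}-{\bf s}) + 2\,\epsilon^2\,{\bf S'MS}\,{\bf s} = {\bf 0}
\quad\Longrightarrow\quad
({\bf N'MN}+\epsilon^2\,{\bf S'MS})\,{\bf s} = {\bf N'MNd}.
\end{displaymath}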
In equation (7), (') stands for the adjoint. Note that since ${\bf M}$ is a diagonal operator of zeros and ones, we have ${\bf M'M=M^2=M}$. It is interesting to note that ${\bf N'MN}$ is the inverse spectrum of the noise and ${\bf S'MS}$ the inverse spectrum of the signal over the region where we perform the attenuation. Soubaras (1994) uses a very similar approach for random-noise attenuation and, more recently, for coherent-noise attenuation with F-X PEFs (Soubaras, 2001). Because the size of the data space can be quite large, we estimate ${\bf s}$ iteratively with a conjugate-gradient method, sketched schematically below. In the next section, I describe the PEF estimation method I use to compute ${\bf N}$ and ${\bf S}$ needed in equation (7).
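For illustration only, the following minimal Python sketch solves equation (7) with a conjugate-gradient method; it is not the implementation used in this paper, and the routines apply_N, apply_Nt, apply_S, apply_St and the vector mask are hypothetical stand-ins for the noise and signal PEF convolutions, their adjoints, and the diagonal of ${\bf M}$.

from scipy.sparse.linalg import LinearOperator, cg

def estimate_signal(d, apply_N, apply_Nt, apply_S, apply_St, mask, eps, niter=50):
    # Solve (N'MN + eps^2 S'MS) s = N'MN d for the signal estimate (equation 7).
    # d: data as a 1-D array; mask: array of zeros and ones (diagonal of M).
    n = d.size

    def normal_op(s):
        # Apply the normal operator N'MN + eps^2 S'MS; M is diagonal, so M'M = M.
        return apply_Nt(mask * apply_N(s)) + eps**2 * apply_St(mask * apply_S(s))

    A = LinearOperator((n, n), matvec=normal_op, dtype=d.dtype)
    b = apply_Nt(mask * apply_N(d))        # right-hand side N'MN d
    s_hat, info = cg(A, b, maxiter=niter)  # conjugate-gradient iterations
    return s_hat

Wrapping the normal operator as a LinearOperator means the large matrices in equation (7) are never formed explicitly, which is what makes the conjugate-gradient approach practical for large data spaces.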

