Next: Examples with synthetic data Up: Alvarez and Guitton: Adaptive Previous: Introduction

Description of the method

Conceptually, the first step of our method is to form the convolutional matrices of both the estimated multiples $\mathbf{M}$ and the estimated primaries $\mathbf{P}$. In practice, these matrices are not formed explicitly but applied with linear operators (Claerbout and Fomel, 2002). Next, we compute non-stationary filters in micro-patches (that is, filters that act locally on overlapping two-dimensional partitions of the data) to match the estimated multiples and the estimated primaries to the data, which contains both. We compute the filters by solving the following least-squares inverse problem:
\begin{eqnarray}
\mathbf{d} &\approx & \mathbf{M}\mathbf{f_m} + \mu\mathbf{P}\mathbf{f_p} \\  \mathbf{0} &\approx & \epsilon\mathbf{A}\mathbf{f_m} \\  \mathbf{0} &\approx & \epsilon\mathbf{A}\mathbf{f_p}.\end{eqnarray} (1)
(2)
(3)
where $\mathbf{f_m}$ and $\mathbf{f_p}$ are the matching filters for the multiples and the primaries, respectively; $\mu$ is a parameter that balances the relative importance of the two components of the fitting goal (Guitton, 2005); $\mathbf{d}$ is the data (primaries plus multiples); $\mathbf{A}$ is a regularization operator (in our implementation, a Laplacian); and $\epsilon$ controls the strength of the regularization.
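As a minimal NumPy sketch of the first step (our own illustration, not the authors' code), the convolutional matrix of a 1-D trace can be built explicitly, and its product with a filter reproduces the operator form that is used in practice; `conv_matrix` is a hypothetical helper name:

```python
import numpy as np

def conv_matrix(s, nf):
    # Dense convolution (Toeplitz) matrix of a 1-D trace s, sized for a
    # filter of length nf.  Illustration only: as the text notes, in
    # practice the product C @ f is applied with a linear operator
    # (on-the-fly convolution) rather than by forming C explicitly.
    ns = len(s)
    C = np.zeros((ns + nf - 1, nf))
    for j in range(nf):
        C[j:j + ns, j] = s
    return C

# The explicit matrix and the operator form give the same product:
trace = np.array([1.0, 2.0, 3.0, 0.5])
filt = np.array([0.5, -0.25])
assert np.allclose(conv_matrix(trace, len(filt)) @ filt,
                   np.convolve(trace, filt))
```

The same equivalence is what lets the method avoid storing $\mathbf{M}$ and $\mathbf{P}$: only the convolution with the underlying traces is ever applied.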

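For a single micro-patch, the regularized fitting goal can be sketched as one stacked least-squares system; the function name `match_filters`, the 1-D second-difference stand-in for the Laplacian, and the dense `lstsq` solve are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def match_filters(M, P, d, mu=1.0, eps=0.1):
    # Sketch of the fitting goal for one micro-patch:
    #   d ~ M f_m + mu P f_p,   0 ~ eps A f_m,   0 ~ eps A f_p
    # M, P: convolution matrices of the estimated multiples and primaries.
    # A: 1-D second difference, standing in for the Laplacian regularizer.
    nf = M.shape[1]
    A = -2.0 * np.eye(nf) + np.eye(nf, k=1) + np.eye(nf, k=-1)
    Z = np.zeros((nf, nf))
    G = np.vstack([np.hstack([M, mu * P]),    # data-fitting rows
                   np.hstack([eps * A, Z]),   # smoothness on f_m
                   np.hstack([Z, eps * A])])  # smoothness on f_p
    rhs = np.concatenate([d, np.zeros(2 * nf)])
    f, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return f[:nf], f[nf:]

# Tiny example: random patch operators, 5-sample data, 3-point filters.
rng = np.random.default_rng(0)
M, P = rng.standard_normal((5, 3)), rng.standard_normal((5, 3))
d = rng.standard_normal(5)
fm, fp = match_filters(M, P, d)
# The least-squares fit can never do worse than the zero filters:
assert np.linalg.norm(M @ fm + P @ fp - d) <= np.linalg.norm(d)
```

A production version would replace the dense solve with an iterative solver (e.g. conjugate gradients) driven by the convolution operators themselves.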
Once convergence is achieved, each filter is applied to its corresponding convolutional matrix, and new estimates for $\mathbf{M}$ and $\mathbf{P}$ are computed:
\begin{eqnarray}
\mathbf{M} &\leftarrow & \mathbf{M}\mathbf{f_m} \\  \mathbf{P} &\leftarrow & \mu\mathbf{P}\mathbf{f_p}.\end{eqnarray} (5)
(6)
These updated versions of $\mathbf{M}$ and $\mathbf{P}$ are then plugged back into the fitting goal above, and the process is repeated until the cross-talk has been eliminated or significantly attenuated.
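The update in equations 5 and 6 is simply a convolution of each current estimate with its matching filter. A 1-D sketch under our own naming (truncating each result to the input length) might look like:

```python
import numpy as np

def update_estimates(m_est, p_est, fm, fp, mu=1.0):
    # Equations 5-6: convolve each estimate with its matching filter
    # (M <- M f_m, P <- mu P f_p), truncated to the original length.
    m_new = np.convolve(m_est, fm)[:len(m_est)]
    p_new = mu * np.convolve(p_est, fp)[:len(p_est)]
    return m_new, p_new

# With identity filters and mu = 1 the estimates are unchanged:
m = np.array([1.0, -2.0, 0.5])
p = np.array([0.25, 3.0, -1.0])
m2, p2 = update_estimates(m, p, np.array([1.0]), np.array([1.0]))
assert np.allclose(m2, m) and np.allclose(p2, p)
```

In the iteration, the filters would be re-estimated from these updated estimates, and the loop would exit once the residual cross-talk stops decreasing.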


Stanford Exploration Project
1/16/2007