Description of the method

Conceptually, the first step of the new method is to form the convolutional matrices of both the estimated multiples $\mathbf{M}$ and the estimated primaries $\mathbf{P}$. In practice, these are huge matrices that are never explicitly formed but are replaced by equivalent linear operators. Next, I compute non-stationary filters in micro-patches (that is, filters that act locally on overlapping two-dimensional partitions of the data) to match the estimated multiples and the estimated primaries to the data that contains both. I compute the filters by solving the following linear least-squares inverse problem:
$\displaystyle \left[\begin{array}{cc} \mathbf{M} & \mu \mathbf{P} \end{array}\right] \left[\begin{array}{c} \mathbf{f_m} \\ \mathbf{f_p} \end{array}\right] \approx \mathbf{d}$ (1)

$\displaystyle \epsilon \mathbf{A} \left[\begin{array}{c} \mathbf{f_m} \\ \mathbf{f_p} \end{array}\right] \approx \mathbf{0}$ (2)
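The fitting goal in Equations 1 and 2 amounts to one stacked, regularized least-squares system. The following is a minimal sketch, not the paper's implementation: it uses one-dimensional stationary convolution on single traces rather than the patch-based non-stationary operators described above, and the sizes `nt`, `nf`, the synthetic traces, and the values of `mu` and `eps` are all assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

def conv_matrix(trace, nf):
    """Convolutional (Toeplitz) matrix C so that C @ f convolves `trace` with f."""
    col = np.concatenate([trace, np.zeros(nf - 1)])
    row = np.zeros(nf)
    row[0] = trace[0]
    return toeplitz(col, row)

def second_derivative(nf):
    """Simple 1-D Laplacian standing in for the regularization operator A."""
    A = np.zeros((nf, nf))
    for i in range(1, nf - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return A

rng = np.random.default_rng(0)
nt, nf = 200, 15                     # trace length, filter length (assumed)
m = rng.standard_normal(nt)          # estimated multiples (synthetic stand-in)
p = rng.standard_normal(nt)          # estimated primaries (synthetic stand-in)
d = m + p                            # "data": primaries plus multiples
mu, eps = 1.0, 0.1                   # balance and regularization parameters

M = conv_matrix(m, nf)               # shape (nt + nf - 1, nf)
P = conv_matrix(p, nf)
A = second_derivative(nf)

# Stack Equation 1 (data fit) on top of Equation 2 (regularization).
top = np.hstack([M, mu * P])
reg = eps * np.block([[A, np.zeros((nf, nf))],
                      [np.zeros((nf, nf)), A]])
G = np.vstack([top, reg])
rhs = np.concatenate([d, np.zeros(nf - 1), np.zeros(2 * nf)])

f = lsqr(G, rhs)[0]                  # solve for both filters at once
f_m, f_p = f[:nf], f[nf:]            # matching filters for multiples, primaries
```

In a realistic implementation the dense matrices would be replaced by the equivalent linear operators, as the text notes, and the filters would be estimated per micro-patch.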

where $\mathbf{f_m}$ and $\mathbf{f_p}$ are the matching filters for the multiples and the primaries, respectively; $\mu$ is a parameter that balances the relative importance of the two components of the fitting goal; $\mathbf{d}$ is the data (primaries plus multiples); $\mathbf{A}$ is a regularization operator (in my implementation, a Laplacian); and $\epsilon$ is the usual parameter that controls the level of regularization.
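The non-stationarity of the filters comes from the overlapping two-dimensional micro-patches mentioned earlier: a separate filter pair is estimated on each partition of the data panel. The sketch below only illustrates that partitioning; the patch and step sizes are assumptions, not the values used in the paper.

```python
import numpy as np

def patch_indices(n, patch, step):
    """Start indices of overlapping windows of length `patch` covering [0, n)."""
    starts = list(range(0, max(n - patch, 0) + 1, step))
    if starts[-1] != n - patch:
        starts.append(n - patch)   # make sure the last window reaches the edge
    return starts

def extract_patches(data, patch=(32, 16), step=(16, 8)):
    """Yield overlapping 2-D partitions of a (time, offset) data panel."""
    nt, nx = data.shape
    for i0 in patch_indices(nt, patch[0], step[0]):
        for j0 in patch_indices(nx, patch[1], step[1]):
            yield (i0, j0), data[i0:i0 + patch[0], j0:j0 + patch[1]]
```

Each patch would get its own $\mathbf{f_m}$ and $\mathbf{f_p}$, and the overlap allows the filtered patches to be blended back together smoothly.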

Once convergence is achieved, each filter is applied to its corresponding convolutional matrix, and new estimates for $\mathbf{M}$ and $\mathbf{P}$ are computed:

$\displaystyle \mathbf{M}_{i+1} \leftarrow \mathbf{M}_i\,{\mathbf{f_m}}_i$ (3)

$\displaystyle \mathbf{P}_{i+1} \leftarrow \mu\, \mathbf{P}_i\,{\mathbf{f_p}}_i.$ (4)

Here $i$ is the index of the outer iteration of the linear problem described by Equations 1 and 2. Notice that I hold $\mu$ constant, although it could be changed from iteration $i$ to iteration $i+1$. Also notice that the regularization operator $\mathbf{A}$ and the regularization parameter $\epsilon$ in Equation 2 could be different for $\mathbf{f_m}$ and $\mathbf{f_p}$. I have chosen to keep them the same to limit the number of adjustable parameters, and this choice worked well in all my tests. The updated convolutional matrices $\mathbf{M}_{i+1}$ and $\mathbf{P}_{i+1}$ are plugged into Equations 1 and 2, and the process is repeated until the cross-talk has been eliminated or significantly attenuated.
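The outer loop of Equations 3 and 4 can be sketched as follows. This is a hedged illustration only: the helper `solve_filters` (standing in for the least-squares fit of Equations 1 and 2) and the use of single traces instead of convolutional matrices are assumptions, as is the fixed number of outer iterations in place of a cross-talk-based stopping criterion.

```python
import numpy as np

def apply_filter(trace, f):
    """M_i f_m in Equation 3: convolve the current model with its filter."""
    return np.convolve(trace, f)[:len(trace)]

def adaptive_matching(m, p, d, solve_filters, mu=1.0, n_outer=5):
    """Outer iterations: refit the filters, then update both models."""
    for i in range(n_outer):
        f_m, f_p = solve_filters(m, p, d, mu)   # Equations 1-2 (assumed helper)
        m = apply_filter(m, f_m)                # Equation 3
        p = mu * apply_filter(p, f_p)           # Equation 4
    return m, p
```

In practice the loop would terminate once the cross-talk between the two estimates is judged to be sufficiently attenuated, rather than after a fixed count.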

2007-10-24