
Deburst

The next example of regularization is the problem of removing large spiky noise from experimental data. The input synthetic data, shown in the top plot of Figure 8, contains numerous noise spikes and bursts. Some of the noise bursts are a hundred times larger than shown. Simple median smoothing (second plot from the top in Figure 8) can remove some individual spikes but fails to provide an adequate output overall. Claerbout suggests iteratively reweighted least squares as a robust and efficient method of despiking. The operator L in this case is simply the identity, but we weight equation (1) by a weighting operator W, chosen to suppress the non-Gaussian statistical distribution of the noise. Good results in this example were achieved with the ``Cauchy'' weighting function
\begin{displaymath}
W (r_i) = \mbox{\textbf{diag}}
\left(\frac{1}{\sqrt{1 + \frac{r_i^2}{\bar{r}^2}}}\right)\;,
\end{displaymath} (29)
where $r_i$ denote the components of the residual $r = d - m$, and $\bar{r}$ is the median value of the residual. The dependence of the weighting operator on r makes the problem nonlinear, but iterative reweighting allows us to approach it with piecewise linearization.
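
As an illustration of the reweighting loop, the following Python sketch solves the model-space regularized problem with the Laplacian roughening filter (1,-2,1) discussed below. The function names, parameter values, and dense direct solve are illustrative assumptions, standing in for the conjugate-gradient solver used in practice.

    import numpy as np

    def cauchy_weights(r):
        # Cauchy weighting of equation (29): w_i = 1/sqrt(1 + r_i^2/rbar^2),
        # with rbar taken here as the median of |r|
        rbar = np.median(np.abs(r)) + 1e-12    # guard against a zero median
        return 1.0 / np.sqrt(1.0 + (r / rbar) ** 2)

    def deburst_model_space(d, lam=1.0, niter=10):
        # Model-space regularized IRLS deburst: freeze W at the current
        # residual, then solve the piecewise-linear normal equations
        #     (W^T W + lam^2 D^T D) m = W^T W d,
        # where D is convolution with the Laplacian filter (1, -2, 1)
        n = len(d)
        D = np.zeros((n - 2, n))
        for i in range(n - 2):
            D[i, i:i + 3] = (1.0, -2.0, 1.0)
        DtD = D.T @ D
        m = d.copy()
        for _ in range(niter):
            w2 = cauchy_weights(d - m) ** 2    # reweight from the residual
            m = np.linalg.solve(np.diag(w2) + lam ** 2 * DtD, w2 * d)
        return m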

 
Figure 8. The top plot is synthetic data with noise spikes and bursts. The second plot is the result of despiking with running medians. The two bottom plots are outputs of the deburst process with regularized iteratively reweighted least squares.

Claerbout's model-space regularization used convolution with the Laplacian filter (1,-2,1) as the roughening operator D. For comparison with the data-space regularization, I applied triangle smoothing as the preconditioning operator P. The results, shown in the two bottom plots of Figure 8, look similar. Both methods succeeded in removing the noise bursts from the data and producing a smooth output. The data-space regularization did a better job of preserving the amplitudes of the original data. This effect partially results from a weaker dependency on the scaling parameter $\lambda$, which I reduced to 0.01 (compared with 1 in the case of model-space regularization). The model residual plot in Figure 9 again shows considerably faster convergence for the data-space method, in complete agreement with the theory.
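
A corresponding sketch of the data-space variant replaces the roughening penalty with triangle preconditioning: solving for p in m = P p, where P smooths with a triangle filter (two passes of a box filter). The dense construction of P, the helper names, and the filter length are again assumptions for illustration; eps = 0.01 echoes the scaling used above, and cauchy_weights is reused from the previous sketch.

    def triangle_smooth(x, nbox=5):
        # Triangle smoothing as two passes of a box filter; the filter
        # length nbox is an illustrative choice
        box = np.ones(nbox) / nbox
        for _ in range(2):
            x = np.convolve(x, box, mode='same')
        return x

    def deburst_data_space(d, eps=0.01, nbox=5, niter=10):
        # Data-space (preconditioned) IRLS deburst: with m = P p, freeze W
        # at the current residual and solve
        #     (P^T W^T W P + eps^2 I) p = P^T W^T W d
        n = len(d)
        # build P columnwise from the impulse responses of triangle_smooth
        P = np.column_stack([triangle_smooth(e, nbox) for e in np.eye(n)])
        p = np.zeros(n)
        for _ in range(niter):
            w2 = cauchy_weights(d - P @ p) ** 2
            A = P.T @ (w2[:, None] * P) + eps ** 2 * np.eye(n)
            p = np.linalg.solve(A, P.T @ (w2 * d))
        return P @ p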

 
Figure 9. Convergence of the iterative optimization for deburst, measured in terms of the model residual. The ``d'' points stand for data-space regularization; the ``m'' points, for model-space regularization.

