Next: Examples on real data Up: Application of the CGG Previous: Application of the CGG

Examples on synthetic data

To examine the performance of the proposed CGG method, a synthetic CMP data set with various types of noise is used. Figure 1 shows the synthetic data with three types of noise: Gaussian noise in the background, bursty noise, and a very noisy trace. Figure 1 (right) shows the same data as Figure 1 (left), but displayed in wiggle format to clearly show the bursty noise, which was not discernible in the raster-format display because of clipping. The amplitudes of the three bursty spikes are eight times the maximum amplitude of the hyperbolas.

Figure 1: Synthetic input data with various noise types in raster format (left) and wiggle format (right).

Figure 2(a) shows the modeled data from the velocity-stack panel obtained using the conventional CG algorithm for the LS solution. The inversion was run for 30 iterations, and the same number of iterations was used for all the other examples, including the real-data cases. We can clearly see the limit of L2-norm minimization: the noise with Gaussian statistics is removed quite well, but some spurious events are generated around the bursty noise spikes and the noisy trace. Figure 2(b) shows the modeled data from the velocity-stack panel obtained using the IRLS algorithm in an L1-norm sense. In Figure 2(b) we can see the robustness of L1-norm minimization: the bursty noise is reduced significantly, but the removal of the background noise is worse than with L2-norm minimization.

Figure 2(c) shows the modeled data from the velocity-stack panel obtained using the CGG method with the iteratively reweighted residual in an L1-norm sense. The result is comparable to the IRLS inversion (Figure 2(b)). This tells us that guiding the gradient vector toward the L1-norm gradient gives a solution similar to the L1-norm solution obtained with the IRLS method. Figure 2(d) shows the modeled data from the velocity-stack panel obtained using the CGG method with the iteratively reweighted gradient as follows:
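To make the residual-reweighting idea concrete, the following sketch shows one common IRLS-style choice of residual weights that steers a weighted L2 misfit toward an L1 misfit. The weight formula 1/max(|r|, eps) and the damping constant `eps` are illustrative assumptions; the paper's exact treatment of small residuals is not given in this section.

```python
import numpy as np

def l1_residual_weights(r, eps=1e-6):
    """IRLS-style residual weights steering an L2 fit toward an L1 fit.

    Weighting each residual by 1/|r| makes the weighted misfit
    sum(w * r**2) = sum(|r|), i.e. the L1 norm of the residual.
    eps guards against division by near-zero residuals (an assumed
    damping choice, not necessarily the paper's).
    """
    return 1.0 / np.maximum(np.abs(r), eps)
```

With these weights, large (bursty) residuals are down-weighted relative to a plain L2 fit, which is why the spikes stop dragging the solution.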

\begin{displaymath}
\mbox{diag} (\mathbf{W}_i) = \vert \mathbf{m}_{i-1} \vert^{1.5} ,\end{displaymath}

that is, the diagonal elements of the weighting matrix at the i-th iteration are the absolute values of the model vector from the previous iteration, raised to the power 1.5. The result shows that the Gaussian background noise and the very noisy trace are better removed than with any of the L1-norm approaches, either in CGG or IRLS. However, some spurious events around the bursty spikes still exist. This is because of the high amplitude of the noise: since the weight depends on the amplitude of the model vector, and since high amplitudes in the CMP gather map to high amplitudes in the velocity-stack panel, the high-amplitude bursty noise receives a higher weighting than low-amplitude noise. However, this kind of artifact would easily be removed if the bursty noise had an amplitude similar to the rest of the signal, which is the case when AGC (automatic gain control) is applied to the data.
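The gradient weighting defined by the equation above is simple to state in code. The sketch below implements exactly diag(W_i) = |m_{i-1}|^1.5; the function name is ours, not the paper's.

```python
import numpy as np

def gradient_weights(m_prev):
    """Diagonal gradient weights diag(W_i) = |m_{i-1}|**1.5.

    Components that were strong in the previous iterate get large
    weights, so the guided gradient keeps building strong events,
    while weak (noise-like) components are suppressed. This is what
    drives the solution toward a parsimonious velocity-stack panel.
    """
    return np.abs(m_prev) ** 1.5
```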

Figure 2: Remodeled data from (a) the LS inversion, (b) the IRLS inversion with L1-norm minimization, (c) the CGG method with iteratively reweighted residuals in an L1-norm sense, and (d) the CGG method with iteratively reweighted gradients.

Figure 3 shows the modeled data obtained using the CGG algorithm with both residual and gradient weighting. In this case, the result looks like modeled data without any noise, because the bursty noise is reduced by the residual weighting (L1-norm criterion), and the background noise is removed by the gradient weighting.
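Combining the two weightings, one iteration can be sketched as follows. This is a schematic steepest-descent variant, not the exact conjugate-gradient recurrence of the CGG method, and the residual weight 1/max(|r|, eps) is an assumed IRLS-style choice; the forward operator L is represented by a plain matrix for illustration.

```python
import numpy as np

def cgg_sketch(L, d, niter=30, eps=1e-6):
    """Schematic inversion with both residual and gradient weighting.

    L : forward operator (e.g. hyperbolic stacking), here a matrix
    d : observed data
    Returns the estimated model (velocity-stack panel as a vector).
    """
    m = np.zeros(L.shape[1])
    for _ in range(niter):
        r = L @ m - d
        wr = 1.0 / np.maximum(np.abs(r), eps)   # residual weights (L1 sense)
        g = L.T @ (wr * r)                      # gradient of weighted misfit
        # gradient weighting diag(W_i) = |m_{i-1}|**1.5;
        # skip on the first pass, where m = 0 would zero the gradient
        if np.any(m):
            g = np.abs(m) ** 1.5 * g
        Lg = L @ g
        denom = np.dot(Lg, Lg)
        if denom == 0.0:                        # residual fully explained
            break
        alpha = np.dot(r, Lg) / denom           # LS line search along g
        m = m - alpha * g
    return m
```

On a trivial identity operator this recovers the data exactly; on a real stacking operator the two weightings play the roles described above, one taming bursty residuals and one sparsifying the model.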

Figure 3: Remodeled data from the CGG method, with iteratively reweighted residuals and gradients together.

The velocity stacks obtained from the various inversions are shown in Figure 4. From left to right, the velocity-stack panels correspond to the results from LS inversion, IRLS inversion, CGG with residual weighting, CGG with gradient weighting, and CGG with both residual and gradient weighting. From these velocity-stack panels, we can deduce why the different inversion methods succeeded with the different noise types. For an application that must distinguish signals in the model space, gradient weighting is the preferred method, because it gives a more parsimonious representation than the others.

Figure 4: Velocity-stack panels obtained by various inversions. From left to right: LS inversion, IRLS inversion, CGG with residual weighting, CGG with gradient weighting, and CGG with both residual and gradient weighting.


Stanford Exploration Project
5/23/2004