
Examples on real data

I tested the proposed CGG method on two real data sets that contain various types of noise. The data are shot gathers from land surveys. The event trajectories in both data sets look "hyperbolic" enough to be tested with hyperbolic inversion.
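"Hyperbolic" here refers to the standard moveout trajectory $t(x)=\sqrt{t_0^2 + (x/v)^2}$ that velocity-stack inversion assumes. A quick illustration (function and variable names are mine, not from this paper):

```python
import numpy as np

def hyperbolic_traveltime(t0, offset, velocity):
    """Two-way traveltime of a hyperbolic event:
    t(x) = sqrt(t0**2 + (x / v)**2)."""
    return np.sqrt(t0**2 + (offset / velocity) ** 2)

offsets = np.linspace(0.0, 2.0, 5)            # offsets in km
t = hyperbolic_traveltime(0.8, offsets, 2.0)  # t0 = 0.8 s, v = 2.0 km/s
```

Each (t0, v) point in the velocity-stack model maps to one such trajectory in the data; events whose moveout does not fit any trajectory in the model space (such as ground roll) cannot be represented and are rejected by the inversion.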

Figure 5 shows one of the real data sets tested. The noise in the data consists mainly of strong ground roll, amplitude anomalies early at near offset and late at 0.8 km offset, and time shifts around offsets 1.6 km and 2.0 km. Figure 6(a) shows the modeled data obtained using the IRLS algorithm in an L1-norm sense, and Figure 6(b) the modeled data obtained using the CGG algorithm with residual weighting in an L1-norm sense. In both panels of Figure 6 the noise is greatly reduced, especially the ground roll, since the limited-size model space does not include the ground-roll velocity. However, the IRLS method attenuates the far-offset signal too much; the signal is preserved better by CGG with residual weighting (Figure 6(b)) than by the IRLS inversion (Figure 6(a)). Figure 7(a) shows the modeled data obtained using the CGG algorithm with gradient weighting. The following weighting factor was used:

\begin{displaymath}
\mbox{diag}(\bold W_i) = \vert \bold m_{i-1} \vert^{1.5} ,
\end{displaymath}

that is, the diagonal elements of the weighting matrix at the i-th iteration are the absolute values of the (i-1)-th iteration model-vector elements raised to the power 1.5. Comparing this with Figure 6(b) shows that weighting the gradient directly can give better noise reduction at far offset than CGG with residual weighting. Figure 7(b) shows the modeled data obtained using the CGG algorithm with both residual and gradient weighting. In this case, the result is very similar to that of gradient-only weighting (Figure 7(a)), which tells us that most of the noise can be removed by weighting only the gradient.
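The gradient weighting above takes only a few lines to form. The sketch below is a minimal illustration under assumed names (the operator `A`, the previous model `m_prev`, and the `eps` floor are mine), not the implementation used in this paper:

```python
import numpy as np

def gradient_weight(m_prev, power=1.5, eps=1e-12):
    """Diagonal of the gradient-weighting matrix W_i:
    |m_{i-1}|**power; eps keeps the weight nonzero at start-up."""
    return np.abs(m_prev) ** power + eps

# Toy example: guide the gradient g = A'r with the previous model.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))     # stand-in forward operator
m_prev = rng.standard_normal(5)     # model from the previous iteration
r = rng.standard_normal(8)          # current residual

g = A.T @ r                               # ordinary CG gradient
g_guided = gradient_weight(m_prev) * g    # guided (weighted) gradient
```

Because the weight grows with $\vert m \vert$, model components that were strong at the previous iteration are updated preferentially, which is what drives the solution toward a sparse (spiky) velocity stack.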

The velocity stacks obtained from the various inversions are shown in Figure 8. As in the synthetic data example, the velocity stacks obtained by the CGG method with gradient weighting, Figure 8(c) and (d), show a more parsimonious representation than the others.

 
Figure 5
The real data used for the inversion. Notice the strong ground roll, the amplitude anomalies early at near offset and late at 0.8 km offset, and the time shift around offsets 1.6 km and 2.0 km.

 
Figure 6
Remodeled data from the IRLS inversion with L1-norm minimization (a) and from the CGG method with iteratively reweighted residual in an L1-norm sense (b).

 
Figure 7
Remodeled data from the CGG method with iteratively reweighted gradient (a) and from the CGG method with iteratively reweighted residual and gradient together (b).

 
Figure 8
Velocity-stack panels obtained by various inversions. From left to right: IRLS, CGG with residual weighting, CGG with gradient weighting, and CGG with both residual and gradient weighting.

Figure 9 shows the second real data set used for testing. The noise can be characterized by anomalous time shifts and noisy amplitudes at near offset around 1.5 s, several junk traces at middle offsets, and widespread background noise. Figure 10(a) shows the modeled data obtained using the IRLS algorithm in an L1-norm sense, and Figure 10(b) the modeled data obtained using the CGG algorithm with residual weighting in an L1-norm sense. In both results, most of the noise is greatly reduced. However, some of the anomalous-amplitude noise is better suppressed by CGG with residual weighting (Figure 10(b)) than by the IRLS inversion (Figure 10(a)), and the IRLS inversion again over-attenuates the far-offset signal. For the same number of iterations, the CGG residual weighting is more effective than the IRLS-style residual weighting. Figure 11(a) shows the modeled data obtained using the CGG algorithm with gradient weighting. The following weighting factor was used:

\begin{displaymath}
\mbox{diag}(\bold W_i) = \vert \bold m_{i-1} \vert^{2} ,
\end{displaymath}

that is, the diagonal elements of the weighting matrix at the i-th iteration are the squares of the (i-1)-th iteration model-vector elements. Comparing this with Figure 10(b) shows the interesting result that weighting the gradient directly can achieve noise reduction similar to that of residual weighting.

Figure 11(b) shows the modeled data obtained using the CGG algorithm with both residual and gradient weighting. In this case, the result differs little from that of gradient-only weighting (Figure 11(a)), and most of the noise could be removed by weighting only the gradient.

The velocity stacks obtained from the various inversions are shown in Figure 12. As in the synthetic data example, the velocity stacks obtained by the CGG method with gradient weighting show a more parsimonious representation than the others.

 
Figure 9
The real data used for the inversion. Notice the amplitude anomalies and time shift at near-offset traces, unrealistic junk traces at 1.8 km and 2.7 km, and widespread noise.

 
Figure 10
Remodeled data from the IRLS inversion with L1-norm minimization (a), and remodeled data from the CGG method with iteratively reweighted residual in the L1-norm sense (b).

 
Figure 11
Remodeled data from the CGG method with iteratively reweighted gradient (a), and from the CGG method with iteratively reweighted residual and gradient together (b).

 
Figure 12
Velocity-stack panels obtained by various inversions. From left to right: IRLS, CGG with residual weighting, CGG with gradient weighting, and CGG with both residual and gradient weighting.

The proposed CGG (Conjugate Guided Gradient) inversion method is a modified CG (Conjugate Gradient) method that guides the gradient vector during the iterations and allows the user to impose various constraints on the residual and the model. The guiding is implemented by weighting the residual vector and the gradient vector, either separately or together. Weighting the residual vector with the residual itself guides the solution search toward Lp-norm minimization; weighting the gradient vector with the model itself guides the search toward the imposed a priori information. Tests of the CGG algorithm on velocity-stack inversion of noisy synthetic and real data show that its residual weighting is comparable to or better than IRLS for the L1-norm solution. Gradient weighting produces a spikier velocity spectrum than any of the Lp-norm solutions, which is preferable for velocity picking. I therefore consider the CGG method a viable alternative to the more traditional IRLS method for robust inversion of seismic data.
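For concreteness, the guiding summarized above can be sketched as a weighted gradient-descent loop with an exact line search (omitting the conjugacy recursion for brevity). This is a minimal sketch under my own assumptions, the matrix `A`, data `d`, and `eps` safeguards are illustrative, and it is not the author's implementation:

```python
import numpy as np

def cgg_sketch(A, d, niter=50, p=1.0, eps=1e-8):
    """Guided-gradient sketch: each update weights the residual by
    |r|**(p-2) (steering toward an L_p fit) and the gradient by |m|
    (steering toward a sparse model).  Illustrative only."""
    m = np.zeros(A.shape[1])
    for _ in range(niter):
        r = d - A @ m
        wr = (np.abs(r) + eps) ** (p - 2.0)   # residual weighting -> L_p
        wm = np.abs(m) + eps                  # gradient (model) weighting
        g = wm * (A.T @ (wr * r))             # guided gradient
        Ag = A @ g
        den = Ag @ (wr * Ag)                  # weighted line-search denominator
        if den == 0.0:
            break
        m = m + (Ag @ (wr * r)) / den * g     # exact step for the weighted misfit
    return m
```

With p = 2 and no model weighting this reduces to ordinary steepest descent; lowering p toward 1 de-emphasizes large residuals (noise bursts), and the |m| gradient weight suppresses weak model components, which is what yields the spikier velocity spectra.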


Stanford Exploration Project
5/23/2004