
Discussion-Conclusion

In this paper, I compared multiple attenuation results obtained with three different techniques: (1) the hyperbolic Radon transform (HRT), in which multiples are attenuated according to their moveout; (2) the Delft approach, in which the multiples are first predicted and then adaptively subtracted from the data; and (3) a pattern-based method, in which the multiples and primaries are separated according to their multivariate spectra. These three techniques were tested on a deep-water Gulf of Mexico dataset. This dataset features a shallow salt body that creates strong, complex multiples in the subsurface. The main goal of this paper was to illustrate the strengths and weaknesses of each method. It also intended to show that our most recent developments in the estimation of non-stationary prediction-error filters can be used effectively for multiple attenuation.

From my results, it appears that the HRT is unable to attenuate multiples properly in complex geology. The problem is that the moveouts of the multiples cannot be described by simple functions, particularly when multiples associated with the rugose salt are present in the gather. With non-hyperbolic events, the energy of the multiples spreads out in the Radon domain, so the primaries and the multiples do not separate well. However, the HRT does a good job of removing multiples outside the salt boundaries, where it compares favorably with the Delft approach. For complex geology, applying the HRT in the image space (Sava and Guitton, 2003) is a better alternative.
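To make the moveout argument concrete, here is a minimal sketch of a hyperbolic Radon operator written in the adjoint-flag style common in SEP programs. It stacks a CMP gather along hyperbolas t = sqrt(tau^2 + (x/v)^2) into a velocity panel, or sprays the panel back into the gather. The function name hypradon and the grids t, x, and v are illustrative assumptions (v is assumed positive); in practice the plain adjoint is replaced by a least-squares, often sparsity-promoting, inversion, and demultiple then amounts to muting one region of the panel and subtracting its remodeled data.

    import numpy as np

    def hypradon(adj, gather, panel, t, x, v):
        """Claerbout-style linear operator for the hyperbolic Radon
        transform.  adj=True stacks the gather (nt, nx) along
        t = sqrt(tau**2 + (x/v)**2) into the panel (nt, nv);
        adj=False sprays the panel back into the gather."""
        nt, dt = len(t), t[1] - t[0]
        for i, tau in enumerate(t):            # tau shares the t axis
            for j, vel in enumerate(v):
                th = np.sqrt(tau**2 + (x / vel)**2)        # hyperbola
                it = np.round((th - t[0]) / dt).astype(int)
                ok = np.nonzero((it >= 0) & (it < nt))[0]  # in range
                if adj:   # stack the data along the hyperbola
                    panel[i, j] += gather[it[ok], ok].sum()
                else:     # spray the panel value along the hyperbola
                    gather[it[ok], ok] += panel[i, j]
        return panel if adj else gather

When the multiples are not hyperbolic, as under the rugose salt, no single (tau, v) cell captures them, which is exactly the energy smearing described above.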

The Delft approach with non-stationary adaptive filtering removes more multiples inside the salt boundaries than the HRT; in particular, it attenuates complex multiples much better. One drawback of the adaptive filtering approach, however, is that it is very sensitive to inaccuracies in the multiple model. Thus, the quality of the subtraction decreases at short offsets and where unmodeled multiples, such as diffracted multiples or other 3-D effects, are present.
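The heart of the subtraction step is a least-squares matching filter that shapes the predicted multiples to the recorded data. The sketch below is a deliberately stationary, single-trace version: the function name matching_filter and the filter length nf are assumptions, and the non-stationary filtering used in this paper instead estimates such filters in overlapping windows, with regularization that forces them to vary smoothly in time and offset.

    import numpy as np

    def matching_filter(data, mult, nf):
        """Estimate one least-squares matching filter f of length nf
        minimizing ||data - f * mult||_2 over a single trace, then
        return the data with the matched multiples subtracted."""
        n = len(data)
        # convolution matrix M such that M @ f == f convolved with mult
        M = np.zeros((n, nf))
        for k in range(nf):
            M[k:, k] = mult[:n - k]
        # least-squares filter estimate
        f, *_ = np.linalg.lstsq(M, data, rcond=None)
        return data - M @ f  # estimated primaries

Because f is found by minimizing the total residual energy, any mismatch between the predicted and actual multiples, such as the short-offset and diffracted-multiple errors noted above, leaks directly into the estimated primaries.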

It is very pleasing to see that the pattern-based method leads to the best multiple attenuation results both inside and outside the salt boundaries. One main advantage of these techniques is that they cope slightly better with inaccuracies in the multiple model than the adaptive subtraction does. This is because the prediction-error filters approximate multidimensional spectra: as long as the multiple model has the same ``character'' as the real multiples, the attenuation works. In addition, more dimensionality in the filters and in the data noticeably increases the quality of the multiple attenuation results. Thus, by adding more dimensions to our data and filters, we can better discriminate between the primaries and the multiples because they look more different.
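As a rough illustration of the separation itself, assume a prediction-error filter has already been estimated from the multiple model and another from the data. The primaries s can then be estimated by least squares, requiring the multiple PEF to whiten the residual d - s while the signal PEF penalizes s. The 1-D sketch below (the names pattern_separation, pef_noise, pef_signal, and the trade-off parameter eps are assumptions, and the paper's implementation is multidimensional and non-stationary) solves the two stacked fitting goals with SciPy's lsqr:

    import numpy as np
    from scipy.signal import lfilter
    from scipy.sparse.linalg import LinearOperator, lsqr

    def pattern_separation(data, pef_noise, pef_signal, eps):
        """Separate primaries s from data d by solving
            min_s ||A_n (d - s)||^2 + eps^2 ||A_s s||^2,
        where A_n and A_s convolve with PEFs estimated from the
        multiple model and from the data.  1-D sketch, PEFs given."""
        n = len(data)
        An = lambda x: lfilter(pef_noise, [1.0], x)
        As = lambda x: lfilter(pef_signal, [1.0], x)
        def matvec(s):
            # stack the two fitting goals into one system
            return np.concatenate([An(s), eps * As(s)])
        def rmatvec(r):
            # adjoint of causal convolution is correlation
            corr = lambda f, x: lfilter(f, [1.0], x[::-1])[::-1]
            return corr(pef_noise, r[:n]) + eps * corr(pef_signal, r[n:])
        A = LinearOperator((2 * n, n), matvec=matvec, rmatvec=rmatvec)
        rhs = np.concatenate([An(data), np.zeros(n)])
        s = lsqr(A, rhs)[0]
        return s  # estimated primaries; d - s estimates the multiples

The parameter eps here plays the role of the regularization parameter whose choice is discussed below.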

From this analysis, it appears that the pattern-based approach with 3-D non-stationary filters yields the best multiple attenuation result. At this stage, it is important to keep in mind that the pattern-based approach works only as long as the primaries and multiples do not have the same multivariate spectrum. If they do, we can either try to estimate the cross-spectrum, estimate the regularization parameter better, or choose another method. The Gulf of Mexico dataset does not seem to cause this kind of problem, however. The next ``natural'' step is to test this technique with 3-D data. Of course, I need to devise a strategy for modeling the multiples in 3-D. The convolutional model of the Delft approach might not be feasible in practice, and more practical approaches, like the one currently being developed by Brown (2003), will be needed.

