
Improving on existing methods

In prediction filtering, the noise is defined as whatever remains after a signal-annihilation filter is applied to the data. The filter must be designed on a sample of the data that contains as little noise as possible. Since completely noise-free data is never available, however, both f-x and t-x decon pass some noise along with the signal. Most often they "find" quasi-coherent patterns in the noise; the lower panels of Figure [*] are a good example of this phenomenon.
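To make the idea concrete, the following is a minimal sketch, not the implementation used in this paper, of how a spatial prediction filter splits data into a predicted (coherent) part and a residual that is taken as noise. It assumes NumPy, a 2-D array of shape (time samples, traces), and a one-sided, purely spatial filter; the names tx_prediction_noise and n_lags are hypothetical.

import numpy as np

def tx_prediction_noise(data, n_lags=3):
    """Split data into a spatially predicted part and a residual (noise).

    data   : 2-D float array, shape (time samples, traces)
    n_lags : number of preceding traces used to predict each trace
             (illustrative choice; the paper does not fix a size here)
    """
    nt, nx = data.shape
    # Stack the regressions "trace j from traces j-1 .. j-n_lags"
    rows, rhs = [], []
    for j in range(n_lags, nx):
        rows.append(np.column_stack([data[:, j - k]
                                     for k in range(1, n_lags + 1)]))
        rhs.append(data[:, j])
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    # Least-squares prediction-filter coefficients, designed on the data itself
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Predicted (signal) part; the first n_lags traces are left unpredicted
    pred = np.zeros(data.shape)
    for j in range(n_lags, nx):
        pred[:, j] = sum(coef[k - 1] * data[:, j - k]
                         for k in range(1, n_lags + 1))
    noise = data - pred            # whatever the filter cannot predict
    return pred, noise

Because the filter is designed on noisy data, part of the noise inevitably leaks into the predicted (signal) output, which is exactly the leakage discussed above.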

A close examination of the spurious coherent patterns passed by f-x decon and by t-x decon reveals that they differ. Simply averaging the two results therefore increases the signal/noise ratio.[*] Moreover, the potential for improvement does not stop there. After averaging, the two semi-coherent noise patterns have interfered with each other, so their coherence is much reduced. The noise in the averaged result is therefore vulnerable once more to attenuation by spatial prediction filtering. In practice, I found that a second t-x decon with a smaller filter than the first produces good results. The Enhanced Random Noise Attenuation (ERNA) flowchart is displayed in Figure [*]. Figures [*], [*] and [*] show the result of applying this technique to real poststack seismic data,[*] real prestack seismic data[*] "spiked" with synthetic noise, and Ground-Penetrating Radar (GPR) data.[*] The results of f-x and t-x decon are displayed alongside in the bottom panels for comparison.
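The ERNA cascade just described can be summarized in a few lines. In the sketch below, fx_decon and tx_decon are user-supplied callables returning a signal estimate; they stand in for the actual decon routines, which are not listed in this paper, and the names erna, nfilt_big, and nfilt_small are hypothetical.

def erna(data, fx_decon, tx_decon, nfilt_big, nfilt_small):
    """ERNA cascade sketch: run both decons, average, then re-filter."""
    sig_fx = fx_decon(data)                   # signal estimate from f-x decon
    sig_tx = tx_decon(data, nfilt=nfilt_big)  # signal estimate from t-x decon
    # Averaging: the two different quasi-coherent noise patterns interfere,
    # so the residual noise loses much of its coherence
    avg = 0.5 * (sig_fx + sig_tx)
    # Second t-x pass with a smaller filter attenuates what coherence remains
    return tx_decon(avg, nfilt=nfilt_small)

# A stand-in t-x decon can be built from the earlier sketch, e.g.:
# tx_decon = lambda d, nfilt: tx_prediction_noise(d, n_lags=nfilt)[0]
# signal = erna(data, my_fx_decon, tx_decon, nfilt_big=5, nfilt_small=3)

The filter sizes in the usage comment are illustrative only; the key constraint from the text is that the second t-x filter be smaller than the first.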

 
Figure 1: Enhanced Random Noise Attenuation flowchart

