
Practical issues

The improvement brought about by ERNA increases with the amount of random noise up to a point, after which it decreases as the signal becomes totally dominated by noise. As with any noise-removal process, the residual (the eliminated noise) should be routinely examined to check whether any meaningful signal has been removed along with the noise. Residuals should ideally look like pure white noise. If they do not, substitute the residual for the input data and iterate again (see Figure [*]). If signal is still visible in the last residual and additional iterations bring no improvement, the parameters of the f-x and t-x decons may need to be modified.
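As a minimal illustration of this loop, the following Python/NumPy sketch accumulates signal by repeatedly re-filtering the residual. The function erna_pass is a hypothetical placeholder standing in for one ERNA pass (the averaged f-x and t-x decon results); it is not a SEPlib program, and the stopping criterion is only indicative.

    import numpy as np

    def iterate_erna(data, erna_pass, max_iter=5, tol=1e-3):
        """Repeatedly re-filter the residual until it looks like pure noise.

        erna_pass(panel) is a placeholder for one ERNA pass; it must return
        the signal estimated from its input panel.
        """
        data = np.asarray(data, dtype=float)
        total_signal = np.zeros_like(data)
        residual = data.copy()
        rms_in = np.sqrt(np.mean(data ** 2))
        for _ in range(max_iter):
            signal = erna_pass(residual)     # signal extracted this iteration
            total_signal += signal
            residual = residual - signal     # new residual becomes next input
            # stop once a pass no longer extracts significant energy
            if np.sqrt(np.mean(signal ** 2)) < tol * rms_in:
                break
        return total_signal, residual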

Sometimes a very small amount of signal is left in the residuals even after extensive adjustment of the parameters. While this may indicate that the optimal combination of parameters has not yet been found, in practice the result is still better than either f-x or t-x decon alone. Given the way the method works (averaging the results of f-x and t-x decons), for signal to be overlooked by ERNA it has to be overlooked by both types of decon. The left panels of Figure [*] show the final residuals after applying several ERNA iterations to each of the three datasets presented in Figures [*], [*], and [*]. Hints of signal can be distinguished, especially in the prestack seismic residual. Keep in mind, however, that the amplitude in these panels has been gained for visualization. The Root Mean Square (RMS) values of the residuals throughout the iterations, normalized to the RMS of the input data, are shown in the right-hand panels of the same Figure [*]. The first iteration removes most of the noise; subsequent iterations only extract leftover signal from the residuals. After a small number of iterations ERNA converges to its practical limit.
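For reference, normalized residual-RMS curves like those in the right-hand panels can be computed with a few lines of NumPy. The list residuals below is assumed to hold one residual panel per iteration; it is an illustrative structure, not the output of any SEPlib program.

    import numpy as np

    def normalized_residual_rms(input_data, residuals):
        """RMS of each iteration's residual, normalized by the input RMS.

        A curve that flattens after a few iterations indicates that ERNA
        has reached its practical limit.
        """
        rms_in = np.sqrt(np.mean(np.asarray(input_data, dtype=float) ** 2))
        return [np.sqrt(np.mean(np.asarray(r, dtype=float) ** 2)) / rms_in
                for r in residuals]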

Experience with data strongly affected by coherent noise shows that the method does not handle crossing dips well, especially when one set of events is less coherent than the other. Applying the method to such data produces a patchwork of small regions in which a single dip dominates, much like the output of a dip-field estimator that seeks the most energetic local dip. Another case in which ERNA may not function optimally is that of very simple synthetics, which contain extended regions of constant (especially zero) values between events; f-x and t-x "decons" may have trouble with such areas. "Spiking" very clean real data with noise (Figure [*]) can be a good alternative to using a simple synthetic.
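A simple way to spike clean data with noise is sketched below; the use of zero-mean Gaussian noise scaled to a target signal-to-noise ratio is an assumption of this sketch, not a prescription of the method.

    import numpy as np

    def spike_with_noise(clean, snr=1.0, seed=0):
        """Add zero-mean Gaussian noise to clean data for a controlled test.

        snr is the desired ratio of signal RMS to noise RMS.
        """
        clean = np.asarray(clean, dtype=float)
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(clean.shape) * np.sqrt(np.mean(clean ** 2)) / snr
        return clean + noise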

Fx2d is cheaper to apply than Txdec, whose cost increases with the cube of the filter size. Txdec can be applied to a 3-D cube, performing true 3-D spatial prediction, while with Fx2d one must resort to a surrogate involving multiple passes. The cost of 3-D t-x decon can nonetheless be prohibitive, so a 2-D decon may have to be used. Simply looping a 2-D noise-attenuation operator along the third dimension of a cube will eliminate data that is coherent along the third dimension but not in the filtering plane; this is easily visible in the residual. To simulate the effect of true 3-D noise attenuation, one takes the residual (not the result) of the first 2-D filtering along the first dimension and filters it along another dimension, obtaining a second result and a second residual. This second residual is then filtered along the third dimension to obtain a third result and a third residual. The final result is the sum of the three filtering results.
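One possible reading of this cascade is sketched below in Python/NumPy. Here filter_2d stands for any 2-D noise attenuator applied to a single plane (for example one f-x or t-x decon pass), and the convention that "filtering along an axis" means looping the 2-D filter over that axis is an assumption of the sketch.

    import numpy as np

    def pseudo_3d_attenuation(cube, filter_2d):
        """Approximate 3-D noise attenuation with three cascaded 2-D passes.

        Each pass filters the residual (not the result) of the previous pass
        along a different cube dimension; the final result is the sum of the
        three partial results.
        """
        def filter_along(data, axis):
            out = np.empty_like(data)
            for i in range(data.shape[axis]):   # loop the 2-D filter along `axis`
                sl = [slice(None)] * data.ndim
                sl[axis] = i
                out[tuple(sl)] = filter_2d(data[tuple(sl)])
            return out

        cube = np.asarray(cube, dtype=float)
        result1 = filter_along(cube, axis=0)
        residual1 = cube - result1
        result2 = filter_along(residual1, axis=1)
        residual2 = residual1 - result2
        result3 = filter_along(residual2, axis=2)
        residual3 = residual2 - result3
        return result1 + result2 + result3, residual3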

A potential objection to the method is that the noise passed by the spatial prediction filtering programs consists only of coding artifacts due to the different window sizes. The response is that when the parameters, including window sizes, are varied within the same method, the artifacts remain similar enough that their coherence is not destroyed when they are superimposed. The patterns passed by any of the f-x runs, however, differ from those passed by the t-x runs, and they interfere so that their coherence is destroyed. The two spatial prediction filtering methods are expected in theory to produce similar results while working in two different domains; in practice, however, they produce different artifacts. A more quantitative analysis of the properties of the noise passed by f-x and t-x decon may be warranted.

My experience showed that ERNA can benefit velocity analysis of 2-D prestack seismic data when automatic velocity analysis is affordable at each midpoint. Denoising common-offset planes greatly improved the ability of automatic velocity-picking programs to produce results that are consistent across midpoints.

F-x decon is commonly implemented in seismic processing packages, and it also exists in SEPlib, as does t-x decon. There is therefore no need to write any code in order to implement the ERNA technique. Everything can be accomplished in a Makefile, a shell script or any other form of batch file.

