
Data editing

 

This chapter addresses an issue somewhat different from the topics covered in the rest of this thesis. This issue is data editing, or the removal of bad data by muting, one of the simplest methods of removing noise. The emphasis in this chapter is on eliminating isolated, high-amplitude, unpredictable events. These events are unpredictable in the sense that they leave high-amplitude residuals when predictable events are removed, in contrast to the low-amplitude residuals normally expected as prediction errors.

Data editing was one of the original methods of controlling noise in seismic data. When a seismic trace was dominated by noise, it was simply removed. For prestack data destined only for stacking, removing an offending trace hardly affected the stack as long as the fold was high. Even then, only traces completely overwhelmed by high-amplitude noise needed attention, since moderately bad traces generally did not affect the stacked result.

In modern data processing, manual editing of traces is no longer practical: the volume of data is so large that a processor cannot examine all of it in a reasonable time. Automatic editing is especially important for three-dimensional surveys because of the huge data volumes involved. These large data volumes have motivated other efforts to edit data automatically (Pokhriyal et al., 1991; Pokhriyal, 1993).

With modern acquisition systems, the dynamic range of the recorded data is much larger than with older systems. While this large dynamic range allows more accurate recording of the desired signals, it also admits higher-amplitude noise into the data. This noise may arise either from recording unwanted signals, such as ground roll, or from imperfections in the recording instruments. Modern data-processing techniques preserve these high amplitudes, whereas some older processing methods would have limited the highest amplitudes to some "reasonable" value.

Another problem is that data is no longer simply stacked; clean prestack data is needed for processes such as velocity determination, prestack migration, and AVO measurements. The processes discussed later in this thesis will also be affected by high-amplitude noise in the data. High-amplitude noise is a particular problem when least-squares methods are being used, since the square of a high-amplitude error is likely to overwhelm the smaller errors that are of more interest.
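To see why, consider some illustrative numbers (chosen for this example only, not taken from any data discussed here): suppose a gather has 100 residual samples of unit amplitude and a single noise burst of amplitude 100. The least-squares objective is then

    \sum_i r_i^2 \;=\; 100 \cdot 1^2 \;+\; 100^2 \;=\; 100 + 10000,

so the one bad sample carries one hundred times the weight of all the ordinary residuals combined, and a least-squares fit will be driven largely by that single sample.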

The goal of this chapter is to locate, eliminate, and mark the positions of high-amplitude noise so that it does not contaminate the least-squares techniques presented in the following chapters. Marking the locations of bad data serves two purposes. The first is to avoid mistaking the zeroed data for good data. The second is to allow the removed data to be replaced later with an estimate derived from an inversion process.

The unpredictable noise will be detected where the residual of a prediction-error filter exceeds a threshold defined from the data. The technique will be demonstrated on real data containing a variety of noise types.
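As a minimal sketch of this kind of editing step, assume a prediction-error filter has already been estimated for a trace. The function name edit_with_pef, the scale factor k, and the median-based threshold below are illustrative assumptions, not the exact procedure developed in this chapter.

    import numpy as np

    def edit_with_pef(trace, pef, k=5.0):
        """Zero samples whose prediction-error-filter residual is anomalously
        large, and return a mask marking the edited (zeroed) positions.

        trace : 1-D array of seismic samples
        pef   : prediction-error filter coefficients (leading 1 included)
        k     : threshold scale factor (illustrative choice)
        """
        # Residual of the prediction-error filter applied to the trace.
        residual = np.convolve(trace, pef, mode="same")

        # Data-defined limit: a robust (median-based) scale of the residual,
        # so the bad samples themselves do not inflate the threshold.
        threshold = k * np.median(np.abs(residual))

        bad = np.abs(residual) > threshold    # mask marking bad-data positions
        edited = np.where(bad, 0.0, trace)    # mute (zero) the flagged samples
        return edited, bad

The mask of edited positions is returned along with the muted trace so that the zeroed samples are not later mistaken for genuine low-amplitude data and can be refilled by an inversion-derived estimate, as discussed above.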



 