Next: REORDERING THE OFFSET ORDER Up: Sun: Multiple suppression using Previous: Sun: Multiple suppression using

# INTRODUCTION

One of the essential differences between multiples and primaries is moveout. On the basis of this difference, Lumley et al. (1994) developed a velocity-stacking approach to multiple suppression. In the velocity-stacking domain, multiple and primary energy are separated, and a masking function is applied to remove the multiple energy. The data can then be inverse-transformed back to the time domain with only the primary energy left.

Since the velocity-stacking operator is time-variant, it does not have a Toeplitz structure in the frequency domain. Therefore, given the operator and its adjoint, the inverse transform has to be formulated as an Lp-norm optimization problem, which makes this approach fairly expensive. In addition, it does not handle near-offset multiple events very well: because a near-offset multiple event is nearly parallel to the primary event, it is very difficult to separate the two in the velocity-stacking domain.

In this paper, I propose a new approach, which also makes use of the moveout difference between primary and multiple events. Instead of a velocity-stacking transform, normal-moveout (NMO) correction is applied to the CMP gather. Ideally, the hyperbolic primary events are flattened after NMO correction, whereas the multiple events are not: they retain a residual moveout. I then randomize the order of the offsets. This process has little influence on the primaries, but because of the residual moveout, the shape of each multiple event is destroyed. In other words, after this randomization the primary energy is still coherent, while the multiple energy looks like random noise. Therefore, the multiple energy can be removed using a prediction-error filter (PEF).
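The effect of offset randomization can be illustrated on a toy NMO-corrected gather. The sketch below is my own illustration, not the paper's code: the gather is a small (time, offset) array holding one flattened primary and one multiple with residual moveout, and randomizing the offset order amounts to permuting the trace columns.

```python
import numpy as np

# Toy NMO-corrected CMP gather as a (time, offset) array (illustrative
# sizes, not from the paper): a flattened primary occupies a single time
# row; a multiple with residual moveout drifts to later times with offset.
nt, nx = 100, 32
gather = np.zeros((nt, nx))
gather[40, :] = 1.0                      # primary: flat after NMO
for ix in range(nx):                     # multiple: residual moveout
    gather[60 + ix // 4, ix] = 1.0

# Randomize the offset order by permuting the trace columns.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(nx)
shuffled = gather[:, perm]

# The flat primary is unchanged by any column permutation...
assert np.array_equal(shuffled[40, :], gather[40, :])
# ...while the multiple's smooth trend across offset is scrambled:
# adjacent traces now jump among the residual-moveout times at random.
```

A column permutation leaves every time row intact trace-by-trace, which is why a perfectly flattened event is coherent in any offset order, while an event whose arrival time varies with offset loses its lateral predictability.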

The randomization of the offset order can be regarded as a random process, which can be applied to a single CMP gather many times to produce an ensemble of samples. The ensemble forms a 3-D cube, which can be further decomposed into many small 3-D subcubes. A 3-D PEF is estimated from each subcube and then convolved with the cube to remove the multiple energy. After that, all the samples are averaged back into one CMP gather, which is ideally free of multiples. Here, the estimation of the PEF for each subcube is posed as an inversion problem.
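The ensemble bookkeeping can be sketched as follows. This is a minimal sketch under two assumptions of mine, not statements from the paper: each randomized copy is returned to its original offset order before averaging, and the 3-D PEF deconvolution step is represented by a no-op placeholder (`build_ensemble` and `average_back` are hypothetical names).

```python
import numpy as np

def build_ensemble(gather, n_real, rng):
    """Stack n_real offset-randomized copies of a gather into a 3-D cube."""
    nt, nx = gather.shape
    cube = np.empty((n_real, nt, nx))
    perms = []
    for k in range(n_real):
        p = rng.permutation(nx)          # one random offset order per copy
        perms.append(p)
        cube[k] = gather[:, p]
    return cube, perms

def average_back(cube, perms):
    """Undo each realization's permutation, then average the ensemble."""
    n_real, nt, nx = cube.shape
    out = np.zeros((nt, nx))
    for k, p in enumerate(perms):
        restored = np.empty((nt, nx))
        restored[:, p] = cube[k]         # inverse of gather[:, p]
        out += restored
    return out / n_real

rng = np.random.default_rng(1)
gather = np.zeros((64, 16))
gather[20, :] = 1.0                      # toy flat primary
cube, perms = build_ensemble(gather, n_real=8, rng=rng)
# The PEF deconvolution of each subcube would act on `cube` here; with
# the no-op placeholder, the round trip reproduces the input exactly:
assert np.allclose(average_back(cube, perms), gather)
```

The averaging step is what makes the ensemble useful: whatever incoherent multiple energy survives the filtering differs from realization to realization, so averaging over realizations suppresses it further, while the coherent primary is identical in every realization.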

In order to improve the efficiency of the algorithm, I set the initial guess of the PEF for each subcube to be the PEF of its preceding subcube. In other words, except for the first subcube, which starts with a zero-valued initial guess, every subsequent subcube takes the last estimated PEF as its initial guess. Therefore, the number of iterations can be reduced to one for all subsequent subcubes with little loss of accuracy. Since we are mainly interested in preserving the horizontal events, there are some requirements regarding the shape of the PEF.

Two synthetic examples and one real-data example are presented to show that this multiple suppression approach removes multiples from near to far offsets; its performance at near offsets is especially promising. Since the approach is based on a PEF, other kinds of random noise are removed along with the multiples, so the signal-to-noise ratio also increases.

Stanford Exploration Project
11/11/1997