
Introduction

The growing abundance of densely sampled recordings of the teleseismic wavefield from regional-scale portable and permanent seismic arrays demands more sophisticated array-processing algorithms of the kind previously developed for seismic exploration. Most of the effort to date has concentrated on depth-imaging algorithms that use forward-scattered arrivals and free-surface multiples (Aprea et al., 2002; Dueker and Sheehan, 1997; Poppeliers and Pavlis, 2003; Rondenay et al., 2001; Sheehan et al., 2000; Wilson et al., 2003; Wilson and Aster, 2004). We suggest that imaging efforts undertaken without other preprocessing steps are often misguided. Seismic imaging algorithms are designed for noise-free datasets with an ideal distribution of source-receiver geometries that illuminates the imaging target from all angles. Even exploration data collected with industry-standard acquisition geometries and well-controlled local sources with clean, easily modeled source functions are plagued by noise and incomplete illumination. To overcome these acquisition shortcomings, industry processing flows usually begin with some combination of deconvolution, data interpolation and regridding, surface static corrections, datuming, and spatial/temporal filtering (Yilmaz, 1997).

Although many arrivals in the teleseismic wavefield have signal-to-noise ratios comparable to, and sometimes better than, those of exploration experiments, all teleseismic imaging experiments suffer from sparse, incomplete, and irregular angular and spatial sampling (e.g., a limited range of source-receiver offsets and azimuths). These difficulties hamper imaging efforts and require preprocessing steps similar to those used by the exploration industry. Deconvolution in the form of traditional "receiver function analysis" is of course widely employed to enhance receiver-side converted arrivals (Langston, 1977; Phinney, 1964). Initial attempts to apply f-k (Wilson and Aster, 2004) or Karhunen-Loeve (Rondenay et al., 2001) filtering to teleseismic data have shown promise, although these may not be the best methods of signal extraction because of their limited spatial-frequency resolution and their inability to cope with time-variable (non-stationary) signals. Other preprocessing efforts have used the predicted teleseismic P-wave slowness for a given arrival to separate signal from near-surface scattering (Jones and Phinney, 1998; Wilson et al., 2003, 2004) and to interpolate data traces (Poppeliers and Pavlis, 2003). However, aside from deconvolution, none of these or other widely used, industry-standard preprocessing steps has become commonplace in teleseismic imaging practice, despite their apparent necessity.
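
To make the deconvolution step concrete, the following is a minimal sketch of frequency-domain water-level deconvolution, the approach underlying classic receiver-function analysis (Langston, 1977). The function name, array names, and the water_level and gauss_width values are illustrative assumptions rather than parameters taken from any of the cited studies; the sketch assumes only NumPy.

    import numpy as np

    def receiver_function(radial, vertical, dt, water_level=0.01, gauss_width=2.5):
        # Deconvolve the vertical component from the radial component to
        # enhance receiver-side P-to-S conversions. Parameter values are
        # illustrative; radial and vertical are equal-length 1-D arrays.
        n = len(radial)
        nfft = int(2 ** np.ceil(np.log2(2 * n)))   # pad to limit wrap-around
        R = np.fft.rfft(radial, nfft)
        Z = np.fft.rfft(vertical, nfft)
        f = np.fft.rfftfreq(nfft, dt)

        # The water level stabilizes the spectral division where |Z|^2 is small.
        power = (Z * np.conj(Z)).real
        denom = np.maximum(power, water_level * power.max())

        # A Gaussian low-pass suppresses high-frequency deconvolution noise.
        gauss = np.exp(-(2 * np.pi * f) ** 2 / (4 * gauss_width ** 2))

        return np.fft.irfft(R * np.conj(Z) / denom * gauss, nfft)[:n]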

It is likely that the exclusion of these preprocessing steps is largely historical. Initial analysis of teleseismic conversions was performed with a single set of three-component seismograms recorded by an isolated station (Langston, 1977; Phinney, 1964). To avoid scattering from short-wavelength features that depend strongly on back-azimuth, seismograms from single stations were routinely low-pass filtered to remove everything except energy primarily sensitive to structure with long spatial wavelengths (and correspondingly long temporal periods). After the introduction of three-component seismic arrays and with the birth of programs like IRIS-PASSCAL (http://www.iris.edu), data processing still followed the path devised for single isolated stations, the only difference being that the point measurements made by individual stations were now closer together. Much of the information about the lithospheric structure beneath the stations was discarded through temporal filtering and the absence of array-based processing. To extract the most information about lithospheric structure from increasingly dense seismic arrays, we must reexamine standard teleseismic data preprocessing flows prior to the application of imaging algorithms.

For this reason, we have introduced a new method for signal/noise separation and data interpolation using an inverse formulation of the linear Radon transform, building on previous work applied to industry data (Guitton and Symes, 2003; Sacchi and Ulrych, 1995). This paper begins by defining signal and noise for wavefields produced by teleseismic earthquakes. We then show how differences in the basic moveout and geometry of the signal and noise wavefields can be used to separate them through projection of the data onto a plane-wave basis (the linear Radon domain). Separation via the linear Radon transform also gives us the added advantage of automatic interpolation of spatially coherent plane-wave arrivals upon return to the data domain. We show the application of this technique to one synthetic and one recorded dataset with differing receiver spacings, target depths, and structural geometries.
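
As a sketch of what such an inverse formulation can look like, the following minimal example estimates a tau-p (linear Radon) panel from an array gather by damped least squares, frequency by frequency, in the spirit of Sacchi and Ulrych (1995). The function name, the damping scheme, and the value of eps are illustrative assumptions, not the implementation used in this paper.

    import numpy as np

    def linear_radon_lstsq(data, t, x, p, eps=1e-2):
        # data: (nt, nx) gather; t: sample times; x: offsets; p: trial slownesses.
        nt, nx = data.shape
        dt = t[1] - t[0]
        nfft = 2 * nt                                # pad to limit wrap-around
        D = np.fft.rfft(data, nfft, axis=0)          # (nfreq, nx)
        freqs = np.fft.rfftfreq(nfft, dt)
        M = np.zeros((len(freqs), len(p)), dtype=complex)

        for i, f in enumerate(freqs):
            # Plane-wave basis: a slowness-p arrival is delayed by p*x at offset x.
            A = np.exp(-2j * np.pi * f * np.outer(x, p))   # (nx, np)
            AhA = A.conj().T @ A
            damp = eps * np.trace(AhA).real / len(p)       # scaled damping
            M[i] = np.linalg.solve(AhA + damp * np.eye(len(p)),
                                   A.conj().T @ D[i])

        return np.fft.irfft(M, nfft, axis=0)[:nt]    # tau-p panel, (nt, np)

Returning to the data domain, at the original or at new offsets, amounts to re-applying the forward operator: build A from the desired output offsets at each frequency, multiply it into the estimated Radon coefficients, and inverse transform. This is the sense in which the transform interpolates spatially coherent plane-wave arrivals at no extra cost.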

