Electronic document preface (ps 65K) (src 59K)
I have created a textbook on echo soundings that provides a model for all publications whose results are contained in figures and illustrations. In this arrangement, programs are linked with the text file. Located in each figure caption is a pushbutton that launches the commands that recreate the figure. This gives new meaning to the hallowed words, ``reproducible research.'' The software is all free and nearly all of it is in widespread use. Experience at the Stanford Exploration Project shows that preparing such documents takes little extra effort for those already using LaTeX and makefiles. However, the required software is many megabytes, and installing software on UNIX systems is daunting for amateurs. Therefore, I plan to distribute a CD-ROM with software preinstalled for machines of several manufacturers. This gives instant access to my book, and it also provides a model of reproducible research. With luck, this document may appear on CD-ROM simultaneously with the paper report.
2-D interpolation with Spitz's method (ps 1421K) (src 2610K)
I have implemented Spitz's method to interpolate missing data in the presence of aliasing. The program was tested on synthetic and real data. Despite its ability to deal with aliasing, the program has some limitations inherent in the method.
Design of inverse Kirchhoff-style filters by LSCG (ps 56K) (src 36K)
Claerbout J. F.
Using the least-squares conjugate-gradient methods described in PVI, I devise 2-D inverse filters for a Kirchhoff modeling impulse response. The filter values do not fill a 2-D plane but are constrained to lie along the same trajectory as the impulse response. The technique is also applicable to DMO and residual migration operators. It seems that the cost of computing these operators will be comparable to the usual cost of applying them.
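The least-squares conjugate-gradient machinery can be sketched in a 1-D toy: CGLS applied to a dense convolution matrix finds a filter that collapses a short wavelet toward a spike. The wavelet, array sizes, and iteration count below are illustrative choices, not values from the paper, and the paper's filters are 2-D and constrained to the impulse-response trajectory.

```python
import numpy as np

def conv_matrix(r, nf, nd):
    """Dense convolution matrix C so that C @ f gives the first nd samples of r * f."""
    C = np.zeros((nd, nf))
    for j in range(nf):
        seg = r[: nd - j]
        C[j : j + len(seg), j] = seg
    return C

def cgls(A, d, niter):
    """Conjugate-gradient least squares for min ||A f - d||^2."""
    f = np.zeros(A.shape[1])
    s = d.copy()                 # data-space residual d - A f
    g = A.T @ s                  # gradient
    p = g.copy()
    for _ in range(niter):
        q = A @ p
        alpha = (g @ g) / (q @ q)
        f += alpha * p
        s -= alpha * q
        gnew = A.T @ s
        if gnew @ gnew < 1e-28:  # converged
            break
        p = gnew + (gnew @ gnew) / (g @ g) * p
        g = gnew
    return f

# toy minimum-phase "impulse response" to be inverted toward a spike
r = np.array([1.0, -0.5, 0.25, -0.125])
nd, nf = 32, 16
A = conv_matrix(r, nf, nd)
delta = np.zeros(nd); delta[0] = 1.0
f = cgls(A, delta, 20)
resid = np.linalg.norm(A @ f - delta)
```

For this minimum-phase toy wavelet, the residual drops to near zero once the iteration count reaches the number of unknowns.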
Eigenvector formulation for missing data (ps 36K) (src 3K)
Claerbout J. F.
I have a family of practical applications that I cannot seem to formulate properly with weighted linear least squares [Claerbout, 1990]. I think I need a wholly new formulation such as the one with eigenvectors that I propose below: ...
Trace interpolation using a recursive dip filter (ps 464K) (src 2637K)
Ji J. and Claerbout J.
A scheme for missing-trace interpolation of a common-midpoint (CMP) gather is proposed. Spatial aliasing in a CMP gather can be overcome by normal-moveout (NMO) correction. After NMO correction, we fill in missing traces so as to minimize the output of a high-pass, recursive dip filter. For optimization by conjugate gradients, we derive the conjugate operator of the recursive dip filter. The proposed scheme is applied to three kinds of missing-data problems: interlaced missing traces, truncation at the end of the survey, and randomly positioned missing traces.
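A 1-D sketch of the idea, assuming a second-difference filter as a stand-in for the recursive dip filter, and solving the quadratic minimization directly with `numpy.linalg.lstsq` rather than conjugate gradients:

```python
import numpy as np

# known data: a linear ramp with two samples missing (toy stand-in for a gather)
n = 11
x_true = np.linspace(0.0, 1.0, n)
known = np.ones(n, dtype=bool)
known[[4, 7]] = False

# high-pass filter: second difference, standing in for the recursive dip filter
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# minimize ||D x||^2 over the missing samples, holding known samples fixed
Dk = D[:, known] @ x_true[known]
Du = D[:, ~known]
x_missing, *_ = np.linalg.lstsq(Du, -Dk, rcond=None)

x = x_true.copy()
x[~known] = x_missing
```

Because a linear ramp annihilates the second difference exactly, the minimizer restores the missing samples to their true values.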
Trace interpolation using spectral estimation (ps 122K) (src 708K)
A scheme for missing-trace interpolation of linear events is proposed. For a two-dimensional dataset that contains linear events, the post-interpolation spectrum can be estimated from a portion of the original aliased spectrum. The restoration of missing traces is accomplished by minimizing the energy after applying a filter whose amplitude spectrum is the inverse of the estimated spectrum.
Moving source and receiver position by slant stacks (ps 115K) (src 178K)
Slant stacks are used to move source or receiver positions of seismic data. A least-squares forward slant stack incorporates irregular geometry-related effects and diminishes artifacts in the p-domain. This step is followed by a transpose stack to return to the x-t domain with a different geometry. This basic procedure can be improved by applying a weighting function in the slant stack domain. The weighting function is derived from a semblance panel computed during the slant stack. This nonlinear step equalizes the importance of coherent events with large amplitude differences.
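A minimal nearest-sample forward slant stack (no least-squares weighting, no interpolation) illustrates how a linear event maps to a point in the p-domain; the geometry and parameter values are toy choices, not from the paper:

```python
import numpy as np

def slant_stack(data, x, p_vals, dt):
    """Nearest-sample forward slant stack: u(tau, p) = sum over x of d(tau + p*x, x).
    Assumes p*x >= 0; a production code would interpolate between samples."""
    nt = data.shape[0]
    out = np.zeros((nt, len(p_vals)))
    for ip, p in enumerate(p_vals):
        for ix, xx in enumerate(x):
            shift = int(round(p * xx / dt))
            if 0 <= shift < nt:
                out[: nt - shift, ip] += data[shift:, ix]
    return out

# synthetic: a linear event t = p0 * x across 8 traces
dt, p0 = 0.004, 0.0002
x = np.arange(8) * 100.0
data = np.zeros((64, 8))
for ix, xx in enumerate(x):
    data[int(round(p0 * xx / dt)), ix] = 1.0

u = slant_stack(data, x, [0.0, 0.0001, 0.0002, 0.0003], dt)
```

The event stacks coherently only at p = p0, where all eight traces contribute at tau = 0.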
A trace interpolation scheme (ps 36K) (src 3K)
Interpolation continues to occupy thought at SEP, and in this issue alone there are contributions from Balog, Claerbout, Ji, Karrenbach, and Zhang. It is a major topic in Claerbout's new book (1991,1992), and Spitz (1991) has recently introduced a new scheme with a useful reference list. Why should I add yet another paper? The answer is that this method is quite different and remarkably simple: Interpolate the unaliased, low-frequency data by classical linear means, and then reconstruct the high frequencies from the low. The computation is straightforward, using one-dimensional modules and well-known techniques. Indeed, the method is simple enough that it seems unlikely that it is new. What may be novel are ...
Interpolation and Fourier transform of irregularly sampled data (ps 107K) (src 129K)
Many algorithms in seismic data processing require that input data have uniform sampling intervals both in time and in space. However, in reality, the difficulties in positioning the sources and receivers may cause the spatial sampling intervals to vary from place to place. If the variations of the sampling intervals are too large to be acceptable, an interpolation step is required. Because seismic data are almost always sampled uniformly in time, the spatial interpolation can be done along each constant-time slice or each constant-frequency slice. Therefore, the problem can be reduced to one dimension. ...
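The 1-D reduction can be sketched directly: interpolate each constant-time slice from irregular positions onto a regular grid. The toy geometry and the use of simple linear interpolation (`numpy.interp`) are illustrative assumptions, not the paper's method:

```python
import numpy as np

# irregular receiver positions and a regular output grid
x_irr = np.array([0.0, 95.0, 210.0, 290.0, 405.0, 500.0])
x_reg = np.linspace(0.0, 500.0, 11)

# data sampled uniformly in time, irregularly in space
nt = 50
t = np.arange(nt) * 0.004
data = np.sin(2 * np.pi * 5 * t)[:, None] * (1.0 + 0.001 * x_irr)[None, :]

# interpolate each constant-time slice independently (the 1-D reduction)
data_reg = np.vstack([np.interp(x_reg, x_irr, data[it, :]) for it in range(nt)])
```

Here each time slice happens to be linear in x, so linear interpolation recovers the regularly sampled field exactly.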
Modeling a discrete system by eigenvalue decomposition (ps 161K) (src 6893K)
Filho C. A. C.
Numerical solutions of the wave equation require the discretization of the spatial and temporal dimensions, which creates numerical artifacts such as dispersion. If the physical system is already spatially discrete, the exact solution will be truly dispersive. The intrinsically dispersive character of these systems is a result of the constraints on the energy flux between neighboring cells of the system. Using the eigenvalue decomposition of the spatial operator, I solve exactly the equations of motion associated with 1-D and 2-D transversely isotropic spatially discrete systems. The method is unconditionally stable and requires no time discretization, but the cost is prohibitively high for practical use. To overcome the high cost of the eigenvalue decomposition method, I have implemented an approximate solution using a time-recursive scheme on a parallel architecture, with results that show good agreement with the exact solution.
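The eigenvalue-decomposition solution can be sketched for a tiny 1-D chain: diagonalize the spatial operator once, then evaluate the displacement at any time with no time stepping. The operator, grid size, and initial condition are toy assumptions, not the paper's 2-D transversely isotropic system:

```python
import numpy as np

# 1-D chain with fixed ends: u_tt = -K u, K the second-difference stiffness matrix
n = 8
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

lam, V = np.linalg.eigh(K)               # K = V diag(lam) V^T, lam >= 0
omega = np.sqrt(np.maximum(lam, 0.0))    # modal frequencies

def evolve(u0, t):
    """Exact solution u(t) = V cos(omega t) V^T u0, for zero initial velocity."""
    return V @ (np.cos(omega * t) * (V.T @ u0))

u0 = V[:, 2]                             # start the chain in a single mode
u = evolve(u0, 0.7)
```

Starting in a single mode, the chain oscillates at that mode's frequency only; any output time is reached in one step, which is the "no time discretization" property, at the cost of a full eigendecomposition.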
Accurate, efficient 2-D elastic modeling using S&M (ps 186K) (src 8493K)
Filho C. A. C.
Some important aspects of accurate finite-difference elastic modeling algorithms are the use of high-order differential operators, staggered-grid computations, and a two-step implementation of the spatial operator by first obtaining the strains and then the stresses (Dablain, 1986; Mora, 1986; Etgen, 1989). The high-order operators are required to avoid numerical anisotropy and dispersion, while the staggered scheme is important to guarantee the spatial synchronization of the two-step 1-D operators used in the strain-stress implementation. Using the exact solution of a discrete physical system (Cunha, 1991) as a guide for the discretization, and following the equivalence relations defined in the ...
Angle-dependent reflectivity from elastic reverse-time migration (ps 178K) (src 17311K)
Filho C. A. C.
Any amplitude-based lithologic inversion method must compensate for all changes in the amplitudes that are not directly related to the properties to be estimated. Conventional methods for compensation of geometrical spreading and transmission/conversion losses are not applicable when the subsurface structures cannot be well approximated by horizontal layers. Migration-based methods have been proposed recently as a solution for correctly retrieving angle-dependent reflectivity in the presence of complex structures. In this paper I describe an anisotropic prestack reverse-time migration scheme for retrieving the P-P reflectivity as a function of the local Snell parameter. The paper shows that this parameterization is a more appropriate way of expressing the directional dependence of the reflection coefficient than the more usual angular function.
Kirchhoff 3-D prestack time migration on the Connection Machine (ps 94K) (src 6316K)
Lumley D. and Biondi B.
We develop and implement a computer algorithm to image process (migrate) 3-D prestack seismic reflection data on the massively parallel Connection Machine (CM) architecture. Our Kirchhoff 3-D wave-equation time migration achieves our initial goal of 400 Mflop/s on an 8k processor CM, and is scalable to over 3 Gflop/s on a full 64k processor Connection Machine. This is accomplished by parallelizing the migration problem in surface coordinates (x,y), and by serial looping over the pseudodepth coordinate, thus optimizing the amount of in-processor parallel computation. Data I/O rates between the front-end and the CM are enhanced by first transposing the data and then calling parallel processor data access subroutines. Internal data movement is enhanced by calling indirect memory addressing and circular data shifting subroutines.
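A serial, zero-offset 2-D toy of the Kirchhoff summation shows the structure being parallelized: every image point sums the data along its diffraction hyperbola. Nearest-sample summation with no weights or antialiasing; all constants are illustrative:

```python
import numpy as np

nt, nx = 128, 32                   # toy grid: time samples, traces
dt, dx, v = 0.004, 25.0, 2000.0    # sample rate, trace spacing, constant velocity

def hyp_time(it0, ix0, ixs):
    """Zero-offset diffraction traveltime from image point (it0, ix0) to trace ixs."""
    tau = it0 * dt
    h = (ixs - ix0) * dx
    return np.sqrt(tau * tau + (2.0 * h / v) ** 2)

# model: spikes along the diffraction curve of one point scatterer
ix0, it0 = 16, 25
data = np.zeros((nt, nx))
for ixs in range(nx):
    it = int(round(hyp_time(it0, ix0, ixs) / dt))
    if it < nt:
        data[it, ixs] = 1.0

# migration: for every image point, sum the data along its hyperbola
image = np.zeros((nt, nx))
for ix in range(nx):
    for it in range(nt):
        s = 0.0
        for ixs in range(nx):
            itd = int(round(hyp_time(it, ix, ixs) / dt))
            if itd < nt:
                s += data[itd, ixs]
        image[it, ix] = s
```

The outer (it, ix) loops are independent, which is why the image-space parallelization over surface coordinates maps naturally onto the CM.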
Stability of 2D finite difference traveltime algorithms (ps 95K) (src 331K)
Popovici A. M.
The original finite difference traveltime calculations implemented after Van Trier and Symes (1990) did not have an adaptive step determination. As a result, numerical instabilities appeared in unpredictable cases, for example, in the simple constant-gradient velocity model in Figure 2. Such instability does not present a problem for research algorithms, because numerical instabilities can be controlled by decreasing the grid spacing. However, for production programs this is impractical, as users may not have complete control of each step of the finite ...
Phase shift plus interpolation and split-step Fourier migration (ps 678K) (src 7499K)
Popovici A. M.
Migration algorithms based on Fourier methods are naturally parallel. Two Fourier-based migration algorithms (phase shift plus interpolation and split-step) were implemented on the Connection Machine and tested on variable-velocity 2-D and 3-D models. The migration results of the two methods and their run times on the Convex and the Connection Machine were compared. For the three-dimensional case, the PSPI algorithm ran 33 times faster and the split-step algorithm 14 times faster on the parallel computer.
Finite difference calculation of Green's functions (ps 322K) (src 539K)
With the asymptotic ray approximation, the Green's functions of the elastic wave equation are characterized by traveltime and amplitude functions. A new finite difference algorithm is developed to calculate the traveltime and amplitude functions of the first-arrival seismic waves for a given 2-D medium. This algorithm combines features of previous algorithms. To achieve efficiency and accuracy, the partial differential equations are solved in polar coordinates. The resulting codes are fully vectorizable. The stability of the algorithm is ensured by adapting the step sizes of the wavefront extrapolation to the local directions of wave propagation. For traveltime calculations, the algorithm handles slowness models containing large contrasts. For amplitude calculations, the algorithm handles both line and point source geometries, but requires slowness models to be smooth since transmission losses are not taken into account. The source radiation patterns, if known, can also be incorporated into the amplitude calculation. Examples with different types of slowness models show that the method works. The method and its extension to 3-D could be useful for Kirchhoff prestack depth migration.
The local paraxial ray method: a proposal (ps 53K) (src 9K)
The seismic traveltimes and amplitudes associated with the WKB Green's functions can be computed by using two different approaches. One approach uses the dynamic ray tracing method to find the traveltimes and amplitudes along rays (Cervený, 1987), and then interpolates them onto a regular grid with the paraxial ray approximation (Beydoun and Keho, 1987). The other approach uses a finite difference method to extrapolate wavefronts so that the traveltimes and amplitudes are computed directly on a regular grid (Vidale, 1988; Vidale and Houston, 1990; ...
Refraction statics in mountainous terrain (ps 88K) (src 56K)
Refraction statics are often applied to data gathered in mountainous regions in order to correct the data to datum, so that standard processing techniques can be used. Unfortunately, the refraction inversions often yield poor results that make no sense geologically and introduce error into the statics corrections. The refraction statics program considered here inverts the first-break picks to obtain a model of the near-surface low-velocity layer; a static correction is then calculated from this model. The inversion algorithm assumes that the first breaks are refracted head-wave arrivals. In mountainous regions this assumption breaks down: transmitted arrivals can be faster than the head waves, and their presence causes substantial error in the inversion. A ray-tracing program is used to model both refracted head-wave and transmitted arrivals, and these synthetic arrivals are input to the inversion program to demonstrate that refraction statics fail when transmitted arrivals are present in the data. ...
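The generic point that first breaks are not always head waves already shows up in the standard two-layer refraction relations: inside the crossover offset the direct wave, not the head wave, arrives first. The velocities and depth below are toy values, not from the paper:

```python
import numpy as np

v1, v2, h = 1000.0, 2000.0, 100.0   # overburden velocity, refractor velocity, depth

def t_direct(x):
    """Direct-wave traveltime along the surface."""
    return x / v1

def t_head(x):
    """Head-wave traveltime: slope 1/v2 plus the intercept time."""
    ti = 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)
    return x / v2 + ti

# crossover offset where the head wave overtakes the direct wave
x_cross = 2.0 * h * np.sqrt((v2 + v1) / (v2 - v1))
```

For these values the crossover is at about 346 m; picks at shorter offsets violate the head-wave assumption of the inversion.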
Multiple suppression in the velocity domain (ps 643K) (src 6384K)
Filho C. A. C. and Claerbout J. F.
Multiple suppression in the velocity domain is often accomplished by a filtering process that removes the events inside a selected window in that domain. Alternately, the suppression goal can be formulated as an optimization problem in which we try to find a model that, when added to the data, minimizes the power of the output in some region of the velocity domain. Defining this region is of fundamental importance for the success of the method, that is, for attaining maximum suppression of multiples with minimum interference with the primaries. The predictive character of reverberation events in the velocity domain can be efficiently used to automatically design these multiple-related windows. Tests with synthetic data show the efficacy and robustness of the method, which is able to preserve primary events with stacking velocities inside the multiple velocity range.
A simple wavefield separation scheme (ps 568K) (src 2411K)
Exact separation of the different wavetypes in a multicomponent dataset requires knowledge of all the elastic properties of the near surface. I propose an approximate scheme that uses a small number of ``separation parameters.'' The approximate scheme uses a three stage approach; at each stage a maximum of three parameters must be estimated. I demonstrate the effectiveness of this scheme on a synthetic dataset.
Singular value decomposition for cross-well tomography (ps 172K) (src 128K)
Michelena R. J.
Singular value decomposition is performed on the matrices that arise in tomographic velocity estimation from cross-well traveltimes in isotropic and anisotropic media. The singular vectors in both data and model space are presented, along with their corresponding singular values, for a simple geometry.
Traveltime picking through phase spectrum calculation (ps 114K) (src 223K)
Zhang L. and Claerbout J.
Traveltime tomographic inversions of seismic data usually require picking the arrival times of certain events recorded by receivers. We have developed an algorithm that picks the traveltime of an event according to the phase properties of the event's wavelet. If the event has a minimum-phase wavelet, the algorithm picks the first break of the wavelet. If the wavelet has a zero-phase spectrum, the central point of the wavelet is picked. For any mixed-phase wavelet, the algorithm picks the first break of the causal, minimum-phase part of the wavelet. Numerical experiments show that, for isolated, noise-free events, the algorithm performs well. However, for field data with strong, spiky noise, the algorithm fails to pick the correct traveltimes.
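For the zero-phase case, the central point of the wavelet can be picked from the analytic-signal envelope. The sketch below (an FFT-based Hilbert transform and a Ricker-like wavelet with illustrative parameters) covers only this special case, not the paper's minimum-phase/first-break logic:

```python
import numpy as np

def envelope(trace):
    """Analytic-signal envelope via an FFT-based Hilbert transform."""
    n = len(trace)
    F = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(F * h))

# zero-phase Ricker wavelet (25 Hz, 4 ms sampling) centered at sample 40
t = (np.arange(128) - 40) * 0.004
w = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)
pick = int(np.argmax(envelope(w)))
```

Because the wavelet is zero phase, the envelope peaks at the wavelet's central sample.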
Traveltime tomography without ray tracing (ps 58K) (src 7K)
The computational tasks involved in nonlinear tomographic traveltime inversion can be accomplished efficiently by using finite difference methods. The traveltimes can be computed by solving the eikonal equation. The derivatives of traveltimes with respect to the parameters of a slowness model can also be computed by solving a first-order linear partial differential equation. This approach greatly reduces the computational cost imposed by the conventional ray tracing method, and it ensures the proper treatment of the first-arrival traveltimes. Furthermore, the method handles slowness models expanded in arbitrary basis functions, which is useful for incorporating geological structure interpretation into the inversion process. I expect the new method to improve the result of the inversion.
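Grid-based eikonal solving can be sketched with a first-order fast-sweeping scheme: Gauss-Seidel sweeps in alternating orders with an upwind update. This particular scheme is a stand-in for whatever finite-difference solver the paper intends, and the grid and slowness are toy choices:

```python
import numpy as np

def eikonal_fast_sweep(slowness, h, src, nsweep=8):
    """First-order fast-sweeping solver for |grad T| = slowness on a 2-D grid."""
    n, m = slowness.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    orders = [(range(n), range(m)), (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(nsweep):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < m - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:       # one-sided upwind update
                        t_new = min(a, b) + f
                    else:                     # two-sided update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# homogeneous unit slowness, source at a corner
T = eikonal_fast_sweep(np.ones((16, 16)), 1.0, (0, 0))
```

Along the grid axes the computed times equal the true distance; off-axis the first-order scheme slightly overestimates, as expected.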
Do core-sample measurements record group or phase velocity? (ps 211K) (src 24573K)
Dellinger J. and Vernik L.
Laboratory core-sample measurements do not record elastic constants directly; they record traveltimes from which the elastic parameters must be deduced. But do the measured traveltimes represent group velocities, phase velocities, or something else? For propagation down symmetry directions, group and phase velocity are the same, so there is no ambiguity. However, propagation in nonsymmetry directions is key to determining a complete set of elastic constants. In nonsymmetry directions there is no guarantee that the energy radiated from the source will travel straight up the core to the receiver; the leading planar portion of the wavefront may crab sideways and partially or completely miss its desired target. Our model results show that unless the miss is complete, picking first breaks will still give a good phase-velocity arrival time, the additional error due to the miss being less than the normal random measurement errors of one or two percent. Unfortunately, even a small error of only one or two percent in a crucial nonsymmetry-direction phase-velocity measurement can result in a significant error in the derived value of C13, especially for qP waves. If good qP and qSV nonsymmetry-direction measurements are both available, however, the error bounds on C13 can be considerably improved.
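In 2-D, the group speed follows from the phase-velocity surface via |Vg| = sqrt(v^2 + (dv/dtheta)^2), which makes the symmetry-direction statement easy to check numerically. The elliptical surface and the velocity values below are illustrative assumptions, not measured constants:

```python
import numpy as np

def v_phase(theta, vv=3.0, vh=3.6):
    """Toy elliptical phase-velocity surface (km/s); vv, vh are assumed values."""
    return np.sqrt((vh * np.sin(theta)) ** 2 + (vv * np.cos(theta)) ** 2)

def v_group(theta, dtheta=1e-6):
    """|Vg| = sqrt(v^2 + (dv/dtheta)^2), with dv/dtheta by central difference."""
    v = v_phase(theta)
    dv = (v_phase(theta + dtheta) - v_phase(theta - dtheta)) / (2 * dtheta)
    return np.sqrt(v * v + dv * dv)
```

At theta = 0 and theta = pi/2 the derivative vanishes and group equals phase velocity; in between, the group speed exceeds the phase speed, which is the geometric root of the sideways "crab" of the energy.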
SEP72 - LIVE (ps 53K) (src 416K)
This SEP report will be online. The basic structure of the report is very simple, with a minimum of requirements on the authors. Each paper in this report lives in its own directory. That directory contains everything that is needed to regenerate the entire paper. This handout outlines the organization for the authors.