SEP-113 -- TABLE OF CONTENTS
A comparison of three multiple-attenuation methods for a Gulf of Mexico dataset (ps.gz 11869K) (pdf 1837K) (src 28662K)
Least-squares joint imaging of primaries and pegleg multiples: 2-D field data test (ps.gz 2373K) (pdf 745K) (src 15221K)
Three multiple attenuation techniques are tested on a Gulf of
Mexico dataset. These methods are (1) a hyperbolic Radon transform
followed by a mute, (2) the Delft approach, and (3) a pattern-based
technique. The Radon transform separates multiples and primaries
according to their moveout. The Delft approach models the multiples
and subtracts them by estimating adaptive filters.
The pattern-based method uses the multiple
model from the Delft approach to extract and separate multiples from the
primaries according to their multivariate spectra.
Because of the complex geology and the modeling
uncertainties introduced by 3-D effects and the acquisition
geometry, the Radon transform and the Delft approach do not perform
as well as the pattern-based method. In addition, the pattern-based
method works significantly better when higher-dimensional filters
are used: diffracted multiples are well
attenuated while the primaries are preserved.
In this paper I present an improved least-squares joint imaging algorithm
for primary reflections and pegleg multiples (LSJIMP) that uses improved
amplitude modeling and imaging operators presented in two companion papers in
this report (Brown, 2003a,b). This algorithm
is applied to the entire Mississippi Canyon 2-D multiples test dataset and
demonstrates a good capability to separate pegleg multiples from the data.
Multiple attenuation in the image space (ps.gz 5264K) (pdf 947K) (src 90911K)
Sava P. and Guitton A.
Multiples can be suppressed in the angle-domain image space after migration.
For a given velocity model, primaries and multiples have different
angle-domain moveout and therefore can be separated
using techniques similar to the ones employed in the data space prior to
migration. We use Radon transforms in the image space to discriminate between
primaries and multiples. This method has the advantage of working with
3-D data and complex geology. Therefore, it offers an alternative to the more
expensive Delft approach.
Multiple suppression in the angle domain with non-stationary prediction-error filters (ps.gz 6236K) (pdf 2164K) (src 27948K)
Haines S., Guitton A., and Sava P.
Non-stationary Prediction Error Filters (PEF's) present an effective
approach for separating multiples from primaries in the angle domain.
The choice of models to be used for estimation of the PEF's has a
substantial impact on the final
result, but is not an obvious decision. Muting in the
parabolic Radon transform (PRT) domain produces an effective
multiple model, but the corresponding primary model must be
massaged in order to minimize remaining multiple energy and achieve a satisfactory separation.
Multiple attenuation with multidimensional prediction-error filters (ps.gz 7555K) (pdf 1770K) (src 16128K)
Multiple attenuation in complex geology remains
an active research area. The proposed technique
aims to use the spatial predictability of both the signal (primaries) and noise
(multiples) in order to perform the multiple attenuation in
the time domain. The spatial
predictability is estimated with multidimensional
prediction-error filters. These filters are time-variant in order
to handle the non-stationarity of the seismic data.
Attenuation of surface-related multiples is illustrated
with field data from the Gulf of Mexico with 2-D and 3-D filters.
The 3-D filters give the best attenuation results. In particular,
3-D filters seem to cope better with inaccuracies present in the
multiple model for short offset and diffracted multiples.
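The spatial-predictability idea behind these filters can be sketched in a toy 1-D setting (all signals and filter sizes below are hypothetical illustrations, not the paper's actual operators): a prediction-error filter estimated on a predictable multiple model annihilates that component of the data, leaving the unpredictable primaries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D analogue: a predictable (sinusoidal) multiple model
# plus unpredictable "primaries" (white noise).
n = 512
multiples = np.sin(2 * np.pi * 0.05 * np.arange(n))   # predictable part
primaries = 0.3 * rng.standard_normal(n)              # unpredictable part
data = multiples + primaries

# Estimate a 3-tap PEF (1, -a[0], -a[1]) on the multiple model by least
# squares: predict multiples[t] from the two previous samples.
X = np.column_stack([multiples[1:-1], multiples[:-2]])
a = np.linalg.lstsq(X, multiples[2:], rcond=None)[0]

# The PEF annihilates the noiseless multiple model essentially exactly...
pef_on_multiples = multiples[2:] - X @ a

# ...so applying it to the data removes the multiples, leaving a
# PEF-filtered version of the primaries.
residual = data[2:] - a[0] * data[1:-1] - a[1] * data[:-2]
```

A pure sinusoid is exactly predictable from its two previous samples, which is why a 3-tap filter suffices here; real multidimensional, time-variant PEFs generalize this idea to spatially predictable events.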
Source-receiver migration of multiple reflections (ps.gz 1962K) (pdf 565K) (src 5791K)
Multiple reflections are usually considered to be noise and many
methods have been developed to attenuate them. However, similarly to primary reflections,
multiple reflections are created by subsurface reflectors and contain
their reflectivity information. We can image surface-related multiples by regarding
the corresponding primaries as the sources. Traditional source-receiver migration
assumes that the source is an impulse function. I generalize the source-receiver migration
for arbitrary sources, and apply it to the migration of multiple reflections. A complex
synthetic dataset is used to test the theory. Results show that my
multiple migration algorithm is effective for imaging multiple-contaminated data.
Prestack time imaging operator for 2-D and 3-D pegleg multiples over nonflat geology (ps.gz 1207K) (pdf 470K) (src 13520K)
My Least-squares Joint Imaging of Multiples and Primaries (LSJIMP) algorithm
(Brown, 2003b) separates pegleg multiples and primaries. LSJIMP
computes separate images of the peglegs and primaries, and then uses the mutual
consistency of the images to discriminate against unwanted noise types in each
image. The images must be consistent in two respects: kinematics and
amplitudes. A companion paper (Brown, 2003a) addresses the
amplitude issue. In this paper, I address the kinematics. Kinematically, the
events must be correctly positioned in time and flat with offset. To this end,
I introduce an improved normal moveout operator for pegleg multiples.
Amplitude modeling of pegleg multiple reflections (ps.gz 896K) (pdf 407K) (src 13402K)
My Least-Squares Joint Imaging of Multiples and Primaries (LSJIMP) algorithm
(Brown, 2003a) separates pegleg multiples and primaries. LSJIMP
computes separate images of the peglegs and primaries, and then uses the mutual
consistency of the images to discriminate against unwanted noise types in each
image. The images must be consistent in two respects. Kinematically, the events
must be correctly positioned in time and flat with offset. This is accomplished
by an improved normal moveout operator (HEMNO) introduced in a companion paper in
this report (Brown, 2003b). This paper addresses the second aspect: amplitude consistency.
Narrow-azimuth migration of marine streamer data (ps.gz 335K) (pdf 482K) (src 1227K)
Equivalence between shot-profile and source-receiver migration (ps.gz 24K) (pdf 38K) (src 5K)
I introduce a new migration method that overcomes the
limitations of common-azimuth migration while retaining
its computational efficiency for imaging marine streamer data.
The method is based on source-receiver downward-continuation
of prestack data with a narrow range
of cross-line offsets.
To minimize the width of the cross-line offset range,
while assuring that all the recorded events are correctly propagated,
I define an ``optimal'' range of cross-line offset dips.
To remove the effects of the boundary artifacts
I apply a coplanarity condition on the prestack image.
This process removes from the image cube
the events that are not correctly focused at zero offset.
Tests of the proposed method with the SEG-EAGE salt dataset
show substantial image improvements
in particularly difficult areas of the model
and thus confirm the capability of the new method
to overcome the limitations of common-azimuth migration
in complex areas.
Shan G. and Zhang G.
Shot-profile migration and source-receiver migration seem different, but the
images and common image gathers they produce are the same.
In this paper, we
prove that shot-profile migration and source-receiver migration are equivalent,
assuming that the imaging condition is cross-correlation and the method for propagating the
source and receiver wavefields is a one-way wave equation.
This is achieved after generalizing source-receiver migration to arbitrary sources.
Multichannel deconvolution imaging condition for shot-profile migration (ps.gz 660K) (pdf 607K) (src 1921K)
Valenciano A. A. and Biondi B.
A significant improvement of seismic image resolution is obtained by framing the shot-profile migration imaging condition as a 2-D deconvolution in the shot position/time (xs,t) domain. This imaging condition gives a better image resolution than the crosscorrelation imaging condition and is more stable than the ``more conventional'' 1-D deconvolution imaging condition.
A resolution increment is also observed in common image gathers (CIGs) computed with the 2-D deconvolution imaging condition, thus allowing a more accurate velocity analysis.
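The contrast between the crosscorrelation and deconvolution imaging conditions can be sketched in a toy single-reflector, frequency-domain setting (the wavefields, reflectivity value, and stabilization constant below are all made up for illustration): deconvolution normalizes by the source illumination and so recovers the reflectivity directly, while crosscorrelation scales it by the source power.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy example: S is a "source" wavefield over 64 frequencies, and the
# "receiver" wavefield R carries a true reflectivity c_true.
c_true = 0.25
S = rng.standard_normal(64) + 1j * rng.standard_normal(64)
R = c_true * S

# Crosscorrelation imaging condition: reflectivity scaled by |S|^2.
xcor = np.real(np.sum(R * np.conj(S)))

# Stabilized deconvolution imaging condition: normalize by the source
# illumination (eps prevents division by zero).
eps = 1e-10
decon = np.real(np.sum(R * np.conj(S)) / (np.sum(np.abs(S) ** 2) + eps))
```

The paper's 2-D deconvolution in the (xs,t) domain is a richer operator than this scalar normalization, but the illumination-compensation principle it exploits is the same.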
Operator aliasing in wavefield continuation migration (ps.gz 152K) (pdf 98K) (src 360K)
Artman B., Shragge J., and Biondi B.
With the widespread adoption of wavefield continuation methods
for prestack migration, the concept of operator aliasing
warrants revisiting. While zero-offset migration is unaffected,
prestack migrations reintroduce the issue of operator aliasing. Some situations
where this problem arises include subsampling the shot-axes to save
shot-profile migration costs and limited cross-line shot locations
due to acquisition strategies. These problems are overcome in this
treatment by using an appropriate source function or by
band-limiting the energy contributing to the image.
We detail a synthetic experiment that shows the ramifications of
subsampling the shot axis and the efficacy of addressing the
problems introduced with our two approaches. Further, we explain
how these methods can be tailored in some situations to include
useful energy residing outside of the Nyquist limits.
Phase-shift migration of approximate zero-offset teleseismic data (ps.gz 220K) (pdf 194K) (src 2759K)
A hybrid of traditional survey-sinking migration is derived that
is applicable to teleseismic wavefields. To reconfigure
teleseismic data to an approximate equivalent of zero-offset, an
adjoint linear moveout shift is applied. This transformation
enables the straightforward development of
phase-shift operators to downward continue the modified teleseismic
data. This method also affords an opportunity for
imaging earth structure with a variety of forward- and
backscattered modes through appropriate choices of wavefield
velocities. This method is applied to a synthetic teleseismic data
set, and several migration results are presented to demonstrate its effectiveness.
Imaging with buried sources (ps.gz 64K) (pdf 71K) (src 2748K)
Shragge J. and Artman B.
Because the shot-profile migration algorithm largely mimics the data
acquisition process, simple thought experiments may extend its utility
to image the subsurface with less conventional geometries and/or
sources. Imaging with the forward- and backward-scattered wavefields
in an elastically modeled earth from buried sources is easily
implemented without the development of new tools. With this potential
in mind, we identify several novel applications of this wave-equation
imaging technique, detail the requirements and processing required for
its success, and give an example of the process and results by
applying these concepts to a crustal-scale imaging experiment using
emergent teleseismic plane-waves as sources.
Improving the amplitude accuracy of downward continuation operators (ps.gz 459K) (pdf 235K) (src 779K)
Vlad I., Tisserant T., and Biondi B.
While wave-equation downward continuation correctly accounts for traveltimes, the
amplitude and phase of the image can be improved. We show concrete
ways of implementing a previously proposed improvement using both
mixed-domain and finite-difference extrapolators. We apply the corrections to constant
velocity, constant vertical velocity gradient and v(x,z) cases and show that
the correction brings amplitudes closer to the theoretical values.
Angle-domain common-image gathers for migration velocity analysis by wavefield-continuation imaging (ps.gz 1036K) (pdf 824K) (src 4898K)
Wavefield-continuation angle-domain common-image gathers in 3-D (ps.gz 194K) (pdf 140K) (src 800K)
Biondi B. and Symes W.
We analyze the kinematic properties of
offset-domain Common Image Gathers (CIGs)
and Angle-Domain CIGs (ADCIGs) computed
by wavefield-continuation migration.
Our results are valid regardless of
whether the CIGs were obtained by using the
correct migration velocity.
They thus can be used as a theoretical basis for
developing Migration Velocity Analysis (MVA) methods
that exploit the velocity information contained in ADCIGs.
We demonstrate that in an ADCIG cube the image point
lies on the normal to the apparent reflector dip,
passing through the point where the source ray intersects
the receiver ray.
Starting from this geometric result,
we derive an analytical expression for the expected movements
of the image points in ADCIGs
as functions of the traveltime perturbation
caused by velocity errors.
By applying this analytical result and
assuming stationary raypaths,
we then derive two expressions for the
Residual Moveout (RMO) function in ADCIGs.
We verify our theoretical results
and test the accuracy of the proposed RMO functions
by analyzing the migration results
of a synthetic data set
with a wide range of reflector dips.
Our kinematic analysis also leads to the development
of a new method
for computing ADCIGs when
significant geological dips
cause strong artifacts in the ADCIGs computed by conventional methods.
The proposed method is based on the computation of
offset-domain CIGs along the vertical-offset axis (VOCIGs)
and on the ``optimal'' combination of these new CIGs
with conventional CIGs.
We demonstrate the need for and the advantages of the
proposed method on a real data set acquired
in the North Sea.
Tisserant T. and Biondi B.
Angle-Domain Common-Image Gathers (ADCIGs) are often used
for velocity analysis or Amplitude Versus Angle studies.
Wavefield-continuation methods can easily generate angle gathers
before imaging (Prucha et al., 1999) or after imaging
(Sava and Fomel, 2000).
The method proposed by Sava and Fomel (2000)
assumes that the source and
Angle decompositions of images migrated by wavefield extrapolation (ps.gz 356K) (pdf 248K) (src 604K)
I present an extension to the angle-domain decomposition of images
migrated using wavefield extrapolation. Traditionally, reflectivity
is described by a 1-D function of scattering angle. I show that we can
further decompose the image as a function of other angles related to the
structure and acquisition. In the 2-D case, the reflectivity is
described as a function of two angles, while in the 3-D case the reflectivity
is described as a function of four angles. Applications for such a multi-angle
decomposition include amplitude and illumination compensation due
to limited acquisition.
Inversion and filtering
Multiple realizations and data variance: Successes and failures (ps.gz 2104K) (pdf 1568K) (src 8280K)
Flattening 3-D data cubes in complex geology (ps.gz 2585K) (pdf 631K) (src 5387K)
Clapp R. G.
Geophysical inversion usually produces a single solution to
a given problem.
Often it is desirable to have some error bounds on our estimate.
We can produce a range of models by first recognizing
that the single-solution approach produces the minimum-energy
solution. By adding appropriately scaled random noise to our
residual vector we change the minimum-energy solution.
Multiple random vectors produce multiple new estimates for our model.
These various solutions can be used to assess error in our model parameters.
This methodology strongly
relies on having a decorrelated residual vector and, previously,
was used primarily on the model styling portion of our inversion
problem because it came closer to honoring the decorrelation requirement.
With an appropriate description of the noise covariance,
multiple realizations can also be applied to the data-fitting portion of the inversion.
Examples of perturbing the data-fitting portion of the standard
inversion are shown on 2-D deconvolution and 1-D velocity estimation
problems. Results indicate that the methodology has potential but is not
well enough understood to be applied generally.
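The realization idea, re-solving after adding appropriately scaled random noise to the residual, can be sketched on a hypothetical line-fitting problem (the model, noise level, and realization count below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: fit a line d = G m.
x = np.linspace(0.0, 1.0, 50)
G = np.column_stack([np.ones_like(x), x])
m_true = np.array([1.0, 2.0])
d = G @ m_true + 0.1 * rng.standard_normal(x.size)

# Single (minimum-energy) solution and its residual scale.
m_hat = np.linalg.lstsq(G, d, rcond=None)[0]
resid_scale = np.std(d - G @ m_hat)

# Each realization perturbs the data with noise scaled to the residual
# and re-solves; the spread of the solutions estimates parameter error.
realizations = np.array([
    np.linalg.lstsq(G, d + resid_scale * rng.standard_normal(d.size),
                    rcond=None)[0]
    for _ in range(200)
])
spread = realizations.std(axis=0)   # per-parameter error bars
```

This toy uses white perturbations; as the abstract notes, the method's reliability hinges on the residual actually being decorrelated, or on having a good description of the noise covariance.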
3-D volumes of data can be efficiently flattened with Fourier-domain methods as long as the reflections are continuous and depth-invariant. To handle faults (discontinuous reflections), I pose the flattening problem in the time-space (T-X) domain so that a weight can be applied, ignoring fitting equations that are affected by the faults. This approach successfully flattens a synthetic faulted model. The flattening method is also applied repeatedly in the T-X domain to flatten a synthetic model that has pinch-outs and structure that varies with depth. There are two possible schemes for handling unconformities. One scheme requires that the unconformity be picked, the data separated into different volumes, flattened individually, and then recombined. Another scheme is to apply a flattening method that picks traveltimes for all horizons at once without being restricted to time-slices. This method is expected to be much more computationally intensive but to require less initial picking. Both of these methods need more development and testing.
Amplitude balanced PEF estimation (ps.gz 735K) (pdf 721K) (src 4399K)
Inverse theory teaches us that the residual, or misfit function,
should be weighted by the inverse covariance matrix of the
noise. Because the covariance operator is often difficult to
estimate, we can approximate it with a diagonal weight that can be more easily
computed. This paper investigates the possible
choices of weighting functions for the data residual when prediction-error
filters are estimated. Examples with 2-D and 3-D field data show that it is
better to weight the residual than to weight the data before
starting the inversion.
Coherent noise suppression in electroseismic data with non-stationary prediction-error filters (ps.gz 4448K) (pdf 675K) (src 5242K)
Haines S. and Guitton A.
Non-stationary prediction-error filters (PEF's) provide an effective
means for separating signal from coherent noise in electroseismic
data. The electroseismic signal is
much weaker than the noise, so we cannot rely on windowing,
transforms, or other alterations of the original data to create models
for the PEF estimation. Instead, we design signal PEF's using the physical
predictability of the phenomena, and estimate noise PEF's using portions
of the original data. This technique is effective on
synthetic and real data.
Subtraction versus filtering for signal/noise separation (ps.gz 175K) (pdf 199K) (src 3136K)
In Guitton (2001) I presented an
efficient algorithm that attenuates coherent noise based on the
spatial predictability of both the noise and the signal. I called this
algorithm the subtraction method. This algorithm was first presented
by Nemeth (1996) and generally works better than the standard
projection filtering technique (Abma, 1995; Soubaras, 1994). The main
motivation in writing this paper is to better understand why the
subtraction method attenuates the noise better than the filtering approach (Guitton and Claerbout, 2003). This question is difficult to answer...
Enhanced random noise attenuation (ps.gz 1515K) (pdf 724K) (src 3346K)
Spatial prediction filtering attenuates random noise uncorrelated from
trace to trace, while preserving linear, predictable events. The
prediction is formulated as a least-squares problem in either the
t-x or the f-x domain. The methods are casually known as
``f-x decon'' and ``t-x decon.'' Although established by common
practice, the name ``decon'' is not appropriate in these cases,
because it suggests a similarity with the much better-known
deconvolution of the signal along the time axis. However,
deconvolution removes the predictable information (wavelet +
multiples) and keeps the unpredictable (the reflectivity function),
whereas spatial prediction filtering keeps the predictable part (the signal) and removes the unpredictable part (the noise).
Wave-equation migration velocity analysis by inversion of differential image perturbations (ps.gz 8915K) (pdf 1179K) (src 15135K)
Steering filters in 3-D: Tubes rather than planes (ps.gz 436K) (pdf 248K) (src 1048K)
Sava P. and Biondi B.
Wave-equation migration velocity
analysis is based on the linear relation that can be
established between a perturbation in the migrated image and the
corresponding perturbation in the slowness function.
Our method formulates an objective function in the image space,
in contrast with other wave-equation tomography techniques which
formulate objective functions in the data space.
We iteratively update the slowness function to account
for improvements in the focusing quality of the migrated image.
Wave-equation migration velocity analysis (WEMVA) produces wrong results
if it starts from an image perturbation which is not compliant with
the assumed Born approximation.
Other attempts to correct this problem lead to either unreliable or
hard-to-implement solutions.
We overcome the major limitation of the Born approximation by
creating image perturbations consistent with this
approximation. Our image perturbation operator is computed as a
derivative of prestack Stolt residual migration, thus
our method directly exploits the power of prestack residual
migration in migration velocity analysis.
Clapp R. G. and Clapp M. L.
A filter composed of non-stationary plane-wave destruction filters, called a
steering filter, has practical applications to many problems. Clapp et
al. (1997) demonstrated their use for the missing-data problem. Fomel (2000) showed how they could be used for signal-noise separation. In several papers (Clapp
and Biondi, 1998, 2000; Clapp, 2001), they are used for regularizing
tomography. Prucha and Biondi (2002) used them to better handle the null space in wave-equation migration. Several different methods have been suggested for constructing the 2-D representation of these filters...
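The plane-wave destruction idea underlying steering filters can be sketched in a stationary special case (the event, dip, and wavelet below are made-up illustration values): a linear event is annihilated by differencing each trace against its neighbor shifted by the local dip.

```python
import numpy as np

# Toy linear event: a Gaussian wavelet shifted by one sample per trace.
nt, nx, dip = 64, 8, 1
wavelet = np.exp(-0.5 * ((np.arange(nt) - 20.0) / 3.0) ** 2)
data = np.stack([np.roll(wavelet, dip * ix) for ix in range(nx)], axis=1)

# Plane-wave destruction for an integer dip: subtract the previous
# trace shifted along time by the dip. A matching event is annihilated.
resid = data[:, 1:] - np.roll(data[:, :-1], dip, axis=0)
```

Real steering filters interpolate non-integer, spatially varying dips rather than using an integer shift, which is what makes them non-stationary; the annihilation principle is the same.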
Semblance picking (ps.gz 1067K) (pdf 328K) (src 61710K)
Clapp R. G.
Obtaining reasonable moveout measurements is an
essential step in migration velocity updating.
The problem is at some level non-linear.
A viable solution is to solve the problem iteratively
by relinearizing the problem several times.
Preliminary results are encouraging.
Realization and analysis of an integration tomography scheme (ps.gz 1218K) (pdf 895K) (src 2839K)
I realize and analyze a model-based joint tomography scheme in this paper. Surface reflection seismic data and VSP traveltimes are used simultaneously to invert for the velocity with an integrated inversion scheme. Since more data are used, the integrated tomography can obtain more accurate inversion results with lower uncertainty. Using the identity operator as the integration operator, I apply this method to a synthetic anticline model. Compared to surface reflection tomography, the integrated inversion result is better in areas where the VSP ray coverage is good and, conversely, poorer in areas where the VSP ray coverage is sparse or nonexistent. The results suggest that we can obtain an improved velocity field with this integration tomography scheme if a well-designed integration operator is used.
Migration and illumination
The oddities of regularized inversion: Regularization parameter and iterations (ps.gz 542K) (pdf 220K) (src 11374K)
Amplitude and kinematic corrections of migrated images for non-unitary imaging operators (ps.gz 2285K) (pdf 551K) (src 5405K)
Clapp M. L.
Proper imaging in areas with complex overburdens cannot be done effectively
with an adjoint operator such as migration. To image in complex areas, we
really want to apply an inverse operator, but most imaging problems involve
very large matrices that are difficult to invert directly. Therefore, many
schemes to approximate an inverse operator have been developed.
Regularized least-squares inversion implemented
in an iterative scheme can be very effective in dealing with illumination problems if
the imaging and regularization operators are well chosen. However, those
aren't the only decisions that need to be made. Both the choice of
regularization parameter, which balances the data and model residuals,
and the number of iterations (niter) can have a
significant effect on the quality of the final image. In this paper,
I describe some of the
issues that must be taken into account when choosing the regularization parameter and niter
for an imaging problem with poor illumination.
I also examine their effects on a simple synthetic data example. These examples show
that the effects of the regularization parameter and niter are related and must be considered
when performing an inversion for imaging.
Obtaining true-amplitude migrated images remains a
challenging problem. One possible solution to address it is
iterative inversion. However, inversion is a process that can
be rather difficult and expensive to apply, especially with 3-D
data. In this paper, I propose computing an image that is close to
the least-squares inverse image by approximating the Hessian, thus
avoiding the need for iterative inversion. The Hessian is
approximated with non-stationary matching filters. These filters are
estimated from two images: the first is the migrated image, and the second is the migration of
data remodeled from that image. Tests on the Marmousi dataset show that this filtering approach gives results
similar to iterative least-squares inversion at a lower cost.
In addition, because no regularization was used in the
inversion process, the filtering method produces an image with fewer artifacts.
Applying this method in the angle domain yields similar conclusions.
Directions in 3-D imaging - Strike, dip, both? (ps.gz 592K) (pdf 154K) (src 1859K)
Clapp M. L.
In an ideal world, a 3-D seismic survey would have infinite extents and
dense shot and receiver grids over the entire x-y plane.
This would provide the best illumination possible everywhere in the
subsurface. In our world, our limited source-receiver geometries allow
energy to leave the survey and the density of our shot and receiver
arrays depends on the equipment available. For 3-D surveys, the geometry
leads to limited azimuth ranges dependent on the direction in which the
survey is shot. The illumination itself
depends on the subsurface structure. For all of these reasons, shooting
our surveys in different directions will result in different subsurface illumination.
Illumination compensation: Model space weighting vs. regularized inversion (ps.gz 1375K) (pdf 412K) (src 110910K)
Clapp M. L.
In areas of complex geology, finite surveys and large velocity contrasts
result in images full of artifacts and amplitude variations due to
illumination problems. Cheap methods such as model space weighting
and expensive methods such as regularized least-squares inversion are
among the schemes that have been developed to deal with these issues.
Model space weighting operators can be obtained by applying a
forward modeling and an adjoint migration operator to a user-specified
reference model; they are then applied a posteriori to an image. Regularized
least-squares inversion applied in an iterative scheme requires the
selection of an imaging operator and a regularization operator that will
compensate for the illumination problems during the processing itself.
Applying each of these methods to the Sigsbee2A dataset, a complex synthetic,
shows that model space weighting a posteriori can help to equalize
amplitudes, but will strengthen artifacts within the image. Regularized
least-squares inversion will equalize amplitudes and reduce artifacts, but
can be quite expensive.
Iteratively re-weighted least-squares and PEF-based interpolation (ps.gz 623K) (pdf 531K) (src 3712K)
Parameter optimization for multiscale PEF estimation (ps.gz 71K) (pdf 82K) (src 44111K)
Interpolation methods frequently deal poorly with noise. Least-squares based interpolation
methods can deal well with noise, as long as it is Gaussian and zero-mean. When this is not
the case, other methods are needed. I use an iteratively re-weighted least-squares (IRLS) scheme to
interpolate both regular and sparse data with non-stationary prediction-error filters. I show
that multi-scale methods are less susceptible to erratic noise than single-scale PEF estimation
methods. I also show how IRLS improves results for PEF estimation in both cases, and how IRLS can
also improve the second stage of the interpolation, where the unknown data are constrained by the PEF.
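The IRLS mechanism for coping with non-Gaussian noise can be sketched on a hypothetical line fit contaminated by spikes (model, noise, and weight function below are illustrative assumptions, not the paper's operators): residual-dependent weights downweight the erratic samples, approximating a robust L1 misfit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Line fit d = G m contaminated by gross spikes at every 10th sample.
x = np.linspace(0.0, 1.0, 100)
G = np.column_stack([np.ones_like(x), x])
d = G @ np.array([0.5, 3.0]) + 0.05 * rng.standard_normal(x.size)
d[::10] += 5.0                               # erratic non-Gaussian noise

# Start from the plain L2 solution, then re-weight: w = 1/sqrt(|r|)
# makes the weighted L2 misfit approximate an L1 misfit.
m = np.linalg.lstsq(G, d, rcond=None)[0]
for _ in range(10):
    r = d - G @ m
    w = 1.0 / np.sqrt(np.abs(r) + 1e-6)      # small constant for stability
    m = np.linalg.lstsq(G * w[:, None], d * w, rcond=None)[0]
```

The plain L2 start is biased by the spikes; a few reweighted iterations recover the underlying line, which is the behavior exploited when IRLS is wrapped around PEF estimation and interpolation.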
Data interpolation is a long standing and persistent problem in
exploration geophysics. Methods range from those based on the known
behavior of the kinematics of seismic data
(Chemingui, 1999; Fomel, 2001; Vlad and Biondi, 2001), to
those that are based on transformations of the data into another
domain, such as the Fourier (Schonewille, 2000) or Radon domains...
Interpolation of bathymetry data from the Sea of Galilee: A noise attenuation problem (ps.gz 1044K) (pdf 1442K) (src 4419K)
Guitton A. and Claerbout J.
We process a bathymetry survey from the Sea of Galilee. This
dataset is contaminated with non-Gaussian noise in the form
of glitches and spikes inside the lake and at the track ends.
Drift on the depth measurements leads to
vessel tracks in the preliminary depth images. We derive an
inversion scheme that produces a depth map of the Sea of Galilee with much-reduced noise.
This inversion scheme includes preconditioning and Iteratively
Reweighted Least-Squares with the proper weighting function to get rid
of the non-Gaussian noise. We remove the ship tracks by adding a
modeling operator inside the inversion that accounts for the drift in the data.
We then approximate the model covariance matrix with a
prediction-error filter to enhance details in the middle of the lake.
Unfortunately, the prediction-error filter has the property of degrading the
resolution of the depth map at the edges of the lake. Our images of
the Sea of Galilee show ancient shorelines and rifting features inside the lake.
Flexible 3-D seismic
survey design (ps.gz 918K) (pdf 268K) (src 47808K)
Ocean-bottom seismometers in Japan (ps.gz 1527K) (pdf 267K) (src 1738K)
Using all available subsurface information in the design
of a 3-D seismic survey, we can better adjust the acquisition effort
to the demands of illumination of the target horizon. I present a method
that poses the choice of the acquisition parameters as an integer optimization
problem. Rays are shot from grid points on the target reflector at
uniform opening and azimuth angles and their emergence positions
at the surface are recorded. The optimization (an exhaustive search in this
example) minimizes the distance
between the ray emergence coordinates and the source and receiver
coordinates of candidate geometries subject to appropriate geophysical
and logistics constraints. I illustrate the method with a 3-D subsurface
model that I created featuring a target reflector whose depth changes
significantly across the survey area. I show that for this model the
standard approach would lead to a design requiring 200 shots/km2 whereas
the optimum design requires only 80 shots/km2 without sacrificing the
illumination of the target at any depth or the logistics of acquisition.
Ocean-bottom seismometers are well-tested, functional tools commonly
used in crustal seismology. They can be deployed much deeper and
are more robust than ocean-bottom cables, and they are the only type of instrument for 4-C
surveys at depths greater than 1500 m. I present the state of the art
of Japanese OBS technology and the logistics associated with it for the seismic-industry reader.
Dynamic permeability in poroelasticity (ps.gz 53K) (pdf 104K) (src 53K)
Poroelastic shear modulus dependence on pore-fluid properties arising in a model of thin isotropic layers (ps.gz 74K) (pdf 146K) (src 72K)
Berryman J. G.
The concept of dynamic permeability is reviewed. Modeling of
seismic wave propagation using dynamic permeability is important for
analyzing data as a function of frequency. In those systems
where the intrinsic attenuation of the wave is caused in large part
by viscous losses due to the presence of fluids, the dynamic
permeability provides a very convenient and surprisingly universal
model of this behavior.
Berryman J. G.
Gassmann's fluid substitution formulas for bulk and shear moduli
were originally derived for the quasi-static mechanical behavior of
fluid-saturated rocks. It has been shown recently that it is possible to
understand deviations from Gassmann's results at higher frequencies when
the rock is heterogeneous, and in particular when the rock heterogeneity
anywhere is locally anisotropic. On the other hand, a well-known way of
generating anisotropy in the earth is through fine (compared to
wavelength) layering. Then, Backus' averaging of the mechanical
behavior of the layered isotropic media at the microscopic level
produces anisotropic mechanical and seismic behavior
at the macroscopic level. For our present purposes, the Backus averaging
concept can also be applied to fluid-saturated porous media, and
thereby permits us to study how and what deviations from
Gassmann's predictions could arise in an elementary fashion. We
consider both closed-pore and open-pore boundary conditions between
layers within this model in order to study in detail how
violations of Gassmann's predictions can arise. After evaluating a
number of possibilities, we determine that energy estimates show
unambiguously that one of our possible choices - namely,
Geff(2) = (C11 + C33 - 2C13 - C66)/3 - is the
correct one for our purposes. This choice also possesses the very interesting
property that it is one of two sets of choices satisfying a product formula
whose factors are eigenvalues of the stiffness matrix
for the pertinent quasi-compressional and quasi-shear modes.
KR is the Reuss average for the bulk modulus, which is also the
true bulk modulus K for the simple layered system. KV is the
Voigt average. For a polycrystalline system composed at the
microscale of simple layered systems randomly oriented in space, KV and KR
are the upper and lower bounds respectively on the bulk modulus, and
Geff(2) and Geff(1) are the upper and lower bounds
respectively on the Geff of interest here. We find that
Geff(2) exhibits the expected/desired behavior, being
dependent on the fluctuations in the layer shear moduli and also being
a monotonically increasing function of Skempton's coefficient B of
pore-pressure buildup, which is itself a measure of the pore fluid's
ability to stiffen the porous material in compression.
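The Voigt and Reuss averages invoked above as bulk-modulus bounds are simple to compute; here is a sketch for a hypothetical two-constituent layered medium (equal volume fractions and made-up moduli in GPa, chosen only for illustration):

```python
import numpy as np

# Constituent bulk moduli (GPa) and volume fractions (hypothetical).
K = np.array([37.0, 21.0])
f = np.array([0.5, 0.5])

# Voigt (arithmetic) average: upper bound on the mixture's bulk modulus.
KV = np.sum(f * K)

# Reuss (harmonic) average: lower bound, and for a simple layered
# system under uniform stress it is the true bulk modulus.
KR = 1.0 / np.sum(f / K)
```

Any true modulus of the mixture must lie in the interval [KR, KV], mirroring the role Geff(1) and Geff(2) play as shear-modulus bounds in the abstract.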
SEPlib programming and irregular data (ps.gz 19K) (pdf 41K) (src 16K)
MPI in SEPlib (ps.gz 20K) (pdf 36K) (src 11K)
Clapp R. G.
SEP3D is a powerful extension to the original SEPlib hypercube format.
SEP3D attempts to build as much as possible on top of the
original hypercube format. As a result, a SEP3D dataset is to some extent an
amalgamation of two or three SEPlib datasets.
Traditionally SEPlib files are composed of two parts, an ASCII description file and
a binary data file. The ASCII file provides a description
and a pointer to the binary file (by default ending in @).
Below is a summary of the three ASCII/binary pairs.
Clapp R. G.
For many applications, a few routines provide all the MPI functionality that
is needed. The basic idea of these routines is to make
a local version of global files. These files are transferred
to and from the master process through some simple routines.
These routines can be broken into three categories:
initialization, distribution, and collection.
All of the routines are written in C with a Fortran interface (e.g.
floats become reals, ints become integers).
Stanford Exploration Project