Interval velocity estimation from beam-stacked data -- field-data results (ps 37K) (src 6K)
Biondi B.
My velocity estimation from beam-stacked data
successfully reconstructs
a velocity anomaly from a field data set.
Using the estimated velocity model for depth migrating
the data
corrects the effects of the anomaly on
the positioning and focusing of the reflectors.
The estimated anomaly accounts well for the perturbations
in the beam-stacked data caused by the actual velocity anomaly.
An analysis of the transformed data
also shows that the resolution of beam stacks
is sufficient for measuring different effects of the velocity
anomaly on the moveouts of the reflections at different offsets.
A truncation effect in velocity analysis (ps 39K) (src 3K)
Jedlicka J.
Velocity stacks exhibit a set of parabolas caused by the truncation effect.
A velocity stack that uses uncertain velocities attenuates
the truncation effect.
Structural-velocity estimation - imaging salt structures (ps 43K) (src 8K)
Trier J. v.
Depth migration is often needed for the correct imaging of salt structures.
However, the velocity model necessary for the migration is hard to
determine in salt regions, because of large lateral velocity
variations and poor data quality.
In previous reports I discussed a gradient optimization method to estimate
a structural-velocity model in these regions. The method is based on
the inversion of perturbations in prestack migrated events, where the
events correspond to structural boundaries picked from the migrated image.
The optimization does not require picking of prestack data;
the data part of the gradient is determined from local semblance calculations.
A preliminary analysis of a dataset recorded above a salt structure
explores the difficulties in salt imaging, and illustrates the gradient
calculation.
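The local semblance that drives the data part of the gradient can be illustrated with a generic semblance coefficient over a gather; this is a minimal sketch of the standard measure, not the report's implementation, and the function name is illustrative:

```python
import numpy as np

def semblance(gather):
    """Semblance coefficient per time sample for a (ntraces, nt) gather.

    Ranges from 0 (incoherent traces) to 1 (identical traces)."""
    num = np.sum(gather, axis=0) ** 2                      # energy of the stack
    den = gather.shape[0] * np.sum(gather ** 2, axis=0)    # stacked energy
    return num / np.maximum(den, 1e-12)
```

Events that are flat across the gather score near 1, which is what makes semblance a usable substitute for explicit prestack picking.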
Velocity analysis by inversion (ps 67K) (src 30K)
Jedlicka J.
Velocity analysis using a conjugate-gradient method gives better
resolution than classical stacking along hyperbolas. The data can be
easily recovered from the resulting velocity-analysis panel.
Comparisons of various samplings of the velocity domain show
that even sampling in slowness squared is the preferred choice.
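A plain hyperbolic velocity stack with the velocity axis sampled evenly in slowness squared can be sketched as follows; the synthetic gather, event, and all parameter values are illustrative, and this is the classical stack the inversion is compared against, not the conjugate-gradient method itself:

```python
import numpy as np

def velocity_stack(data, dt, offsets, s2_axis):
    """Stack (noffsets, nt) data along hyperbolas t(x) = sqrt(t0^2 + s2*x^2),
    one output trace per squared slowness s2 on an evenly sampled s2 axis."""
    nt = data.shape[1]
    t0 = np.arange(nt) * dt
    panel = np.zeros((len(s2_axis), nt))
    for i, s2 in enumerate(s2_axis):
        for trace, x in zip(data, offsets):
            t = np.sqrt(t0 ** 2 + s2 * x * x)          # hyperbolic traveltime
            panel[i] += np.interp(t, t0, trace, left=0.0, right=0.0)
    return panel

# synthetic gather: one event with slowness 1/2000 s/m (s2 = 2.5e-7 s^2/m^2)
dt, nt = 0.004, 256
offsets = np.arange(0.0, 1001.0, 100.0)
s2_true, t0_event = 2.5e-7, 0.5
taxis = np.arange(nt) * dt
data = np.array([np.exp(-((taxis - np.sqrt(t0_event**2 + s2_true*x*x)) / 0.01)**2)
                 for x in offsets])
s2_axis = np.linspace(1e-7, 5e-7, 41)   # even sampling in slowness squared
panel = velocity_stack(data, dt, offsets, s2_axis)
```

The stack focuses at the squared slowness of the event; even s^2 sampling makes the residual moveout between adjacent panel traces uniform across the axis.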
Migration under velocity uncertainty: impulse responses of the operators (ps 20K)
Zhang L.
Under the assumption that the probability density functions of the
velocities are known, and are spatially invariant, two optimal migration
operators are formulated. For flat events, the operators give the same
responses as does the Stolt migration operator. As the dips of the events
increase, the operators gradually attenuate the high frequency content of the
events while keeping the low frequency energy of the events unchanged.
Kinematic residual prestack migration (ps 51K) (src 9K)
Etgen J. T.
Using a kinematic approach, it is possible to derive a residual
migration operator that
converts a constant-offset section migrated with one slowness
to a constant-offset section migrated with another slowness.
Residual constant-offset migration can be decomposed into
the following sequence of processes:
1) residual common-reflection-point gathering or DMO,
2) residual NMO, and 3) residual zero-offset migration.
If the residual
zero-offset migration part of the operator is ``subtracted'' from
residual constant-offset migration we are left with a
residual NMO+DMO operator that corrects a migrated constant-offset section
for the residual NMO and residual common-reflection-point smear associated
with a change in migration slowness without performing residual zero-offset
migration. This ability to change the migration slowness of the data
without moving events around is useful for velocity analysis.
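The bookkeeping behind the decomposition can be checked on the NMO part alone: NMO with one slowness followed by a residual NMO for the slowness change reproduces NMO with the new slowness. A minimal numeric check, assuming flat-layer, constant-slowness kinematics (function names and values are illustrative):

```python
import math

def nmo(t, x, s):
    """Hyperbolic NMO: map recorded time t at offset x to zero-offset time."""
    return math.sqrt(t * t - (s * x) ** 2)

def residual_nmo(t1, x, s_old, s_new):
    """Correct an already NMO-corrected time for a change of slowness."""
    return math.sqrt(t1 * t1 + (x * x) * (s_old * s_old - s_new * s_new))

t, x = 2.0, 1500.0
s1, s2 = 4e-4, 5e-4
# NMO with s1 followed by residual NMO from s1 to s2 equals direct NMO with s2
chained = residual_nmo(nmo(t, x, s1), x, s1, s2)
direct = nmo(t, x, s2)
```

The identity holds exactly because both paths reduce to t^2 - s2^2 x^2 under the square root, which is the sense in which the residual operator "subtracts" the part already applied.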
Finite-difference calculation of traveltimes (ps 45K) (src 7K)
Trier J. v.
The eikonal equation can be transformed into a
flux-conservative equation, which can then be solved with standard
finite-difference techniques.
A first-order upwind finite-difference scheme proves accurate enough for
seismic applications. The method is limited to rays traveling
in one direction, and only calculates single-valued traveltime
functions.
The algorithm efficiently calculates
traveltimes on a regular grid, and is useful in Kirchhoff
migration and modeling.
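The paper's scheme integrates a flux-conservative form restricted to one propagation direction; as a generic illustration of first-order upwind eikonal solving on a regular grid, here is a Godunov fast-sweeping sketch (a related but different algorithm; grid size, spacing, and source position are illustrative):

```python
import numpy as np

def eikonal_sweep(slowness, h, src):
    """First-order Godunov upwind fast-sweeping solver for the 2-D eikonal
    equation |grad t| = slowness, with a point source at grid index src."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(3):                       # a few passes over all 4 orderings
        for ys, xs in orders:
            for iy in ys:
                for ix in xs:
                    if (iy, ix) == src:
                        continue
                    a = min(t[iy - 1, ix] if iy > 0 else np.inf,
                            t[iy + 1, ix] if iy < ny - 1 else np.inf)
                    b = min(t[iy, ix - 1] if ix > 0 else np.inf,
                            t[iy, ix + 1] if ix < nx - 1 else np.inf)
                    if min(a, b) == np.inf:  # no finite upwind neighbor yet
                        continue
                    fh = slowness[iy, ix] * h
                    if abs(a - b) >= fh:     # causal one-sided update
                        tnew = min(a, b) + fh
                    else:                    # two-sided quadratic update
                        tnew = 0.5 * (a + b +
                                      np.sqrt(2.0 * fh * fh - (a - b) ** 2))
                    if tnew < t[iy, ix]:
                        t[iy, ix] = tnew
    return t
```

Like the scheme in the paper, this produces only single-valued first-arrival traveltimes, which is exactly the output a Kirchhoff migration or modeling kernel consumes.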
Scattering analysis of passive seismic data (ps 41K) (src 9K)
Cole S.
Blasts set off in a quarry 15 km from the SEP passive-experiment
site are easily detected using beam steering, even when they are quite
small. These blasts have provided a signal with which various
processing techniques can be evaluated before being applied to ambient
noise data.
One such technique is imaging by looking for evidence of diffraction off
near-surface structures. This yields a picture that is fairly consistent
but dominated by plane-wave energy arriving in the same direction
as scattered waves.
One of the most surprising results of the experiment is the observation
of many weak near-vertically incident events during nighttime recording.
While their origin has yet to be explained, these events are not due
to electrical interference.
Kinematics of Prestack Partial Migration in a variable velocity medium (ps 53K) (src 7K)
Popovici A. M. and Biondi B.
Prestack partial migration (PSPM)
is a well-known
process for transforming prestack data to zero offset
when the velocity function
varies slowly.
To adapt PSPM to media in which the velocity v(z)
varies rapidly with depth,
the generalized PSPM impulse response is computed using raytracing
and is applied to the data as an integral operator.
The impulse response of the PSPM operator
can be computed by considering the PSPM process
as the combination of
full prestack migration followed by
zero-offset modeling.
The kinematics of the proposed PSPM operator and those
of the constant-velocity PSPM
differ significantly
in some realistic cases, such as in the presence of a low-velocity layer or
when a hard sea bottom causes
a jump in velocity.
Cascaded 15-degree normal moveout (ps 41K) (src 3K)
Jedlicka J.
The superposition of 15-degree partial normal moveouts
is proved to converge to normal moveout.
This relation is a basis for cascaded migrations.
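The convergence can be checked numerically: cascading n parabolic (15-degree) partial moveout steps, each removing 1/n of the squared slowness, approaches the exact hyperbolic NMO time as n grows. A small sketch with illustrative parameter values:

```python
import math

def cascaded_parabolic_nmo(t, x, s, n):
    """Apply n parabolic (15-degree) partial NMO steps, each removing s^2/n
    of the squared slowness; returns the cascaded zero-offset time."""
    ds2 = s * s / n
    for _ in range(n):
        t = t - x * x * ds2 / (2.0 * t)      # parabolic moveout step
    return t

t, x, s = 2.0, 1000.0, 1e-3
exact = math.sqrt(t * t - (s * x) ** 2)      # exact hyperbolic NMO time
```

Each step is the first-order Taylor term of the hyperbola, so the cascade integrates the ODE t dt = -(x^2/2) d(s^2) whose solution is the exact NMO relation; the error shrinks like 1/n.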
Recovering source properties from nine-component monitor data (ps 21K) (src 1K)
Dellinger J.
Monitor phones usually have only a single component, and are intended to
monitor source statics and wavelets. Three component monitor phones can be
used for this purpose as well, but also provide additional multicomponent
information that can be used to track source moves, examine source-ground
coupling and radiation patterns, and look for near-surface anisotropy. I
demonstrate a simple, robust method for estimating wavetype polarization
directions at a single geophone. The results appear to be repeatable to within
about half a degree. The wavetype polarization directions in turn provide
information about the position of the source. Knowing that the source motions
derived from both P and S arrivals should be the same gives us further
information about the source radiation patterns. The main effect seems to be
that the first stomp at a given location is weaker than all later ones.
Solving for the source (ps 21K) (src 1K)
Karrenbach M. and Muir F.
Multi-component source and receiver data can be turned into a vector
product field only if the source and receiver responses are reducible to a
uniform, isotropic model. Careful design is sufficient in the case of
seismometers. Sources are another matter, since they interact in an unknown,
variable, and non-linear manner with the earth's surface. We discuss a
technique and requirements for reducing the responses of multi-component
sources to a uniform isotropy. We explain the problem structure and find
a pre-stack formulation which minimizes the differences in a reciprocal
trace pair.
Converting surface motions to subsurface wavefields (ps 22K) (src 1K)
Nichols D.
The measured quantities in nine-component seismic data recorded at a
free surface
are displacements in the X, Y, and Z directions. The
quantities we would like to use in seismic imaging are the amplitudes of the
upcoming P, S1 and S2 waves. If the properties of the near-surface are known
exactly it is possible to obtain the wave amplitudes from the measurements.
However if the properties of the near-surface are unknown they must be
estimated as the conversion is performed. It is not possible to estimate
accurately all the elastic properties of the near-surface from reflection
seismic data, so some approximate conversion scheme in which only a few
parameters need to be estimated is required. The conversion scheme most often
used is valid for waves propagating vertically or near vertically in a medium
exhibiting orthorhombic symmetry. I present a scheme that uses approximations
valid for waves arriving over a larger range of angles in a medium with the
more general monoclinic symmetry. Three main classes of methods may be used
to choose
the parameters that ``best'' perform the conversion from surface motions to
wave amplitudes. These are: searching the full parameter space, interactive
selection of parameters, and automatic search of the parameter space.
Wavefield separation in three dimensions (ps 20K) (src 1K)
Dellinger J. and Etgen J.
In two-dimensional isotropic media the divergence and the curl operators
pass only P and S waves, respectively. Mode separation in two-dimensional
anisotropic media is only slightly harder. In three
dimensions, however, things get complicated. The P wave can
still be separated from the two shear waves, but the two shear waves form a
continuous single two-sheeted surface and cannot be separated from each other.
We will show several finite-difference examples of wavetype separation in
transversely isotropic and orthorhombic anisotropic media.
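In 2-D the separation amounts to applying divergence and curl to the displacement field. A finite-difference check on synthetic plane waves (grid, spacing, and wavenumbers are illustrative; derivatives use numpy's central differences):

```python
import numpy as np

n, h = 201, 0.005
x = np.arange(n) * h
X, Z = np.meshgrid(x, x, indexing="ij")
kx, kz = 30.0, 40.0                 # propagation direction of the plane wave
k = np.hypot(kx, kz)
phase = kx * X + kz * Z

# P wave: displacement parallel to the propagation direction
ux_p = (kx / k) * np.sin(phase)
uz_p = (kz / k) * np.sin(phase)
# SV wave: displacement perpendicular to the propagation direction
ux_s = (-kz / k) * np.sin(phase)
uz_s = (kx / k) * np.sin(phase)

def div(ux, uz):
    """Divergence du_x/dx + du_z/dz: passes only the P wave."""
    return np.gradient(ux, h, axis=0) + np.gradient(uz, h, axis=1)

def curl(ux, uz):
    """Curl (y-component) du_z/dx - du_x/dz: passes only the S wave."""
    return np.gradient(uz, h, axis=0) - np.gradient(ux, h, axis=1)
```

The divergence of the P wave is large while its curl vanishes to within finite-difference error, and vice versa for the SV wave; this is the 2-D isotropic baseline that breaks down for the coupled shear sheets in 3-D.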
Separation of converted modes in marine data (ps 60K) (src 10K)
Filho C. A. C. and Muir F.
Two methods are tested to separate
the converted modes from the dominant P wavefield. The first one
uses the critical slowness for P waves as the discriminant
factor and is ineffective for deep reflectors.
A better result is obtained with another approach,
in which the upcoming wavefield is divided into
zones sharing a common range of ray parameter; a filtering process
using the horizontal slowness as the discriminant factor is then applied
to each zone. Although more effective, this
method is more sensitive to contamination by multiples.
For this reason the suppression of multiples and peg-legs
prior to the separation process becomes of vital importance.
Elastic properties of fractured rocks (ps 21K) (src 1K)
Muir F. and Nichols D.
Recent work on equivalent elastic medium theory is modified and extended
beyond fine-layered anisotropic rocks to include multiple fracture sets, lying
anywhichway. The group-theoretic structure is still basic, but group elements
are now expressed in terms of compliances rather than stiffnesses, which greatly
simplifies the algebra and makes the commutative property clear - rocks can
be fractured in any order. This then leads to a method for modeling the effect
of fracture distributions. This new model also provides a unifying framework
to compare the several specialized fracture models in the literature. Several
simple but physically reasonable examples are given.
Beam steering using 3-component data (ps 22K) (src 1K)
Karrenbach M. and Cole S.
Conventional beam steering is a well known data analysis tool for scalar
data. We have extended the method to accommodate vector data, and 3-component
1D, 2D, or 3D arrays. The dataset is
decomposed into plane waves by summing along different wavefronts
while combining the vector components to recover a particular wave type.
Beam steering of 3-component data consists of two steps, a time shift and
a vector transformation. In an isotropic subsurface only the vector
transformation depends on direction, while in an anisotropic subsurface both
the time shift and the vector transformation depend on it. The vector
transformation additionally depends on whether the data were recorded on a free
surface or within the medium. We show synthetic examples for the 3-component
1D, 2D and 3D array cases, compare them to their single-component counterparts,
and give a real data example using a 1-D array of 3-component receivers.
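The time-shift step is the same as in conventional scalar beam steering. A minimal scalar delay-and-sum sketch over a 1-D array (all parameter values are illustrative, and the vector-transformation step is omitted) shows how stack power peaks at the true horizontal slowness:

```python
import numpy as np

def beam_power(data, dt, offsets, slownesses):
    """Delay-and-sum beam steering for a (nreceivers, nt) scalar section.

    Returns stacked power as a function of trial horizontal slowness."""
    nt = data.shape[1]
    t = np.arange(nt) * dt
    power = np.zeros(len(slownesses))
    for i, p in enumerate(slownesses):
        stack = np.zeros(nt)
        for trace, xr in zip(data, offsets):
            # undo the plane-wave delay p*x before stacking
            stack += np.interp(t + p * xr, t, trace, left=0.0, right=0.0)
        power[i] = np.sum(stack ** 2)
    return power

# synthetic plane wave with slowness 2e-4 s/m across a 1-D array
dt, nt = 0.004, 256
offsets = np.arange(0.0, 1001.0, 100.0)
p_true, t0 = 2e-4, 0.3
taxis = np.arange(nt) * dt
data = np.array([np.exp(-((taxis - t0 - p_true * xr) / 0.02) ** 2)
                 for xr in offsets])
p_axis = np.linspace(0.0, 4e-4, 21)
power = beam_power(data, dt, offsets, p_axis)
```

The 3-component extension applies the same shifts per component and then combines the components with a direction-dependent vector transformation to isolate one wave type.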
Finite-aperture slant-stack transforms (ps 20K) (src 1K)
Kostov C.
I derive accurate and efficient algorithms for computing least-squares
inverses of the slant-stack and related time-invariant linear transforms.
Conventionally, the inverse transform is computed assuming an infinite
aperture of the array; the inverse is then simply the conjugate of the forward
transform followed by a rho-filter. While considerably cheaper to compute,
the infinite-aperture transform introduces more artifacts than the
finite-aperture least-squares inverse transform.
The new algorithms bring down the cost of applying the least-squares inverse
transform to nearly the cost of applying the infinite-aperture inverse.
The efficiency of the algorithms results from the observation that
the matrix of normal equations has a Toeplitz structure, even for data that
are irregularly sampled or non-uniformly weighted in offset. To ensure
the numerical stability of the least-squares inverse, I introduce a sampling
rate in ray parameter that depends on frequency and corresponds to
uniform sampling in wavenumber.
Examples with synthetic and field data illustrate two applications of the
slant-stack transform: interpolation and dip filtering. The examples confirm
that pairs of finite-aperture forward and least-squares inverse transforms
introduce fewer artifacts than conventional transform pairs.
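The Toeplitz observation is easy to verify at a single temporal frequency: for a uniformly sampled ray-parameter axis, the entries of the weighted normal matrix depend only on the difference of the two ray-parameter indices, whatever the offset sampling. A small check with illustrative (random) offsets and weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1000.0, 30))   # irregular offsets
w = rng.uniform(0.5, 1.5, 30)               # non-uniform offset weights
omega = 2.0 * np.pi * 20.0                  # one temporal frequency
p = np.linspace(-5e-4, 5e-4, 21)            # uniform ray-parameter axis

# frequency-domain slant-stack modeling: d(x) = sum_p m(p) exp(i w p x)
L = np.exp(1j * omega * np.outer(x, p))
N = L.conj().T @ (w[:, None] * L)           # weighted normal matrix L^H W L
```

Since N[i,j] = sum_x w(x) exp(i omega (p_j - p_i) x) and p_j - p_i depends only on j - i, every diagonal of N is constant; this is the structure that lets a Levinson-style Toeplitz solver replace a general dense solve.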
Iterative L1 deconvolution (ps 70K) (src 33K)
Darche G.
An alternative to the classical Wiener deconvolution is to
minimize the L1 norm of the residuals rather than their L2 norm.
This minimization is achieved by using an iteratively reweighted least-squares
algorithm, which finds L1 minima, starting from L2 minima. The
solution found by this algorithm is still close to the result of the L2
deconvolution, and does not resolve the indeterminacy of the
phase of the seismic wavelet. However, this algorithm shows the efficiency of
L1 deconvolution in the presence of noise bursts in the seismic data, and could
be used for more general noise problems.
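The iteratively reweighted least-squares idea can be sketched on a toy linear problem, where an L1-style fit shrugs off a noise burst that would bias the L2 fit; the problem setup and tolerances are illustrative, not the report's deconvolution code:

```python
import numpy as np

def irls_l1(A, b, n_iter=30, eps=1e-6):
    """Approximate L1 solution of A x = b by iteratively reweighted least
    squares, starting from the L2 solution."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        sw = (r * r + eps * eps) ** -0.25    # sqrt of the L1 weight 1/|r|
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# straight line with one noise burst: the L1 fit recovers slope 2, intercept 1
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[10] += 10.0                                # the burst
x_l1 = irls_l1(A, b)
```

Each pass solves a weighted L2 problem whose weights 1/|r| turn the quadratic penalty into an absolute-value one, so the burst is progressively downweighted instead of dragging the solution.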
Object-Oriented programming for seismic data (ps 59K) (src 10K)
Dulac J. and Nichols D.
The intuitive appeal of object-oriented programming is that it provides
better concepts
and tools to represent the natural structure of the real world
as closely as possible.
The advantages of this direct representation capability in programming and data
modeling are (1) data abstraction or encapsulation, in which the program objects
have a one-to-one correspondence with the problem objects,
and (2) reusable toolkits with well-defined minimal interfaces.
We have created classes of SepData objects for seismic data manipulation.
Data are encapsulated into objects such as Cube, Plane and Trace.
Graphic objects are attached to these data objects to facilitate
interactive programming.
This library is already used for an object-oriented implementation of
the Zplane and Overlay programs.
X3D: extensible, interactive three dimensional geophysical data visualization (ps 22K)
Ottolini R.
New three dimensional geophysical visualizations include radial trace
sheets and 3-D cellular automata fluid flow. These plus earlier visualizations
are implemented on top of an extensible, interactive, portable three
dimensional graphics library called X3D.
Why does SEP still use Vplot? (ps 22K) (src 1K)
Dellinger J.
Vplot is SEP's ``home-brew'' plotting system. Ten years after its
creation, SEP continues to use Vplot even though Vplot is not suited for
interactive plotting. Many better-written commercial and academic packages are
now available, but SEP has not switched yet because these packages fail to
provide certain critical capabilities needed by researchers.
The relative strengths and weaknesses of Vplot and current ``window'' systems
are based upon their differing aims. Vplot attempts to be a good system for
archiving important plots; it is meant for making static figures that will go
into papers or onto slides. Current window systems are designed as a base for
interactive application programs. Both sorts of capabilities are needed by
researchers.
Into the nineties: SEP workstation update (ps 22K)
Ottolini R.
New workstations are fast enough (10 MIPS, 4 MFLOPS) for interactive
two-dimensional signal processing. SEP has acquired four. Maturing software
tools, X-Windows and C++, greatly assist interactive processing development
too.
Device-independent software installation with Cake (ps 23K) (src 1K)
Nichols D. and Cole S.
At SEP we have four different types of UNIX machines and need to use
our software on all of them. Ideally we would like to have one copy
of the program source that could then be compiled on each machine. The UNIX
utility make has limitations that make this goal difficult to
achieve. We now use a new utility called cake. It is much more flexible
and has allowed us to achieve the goal of having one repository for our source
code that is used by all four systems: Convex, Sun-3, Sun-4, and
DECstation-3100.
Preface to practical inversion tutorial (ps 27K) (src 2K)
Claerbout J. F.
My book revisions have led to three chapters that form
a practical inversion tutorial.
Included in the tutorial are some new research results, namely
(1) antialias velocity estimation,
(2) weighting function for separation of pressure and shear waves,
(3) properties of interpolation-error filters, and
(4) blind deconvolution of an all-pass filter.