Characterization of a hydrate reservoir (ps 3464K) (src 20032K)
Ecker C.
I use marine seismic data from the Blake Outer Ridge region to characterize
the lateral distribution of a methane hydrate reservoir. Detailed
amplitude versus offset (AVO) analysis of the data, combined with velocity
analysis and seismic impedance inversion, is used to explore the extent and
characteristics of the bottom simulating reflector (BSR) associated with the
base of the hydrate stability field. The results suggest a strong correlation
between strong BSR reflections and the presence of a low-velocity zone,
which is indicative of free gas beneath the hydrate.
Weaker BSR amplitudes occur in areas of decreased velocity contrast, where the
BSR has a ``fractured'' appearance. The P-impedance inversion yields strong
contrasts at both the seafloor and the BSR, owing to the significant velocity
contrasts there.
The S-wave impedance, on the other hand, shows a stronger contrast in the
vicinity of the BSR than at the seafloor. The BSR
contrast has a rather irregular appearance and seems to be dominated
by events cutting in from underneath. These events are significantly weaker
in the P-impedance contrast and in places are not visible at all.
AVO analysis of the amplitudes at three different locations along the BSR
yields decreasing P-wave velocities across the BSR, while the S-wave
velocities either increase, decrease, or remain unchanged. Possible reasons for these
local variations of the
shear wave velocity might be either
patchy hydrate saturation or patchy gas saturation beneath the hydrate.
Furthermore, the hydrate might be fractured, thus causing the
properties of the hydrate to change locally.
Detecting S-wave velocity contrasts across BSRs using AVO (ps 571K) (src 1213K)
Ecker C.
Reservoir characterization of methane hydrates requires a reasonable
estimate of their S-wave velocity behavior. AVO analysis at the bottom
simulating reflector (BSR), the reflection from the base of a stable hydrate
structure, is a tool for extracting this information from surface P-wave data.
Using sonic and density logging data from a recent ODP (Ocean Drilling Program)
cruise at the
Blake Outer Ridge, offshore Florida and Georgia, I evaluate the effect of
different S-wave velocity structures on the seismic BSR amplitudes by
forward seismic modeling. I show that even for an angle coverage of
30 degrees,
S-wave velocity contrasts of about 17 m/s across the BSR can still be
uniquely distinguished in the case of ideal amplitudes.
Introducing random Gaussian noise with a signal-to-noise ratio of about 2:1
strongly reduces the ability to differentiate between such small
contrasts. In that case, errors between
50 m/s and
100 m/s can be
introduced into the estimation of the shear wave velocity contrast.
A subsequent 2-D inversion of a model that varies laterally only in S-wave
velocity results in more stable resolution in the
presence of noise.
Bandwidth-equalization and phase-matching of time-lapse seismic datasets (ps 535K) (src 2278K)
Rickett J.
Time and frequency domain methods for equalizing bandwidth and making
phase corrections are tested on synthetic time-lapse data.
Cross-equalization filters are primarily designed to
equalize amplitude spectra by decreasing the bandwidth of the higher
frequency survey to that of the lower frequency survey.
I also test approaches that maximize the bandwidth of the
difference between surveys while avoiding the amplification of noise.
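As a minimal illustration of the amplitude-spectrum equalization idea, the sketch below designs a generic frequency-domain Wiener matching filter that shapes a monitor trace toward a base trace, effectively lowering the bandwidth of the higher-frequency survey to that of the lower-frequency one. This is an assumed, simplified filter design for illustration, not the exact cross-equalization filter used in the paper.

    import numpy as np

    def matching_filter(base, monitor, eps=1e-3):
        # Frequency-domain Wiener filter F(w) = B(w) conj(M(w)) / (|M(w)|^2 + eps*peak),
        # designed so that applying F to the monitor trace matches it to the base
        # trace in amplitude and phase (a sketch, not the paper's exact design).
        B = np.fft.rfft(base)
        M = np.fft.rfft(monitor)
        return B * np.conj(M) / (np.abs(M) ** 2 + eps * np.max(np.abs(M)) ** 2)

    def apply_filter(F, trace):
        # Apply the frequency-domain filter to a trace of the same length
        # as the traces used to design it.
        return np.fft.irfft(F * np.fft.rfft(trace), n=len(trace))

    # usage (hypothetical traces):
    #   F = matching_filter(base, monitor)
    #   monitor_eq = apply_filter(F, monitor)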
Cross-equalization of a very shallow seismic experiment (ps 511K) (src 1139K)
Rickett J. and Bachrach R.
Cross-equalization is a catch-all term for the removal of systematic
non-fluid related differences between reservoir monitoring surveys.
It is an important step in the processing of time-lapse seismic data
as it allows interpretation of difference images in terms of fluid
parameters. In this paper we apply a post-stack
cross-equalization algorithm to detect differences between two
high-resolution seismic surveys caused by the addition of 20 liters
of honey in the near-surface. It follows the cross-equalization work
on synthetic data presented in the previous paper
...
Systematic AVO response with depth (ps 107K) (src 249K)
Boyd D.
Well logs reaching several kilometers below the surface of the
earth show that the P-wave reflection coefficient changes behavior
with depth. I examined one particular well log, reaching approximately 5 km below the surface
of the earth and assumed to be saturated with water. Simple two-layer
models consisting of an upper layer of saturated sand and a lower layer of shale
were constructed for three different depths to study the amplitude variation with offset (AVO) response.
Studying how the AVO response changes with depth for brine- and gas-saturated
sand shows that the well log exhibits all three classes of reflectivity.
The shallow portions of the well display typical Class III reflections.
The deepest sections exhibit Class I behavior. Near 2.4 km, the well
log comes to a crossover point and displays Class II reflection behavior.
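The class behavior can be illustrated with the two-term Shuey approximation, R(theta) ~ A + B sin^2(theta), in which the intercept A and the gradient B control the AVO class. The sketch below uses made-up layer properties, not values from the well log discussed above.

    import numpy as np

    def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
        # Two-term Shuey approximation for a single interface (layer 1 over layer 2):
        # R(theta) ~ A + B sin^2(theta).
        dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
        vp, vs, rho = 0.5 * (vp1 + vp2), 0.5 * (vs1 + vs2), 0.5 * (rho1 + rho2)
        A = 0.5 * (dvp / vp + drho / rho)                                 # intercept
        B = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
        return A, B

    def avo_class(A):
        # Rough classification by the normal-incidence reflectivity (intercept).
        if A > 0.02:
            return "Class I"
        if A < -0.02:
            return "Class III"
        return "Class II"

    # illustrative shale-over-gas-sand interface (made-up numbers)
    A, B = shuey_intercept_gradient(2400.0, 1100.0, 2.35, 2200.0, 1300.0, 2.05)
    print(A, B, avo_class(A))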
Azimuth moveout + common-azimuth migration: Cost-effective prestack depth imaging of marine data (ps 1668K) (src 4370K)
Biondi B.
Common-azimuth imaging can significantly reduce
the cost of full-volume 3-D prestack depth imaging
of marine data sets.
The common-azimuth imaging procedure consists of two steps:
first, transformation of the prestack data to common azimuth data
by azimuth moveout (AMO);
second, imaging the transformed data by common-azimuth migration.
Both these steps are computationally efficient.
AMO is a partial-migration operator and thus
has a narrow spatial aperture.
Common-azimuth migration
is based on downward-continuation
of the wavefield;
therefore, its computational cost increases only as the square of
the image depth.
In contrast, the cost of conventional Kirchhoff migration is proportional
to the cube of the image depth.
Because it is a wavefield-continuation method,
common-azimuth migration
does not require the
computation of asymptotic Green's functions.
Therefore,
common-azimuth imaging is likely to overcome
some of the accuracy problems encountered by Kirchhoff migration
in the presence of complex wave-propagation phenomena.
The proposed common-azimuth imaging procedure successfully
depth imaged a marine data set recorded in the North Sea.
These positive results suggest the application
of common-azimuth imaging
to velocity estimation based on wavefield focusing.
Iterative estimation of depth migration operators (ps 143K) (src 252K)
Clapp R. G. and Biondi B.
We divide the velocity estimation problem into two steps: focusing
operator estimation and back projection to the velocity field.
We propose estimating perturbations in the focusing operators
by analyzing homothetic scans of the travel-time field.
In the case of systematic bias in the velocity model,
the correct perturbations to the focusing operator are indicated
by the homothetic scans.
3-D prestack datuming in midpoint and offset coordinates (ps 204K) (src 352K)
Crawley S.
3-D seismic surveys are designed to have even sampling in
midpoints, but often have irregular sampling in offset.
For instance, marine surveys tend towards poor sampling and
small aperture in the crossline offset direction, owing to the
necessity of towing streamers.
Integral operators which work on 3-D prestack data must deal with the
problems that arise from 3-D prestack geometry in order to be useful;
a straightforward generalization of a 2-D operator is likely to consume
a great many CPU cycles while producing suboptimal output.
Prestack Kirchhoff datuming can be made more effective in 3-D by
reformulating it in midpoint and offset coordinates, and choosing
an antialiasing strategy to take advantage of the resulting limitation
of the time dip of events.
Two-step equalization of irregular data (ps 164K) (src 385K)
Chemingui N. and Biondi B.
Sampling irregularities in seismic data may introduce
noise, cause amplitude distortions and even structural distortions
when wave equation processes such as dip moveout, azimuth moveout,
and prestack migration are applied.
Data regularization before imaging becomes a processing requirement to
preserve amplitude information and produce a good quality final image.
We propose a new technique to invert for reflectivity models
while correcting for the effects of irregular sampling.
The final reflectivity model is obtained in two steps: the data are first
equalized with an inverse filter, and an imaging operator is then applied
to the equalized data to invert for a model.
Based on least-squares theory, the solution estimates an equalization
filter that corrects the imaging operator for the interdependencies
between data elements. Each element of the filter is
a mapping between two data elements: it reconstructs a data trace recorded
with one geometry at the geometry of the other data element.
This mapping represents an AMO transformation.
The filter is therefore a symmetric AMO matrix with diagonal
elements being the identity
and the off-diagonal elements being the trace-to-trace AMO transforms.
We explore the effectiveness of the method in the 2-D case for the application of
partial stacking by offset continuation.
The equalization step followed by imaging proved effective in correcting
the processing for the effects of fold variations.
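A schematic of this two-step structure, with simple nearest-bin stacking and inverse-fold weights standing in for the AMO-based equalization filter; this is an illustrative simplification under assumed 1-D geometry, not the filter described above.

    import numpy as np

    def bin_index(midpoints, x0, dx, nbins):
        # Nearest-bin index for each trace midpoint (toy 1-D geometry).
        return np.clip(np.rint((midpoints - x0) / dx).astype(int), 0, nbins - 1)

    def equalize_then_image(traces, midpoints, x0, dx, nbins):
        # Step 1: equalize the data for irregular fold by weighting each trace
        # with the inverse fold of its bin (the role of the equalization filter;
        # the AMO trace-to-trace mappings are omitted in this simplification).
        # Step 2: image by applying the adjoint of the binning operator (a stack).
        idx = bin_index(midpoints, x0, dx, nbins)
        fold = np.bincount(idx, minlength=nbins).astype(float)
        weights = 1.0 / np.maximum(fold[idx], 1.0)
        nt = traces.shape[1]
        image = np.zeros((nbins, nt))
        for w, i, trace in zip(weights, idx, traces):
            image[i] += w * trace
        return image

    # usage with random (irregular) midpoints and flat traces
    rng = np.random.default_rng(0)
    midpoints = rng.uniform(0.0, 1000.0, size=200)
    traces = np.ones((200, 50))
    img = equalize_then_image(traces, midpoints, x0=0.0, dx=25.0, nbins=40)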
Depth focusing analysis for 3-D migration velocity estimation (ps 234K) (src 203K)
Malcotti H. and Biondi B.
We present partial results of the implementation of
3-D depth focusing velocity analysis, based on 2-D and 3-D prestack
downward-continuation algorithms in the CMP domain. We discuss the
focusing analysis methodology for point diffractors in a constant-velocity
medium. In addition, we discuss the 2-D and 3-D kinematic bases of 3-D
focusing analysis and the implicit approximations of the
depth focusing analysis technique. We show the depth-error panels resulting from
using point diffractors and discuss the differences between the
depth-error gathers obtained with 2-D and 3-D downward-continuation
operators.
Traveltime computation with the linearized eikonal equation (ps 90K) (src 4061K)
Fomel S.
Traveltime computation is an important part of seismic imaging
algorithms. Conventional implementations of Kirchhoff migration
require precomputing traveltime tables or include traveltime
calculation in the innermost computational loop. The cost of
traveltime computations is especially noticeable in the case of 3-D
prestack imaging where the input data size increases the level of
nesting in computational loops.
The eikonal differential equation is the basic mathematical model,
...
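For reference, the eikonal equation for traveltime tau(x) in a medium of slowness s(x), together with one common linearization about a reference traveltime field tau_0 satisfying |grad tau_0|^2 = s_0^2; the notation is generic and not necessarily the paper's.

\[
|\nabla \tau|^2 = s^2(\mathbf{x}), \qquad
2\,\nabla\tau_0 \cdot \nabla\tau_1 \;\approx\; s^2 - s_0^2
\quad\text{for}\quad \tau = \tau_0 + \tau_1 ,
\]

where the quadratic term \(|\nabla\tau_1|^2\) has been dropped, leaving a linear equation for the traveltime perturbation \(\tau_1\).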
Diagonal weighting: An elementary challenge to mathematicians (ps 38K) (src 3K)
Claerbout J.
Geophysical mapping and imaging are applications where we seek
an approximate pseudo inverse of a matrix of very high order.
Say a linear operator constructs theoretical data
from model parameters.
Experience shows that the transpose of the simulation operator
...
Preconditioning and scaling (ps 38K) (src 3K)
Claerbout J.
In geophysical mapping and imaging applications we
set up linear equations of high order.
We face subjective issues
like how to scale components of an operator
and how much damping (regularization) to use.
Here I summarize a few scaling tricks.
Please keep in mind that
we do not have matrices (data structures) but operators,
i.e., function pairs for applying an operator
and its adjoint (transpose).
...
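As an illustration of the operator-as-function-pair convention (my example, not code from the paper), the sketch below defines a first-difference operator, its adjoint, and the standard dot-product test used to check that the pair is consistent.

    import numpy as np

    def forward(x):
        # First-difference operator: y[i] = x[i+1] - x[i].
        return x[1:] - x[:-1]

    def adjoint(y):
        # Adjoint (transpose) of the first difference.
        x = np.zeros(len(y) + 1)
        x[:-1] -= y
        x[1:] += y
        return x

    def dot_product_test(fwd, adj, n, rng=np.random.default_rng(0)):
        # Check <A x, y> == <x, A' y>, the standard way to verify that a
        # forward/adjoint function pair is consistent.
        x = rng.standard_normal(n)
        y = rng.standard_normal(len(fwd(x)))
        return np.dot(fwd(x), y), np.dot(x, adj(y))

    print(dot_product_test(forward, adjoint, 100))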
On model-space and data-space regularization: A tutorial (ps 775K) (src 16852K)
Fomel S.
Constraining ill-posed inverse problems often requires
regularized optimization. I describe two alternative approaches to
regularization. The first approach involves a column operator and an
extension of the data space. The second approach constructs a row
operator and expands the model space. In large-scale problems, when
the optimization is incomplete, the two methods of regularization
behave differently. I illustrate this fact with simple examples and
discuss its implications for geophysical problems.
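In a generic notation (my shorthand, not necessarily the paper's), with forward operator L, model m, data d, regularization operator D, preconditioner P = D^{-1}, and scale factor epsilon, the two formulations can be written as

\[
\text{model space:}\quad
\begin{bmatrix} \mathbf{L} \\ \varepsilon \mathbf{D} \end{bmatrix}\mathbf{m}
\approx \begin{bmatrix} \mathbf{d} \\ \mathbf{0} \end{bmatrix},
\qquad
\text{data space:}\quad
\begin{bmatrix} \mathbf{L}\mathbf{P} & \varepsilon \mathbf{I} \end{bmatrix}
\begin{bmatrix} \mathbf{p} \\ \mathbf{r} \end{bmatrix} = \mathbf{d},
\quad \mathbf{m} = \mathbf{P}\mathbf{p},
\]

where the first system (a column operator with an extended data space) is solved in the least-squares sense, and the second (a row operator with an expanded model space) is solved for the minimum-norm solution.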
On the general theory of data interpolation (ps 98K) (src 58K)
Fomel S.
Data interpolation is one of the most important tasks in
geophysical data processing. Its importance is increasing with the
development of 3-D seismics, since most of the modern 3-D
acquisition geometries carry non-uniform spatial distribution of
seismic records. Without a careful interpolation, acquisition
irregularities may lead to unwanted artifacts at the imaging step
(Chemingui and Biondi, 1996; Gardner and Canning, 1994).
...
Cross product operator detects plane reflectors (ps 126K) (src 1068K)
Schwab M.
I propose an estimation of the dip of a plane layer volume by
minimizing the output of a cross product differential expression.
This rather peculiar expression allows reliable estimates
in small expectation volumes and it is easily extended to
Prediction Error (PE) filters. I believe the method
yields reasonable dip estimates even for cases that only approximate
a plane layer volume (e.g., after the addition of noise). However,
the approach does not yield a straightforward technique
for subtracting the dominant plane-layered contribution from the
original image. The cross product expression
yields a vectorial, not a scalar output. The back-projection
of the vectorial output by the adjoint operation results in
a scalar function which does not have a simple, meaningful interpretation.
The usefulness of the cross product expression is uncertain.
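One way to read the cross-product idea (my interpretation, not necessarily the exact expression used in the paper): for a locally plane-layered image u, the gradient is parallel to the plane normal n, so the cross product of the gradient with n vanishes; minimizing the summed squared cross product over unit vectors n is equivalent to picking the dominant eigenvector of the local structure tensor.

    import numpy as np

    def dip_normal(volume):
        # Estimate the plane normal n minimizing sum ||grad(u) x n||^2 over a
        # small volume, i.e. the dominant eigenvector of the structure tensor
        # sum grad(u) grad(u)^T.
        gz, gy, gx = np.gradient(volume)
        g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])
        tensor = g @ g.T                      # 3x3 structure tensor
        w, v = np.linalg.eigh(tensor)
        return v[:, -1]                       # eigenvector of largest eigenvalue

    # toy example: dipping plane-layered volume u = f(z - 0.3*x - 0.1*y)
    nz, ny, nx = 30, 30, 30
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
    u = np.sin(0.4 * (z - 0.3 * x - 0.1 * y))
    print(dip_normal(u))   # roughly proportional to (-0.3, -0.1, 1), up to sign and scale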
Pre-whitening and coherency filtering (ps 407K) (src 640K)
Schwab M.
Ultimately, a seismic image serves interpreters as a means of building
a geological model of the subsurface. Automatic edge detection schemes
can help to produce images that emphasize critical geological features
such as faults and river channels.
Showing some striking coherency images of complex geological
structures, Bahorich and Farmer (1995)
brought the coherency attribute to the attention of the geophysical
...
Multiple suppression using prediction-error filter (ps 187K) (src 1351K)
Sun Y.
I present an approach to multiple suppression that is based on the difference
in moveout between primary and multiple events in the CMP gather.
After normal moveout correction, primary events will be horizontal,
whereas multiple events will not be.
For each NMO-corrected CMP gather, I reorder the
offsets randomly. Ideally, this process has little influence on the
primaries, but
it destroys the shape of the multiples. In other words, after randomization
of the offset order, the multiples appear as random noise. This ``man-made''
random noise can be removed using a prediction-error filter (PEF).
The randomization of the offset order can be regarded as a random process,
so we can apply it to the CMP gather many times and get many different
samples. All the samples can be arranged into a 3-D cube, which is further
divided into many small subcubes. A 3-D PEF can then be estimated from each
subcube and re-applied
to it to remove the multiple energy. After that, all the samples are averaged
back into one CMP gather, which is supposed to be free of multiple events.
To improve the efficiency of the PEF estimation, all subcubes except the
first (which starts from a zero-valued initial guess) take the previously
estimated PEF as their initial guess. The iteration count can therefore be
reduced to a single step for the subsequent subcubes with little loss of
accuracy.
Three examples demonstrate the performance of this new approach, especially in
removing the near-offset multiples.
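A minimal sketch of the randomize / filter / average loop follows. The denoising step is left as a placeholder standing in for the 3-D PEF estimation described above, and the filter is applied realization-by-realization rather than to the full 3-D cube, purely for brevity.

    import numpy as np

    def randomize_offsets(gather, rng):
        # Reorder traces (offset axis) at random; flat primaries are unaffected,
        # residual multiple moveout is scrambled into trace-to-trace "noise".
        perm = rng.permutation(gather.shape[1])
        return gather[:, perm], perm

    def suppress_multiples(gather, n_realizations=32, denoise=None, seed=0):
        # Toy version of the randomize / denoise / average loop. `denoise` stands
        # in for the 3-D PEF estimation and application; by default it is a
        # do-nothing placeholder (an assumption, not the paper's filter).
        rng = np.random.default_rng(seed)
        out = np.zeros_like(gather)
        for _ in range(n_realizations):
            randomized, perm = randomize_offsets(gather, rng)
            cleaned = denoise(randomized) if denoise else randomized
            # undo the permutation before averaging back into one gather
            restored = np.empty_like(cleaned)
            restored[:, perm] = cleaned
            out += restored
        return out / n_realizations

    # usage (hypothetical): gather is an NMO-corrected CMP gather (nt, noffsets)
    #   cleaned = suppress_multiples(gather, denoise=my_pef_denoise)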
Inverse NMO stack in depth-variable velocity (ps 237K) (src 1434K)
Sun Y.
Inverse NMO stack is a procedure that combines conventional NMO and stacking
into one step. By solving a set of simultaneous equations with optimization
methods such as conjugate gradients, inverse NMO stack tries to
find the most ``reasonable'' stack trace for a CMP gather.
Claerbout (1994) discusses inverse NMO stack in
constant velocity to illustrate how back projection can be upgraded towards
inversion. In this note, I extend his idea to the case of
depth-variable velocity.
...
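A toy version of the idea, assuming constant velocity and nearest-neighbor interpolation (the extension to depth-variable velocity discussed above is not reproduced here): the model is a single zero-offset trace, the forward operator sprays it along hyperbolic moveout to predict the gather, and a least-squares solver inverts for the most "reasonable" stack trace.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def inverse_nmo_stack(gather, offsets, velocity, dt, n_iter=20):
        # gather: (nt, nx) CMP gather; model: zero-offset trace m(t0) of length nt.
        nt, nx = gather.shape
        t0 = np.arange(nt) * dt
        tmap = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / velocity) ** 2)
        imap = np.rint(tmap / dt).astype(int)          # nearest data sample
        valid = imap < nt

        def forward(m):                                # model -> data (spray)
            m = np.ravel(m)
            d = np.zeros((nt, nx))
            for ix in range(nx):
                ok = valid[:, ix]
                np.add.at(d[:, ix], imap[ok, ix], m[ok])
            return d.ravel()

        def adjoint(dvec):                             # data -> model (gather)
            d = np.reshape(dvec, (nt, nx))
            m = np.zeros(nt)
            for ix in range(nx):
                ok = valid[:, ix]
                m[ok] += d[imap[ok, ix], ix]
            return m

        A = LinearOperator((nt * nx, nt), matvec=forward, rmatvec=adjoint)
        return lsqr(A, gather.ravel(), iter_lim=n_iter)[0]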
Attenuation of long period multiples (ps 813K) (src 1409K)
Holden T. C.
I present a multiple suppression method that utilizes inverse
beam stacking to model and remove long period multiples from CMP
gathers. The method is applied to synthetic data and is compared with
the Parabolic Radon Transform (PRT) method. The results are very good
except at the nearest offsets.
The changing face of deep reflection seismic profiling (ps 412K) (src 442K)
Long A.
The often complex and poor-quality seismic events resolved by deep
seismic profiling are generally not dissimilar to those found on
conventional exploration data. Historically however,
the processing technology used to image these data has been simplistic
compared with that used in exploration studies. This gap in processing technology
is generally not viewed as an urgent issue by the crustal geophysics community,
who are typically more concerned with broader geological issues.
Nevertheless, it would appear that at least on one front, things are going
to change. The future of deep seismic profiling
is clearly taking two separate paths. On the one hand, academic researchers are
beginning to take time out from their fervent acquisition
to review their techniques, their Earth geological models and their perceived future
challenges. On the other hand, efforts using deep seismic profiling to place
natural resources in a crustal-scale context are rapidly growing in strength, with
many such surveys being acquired.
These latter studies will become the technical leaders for deep seismic studies,
incorporating more advanced processing technologies better known in the
exploration industry. The results of these commercially-driven studies should then be
of great use to the academic community, providing many previously unattainable insights into
the Earth.
Analysis of Thomsen parameters for finely-layered VTI media (ps 83K) (src 212K)
Berryman J. G., Grechka V., and Berge P. A.
Since the work of Postma (1955) and Backus (1962), much has been learned
about elastic constants in vertical
transversely isotropic (VTI) media when the anisotropy is
due to fine layering of isotropic elastic materials.
Nevertheless, there has continued to be a degree of uncertainty about
the possible range of Thomsen's anisotropy parameters ε and δ
for such media.
We show that, for finely layered media having constant density, ε is confined
to a restricted range; smaller positive and all negative values of ε occur for
media with large fluctuations in the Lamé parameter λ. We show that δ can also
be either positive or negative, and that it satisfies a further bound for
constant-density media. Among all theoretically possible random media,
positive and negative δ are equally likely in finely layered
media limited to two types of constituent layers.
Layered media having large fluctuations in the Lamé parameter λ
are the ones most likely to have positive δ.
Since Gassmann's results for fluid-saturated porous media show
that the effects of fluids influence only the
Lamé constant λ, not the shear modulus μ, these results suggest that positive δ
occurring together with positive but small ε
may be indicative of changing fluid content in the layered earth.
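For reference, Thomsen's parameters for a VTI medium in terms of the elastic stiffnesses c_{ij} (standard definitions, quoted here for context):

\[
\varepsilon = \frac{c_{11}-c_{33}}{2\,c_{33}}, \qquad
\delta = \frac{(c_{13}+c_{44})^2-(c_{33}-c_{44})^2}{2\,c_{33}\,(c_{33}-c_{44})}.
\]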
Prestack time migration for anisotropic media (ps 2200K) (src 2304K)
Alkhalifah T.
Prestack phase-shift migration is implemented by evaluating
the offset-wavenumber (kh) integral using the
stationary-phase method. Thus, the stationary point along kh must be calculated prior
to applying the phase shift.
This type of implementation allows for migration of separate offsets, as opposed to
migration of the whole prestack data when using the original formulas.
For zero-offset data, the stationary
point (kh=0) is known in advance, and, therefore, the phase-shift migration can be implemented
directly. For non-zero-offset data, we first evaluate kh that corresponds
to the stationary point solution
either numerically or through analytical approximations.
The insensitivity of the phase to kh
around the stationary-point solution (its maximum) implies that even an imperfect kh obtained analytically can
go a long way toward producing an accurate image.
In transversely isotropic (TI) media, the analytical solutions of the stationary point (kh) include more
approximations than those corresponding to isotropic media (i.e., approximations corresponding to
weaker anisotropy). Nevertheless, the resultant equations, obtained using Shanks transforms,
produce accurate migration signatures for strong anisotropy (η = 0.3) and even large offset-to-depth ratios (>2).
The analytical solutions are particularly accurate in predicting the non-hyperbolic moveout
behavior associated with anisotropic media, a key ingredient to performing
an accurate non-hyperbolic moveout inversion for strongly anisotropic media.
Although the prestack correction achieved using the phase-shift method can also be obtained using
a cascade of normal-moveout correction, dip-moveout (DMO) correction, and zero-offset
time migration, the prestack approach can handle sharper velocity models more efficiently. In addition,
the resulting operator is cleaner than that obtained from the DMO method.
Synthetic and field data applications of the proposed prestack migration demonstrate its usefulness.
Seismic anisotropy in Trinidad: Processing and Interpretation (ps 7518K) (src 7543K)
Alkhalifah T. and Rampton D.
The lithology of offshore Trinidad is formed of alternating sequences
of sand- and shale-dominated layers. Average (effective) anisotropy is
much lower in Trinidad than in the previously studied area of
offshore Angola, owing to the large amount of
sand in the subsurface.
Nevertheless, accounting for anisotropy in seismic processing results
in improved imaging of
structural and stratigraphic features. The imaging improvement is
shown for two different lines from
that region. Inversion for an interval value of the
anisotropy parameter η suggests that low values are correlated
with sands (or any other isotropic material), while
high interval η values are correlated with shales.
Correlation between separate, independent measurements of η
across common midpoints (CMPs)
enhances the credibility of such
estimates as representations of real geologic parameters. Finally,
the interval η curve
agrees well with gamma-ray well-log measurements used as a shale
estimate. This result confirms the
hypothesis that anisotropy is due to shales in the subsurface, and the
inversion for interval η can subsequently be used to predict
lithology.
Residual migration in VTI media using anisotropy continuation (ps 117K) (src 67K)
Alkhalifah T. and Fomel S.
We introduce anisotropy continuation as a process which relates
changes in seismic images to perturbations in the anisotropic medium
parameters. This process is constrained by two kinematic equations,
one for perturbations in the normal-moveout (NMO) velocity and the
other for perturbations in the dimensionless anisotropy parameter
η. We consider separately the case of post-stack migration and
show that the kinematic equations in this case can be solved
explicitly by converting them to ordinary differential equations by
the method of characteristics. Comparing the results of kinematic
computations with synthetic numerical experiments confirms the
theoretical accuracy of the method.
Empowering SEP's documents (ps 227K) (src 759K)
Fomel S., Schwab M., and Schroeder J.
The arrival of LaTeX2e at SEP enhanced our LaTeX typesetting system
and led us to overhaul SEP's customized macros. The revised
system enables us to use the latex2html script
(Drakos, 1996) to publish our documents routinely on the Internet.
Additionally, we improved the communication between a document's
makefile and its corresponding LaTeX file. Finally, we replaced a
gigantic C-shell script (texpr) that governed SEP's entire
document processing with a set of small Perl scripts. These Perl
...
A seismic inversion library in Java (ps 186K) (src 246K)
Schwab M. and Schroeder J.
Jag is a Java library for numerical optimization of geophysical problems.
It shares the fundamental class hierarchy
with HCL, a C++ library. We found writing Java easier than writing
C++: garbage collection freed us from manual memory management and
pointer arithmetic, and Java gave us multiple inheritance of interfaces.
During development, we guided our design decisions by a small set
of research scenarios. We are confident Jag will excel
in prototyping solutions to geophysical inversion problems.
Furthermore, we are on the verge of delivering Jag results wrapped in
reproducible documents on the World Wide Web.
Unfortunately, Java's current performance is inferior even to that of C++, which
might restrict Jag to small- and medium-sized research projects.