Waveform inversion by one-way wavefield extrapolation (ps.gz 1686K) (pdf 1190K) (tgz 59 M)
Shragge J.
Forward modeling in frequency-domain waveform inversion is often implemented using finite-difference (FD) methods. However, the cost of FD modeling remains too expensive for typical 3D seismic data volumes. One-way wavefield extrapolation is an alternative forward-modeling strategy that is considerably cheaper to implement. This approach, though, comes with caveats that typically include lower accuracy at steep propagation angles in laterally varying media, difficulty in incorporating source radiation patterns, and an inability to propagate turning or multiply reflected waves. Each of these factors can play a role in determining the success or failure of a waveform inversion analysis. This study examines the potential for using one-way Riemannian wavefield extrapolation (RWE) operators in the forward-modeling component of frequency-domain waveform inversion. RWE modeling is carried out on computational meshes designed to conform to the general direction of turning-wave propagation, which enables the calculation of the direct arrivals, wide-angle reflections, and refractions important for a successful waveform inversion. The waveform inversion procedure otherwise closely resembles conventional frequency-domain approaches. Forward-modeling test results indicate that RWE waveforms match fairly well with those generated by FD modeling at wider offsets. Preliminary tests of an RWE waveform inversion scheme demonstrate its ability to invert FD-generated synthetic data for moderate (10%) 1D velocity perturbations.
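For reference, the conventional frequency-domain objective function that such a scheme minimizes (a standard form, assumed here rather than quoted from the paper) is
$$ J(\mathbf{m}) \;=\; \tfrac{1}{2}\sum_{\omega}\sum_{s,r}\left| u_{\mathrm{calc}}(\mathbf{x}_r,\mathbf{x}_s,\omega;\mathbf{m}) - u_{\mathrm{obs}}(\mathbf{x}_r,\mathbf{x}_s,\omega) \right|^2, $$
where $\mathbf{m}$ is the velocity model and the modeled wavefield $u_{\mathrm{calc}}$ is generated here by RWE rather than FD modeling.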
Interval velocity estimation through convex optimization (ps.gz 674K) (pdf 337K) (tgz 31M)
Witten B. and Grant M.
Convex optimization is an optimization technique that maximizes
efficiency by fully exploiting the convex structure of certain
problems. Here we test a convex optimization solver on a least-squares formulation
of the Dix equation. Convex optimization has many
useful traits, including the ability to set bounds on the solution,
which we explore here. This example also
serves as a test of the feasibility of convex optimization for
future, more expensive tomography problems.
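As a minimal illustration of the kind of bounded least-squares Dix problem described above (a sketch with assumed picks, bounds, and variable names, not the solver configuration used in the paper), the squared interval velocities can be treated as the unknowns of a small convex program, for example with the CVXPY package:

```python
import numpy as np
import cvxpy as cp

# Hypothetical picks: zero-offset times t (s) and RMS velocities v_rms (m/s).
t = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
v_rms = np.array([1800.0, 1950.0, 2100.0, 2200.0, 2350.0])

dt = np.diff(np.concatenate(([0.0], t)))        # interval traveltimes
d = v_rms**2 * t                                # Dix data: t_n * v_rms_n^2
C = np.tril(np.ones((t.size, t.size))) * dt     # causal integration operator

u = cp.Variable(t.size)                         # unknown squared interval velocities
objective = cp.Minimize(cp.sum_squares(C @ u - d))
constraints = [u >= 1400.0**2, u <= 4500.0**2]  # bounds on the solution (assumed values)
cp.Problem(objective, constraints).solve()

v_int = np.sqrt(u.value)                        # interval velocities (m/s)
print(v_int)
```

Without the bound constraints this reduces to an ordinary least-squares Dix inversion; the bounds are what the convex solver adds cheaply.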
Anisotropic velocity spectra from residual moveout transformation in anisotropic angle-domain common-image gathers: First results (ps.gz 0K) (pdf 0K) (tgz 0K)
Jousselin P.
We present the first results of a new method of estimating anisotropic migration velocities:
the computation of anisotropic velocity spectra from Residual Moveout (RMO)
transformation in anisotropic Angle-Domain Common-Image Gathers (ADCIGs). In the first
part of this paper, we formulate the estimation of the anisotropic parameters from the RMO
curves as an inverse problem. This analysis reveals how accurately we can estimate the
parameters, what trade-offs exist between them, and ultimately which iterative estimation
procedure we should use. In the second part, we compute the anisotropic velocity spectra of
synthetic data and estimate anisotropic migration velocities.
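In generic linearized inverse-theory terms (a standard formulation assumed here, not quoted from the paper), the trade-off analysis can be summarized as
$$ \Delta \mathbf{d} = \mathbf{G}\,\Delta \mathbf{m}, \qquad \widehat{\Delta \mathbf{m}} = (\mathbf{G}^{\mathsf T}\mathbf{G})^{-1}\mathbf{G}^{\mathsf T}\Delta \mathbf{d}, $$
where $\Delta\mathbf{d}$ contains the measured RMO, $\Delta\mathbf{m}$ the anisotropic parameter perturbations, and the conditioning and off-diagonal structure of $\mathbf{G}^{\mathsf T}\mathbf{G}$ indicate how accurately the parameters can be resolved and how strongly they trade off against one another.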
Image-space surface-related multiple prediction (ps.gz 13235K) (pdf 2604K) (tgz 31.6M)
Artman B. and Matson K.
The first step in surface-related multiple elimination (SRME) is
prediction of the multiples from the recorded seismic data. Adaptive or
pattern-based subtraction techniques are often then employed to eliminate
the multiples from the data to isolate primary reflections. The malleable
framework of shot-profile depth migration can be easily modified to
produce the migration of the conventional surface-related multiple
prediction (SRMP) during the course of a shot-profile migration
with the (small) additional cost of an extra imaging condition. There
may be several advantages to removing the multiples in the image space,
since the kinematics of events are simpler and the image-space volume
is smaller than the data-space volume used for subtraction.
Image-space multiple prediction (IS-SRMP) takes advantage of
the commutability of convolution and extrapolation. Casting
the multiple prediction problem in terms of a migration imaging
condition immediately suggests a deconvolutional variant. A
deconvolutional implementation (dividing the multiple model by a power
spectrum) increases the bandwidth of the IS-SRMP result to a range
similar to that of the conventional image, which aids in subtraction.
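A rough sketch of the stabilized power-spectrum division mentioned above (array names and stabilization are assumptions, not the paper's implementation):

```python
import numpy as np

def spectral_division(numerator, denominator, eps=1e-3):
    """Stabilized deconvolution: divide by a power spectrum instead of
    cross-correlating, which whitens (broadens) the output bandwidth."""
    power = np.abs(denominator) ** 2
    return numerator * np.conj(denominator) / (power + eps * power.max())

# Example usage on hypothetical frequency-domain panels (nfreq x ntrace):
# multiples_img = spectral_division(multiple_model_fd, data_fd)
```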
Simultaneous adaptive matching of primaries and multiples with non-stationary filters (ps.gz 2452K) (pdf 701K) (tgz 20.7M)
Alvarez G. and Guitton A.
We develop a method to match estimates of primaries
and multiples to data containing both. It works with prestack
data in either data space or image space and addresses the well-known
issue of cross-talk between the estimates of the primaries and the
multiples.
The method iteratively computes non-stationary filters with micro-patches,
and its cost is a negligible fraction of the cost of computing the
estimates of the primaries and multiples with SRME.
We show, with several synthetic and real data examples,
that the matched
estimates of both primaries and multiples are essentially free of cross-talk.
We also apply the method to the separation of ground-roll and body
waves and show that most residual ground-roll contaminating the estimate
of the body waves can be eliminated.
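A minimal single-patch sketch of the kind of least-squares matching filter such a method relies on (the non-stationarity comes from estimating a separate filter in each micro-patch; array names and filter length are assumptions):

```python
import numpy as np
from scipy.linalg import toeplitz

def matching_filter(model, data, nf=11):
    """Least-squares filter f such that (model * f) best matches data
    within one micro-patch (1D traces for simplicity)."""
    # Convolution matrix of the model trace, truncated to the data length.
    col = np.concatenate((model, np.zeros(nf - 1)))
    row = np.zeros(nf); row[0] = model[0]
    M = toeplitz(col, row)[: data.size, :]
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return f, M @ f          # filter and matched (adapted) estimate

# Usage idea: the matched estimates of primaries and multiples are
# subtracted from the data patch, one micro-patch at a time.
```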
Least-squares migration of incomplete data sets with regularization in the subsurface-offset domain (ps.gz 1516K) (pdf 849K) (tgz 3.3M)
Tang Y.
I present a method to address the migration artifacts caused by insufficient
offset coverage. I pose the migration as a least-squares inversion problem regularized
with the differential semblance operator, followed by enforcing a sparseness constraint
in the subsurface-offset domain. I demonstrate that adding these regularization terms suppresses the
amplitude smearing in the subsurface-offset domain and improves the resolution of the
migrated image. I test my methodology on both a synthetic two-layer data set and the
Marmousi data set.
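In generic notation (assumed here, not quoted from the paper), the regularized objective has the familiar form
$$ J(\mathbf{m}) \;=\; \|\mathbf{L}\mathbf{m} - \mathbf{d}\|_2^2 \;+\; \epsilon^2\,\|h\,\mathbf{m}(\mathbf{x},h)\|_2^2, $$
where $\mathbf{L}$ is the modeling/migration operator, $\mathbf{d}$ the recorded data, and the second term is the differential-semblance penalty that weights image energy by the subsurface offset $h$; the sparseness constraint then further focuses energy toward $h=0$.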
Covariance based interpolation (ps.gz 1077K) (pdf 405K) (tgz 10.8M)
Vyas M.
I test and extend an image interpolation algorithm designed for digital images to suit seismic data sets. The problem of data interpolation can be addressed by determining a filter based on global and local covariance estimates. Covariance estimates have enough information to discern the presence of sharp discontinuities (edges) without the need to explicitly determine the dips. The proposed approach has given encouraging results for a variety of textures and seismic data sets. However, when sampling is too coarse (aliasing), a proxy data set needs to be introduced as an intermediate step. In images with poor signal-to-noise ratio, covariance captures the trend of the signal as well as that of the noise; to handle such situations, a model-styling goal (regularization) is incorporated within the interpolation scheme. Various test cases are illustrated in this article, including one using post-stack 3D data from the Gulf of Mexico.
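A toy 1D sketch of the covariance-weighted interpolation idea (a simple-kriging-style estimate under an assumed Gaussian covariance model standing in for the local covariance estimates; not the algorithm of the paper):

```python
import numpy as np

def covariance_interp(x_known, y_known, x_query, length=1.0, sigma2=1.0):
    """Estimate missing samples as covariance-weighted combinations of
    known samples; the weights come from the covariance normal equations."""
    cov = lambda a, b: sigma2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    C = cov(x_known, x_known) + 1e-8 * np.eye(x_known.size)   # known-to-known covariance
    c0 = cov(x_known, np.atleast_1d(x_query))                 # known-to-target covariance
    w = np.linalg.solve(C, c0)                                # interpolation weights
    return (w * y_known[:, None]).sum(axis=0)

# Usage on hypothetical samples of a trace with a gap at x = 3:
x = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
y = np.sin(x)
print(covariance_interp(x, y, np.array([3.0])))
```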
Improved flattening with geological constraints (ps.gz 26K) (pdf 73K) (tgz 20.8M)
Lomask J.
A Discrete Cosine Transform (DCT) flattening method is used to precondition a constrained flattening method. In this method, the linear step of a Gauss-Newton flattening method with geological constraints is solved with preconditioned conjugate gradients. The preconditioner utilizes DCTs to invert a Laplacian operator. Memory and computational cost savings from the use of the DCT make this method the most efficient constrained flattening algorithm to date. A 3D faulted field data example is presented.
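A small sketch of a DCT-based Laplacian inverse of the kind used as a preconditioner (the standard cosine-transform Poisson solver with reflecting boundaries; grid and variable names are assumptions, not the paper's code):

```python
import numpy as np
from scipy.fft import dctn, idctn

def invert_laplacian(rhs):
    """Solve laplacian(u) = rhs on a regular grid with 2D DCTs, which
    diagonalize the 5-point Laplacian under Neumann (reflecting) boundaries."""
    ny, nx = rhs.shape
    ky = 2.0 * np.cos(np.pi * np.arange(ny) / ny) - 2.0   # Laplacian eigenvalues, y
    kx = 2.0 * np.cos(np.pi * np.arange(nx) / nx) - 2.0   # Laplacian eigenvalues, x
    denom = ky[:, None] + kx[None, :]
    denom[0, 0] = 1.0                      # avoid dividing by the zero eigenvalue
    u_hat = dctn(rhs, type=2, norm='ortho') / denom
    u_hat[0, 0] = 0.0                      # fix the undetermined constant (null space)
    return idctn(u_hat, type=2, norm='ortho')
```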
Data-fusion of volumes, visualization of paths, and revision of viewing sequences in Ricksep (ps.gz 3194K) (pdf 551K) (tgz 18.5M)
Chen D. M. and Clapp R. G.
Ricksep is a freely available interactive viewer for multi-dimensional cubes, capable of simultaneous display of multiple data sets from different viewing angles, animation of movement through the data space, and selection of local regions for data processing and information extraction. Several new features are added to Ricksep to enhance the program's functionality. First, two new data-fusion algorithms synthesize a new data set from two source data sets, one with mostly high-frequency content, such as seismic data, and the other with mostly low-frequency content, such as velocity data. Previously separated high- and low-frequency details can now be viewed together. Second, a new projection algorithm, integrated with Ricksep's point-picking capabilities, effectively displays arbitrary paths through the data space. The algorithm responds to path changes in real time, restores depth information lost through ordinary projection techniques, and supports the generation of multiple paths differentiated by point-picking symbols. Third, a viewing history list is maintained to enable Ricksep's users to edit and save a sequence of viewing states. The feature supports undoing and redoing of viewing commands and animation of viewing sequences, a generalization of the viewer's movie feature. A theoretical discussion and several examples using real seismic data show how the new features offer more convenient, accurate ways to manipulate multi-dimensional data sets.
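A minimal sketch of the high/low-frequency fusion idea (a simple Gaussian-filter split for illustration; the actual Ricksep algorithms and parameters are not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(seismic, velocity, sigma=8.0):
    """Combine the high-frequency part of a seismic volume with the
    low-frequency part of a velocity volume (arrays of equal shape)."""
    low = gaussian_filter(velocity, sigma)            # smooth background
    high = seismic - gaussian_filter(seismic, sigma)  # detail component
    high *= np.std(low) / (np.std(high) + 1e-12)      # balance the two parts
    return low + high
```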
Time domain passive seismic processing at Valhall (ps.gz 12013K) (pdf 3833K) (tgz 59.5M)
Artman B.
Passive recordings from February 2004 and January 2005, acquired by an
array of 2500 four-component receiver stations at the Valhall reservoir
in the Norwegian North Sea, were made available by BP. Analysis of some
of the raw hydrophone records shows that the bulk of the records do
not yield obvious, crisp events. Noise trains that resemble
approximately 7-second envelopes of a shot record, emanating from the
location of the production/drilling facilities, are observed
occasionally. Crisp water-velocity hyperbolic events, also centered at
the location of the platforms within the array, are sometimes seen.
Simple correlation does not yield interpretable synthesized shot
gathers from the raw data. Dividing the correlations by the power
spectrum of the traces correlated reveals several interesting
features. There are a few complete hyperbolas in the data that may
correspond to the desired events analogous to active seismic data.
Dominating the gathers, however, is a single event traveling at roughly
1450 m/s from a source probably 40 km to the west of the array.
The same event is present in both the 2004 and 2005 data. The event
locus is exactly over the Ardmore field in British waters, operated by
Tuscan Energy (Scotland) Limited.
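A sketch of the correlation-with-power-spectrum-division step described above (frequency-domain and stabilized; trace layout, the choice of summed power spectra, and epsilon are assumptions rather than the processing flow actually used):

```python
import numpy as np

def deconvolved_gather(traces, ref_index=0, eps=1e-3):
    """Synthesize a virtual shot gather from passive traces by
    cross-correlating with a reference trace and dividing by the
    power spectra of the traces being correlated."""
    F = np.fft.rfft(traces, axis=-1)                  # (ntrace, nfreq)
    ref = F[ref_index]
    power = np.abs(F) ** 2 + np.abs(ref) ** 2         # summed power of both traces
    xdec = F * np.conj(ref) / (power + eps * power.max())
    return np.fft.irfft(xdec, n=traces.shape[-1], axis=-1)
```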
Converted-wave common azimuth migration: Real data results (ps.gz 362K) (pdf 396K) (tgz 4.5M)
Rosales D. A., Clapp R. G., and Biondi B. L.
The converted-wave common-azimuth migration operator (PS-CAM)
and the single mode common-azimuth migration operator
share the advantage that they
need a 3-D prestack data cube with only four
dimensions, instead of the entire five dimensions.
The five dimensions can be reduced to
four through different processes.
This paper compares two images of the PS data set
from the OBS acquisition on the Alba oil field
in the North Sea. The two images correspond to the
following processes: 1. The common-azimuth
migration of the data regularized using Normal Moveout.
2. The common-azimuth migration of the data
regularized using PS Azimuth Moveout.
The final results show that the image from the
data regularized using PS Azimuth Moveout is
significantly better than the
image from the data regularized
using Normal Moveout.
Wave-equation inversion prestack Hessian (ps.gz 341K) (pdf 592K) (tgz 1.6M)
Valenciano A. A. and Biondi B.
The angle-domain Hessian can be computed from the subsurface-offset domain Hessian by an offset-to-angle transformation. An understanding of how the seismic energy is distributed with reflection angle can be gained by explicitly computing the angle-domain Hessian. The Hessian computation in a model with a single reflector under a Gaussian velocity anomaly (creating uneven illumination) illustrates the theory.
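In generic notation (assumed here, not quoted from the paper), the Hessian in question is that of the least-squares migration objective, and the offset-to-angle mapping acts on it from both sides:
$$ \mathbf{H} = \mathbf{L}^{\mathsf T}\mathbf{L}, \qquad \mathbf{H}_{\gamma} = \mathbf{T}\,\mathbf{H}_{h}\,\mathbf{T}^{\mathsf T}, $$
where $\mathbf{L}$ is the linearized (Born) modeling operator, $\mathbf{H}_{h}$ the Hessian computed in the subsurface-offset domain, and $\mathbf{T}$ the offset-to-angle transformation applied to image gathers.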
How much precision do we need in wave-equation based migration? (ps.gz 5187K) (pdf 1367K) (tgz 22.8M)
Clapp R. G.
To understand why we can reduce the precision of our input data
without meaningful loss in final image quality, it is important
to remember that migration is a summation along a surface in
multi-dimensional space.
Imagine the process of forming an image m at a given ix, iy, and iz.
Forming this one point in image space involves a summation over a
five-dimensional (t,hx,hy,mx,my) input space of the data
multiplied by the Green's function, ...
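A toy demonstration of this argument (illustrative only; array size and quantization step are assumptions): when a coherent contribution is summed over many input samples, the quantization error introduced by storing those samples at reduced precision stacks incoherently and becomes negligible relative to the stacked signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = 0.3 * np.ones(n)                      # coherent contribution along the surface
data = signal + rng.standard_normal(n)         # plus incoherent energy

step = 0.05                                    # crude quantization ~ low-precision storage
quantized = np.round(data / step) * step

per_sample_err = np.abs(quantized - data).max()             # roughly step / 2
stack_err = abs(quantized.mean() - data.mean()) / abs(signal.mean())
print(f"max per-sample error:       {per_sample_err:.3e}")
print(f"relative error after stack: {stack_err:.3e}")
```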
Optimized implicit finite-difference migration for TTI media (ps.gz 227K) (pdf 121K) (tgz 0.4M)
Shan G.
I develop an implicit finite-difference migration algorithm for tilted transversely isotropic (TTI) media.
I approximate the dispersion relation of TTI media with a rational function series,
whose coefficients are estimated by least-squares optimization.
The dispersion relation of TTI media is not a symmetric function,
so an odd rational function series is required in addition to the even one.
These coefficients are functions of Thomsen anisotropy parameters. They
are calculated and stored in a table before the wavefield extrapolation.
As in the isotropic and VTI cases, in 3D a phase-correction filter is applied after the finite-difference
operator to eliminate the numerical error caused by two-way splitting.
I generate impulse responses for this algorithm and compare them to those generated using the phase-shift method.
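As an illustration of the least-squares coefficient-fitting step (shown here for a single-term rational approximation of the simpler isotropic dispersion relation, not the TTI series of the paper; names and ranges are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

# Isotropic one-way dispersion relation kz/k = sqrt(1 - s^2), with s = kx/k.
s = np.linspace(0.0, 0.8, 200)
exact = np.sqrt(1.0 - s**2)

# One-term rational approximation: 1 - a*s^2 / (1 - b*s^2).
def residual(p):
    a, b = p
    return 1.0 - a * s**2 / (1.0 - b * s**2) - exact

a, b = least_squares(residual, x0=[0.5, 0.25]).x   # start from the Pade coefficients
print(f"optimized coefficients: a={a:.4f}, b={b:.4f}")
print(f"max error over the fitted range: {np.abs(residual([a, b])).max():.2e}")
```

The least-squares fit trades a small error at low propagation angles for better accuracy over the whole fitted range, which is the same idea applied to the TTI dispersion relation in the paper.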