Delayed-shot migration in TEC coordinates [pdf 2.9M][source] Jeff Shragge This paper extends the analytical Riemannian wavefield extrapolation (RWE) approach to 3D coordinate systems. I formulate an inline delayed-shot migration procedure in tilted elliptical-cylindrical (TEC) coordinate systems. When inline coordinate tilt angles are well-matched to the inline source ray parameters, the TEC coordinate extension affords accurate propagation of both steep-dip and turning-wave components. I show that wavefield extrapolation in TEC coordinates is no more complicated than propagation in elliptically anisotropic media. Impulse response tests illustrate the accuracy and lack of numerical anisotropy of the implemented scheme. I apply this approach to a realistic 3D wide-azimuth synthetic derived from a field Gulf of Mexico data set. The resulting images demonstrate the imaging advantages made possible through 3D RWE implementations, including the improved imaging of steeply dipping salt flanks, potentially at a reduced computational cost. Narrow-azimuth migration results demonstrate the applicability of the approach to typical Gulf of Mexico field data.
Reverse time migration with random boundaries [pdf 536K][source] Robert G. Clapp
Kinematics in iterated correlations of a passive acoustic experiment [pdf 1.3M][source] Sjoerd de Ridder and George Papanicolaou Correlating ambient seismic noise can yield the inter-station Green's function, but only if the energy that is excited by seismic background sources is sufficiently equipartitioned after averaging over all sources. If this requirement is not fulfilled, the reconstructed Green's function is imperfect. Secondary scattering can mitigate the directivity of the primary wave field emitted by the sources. To extract and utilize secondary scattering for Green's function reconstruction, we introduce a second correlation using an auxiliary station. We investigate the kinematics of the reconstructed Green's functions to understand the role of the positions of the source, scatterer, and auxiliary stations. Iterated correlations can use secondary scattering to mitigate the directivity in the background seismic wave field. In general, there will be additional spurious events in the retrieved Green's functions. Averaging the results of several sources and using a network of randomly distributed auxiliary stations can minimize these spurious events with respect to the correct events in the retrieved Green's functions.
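The kinematics of the iterated correlation can be illustrated with a toy numpy sketch. All station names, traveltimes, and the single-impulse traces below are invented for illustration, not taken from the paper: correlating stations A and B peaks at their traveltime difference, and a second correlation through an auxiliary station C recovers the same inter-station lag because the auxiliary traveltime cancels.

```python
import numpy as np

def xcorr(a, b):
    # full cross-correlation of equal-length inputs;
    # output index k corresponds to lag k - (len(b) - 1)
    return np.correlate(a, b, mode="full")

def peak_lag(c, m):
    # lag (in samples) of the largest correlation value,
    # for inputs of length m
    return int(np.argmax(c)) - (m - 1)

# Hypothetical impulsive traveltimes (in samples) from one noise
# source to stations A, B and an auxiliary station C.
n, t_a, t_b, t_c = 256, 40, 70, 100
rec = {}
for name, t in [("A", t_a), ("B", t_b), ("C", t_c)]:
    tr = np.zeros(n)
    tr[t] = 1.0
    rec[name] = tr

# First correlation: A with B peaks at the inter-station lag
# t_a - t_b, the kinematic content of the retrieved Green's function.
c_ab = xcorr(rec["A"], rec["B"])
lag_ab = peak_lag(c_ab, n)

# Iterated correlation: correlate (A x C) with (B x C); the auxiliary
# traveltime t_c cancels, so the peak again sits at t_a - t_b.
c_ac = xcorr(rec["A"], rec["C"])
c_bc = xcorr(rec["B"], rec["C"])
c2 = xcorr(c_ac, c_bc)
lag_iter = peak_lag(c2, len(c_bc))
```

This single-source sketch only shows why the auxiliary station's position drops out of the recovered lag; the spurious events and source averaging discussed in the abstract appear once multiple sources and scatterers are present.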
Velocity model building
Measuring image focusing for velocity analysis [pdf 1.0M][source] Biondo Biondi I present a method for extracting velocity information by measuring the focusing and defocusing of migrated images. It measures image focusing by evaluating coherency across structural dips, in addition to coherency across aperture/azimuth angles. The inherent ambiguity between velocity and reflectors' curvature is directly tackled by introducing a curvature correction into the computation of the semblance functional that estimates image coherency. The resulting velocity estimator provides velocity estimates that are: 1) unbiased by reflectors' curvature, and 2) consistent with the velocity information that we routinely gather by measuring coherency over aperture/azimuth angles. The application of the method to a 2D synthetic data set and a 2D field data set confirms that it provides consistent and unbiased velocity information. It also suggests that velocity estimates based on the new image-focusing semblance may be more robust and have higher resolution than estimates based on conventional semblance functionals. Preliminary tests on two 2D zero-offset synthetic data sets show that velocity information can be extracted from zero-offset data in the presence of reflectors with arbitrary curvature, and not only in the presence of point diffractors as previously published.
Attribute combinations for image segmentation [pdf 856K][source] Adam Halpert and Robert G. Clapp Seismic image segmentation relies upon attributes calculated from seismic data, but a single attribute (usually amplitude) is not always sufficient to produce an accurate result. Therefore, a combination of information from different attributes should lead to an improved segmentation outcome. This paper explores opportunities for combining attribute information at three different stages: before segmentation (by multiplying attribute volumes), after the eigenvector calculation (via a linear combination of individual eigenvectors), and after individual boundaries have been drawn (by using uncertainty calculations to extract the best elements of individual boundaries). Overall, a method that uses uncertainty calculations to determine weights for the eigenvector linear combination produces satisfactory results, while avoiding potential drawbacks of other methods. This method produces promising results when tested on field data in both two and three dimensions.
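The three combination stages can be sketched with numpy. This is a minimal illustration, not the paper's implementation: the attribute volumes, eigenvectors, and per-attribute uncertainties below are invented, and the uncertainty-derived weights simply give more influence to the less uncertain attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical normalized attribute volumes (e.g. amplitude and
# semblance). Stage 1: combine before segmentation by element-wise
# multiplication of the attribute volumes.
amp = rng.random((4, 4))
sem = rng.random((4, 4))
pre_combined = amp * sem

# Stage 2: after the eigenvector calculation, combine the per-attribute
# eigenvectors linearly. Here the weights come from assumed per-attribute
# uncertainties: lower uncertainty -> larger weight.
v_amp = rng.standard_normal(16)   # stand-in eigenvector from amplitude
v_sem = rng.standard_normal(16)   # stand-in eigenvector from semblance
unc = np.array([0.2, 0.5])        # assumed uncertainty per attribute
w = (1.0 / unc) / np.sum(1.0 / unc)
v_combined = w[0] * v_amp + w[1] * v_sem
```

Stage 3 (merging individually drawn boundaries by their uncertainties) operates on picked surfaces rather than arrays and is omitted here.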
Wave-equation tomography using image-space phase-encoded data [pdf 584K][source] Claudio Guerra and Yaxun Tang and Biondo Biondi Wave-equation tomography in the image space is a powerful technique that promises to yield more reliable velocity models than ray-based migration velocity analysis in areas of complex overburden. Its practical use, however, has been limited because of the high computational cost. Applying a target-oriented approach and using data reduction can make wave-equation tomography in the image space practical. Here, we present results of applying image-space wave-equation tomography in the generalized source domain, where a small number of synthesized shot gathers are generated. Specifically, we generate synthesized shot gathers by image-space phase encoding. This technique can also be used in a target-oriented way. Comparing the gradients of the tomography objective functional obtained using image-space encoded gathers with those obtained using the original shot gathers shows that the encoded shot gathers can be used in wave-equation tomography problems. Velocity inversion using image-space phase-encoded gathers converges to reasonable results when compared to the correct velocity model. We illustrate our method by applying it to the Marmousi model.
Seismic tomography with co-located soft data [pdf 1.6M][source] Mohammad Maysami and Robert G. Clapp There is a wide range of uncertainties present in seismic data. Limited subsurface illumination is also common, especially in areas with salt structures. These shortcomings are only a few of the many reasons that make seismic tomography an under-determined problem with a large null space. We can use additional information to reduce the uncertainty and constrain this large null space. The additional information, also known as co-located soft (secondary) data, can be the result of integrating non-seismic data from the same subsurface area. A measure of structural similarity between the two given data fields can create a link between the different types of data. We use cross-gradient functions to incorporate this structural information, given by secondary data, into the inverse problem as a constraint.
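The cross-gradient function itself is compact enough to sketch. This is a generic illustration of the constraint, not the paper's code: for 2D model slices it is the cross product of the two gradient fields, which vanishes wherever the fields are structurally aligned (parallel gradients).

```python
import numpy as np

def cross_gradient(m1, m2):
    """Cross-gradient t = grad(m1) x grad(m2) for 2D models.
    t is zero wherever the two fields share the same structure,
    i.e. their gradients are parallel (or one gradient vanishes)."""
    g1y, g1x = np.gradient(m1)
    g2y, g2x = np.gradient(m2)
    return g1x * g2y - g1y * g2x

# A field and a linearly rescaled copy are structurally identical:
# their gradients stay parallel, so the cross-gradient constraint
# t = 0 is satisfied everywhere.
rng = np.random.default_rng(0)
m = rng.random((8, 8))
t = cross_gradient(m, 3.0 * m + 1.0)
```

In a joint inversion this quantity is driven toward zero as a constraint, tying the seismic model to the co-located secondary data without forcing the two property fields to be numerically equal.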
Automatic velocity picking by simulated annealing [pdf 288K][source] Yunyue (Elita) Li and Biondo Biondi Manual velocity picking is an inevitable and tedious process in the petroleum industry. An ideal velocity model is both geologically significant and geophysically smooth. Velocity picking can be phrased as a nonlinear optimization problem with multiple contradictory objectives. In this paper, we develop an automatic velocity picking technique based on the simulated annealing (SA) algorithm. Accuracy and smoothness of the velocity model are used as objective functions. To improve the convergence of the algorithm, we include prior knowledge of the velocity model in the initialization and the constraints. The algorithm is adapted for this problem and demonstrated using a 2-D field example.
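The competing accuracy/smoothness objectives and the annealing loop can be sketched minimally. Everything below is an invented stand-in for the paper's formulation: the "picks" are synthetic, the objective is a simple misfit-plus-roughness sum, and the cooling schedule is linear.

```python
import numpy as np

def objective(v, picks, lam):
    # accuracy term (fit to the picks) + smoothness term (roughness
    # penalty); lam trades the two contradictory objectives
    misfit = np.sum((v - picks) ** 2)
    rough = np.sum(np.diff(v) ** 2)
    return misfit + lam * rough

def anneal(picks, lam=1.0, n_iter=4000, t0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    v = picks.copy()                 # prior knowledge: start from picks
    e = objective(v, picks, lam)
    v_best, e_best = v.copy(), e
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9   # linear cooling schedule
        i = rng.integers(len(v))
        trial = v.copy()
        trial[i] += rng.normal(scale=0.05)   # perturb one velocity
        e_trial = objective(trial, picks, lam)
        # accept downhill moves always, uphill with Boltzmann probability
        if e_trial < e or rng.random() < np.exp(-(e_trial - e) / t):
            v, e = trial, e_trial
            if e < e_best:
                v_best, e_best = v.copy(), e
    return v_best, e_best

# Noisy hypothetical picks of a smoothly increasing velocity profile
picks = np.linspace(1.5, 3.0, 20)
picks += np.random.default_rng(1).normal(scale=0.1, size=20)
v_sa, e_sa = anneal(picks)
```

The occasional acceptance of uphill moves is what lets SA escape the local minima that make multi-objective velocity picking hard for purely greedy methods.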
Hessian based inversion
Least-squares migration/inversion of blended data [pdf 3.6M][source] Yaxun Tang and Biondo Biondi We present a method based on least-squares migration/inversion to directly image data collected from recently developed wide-azimuth acquisition geometries, such as simultaneous shooting and continuous shooting, where two or more shot records are often blended together. We show that by using least-squares migration/inversion, we not only enhance the resolution of the image, but more importantly, we also suppress the crosstalk or acquisition footprint, without any pre-separation of the blended data. We demonstrate the concept and methodology in 2-D and apply the data-space inversion scheme to the Marmousi model, where an optimally reconstructed image, free from crosstalk artifacts, is obtained.
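The least-squares machinery can be illustrated with a toy linear problem. This is a generic sketch, not the paper's operators: two random matrices stand in for the modeling operators of two blended shot records, blending sums their data, and conjugate gradients on the normal equations inverts the blended operator directly, with no pre-separation step.

```python
import numpy as np

def cg_normal(A, d, n_iter=100, tol=1e-12):
    """Conjugate gradients on the normal equations A^T A m = A^T d,
    i.e. the least-squares solution of A m ~ d."""
    m = np.zeros(A.shape[1])
    r = A.T @ d                   # normal-equation residual at m = 0
    p = r.copy()
    rr = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = rr / (p @ Ap)
        m = m + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if rr_new < tol:          # converged: stop early
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return m

# Stand-ins for the modeling operators of two shot records; blending
# sums their data, so the recorded blend is (L1 + L2) m.
rng = np.random.default_rng(0)
L1 = rng.standard_normal((30, 10))
L2 = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
d_blend = L1 @ m_true + L2 @ m_true

# Invert the blended operator directly -- no pre-separation of shots.
m_inv = cg_normal(L1 + L2, d_blend)
```

With an overdetermined, full-rank blended operator the least-squares solution recovers the model; in the seismic setting the same inversion framework is what suppresses the crosstalk that plain migration of blended data leaves behind.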
Joint inversion of simultaneous source time-lapse seismic data sets [pdf 2.9M][source] Gboyega Ayeni and Yaxun Tang and Biondo Biondi
Target-oriented least-squares migration/inversion with sparseness constraints [pdf 2.4M][source] Yaxun Tang I pose the seismic imaging problem as an inverse problem and present a regularized inversion scheme that tries to overcome three main practical issues with the standard least-squares migration/inversion (LSI) approach: the high computational cost, the operator mismatch, and the poorly constrained solution due to a limited surface acquisition geometry. I show that the computational cost is considerably reduced by formulating the LSI problem in a target-oriented fashion and computing a truncated Hessian operator using the phase-encoding method. The second and third issues are mitigated by introducing a non-quadratic regularization operator that imposes sparseness on the model parameters. Numerical examples on the Marmousi model show that the sparseness constraint has the potential to effectively reduce the null space and produce an image with high resolution, but it also has the risk of over-penalizing weak reflections.
Target-oriented joint inversion of incomplete time-lapse seismic data sets [pdf 840K][source] Gboyega Ayeni and Biondo Biondi We propose a joint inversion method, based on linear least-squares wave-equation inversion, for imaging incomplete time-lapse seismic data sets. Such data sets can arise from the presence of production facilities or from intentional sparse sampling. These data sets generate undesirable artifacts that degrade the quality of time-lapse seismic images, making them unreliable indicators of production-related changes in reservoir properties. To solve this problem, we pose time-lapse imaging as a joint linear inverse problem that utilizes concatenations of target-oriented approximations to the least-squares imaging Hessian. Using a subset of the 2D Marmousi model, we show that the proposed method gives reliable time-lapse seismic images from incomplete seismic data sets.
Near surface velocity estimation using early-arrival waveform inversion constrained by residual statics [pdf 384K][source] Xukai Shen
Seismic tests at Southern Ute Nation coal fire site [pdf 2.2M][source] Sjoerd de Ridder and Seth S. Haines We conducted a near surface seismic test at the Southern Ute Nation coal fire site near Durango, CO. The goal was to characterize and image the coal fire and to help plan any future surveys. We collected data along two transects. Data from Line 1, which overlies unburned coal, shows useful frequency content above Hz and a reflection that we interpret to originate at approximately 11 m depth. Data from Line 2, which crosses the burn front and many fissures, is of lower quality, with predominantly jumbled arrivals and some evidence of reflected energy at one or two shot points. It seems that neither refractions nor reflections image down to the coal layer; in part this is attributed to the presence of unexpected high-velocity layers overlying the coal. The consequence is that possible information about the coal is hidden behind the events from shallow layers. Based on these data, we suggest that further seismic work at the site is unlikely to successfully characterize the coal fire zone of interest.
Source signature and static shifts estimation for multi-component ocean bottom data [pdf 452K][source] Mandy Wong and Shuki Ronen We present an interpretive study to estimate the source signature and source statics of a field ocean bottom dataset. We use the down-going direct arrival to extract the source signature at different offsets. The down-going wavefield is obtained from a simple summation of the pressure (P) and the vertical particle velocity (Z) of the multi-component data. Such a summation is scaled by a factor that depends on offset and is estimated directly from the amplitude of the P and Z values in the domain. In addition, we compare two approaches to estimating the source-side static shifts. Our static-shift estimates give satisfactory results for absolute offsets up to meters.
Effective medium theory for elastic composites [pdf 288K][source] James G. Berryman The theoretical foundation of a variant on effective medium theories for elastic constants of composites is presented and discussed. The connection between this approach and the methods of Zeller and Dederichs, Korringa, and Gubernatis and Krumhansl is elucidated. A review of the known relationships between the various effective medium theories and rigorous bounding methods for elastic constants is also provided.
Inversion of up and down going signal for ocean bottom data [pdf 220K][source] Mandy Wong and Biondo L. Biondi and Shuki Ronen We formulate an inversion problem using the up- and down-going signals of ocean bottom data to image primaries and multiples. The method involves separating pressure and vertical particle velocity data into up- and down-going components. Afterward, the up- and down-going data can be used for inversion with appropriate modeling operators. To a first-order accuracy, we use mirror imaging to define the up- and down-going modeling operator. A complete modeling scheme can be defined by the composition of the over-under modeling operator and the up-down decomposition operator. This scheme effectively models all primaries, water reverberations and multiples.
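The up-down decomposition step can be sketched for the idealized vertical-incidence case. This is a simplified illustration under one common sign convention, with the offset-dependent scaling mentioned in the companion abstract assumed already applied; it is not the paper's full decomposition operator.

```python
import numpy as np

# Idealized colocated pressure (P) and scaled vertical-velocity (Z)
# traces: with up-going u and down-going d wavefields satisfying
#   P = u + d   and   Z = d - u   (one common sign convention),
# simple sum/difference recovers the two components:
#   down = (P + Z) / 2,   up = (P - Z) / 2.
rng = np.random.default_rng(0)
up_true = rng.standard_normal(128)
down_true = rng.standard_normal(128)
P = up_true + down_true
Z = down_true - up_true

down = 0.5 * (P + Z)   # down-going component (used for mirror imaging)
up = 0.5 * (P - Z)     # up-going component
```

In the inversion scheme these decomposed components feed the up- and down-going modeling operators, so both primaries and water-column multiples contribute signal rather than noise.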
Performance of RTM with ODCIGs computation fully offloaded to GPU [pdf 632K][source] Abdullah Al Theyab and Robert G. Clapp
Seismic imaging using GPGPU accelerated reverse time migration [pdf 2.3M][source] Nader Moussa In this report, I outline the implementation and preliminary benchmarking of a parallelized program to perform reverse time migration (RTM) seismic imaging using the Nvidia CUDA platform for scientific computing, accelerated by a general purpose graphics processing unit (GPGPU). This software architecture provides access to the massively parallel computational capabilities of a high performance GPU system, which is used instead of a conventional computer architecture because of its high numeric throughput.
The key aspects of this research concern the hardware setup for an optimized GPGPU computer system, and investigations into coarse-grained, algorithm-level parallelism. I also perform some analysis at the level of the numerical solver for the Finite-Difference Time Domain (FDTD) wave propagation kernel. This paper demonstrates that the GPGPU platform is very effective at accelerating RTM, enabling more advanced processing and better imaging results.
Accelerating 3D convolution using streaming architectures on FPGAs [pdf 176K][source] Haohuan Fu and Robert G. Clapp and Oskar Mencer and Oliver Pell We investigate FPGA architectures for accelerating applications whose dominant cost is 3D convolution, such as modeling and Reverse Time Migration (RTM). We explore different design options, such as using different stencils, fitting multiple stencil operators into the FPGA, processing multiple time steps in one pass, and customizing the computation precisions. The exploration reveals constraints and tradeoffs between different design parameters and metrics. The experiment results show that the FPGA streaming architecture provides great potential for accelerating 3D convolution, and can achieve up to two orders of magnitude speedup.
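The kernel at the heart of both papers above is a 3D stencil sweep. As a reference point for what the FPGA (or GPU) streams through the volume, here is a minimal numpy version of the simplest such stencil, the second-order 7-point Laplacian; the real designs use wider stencils and fuse multiple time steps, which this sketch does not attempt.

```python
import numpy as np

def laplacian_3d(u):
    """Apply the 7-point second-order Laplacian stencil to the
    interior of a 3D grid (unit spacing); boundary points are left
    at zero. This is the innermost kernel of FDTD wave propagation."""
    out = np.zeros_like(u)
    out[1:-1, 1:-1, 1:-1] = (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
        + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
        + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
        - 6.0 * u[1:-1, 1:-1, 1:-1]
    )
    return out

# Sanity check: for u(i,j,k) = i**2 the discrete Laplacian is exactly
# 2 at every interior point (second difference of i**2 is 2).
i = np.arange(6.0)
u = np.tile((i ** 2)[:, None, None], (1, 6, 6))
lap = laplacian_3d(u)
```

Each output sample touches seven inputs, so throughput is bandwidth-bound on conventional hardware; the streaming FPGA architecture wins by keeping the stencil's working set in on-chip line buffers instead of re-reading it from memory.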
Short note: SEP data catalog [pdf 384K][source] Abdullah Al Theyab and Gboyega Ayeni and Yunyue (Elita) Li
Visualization and data reordering using EcoRAM [pdf 108K][source] Robert G. Clapp Memory size and input/output (IO) performance have not kept pace with the ever-increasing size of seismic data volumes. Processing steps that involve random, or pseudo-random, access to data (such as visualizing, sorting, and transposing) further degrade performance. I use Spansion's EcoRAM to replace out-of-core visualization and data transpose schemes. I show between one and two orders of magnitude improvement in performance over conventional out-of-core solutions.