SEP136 -- TABLE OF CONTENTS

We extend the theory of image-space wave-equation tomography to the generalized source domain, where a smaller number of synthesized shot gathers are generated either by data-space phase encoding or by image-space phase encoding. We demonstrate how to evaluate the wave-equation forward tomographic operator and its adjoint in this new domain. We compare the gradients of the tomography objective functional obtained using data-space and image-space encoded gathers with those obtained using the original shot gathers. We show that with these encoded shot gathers we can obtain a gradient similar to that computed in the original shot-profile domain, but at lower computational cost. This cost saving is important for putting the theory into practical application. We illustrate the method on a simple model with Gaussian anomalies in the subsurface.
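As a toy illustration of why encoded gathers can reproduce the shot-profile gradient, the sketch below (with random complex vectors standing in for modeled source and receiver wavefields; all sizes hypothetical) shows that the crosstalk introduced by random phase encoding has zero mean, so the encoded result converges to the shot-by-shot sum:

```python
import numpy as np

rng = np.random.default_rng(0)
ns, n = 8, 64                      # hypothetical: 8 shots, 64 image points
S = rng.normal(size=(ns, n)) + 1j * rng.normal(size=(ns, n))  # source wavefields
R = rng.normal(size=(ns, n)) + 1j * rng.normal(size=(ns, n))  # receiver wavefields

# Shot-profile "image": sum over shots of conj(source) * receiver wavefield.
image_true = np.sum(np.conj(S) * R, axis=0)

def encoded_image(nreal):
    """Average the images from nreal random-phase-encoded supershots."""
    acc = np.zeros(n, dtype=complex)
    for _ in range(nreal):
        phase = np.exp(2j * np.pi * rng.random(ns))  # one random phase per shot
        s_sup = phase @ S                            # encoded source supershot
        r_sup = phase @ R                            # encoded receiver supershot
        acc += np.conj(s_sup) * r_sup
    return acc / nreal

# Crosstalk terms carry random relative phases with zero mean, so the
# encoded image approaches the true one as realizations are averaged.
err1   = np.linalg.norm(encoded_image(1)   - image_true)
err500 = np.linalg.norm(encoded_image(500) - image_true)
```

The point of the sketch: one encoded experiment costs one migration instead of `ns`, at the price of crosstalk that decays as realizations are averaged.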

Phase encoding with Gold codes for wave-equation migration [pdf 408K][source]

Prestack exploding-reflector modeling aims to synthesize a small dataset composed of areal shots, while preserving the correct kinematics for use in iterations of migration velocity analysis. To achieve this goal, the amount of data is reduced by combining the modeled areal data into sets we call super-areal data. However, crosstalk arises during migration due to the correlation of wavefields resulting from different modeling experiments. Phase encoding the modeling experiments can attenuate this crosstalk during migration. In the geophysical community, the most commonly used phase-encoding schemes are plane-wave phase encoding and random-phase encoding. Here, we apply Gold codes, commonly used in the wireless communication, radar, and medical imaging communities, to phase encode the data. We show that adequately selecting the Gold codes can potentially shift the crosstalk out of the migration domain, or out of the region of interest if a target-oriented approach is used, yielding an image free of crosstalk.
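As a sketch of the encoding ingredient only (not the migration itself): Gold codes are built by XOR-ing a preferred pair of maximum-length LFSR sequences. For degree 5, a classic preferred pair is x^5+x^2+1 and x^5+x^4+x^3+x^2+1, giving length-31 codes whose periodic cross-correlations take only three values, {-9, -1, 7}:

```python
import numpy as np

def mseq(poly_taps, degree, seed=1):
    """m-sequence from the recurrence s[k+degree] = XOR of s[k+t], t in poly_taps."""
    state = [(seed >> i) & 1 for i in range(degree)]
    out = []
    for _ in range(2**degree - 1):          # one full period
        out.append(state[0])
        fb = 0
        for t in poly_taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return np.array(out)

u = mseq([0, 2], 5)            # x^5 + x^2 + 1
v = mseq([0, 2, 3, 4], 5)      # x^5 + x^4 + x^3 + x^2 + 1

# A Gold code is u XOR (a cyclic shift of v); each shift gives another code.
gold = [np.bitwise_xor(u, np.roll(v, k)) for k in range(31)]

# Map {0,1} -> {+1,-1} and inspect the periodic cross-correlation of two codes.
a, b = 1 - 2 * gold[0], 1 - 2 * gold[1]
xcorr = [int(np.sum(a * np.roll(b, k))) for k in range(31)]
# For a preferred pair of degree 5, xcorr takes values only in {-9, -1, 7}.
```

The bounded, three-valued cross-correlation is what keeps the crosstalk between differently encoded experiments predictable and small.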

An image-focusing semblance functional for velocity analysis [pdf 340K][source]

Analyzing the focusing and defocusing of migrated images provides valuable velocity information that can supplement the velocity information routinely extracted from migrated common-image gathers. However, whereas qualitative focusing analysis is readily performed on ensembles of images generated by prestack residual migration, quantitative focusing analysis remains a challenge. I use two simple synthetic-data examples to show that the maximization of a minimum-entropy norm, a commonly used measure of image focusing, yields accurate estimates for diffracted events, but it can be misleading in the presence of continuous but curved reflectors. I propose to measure image focusing by computing coherency across structural dips, in addition to coherency across aperture/azimuth angles. Images can be efficiently decomposed according to structural dips during residual migration. I introduce a semblance functional to measure image coherency simultaneously across the aperture/azimuth angles and the dip angles. Using 2D synthetic data examples, I show that the simultaneous evaluation of semblance across aperture/azimuth angles and dip angles can be effective in quantitatively measuring image focusing while avoiding the biases induced by reflector curvature.
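A minimal sketch of the coherency measure (the gather construction and dimensions here are hypothetical): conventional semblance, evaluated jointly over the angle and dip axes of a common-image-point gather, equals 1 for a perfectly flat (focused) gather and decreases as the gather defocuses:

```python
import numpy as np

def semblance(gather, axis=-1):
    """Semblance = stack power / (N * total power), in [0, 1]; it equals 1
    when all samples along `axis` are identical (a perfectly flat gather)."""
    n = gather.shape[axis]
    num = np.sum(gather, axis=axis) ** 2
    den = n * np.sum(gather ** 2, axis=axis)
    return np.sum(num) / np.sum(den)

# Hypothetical common-image point: amplitude as a function of aperture angle
# and structural dip (21 angles x 21 dips), for one depth sample.
rng = np.random.default_rng(1)
flat = np.ones((21, 21))                                       # correct velocity
defocused = np.ones((21, 21)) + 0.8 * rng.normal(size=(21, 21))  # wrong velocity

# Evaluate coherency simultaneously across angle and dip by flattening both axes.
s_focused = semblance(flat.reshape(1, -1))
s_defocused = semblance(defocused.reshape(1, -1))
```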

Migration velocity analysis with cross-gradient constraint [pdf 236K][source]

Velocity analysis plays a fundamental role in seismic imaging. A variety of techniques using pre-stack seismic data exist for migration-velocity analysis, including reflection tomographic inversion methods. However, when the wavefield propagation is complex, reflection tomography may fail to converge to a geologically reasonable velocity estimate. Non-seismic geological properties can be integrated into the reflection-seismic tomography problem to achieve a better velocity estimate. Here, I propose to use the cross-gradient function as a similarity measure to constrain the tomography problem and enforce a common geometrical structure for the seismic velocity estimates.
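The cross-gradient function itself is simple to state: t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), which vanishes wherever the two models' gradients are parallel, i.e. wherever they share structure. A small sketch with hypothetical models:

```python
import numpy as np

def cross_gradient(m1, m2, dz=1.0, dx=1.0):
    """Cross-gradient t = m1_x * m2_z - m1_z * m2_x; zero where the
    gradients of the two models are parallel (structurally consistent)."""
    m1z, m1x = np.gradient(m1, dz, dx)   # axis 0 = depth, axis 1 = lateral
    m2z, m2x = np.gradient(m2, dz, dx)
    return m1x * m2z - m1z * m2x

z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing="ij")
velocity = 1.5 + z            # hypothetical v(z) model
density_a = 2.0 + 3.0 * z     # same layering -> gradients parallel
density_b = 2.0 + 3.0 * x     # different structure -> gradients crossed

t_aligned = cross_gradient(velocity, density_a)   # identically zero
t_crossed = cross_gradient(velocity, density_b)   # nonzero
```

Penalizing ||t|| in the tomography objective pushes the velocity update toward the geometrical structure of the auxiliary property without forcing any particular petrophysical relation between the two.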

Transmission effects of localized variations of Earth's visco-acoustic parameters [pdf 656K][source]

In an effort to understand the transmission effects of localized heterogeneities in the subsurface, we present the travel-time and amplitude distortions caused by localized variations in velocity and absorption. To examine the relative impact of velocity and absorption heterogeneities on seismic events, we conducted numerical experiments using visco-acoustic finite-difference modeling of the linearized wave equation for Newtonian fluids. We analyzed the distortions in the midpoint-offset domain. We find that the distortion caused by an anomaly that is both slow and absorptive differs from that of an anomaly that is either slow or absorptive, but not both. Our results also indicate that the amplitude distortion of highly absorptive anomalies can be comparable to that of small velocity variations, and therefore absorption must be considered in seismic amplitude inversion and AVO analysis.

We discuss two regularized least-squares inversion formulations for time-lapse seismic imaging. Differences between the acquisition geometries of baseline and monitor datasets, or the presence of a complex overburden, can degrade the quality of the time-lapse seismic signature. In such a scenario, the time-lapse amplitude information is a poor indicator of the true reservoir property changes. Although the migration operator accurately images the seismic data, it does not remove these amplitude distortions. We pose time-lapse imaging as a joint linear inverse problem that utilizes concatenations of a target-oriented approximation to the least-squares imaging Hessian. In one of the two formulations considered, the outputs are inverted time-lapse images, while in the other, the outputs are evolving images of the study area. Using a 2D synthetic sub-salt model, we demonstrate that either joint-inversion formulation can attenuate overburden and geometry artifacts in time-lapse images and that joint wave-equation inversion yields more accurate results than migration or separate inversion.
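A toy version of the concatenated formulation (small random matrices stand in for the target-oriented Hessian approximations; all sizes and the regularization weight are hypothetical): the baseline and monitor systems are stacked into one least-squares problem with a difference regularizer that ties the two images together away from the reservoir:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
# Hypothetical target-oriented Hessian approximations for baseline/monitor.
H1 = np.eye(n) + 0.1 * rng.normal(size=(n, n))
H2 = np.eye(n) + 0.1 * rng.normal(size=(n, n))
m1_true = np.ones(n)
m2_true = np.ones(n)
m2_true[12:18] += 0.2                     # true time-lapse change
d1, d2 = H1 @ m1_true, H2 @ m2_true       # migrated "data"

lam = 0.5                                 # difference-regularization weight
zero = np.zeros((n, n))
# Concatenated (joint) system: two data misfits plus the difference penalty.
A = np.block([[H1, zero],
              [zero, H2],
              [lam * np.eye(n), -lam * np.eye(n)]])
b = np.concatenate([d1, d2, np.zeros(n)])
m = np.linalg.lstsq(A, b, rcond=None)[0]
m1, m2 = m[:n], m[n:]
dm = m2 - m1                              # inverted time-lapse image
```

The regularizer damps geometry- and overburden-driven differences while the genuine change, supported by both datasets, survives in `dm`.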

Modeling, migration, and inversion in the generalized source and receiver domain [pdf 316K][source]

I extend the theory of Born modeling, migration and inversion to the generalized source and receiver domain, a transformed domain that is obtained by linear combination of the encoded sources and receivers. I show how to construct the target-oriented imaging Hessian with encoded sources, with encoded receivers and with simultaneously encoded sources and receivers. I also demonstrate the connection of the imaging Hessian in the generalized source and receiver domain to the phase-encoded Hessian that I developed in

Image segmentation can automatically locate salt boundaries on seismic sections, an often time-consuming and tedious task when undertaken manually. However, using a single seismic attribute (usually amplitude) is sometimes insufficient to achieve an accurate segmentation. Since any quantifiable measure may be employed as an attribute for segmentation, exploring other possible attributes is an important step in developing a more robust segmentation algorithm. Dip variability within a seismic section is one attribute with many advantages for segmentation, and experimenting with different methods for calculating dips can yield improved results. Determining the frequency content of a seismic image offers other opportunities for improvement. Specifically, instantaneous frequency shows promise as another attribute for segmentation, while employing a continuous wavelet transform to study envelope amplitude at different frequencies can improve the performance of the amplitude attribute.

Hypercube viewer: New displays and new data-types [pdf 964K][source]

No single way to view seismic data is effective in all cases. Rather than building separate tools for each viewing approach, we added functionality to SEP's existing hypercube viewing tool. In addition to other functionality improvements, we added the capability to view wiggle traces, contours, out-of-core datasets, and datasets with different numbers of dimensions and sizes.

Many-core and PSPI: Mixing fine-grain and coarse-grain parallelism [pdf 288K][source]

Many of today's computer architectures are designed for fine-grain, rather than coarse-grain, parallelism. Conversely, many seismic imaging algorithms have been implemented using coarse-grain parallelization techniques. Sun's Niagara2 runs several processing threads per computational core, so the limited amount of memory per thread makes a strictly coarse-grain approach impractical. A strictly fine-grain approach can be problematic in algorithms that require frequent synchronization. We use a combination of fine-grain and coarse-grain parallelism in implementing a downward-continuation-based migration algorithm on the Niagara2. We show that the best performance can be achieved by mixing these two programming styles.

Reverse time migration: Saving the boundaries [pdf 340K][source]

The need to save or regenerate the source or receiver wavefield is one of the computational challenges of reverse time migration (RTM). The wavefield at each time step can be saved at the edge of the damping/boundary-condition zone. The wave equation can then be run in reverse, re-injecting these saved points to regenerate the wavefield. I show that this is a better choice than checkpointing schemes as the domain grows larger and when the computation is performed on a streaming architecture.
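The mechanics can be demonstrated in 1D with a second-order finite-difference scheme (all grid sizes and the source are hypothetical): during forward modeling, save one wavefield layer at each edge of the region of interest per time step; then run the same stencil backward on that region only, re-injecting the saved layers as boundary values, which reconstructs the interior wavefield to machine precision:

```python
import numpy as np

nx, nt = 300, 600
dx, dt, c = 1.0, 0.2, 4.0                 # CFL number c*dt/dx = 0.8
r2 = (c * dt / dx) ** 2
i0, i1 = 50, 250                          # interior region we want to recover

# Forward modeling on the full grid (the zero-Dirichlet edges stand in for
# the damping/boundary zone); save the layer just outside each interior edge.
u_prev = np.zeros(nx); u_prev[nx // 2] = 1.0   # impulsive initial condition
u = np.zeros(nx)
saved = np.zeros((nt, 2))
snap = None
for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    u_prev, u = u, 2.0 * u - u_prev + r2 * lap
    saved[it] = u[i0 - 1], u[i1 + 1]      # the boundary layers to keep
    if it == 300:
        snap = u[i0:i1 + 1].copy()        # reference snapshot for checking

# Reverse time stepping restricted to the interior: the same stencil run
# backward, re-injecting the saved boundary layers at every step.
v_plus = u[i0:i1 + 1].copy()              # field at the last computed step
v = u_prev[i0:i1 + 1].copy()              # field one step earlier
recon = None
for m in range(nt, 1, -1):                # reconstructs the field at step m-1
    w = np.concatenate(([saved[m - 2, 0]], v, [saved[m - 2, 1]]))
    lap = w[:-2] - 2.0 * w[1:-1] + w[2:]
    v_plus, v = v, 2.0 * v - v_plus + r2 * lap
    if m - 1 == 302:
        recon = v.copy()                  # should match the forward snapshot
```

Storage per step is only the boundary layers (two samples here; a face of the grid in 3D), instead of the full wavefield history that checkpointing trades against recomputation.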

The theory of angle-domain common-image gathers (ADCIGs) is extended to migrations performed in generalized 2D coordinate systems. I develop an expression linking the definition of reflection opening angle to various generalized geometric factors. I demonstrate that generalized-coordinate ADCIGs can be calculated directly using Fourier-based offset-to-angle approaches for coordinate systems satisfying the Cauchy-Riemann differentiability criteria. The canonical examples of tilted Cartesian, polar, and elliptic coordinates are used to illustrate the ADCIG theory. I compare analytically and numerically generated image volumes for a set of elliptically shaped reflectors. Experiments with a synthetic data set illustrate that elliptic-coordinate ADCIGs better resolve the reflection opening angles of steeply dipping structures, relative to conventional Cartesian image volumes, due to improved large-angle propagation and enhanced sensitivity to steep structural dips afforded by coordinate-system transformations.
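As supporting background (a standard identity, not taken from the paper): elliptic coordinates arise from a conformal map and therefore satisfy the Cauchy-Riemann criteria. Writing the complex position as Z = x + iz and the coordinate as zeta = mu + i nu, the map Z = h cosh(zeta) gives

```latex
x = h\cosh\mu\,\cos\nu, \qquad z = h\sinh\mu\,\sin\nu,
\qquad\text{and}\qquad
\frac{\partial x}{\partial\mu} = h\sinh\mu\cos\nu = \frac{\partial z}{\partial\nu},
\qquad
\frac{\partial x}{\partial\nu} = -h\cosh\mu\sin\nu = -\frac{\partial z}{\partial\mu},
```

so the Cauchy-Riemann conditions hold and the Fourier-based offset-to-angle machinery applies directly in these coordinates.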

Seismic interferometry versus spatial auto-correlation method on the regional coda of the NPE [pdf 3.3M][source]

A seismic recording of the non-proliferation experiment (NPE) contains the first break of the regional P phases followed by a three-minute-long coda. The frequency-domain result of seismic interferometry is studied. This procedure is analogous to the spatial auto-correlation (SPAC) method, devised for studying microtremors by Aki (1957). Cross-correlating two receiver stations retrieves, under favorable circumstances, an approximation of the Green's function between these two stations. To first order, this Green's function consists of a direct event traveling between the receivers. In the frequency domain, the lowest mode in the Green's function is a weighted and scaled zero-order Bessel function of the first kind, J0. The cross-spectrum of the coda of the NPE is estimated using multitaper spectral analysis. The retrieved Green's functions are fitted to damped J0 functions to recover phase velocity and estimates of the attenuation coefficients. Only energy between 1 and 4 Hz can be fitted unambiguously with J0 functions, because higher frequencies contain too much spurious energy. This result shows the equivalence of the SPAC method and seismic interferometry for the lowest mode in the Green's function. This study also demonstrates that the coda of a regional event, seemingly unfavorably positioned, can contain energy useful for seismic interferometry.
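The J0 result can be checked numerically (velocity, offset, and azimuth count below are hypothetical): for a diffuse field of plane waves arriving from random azimuths, the azimuthally averaged normalized cross-spectrum between two stations a distance r apart is J0(2*pi*f*r/c), which is the identity both SPAC and frequency-domain interferometry rest on:

```python
import numpy as np
from scipy.special import j0

c, r = 2500.0, 2000.0                   # hypothetical phase velocity (m/s), offset (m)
freqs = np.linspace(1.0, 4.0, 13)       # the usable 1-4 Hz band
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2.0 * np.pi, 200000)   # random propagation azimuths

# Each plane wave contributes cos(k r cos(theta)) to the normalized
# cross-spectrum; averaging over azimuth reproduces the Bessel function,
# since J0(x) is the azimuthal average of cos(x cos(theta)).
coh = np.array([np.mean(np.cos(2.0 * np.pi * f * r / c * np.cos(theta)))
                for f in freqs])
ref = j0(2.0 * np.pi * freqs * r / c)
err = np.max(np.abs(coh - ref))
```

Fitting measured cross-spectra to (damped) curves of this form then yields the phase velocity c, as done in the paper with the NPE coda.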

Seismic data geometries are not always as regular as we would like, due to various acquisition constraints. In such cases, data interpolation becomes necessary. Usually high-frequency data are aliased while low-frequency data are not, so information in the low frequencies can help us interpolate aliased high-frequency data. In this paper, I present a 3D data interpolation scheme in the pyramid domain, in which I use information in the low-frequency data to interpolate aliased high-frequency data. This is possible because, in the pyramid domain, only one prediction-error filter (PEF) is needed to represent any stationary event (plane wave) across all offsets and frequencies. However, if we need to estimate both the missing data and the PEF, the problem becomes nonlinear. By alternately estimating the missing data and the PEF, we can linearize the problem and solve it with a conventional least-squares solver.
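The pyramid-domain machinery is beyond a short sketch, but the core idea, that once a PEF is fixed, filling missing data becomes a linear least-squares problem, can be illustrated in 1D (signal, gap location, and filter length are all hypothetical):

```python
import numpy as np

n, w = 200, 0.3
s_true = np.cos(w * np.arange(n))          # a stationary "plane-wave" signal
known = np.ones(n, dtype=bool)
known[80:120] = False                      # a gap to interpolate
s = np.where(known, s_true, 0.0)

# Step 1: estimate a 3-tap PEF from a fully known segment (here the PEF is
# estimated from unaliased data, playing the role of the low frequencies).
seg = s_true[:80]
X = np.column_stack([seg[1:-1], seg[:-2]])
coef = np.linalg.lstsq(X, seg[2:], rcond=None)[0]
a = np.array([1.0, -coef[0], -coef[1]])    # PEF output ~ 0 on the signal

# Step 2: with the PEF fixed, choose the missing samples that minimize the
# PEF output energy -- a linear least-squares problem.
rows = n - 2
A = np.zeros((rows, n))
for i in range(rows):
    A[i, i:i + 3] = a[::-1]                # convolution with the PEF
Ak, Au = A[:, known], A[:, ~known]
fill = np.linalg.lstsq(Au, -Ak @ s[known], rcond=None)[0]
s_interp = s.copy()
s_interp[~known] = fill
```

Alternating step 1 and step 2 is the linearization strategy the paper describes for the case where the PEF itself must also be estimated.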

Anti-crosstalk [pdf 76K][source]

Inverse theory can never be wrong, because it is just theory. Problems arise and opportunities are overlooked when practitioners grab and use inverse theory without recognizing that alternative assumptions might be better. Here I formulate

Naturally occurring underground coal fires are a worldwide concern, emitting carbon dioxide and other pollutant gases into the atmosphere; one such coal fire is located at Durango, Colorado. We carried out elastic modeling to investigate the potential of applying P-wave seismic methods to the problem of differentiating between burned and unburned coal in the upper 30 m of the subsurface at the site near Durango. This is a challenging problem for any geophysical method, but preliminary modeling results show that it is tractable under certain circumstances. Our highly simplified model suggests that imaging the coal layer can potentially be accomplished with adequately high frequencies (source center frequency of 125 Hz); imaging the actual burned zone would be more difficult. The model neglects the major near-surface heterogeneity known to exist at the site; features such as fissures would surely produce diffractions and reflections that could obscure much of the desired signal.
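A back-of-the-envelope check of why such high frequencies are needed (the near-surface velocity here is an assumed illustrative value, not taken from the paper): vertical resolution is commonly taken as a quarter of the dominant wavelength,

```python
# Hypothetical near-surface P-wave velocity; 125 Hz from the abstract.
v, f = 1200.0, 125.0                  # m/s, Hz
wavelength = v / f                    # dominant wavelength, in meters
tuning = wavelength / 4.0             # ~quarter-wavelength resolution limit
```

so a 125 Hz source at ~1200 m/s resolves layers down to roughly 2-3 m, consistent with imaging a thin coal layer in the upper 30 m being marginal but tractable.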

Hubbert math [pdf 76K][source]

Hubbert fits growth and decay of petroleum production to the logistic function. Hubbert's relationship is commonly encountered in four different forms. They are all stated here, then derived from one of them, thus showing they are equivalent.
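The equivalence of the common forms can be checked numerically (the parameter values below are hypothetical, chosen only for illustration): cumulative production Q(t) as a logistic function, annual production P as its derivative, P = a*Q*(1 - Q/Qmax) as the logistic differential equation, the bell curve P = (a*Qmax/4)*sech^2(a*(t-t0)/2), and the linearization P/Q = a - (a/Qmax)*Q:

```python
import numpy as np

a, Qmax, t0 = 0.06, 2000.0, 2000.0       # hypothetical logistic parameters
t = np.linspace(1900.0, 2100.0, 2001)

# Form 1: cumulative production is the logistic function.
Q = Qmax / (1.0 + np.exp(-a * (t - t0)))

# Form 2: annual production is its time derivative (numerical, for checking).
P_num = np.gradient(Q, t)

# Form 3: the logistic differential equation.
P_ode = a * Q * (1.0 - Q / Qmax)

# Form 4: the bell-shaped production curve.
P_bell = a * Qmax / 4.0 / np.cosh(a * (t - t0) / 2.0) ** 2

# Form 5: the linearization P/Q, a straight line in Q with intercept a.
ratio = P_ode / Q
```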

Tar sands: Reprieve or apocalypse? [pdf 436K][source]

Based on a Hubbert-type analysis, two projections are made of tar-sands production. With tar-sands production growing at 5%/year, total petroleum production declines at an annual rate of 1-2%. With tar-sands production growing at 10%/year, total petroleum production continues rising at almost the historic rate until 2040, followed by a catastrophic decline of 50%/decade.
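For scale, a 50%-per-decade decline converts to an equivalent compound annual rate as follows:

```python
# A 50% decline per decade expressed as an equivalent annual decline rate.
annual_factor = 0.5 ** (1.0 / 10.0)                 # ~0.933 of the prior year
annual_decline_pct = 100.0 * (1.0 - annual_factor)  # ~6.7% per year
```

about 6.7% per year, several times the 1-2% annual decline of the low-growth scenario.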


2008-10-29