SEP149 -- TABLE OF CONTENTS

Ricker-compliant deconvolution spikes the central lobe of the Ricker wavelet, enabling deconvolution to preserve and enhance seismogram polarities. It works by tapering, at small lags, the anti-symmetric part of the time-domain representation of the log spectrum. A byproduct of this deconvolution is a pseudo-unitary (very clean) debubble filter in which bubbles are lifted off the data while onset waveforms (usually Ricker) are left untouched.
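The key construction, splitting the time-domain representation of the log spectrum into symmetric and anti-symmetric parts and tapering the latter at small lags, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the linear taper, its length, and the use of a simple unwrapped-phase complex cepstrum are all assumptions.

```python
import numpy as np

def cepstrum_parts(w, eps=1e-8):
    """Split the time-domain representation of the log spectrum (a
    simple complex cepstrum) of a wavelet w into its symmetric and
    anti-symmetric parts about zero lag."""
    W = np.fft.fft(w)
    log_spec = np.log(np.abs(W) + eps) + 1j * np.unwrap(np.angle(W))
    c = np.fft.ifft(log_spec).real
    c_neg = np.concatenate(([c[0]], c[1:][::-1]))  # c evaluated at -lag
    return 0.5 * (c + c_neg), 0.5 * (c - c_neg)

def taper_small_lags(part, n_lags=4):
    """Taper a cepstral sequence at small positive and negative lags
    with a linear ramp (a simplified stand-in for the taper above)."""
    t = np.ones(len(part))
    ramp = np.linspace(0.0, 1.0, n_lags)
    t[:n_lags] = ramp                     # small positive lags
    t[-(n_lags - 1):] = ramp[1:][::-1]    # small negative lags
    return part * t
```

The symmetric part carries the amplitude spectrum; suppressing the anti-symmetric part near zero lag relaxes the minimum-phase assumption there, which is what lets a zero-phase central lobe survive.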

Ricker wavelet deconvolution of Western Australia data in the radial/time domain [SRC]

Performing deconvolution in the radial-time (r=x/t, t) domain is more consistent with the theory of the standard convolutional model than deconvolution in the offset-time (x,t) domain. In this work we transform marine 2D seismic line data into the (r,t) domain, perform deconvolution along each common-velocity panel, and transform back to the (x,t) domain. For comparison, we also perform deconvolution on the same data (with no transform) along each common-offset panel in the (x,t) domain. Comparison in the (x,t) domain shows only limited differences in illumination, even at far offsets. Analysis of the spectra also shows similar source and receiver frequency notches in the wavelet extracted from the data by both types of deconvolution.

t-squared gain for deep marine seismograms [SRC]

Kjartansson's all-purpose gain function
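The t-power gain itself is a one-line operation; power=2 gives the t-squared gain, which roughly compensates geometrical spreading and absorption for deep marine reflections. The function and argument names below are illustrative.

```python
import numpy as np

def tpow_gain(traces, dt, power=2.0, t0=0.0):
    """Multiply each time sample by t**power; power=2.0 is the
    t-squared gain. traces: (..., nt), time along the last axis."""
    nt = traces.shape[-1]
    t = t0 + dt * np.arange(nt)
    return traces * t ** power
```
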

Convergence of full waveform inversion can be improved by extending the velocity model along either the subsurface-offset axis or the time-lag axis. The extension of the velocity model along the time-lag axis enables us to linearly model large time shifts caused by velocity perturbations. The extension is based on a new linearization of the scalar wave equation in which the extended-velocity perturbation is convolved in time with the Laplacian of the background wavefield. This linearization is accurate for both reflected and transmitted events, and in particular for diving waves recorded at large offsets. The modeling capabilities of the proposed linearization enable the simultaneous inversion of reflections and diving waves even when the starting velocity model is far from accurate. Numerical tests performed on synthetic data modeled on the ``Caspian Sea'' portion of the well-known BP model show the global-convergence properties as well as the high-resolution potential of the method.

Tomographic full waveform inversion (TFWI) by successive linearizations and scale separations [SRC]

Tomographic full waveform inversion (TFWI) provides a framework for inverting seismic data that is immune to cycle-skipping problems. This is achieved by extending the wave equation and adding an offset axis to the velocity model. However, this extension makes propagation considerably more expensive because each multiplication by the velocity becomes a convolution. We provide an alternative formulation that computes the backscattering and forward-scattering components of the gradient separately. This formulation is based on the Born approximation, in which the physical medium parameters are split into long-wavelength and short-wavelength components. The inversion setup includes two steps that maintain the high-resolution results of TFWI. First, the linearized residuals are updated in a nested inversion scheme; this step corrects for the underlying assumption that the data contain only primaries and no multiples. Second, the two components of the gradient are first mixed and then separated by a Fourier-domain scale separation, allowing a fully simultaneous inversion of model scales. After deriving the equations, we test the theory on two synthetic examples. The results on both the Marmousi and BP models show that convergence is possible even with large errors in the initial model that would have prevented convergence in conventional FWI.

Wave-equation migration velocity analysis using partial-stack power maximization [SRC]

We propose to use a partial-stack power-maximization objective function in wave-equation migration velocity analysis. Instead of stacking the angle-domain common-image gathers all at once, the partial-stack power-maximization objective function stacks them in smaller groups. This improves robustness against the cycle-skipping problem and can achieve better global convergence. We also add a normalization term to the objective function to balance the different reflector amplitudes. We test our objective function on the Marmousi model. The results demonstrate that the partial-stack power-maximization criterion can achieve better global convergence. We also observe that the normalization of reflector amplitudes is very important for better constraining the tomography problem.
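The shape of such an objective can be sketched as below; the grouping scheme and the energy normalization are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def partial_stack_power(gathers, group_size, normalize=True, eps=1e-12):
    """Partial-stack power objective for angle-domain common-image
    gathers of shape (n_angle, nz, nx): stack angles in groups of
    group_size and accumulate the power of each partial stack,
    optionally normalized by the group's total energy to balance
    reflector amplitudes."""
    obj = 0.0
    for i0 in range(0, gathers.shape[0], group_size):
        group = gathers[i0:i0 + group_size]
        partial = group.sum(axis=0)          # stack a small group of angles
        num = np.sum(partial ** 2)           # power of the partial stack
        if normalize:
            num /= np.sum(group ** 2) + eps  # balance reflector amplitudes
        obj += num
    return obj
```

With the correct velocity the gathers are flat, the partial stacks add coherently, and the objective is maximal; moveout across angles destroys the partial-stack power group by group.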

Anti-noise wave-equation traveltime inversion and application to salt estimation [SRC]

The convergence of full-waveform inversion relies heavily on a good starting model. Such a model can be provided by wave-equation traveltime inversion. However, results from wave-equation traveltime inversion are susceptible to data noise, which is particularly obvious in real-data applications. I propose to address this problem by replacing the real-data term in the back-propagation kernel with a synthetic-data term that honors the traveltime information. This modification makes the inversion immune to noise in the recorded data waveforms, yet results in a model that is still good for subsequent full-waveform inversion. I demonstrate this with an example of salt estimation. Salt estimation by direct full-waveform inversion is challenging due to a combination of inadequate starting models and insufficient low-frequency data. Reflection wave-equation traveltime inversion with my modification provides a good starting model for subsequent full-waveform inversion for salt estimation, even with data that lack low-frequency components.

Multi-dimensional angle-gather construction for AVA and velocity analysis is a computational challenge, due primarily to the accompanying increase in volume size, which forces the gathers to be stored in a computationally more expensive memory level. Compressive sensing can be used to mitigate this challenge as long as the full angle gathers can be successfully recovered. The multi-dimensional wavelet transform provides a sufficiently sparse basis to allow for a 90% reduction in the needed correlations. A modified version of iterative soft thresholding, which applies a different thresholding approach to each wavelet level, proves a successful recovery method.
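A minimal sketch of level-dependent iterative soft thresholding, using a one-level Haar transform and a 1-D subsampling mask: the transform depth, thresholds, step size, and dimensionality are all drastic simplifications of the approach described above.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_fwd(x):
    """One-level orthonormal Haar transform of an even-length signal:
    returns (approximation, detail) coefficients."""
    return (x[0::2] + x[1::2]) / SQRT2, (x[0::2] - x[1::2]) / SQRT2

def haar_inv(a, d):
    """Inverse of haar_fwd."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / SQRT2
    x[1::2] = (a - d) / SQRT2
    return x

def soft(c, t):
    """Soft thresholding with threshold t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def ista_recover(y, mask, lam_coarse, lam_fine, n_iter=100):
    """Recover a signal from subsampled data y = mask * x by iterative
    soft thresholding, with a different threshold per wavelet level:
    gentle on the coarse (approximation) coefficients, harsher on the
    fine (detail) ones."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + mask * (y - mask * x)          # data-consistency step
        a, d = haar_fwd(x)                     # sparsifying transform
        x = haar_inv(soft(a, lam_coarse), soft(d, lam_fine))
    return x
```

Treating levels differently matters because most signal energy lives in the coarse coefficients, which should not be shrunk as aggressively as the fine, noise-dominated ones.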

Accelerating residual-moveout-based wave-equation migration velocity analysis with compressed-sensing [SRC]

Residual-moveout-based wave-equation migration velocity analysis uses residual moveout to characterize the kinematic error caused by an inaccurate velocity model and computes velocity updates using wave-equation tomographic operators. However, the per-iteration cost of this method is even higher than that of full waveform inversion because of the construction and back-projection of offset/angle-domain common-image gathers. In order to speed up residual-moveout-based migration velocity analysis, we examine its workflow and propose the following acceleration scheme: 1) using compressed sensing, we can accurately reconstruct the angle-domain common-image gathers from a fraction of the subsurface-offset common-image gathers, saving significant computational cost; 2) after extracting the residual-moveout information from the reconstructed angle-domain common-image gathers, we reduce the cost of back-projection by synthesizing an approximation of the original image perturbation, which can be back-projected at much lower cost and yields a velocity gradient with the same behavior.

Salt delineation via interpreter-guided 3D seismic image segmentation [SRC]

Although it is a crucial component of seismic velocity model building, salt delineation is often a major bottleneck in the interpretation workflow. Automatic methods like image segmentation can help alleviate this bottleneck, but issues with accuracy and efficiency can hinder their effectiveness. However, a new graph-based segmentation algorithm can, after modifications to account for the unique nature of seismic data, quickly and accurately delineate salt bodies on 3D seismic images. In areas where salt boundaries are poorly imaged, limited manual interpretations can be used to guide the automatic segmentation, allowing interpreter insight to be combined with modern computational capabilities. A successful 3D field-data example demonstrates that this method could become an important tool for interactive interpretation tasks.

Large scale linearised inversion with multiple GPUs [SRC]

As our computational power develops and evolves, so does our desire to process more data with more advanced algorithms. Graphics Processing Units (GPUs) have been shown to accelerate algorithms that have a high ratio of computation to memory access. However, their relatively small global memory (6 GB) poses additional restrictions on how we must adapt our problem. Here we describe how multiple GPUs can be used to accelerate wave-equation linearised inversion in a way that imposes no model- or data-size restrictions. Furthermore, by splitting the internal and external parts of our domains we can achieve close to linear strong scaling with the number of GPUs in use. Consequently we can design a method that outperforms CPU-based inversion while overcoming the traditional restrictions of GPU-based inversion.

Efficient velocity model evaluation: 2D and 3D field data tests [SRC]

Interpretation tools like image segmentation allow for fast generation of velocity models, but evaluating those models can be computationally demanding. Previous work has shown the effectiveness of using Born-modeled synthesized wavefields to image targeted locations for this purpose, but only on synthetic data. Here, the model evaluation method is successfully demonstrated on both a 2D line and a small 3D cube taken from a wide-azimuth field survey. The importance of a quantitative measure of focusing or image quality is also shown, especially for five-dimensional datasets that are difficult to visualize and judge qualitatively.

The challenge of applying least-squares reverse-time migration (LSRTM) to areas with sharp velocity contrasts is discussed. Least-squares migration (LSM), also known as linearized inversion, is an advanced imaging technique: it provides true relative-amplitude information while suppressing acquisition footprints and migration artifacts. In most cases, when the velocity is smoothly varying, the observed data can be used directly as input for LSRTM. However, when the velocity field has a sharp contrast, as in the transition from sediment velocity to salt velocity, a background data term needs to be calculated and subtracted from the observed data before it is supplied as input to LSRTM. Special care is therefore needed when performing LSM in regions like the Gulf of Mexico where strong salt reflections are present. While straightforward with synthetic data, subtracting the background data is a non-trivial problem for field data. I introduce a salt-dimming technique for handling sharp velocity contrasts in LSM. I demonstrate the concept and methodology in 2D with a modified version of the Sigsbee2B model.

Least-squares RTM with wavefield decomposition [SRC]

In least-squares reverse-time migration, the adjoint of the linearized forward-modeling operator suffers from back-scattering artifacts that severely hamper the speed of convergence. To avoid these artifacts, I propose least-squares reverse-time migration with wavefield decomposition. The incident and scattered wavefields are decomposed into up- and down-going components such that only the forward-scattering component is used in imaging and modeling. Compared with the conventional Laplacian preconditioning method, the proposed technique converges faster because it does not bias the inversion towards higher-frequency content. I demonstrate the concept and methodology on the 2D SEAM model.
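One standard way to realize such a split is a sign mask in the 2-D Fourier domain over time and depth. The sketch below only demonstrates the mechanics: which quadrant pair is labeled up-going versus down-going depends on the transform's sign conventions, but the split itself is exact, with the two parts always summing back to the input.

```python
import numpy as np

def updown_separate(w):
    """Split a wavefield section w(t, z) into two directional parts by
    the sign of frequency x vertical-wavenumber in the 2-D Fourier
    domain. Energy on the zero lines is shared evenly so that the two
    parts sum exactly to w."""
    nt, nz = w.shape
    f = np.fft.fftfreq(nt)[:, None]      # temporal frequency axis
    kz = np.fft.fftfreq(nz)[None, :]     # vertical wavenumber axis
    s = f * kz
    mask = np.where(s > 0, 1.0, np.where(s < 0, 0.0, 0.5))
    W = np.fft.fft2(w)
    part_a = np.fft.ifft2(W * mask).real  # mask is symmetric, result real
    part_b = w - part_a
    return part_a, part_b
```
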

Extended image space separation of continuously recorded seismic data [SRC]

Conventional seismic surveying requires good temporal separation of shot points, which can often lead to long waiting times, especially in techniques that use multiple source vessels. It is well established that by recording overlapping shot points we can reduce the cost of surveying. However, this makes data processing and imaging more difficult. Many methods have been suggested that can separate overlapping data; the caveat of all of them is the requirement of random time delays between shot points. By posing the problem in the extended image space it is possible to isolate and separate these data under a wide variety of time delays, including linear time delays. The ongoing work described herein details how this method can be made computationally feasible and how to design the corresponding algorithms.

The basis for successful application of passive seismic interferometry depends on the characteristics of the ambient noise. This paper introduces four continuous recordings made at the Valhall Life of Field Seismic (LOFS) array. The ambient seismic field is characterized through the creation and analysis of various spectra, focusing particularly on the microseism energy between 0.175 and 1.75 Hz. Beamforming shows that the microseism noise is generally incident with equal strength from all directions over long periods of time, with certain exceptions. Finally, seismic interferometry is used to create virtual seismic sources for all receivers at Valhall. By cross-correlating different components at different stations, I retrieve a full virtual seismic Green's matrix that reveals interface waves of the Scholte and Love types. A first overtone of the Scholte wave is also observed.
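The core interferometry step, cross-correlating a reference station with every other station to form a virtual-source gather, can be sketched as below. Real processing adds windowing, spectral whitening, and stacking over many noise windows, all omitted here; the function and argument names are illustrative.

```python
import numpy as np

def virtual_source_gather(noise, ref=0):
    """Turn the station at index `ref` into a virtual source by
    cross-correlating its noise record with every station's record.
    noise: (n_station, nt). Returns (n_station, 2*nt) correlations
    with zero lag at the center column."""
    nt = noise.shape[1]
    F = np.fft.rfft(noise, 2 * nt, axis=1)   # zero-pad to avoid wrap-around
    xcorr = np.fft.irfft(np.conj(F[ref]) * F, axis=1)
    return np.fft.fftshift(xcorr, axes=1)
```

For diffuse noise, the lag of the correlation peak between two stations approximates the inter-station traveltime, which is what makes the gather behave like a shot record from the reference station.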

Daily reservoir-scale subsurface monitoring using ambient seismic noise [SRC]

Seismic interferometry is applied to continuous seismic recordings spanning five days and over 2200 stations at the Valhall Life-of-Field Seismic (LOFS) array in the Norwegian North Sea. We retrieve both fundamental-mode and first-overtone Scholte waves by cross-correlation. Ambient-seismic-noise tomography (ASNT) using the vertical component of this dense array produces group-velocity maps of fundamental-mode Scholte waves with high repeatability from only 24 hours of recording. This repeatability makes daily, reservoir-scale continuous monitoring of the near subsurface feasible. Such monitoring may detect production-related changes over long time scales (months to years) and may be useful for early detection of short-time-scale hazards (days to weeks) such as migrating gases and fluids. We validate our velocity maps by comparing them with maps obtained independently from controlled-source data.

Noise characterization and ambient noise cross-correlations at Long Beach [SRC]

The dense seismic array in Long Beach, California is located in an urban environment along the Pacific Ocean. There is a variety of noise sources influencing the ambient seismic noise field, both natural and anthropogenic in origin. To understand the temporal and spatial influences of these sources, we calculate power spectral densities (PSD) and apply beamforming to ambient seismic noise data. From spatial distribution maps of noise PSD, we find that energy from the Pacific Ocean dominates the noise field at frequencies below 2 Hz, while energy from local roads and Interstate 405 dominates the noise field at frequencies above 2 Hz. From spectrograms, we observe diurnal fluctuations in energy that are in accord with expected patterns in human activity. From beamforming, we find that directed, low-frequency energy from the Pacific Ocean is prevalent throughout the array, while the directivity of high-frequency energy varies throughout the array. Near Interstate 405, noise energy is clearly directed outwards from the freeway, but at a distance from the freeway, noise energy arrives more evenly across azimuths. Based on these observations, we expect the noise source distribution to be generally more homogeneous at higher frequencies than at lower frequencies. Ambient noise cross-correlation results at frequencies spanning 0.5-2 Hz and 2-4 Hz reinforce the validity of this expectation. A more exciting observation is the emergence of a P-wave at the higher-frequency range in our virtual super-source gather. This is a promising first step toward potentially retrieving other body-wave arrivals.
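The beamforming behind such directivity estimates amounts to scanning trial slowness vectors and measuring the power of the phase-aligned stack. A minimal frequency-domain sketch, with illustrative names and no windowing or whitening:

```python
import numpy as np

def beamform(data, coords, dt, slowness_grid):
    """Plane-wave delay-and-sum beamforming in the frequency domain.
    data: (n_sta, nt) noise records; coords: (n_sta, 2) positions in
    meters; slowness_grid: trial slowness vectors in s/m. Returns the
    stacked beam power for each trial slowness."""
    F = np.fft.rfft(data, axis=1)
    f = np.fft.rfftfreq(data.shape[1], dt)
    power = np.empty(len(slowness_grid))
    for i, s in enumerate(slowness_grid):
        delays = coords @ s                          # seconds per station
        shifts = np.exp(2j * np.pi * f[None, :] * delays[:, None])
        beam = (F * shifts).sum(axis=0)              # phase-aligned stack
        power[i] = np.sum(np.abs(beam) ** 2)
    return power
```

The slowness vector that maximizes the beam power gives the dominant back-azimuth and apparent velocity of the incoming noise.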

In this paper, I present a computationally efficient technique for extrapolating seismic waves in an isotropic elastic medium. The method is based on factoring the full elastic wave equation into the product of two ``one-way'' pseudo-differential operators, which are shown to be stable in their respective propagation directions. P-SV mode conversions are modeled by imposing continuity of tractions in the direction of maximum model contrast. I test the method on heterogeneous models featuring both sharp contrasts and smooth gradients, and compare results with a two-way time-domain modeling method. I achieve a significant reduction in the cost of elastic imaging compared to the currently prevalent time- and frequency-domain numerical methods.

Practically stable unstable orthorhombic finite differences [SRC]

Intrigued by an instability result presented at the last SEG meeting (Chu, 2012), we analyze it and some variants to understand the nature and extent of such instabilities. We investigate an unexpected dependency of stability on the order of our spatial derivative approximation. We find that even the apparently stable scheme in that SEG abstract is actually slightly unstable, though the instability is not qualitatively manifest until tens of thousands of time steps are taken.
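The flavor of instability described above, with growth that only becomes visible after many time steps, can be reproduced with a 1-D leapfrog toy. This is not the orthorhombic scheme of the paper, just an isotropic 1-D illustration: a Courant number barely above the stability limit gives a per-step amplification so close to one that the field looks fine for hundreds of steps before blowing up.

```python
import numpy as np

def leapfrog_max_amp(courant, nx=201, n_steps=2000):
    """Second-order 1-D wave equation with leapfrog time stepping and
    zero (Dirichlet) boundaries; returns the peak |amplitude| seen over
    the whole run, starting from a unit impulse."""
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0                     # impulse source
    c2 = courant ** 2
    peak = 1.0
    for _ in range(n_steps):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u_next = 2.0 * u - u_prev + c2 * lap
        u_prev, u = u, u_next
        peak = max(peak, float(np.abs(u).max()))
    return peak
```

Below the limit the amplitude stays bounded indefinitely; just above it (e.g. a Courant number of 1.0001) the run eventually explodes, and the closer the number is to the limit, the longer the delay.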

Applications for rotational seismic data [SRC]

Seismic systems today record up to four components, which provide the particle displacement and the pressure. The pressure is proportional to the divergence of the displacement; the curl of the displacement can be recorded using rotation sensors. To evaluate the added information that would come from rotation sensors, we use elastic modeling. In our synthetic-data experiment, we predict the effect of a seabed scatterer on seven-component OBS data: three-component geophones, three-component rotation sensors, and hydrophones. The synthetic data comprise P-waves, S-waves, and surface waves. We apply singular value decomposition to identify the polarization vectors for each wave type. Our evaluation is that the added information from rotation sensors is useful for identifying and separating surface waves from body waves. Additionally, we use elastic modeling to predict how a change in rock-physics parameters affects the AVO curve of the rotational-motion components, and compare the response to a standard AVO response obtained from the hydrophone component.
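The two quantities involved, divergence for the pressure-like response and curl for the rotation sensors, are easy to form from a gridded displacement field. A 2-D sketch with centered differences (names and conventions are illustrative):

```python
import numpy as np

def div_curl_2d(ux, uz, dx, dz):
    """Divergence and out-of-plane curl of a 2-D displacement field
    sampled on a regular grid (axis 0 = x, axis 1 = z), using centered
    finite differences in the interior."""
    div = np.gradient(ux, dx, axis=0) + np.gradient(uz, dz, axis=1)
    curl = np.gradient(uz, dx, axis=0) - np.gradient(ux, dz, axis=1)
    return div, curl
```

A curl-free (P-wave-like) field shows up only in the divergence, and a divergence-free (S-wave-like) field only in the curl, which is the separation the rotation sensors are meant to enable.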

Synthetic model building using a simplified basin modeling approach [SRC]

Generating a realistic synthetic model is a challenging problem in a geophysical research environment. Models are often too simple, lacking much of the character of real data, or too complex, making debugging difficult. I propose a different way to generate synthetic models, by allowing the user to specify a series of geologic events such as deposition, erosion, and compaction. The result of each event is approximated on the current model. This approach has the benefits of allowing complex models to be built, of being easily extendable to multiple model parameters, and of allowing the user to ``turn off'' events, enabling the construction of simpler models in stages. Early results indicate that this approach allows complex synthetic models to be generated with minimal effort.
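A stripped-down version of such an event-driven builder might look like this. The event vocabulary and semantics below are invented simplifications for illustration, not the proposed implementation.

```python
import numpy as np

def build_model(nx, nz, events, v_air=0.0):
    """Toy event-driven model builder. The model is a (nz, nx) velocity
    grid with row 0 at the top. Supported events (all hypothetical
    simplifications):
      ("deposit", thickness, velocity) -- add a flat layer on top
      ("erode", surface_row)           -- remove everything above that row
      ("compact", factor)              -- scale velocity with burial depth
    Removing an event from the list rebuilds a simpler model."""
    v = np.full((nz, nx), v_air)
    top = np.full(nx, nz, dtype=int)          # first filled row per column
    for kind, *args in events:
        if kind == "deposit":
            thickness, vel = args
            new_top = np.maximum(top - thickness, 0)
            for ix in range(nx):
                v[new_top[ix]:top[ix], ix] = vel
            top = new_top
        elif kind == "erode":
            surface_row = args[0]
            for ix in range(nx):
                if top[ix] < surface_row:
                    v[top[ix]:surface_row, ix] = v_air
                    top[ix] = surface_row
        elif kind == "compact":
            factor = args[0]
            burial = np.arange(nz)[:, None] - top[None, :]
            filled = burial >= 0              # cells below the surface
            v[filled] *= 1.0 + factor * burial[filled]
    return v
```

Because the model is a pure function of the event list, dropping one event from the list regenerates the corresponding simpler model, which is the debugging convenience described above.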

Accurate implementation of two-way wave-equation operators [SRC]

I present a complete derivation of wave-equation operators for nonlinear modeling, linearized modeling and migration, tomographic forward and adjoint operators, and wave-equation migration velocity analysis (WEMVA) operators. The derivation is done in the time domain using a chain of simple linear operators. The results show that all linearizations and adjoints are correct for any medium.
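The standard numerical check for the correctness of such adjoint pairs is the dot-product test, comparing <Fm, d> with <m, F'd> for random m and d. A generic sketch (names are illustrative, not from the paper):

```python
import numpy as np

def dot_product_test(fwd, adj, shape_m, shape_d, tol=1e-10, seed=0):
    """Check that `adj` is the adjoint of the linear operator `fwd` by
    comparing <F m, d> with <m, F' d> for random m and d."""
    rng = np.random.default_rng(seed)
    m = rng.standard_normal(shape_m)
    d = rng.standard_normal(shape_d)
    lhs = np.vdot(fwd(m), d)
    rhs = np.vdot(m, adj(d))
    return abs(lhs - rhs) <= tol * (abs(lhs) + abs(rhs) + 1e-30)
```

For example, the first-difference operator m -> m[1:] - m[:-1] passes with its true adjoint (scatter d into out[1:] and subtract it from out[:-1]) and fails with anything else, so the test catches sign and boundary errors in a chain of operators one link at a time.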

Anisotropic model building using surface seismic data is a well-known underdetermined and nonlinear problem. To stabilize the inversion, a regularization term is often added to the data-fitting objective function, assuming an a priori anisotropic model. In this paper, we build the anisotropic prior model using stochastic rock-physics modeling of shale anisotropy. We consider shale anisotropy from four aspects: the elastic anisotropy of the rock's constituent minerals, mineral transitions due to compaction and temperature, particle alignment during compaction, and shale/sand lamination. The input parameters for the rock-physics modeling are provided by two different sources: quantitative inversion results from seismic impedance in three-dimensional space, and well-log measurements at isolated well locations. For each instance of modeling, we perturb the key parameters of the rock-physics modeling to produce a set of random models. The modeling results are compared at three different depth levels in a statistical manner. The similarity between the two sets of modeling results justifies the use of the seismic inversion results in the deeper region that the well logs do not cover. To better utilize the seismic and rock-physics information, we also propose a new parameterization scheme for wave-equation migration velocity analysis.

Image-guided WEMVA for azimuthal anisotropy [SRC]

Azimuthal anisotropy is common in layered basins with strong folding and fracturing effects. Traditional processing on individual azimuths usually yields images with inconsistent depths. In this paper, we propose to use an image obtained from one azimuth to constrain the image-space velocity analysis on other azimuths. Instead of using the traditional differential semblance penalty function, we define the image penalty weight according to an existing image of one azimuth. This method directly tackles the differences in anisotropic parameters among different azimuths. By keeping the vertical velocity constant across the azimuths, we separate the kinematic effects due to the anisotropic parameters from those due to the velocity error. We test the image-guided migration velocity analysis algorithm on a simple example with flat reflectors and a homogeneous orthorhombic subsurface. We compare the residual images and the first gradients.

Wave-equation migration Q analysis (WEMQA) [SRC]

Q model building, which is conventionally done in the data space using ray-based tomography, is a notoriously challenging problem due to issues like spectral interference, low signal-to-noise ratio, diffractions, and complex subsurface structure. To produce a reliable Q model, I present a new approach with two major features. First, this method is performed in the image space, which uses downward-continuation imaging with Q to stack out noise, focus and simplify events, and provide a direct link between the model perturbation and the image perturbation. I develop two methods to generate the image perturbation for the following scenarios: models with sparse reflectors and models with dense reflectors. Second, this method uses wave-equation Q tomography to handle complex wave propagation. Two synthetic tests on two different 2-D models with Q anomalies show the capability of this method on models with sparse events. Tests with a modified SEAM model also demonstrate the feasibility of this method for a model with dense events.

In order to overcome the restrictions on trace-header information inherent in SEG-Y or SEG-D, I have written a ProMAX/SeisSpace module that writes directly into SEP3D format, preserving all numeric-valued trace headers. One feature of this module is that it does not need any SEP software environment in order to operate.

PhD genealogy of Jon Claerbout: Ancestry and legacy

PhD genealogy is the practice of tracing one's thesis adviser's adviser, and so on; to first order, this provides a lineage of academic teaching and thinking. For the Stanford Exploration Project's 40th anniversary, I compiled Jon Claerbout's academic lineage. The SEP is one of the world's most renowned academic research groups in seismic imaging, and Jon Claerbout has graduated many PhD students through this project himself. This report goes beyond the usual academic genealogy in that it also attempts to compile an academic legacy for Jon Claerbout. An up-to-date copy of this academic genealogy will be maintained online.


2013-5-29