We use ambient noise interferometry on data recorded by a distributed acoustic sensing (DAS) array to extract signals that mimic active-source surveys for geotechnical characterization, without the cost and permitting requirements of a traditional active survey. Between September 2016 and March 2018, we passively recorded DAS data on an array of fibers in existing telecommunications conduits under the Stanford University campus. We analyze time-lapse changes in the ambient noise field throughout campus and observe diurnal, weekday/weekend, and some annual variation trends. We calculate noise correlation functions (NCFs) throughout the 18 months of recording to test whether the array's NCFs are sensitive to near-surface velocity changes tied to seasonal saturation cycles. During rainier winter months, we see higher signal-to-noise ratios (SNR) in one-bit cross-correlations. To understand whether temporal changes in the ambient noise field could cause spurious changes in NCFs, we compare two methods for calculating monthly NCFs and their resulting dispersion images. The evidence does not suggest that the array detects a velocity shift correlated with saturation changes, but the SNR of NCFs at far offsets may provide a qualitative indicator of saturation.
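The one-bit cross-correlation mentioned in this abstract can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not the report's processing code; the trace lengths and the synthetic delay are invented for the example:

```python
import numpy as np

def one_bit_ncf(trace_a, trace_b):
    """Noise correlation function from one-bit normalized traces.

    One-bit normalization keeps only the sign of each sample, which
    suppresses large-amplitude transients before cross-correlation.
    """
    a = np.sign(trace_a)
    b = np.sign(trace_b)
    # Full cross-correlation; the peak lag estimates the inter-channel
    # travel time of coherent noise.
    return np.correlate(a, b, mode="full")

# Two noise traces where b is a 5-sample delayed copy of a.
rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = np.roll(a, 5)
ncf = one_bit_ncf(a, b)
lags = np.arange(-999, 1000)
peak_lag = lags[np.argmax(ncf)]  # |peak_lag| recovers the 5-sample delay
```

In practice, many short correlation windows are stacked over days to months so that the coherent part of the noise field emerges above the incoherent background.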

Plane-wave extraction from the Stanford DAS array: adjoint and inverse [pdf 1.1M]

We develop a beamforming algorithm (an adjoint method) to extract compressional and shear plane-wave components from distributed acoustic sensing (DAS) data. We also set up inverse methods regularized by the L1 and L2 norms. To understand the performance of beamforming and our inversions, we test and compare them on synthetic wavefields composed of compressional and shear waves. We consider two scenarios regarding the frequency content of the waves: impulsive plane waves and reverberant plane waves. We find that beamforming gives a good estimate in the impulsive case, but it does not work well in the reverberant case. Beamforming is also subject to crosstalk artifacts between the compressional-wave and shear-wave model spaces. In all our test cases, the inverse algorithm regularized by the L1 norm performed best, giving highly focused results with no clear crosstalk artifacts.
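A minimal version of plane-wave beamforming is a delay-and-sum slowness scan, a simpler cousin of the adjoint operator described in this abstract; it does not separate compressional from shear components, and the array geometry and wavelet below are invented for illustration:

```python
import numpy as np

def beamform_slowness(data, dx, dt, slowness_grid):
    """Delay-and-sum beamforming over a grid of trial slownesses.

    data: (n_channels, n_samples) array; channels at spacing dx.
    Returns beam power for each trial slowness.
    """
    n_ch, n_t = data.shape
    freqs = np.fft.rfftfreq(n_t, dt)
    spec = np.fft.rfft(data, axis=1)
    x = np.arange(n_ch) * dx
    power = np.zeros(len(slowness_grid))
    for i, p in enumerate(slowness_grid):
        # Phase shifts that undo the moveout of a plane wave with slowness p.
        shifts = np.exp(2j * np.pi * freqs[None, :] * p * x[:, None])
        beam = (spec * shifts).sum(axis=0)
        power[i] = np.sum(np.abs(beam) ** 2)
    return power

# Synthetic plane wave with slowness p0 crossing a 24-channel line.
dt, dx = 0.002, 10.0
t = np.arange(400) * dt
p0 = 0.001  # s/m, i.e. 1000 m/s apparent velocity
x = np.arange(24) * dx
data = np.array([np.exp(-((t - 0.3 - p0 * xi) ** 2) / (2 * 0.01 ** 2))
                 for xi in x])
grid = np.linspace(0.0, 0.002, 41)
power = beamform_slowness(data, dx, dt, grid)
best = grid[np.argmax(power)]  # should recover p0
```

The beam power peaks where the trial slowness aligns all channels, which is the adjoint (stacking) behavior that the L1-regularized inversion in the report sharpens further.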

Automated ambient noise processing applied to fiber optic seismic acquisition (DAS) [pdf 3.5M]

Distributed acoustic sensing (DAS) is an emerging technology for recording seismic data that employs fiber optic cables as the sensing element. Since September 2016, a DAS array has been deployed beneath the Stanford campus in existing fiber optic telecommunication conduits. Because telecom infrastructure makes continuous, dense seismic acquisition so easy, data collected in this manner will go to waste unless ambient noise processing is significantly automated. Herein we present relevant data features for exploratory data analysis and identify coherent noise sources that inhibit reliable extraction of useful signals. We then train a convolutional neural network to detect traffic noise and selectively filter it out, generating ambient seismic noise fields that are suitable for interferometry. Further, we use Markov decision processes to reconstruct the array geometry from the data, which gives us the potential to extend this type of acquisition to other existing fiber optic networks.


Jump-starting neural network training for seismic problems [pdf 3.2M]

Deep learning algorithms are immensely data-hungry and rely on large amounts of labeled data to achieve good performance. However, the earth is intrinsically unlabeled, and we are often confronted with fuzzy boundaries, uncertain labels, and an absence of ground truth. Moreover, deep learning models do not always generalize well to conditions that differ from those encountered during training. In this context, it can be difficult to leverage deep learning algorithms for seismic problems. Herein we introduce strategies for overcoming these limitations, using synthetic data generation and transfer learning to jump-start the training of neural networks. We present this methodology through two case studies: earthquake detection using the Northern California Seismic Network (NCSN), and targeted noise filtering for ambient seismic noise recorded by a fiber optic array underneath the Stanford campus.

Comparing Deep Neural Network and Full Waveform Inversion Velocity Estimation [pdf 1.1M]

We explore the feasibility of a deep learning approach for tomography by comparing it with the current velocity prediction techniques used in the industry. This is accomplished through quantitative and qualitative comparisons of velocity models predicted by a Machine Learning (ML) system and those of two variations of full-waveform inversion (FWI). Additionally, we compare computational aspects of the two approaches. The results show that the ML-based reconstructed models are competitive with the FWI-produced models in terms of the selected metrics, and far less expensive to compute.

Stratigraphic sequence estimation from seismic traces using convolutional neural networks [pdf 1.6M]

We train a 1D convolutional neural network to estimate stratigraphic sequences from seismic data, and evaluate which frequencies are required to obtain accurate estimates. While seismic volumes are typically unlabeled data, well logs allow us to label portions of seismic data with their corresponding geological stratigraphy. We boost the training set by generating additional synthetic well logs using Markov chain modeling. We demonstrate that the estimation accuracies increase with seismic frequency content, and while accuracies remain fairly low for frequencies under 50 Hz, we achieve accuracies over 80% when pushing toward higher frequencies.
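The Markov chain boosting step can be sketched as drawing facies sequences from a first-order transition matrix. The facies classes and transition probabilities below are hypothetical, standing in for values that would be estimated from real well logs:

```python
import numpy as np

# Hypothetical facies transition matrix (rows/cols: shale, sand, carbonate);
# in practice these probabilities would be counted from labeled well logs.
facies = ["shale", "sand", "carbonate"]
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.65, 0.05],
              [0.20, 0.10, 0.70]])

def synthetic_log(n_samples, transition, rng):
    """Draw a facies sequence from a first-order Markov chain."""
    state = rng.integers(len(transition))
    seq = [state]
    for _ in range(n_samples - 1):
        # Next facies depends only on the current one.
        state = rng.choice(len(transition), p=transition[state])
        seq.append(state)
    return np.array(seq)

rng = np.random.default_rng(1)
log = synthetic_log(500, P, rng)  # one synthetic well log, 500 depth samples
```

Each synthetic log would then be convolved with a wavelet to produce a synthetic seismic trace paired with its known stratigraphic labels.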

Convolutional neural networks explained [pdf 1.6M]

Convolutional networks, also known as ConvNets, convolutional neural networks or CNNs, are a specialized kind of neural network for processing data that has a known, grid-like topology, such as time-series data or image data. They have been successful in practical applications, ranging from face recognition to self-driving cars, and could be used for processing seismic data. In this paper, I explain how to build a convolutional network.
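The core building block of a convolutional layer is a sliding dot product. A minimal NumPy sketch of the 1-D case (deep learning libraries actually implement cross-correlation, i.e. the kernel is not flipped, but the principle is the same):

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution: slide the flipped kernel along the signal
    and take dot products. A ConvNet layer applies many such learned
    kernels, adds a bias, and passes the result through a nonlinearity.
    """
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel[::-1]
                     for i in range(len(signal) - k + 1)])

sig = np.arange(10.0)
ker = np.array([1.0, -2.0, 0.5])
out = conv1d_valid(sig, ker)  # matches np.convolve(sig, ker, mode="valid")
```

Because the same small kernel is reused at every position, a convolutional layer has far fewer parameters than a fully connected layer and is equivariant to shifts of its input.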

The high computational cost of elastic full-waveform inversion (FWI) limits its applicability to real exploration datasets. We propose a target-oriented approach that alleviates the computational burden associated with elastic FWI by limiting the inversion process to only a portion of the subsurface where an accurate and high resolution elastic model is needed (e.g., reservoir level). The proposed method is based on the reconstruction of the data generated within the target area at a depth level directly above it. This data reconstruction is performed by an extended least-squares migration of the surface data followed by a demigration to the desired depth level. We demonstrate the efficacy of this approach on a layered model in which a complex reflector is considered to be our inversion target and only pressure data are recorded.

Surface waves in vertically heterogeneous media [pdf 4.7M]

This work aims at understanding the theoretical basics of surface-wave propagation and the way surface waves naturally arise in the study of the wave equation. The analytical derivations shed light on key features such as velocity dispersion and the existence of several modes of propagation, and should lead to an intuitively clear picture of this complex phenomenon. This, in turn, will make the interpretation of surface waves on real seismic records easier and aid understanding of the methods for their analysis.
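The simplest case of the theory is the Rayleigh wave in a homogeneous half-space, whose speed c satisfies the Rayleigh equation (2 - c²/β²)² = 4·sqrt(1 - c²/α²)·sqrt(1 - c²/β²), with α and β the P and S speeds. A bisection sketch (not from the report) recovers the classical result c ≈ 0.9194β for a Poisson solid:

```python
import numpy as np

def rayleigh_speed(alpha, beta):
    """Solve the Rayleigh equation for the surface-wave speed c in (0, beta)
    by bisection. f changes sign exactly once on this interval."""
    def f(c):
        x, y = (c / alpha) ** 2, (c / beta) ** 2
        return (2 - y) ** 2 - 4 * np.sqrt(1 - x) * np.sqrt(1 - y)
    lo, hi = 1e-3 * beta, beta * (1 - 1e-9)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Poisson solid: alpha = sqrt(3) * beta, giving c/beta ~ 0.9194.
c = rayleigh_speed(np.sqrt(3.0), 1.0)
```

In the vertically heterogeneous media treated in the report, this single root is replaced by a frequency-dependent family of roots, which is precisely the dispersion and multiple modes the abstract refers to.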

In our previous report, we developed a rock physics workflow that combines various sources of data, such as well logs, mud weights, bottom-hole temperatures, and basin history, to model the relationship between pore pressure and velocity, taking into account both shale diagenesis and mechanical compaction. In this report, we apply this workflow to field data collected from the Gulf of Mexico and build velocity models for imaging and inversion. We examine a number of different pore pressure gradient scenarios and velocity models. We assess the feasibility of these models based on the quality of their resulting images and the flatness of angle gathers. Our results show that integrating rock physics and basin modeling gives a velocity model that not only fits the observed data better but is also geologically more plausible.

Toward time-lapse study of anisotropic parameters over the Genesis field [pdf 4.6M]

We estimate a 3D vertical transverse isotropy (VTI) model for the Genesis dataset. We reprocess the time-lapse seismic dataset to attenuate spatial aliasing and to minimize the impact of the different processing workflows conducted by different companies. Time-lapse reverse-time migration (RTM) of the Genesis dataset suggests that we have obtained an accurate initial VTI model. Production-induced change is observed with the help of angle-domain common image gathers (ADCIGs). We compute the gradient for velocity and anisotropic parameters, and the results suggest that long-offset data may help us identify anisotropic parameter changes during production.

The main challenge inherent to full waveform inversion (FWI) is its inability to correctly recover the Earth's subsurface seismic parameters from inaccurate starting models. This behavior is due to the presence of local minima in the FWI objective function. To overcome this problem, we propose a new objective function in which we modify the nonlinear modeling operator of the FWI problem by adding a correcting term that ensures phase matching between predicted and observed data. This additional term is computed by demigrating an extended model variable, and its contribution is gradually removed during the optimization process while ensuring convergence to the true solution. Since the proposed objective function is quadratic with respect to the extended model variable, we make use of the variable projection method. We refer to this technique as full waveform inversion by model extension (FWIME). We provide a theoretical description of our method and we illustrate its potential on two synthetic examples for which FWI fails to retrieve the correct solution. First, by inverting data generated in a borehole setup. Then, by inverting diving waves recorded with a standard surface acquisition geometry. In both cases, we purposely choose a very inaccurate initial model and we show that FWIME manages to recover the true solution.
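Variable projection, which FWIME relies on, exploits the fact that the objective is quadratic in one block of variables: the inner linear problem is solved in closed form at each trial value of the nonlinear variable, reducing the search dimension. A toy separable least-squares sketch (the operator G and its parameters are invented, not the FWIME operators):

```python
import numpy as np

def G(a, t):
    # Hypothetical modeling operator: columns are decaying exponentials
    # parameterized by the nonlinear variable a.
    return np.column_stack([np.exp(-a * t), np.exp(-2 * a * t)])

t = np.linspace(0, 5, 100)
a_true, x_true = 0.7, np.array([1.0, -0.5])
d = G(a_true, t) @ x_true  # noise-free synthetic data

def projected_misfit(a):
    """Objective with the linear variable x projected out:
    min over x of ||G(a) x - d||^2, solved in closed form."""
    Ga = G(a, t)
    x_hat, *_ = np.linalg.lstsq(Ga, d, rcond=None)  # inner linear solve
    return np.sum((Ga @ x_hat - d) ** 2)

# Scan the nonlinear variable only; the projected misfit vanishes at a_true.
grid = np.linspace(0.1, 2.0, 191)
misfits = [projected_misfit(a) for a in grid]
a_best = grid[int(np.argmin(misfits))]
```

In FWIME the roles are analogous: the extended model variable plays the part of x (solved by an inner linear inversion) while the velocity model plays the part of a.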

Frequency domain tomographic full waveform inversion [pdf 1016K]

We develop the theory of frequency domain tomographic full waveform inversion (TFWI) with the goal of alleviating some of the computational bottlenecks present in its time domain implementation. Fourier transforming the time coordinate converts the extended Born modeling operator into a pointwise multiplication operator in the frequency domain, which is computationally cheap as compared to its time domain counterpart that involves computing expensive convolutions in time. This transformation leads to significant computational gains for the inner loop sub-problem in TFWI, where for each frequency band being inverted for, the factorization of the sparse Helmholtz matrix only needs to be performed once for the whole sub-problem. The theory is developed for the acoustic case in a constant density medium, and we provide examples of some preliminary 2D numerical tests.
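The computational gain rests on the convolution theorem: a convolution in time becomes one complex multiplication per frequency. A toy NumPy check of that identity (illustrating the principle only, not the actual extended Born operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
wavelet = rng.standard_normal(n)
reflectivity = rng.standard_normal(n)

# Time domain: explicit circular convolution, O(n^2) operations.
time_conv = np.array([sum(wavelet[k] * reflectivity[(i - k) % n]
                          for k in range(n)) for i in range(n)])

# Frequency domain: one pointwise multiplication per frequency,
# O(n) after the FFTs.
freq_conv = np.fft.ifft(np.fft.fft(wavelet) * np.fft.fft(reflectivity)).real
```

The two results agree to machine precision, which is why replacing time-domain convolutions with frequency-domain multiplications (plus a reusable Helmholtz factorization per frequency band) pays off in the inner loop.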

Improving reflectivity using linearized waveform inversion with velocity updating [pdf 296K]

In this report I show how linearized waveform inversion with velocity updating can correct the estimated reflectivity amplitude in the presence of residual inaccuracies in the velocity field, in contrast to conventional linearized waveform inversion.

Velocity estimation with alternating gradiometry and wavefield reconstruction inversion [pdf 292K]

We implement a wave equation based time-domain inversion that recovers an earth model from sparse seismic data and avoids pitfalls of other popular wave equation based solutions. This method iteratively solves for an optimal wavefield through Wavefield Reconstruction Inversion (WRI) that is used to update the earth model using gradiometry. We show examples of both WRI and gradiometry that illustrate their feasibility as individual inverse problems. Finally, we combine the methods into a joint solution and demonstrate that the optimal wavefield retrieved by WRI can be used as the input to gradiometry, updating the earth model in the direction of the true solution.

3D synthetic FWI gradient testing on a Gulf of Mexico model [pdf 2.3M]

In order to demonstrate level set updating on a real dataset, I've chosen to work with an ocean-bottom node (OBN) Gulf of Mexico (GOM) dataset provided by Shell Exploration & Production Company. This data appears to have a shallow salt body inclusion that may be a good target for inversion. In this work, I demonstrate the selection of parameters for the inversion problem specific to this dataset. I walk through the process of analyzing the data, choosing propagation parameters, and perform FWI gradient testing using a synthetic example based on the real acquisition and velocity model.

Shape Optimization Using Randomized Radial Basis Functions and Interpreter Guidance [pdf 824K]

Shape optimization can be used to invert for the position of a sharp salt boundary in a way that minimizes the full-waveform inversion (FWI) objective function. However, we are also interested in using interpreter information to guide the inversion away from what are perceived as local minima. In this work we demonstrate a method of including interpreter guidance, while still maintaining the integrity of being a data driven method. With this approach, we achieve improved convergence on a synthetic 2D model.

Time-lapse full waveform inversion with non-local shaping regularization: Toward integrated reservoir monitoring [pdf 416K]

We present a regularization strategy to integrate geomechanical modeling results into the time-lapse full waveform inversion (FWI) workflow. The method constructs a non-local shaping regularization based on time-lapse attribute changes from geomechanical modeling that are linearly correlated with the seismic velocity change. The regularization pushes the velocity change to have a shape similar to that of the geomechanical attributes, overcoming the challenge of unknown scaling factors between different attributes. We show the potential of the proposed method on synthetic models with both noise-free and noisy data.

We present the theory and initial results for deblending multicomponent simultaneous source data using a pattern-based approach based on multidimensional prediction-error filters (PEFs). In using this pattern-based approach, we provide a method for PEF estimation that makes use of the directional information recorded on the horizontal geophone components in order to improve the source separation on the hydrophone. We provide synthetic numerical examples and an example from a FreeCable data set to demonstrate that using PEFs estimated on all data components results in better separation than using only the hydrophone component.

Combining direct imaging of blended data and data-space deblending using a pattern-based approach [pdf 444K]

We develop a new algorithm for directly imaging blended data via waveform inversion. The algorithm relies on performing a data-space deblending step at each iteration of the waveform inversion. Following a pattern-based approach, this data-space deblending step is done through independent modeling of the source wavefields, on which filters can be estimated and used to deblend the blended data. As the velocity model is updated, the filters will be estimated on increasingly accurate data and therefore will provide improved deblending results from iteration to iteration. We show that with the introduction of these filters, the waveform inversion results contain significantly fewer artifacts than those obtained with conventional waveform inversion of blended data.

Adaptive prediction error filters [pdf 1.2M]

The theory and applications of prediction error filters are well known. Most of the time, however, they are built to be stationary (not varying in space or time) and therefore carry only global statistical information about the signal. Hence, these filters cannot be expected to be optimal in a nonstationary environment. Here I address the problem of designing a nonstationary prediction error filter (PEF) using gradient adaptive lattice and recursive least-squares filters. Their one- and two-dimensional applications (deconvolution) to nonstationary signals show better whitening properties compared with conventional stationary PEFs.
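The simplest adaptive predictor in this family is the LMS (least-mean-squares) filter, a lighter-weight cousin of the gradient adaptive lattice and RLS filters used in the report; this sketch uses an invented AR(1) test signal:

```python
import numpy as np

def lms_pef(signal, n_taps=4, mu=0.1):
    """Adaptive one-step-ahead prediction-error filter via the LMS rule.

    Unlike a single stationary PEF fit over the whole record, the weights
    track the local statistics of the signal as they evolve.
    """
    w = np.zeros(n_taps)
    errors = np.zeros(len(signal))
    for n in range(n_taps, len(signal)):
        past = signal[n - n_taps:n][::-1]   # most recent sample first
        pred = w @ past
        e = signal[n] - pred                # prediction error (whitened output)
        w += mu * e * past                  # stochastic-gradient weight update
        errors[n] = e
    return errors, w

# Predictable AR(1) test signal: x[n] = 0.9 x[n-1] + small innovation.
rng = np.random.default_rng(2)
n = 2000
noise = 0.1 * rng.standard_normal(n)
sig = np.zeros(n)
for i in range(1, n):
    sig[i] = 0.9 * sig[i - 1] + noise[i]

errors, w = lms_pef(sig)  # error variance drops toward the innovation variance
```

After adaptation the prediction error approaches the unpredictable innovation sequence, which is the whitening behavior the abstract compares against stationary PEFs.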

Nonstationary Signal Tutorial [pdf 5.8M]


We develop the theory for implementing multiparameter full-waveform inversion using high-order accurate summation-by-parts finite difference operators and weak enforcement of boundary conditions. With these discrete operators, we derive the semi-discrete adjoint equations and gradients that closely mimic those of the continuous problem. We provide a numerical example in which we estimate the source time function from synthetic pressure data. We anticipate that this formulation will be useful for complicated modeling scenarios such as modeling point sources directly on the ocean bottom interface.
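The defining property of summation-by-parts (SBP) operators is a discrete analogue of integration by parts. A minimal second-order-accurate example (a standard construction, not the high-order operators of the report) can be verified directly:

```python
import numpy as np

# Second-order SBP first-derivative operator on n points with spacing h:
# D = H^{-1} Q, where H is a diagonal norm (quadrature) matrix and
# Q + Q^T = diag(-1, 0, ..., 0, 1) carries the boundary terms.
n, h = 11, 0.1
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

# Discrete integration by parts:
#   u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0   for all u, v.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(n), rng.standard_normal(n)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
```

Because the discrete operator mimics integration by parts exactly, the semi-discrete adjoint equations and gradients inherit the structure of their continuous counterparts, which is the property the abstract relies on.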

Minimizing wave propagation dispersion by optimal parameter search in mimetic finite differences [pdf 248K]

The discrete nature of the numerical operators used in wave propagation leads to dispersion errors in the simulation. The effects of dispersion can be mitigated by resorting to high-order operators, which yield more precise results at a greater computational cost. The mimetic finite-difference method introduced by Castillo et al. consists of finite-difference operators that retain a high order of accuracy when tackling Dirichlet boundary conditions, such as the free-surface condition in onshore acquisitions. The mimetic operators can be constructed by adjusting six free parameters. I present a study of the impact that varying these parameters has on the dispersion of a wave propagating in a 1-D medium, while searching for the optimal combination that minimizes the numerical error at no additional computational cost.
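For the plain second-order centered difference (a simpler operator than the mimetic family studied here), the dispersion error has a closed form: the semi-discrete numerical phase velocity is c·sin(k·dx/2)/(k·dx/2). A short check that the error grows as the wavelength approaches the grid spacing (medium velocity and grid parameters are invented):

```python
import numpy as np

def fd_phase_velocity(k, c, dx):
    """Numerical phase velocity of the second-order centered difference
    for the 1-D wave equation, semi-discretized in space:
        c_num(k) = c * sin(k*dx/2) / (k*dx/2).
    """
    arg = k * dx / 2
    return c * np.sin(arg) / arg

c, dx = 2000.0, 10.0                                  # m/s, m
wavelengths = np.array([200.0, 100.0, 50.0, 25.0])    # m
k = 2 * np.pi / wavelengths
error = 1 - fd_phase_velocity(k, c, dx) / c           # relative slowdown
```

At 20 grid points per wavelength the error is a fraction of a percent, while near 2.5 points per wavelength it exceeds 20 percent; parameter searches like the one in this report aim to flatten exactly this kind of curve.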

We adopt a pipeline approach to accelerate 3D time-domain finite-difference waveform inversion codes using graphics cards (GPUs), without having to do domain decomposition. The key design elements are streaming through the volume one block at a time and propagating each block for as many time steps as possible while it resides on the device. This approach allows us to process an arbitrarily large volume with a single GPU, which is particularly suitable in a cloud environment where fast inter-nodal connection is not guaranteed. Moreover, two parameters, block size and number of updates, give developers flexibility to adapt to the resources at hand. The most significant advantage that the pipeline approach offers, in our opinion, is the ability to compute subsurface offset gathers on GPUs. In this paper we describe our implementation for the pseudo-acoustic anisotropic wave equations and show that the pipeline technique achieves nearly linear scaling with the number of GPUs.

Snell tomography using quantum annealing [pdf 292K]

By restricting the constituent materials in a horizontally stratified medium to a specific set with well-known acoustic (or elastic) properties, transmission tomography can, in principle, provide the relative fraction of each material within the set of beds over which the rays traverse. In this paper, we formulate an algorithm for this “super-resolution” calculation suitable for quantum computing.
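Problems of this kind are posed for quantum annealers as QUBOs (quadratic unconstrained binary optimizations): the travel-time misfit, expanded in binary material-selection variables, becomes a quadratic form in bits. A toy classical stand-in for the annealer, with invented slownesses and layer thickness, shows the formulation:

```python
import itertools

# Toy version: each of n beds is one of two known materials with slownesses
# s0 and s1; bit q_i = 1 means bed i is material 1. A single observed
# vertical travel time through beds of thickness h constrains the sum of
# slownesses, and (t_obs - h * sum_i s(q_i))^2 expands into a QUBO in q.
s0, s1, h, n = 0.25e-3, 0.40e-3, 100.0, 4   # s/m, s/m, m, beds
true_bits = (1, 0, 1, 1)
t_obs = h * sum(s1 if b else s0 for b in true_bits)

def energy(bits):
    t = h * sum(s1 if b else s0 for b in bits)
    return (t_obs - t) ** 2

# Exhaustive search stands in for quantum annealing on this tiny problem.
best = min(itertools.product([0, 1], repeat=n), key=energy)
```

Note the degeneracy: a single travel time constrains only how many beds are each material, not which ones, so all assignments with the correct count reach zero energy; additional ray paths at other angles are what break this degeneracy in the tomography setting.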

2018-06-06