SEP176 -- TABLE OF CONTENTS

We propose a new model-space multi-scale approach using spline interpolation in order to improve the convergence of waveform inversion techniques. We develop a 2D spline interpolation algorithm using basis spline (B-spline) functions that allows us to represent the unknown model parameters on a coarse and nonuniform grid. We simultaneously invert all available frequency components in the data while gradually refining the spline grid with iterations. The inverted model for a given grid is then used as the initial guess for the following inversion performed with a finer grid. The spline-grid refinement rate allows us to slowly increase and control the wavenumber content of the model updates without having to adopt a data-domain multi-scale approach. We believe this proposed method is crucial for improving the efficiency of techniques such as full waveform inversion by model extension (FWIME) or reflection full waveform inversion (RFWI), since all data components (including reflections) need to be simultaneously inverted in order to reconstruct the low-wavenumber components of the velocity model. In this report, we evaluate our proposed approach on conventional FWI in order to gain better insight into its advantages and limitations. We find that when FWI converges to the optimal solution, our proposed method also manages to do so, which leads us to think it may successfully be applied in the context of FWIME. Moreover, when the acquired data lack low-frequency content, both data- and model-domain multi-scale approaches converge to local minima.
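The coarse-grid representation at the core of this approach can be sketched with off-the-shelf tools. The toy example below (not the report's implementation; all grid sizes are arbitrary) stores model parameters on a coarse spline grid and evaluates them on a fine modeling grid with cubic B-splines; an inversion would update the coarse values, then refine the grid between stages:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical sizes: coarse spline grid vs. fine modeling grid.
nz_c, nx_c = 11, 21      # coarse control grid
nz_f, nx_f = 101, 201    # fine simulation grid

z_c = np.linspace(0.0, 1.0, nz_c)
x_c = np.linspace(0.0, 2.0, nx_c)
z_f = np.linspace(0.0, 1.0, nz_f)
x_f = np.linspace(0.0, 2.0, nx_f)

# Coarse model parameters (e.g., velocity perturbations) live on the spline grid.
m_coarse = np.random.default_rng(0).normal(size=(nz_c, nx_c))

# Cubic B-spline interpolation maps the coarse model to the fine grid;
# the coarse node spacing bounds the wavenumber content of any update.
spline = RectBivariateSpline(z_c, x_c, m_coarse, kx=3, ky=3)
m_fine = spline(z_f, x_f)

assert m_fine.shape == (nz_f, nx_f)
```

Because the spline passes through the coarse nodes, refining the grid only adds degrees of freedom; it never discards the model recovered at the previous stage.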

Full waveform inversion by model extension using a model-space multi-scale approach [SRC]

We successfully show that full waveform inversion by model extension (FWIME) can mitigate the cycle-skipping issue inherent to conventional full waveform inversion (FWI). We develop and apply a model-space multi-scale workflow using spline interpolation on a 2D synthetic dataset using only reflected data with no energy below 4 Hz and where conventional FWI fails to recover the optimal solution. We simultaneously invert all available frequency content in the data without adopting a data-space multi-scale approach. By gradually refining the node spacing of our model representation on the spline grid, we control the wavenumber content of the updates during the inversion process. We first apply FWIME to recover an accurate velocity model, and we use it as an initial guess for a conventional FWI workflow. The final inverted model is not cycle-skipped and very close to the true solution.

Waveform inversion of blended data: How does data blending influence different scales of the velocity model? [SRC]

I explore direct imaging of blended data via linearized and non-linear waveform inversion and how cross-talk artifacts from data blending influence different scales of the velocity model. I first examine the back-scattered component of the velocity model by analyzing the Gauss-Newton Hessian matrix of conventional and blended linearized-waveform inversion. I then look at the tomographic component of the velocity model by computing the gradients of both conventional and blended full-waveform inversion. In both the back-scattered and tomographic cases, wavenumber spectra show that high-wavenumber components are more adversely affected by data blending. Despite this loss of information, I show on a synthetic dataset that, with iteration, accurate and interpretable models can be recovered.

Illumination compensation of shadow zones in extended least squares migrated images by solving the linear inverse problem in tomographic full waveform inversion [SRC]


Waveform inversion using one-way wave extrapolation operators [SRC]

We continue investigating the problem of waveform inversion using one-way wave extrapolation operators. The nonlinear modeling operator is generalized to account for lateral velocity variations. The Born scattering operator is formulated via a linearization of the one-way wave extrapolation, and its forward and adjoint operators are derived. Finally, the validity of this approach for velocity model reconstruction is demonstrated on a synthetic example.

Low Frequency De-noising using High Frequency Prediction Error Filters [SRC]

We propose a method to de-noise the low-frequency data below 4 Hz using a prediction error filter estimated from the high-frequency data, which have a higher signal-to-noise ratio. Such a filter optimally captures the multi-dimensional spectrum of the data. Via axis dilation, the prediction error filter is applied to the low-frequency component in order to remove everything that is not consistent with the high-frequency data, while preserving the phase and amplitude of the primary energy. The resultant noise-free low frequencies can be used in full waveform inversion for a more robust update of the low wavenumbers of the model.
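The core idea can be illustrated in one dimension (the report's method is multi-dimensional and uses axis dilation, which is not reproduced here): estimate a prediction error filter on data whose spectrum we trust, then convolve it with a trace to annihilate everything consistent with that spectrum, leaving the inconsistent part as residual. Filter length and the sinusoid test are illustrative:

```python
import numpy as np

def estimate_pef(trace, nf):
    """Least-squares prediction filter of length nf (1-D illustration)."""
    # Each sample is predicted from the nf previous samples.
    n = len(trace)
    A = np.array([trace[i - nf:i] for i in range(nf, n)])
    b = trace[nf:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    # PEF = [1, -reversed prediction coefficients], so that convolution
    # outputs the prediction error.
    return np.concatenate(([1.0], -coef[::-1]))

def apply_pef(trace, pef):
    """Prediction error: small where the trace matches the spectrum
    the PEF was trained on, large where it does not."""
    return np.convolve(trace, pef, mode="valid")

# A PEF trained on a clean sinusoid annihilates that sinusoid.
t = np.arange(400)
clean = np.sin(0.3 * t)
pef = estimate_pef(clean, 5)
assert np.abs(apply_pef(clean, pef)).max() < 1e-6
```

In the de-noising context the roles are inverted: the filter trained on the (dilated) high-frequency band is used to identify, and subtract, the part of the low-frequency band it cannot predict.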

A nonlinear scheme to perform linearized waveform inversion with velocity updating [SRC]

In this paper we change the original proposal for linearized waveform inversion with velocity updating into an iterative nonlinear scheme. The previous linear scheme resulted in an objective function in which the term responsible for maximizing the image power was unbounded along all directions of perturbation of the background velocity model. In the new scheme that we experiment with in this paper, this problem is not present. An important difference from the original method is that the original aimed to solve a single linear inverse problem to estimate both the perturbations in the background and reflectivity models, whereas the new method solves a sequence of linear inverse problems. We first present an analysis of the previous method that identifies the problem, and then present the details of the new iterative scheme we are experimenting with to overcome it. It should be noted that we are in the beginning stages of development, and the method will likely change as we develop a better understanding of how it performs. We present numerical experiments that support our claims and provide insight into the properties of the objective function.

We observe distributed acoustic sensing records of guided waves excited by perforation shots in a low-velocity shale layer. Thanks to the high spatial and temporal resolution of distributed acoustic sensing acquisition, unaliased high frequencies of up to 700 Hz can be observed. We analyze and validate the existence of such waves by comparing the data recorded in both the horizontal and the vertical segments of the well with synthetic data computed using acoustic modeling. These guided waves are trapped within the low-velocity organic shale reservoir. Using a simple acoustic-modeling experiment, we show that these waves are sensitive to small velocity changes induced by hydraulic stimulation.

Seismic velocity estimation using passive downhole distributed acoustic sensing records - examples from the San Andreas Fault Observatory at Depth [SRC]

Structural imaging and event localization require an accurate estimation of the seismic velocity. However, active seismic surveys are expensive and time-consuming. During the last decade, fiber-optic-based distributed acoustic sensing (DAS) has emerged as a reliable, enduring, and high-resolution seismic sensing technology. In this study, we show how downhole DAS passive records from the San Andreas Fault Observatory at Depth can be used for seismic velocity estimation. Using data recorded from earthquakes propagating near-vertically, we compute seismic velocities using first-break picking as well as slant-stack decomposition. This methodology allows for the estimation of both P- and S-wave velocity models. We also use records of the ambient seismic field for interferometry and P-wave velocity model extraction. Results are compared to a regional model obtained from surface seismic data as well as a conventional downhole geophone survey. We find that using recorded earthquakes we obtain the highest P-wave model resolution. In addition, it is the only method that allows for S-wave velocity estimation. The obtained P and S models reveal three distinct layers in the depth range of 50-750 m that were not present in the regional model. In addition, we find high Vp/Vs values near the surface and a possible Vp/Vs anomaly about 500 m deep. We confirm its existence by observing a strong S-to-P mode conversion at that depth.
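The slant-stack step mentioned above can be sketched as a naive tau-p transform: each trace is summed along lines of constant slowness, so a plane-wave arrival stacks coherently only at its true slowness. The grid sizes and the linear-moveout test below are illustrative, not taken from the SAFOD data:

```python
import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Naive tau-p (slant-stack) transform of a gather data[nt, nx]:
    stack each trace along lines t = tau + p * x."""
    nt, nx = data.shape
    out = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))
            if 0 <= shift < nt:
                out[: nt - shift, ip] += data[shift:, ix]
    return out

# A linear event with slowness 2 (samples per unit offset) stacks
# coherently only at p = 2, peaking at its intercept time tau = 10.
data = np.zeros((100, 10))
for x in range(10):
    data[10 + 2 * x, x] = 1.0
tp = slant_stack(data, 1.0, np.arange(10.0), [0.0, 1.0, 2.0, 3.0])
assert tp[10, 2] == 10.0 and tp.max() == 10.0
```

Picking the slowness of the maximum in the tau-p panel is one way such a decomposition yields an apparent velocity for each depth interval.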

Seismic Signal and Noise Separation on Downhole Distributed Acoustic Sensing at SAFOD [SRC]

Distributed Acoustic Sensing (DAS) is an emerging technology that shows promise for monitoring earthquakes, with a low cost per sensor, high spatial and temporal resolution, and the ability to cover a long distance with a single interrogator. We implement neural networks to denoise earthquake records acquired on a vertical array. We implement a U-Net based model with an encoder-decoder structure, which is trained to learn simultaneously a sparse representation of the data and a non-linear function mapping the representation to masks of signal and noise. The masks are used for signal separation. To train our networks, we synthesize clean DAS data using 1-D geophone data with high signal-to-noise ratio recorded by the Northern California Seismic Network. The models are trained on 180,000 synthetic clean-noisy pairs. Using the signal-to-noise ratio as a denoising metric, we show that our network significantly removes noise while minimally altering the signal waveforms for all five randomly chosen synthetic and field datasets.

Preserving the elastic amplitude behavior of the recorded primary reflections is necessary to perform any amplitude-versus-offset (AVO) analysis within a prestack image space. We show how the extended subsurface-offset image space is able to preserve the elastic behavior of the primary reflections even when these events are acoustically migrated using a reverse-time-migration (RTM) approach performed in a least-squares fashion. On a single interface model, we show that the amplitude of the angle-domain-transformed subsurface-offset image closely follows the theoretical Zoeppritz response even at critical angle. In addition, on a multi-layer model we demonstrate how a regularization term can improve the coherency of the amplitude across different reflection angles when coarse shot sampling causes uneven illumination.

Elastic Wavefield Reconstruction Operators [SRC]

We describe an Elastic Wavefield Reconstruction Inversion (EWRI) formulation and derive the necessary staggered-grid wave-equation operator. We then demonstrate through a numerical example that this wave-equation operator is the inverse of the elastic wave propagation operator.

We present a seismic processing workflow for identification, separation, and removal of specific wave modes, combining feature extraction by two-dimensional continuous wavelet transforms (2-D CWT) with machine learning algorithms. It addresses the challenges arising from temporally and spatially transient phases, which cannot be effectively handled using conventional stationary filtering. We first transform the seismic data into a domain in which the signal of interest and the noise are well-separated. We establish a representation of the 2-D CWT output that is intuitive to understand, visualize, and label. We characterize the noise in this domain and use a machine learning classifier to automate the noise identification process. We then design a filter to remove the unwanted noise modes and transform the data back to its original time domain. We demonstrate the effectiveness of the method by applying it to noisy marine acquisition shot gathers. The described method is computationally robust and its theory can be extended to higher dimensions. As a consequence, the methodology is applicable to any temporally and spatially continuous seismic dataset, both before and after imaging.

Seismic image focusing analysis using deep learning and residual migration [SRC]

We present a workflow for training a deep convolutional neural network (CNN) that aids in the interpretation of geological features that are sub-optimally focused due to a complex overburden. We show that by training a CNN with prestack Stolt residual migration images, the network is able to segment poorly-focused channel features. While we trained the network on synthetic examples, we demonstrate the effectiveness of the network on a 3D test field dataset.

Stratigraphy estimation from seismic data using deep learning [SRC]

Seismic interpretation of deep stratigraphy is a challenging and time-consuming task. We present a modular and scalable data preparation pipeline and deep learning framework to assist with this task. Deep neural networks require large amounts of labeled data to train reliable models. Since the earth is intrinsically unlabeled, herein we generate synthetic data to create hundreds of thousands of labeled examples. We derive field data stratigraphy statistics from well logs from the Wilcox formation in the Gulf of Mexico and use Markov chains for data augmentation. We leverage these statistics to generate synthetic 3-D earth models and their corresponding seismic image volumes. We generate multiple seismic images using source wavelets of decreasing frequency. We train a deep neural network for image segmentation to estimate the stratigraphy from these seismic sections. We demonstrate that while the accuracy of the stratigraphy estimation decreases as the seismic data bandwidth narrows, we can increase the accuracy at lower frequencies using transfer learning.
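The Markov-chain augmentation step can be illustrated with a toy first-order chain; the facies names and transition probabilities below are invented for the sketch, not the Wilcox statistics, which in practice would be estimated from the well logs:

```python
import numpy as np

# Hypothetical facies and transition matrix (rows sum to 1); in
# practice each entry P[i, j] is the estimated probability that
# facies j directly overlies facies i in the logs.
facies = ["shale", "silt", "sand"]
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def sample_column(n_layers, start=0, seed=0):
    """Draw one synthetic stratigraphic column from the Markov chain."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_layers - 1):
        states.append(int(rng.choice(3, p=P[states[-1]])))
    return [facies[s] for s in states]

column = sample_column(20)
assert len(column) == 20 and set(column) <= set(facies)
```

Sampling many such columns with different seeds yields arbitrarily many labeled earth models that honor the layer-transition statistics of the field data.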

Supervised and unsupervised learning for velocity model building [SRC]

We present a deep learning (DL) workflow that takes analog velocity models and realistic raw seismic waveforms as inputs and produces subsurface velocity models as output. When insufficient data is used for training, DL algorithms tend to either over-fit or fail completely. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. We address this shortage of quality data by building a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) to augment our original training data set, which is then used as input by the DL-driven seismic tomography. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with the GAN-generated velocity models.

Texture Based Classification Of Seismic Image Patches Using Topological Data Analysis [SRC]

In seismic imaging, a long-sought goal has been either full or partial automation of the seismic image segmentation and interpretation processes. In this study, we present a novel supervised learning method for textural classification of seismic image patches, based on a topological tool called persistent homology. The basic workflow starts by taking an image and calculating its persistent homology, which gives us a list of birth-death pairs for different homology dimensions. Polynomial feature vectors are then extracted from these pairs, which are used to train three commonly used classifiers --- support vector machines, random forests, and neural networks, whose performances we compare. In addition, we experiment with different derived textural attributes and test the impact of using them instead of the raw images in the workflow. Our proposed method is tested on the publicly available LANDMASS datasets, which contain two sets of 2D seismic image patches grouped into four classes. The results indicate that persistent homology derived features can be powerful for automated textural segmentation of seismic images.
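The feature-extraction step can be sketched as follows, assuming persistence diagrams are already computed as (birth, death) pairs; the particular polynomial form (sums of monomials in birth and lifetime) is one common choice, not necessarily the exact one used in the study:

```python
import numpy as np

def polynomial_features(pairs, max_degree=3):
    """Map a persistence diagram (list of (birth, death) pairs) to a
    fixed-length vector by summing monomials of birth and persistence."""
    pairs = np.asarray(pairs, dtype=float)
    birth = pairs[:, 0]
    pers = pairs[:, 1] - pairs[:, 0]       # lifetime of each feature
    feats = []
    for p in range(1, max_degree + 1):
        for q in range(1, max_degree + 1):
            feats.append(np.sum(birth**p * pers**q))
    return np.array(feats)

# Two toy diagrams: one dominated by long-lived topological features,
# one by short-lived (noise-like) features. The fixed-length vectors
# separate them regardless of how many pairs each diagram contains.
v1 = polynomial_features([(0.0, 1.0), (0.1, 0.9)])
v2 = polynomial_features([(0.0, 0.1), (0.1, 0.15)])
assert v1.shape == (9,) and v1[0] > v2[0]
```

Because the output length is independent of the number of birth-death pairs, these vectors can be fed directly to standard classifiers such as SVMs or random forests.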

In this paper I study the sensitivity of different parameter combinations to seismic data in acoustic anisotropic FWI in VTI media. Through synthetic experiments, I examine eight parameterizations and observe that those combining one velocity (vertical, $v_z$, horizontal, $v_h$, or NMO, $v_n$) and two Thomsen anisotropic parameters (among $\epsilon$, $\delta$, and $\eta$) produce the best estimated models, with the lowest cost functions and model residuals and the most focused final RTM images. I also find that the choice of parameters determines the correlation between model updates. For example, if the inversion is set up in terms of $(v_z,\epsilon,\delta)$, the final $\epsilon$ update has a positive correlation with those of the other two parameters. However, when used in $(v_h,\epsilon,\delta)$, the $\epsilon$ update is in the opposite direction from the other two. When choosing to invert for one velocity and two anisotropic parameters, it might be tempting to perform a mono-parameter inversion by updating only the most influential parameter, velocity, if the smooth background models for the Thomsen parameters are deemed good enough. I find that a simultaneous inversion of all three parameters results in a better velocity estimation than that from a one-parameter inversion, even though the inverted Thomsen parameters might be erroneous due to crosstalk from velocity.

Building time-lapse VTI models from coupled fluid-geomechanical simulation [SRC]

A synthetic time-lapse (4D) model with changes that closely match those from a producing reservoir would improve our understanding of the 4D seismic data. In this paper we present a workflow to build a time-lapse vertical transversely isotropic (VTI) model based on coupled fluid-geomechanical simulation data.

Cloud-based object stores offer large bandwidth but significantly increased latency compared to block and file storage systems. To achieve performance on cloud-based object-store systems, datasets can be broken into multiple objects and IO operations performed in parallel. Performance on Google's cloud processing system using this technique is described. To scale to hundreds or thousands of instances, the dataset structure must be redesigned to allow the concatenation of thousands of datasets with minimal additional bookkeeping. A modification to SEP's SEP-3D grid-based processing system is described.
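The split-and-parallelize idea can be sketched with Python's thread pool; reading small local files stands in for object-store GETs (real use would go through a cloud client such as google-cloud-storage, whose API is not reproduced here):

```python
import concurrent.futures
import os
import tempfile

# Stand-in for a dataset split into many objects: small local files.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(8):
    p = os.path.join(tmpdir, f"chunk_{i:03d}.bin")
    with open(p, "wb") as f:
        f.write(bytes([i]) * 1024)
    paths.append(p)

def fetch(path):
    """One IO operation; on an object store this would be a GET request."""
    with open(path, "rb") as f:
        return f.read()

# Issue all reads concurrently: each request still pays the high
# per-object latency, but the requests overlap, so aggregate
# throughput approaches the store's large available bandwidth.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    chunks = list(pool.map(fetch, paths))

data = b"".join(chunks)
assert len(data) == 8 * 1024
```

Because `pool.map` preserves input order, the chunks concatenate back into the original dataset without extra bookkeeping per object.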

A flexible library for geophysical inverse problems -- structure and usage [SRC]

We implement an object-oriented inversion library based on the concept of operators that can be easily applied to both large- and small-scale inverse problems. By using general mathematical vector and function concepts, we design classes, or objects, that can be used to minimize convex and non-convex objective functions. We report different applications of the library to demonstrate its potential on various inverse problems.
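The operator concept at the heart of such a library can be sketched as a class exposing forward and adjoint actions, verified by the standard dot-product test; the class and method names here are illustrative, not the library's actual API:

```python
import numpy as np

class Operator:
    """Linear operator defined by its forward and adjoint actions.
    (Here backed by an explicit matrix purely for illustration; a real
    library would use matrix-free implementations.)"""

    def __init__(self, mat):
        self.mat = np.asarray(mat, dtype=float)

    def forward(self, model):
        return self.mat @ model

    def adjoint(self, data):
        return self.mat.T @ data

    def dot_test(self, seed=0):
        """Check <A m, d> == <m, A' d> for random m and d, the usual
        consistency test between forward and adjoint codes."""
        rng = np.random.default_rng(seed)
        m = rng.normal(size=self.mat.shape[1])
        d = rng.normal(size=self.mat.shape[0])
        return bool(np.isclose(self.forward(m) @ d, m @ self.adjoint(d)))

A = Operator(np.random.default_rng(1).normal(size=(5, 3)))
assert A.dot_test()
```

Gradient-based solvers then need nothing beyond `forward` and `adjoint`, which is what lets one library serve problems of very different scales.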


2019-05-03