The separation and imaging of continuously recorded seismic data
by Chris Leader
- Thesis + source code (tar.gz)
- Thesis (PDF)
Table of contents
- Chapter 1: Introduction
- Chapter 2: Wave-equation imaging and linearized inversion
- Chapter 3: Phase-encoding and randomized sampling
- Chapter 4: Simultaneous shot separation
- Chapter 5: Field data example
- Chapter 6: High performance computing solutions
Imaging targets for exploration seismology surveys are increasingly associated with subsurface conditions that pose serious challenges for both seismic model building and imaging. In particular, these zones of interest are often sub-salt, and can feature dipping, anisotropic overburdens. In order to construct representative Earth models, and hence an accurate image, reflection datasets with large offsets, high levels of reflection point redundancy, and rich azimuthal coverage are sought. This creates a requirement for denser and more efficient field acquisition techniques, which can be achieved through blended data acquisition. Here, multiple source points are acquired simultaneously and receivers continuously record the overlapping wavefields. Separating these simultaneously acquired seismic data into their conventionally acquired equivalent is the link between more efficient data acquisition and conventional imaging and model building techniques.
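The blended-acquisition idea above can be sketched in a few lines: each shot record is shifted by its firing time and summed into one continuous receiver record, and cutting that record back into per-shot windows (the adjoint, often called pseudo-deblending) leaves crosstalk from the other sources. This is a minimal illustrative sketch with 1-D traces and integer-sample delays; the function names (`blend`, `pseudo_deblend`) are not from the thesis.

```python
import numpy as np

def blend(shots, delays, n_cont):
    """Sum individual shot records into one continuous record,
    each shifted by its firing time (in samples)."""
    cont = np.zeros(n_cont)
    for rec, t0 in zip(shots, delays):
        cont[t0:t0 + len(rec)] += rec
    return cont

def pseudo_deblend(cont, delays, n_samp):
    """Adjoint of blending: cut the continuous record back into
    per-shot windows. Overlapping energy appears as crosstalk."""
    return [cont[t0:t0 + n_samp].copy() for t0 in delays]

# Two toy shot records, 100 samples each, one reflection spike in each.
shots = [np.zeros(100), np.zeros(100)]
shots[0][30] = 1.0
shots[1][55] = 1.0

delays = [0, 40]                   # second source fires 40 samples later
cont = blend(shots, delays, 160)   # overlapping continuous record
windows = pseudo_deblend(cont, delays, 100)
# windows[0] holds shot 0's event at sample 30 plus crosstalk from
# shot 1 at sample 95 -- the contamination that separation must remove.
```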
The focus of this thesis is the development of a robust methodology for taking these continuously recorded, blended datasets and accurately separating each shot into an individual record, free from blending contamination. Successful existing techniques uniformly place strict requirements on the randomness of the source distribution during acquisition; as a consequence, a technique that can separate data acquired with repeatable delay patterns is demonstrated.
By using extended seismic migration and forward modeling, a separation is demonstrated that is less sensitive to the shooting pattern. The image space has several features amenable to the separation problem: it is of lower dimensionality, it has a higher signal-to-noise ratio, and methods such as preconditioning and regularization can be employed. Images with reduced interference noise are created, and a forward modeling process then provides a separated output dataset. While forward modeling alone provides a high level of separation, amplitude subtleties of the input data can be lost. By posing the data recovery process as an inverse problem, representative amplitude character is recovered.
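Posing recovery as an inverse problem, as above, can be sketched as a least-squares fit: minimize ||d − Γm||² where d is the continuous record and Γ the blending operator. The sketch below is illustrative only; it solves the problem with SciPy's LSQR and omits the migration/modeling operator that the thesis chains with Γ (the dimensions, delays, and the names `gamma`/`gamma_t` are assumptions, not the thesis's notation).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

n_shot, n_samp, n_cont = 2, 100, 160
delays = [0, 40]

def gamma(m):
    """Blend: shift each shot record by its firing delay and sum."""
    shots = m.reshape(n_shot, n_samp)
    d = np.zeros(n_cont)
    for rec, t0 in zip(shots, delays):
        d[t0:t0 + n_samp] += rec
    return d

def gamma_t(d):
    """Adjoint: window the continuous record back into per-shot traces."""
    return np.stack([d[t0:t0 + n_samp] for t0 in delays]).ravel()

G = LinearOperator((n_cont, n_shot * n_samp), matvec=gamma, rmatvec=gamma_t)

# Synthetic truth: one spike per shot, blended to make the "observed" data.
m_true = np.zeros(n_shot * n_samp)
m_true[30] = 1.0            # shot 0, outside the overlap zone
m_true[100 + 55] = 1.0      # shot 1, inside the overlap zone
d_obs = gamma(m_true)

m_est = lsqr(G, d_obs, iter_lim=20)[0]   # separated (deblended) estimate
```

Note that with no further constraints LSQR returns the minimum-norm solution: the spike outside the overlap zone is recovered exactly, while energy in the overlapping windows is split ambiguously between the two shots. This is precisely why additional structure, such as the image-space preconditioning and regularization described above, is needed to resolve the separation.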
Posing the problem in the extended image space relaxes the requirements on the accuracy of the migration velocity model: the extended space preserves all kinematic and amplitude information present in the input data, so no information is lost during imaging. This methodology is demonstrated on three synthetic datasets of varying complexity and on a 3D Ocean Bottom Node (OBN) dataset. Qualitative and quantitative comparisons for the synthetic tests show the separation is accurate to within roughly 5% after ten iterations. Qualitative analysis of the OBN result shows that images migrated from the separated data are comparable to those produced from an unblended input dataset.