Introduction

Reverse time migration (RTM) can provide accurate subsurface images because it applies the full, two-way wave equation, so steep dips, multiples, and prismatic waves can be imaged. However, RTM is the adjoint of an idealised modelling operator and not a full inverse, so images can suffer from artefacts such as acquisition footprints, low-frequency noise, and decreased resolution. We can approximate the inverse of this modelling procedure with iterative least-squares inversion, but this quickly becomes prohibitively expensive because each iteration costs roughly twice as much as a single migration. Furthermore, the formulation of RTM (the adjoint procedure) requires the source-side wavefield to be modelled, reversed in time, and correlated step by step with the receiver-side wavefield. This modelling and reversal presents computational difficulties, since in 3D RTM we must save and re-inject a 4D source wavefield. These two problems, inversion cost and source-wavefield time reversal, can be addressed by taking advantage of correlation attributes and data redundancy.
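
To make the cost argument concrete, the following sketch (a minimal NumPy illustration, not the implementation used here) writes least-squares migration as a steepest-descent loop. The names forward and migrate are hypothetical placeholders for the linearised (Born) modelling operator and its adjoint (RTM); each iteration calls both once, which is the source of the roughly two-migrations-per-iteration cost, and the exact line search shown adds one further modelling call.

    import numpy as np

    def lsm_steepest_descent(forward, migrate, data, m0, n_iter=10):
        """Least-squares migration by steepest descent.

        forward : linearised (Born) modelling operator L, model -> data
        migrate : its adjoint L^T (migration/RTM),        data -> model
        """
        m = m0.copy()
        for _ in range(n_iter):
            residual = forward(m) - data      # one modelling call
            gradient = migrate(residual)      # one migration call
            # exact step length for the quadratic objective |Lm - d|^2;
            # in this simple form it costs one extra modelling call
            Lg = forward(gradient)
            alpha = np.vdot(gradient, gradient) / np.vdot(Lg, Lg)
            m = m - alpha * gradient
        return m

    # toy usage: a random matrix stands in for the modelling operator
    rng = np.random.default_rng(0)
    L = rng.standard_normal((200, 50))
    m_true = rng.standard_normal(50)
    d = L @ m_true
    m_est = lsm_steepest_descent(lambda m: L @ m, lambda r: L.T @ r,
                                 d, np.zeros(50), n_iter=50)
    print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))

In a real RTM setting each call to the adjoint would itself require the saved 4D source wavefield, which motivates the random-boundary approach discussed next.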

When modelling the source wavefield we have to use a finite computational domain. This domain creates artificial boundary reflections, which must be removed or they produce high-amplitude, coherent artefacts in the image. However, removing these boundary reflections makes the modelling non-reversible, so the entire source wavefield must be saved and reused when correlating with the recorded data. We can instead use random boundaries (Clapp, 2009; Fletcher and Robertsson, 2011; Shen and Clapp, 2011) to make the computation time-reversible: by saving only the final wavefield snapshots we can back-propagate the source wavefield to within numerical accuracy. Provided that the boundary reflections are sufficiently incoherent, the RTM imaging condition and subsequent stacking over shots reduce any residual incoherent noise to an imperceptible level. Such a method is particularly useful for GPU computing, where data must be read from disk to the CPU and then transferred to the GPU, compounding the cost of any disk access. By removing the need for disk-stored source wavefields we accelerate GPU-based RTM significantly.
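
The reversibility rests on the time symmetry of the standard second-order acoustic update, u^{n+1} = 2u^n - u^{n-1} + (c dt)^2 Lap(u^n), which can be rearranged to step backwards from the last two snapshots. The 1D sketch below is an illustrative toy with made-up grid and velocity parameters and a crude random boundary layer in place of a carefully designed one; it only demonstrates that the reconstruction is accurate to round-off.

    import numpy as np

    def laplacian(u, dx):
        """Second-order centred Laplacian (edges held at zero)."""
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return lap

    def step(u_prev, u_cur, c, dt, dx):
        """Standard second-order-in-time acoustic update (time symmetric)."""
        return 2.0 * u_cur - u_prev + (c * dt)**2 * laplacian(u_cur, dx)

    # toy 1D model with random (not absorbing) boundary layers
    nx, dx, dt, nt, pad = 601, 5.0, 0.001, 1500, 100
    rng = np.random.default_rng(1)
    c = np.full(nx, 2000.0)
    c[:pad]  *= 1.0 + 0.4 * rng.random(pad)   # random velocities scatter
    c[-pad:] *= 1.0 + 0.4 * rng.random(pad)   # boundary energy incoherently

    # forward propagation: keep only the final two snapshots
    u_prev = np.zeros(nx)
    u_cur = np.zeros(nx)
    u_cur[nx // 2] = 1.0                      # impulsive source
    for _ in range(nt):
        u_prev, u_cur = u_cur, step(u_prev, u_cur, c, dt, dx)

    # backward propagation: the same stencil, run in reverse, rebuilds the
    # source wavefield from the two stored snapshots
    b_prev, b_cur = u_cur, u_prev
    for _ in range(nt - 1):
        b_prev, b_cur = b_cur, step(b_prev, b_cur, c, dt, dx)

    u0 = np.zeros(nx)
    u0[nx // 2] = 1.0
    print("max reconstruction error:", np.abs(b_cur - u0).max())

On a GPU the two stored snapshots replace a disk-resident 4D wavefield, which is exactly the saving described above.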

We can address the inversion cost in a slightly different way. One approach is to reduce the amount of data being imaged by combining sources: shots are shifted, weighted, and summed to create one or several 'super shots' (Morton and Ober, 1998; Romero et al., 2000). The weights and shifts applied to the individual shots are referred to as the encoding. Such a method can also be used for full waveform inversion (Gao et al., 2010; Krebs et al., 2009). When all sources are combined into a single super shot, the cost of the inversion becomes independent of the number of sources, reducing it by roughly that factor (assuming full aperture for all shots). However, crosstalk artefacts appear wherever wavefields from different source experiments correlate coherently. These are reduced only slowly by iterating, but changing the encoding between iterations suppresses them much faster. Romero et al. (2000) and Krebs et al. (2009) show that a single-sample random encoding gives the best convergence rates. The caveat of such a scheme is that the initial residual must be recalculated each time the encoding changes (since the encoded observed data change), making this method about 1.5 times more expensive.
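
The sketch below shows the encoding step under the assumption that source wavelets and shot gathers are stored as NumPy arrays; the array shapes and names are hypothetical. A single-sample random encoding is drawn here as a random polarity per shot and applied identically to the sources and to the observed data; because the encoding is redrawn each iteration, the encoded observed data, and hence the initial residual, must be recomputed.

    import numpy as np

    def encode_shots(wavelets, shot_gathers, rng):
        """Build one 'super shot' with single-sample random (+/-1) encoding.

        wavelets     : (n_shots, nt)         individual source wavelets
        shot_gathers : (n_shots, n_rec, nt)  corresponding observed data
        Returns the encoded source and the identically encoded observed data.
        """
        w = rng.choice([-1.0, 1.0], size=wavelets.shape[0])   # random polarities
        super_source = np.tensordot(w, wavelets, axes=1)      # weighted sum of sources
        super_data = np.tensordot(w, shot_gathers, axes=1)    # same weights on the data
        return super_source, super_data

    # inside an inversion loop the encoding changes every iteration
    rng = np.random.default_rng(2)
    wavelets = np.zeros((16, 500)); wavelets[:, 0] = 1.0      # toy impulsive sources
    shot_gathers = np.zeros((16, 64, 500))                    # toy observed data
    for it in range(5):
        enc_src, enc_obs = encode_shots(wavelets, shot_gathers, rng)
        # ... model enc_src through the current model, subtract enc_obs,
        # migrate the residual, and update the model (operators omitted) ...

Time shifts, mentioned above as part of the encoding, are omitted from this sketch for brevity.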

Both random boundaries and phase encoding can be very effective in accelerating linearised inversion. However, both introduce a considerable amount of noise into the system and rely on correlation, stacking, and inversion to remove it. By combining the two methods we may make the system too incoherent, slowing the inversion when measured against its cost. Here we investigate the convergence properties of these techniques and how we can produce cleaner gradients within each iteration.

