My Research

Separation and imaging of continuously recorded seismic data

My main topic of research is the separation and subsequent imaging of seismic data that have been continuously recorded. Such data are also referred to as blended, simultaneous, or multishot data. Conventionally, when a seismic survey is acquired the crew must wait long enough that the wavefield induced by the previous shot will not interfere with the next shot; simultaneous acquisition does not have this constraint, and wavefields are permitted to overlap. This overlap can be negligible, but it can also be dense and aggressive. The interest in acquiring such data in the field is mainly economic and is twofold - reduced waiting time allows for more shots per unit time, and more shots provide more data, giving a better signal-to-noise ratio when imaging.
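As an illustration of how wavefields overlap in a continuous record, the following is a minimal sketch (the shot lengths, amplitudes, and firing times are invented for the example, not taken from a real survey):

```python
import numpy as np

def blend(shots, firing_times, record_length):
    """Sum individual shot records into one continuous blended record.

    shots: list of 1-D arrays, one trace per shot
    firing_times: firing time of each shot, in samples
    record_length: length of the continuous record, in samples
    """
    record = np.zeros(record_length)
    for shot, t0 in zip(shots, firing_times):
        # Overlapping wavefields simply add in the continuous record
        record[t0:t0 + len(shot)] += shot
    return record

# Two synthetic 100-sample shots fired close enough in time to interfere
shot_a = np.ones(100)
shot_b = 2.0 * np.ones(100)
blended = blend([shot_a, shot_b], [0, 60], 200)
# Samples 60-99 contain energy from both shots
```

With a 60-sample delay the two shots overlap for 40 samples; a conventional survey would instead wait the full 100 samples (plus a safety margin) before firing the second shot.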
Blended data processing requires more steps than a conventional scheme, as the data must either be directly inverted or separated. The problem of direct inversion is relatively straightforward, and has been shown to give good results using Least Squares Reverse Time Migration (LSRTM) in relatively few iterations. The flaw in such a method is that knowledge of the exact Earth model is assumed. A good model estimate may already be available over repeat survey sites (for example, in time-lapse imaging); for an appraisal survey, however, this approach is untenable. The problem I am researching is that of shot separation, whereby the result is a set of shot gathers: the data that would have been acquired had the survey been shot conventionally.
Many separation methods rely on random time delays between shots, followed by some form of coherency threshold applied in either the Fourier or tau-p domain; however, if the shooting pattern is not truly random, many artifacts can remain. My goal is to use an image-space-based separation, assuming only partial velocity knowledge, for shot separation. Such a method should be more robust to both linear and pseudo-random shot time delays than coherency-pass methods.
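As a toy illustration of a coherency pass (the numbers are hypothetical, and a real scheme would threshold in the Fourier or tau-p domain rather than take a median): after pseudo-deblending, the target shot is aligned across a common-receiver gather while interference from other shots lands at effectively random times, so a filter that keeps only energy coherent across traces suppresses the interference.

```python
import numpy as np

def coherency_pass(gather):
    """Keep only energy coherent across the traces of a gather.
    A median across traces is a crude stand-in for the Fourier- or
    tau-p-domain thresholds used in practice."""
    return np.median(gather, axis=0)

# Pseudo-deblended gather: the target event is aligned at sample 10 in
# every trace, while interference from other shots lands at a different
# position in each trace (positions chosen arbitrarily for the example)
n_samples = 50
target = np.zeros(n_samples)
target[10] = 1.0
interference_positions = [20, 25, 30, 35, 40]
gather = np.stack([target.copy() for _ in interference_positions])
for trace, pos in zip(gather, interference_positions):
    trace[pos] += 5.0  # strong blended-in energy from another shot

separated = coherency_pass(gather)
# The aligned target survives; the incoherent interference is rejected
```

This is also where a non-random shooting pattern breaks such methods: if the interference aligned across traces, the median would pass it through as if it were signal.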

Acceleration of 3D wavefield modelling and reverse time imaging using GPUs

Propagating wavefields in large 3D Earth models requires teraflops of computation, hence there is a common desire among geophysicists to accelerate this process. General-Purpose Graphics Processing Units (GPGPUs) are graphics cards that, through the use of the CUDA language, can be used for general data processing. The architecture of the GPU allows for massively parallel programming on a single card, with hundreds of threads running concurrently. Through mass execution and resource partitioning, these threads and blocks can run independently, effectively hiding memory latency.
If the problem at hand is both scalable and parallelisable, such a scheme can result in considerable acceleration compared to CPU programming. For wavefield modelling, GPUs provide a 28x speed-up over CPU-based propagation using a blocked scheme across 8 cores.
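The reason wavefield propagation parallelises so well is that, in an explicit finite-difference scheme, each grid point's update depends only on its neighbours at earlier time levels, so one thread can own one grid point. A minimal NumPy sketch of that per-point update for the constant-density acoustic wave equation (second order in time and space; the grid and parameters are illustrative, and a CUDA kernel would express the same arithmetic once per thread):

```python
import numpy as np

def step(p_prev, p_curr, c, dt, dx):
    """One explicit time step of the 3D constant-density acoustic wave
    equation: p_next = 2*p_curr - p_prev + (c*dt/dx)**2 * laplacian(p_curr).
    Every interior grid point is updated independently of the others at
    the same time level - exactly the parallelism a thread-per-point
    GPU kernel exploits."""
    i = slice(1, -1)
    lap = (-6.0 * p_curr[i, i, i]
           + p_curr[2:, i, i] + p_curr[:-2, i, i]
           + p_curr[i, 2:, i] + p_curr[i, :-2, i]
           + p_curr[i, i, 2:] + p_curr[i, i, :-2])
    p_next = np.zeros_like(p_curr)
    p_next[i, i, i] = (2.0 * p_curr[i, i, i] - p_prev[i, i, i]
                       + (c[i, i, i] * dt / dx) ** 2 * lap)
    return p_next

# Toy 8x8x8 grid with a uniform 1500 m/s velocity model
n = 8
c = 1500.0 * np.ones((n, n, n))
p_prev = np.zeros((n, n, n))
p_curr = np.ones((n, n, n))
p_next = step(p_prev, p_curr, c, dt=1e-3, dx=10.0)
# A spatially constant field has zero Laplacian, so each interior
# point follows 2*p_curr - p_prev = 2
```

On a GPU the two array slices per axis become neighbour reads in a kernel, and blocking the grid into thread blocks is what makes the memory access pattern efficient.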

Regularisation and interpolation using spectral methods

Imaging and processing techniques that use wave-equation-based approaches (one-way, PSPI, RTM, etc.) fundamentally require data to be regularly sampled in time and space. A variety of methods exist to interpolate these data onto a regular grid, such as sinc interpolation; however, many become insufficient once the data are too irregular or as we approach the Nyquist frequency. Spectral (Fourier) methods can create these regular data by exploiting the stationarity of parts of the data: by applying a unitary, irregular, discrete Fourier transform we can reconstruct the spectrum to that of a regular grid.
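A minimal one-dimensional sketch of that reconstruction (the sample positions, bandwidth, and least-squares solver are my own illustrative choices, not a specific published algorithm): estimate a band-limited spectrum from the irregular samples by inverting an irregular DFT matrix, then evaluate that spectrum on the regular grid.

```python
import numpy as np

def spectral_regularise(x_irreg, d_irreg, x_reg, n_k, period):
    """Reconstruct irregularly sampled 1-D data onto a regular grid by
    estimating a band-limited Fourier spectrum (n_k odd wavenumbers)."""
    k = np.arange(-(n_k // 2), n_k // 2 + 1)
    # Irregular DFT matrix: one row per sample position, one column per wavenumber
    A = np.exp(2j * np.pi * np.outer(x_irreg, k) / period)
    coeffs, *_ = np.linalg.lstsq(A, d_irreg.astype(complex), rcond=None)
    # Evaluate the estimated spectrum on the regular grid
    B = np.exp(2j * np.pi * np.outer(x_reg, k) / period)
    return (B @ coeffs).real

# A band-limited signal sampled at irregular positions over one period
period = 8.0
x_irreg = np.array([0.0, 0.7, 1.9, 2.4, 3.3, 4.8, 5.1, 6.6, 7.2])
d_irreg = np.cos(2 * np.pi * 2 * x_irreg / period)
x_reg = np.arange(8)  # the regular grid we want
d_reg = spectral_regularise(x_irreg, d_irreg, x_reg, n_k=5, period=period)
# Within the assumed bandwidth the regular samples are recovered
```

Because the true signal lies inside the assumed band, the least-squares fit is exact here; real data require choosing the band (the stationarity assumption above) and regularising the inversion.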

Helioseismic deconvolution using Kolmogoroff factorisation

Pre-processing and event identification of microseismic data

Sensitivity of anisotropic migration in tilted coordinates