next up previous [pdf]

Next: General purpose graphics processing Up: Leader and Clapp: Accelerating Previous: Leader and Clapp: Accelerating

Introduction

Reverse Time Migration (RTM) is now the standard method for advanced, accurate imaging. Despite its computational cost relative to Kirchhoff or one-way wave-equation migration, it has many desirable characteristics. In complex velocity fields the assumptions underlying one-way Wave Equation Migration (WEM) break down: only upward-traveling primary reflections are used, velocity approximations (FFD, PSPI, etc.) are made, and only approximate phase and amplitude information is retained. Hence multiply reflected events, overturning waves, and steep-dip information are not correctly positioned. Kirchhoff migration can image these steep dips, but its high-frequency assumption causes sections of the data to be mispositioned and produces high-angle artifacts in complex velocity structures. RTM images, by contrast, can resolve turning waves, horizontal waves, and prismatic waves. In addition, no velocity, amplitude, or phase approximations are necessary, although in practice such approximations are often made.

For practical implementations, 3D RTM poses large data-handling problems. Generally, the source wavefield is fully propagated and saved in memory; the recorded data are then back-propagated and correlated with the forward-modeled shot at each imaging time step. Once geometries are 3D, it is not possible to hold this source wavefield fully in memory.
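The propagate-store-correlate structure described above can be sketched in a toy 1D setting. This is a minimal stand-in, not the paper's implementation: it uses a second-order NumPy stencil in place of a high-order 3D GPU kernel, and the source position, wavelet, and receiver location are all illustrative assumptions. The point is the shape of the algorithm: the forward pass must store every time slice of the source wavefield, which is exactly the memory burden the text describes.

```python
import numpy as np

def step(p_prev, p_cur, vel, dt, dx):
    """One second-order-in-time, second-order-in-space update of the
    1D constant-density acoustic wave equation (toy stand-in for the
    high-order 3D propagator discussed in the text)."""
    lap = np.zeros_like(p_cur)
    lap[1:-1] = (p_cur[2:] - 2.0 * p_cur[1:-1] + p_cur[:-2]) / dx**2
    return 2.0 * p_cur - p_prev + (vel * dt) ** 2 * lap

nx, nt, dx, dt = 201, 400, 10.0, 0.001   # illustrative grid; CFL = 0.2
vel = np.full(nx, 2000.0)                 # m/s, constant toy model

# Forward pass: propagate the source and store every time slice.
src_hist = np.zeros((nt, nx))
p_prev = p_cur = np.zeros(nx)
for it in range(nt):
    p_new = step(p_prev, p_cur, vel, dt, dx)
    p_new[nx // 2] += np.exp(-((it - 50) * dt * 60.0) ** 2)  # toy wavelet
    p_prev, p_cur = p_cur, p_new
    src_hist[it] = p_cur

# Backward pass: inject receiver data reversed in time and apply the
# zero-lag cross-correlation imaging condition at each step.
rcv = src_hist[:, 20]          # synthetic "recorded" trace at x index 20
image = np.zeros(nx)
q_prev = q_cur = np.zeros(nx)
for it in reversed(range(nt)):
    q_new = step(q_prev, q_cur, vel, dt, dx)
    q_new[20] += rcv[it]
    q_prev, q_cur = q_cur, q_new
    image += src_hist[it] * q_cur
```

In 3D, `src_hist` becomes a four-dimensional volume that cannot be held in memory, which is what motivates the boundary-saving and random-boundary strategies discussed later.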

In addition to these large memory requirements, computational problems must also be addressed. As 3D surveys continue to grow, with longer crossline and inline offsets and denser source sampling, fully benefiting from the additional data requires performing the source modeling and receiver back-propagation over large velocity models. This problem is further compounded by the fact that finite-difference wave propagation becomes less stable as dimensionality increases, so smaller time steps must be used (Dablain, 1986). Typical model sizes range from 500 MB to tens of gigabytes, and around 10,000 time steps are often needed to model these data accurately without dispersion. With a spatial stencil order of 8 and a temporal order of 2, over one gigapoint (Gpt) of calculations is needed per time step for both the source and the receiver wavefields.
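The figures quoted above can be checked with simple arithmetic. The 1000-point-per-axis grid below is an assumed illustration chosen to match the "over one gigapoint per time step" figure in the text; real survey dimensions vary.

```python
# Back-of-the-envelope cost of the example in the text. The grid
# dimensions are illustrative assumptions, not survey-specific values.
nx = ny = nz = 1000                   # ~1 Gpt model
nt = 10_000                           # time steps to avoid dispersion

pts_per_step = nx * ny * nz           # stencil updates per step per wavefield
total_updates = pts_per_step * nt * 2 # source + receiver propagation

model_gb = pts_per_step * 4 / 1e9     # single-precision velocity model
print(f"model size      : {model_gb:.1f} GB")          # 4.0 GB
print(f"updates per step: {pts_per_step / 1e9:.1f} Gpt")  # 1.0 Gpt
print(f"total updates   : {total_updates / 1e12:.0f} Tpt")  # 20 Tpt
```

At these scales even a small per-point cost multiplies into a very large total, which is the motivation for the GPU acceleration discussed next.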

In the following sections I will discuss how GPUs can greatly accelerate finite-difference time-domain wave propagation, and how random velocity boundaries can improve the I/O performance of RTM. Initially I will give a brief overview of GPUs. A standard measure of kernel performance is the number of model points calculated per second (Micikevicius, 2009), and this is the metric that will be used subsequently.
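The points-per-second metric is straightforward to compute from a timed run: interior points updated divided by wall-clock time. The snippet below is a hypothetical timing harness, with a NumPy Laplacian standing in for a real GPU stencil kernel.

```python
import time
import numpy as np

# Stand-in "kernel": a 7-point Laplacian over a small cube. On a GPU the
# same metric applies, with the kernel launch timed instead.
n = 128
p = np.random.rand(n, n, n).astype(np.float32)

t0 = time.perf_counter()
lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
       + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
       + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
       + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
elapsed = max(time.perf_counter() - t0, 1e-9)

# Interior points updated per second, reported in Mpts/s.
mpts_per_s = (n - 2) ** 3 / elapsed / 1e6
print(f"{mpts_per_s:.1f} Mpts/s")
```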



2011-05-24