Memory efficient 3D reverse time migration
For practical implementations, 3D RTM imposes severe data-handling problems. Generally, the source wavefield is fully propagated and saved in memory; the recorded data are then back propagated and cross-correlated with the forward-modeled shot at each imaging time step. Once geometries are 3D, it is not possible to hold the full source wavefield in memory.
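As a minimal sketch of the imaging step described above, the zero-lag cross-correlation imaging condition accumulates the product of the source and receiver wavefields at each time step. The array shapes and random fields below are purely illustrative stand-ins, not the production propagation code:

```python
import numpy as np

# Hypothetical small 3D model (nz, ny, nx); real survey grids are far larger.
shape = (50, 60, 70)
nt = 100  # illustrative number of imaging time steps

rng = np.random.default_rng(0)
image = np.zeros(shape)

for it in range(nt):
    # In a real RTM these would come from forward modeling and receiver
    # back propagation; random fields stand in here only to show the loop.
    src_wavefield = rng.standard_normal(shape)
    rcv_wavefield = rng.standard_normal(shape)
    # Zero-lag cross-correlation imaging condition.
    image += src_wavefield * rcv_wavefield

print(image.shape)
```

The loop makes the memory problem concrete: either every `src_wavefield` snapshot must be stored during forward modeling, or the source wavefield must somehow be reconstructed during the backward pass.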
In addition to these large memory requirements, computational problems must also be addressed. As 3D surveys continue to grow, with longer crossline and inline offsets and denser source sampling, the source modeling and receiver back propagation must be performed over large velocity models to benefit fully from the additional data. The problem is further compounded by the fact that finite-difference wave propagation becomes less stable as dimensionality increases, so smaller time steps must be used (Dablain, 1986). Typical model sizes range from 500 MB to tens of gigabytes, and the number of time steps needed to model these data accurately without dispersion is often around 10,000. Using a spatial stencil of order 8 and a temporal stencil of order 2, over one gigapoint (Gpt) of calculations is needed per time step for both the source and the receiver wavefields.
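A back-of-envelope estimate makes these figures concrete. The grid dimensions below are an assumption chosen to match the rough one-gigapoint and 10,000-step numbers quoted above:

```python
# Illustrative 3D RTM cost estimate (assumed dimensions, single precision).
nx, ny, nz = 1000, 1000, 1000      # ~1 Gpt velocity model
nt = 10_000                        # time steps needed to avoid dispersion
bytes_per_sample = 4               # 32-bit float

points_per_step = nx * ny * nz                     # grid points per time step
snapshot_bytes = points_per_step * bytes_per_sample
full_wavefield_tb = snapshot_bytes * nt / 1e12     # storing every snapshot

print(f"points per time step : {points_per_step:.1e}")
print(f"one snapshot         : {snapshot_bytes / 1e9:.0f} GB")
print(f"full source wavefield: {full_wavefield_tb:.0f} TB")
```

Under these assumptions a single wavefield snapshot is 4 GB, and naively storing the source wavefield at every time step would require about 40 TB per shot, which is why the following sections pursue memory-efficient alternatives.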
In the following sections I will discuss how GPUs can greatly accelerate finite-difference time-domain wave propagation, and how random velocity boundaries can improve the I/O performance of RTM. I will first give a brief overview of GPUs. A standard measure of kernel performance is the number of model points calculated per second (Micikevicius, 2009), and this is the metric used subsequently.
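The points-per-second metric is simply the number of grid points updated, times the number of time steps, divided by wall-clock time. A hypothetical calculation (the grid size and placeholder update are illustrative, not a real stencil kernel or a measured benchmark):

```python
import time
import numpy as np

# Hypothetical stand-in for a finite-difference kernel: one array update
# per time step over a small 3D grid. Sizes are illustrative only.
nx, ny, nz, nt = 100, 100, 100, 10
grid = np.zeros((nz, ny, nx), dtype=np.float32)

start = time.perf_counter()
for _ in range(nt):
    grid += 1.0  # placeholder for a stencil update
elapsed = time.perf_counter() - start

points_per_second = nx * ny * nz * nt / elapsed
print(f"{points_per_second / 1e6:.1f} Mpoints/s")
```

The same formula applies unchanged to a GPU kernel; only the measured `elapsed` differs, which is what makes points per second a convenient cross-platform comparison.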