
Bottlenecks

There are four potential bottlenecks in this process. The first two involve the cost of propagating the wavefield from $t=0$ to $t_{\rm max}$. The explicit time-marching scheme described above has an operation count proportional to the number of time steps $nt$ times the size of the domain $nx*ny*nz$. To obtain a stable and non-dispersive solution, we are constrained in our choice of sampling in time and space. If the time steps are too large, or the medium is sampled too coarsely for a given velocity or frequency content in the data, we run into problems with stability, dispersion, and/or accuracy. The sampling is controlled by the minimum and maximum velocities in the medium, the maximum frequency we want to propagate, and the order of accuracy of the derivative approximations. By using higher-order approximations to the space and time derivatives, we can get away with coarser sampling in time and space at the cost of more operations per sample. This is generally a beneficial trade-off, particularly for the spatial derivatives, because the cost of the derivative approximation grows only linearly with its order, while the number of samples grows cubically as the grid is refined.
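The sampling constraints above can be sketched numerically. The following is a minimal illustration, not the paper's scheme: it assumes a generic dispersion rule of a fixed number of grid points per shortest wavelength and a CFL-type stability bound; the constants `points_per_wavelength` and `cfl` depend on the stencil order and are illustrative values, not taken from the text.

```python
def sampling_limits(v_min, v_max, f_max, points_per_wavelength=5.0, cfl=0.5):
    """Return (dx_max, dt_max) for a stable, low-dispersion explicit scheme.

    Dispersion control: keep enough grid points per shortest wavelength,
        dx <= v_min / (f_max * points_per_wavelength).
    Stability (CFL-type bound): dt <= cfl * dx / v_max.
    Both constants depend on the order of the derivative approximations;
    higher-order stencils permit larger dx and dt.
    """
    dx = v_min / (f_max * points_per_wavelength)
    dt = cfl * dx / v_max
    return dx, dt

# Example: water velocity 1500 m/s, fast rock 4500 m/s, 30 Hz maximum frequency
dx, dt = sampling_limits(v_min=1500.0, v_max=4500.0, f_max=30.0)
```

Note how the minimum velocity sets the spatial sampling (shortest wavelengths live in the slowest medium) while the maximum velocity, through the CFL bound, sets the time step.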

One aspect of the time-marching scheme not yet mentioned is that the physical experiment takes place in a half-space, while the computational experiment happens in a finite domain. As a result, the computational experiment produces artificial reflections from the boundaries of the domain. These effects are usually reduced through one or more of the following techniques: introducing a damping region around the computational domain (Cerjan et al., 1985); imposing a boundary condition that attempts to absorb energy propagating within a limited range of angles about the normal to the boundary (Clayton and Engquist, 1980); or, most effective, costly, and complex, using a Perfectly Matched Layer (PML) (Berenger, 1996), which surrounds the domain with an artificial medium in which the wave decays as it approaches the boundary.
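The simplest of these, the damping region, can be sketched as follows. This is a hedged illustration in the style of Cerjan et al. (1985), not their exact formula: the wavefield inside an absorbing strip of `n_pad` points is multiplied each time step by a Gaussian taper, and the tuning constant `alpha` is an assumed example value.

```python
import numpy as np

def cerjan_taper(n_total, n_pad, alpha=0.015):
    """1-D damping profile for an absorbing strip around the domain.

    Interior points keep weight 1. Within the n_pad-point strip on each
    side, the wavefield is multiplied every time step by exp(-(alpha*d)^2),
    where d is the distance (in grid points) into the strip, so energy
    decays smoothly before it reaches the boundary.
    """
    w = np.ones(n_total)
    d = np.arange(n_pad, 0, -1)        # distance into the strip: n_pad .. 1
    damp = np.exp(-(alpha * d) ** 2)
    w[:n_pad] = damp                   # left/bottom edge
    w[-n_pad:] = damp[::-1]            # right/top edge (mirrored)
    return w
```

In 2-D or 3-D the same 1-D profile is applied along each axis (e.g. as an outer product), and the taper multiplies the wavefield after every update.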

A third problem is that it is impractical to store the 4-D volumes $s$ and $g$ in memory; these volumes are often multiple terabytes in size. As a result, for a second-order time approximation, we normally keep in memory only the two previous time steps needed to update the wavefield. This causes a problem for the imaging condition: the source field must be propagated from $t=0$ to $t=t_{\rm max}$, while the receiver wavefield must be propagated from $t=t_{\rm max}$ to $t=0$, yet the two fields must be correlated at matching times. To solve this problem, one of the propagations must be stored to disk, either completely or in a check-pointed manner. Symes (2007) and Dussaud et al. (2008) discuss checkpointing methods that handle the opposite propagation directions.
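The snapshot variant of this idea can be sketched as follows. This is a toy outline, not the checkpointing schemes of Symes or Dussaud et al.: `propagate` is a hypothetical stand-in for one explicit stencil update, and the snapshots are kept in a dictionary where a real implementation would write them to disk.

```python
import numpy as np

def rtm_image_with_snapshots(propagate, nt, shape, snap_every=1):
    """Sketch of the snapshot approach to the imaging condition.

    The source field is stored every `snap_every` steps during forward
    propagation; the receiver field is then run backward in time and
    cross-correlated (zero lag) with the stored source snapshots at
    matching time indices, accumulating the image.
    """
    src = np.zeros(shape)
    snaps = {}                              # in practice: written to disk
    for it in range(nt):                    # forward: t = 0 .. t_max
        src = propagate(src, it, forward=True)
        if it % snap_every == 0:
            snaps[it] = src.copy()
    image = np.zeros(shape)
    rcv = np.zeros(shape)
    for it in reversed(range(nt)):          # backward: t = t_max .. 0
        rcv = propagate(rcv, it, forward=False)
        if it in snaps:
            image += snaps[it] * rcv        # zero-lag cross-correlation
    return image
```

Raising `snap_every` trades image fidelity (or re-computation, in true checkpointing) against storage, which is the core of the checkpointing trade-off.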

A fourth potential problem is the very large image domain required when constructing subsurface-offset gathers, since the extended image adds an offset axis (or axes) to the 3-D image volume. This is a particular problem for accelerators with limited memory.
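A back-of-the-envelope estimate makes the scale concrete. The grid dimensions and offset count below are assumed example numbers, not figures from the text; the point is only that storage grows linearly with the number of subsurface offsets kept.

```python
def gather_memory_gb(nx, ny, nz, n_offsets, bytes_per_sample=4):
    """Rough memory footprint (GiB) of a subsurface-offset image volume.

    The extended image is the 3-D image times the number of offsets
    retained, at bytes_per_sample per value (4 for single precision).
    """
    return nx * ny * nz * n_offsets * bytes_per_sample / 1024**3

# Example: a 1000 x 1000 x 500 grid with 41 subsurface offsets
size_gb = gather_memory_gb(1000, 1000, 500, 41)
```

Even this modest example needs tens of gigabytes, well beyond the on-board memory of typical accelerator cards, which is why the image domain itself can become the bottleneck.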



2009-10-16