next up previous print clean
Next: Numerical experiments Up: Qian & Symes: Adaptive Previous: WENO for advection equations

Adaptive grid method and its implementation

The adaptive multigrid method has been widely used for the accurate and efficient solution of PDEs (Berger and LeVeque, 1997; Berger and Oliger, 1984); it uses an adaptive local error estimate to refine or coarsen the computational grid near singularities or discontinuities of the solution. For initial value problems for ODEs, most state-of-the-art software uses adaptive timestepping, in which the timesteps are chosen inductively so that some estimate of the local (one-step) error at each step stays below a quantity related to a user-defined tolerance.

The essential principle is simple. It is based on a hierarchy of difference schemes of various orders. Presumably a higher order step is more accurate than a lower order step, so the higher order step can serve as an ersatz for the exact solution of the differential equation with the same data. Therefore, one can combine the step computations of two different orders to obtain a so-called a posteriori estimate of the truncation error for the lower order step. Since the lower and higher order truncation errors stand in a known asymptotic relation, this permits an estimate of the higher order truncation error as well. The asymptotic form of the truncation error then permits prediction of a step that will result in a lower order truncation error less than a user-specified tolerance. So long as the steps are selected to maintain this local error, standard theory predicts that the higher order global error, i.e., the actual error in the solution computed using the higher order scheme, will be proportional to the user-specified tolerance.
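The combination of two orders can be made concrete in generic notation. As a sketch (the coefficients $C_p$ and labels $\tau_p$ are our illustrative notation, not the paper's), suppose one step of a $p$th order scheme from exact data at depth $z_k$ satisfies:

```latex
% One step of a p-th order scheme starting from exact data at z_k:
\tau_p^{k+1} = \tau(z_{k+1}) + C_p\,\Delta z^{\,p+1} + O(\Delta z^{\,p+2}),
\qquad p = 2,\ 3.
% Subtracting the two steps cancels the exact solution, so the
% difference is an a posteriori estimate of the 2nd order local error:
\tau_2^{k+1} - \tau_3^{k+1} = C_2\,\Delta z^{3} + O(\Delta z^{4}).
```

The estimated coefficient $C_2$ then predicts, via the asymptotic form, the largest step $\Delta z$ whose 2nd order local error remains below a given tolerance.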

This straightforward idea is embedded in most modern software packages for the solution of ordinary differential equations (Gear, 1971). Its use for partial differential equations is a little more complicated because it is usually necessary to adjust the grid of the non-evolution variables along with the evolution step. The solution of the (paraxial) eikonal equation changes in a sufficiently predictable way to make grid adjustment practical.

To initialize our algorithm, the user supplies a local error tolerance $\epsilon$; $\sigma_{1}$ and $\sigma_{2}$ are two user-defined positive functions of $\epsilon$ that control the coarsening and refinement. For example, we can take $\sigma_{1}= 0.1\epsilon$ and $\sigma_{2}= \epsilon$. We use the 2nd and 3rd order eikonal solvers (equations (5) and (6)), and estimate the truncation error of the 2nd order scheme at the kth step as $e_2^k = \delta_2^2 \tau^k - \delta_3^3 \tau^k$. So long as $\sigma_{1}(\epsilon) \le e_2^k \le \sigma_{2}(\epsilon)$, we simply proceed to the next step; as the references cited above explain, efficient adaptive stepping requires rather loose control of the local error. When $e_2^k < \sigma_{1}(\epsilon)$, we increase the step by a factor of two, i.e., $\Delta z \leftarrow 2\Delta z$, and recompute $e_2^k$. Similarly, when $e_2^k > \sigma_{2}(\epsilon)$, we decrease the step by a factor of two. As soon as the local error is once again within the tolerance interval, we continue depth-stepping. A very important point: at the end of each depth step we retain the more accurate 3rd order computation of $\tau$ and discard the 2nd order computation, which is used only for step control.
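The step-control decision just described can be sketched as a small function (a minimal sketch in our own notation; the name `adjust_step` and the returned action labels are ours, not the paper's):

```python
def adjust_step(e2, dz, sigma1, sigma2):
    """One application of the step control described above: halve the
    depth step when the local error estimate exceeds sigma2, double it
    when the estimate falls below sigma1, and otherwise accept."""
    if e2 > sigma2:
        return dz / 2.0, "refine"
    if e2 < sigma1:
        return dz * 2.0, "coarsen"
    return dz, "accept"
```

With the example choices from the text, one would call this with `sigma1 = 0.1 * eps` and `sigma2 = eps` for a user tolerance `eps`.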

The usual step adjustment in ODE solvers would change $\Delta{z}$ by a factor computed from the asymptotic form of the truncation error. This is impractical for a PDE application, because it would require an arbitrary adjustment of the spatial grid (i.e., the x-grid in the eikonal scheme) and, therefore, expensive interpolation. Scaling $\Delta{z}$ by a factor of two, however, implies that stability may be maintained by scaling $\Delta x$ by the same factor. For coarsening, this means simply throwing out every other grid point, i.e., no interpolation at all, which dramatically reduces the floating point operations required. Since the typical behaviour of the traveltime field is to become smoother as one moves away from the source, the truncation errors tend in general to decrease. Therefore, most of the grid adjustments are coarsenings and very little or no interpolation is required.
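The factor-of-two grid adjustment can be illustrated directly (a sketch with our own helper names; the linear interpolation used for refinement is our stand-in, since, as noted below, the paper's refinements occur where $\tau$ is known analytically and need no interpolation):

```python
import numpy as np

def coarsen(x, tau):
    """Coarsening: drop every other grid point -- no interpolation."""
    return x[::2], tau[::2]

def refine(x, tau):
    """Refinement: insert midpoints into the x-grid, filling tau at the
    new points by linear interpolation (our illustrative choice)."""
    xf = np.empty(2 * x.size - 1)
    xf[0::2] = x                          # keep the existing points
    xf[1::2] = 0.5 * (x[:-1] + x[1:])     # insert midpoints
    return xf, np.interp(xf, x, tau)
```

Coarsening is a pure strided copy, which is why it costs essentially nothing in floating point operations.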

One final detail must be supplied to produce a working algorithm. Since the traveltime field is nonsmooth at the source point, the truncation error analysis on which the adaptive step selection criterion is based is not valid there. So it is necessary to produce a smooth initial traveltime field. We do this by estimating the largest $z_{\rm init} > 0$ at which the constant-velocity traveltime is in error by less than $\sigma_{2}(\epsilon)$. Details of the $z_{\rm init}$ calculation are given in Belfi and Symes (1998). Having initialized $\tau$ at $z_{\rm init}$, the algorithm invokes adaptive gridding. Since $z_{\rm init}$ is quite small, $\tau$ changes rapidly, resulting in a large number of grid refinements at the outset. However, no interpolation is performed, as $\tau$ is given analytically on $z=z_{\rm init}$. This initial very fine grid is rapidly coarsened as the depth stepping proceeds.
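The analytic initialization on the line $z = z_{\rm init}$ is just the constant-velocity point-source traveltime (a sketch; `v0`, `x_src`, and the function name are our illustrative parameters, and the actual selection of $z_{\rm init}$ follows Belfi and Symes (1998)):

```python
import numpy as np

def initial_traveltime(x, z_init, v0, x_src=0.0):
    """Constant-velocity traveltime from a point source at (x_src, 0),
    evaluated on the depth line z = z_init: tau = r / v0, with r the
    distance from the source. v0 is the (assumed constant) velocity
    near the source."""
    return np.sqrt((x - x_src) ** 2 + z_init ** 2) / v0
```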

In our current implementation, we maintain a data structure for the computational grid that is independent of the output grid; the desired quantities are calculated on the computational grid and interpolated back to the output grid.
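The separation of the two grids amounts to one interpolation at output time (a minimal sketch; the grids, values, and the use of linear interpolation are our stand-ins, not the implementation's actual rule):

```python
import numpy as np

# The computational grid adapts with depth and is generally nonuniform;
# the output grid is fixed. Quantities computed on the former are
# interpolated back to the latter.
x_comp = np.array([0.0, 0.5, 1.0, 2.0])   # adapted computational grid
tau_comp = x_comp ** 2                     # stand-in for computed traveltimes
x_out = np.linspace(0.0, 2.0, 5)           # fixed output grid
tau_out = np.interp(x_out, x_comp, tau_comp)
```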

A simplified algorithm framework is as follows:
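One way to sketch such a framework is the driver below. It is our illustration under stated assumptions, not the paper's implementation: in place of the 2nd and 3rd order eikonal solvers of equations (5) and (6) we substitute pointwise RK2 (Heun) and RK3 steps for a toy depth-ODE, and coarsening is applied after an accepted step rather than through the recompute-and-retry loop of the text, a simplification that avoids cycling in this toy setting.

```python
import numpy as np

def adaptive_depth_stepping(tau, x, dz, z_end, step2, step3,
                            sigma1, sigma2, dz_min=1e-6):
    """Framework sketch: estimate the 2nd order local error as the
    difference of the 2nd and 3rd order steps; halve dz when it exceeds
    sigma2, double dz (and coarsen the x-grid) when it falls below
    sigma1; always retain the 3rd order update of tau."""
    z = 0.0
    while z < z_end - 1e-12:
        dz = min(dz, z_end - z)
        t2 = step2(tau, x, z, dz)
        t3 = step3(tau, x, z, dz)
        e2 = np.max(np.abs(t2 - t3))
        if e2 > sigma2 and dz / 2.0 >= dz_min:
            dz /= 2.0                       # reject: retry with half the step
            continue
        tau, z = t3, z + dz                 # accept: keep the 3rd order step
        if e2 < sigma1:
            dz *= 2.0                       # coarsen the next depth step ...
            if x.size >= 5 and x.size % 2 == 1:
                x, tau = x[::2], tau[::2]   # ... and drop every other x point
    return x, tau

# Toy stand-ins for the two solvers: RK2 (Heun) and RK3 applied
# pointwise to d(tau)/dz = 1/(1+z), exact solution tau = log(1+z).
def f(z, tau):
    return 1.0 / (1.0 + z) + 0.0 * tau

def step2(tau, x, z, dz):                   # Heun's method, 2nd order
    k1 = f(z, tau)
    k2 = f(z + dz, tau + dz * k1)
    return tau + 0.5 * dz * (k1 + k2)

def step3(tau, x, z, dz):                   # Kutta's 3rd order method
    k1 = f(z, tau)
    k2 = f(z + 0.5 * dz, tau + 0.5 * dz * k1)
    k3 = f(z + dz, tau + dz * (2.0 * k2 - k1))
    return tau + dz * (k1 + 4.0 * k2 + k3) / 6.0

x0 = np.linspace(-1.0, 1.0, 9)
xg, taug = adaptive_depth_stepping(np.zeros_like(x0), x0, 0.05, 1.0,
                                   step2, step3, 1e-6, 1e-5)
```

Because the toy right-hand side smooths as `z` grows, the run exhibits the behaviour described above: an early refinement of the depth step, then coarsenings of both the depth step and the x-grid.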

Stanford Exploration Project