Next: DOWNWARD CONTINUATION IN THREE Up: Nichols: Predictor-corrector 3-D migration Previous: Nichols: Predictor-corrector 3-D migration

Introduction

Depth extrapolation of the one-way wave equation in three dimensions can be performed with a finite-difference method in the $(\omega,x,y,z)$ domain (Claerbout, 1985). The implicit formulation of this problem produces a pentadiagonal system of equations, and solving a pentadiagonal system directly is expensive. For an $n\times n$ grid of data, the simplest direct methods cost $O(n^4)$ operations, while even methods that are efficient on vector or parallel machines cost $O(n^2\log n)$; both are impractical for seismic applications. A common approximation, called ``splitting,'' replaces the pentadiagonal system with $2n$ tridiagonal systems. There are several $O(n)$ algorithms for solving tridiagonal systems that can be implemented efficiently on many different computer architectures. Unfortunately, for practical depth-step sizes, the splitting approximation produces anisotropic wave propagation: waves traveling along the axes of the grid are faster than those at $45^\circ$ to the grid.
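The $O(n)$ tridiagonal solves that make splitting attractive can be illustrated with the Thomas algorithm. This is a generic sketch, not the migration code itself; the array conventions here are my own:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n) by the Thomas algorithm.

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination: reduce to an upper bidiagonal system.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each of the $2n$ systems produced by splitting can be dispatched to such a solver independently, which is why the approximation parallelizes so well.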

Several methods have been proposed to overcome these problems. Hale (1990) uses a stable explicit scheme and a McClellan transform to downward continue the wavefield. However, this method requires long explicit operators to ensure stability; these operators are expensive to apply and may have a smoothing effect in the presence of very sharp lateral velocity variations. Cole (1989) used iterative methods to solve the pentadiagonal system of equations. He compared the Gauss-Seidel and Jacobi methods and found that they converged poorly at low frequencies and in some cases became unstable.

In this paper I propose using the conjugate-gradient (CG) method (Hestenes & Stiefel, 1952) to overcome the stability problems. However, the CG method suffers from the same drawback as the other iterative methods: convergence is slow at low frequencies. Because running the conjugate-gradient method to completion would be prohibitively expensive, I modify the problem so that it converges in a few iterations. This can be done in two ways: by preconditioning, or by starting the algorithm with an estimate that is close to the correct solution. The second approach is easy to implement: the initial estimate is produced by a cheap explicit algorithm. The potential instability of the explicit method is not a problem because it is used only to generate an initial estimate; the solution at each depth step is obtained by solving the implicit equations, which are known to be stable.
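The warm-starting idea can be shown in isolation with a minimal matrix-free CG loop. This is a generic sketch, not the migration program; `apply_A` stands in for whatever symmetric positive-definite operator defines the implicit system:

```python
import numpy as np

def cg(apply_A, b, x0, tol=1e-8, maxiter=500):
    """Conjugate gradients for a symmetric positive-definite operator,
    given only the function apply_A(x).  Returns (x, iteration count)."""
    x = x0.astype(float).copy()
    r = b - apply_A(x)              # initial residual
    p = r.copy()
    rs = r @ r
    for k in range(maxiter):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            return x, k             # converged
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)       # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p   # next conjugate search direction
        rs = rs_new
    return x, maxiter
```

A good predictor shrinks the initial residual $b - Ax_0$, so fewer iterations are needed before the convergence test is satisfied.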

The CG algorithm requires repeated application of the Laplacian operator, which can be implemented efficiently on mesh-connected parallel computers. My program is written purely in terms of Laplacian operators and fully parallel multiplications and additions, so it will continue to run efficiently as the problem is scaled to larger sizes.
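The Laplacian applications themselves are simple data-parallel stencil sweeps. A minimal sketch for a 2-D grid, assuming zero values outside the grid (the paper's operator and boundary treatment may differ):

```python
import numpy as np

def apply_laplacian(u):
    """Five-point discrete Laplacian of a 2-D grid, with zero values
    assumed outside the grid.  Each output point depends only on its
    four nearest neighbors, so the sweep maps naturally onto
    mesh-connected parallel hardware."""
    v = -4.0 * u
    v[1:, :] += u[:-1, :]   # neighbor above
    v[:-1, :] += u[1:, :]   # neighbor below
    v[:, 1:] += u[:, :-1]   # neighbor to the left
    v[:, :-1] += u[:, 1:]   # neighbor to the right
    return v
```

Because the operator is expressed as whole-array shifts and additions rather than an assembled matrix, its cost and memory traffic scale linearly with the number of grid points.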


Stanford Exploration Project
12/18/1997