
Introduction

Claerbout (1985) showed that the two-dimensional Laplacian operator, which appears in 3-D finite-difference migration, has the form shown in Table 1.
 
Table 1: The matrix of coefficients for the two-dimensional Laplacian operator, shown for a 4 x 4 grid (dots denote zero entries). Taken from Claerbout (1985).
-4  1  .  .  1  .  .  .  .  .  .  .  .  .  .  .
 1 -4  1  .  .  1  .  .  .  .  .  .  .  .  .  .
 .  1 -4  1  .  .  1  .  .  .  .  .  .  .  .  .
 .  .  1 -4  .  .  .  1  .  .  .  .  .  .  .  .
 1  .  .  . -4  1  .  .  1  .  .  .  .  .  .  .
 .  1  .  .  1 -4  1  .  .  1  .  .  .  .  .  .
 .  .  1  .  .  1 -4  1  .  .  1  .  .  .  .  .
 .  .  .  1  .  .  1 -4  .  .  .  1  .  .  .  .
 .  .  .  .  1  .  .  . -4  1  .  .  1  .  .  .
 .  .  .  .  .  1  .  .  1 -4  1  .  .  1  .  .
 .  .  .  .  .  .  1  .  .  1 -4  1  .  .  1  .
 .  .  .  .  .  .  .  1  .  .  1 -4  .  .  .  1
 .  .  .  .  .  .  .  .  1  .  .  . -4  1  .  .
 .  .  .  .  .  .  .  .  .  1  .  .  1 -4  1  .
 .  .  .  .  .  .  .  .  .  .  1  .  .  1 -4  1
 .  .  .  .  .  .  .  .  .  .  .  1  .  .  1 -4
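
For readers who want to reproduce the structure of Table 1 numerically, the following short sketch (mine, not Claerbout's) assembles the same 16 x 16 matrix from 1-D second-difference operators using Kronecker products; the grid size n = 4 matches the table.

    # A minimal sketch (my own illustration) that builds the matrix in Table 1:
    # the 5-point Laplacian on a 4 x 4 grid, assembled with Kronecker products.
    import numpy as np

    n = 4                                                     # grid points per side, as in Table 1
    I1 = np.eye(n)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D second difference
    L = np.kron(I1, T) + np.kron(T, I1)                       # 16 x 16 two-dimensional Laplacian

    print(L[0])   # first row: -4, then 1 in columns 2 and 5, zeros elsewhere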

The addition of the crossline (y) direction adds two diagonals to what, in 2-D migration, is a tridiagonal matrix. This addition presents some practical problems. Direct methods for solving this pentadiagonal system of equations have costs proportional to the square of the model size n (n = 4 in Table 1), compared with solution methods of order n for the tridiagonal case. A common compromise is splitting, also described by Claerbout, in which the pentadiagonal system is split into two tridiagonal systems that are solved successively. Splitting's drawback is that it yields effective migration operators that are anisotropic; the response is particularly poor for energy propagating at angles far from either the inline or crossline direction.
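
A minimal sketch of that splitting compromise follows (my own illustration, not Claerbout's code; the scalar coefficient a is a hypothetical stand-in for the depth-step term of the migration operator). The pentadiagonal operator I + a(Txx + Tyy) is replaced by the product of two tridiagonal factors; the dense solves below are only for clarity, since each factor would in practice be inverted with order-n tridiagonal sweeps.

    # Splitting sketch (illustration only; coefficient a is hypothetical).
    import numpy as np

    n, a = 4, 0.3
    I1 = np.eye(n)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D second difference
    Txx, Tyy = np.kron(I1, T), np.kron(T, I1)                 # inline and crossline parts

    rhs = np.random.rand(n * n)

    # Full pentadiagonal solve (what we would like to do).
    exact = np.linalg.solve(np.eye(n * n) + a * (Txx + Tyy), rhs)

    # Splitting: two successive solves; each factor is tridiagonal in the
    # appropriate grid ordering, so each costs order n per grid line.
    tmp   = np.linalg.solve(np.eye(n * n) + a * Txx, rhs)
    split = np.linalg.solve(np.eye(n * n) + a * Tyy, tmp)

    print(np.max(np.abs(split - exact)))   # error from the neglected a**2 cross term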

In SEP-60 (Cole, 1989), I examined the use of iterative schemes such as Jacobi and Gauss-Seidel to solve the pentadiagonal system. I found the methods to be unconditionally stable for the 15-degree equation, but the 45-degree scheme was unstable at low frequencies (and, where it was stable, very slow to converge at those low frequencies).
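
To make the discussion concrete, here is a generic Jacobi sweep (my own sketch, not the SEP-60 implementation), applied to a diagonally dominant stand-in system so that it converges.

    # Generic Jacobi iteration x <- D^{-1} (b - (A - D) x); illustration only.
    import numpy as np

    def jacobi(A, b, niter=200):
        D = np.diag(A)                    # diagonal entries of A
        R = A - np.diag(D)                # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(niter):
            x = (b - R @ x) / D
        return x

    n = 4
    I1 = np.eye(n)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L = np.kron(I1, T) + np.kron(T, I1)
    A = np.eye(n * n) - 0.1 * L           # strictly diagonally dominant stand-in
    b = np.random.rand(n * n)

    print(np.max(np.abs(jacobi(A, b) - np.linalg.solve(A, b))))   # near round-off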

When Dave Nichols (Nichols, 1991) began working on his predictor-corrector method using conjugate gradients, I decided that the iterative schemes I had studied deserved another look. Like conjugate gradients, these schemes apply the operator by convolution in 2-D, which performs very well on the Connection Machine. It would be interesting to see how the two methods compared.
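
As an aside on the convolution remark above, the pentadiagonal matrix-vector product is simply a 5-point stencil applied to the model stored as a 2-D grid; the check below is my own illustration (SciPy's convolve2d and zero boundaries are my assumptions).

    # Matrix-vector product versus 2-D convolution with the 5-point stencil.
    import numpy as np
    from scipy.signal import convolve2d

    n = 4
    I1 = np.eye(n)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L = np.kron(I1, T) + np.kron(T, I1)          # pentadiagonal matrix of Table 1

    x = np.random.rand(n, n)                     # model as a 2-D grid
    stencil = np.array([[0.,  1., 0.],
                        [1., -4., 1.],
                        [0.,  1., 0.]])

    as_matrix = (L @ x.ravel()).reshape(n, n)
    as_conv   = convolve2d(x, stencil, mode='same')   # zero-padded boundaries
    print(np.max(np.abs(as_matrix - as_conv)))        # agrees to round-off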

I sought a way to stabilize the 45-degree Jacobi iterative scheme. The problem is that the matrix is not sufficiently diagonally dominant. My hope was that a scheme that attacks the off-diagonal elements would make the matrix well-conditioned enough for the Jacobi iteration to be stable. Givens rotations are a tool for accomplishing exactly such an elimination of off-diagonal elements.
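
For reference, here is a single Givens rotation in its standard textbook form (a sketch of the tool itself, not the specific scheme developed in the next section): rows i and k are rotated so that one chosen element of row k becomes zero.

    # Standard Givens rotation applied to two rows of a matrix (illustration only).
    # When A comes from a linear system, the same rotation must also be applied
    # to the right-hand side.
    import numpy as np

    def givens_zero(A, i, k, j):
        """Rotate rows i and k of A in place so that A[k, j] becomes zero."""
        a, b = A[i, j], A[k, j]
        r = np.hypot(a, b)                # assumes a and b are not both zero
        c, s = a / r, b / r
        Ai, Ak = A[i, :].copy(), A[k, :].copy()
        A[i, :] =  c * Ai + s * Ak
        A[k, :] = -s * Ai + c * Ak
        return A

    A = np.array([[-4.,  1.,  1.],
                  [ 1., -4.,  1.],
                  [ 1.,  1., -4.]])
    print(givens_zero(A, 0, 2, 0))        # entry (2, 0) is rotated to zero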

As a side benefit, such an improvement in the diagonal dominance of the matrix would accelerate convergence both for the Jacobi iterative scheme and conjugate gradients. In both these schemes, the convergence rate depends on the largest eigenvalue of the matrix, which is related to the diagonal dominance. This implies that Givens rotations might be a useful preconditioner for Jacobi iterations and conjugate gradients.
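
One way to make that connection concrete for the Jacobi scheme (a sketch with my own stand-in matrices, not a result from this paper): the iteration converges when the spectral radius of D^{-1}(A - D) is below one, and a smaller radius, corresponding to stronger diagonal dominance, means faster convergence.

    # Spectral radius of the Jacobi iteration matrix D^{-1}(A - D); illustration only.
    import numpy as np

    def jacobi_spectral_radius(A):
        D = np.diag(np.diag(A))
        M = np.linalg.solve(D, A - D)      # Jacobi iteration matrix (up to sign)
        return np.max(np.abs(np.linalg.eigvals(M)))

    n = 4
    I1 = np.eye(n)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L = np.kron(I1, T) + np.kron(T, I1)

    print(jacobi_spectral_radius(np.eye(n * n) - 0.10 * L))   # ~0.23: converges quickly
    print(jacobi_spectral_radius(np.eye(n * n) - 0.45 * L))   # ~0.52: converges more slowly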

