The approach I take here begins in the same vein as Holberg; I set up an optimization problem to find
spatial difference coefficients that minimize the dispersion of the resulting finite-difference scheme.
First we need an expression for the numerical "velocity" of the finite-difference scheme as a function of frequency and propagation direction,
expressed in terms of the finite-difference coefficients.
There are two commonly used (and ultimately equivalent) approaches to accomplish this.
The first approach is to substitute the expression for a trial plane-wave
solution into the difference expression.
The second approach, which I follow here, is to take a few liberties with Fourier transforms and compute the
multi-dimensional Fourier transform of the difference scheme directly.
To illustrate this, let's take the 2D acoustic wave equation discretized with second-order finite-differences.
The discretized equation is
\begin{displaymath}
U^{t+\Delta t}-2U^{t}+U^{t-\Delta t}=
\frac{v^2 \Delta t^2}{\Delta x^2}\left[U_{x+\Delta x}-2U_{x}+U_{x-\Delta x}\right]
+\frac{v^2 \Delta t^2}{\Delta z^2}\left[U_{z+\Delta z}-2U_{z}+U_{z-\Delta z}\right],
\end{displaymath}
where only the argument of $U$ that varies is written in each term.
Recalling how to transform a series of impulses, write the Fourier transform of the difference equation as
\begin{displaymath}
2\cos{(\omega \Delta t)}-2=
\frac{v^2 \Delta t^2}{\Delta x^2}\left[2\cos{(k_x \Delta x)}-2\right]
+\frac{v^2 \Delta t^2}{\Delta z^2}\left[2\cos{(k_z \Delta z)}-2\right].
\end{displaymath}
The numerical phase velocity is given simply by the ratio $\omega/\vert{\bf k}\vert$; solving the transformed equation for $\omega$ gives
\begin{displaymath}
V_{phase}(v,\Delta t,\Delta x,\Delta z,k_x,k_z)=
\frac{\arccos\left[\frac{v^2 \Delta t^2}{2}
\left(\frac{2\cos{(k_x \Delta x)}-2}{\Delta x^2}
+\frac{2\cos{(k_z \Delta z)}-2}{\Delta z^2}\right)+1\right]}
{\Delta t \vert{\bf k}\vert}
\end{displaymath}
(1)
where I have left the dependence on all possible variables in the expression.
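To make equation 1 concrete, here is a minimal sketch (an assumed NumPy implementation, not the paper's code) that evaluates the numerical phase velocity of the second-order scheme; the velocity, grid spacings, and time step below are illustrative assumptions.
\begin{verbatim}
import numpy as np

def vphase_second_order(v, dt, dx, dz, kx, kz):
    """Equation (1): numerical phase velocity of the 2nd-order scheme."""
    arg = 1.0 + 0.5 * v**2 * dt**2 * (
        (2.0 * np.cos(kx * dx) - 2.0) / dx**2
        + (2.0 * np.cos(kz * dz) - 2.0) / dz**2)
    omega = np.arccos(arg) / dt        # numerical frequency at this k
    return omega / np.hypot(kx, kz)    # V_phase = omega / |k|

# Dispersion along the x-axis, up to the Nyquist wavenumber pi/dx:
v, dt, dx, dz = 2000.0, 0.001, 10.0, 10.0
kx = np.linspace(0.01, np.pi / dx, 200)
print(vphase_second_order(v, dt, dx, dz, kx, 0.0))
\end{verbatim}
Plotting the result against $k_x$ shows the familiar slowdown of the second-order scheme as the wavenumber approaches Nyquist.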
At this point, I make my first deviation from the method that Holberg used.
Holberg used group velocity in his expressions and ultimately matched the
numerical group velocity of the difference scheme to the desired velocity.
In the method here, I use phase velocity.
One of the nice things about using phase velocity is that it is straightforward to make simulated "far field signatures" of your
finite-difference method without going to the trouble
of coding the actual finite-difference algorithm, as the sketch below illustrates.
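A dispersed far-field signature can be simulated by delaying each frequency of a source wavelet by the travel distance divided by its numerical phase velocity. The minimal sketch below assumes the vphase_second_order() function from the previous sketch; the 25 Hz Ricker wavelet, the 1000 m travel distance, and the grid parameters are illustrative assumptions, and geometric spreading is ignored.
\begin{verbatim}
import numpy as np

def ricker(t, f0):
    a = (np.pi * f0 * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

v, dt, dx, dz, dist = 2000.0, 0.001, 10.0, 10.0, 1000.0
t = np.arange(1024) * dt
sig = ricker(t - 0.1, 25.0)                 # 25 Hz Ricker source wavelet

S = np.fft.rfft(sig)
w = 2.0 * np.pi * np.fft.rfftfreq(sig.size, dt)  # angular frequencies
k = w / v                                   # wavenumbers along the x-axis
vp = np.full_like(w, v)
vp[1:] = vphase_second_order(v, dt, dx, dz, k[1:], 0.0)

# Delay each frequency by dist/V_phase rather than dist/v; comparing
# 'farfield' with an undispersed copy reveals the scheme's dispersion.
# (Frequencies beyond the grid Nyquist carry negligible Ricker energy here.)
farfield = np.fft.irfft(S * np.exp(-1j * w * dist / vp), n=sig.size)
\end{verbatim}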
One possible criticism of using phase velocities is that they tend to show less variability
than group velocities; that is, you might think that phase velocities understate the dispersion. Since phase velocities are computed from
the ratio of $\omega$ and $\vert{\bf k}\vert$, phase velocities are more sensitive to the lower wavenumbers (and thus the lower frequencies)
than group velocities, which are computed from the slope of $\omega$ as a function of $\vert{\bf k}\vert$.
I'll leave it as an exercise for the reader to show whether weighting functions would make the two approaches equivalent.
The second and more important difference from Holberg's derivation is that I do not consider the error of the time
discretization to be negligible.
Near the beginning of his paper,
Holberg assumes that computational time step sizes will be small enough not to contribute to the error of the scheme.
For long (hence hopefully accurate) spatial difference operators, the error from time discretization is often the largest
error at moderate frequencies/wavenumbers.
Better still, the error from the time discretization is usually opposite in sense from that of the space discretization, so the two can be
balanced against each other for improved accuracy.
Making time steps coarser tends to lead to phase velocities that are too fast while
inaccuracy in the spatial difference operator leads to phase velocities that are too slow.
That fact leads to the curious situation in 1D where the two sources of error
can be balanced exactly, as the following worked example shows.
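The Fourier-transformed 1D second-order scheme is
\begin{displaymath}
2\cos{(\omega \Delta t)}-2=\frac{v^2 \Delta t^2}{\Delta x^2}\left[2\cos{(k \Delta x)}-2\right],
\end{displaymath}
and choosing $\Delta t = \Delta x / v$ (a Courant number of one) makes the prefactor unity, so $\cos{(\omega \Delta t)}=\cos{(k \Delta x)}$ and $\omega = v k$ at every wavenumber; the time and space errors cancel exactly.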
Although there's no way to balance the errors exactly in 2D or 3D,
we should use an error expression that captures the contributions from both the time and space discretizations and
balances them against each other to the best possible effect.
I begin by creating an expression for the sum over the wavenumber
domain of the squared differences between the numerical phase velocity of
the difference scheme and the desired phase velocity;
\begin{displaymath}
{Err}=\sum_{\bf k}{ W({\bf k}) \left[ V_{phase}({\bf k})-V_{true}\right]^2}.
\end{displaymath}
(2)
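As a minimal sketch (assumed NumPy code, not the paper's), equation 2 can be evaluated by summing over a grid of wavenumbers; the uniform weight $W({\bf k})=1$ and the sampled band are assumptions, and the example reuses vphase_second_order() from the earlier sketch.
\begin{verbatim}
import numpy as np

def sum_squared_error(vphase_fn, v_true, kx, kz, W=None):
    """Equation (2): weighted sum of squared phase-velocity errors."""
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    W = np.ones_like(KX) if W is None else W
    return np.sum(W * (vphase_fn(KX, KZ) - v_true)**2)

# Example over a band of wavenumbers (avoiding k = 0, where the phase
# velocity expression is 0/0):
kx = np.linspace(0.01, 0.8 * np.pi / 10.0, 50)
err = sum_squared_error(
    lambda KX, KZ: vphase_second_order(2000.0, 0.001, 10.0, 10.0, KX, KZ),
    2000.0, kx, kx)
\end{verbatim}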
Then we need the expression for $V_{phase}$ as a function of the coefficients we wish to find.
I express the difference operator
for a second spatial derivative with respect to x as:
\begin{displaymath}
\frac{\partial^2 U}{\partial x^2} \approx
\frac{1}{\Delta x^2}\sum_{i=1}^n{a_i}\left[ U(x+i\Delta x)-2U(x)+U(x-i\Delta x)\right].
\end{displaymath}
(3)
You'll notice that the difference operator, chosen this way, is essentially a series of weighted and gapped standard
second difference operators. I did this to preserve the correct behavior of the approximate second derivative at zero wavenumber.
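As an illustration of equation 3, here is a minimal sketch applying the gapped operator along one axis; the conventional fourth-order Taylor coefficients $a_1=4/3$, $a_2=-1/12$ are used only as a familiar example, not the optimized values.
\begin{verbatim}
import numpy as np

def d2dx2(U, a, dx):
    """Equation (3): gapped second-difference operator (periodic edges)."""
    out = np.zeros_like(U)
    for i, ai in enumerate(a, start=1):
        out += ai * (np.roll(U, -i) - 2.0 * U + np.roll(U, i))
    return out / dx**2

# Check against the exact second derivative of a smooth sine wave:
dx = 10.0
x = np.arange(256) * dx
k = 2.0 * np.pi / (64.0 * dx)             # 64 grid points per wavelength
U = np.sin(k * x)
approx = d2dx2(U, [4.0 / 3.0, -1.0 / 12.0], dx)
print(np.max(np.abs(approx + k**2 * U)))  # small: operator is accurate here
\end{verbatim}
Note that with a single coefficient $a_1=1$ the operator reduces to the standard second difference, and because each gapped term annihilates constants, the approximation is exact at zero wavenumber, which is the behavior the construction above preserves.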
Using the corresponding expressions for the differences in z (and in 3D y), Fourier transforming them,
and using the same Fourier transform of the second time difference that appears in the derivation of equation 1, write:
\begin{displaymath}
2\cos{(\omega \Delta t)}-2=\frac{v^2 \Delta t^2}{\Delta x^2} \sum_{i=1}^n{a_i \left[2\cos{(i k_x \Delta x)}-2\right]}
+\frac{v^2 \Delta t^2}{\Delta z^2} \sum_{i=1}^n{b_i \left[2\cos{(i k_z \Delta z)}-2\right]}\ \ .
\end{displaymath}
(4)
To compress the notation a bit, introduce:
\begin{displaymath}
F_x({\bf a})=\sum_{i=1}^n{a_i \left[2\cos{(i k_x \Delta x)}-2\right]},\qquad
F_z({\bf b})=\sum_{i=1}^n{b_i \left[2\cos{(i k_z \Delta z)}-2\right]}.
\end{displaymath}
Finally I arrive at a reasonably compact expression for $V_{phase}$ as a function of the coefficients I seek
and other parameters such as velocity, time step size
and grid spacing.
\begin{displaymath}
V_{phase}({\bf a},{\bf b})=
\frac{\arccos \left[ \frac{v^2 \Delta t^2}{2}\left(\frac{F_x({\bf a})}{\Delta x^2} + \frac{F_z({\bf b})}{\Delta z^2}\right) +1 \right]}
{\Delta t \vert{\bf k}\vert}.
\end{displaymath}
(5)
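A minimal sketch of equation 5 follows, assuming the coefficient vectors are NumPy arrays; the np.clip call is my own guard on the arccos argument for coefficient choices that would make the scheme unstable.
\begin{verbatim}
import numpy as np

def F(c, k, d):
    """F_x or F_z: sum_i c_i [2 cos(i k d) - 2]."""
    i = np.arange(1, len(c) + 1)
    return np.tensordot(
        c, 2.0 * np.cos(np.multiply.outer(i, k * d)) - 2.0, axes=1)

def vphase(a, b, v, dt, dx, dz, kx, kz):
    """Equation (5): numerical phase velocity of the optimized scheme."""
    arg = 1.0 + 0.5 * v**2 * dt**2 * (
        F(a, kx, dx) / dx**2 + F(b, kz, dz) / dz**2)
    return np.arccos(np.clip(arg, -1.0, 1.0)) / (dt * np.hypot(kx, kz))
\end{verbatim}
With ${\bf a}={\bf b}=[1]$ this reduces to the second-order expression of equation 1.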
To find the coefficient vectors ${\bf a}$ and ${\bf b}$, differentiate the sum squared error (equation 2) with respect
to each $a_i$ and $b_i$ and set each of those equations equal to zero.
Of course if $\Delta x$ and $\Delta z$ are the same,
we can use the same coefficients for each,
but for now I'll keep the expressions general because we want the method to work when $\Delta x \neq \Delta z$.
This is a system of non-linear equations in the unknowns $a_i$ and $b_i$. To solve them, I use a Gauss-Newton iterative method, which leads to
the following system of linear equations (notation compressed a bit further): collecting the coefficients into a single vector ${\bf c}=({\bf a},{\bf b})$, each outer iteration solves
\begin{displaymath}
\left[{\bf J}^T{\bf W}{\bf J}\right]\Delta{\bf c}=-{\bf J}^T{\bf W}{\bf r},
\qquad
J_{{\bf k}i}=\frac{\partial V_{phase}({\bf k})}{\partial c_i},\quad
r_{\bf k}=V_{phase}({\bf k})-V_{true},
\end{displaymath}
and updates ${\bf c}\leftarrow{\bf c}+\Delta{\bf c}$.
I solve these equations using the conjugate gradient method, and perform outer iterations until the gradient of the sum squared error no longer decreases, which usually takes 3 or 4 iterations.
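Here is a minimal sketch of the whole loop (not the paper's code): Gauss-Newton outer iterations with an inner conjugate-gradient solve, assuming the vphase() function from the previous sketch. The Jacobian is formed by finite differences for brevity (the derivation above differentiates analytically), and the operator length, design band of wavenumbers, uniform weights, and small ridge term are all assumptions.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import cg

v, dt, dx, dz, n = 2000.0, 0.001, 10.0, 10.0, 4  # illustrative parameters

# Design band: a fan of propagation directions up to 80% of Nyquist.
kmag = np.linspace(0.02, 0.8 * np.pi / dx, 30)
theta = np.linspace(0.0, np.pi / 2.0, 9)
KX = np.outer(kmag, np.cos(theta)).ravel()
KZ = np.outer(kmag, np.sin(theta)).ravel()

def residual(c):                      # V_phase - V_true over the band
    return vphase(c[:n], c[n:], v, dt, dx, dz, KX, KZ) - v

def jacobian(c, eps=1e-7):            # finite-difference Jacobian, for brevity
    r0, J = residual(c), np.empty((KX.size, c.size))
    for j in range(c.size):
        cp = c.copy(); cp[j] += eps
        J[:, j] = (residual(cp) - r0) / eps
    return J

c = np.zeros(2 * n); c[0] = c[n] = 1.0  # start from the second-order scheme
for _ in range(4):                      # outer Gauss-Newton iterations
    r, J = residual(c), jacobian(c)
    A = J.T @ J + 1e-9 * np.eye(c.size) # tiny ridge for numerical safety
    step, _ = cg(A, -J.T @ r)           # inner conjugate-gradient solve
    c += step
a_opt, b_opt = c[:n], c[n:]
\end{verbatim}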