The basic method for solving differential equations in a computer is
*finite differencing*.
The nicest feature of the method is that it allows analysis of
objects of almost any shape, such as earth topography or geological structure.
Ordinarily, finite differencing is a straightforward task.
The main pitfall is instability.
It often happens that a seemingly reasonable approach
to a reasonable physical problem
leads to wildly oscillatory, divergent calculations.
Luckily,
there is a fairly small body of important
and easily learned tricks
that solve many stability problems,
and we will be covering them here.
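As a small illustration of the pitfall, consider the explicit scheme for the heat-flow equation (treated later in this chapter). Whether a seemingly reasonable calculation decays smoothly or diverges wildly can hinge on a single dimensionless parameter, here r = Δt/Δx²; the classical stability bound is r ≤ 1/2. The function names below are illustrative, not from the text:

```python
import numpy as np

def explicit_heat_step(u, r):
    """One explicit finite-difference step of u_t = u_xx.
    r = dt/dx**2; the scheme is stable only for r <= 1/2."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un  # endpoints held fixed at zero

def run(r, nsteps=200, nx=51):
    """March a unit spike nsteps forward; return the final peak amplitude."""
    u = np.zeros(nx)
    u[nx // 2] = 1.0
    for _ in range(nsteps):
        u = explicit_heat_step(u, r)
    return float(np.abs(u).max())

# r = 0.4 (stable): the spike diffuses and decays.
# r = 0.6 (unstable): short wavelengths oscillate and grow without bound.
print(run(0.4))
print(run(0.6))
```

Both runs look equally "reasonable" on paper; only the stability analysis distinguishes them.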

Of secondary concern are the matters of cost and accuracy. These must be considered together since improved accuracy can be achieved simply by paying the higher price of a more refined computational mesh. Although the methods of the next several pages have not been chosen for their accuracy or efficiency, it turns out that in these areas they are excellent. Indeed, to my knowledge, some cannot be improved on at all, while others can be improved on only in small ways. By "small" I mean an improvement in efficiency of a factor of five or less. Such an improvement is rarely of consequence in research or experimental work; however, its importance in matters of production will justify pursuit of the literature far beyond the succeeding pages.

- The lens equation
- First derivatives, explicit method
- First derivatives, implicit method
- The explicit heat-flow equation
- The leapfrog method
- The Crank-Nicolson method
- Solving tridiagonal simultaneous equations
- The xxz derivative
- Difficulty in higher dimensions

10/31/1997