Medical tomography avoids a problem that is unavoidable in earth-science tomography. In medicine it is not difficult to surround the target with senders and receivers. In earth science it is nearly impossible. It is well known that our reconstructions tend to be indeterminate along the dominant ray direction. Customarily the indeterminacy is resolved by minimizing power in a roughened image. The roughening filter should be inverse in spectrum to the desired image spectrum. Unfortunately that spectrum is unknown and arbitrary. Perhaps we can replace this arbitrary image smoothing by something more rational in the space of the missing data.
Recall the well-to-well tomography problem. Given a sender at depth zs in one well, a receiver at depth zg in the other well, and given travel times tk(zs,zg), the rays are predominantly horizontal. Theory says we need some rays around the vertical. Imagine the two vertical axes of the wells being supplemented by two horizontal axes, one connecting the tops of the wells and one connecting the bottoms, with missing-data traveltimes tm(xs,xg). From any earth model, both tk and tm are predicted. But what principles can give us tm from tk? Obviously something like what we used in Figures 1-5. But data for the tomographic problem is two-dimensional: Let the source location be measured as the distance along the perimeter of a box, where two sides of the box are the two wells. Likewise receivers may be placed along the perimeter. Analogous to the midpoint and offset axes of surface seismology (see my other book ``Imaging the Earth's Interior''), we have midpoint and offset along the perimeter. Obviously there are discontinuities at the corners of the box, and everything is not as nice as in medical imaging with a circle, where source and receiver locations are measured by angles from an origin at the center. Anyway, the box gives us a plane in which to lay out the data--not just the recorded data, but all the data that we think is required to represent the image.
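The perimeter coordinate described above can be made concrete with a small sketch. The well depth, well separation, and function names here are hypothetical illustrations, not from the original text; the sketch only shows how a (side, position) pair maps to a single perimeter distance, from which midpoint and offset follow.

```python
# Hypothetical geometry: two wells of depth D, separated by W,
# forming a box whose perimeter we walk clockwise from the top of well 1.
def perimeter_coord(side, s, D=1000.0, W=300.0):
    """Distance along the box perimeter from the top of well 1.

    side: 'well1' (down the first well), 'bottom' (across the bottom),
          'well2' (up the second well), or 'top' (back across the top);
    s:    position along that side, measured in the walking direction.
    """
    if side == 'well1':
        return s                  # depth down well 1
    if side == 'bottom':
        return D + s              # horizontal distance along the bottom axis
    if side == 'well2':
        return D + W + s          # height up well 2
    if side == 'top':
        return D + W + D + s      # horizontal distance along the top axis
    raise ValueError(side)

# Midpoint and offset along the perimeter, by analogy with surface seismology:
ps = perimeter_coord('well1', 400.0)   # sender 400 m down well 1
pg = perimeter_coord('well2', 250.0)   # receiver 250 m up well 2
midpoint = (ps + pg) / 2
offset = pg - ps
```

With both recorded rays (well-to-well) and missing rays (top or bottom axis) expressed in this one coordinate, the whole dataset occupies a single (midpoint, offset) plane.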
A simple idea to fill in the missing data is to represent the two-dimensional data plane as an array of one-dimensional signals. Then the program that produced Figures 1-5 is directly usable. Of course there is the issue of whether the one-dimensional signals should be laid out along the receiver (perimeter) axis or the sender axis, or the midpoint or the offset axis. Also, if we hope to be able to extrapolate around the corners of the box, we had better eliminate discontinuities in traveltime slope by doing normal moveout--traveltimes should be divided by the distance of the corresponding ray across the box.
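The moveout normalization above amounts to converting each traveltime into an average slowness. A minimal sketch, assuming straight rays and hypothetical coordinates and function names:

```python
import math

def chord_length(xs, zs, xg, zg):
    """Straight-ray distance between sender (xs, zs) and receiver (xg, zg)."""
    return math.hypot(xg - xs, zg - zs)

def normalize_traveltime(t, xs, zs, xg, zg):
    """Divide a traveltime by its ray's length across the box.

    The result is an average slowness, which varies smoothly as the
    sender or receiver moves around a corner of the box, unlike the
    raw traveltime, whose slope jumps there.
    """
    return t / chord_length(xs, zs, xg, zg)

# A 0.12 s arrival across a 300 m well separation, sender at 400 m depth,
# receiver at 250 m depth in the other well:
s = normalize_traveltime(0.12, 0.0, 400.0, 300.0, 250.0)
```

After normalization, the one-dimensional signals can be extrapolated past the corners; multiplying back by ray length restores traveltimes for the missing geometries.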
There are other possible ways to interpolate the data. Some are intrinsically two-dimensional and some may use the tomographic operator. Alas, all this effort cannot produce information where none was recorded. But it should yield an image that is not overwhelmed by anisotropy from the obvious heterogeneity of the data-collection geometry.
With the filling of data space, will it still be necessary to smooth the model explicitly (by minimizing energy in a roughened model)? Mathematically the question is one of ``completeness'' of the data space. I believe there are analytic solutions, well known in medical imaging, that prove a circle of data is enough information to specify the image completely. So we can expect that little or no arbitrary image smoothing is required to resolve the indeterminacy--it should be essentially resolved by the assertion that statistics gathered from the known data are applicable to the missing data. I regret I have not carried this concept beyond developing the hypothetical processing scheme, but chapter 7 does show you many completed field-data examples in filter theory.