

My goal in this paper is to produce multiple, equi-probable velocity models. A straightforward implementation of the approach described above, accounting only for data uncertainty, would lead to the fitting goals,
\bf 0 &\approx& \bf n = \bf N_{n,v} \bf N_{n,c} ( \bf \Delta t - \bf T \bf \Delta s) \nonumber \\
\bf 0 &\approx& \bf r_{m} = \epsilon \bf A ( \bf s_o + \bf \Delta s)
Implementing these fitting goals is problematic because of the space in which $\bf \Delta t$ resides. Unlike the Super Dix problem of Clapp (2003a), our data space is not regular. Normally our $\bf \Delta t$ values lie along a series of reflectors or at a semi-random set of points Clapp (2001a). In either case, constructing $\bf N_{n,c}$ is problematic. When selecting points along reflectors, we are limited to a covariance description along the reflectors, with no easy way to describe continuity between reflectors. If we choose the random-point methodology, we are limited to simply minimizing differences between nearby points, an unsatisfying option.

One solution to this problem is to introduce a mapping operator that maps $\bf \Delta t$ from an irregular space to a regular space. This solution holds promise, but the interaction between $\bf N_{n,v}$ and $\bf N_{n,c}$ becomes confusing.
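For concreteness, such a mapping operator can be as simple as linear interpolation between the irregular $\bf \Delta t$ locations and a regular grid. The sketch below is my own illustration; the function name and the 1-D setting are assumptions, not from the paper:

```python
import numpy as np

def linear_map_operator(x_irreg, x_grid):
    """Build a dense matrix L so that L @ v_grid interpolates values on the
    regular grid x_grid onto the irregular locations x_irreg.
    Hypothetical 1-D helper for illustration only."""
    n, m = len(x_irreg), len(x_grid)
    dx = x_grid[1] - x_grid[0]
    L = np.zeros((n, m))
    for i, x in enumerate(x_irreg):
        j = int(np.clip((x - x_grid[0]) / dx, 0, m - 2))  # left grid cell
        w = (x - x_grid[j]) / dx                          # fractional offset
        L[i, j], L[i, j + 1] = 1.0 - w, w                 # linear weights
    return L
```

The adjoint of such an operator spreads irregular-space residuals back onto the regular grid, which is where the interaction with $\bf N_{n,v}$ and $\bf N_{n,c}$ becomes hard to reason about.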

Another option is to move where the data uncertainty enters the problem. As mentioned earlier, our data are not actually traveltime differences, but $\gamma$ values calculated by doing SRM. Normally we choose the $\gamma$ value corresponding to the maximum semblance at a given location, or some smooth version of the maximum Clapp (2003b). Selecting the $\gamma$ value is really where our data-uncertainty problem lies. The selection problem has some convenient and some not-so-convenient properties. On the positive side, we are working on a regular grid and we know that we want some consistency along reflectors. As a result, a steering filter becomes an obvious choice for our covariance description. On the negative side, the selection problem shares all of the non-linear aspects of the semblance problem Toldi (1985).
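A sketch of the standard selection step might look as follows; the array shapes and the boxcar smoother are my assumptions, not the paper's:

```python
import numpy as np

def pick_gamma(semblance, gammas, smooth_len=5):
    """Pick the gamma of maximum semblance at each surface location,
    then smooth the picks laterally with a simple boxcar.

    semblance : (n_gamma, n_x) array of SRM semblance values (assumed shape)
    gammas    : (n_gamma,) array of scanned gamma values
    """
    picks = gammas[np.argmax(semblance, axis=0)]       # raw maxima per x
    kernel = np.ones(smooth_len) / smooth_len          # boxcar smoother
    return np.convolve(picks, kernel, mode="same")     # smoothed picks
```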

To get around these issues I decided to borrow something from both the geostatistics world and the geophysics world. Instead of thinking of the problem in terms of selecting the best $\gamma$ value, I am going to think of it in terms of selecting a value within a distribution. I am going to construct my distributions in a manner similar to Rothman (1985), who was trying to solve the non-linear residual-statics problem using simulated annealing. He built a distribution based on stack-power values from statically shifted traces at the surface locations of the sources and receivers. In this case, my distribution is going to be constructed from the semblance values at given $\gamma$ values.
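In that spirit, the semblance trace at a single location can be turned into a cdf over $\gamma$ and sampled. The normalization below (clip negatives, scale to unit sum) is my assumption for the sketch:

```python
import numpy as np

def gamma_cdf(semblance_col, gammas):
    """Treat the semblance values at one location as an unnormalized pdf
    over gamma and return the corresponding cdf (assumed normalization)."""
    pdf = np.clip(semblance_col, 0.0, None)   # semblance as pdf weights
    pdf = pdf / pdf.sum()
    return np.cumsum(pdf)

def draw_gamma(cdf, gammas, u):
    """Map a uniform draw u in [0, 1) to a gamma value via the inverse cdf."""
    return gammas[np.searchsorted(cdf, u)]
```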

I do not want the rough solution that Rothman (1985) was looking for; instead, I am looking for a smooth solution. If I set up the inverse problem
\bf n &\approx& \bf c \nonumber \\
\bf 0 &\approx& \epsilon \bf A \bf c
I will get a vector $\bf c$ that contains random numbers colored according to the spectrum of $\bf A$. I now have a field with the spectrum I want, but the vector $\bf c$ will tend to have a normal distribution with zero mean. At this stage, I am going to borrow something from the geostatistical community. When dealing with problems where a variable does not have a normal distribution, they use what they refer to as a normal-score transform Deutsch and Journel (1992); Isaaks and Srivastava (1989). They build an operator $\bf G$ that relates a value $a$ in an arbitrary distribution to a value $c$ in a standard normal distribution, $a = {\bf G} c$, based on the cumulative distribution functions (cdfs) of the two distributions. I am going to apply the same trick: I will solve for all values of the variable $\bf c$ simultaneously and then apply $\bf G$ to convert these values to $\gamma$ values.
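Putting the pieces together, here is a minimal sketch of the idea. It is my own toy setup: $\bf A$ is taken as a first-difference roughener and the target distribution comes from a Gaussian-shaped toy semblance curve; neither choice is from the paper:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
m = 200

# Assumed model-styling operator A: first difference (a roughener).
A = (np.eye(m) - np.eye(m, k=1))[:-1]
eps = 10.0

# Fitting goals  n ~ c,  0 ~ eps * A c,  stacked as one least-squares system.
n = rng.standard_normal(m)
G_stack = np.vstack([np.eye(m), eps * A])
d = np.concatenate([n, np.zeros(m - 1)])
c, *_ = np.linalg.lstsq(G_stack, d, rcond=None)   # smooth, colored field

# Rescale to zero mean, unit variance so the normal-score mapping applies.
c = (c - c.mean()) / c.std()

# Toy target distribution over gamma built from a semblance-like curve.
gammas = np.linspace(0.8, 1.2, 101)
semb = np.exp(-0.5 * ((gammas - 1.0) / 0.05) ** 2)
target_cdf = np.cumsum(semb) / semb.sum()

def normal_score_inverse(cvals):
    """Map standard-normal scores to gamma values by cdf matching:
    evaluate Phi(c), then invert the semblance-derived cdf."""
    p = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in cvals]))
    idx = np.clip(np.searchsorted(target_cdf, p), 0, len(gammas) - 1)
    return gammas[idx]

gamma_field = normal_score_inverse(c)
```

Each new draw of $\bf n$ then yields a new smooth $\bf c$, and hence a new equi-probable $\gamma$ field.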