Up to this point I have imposed only minimal constraints on the
RMS velocity, namely those necessary to justify use of the
convolutional model. Most velocity analysis imposes far more
stringent constraints, either explicitly or implicitly, in the
form of *parsimonious parametrization* or *regularization*.
In the former case, the choice of parameters (e.g. how many spline
nodes, where to place them) is *ad hoc*. In the latter,
the type of regularization (first derivative, second derivative, ...)
and the choice of penalty weight are equally obscure.

In this section I suggest that the differential semblance objective itself supplies a mechanism for constraining the velocity to a parsimoniously parametrized space. I'll propose a choice of subspace within which

- the global minimum is unique for noise-free data;
- the error in the RMS square slowness is proportional to the error in the data, and so
- any stationary values of the objective are proportional to the square of the data error energy, hence are essentially global minima.
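In symbols (the constants, norms, and symbol names here are my notation, since the original displayed estimates are not reproduced): writing $u^{*}$ for the square slowness consistent with the noise-free data and $\delta d$ for the data error, the second and third properties assert estimates of the form

$$ \|u - u^{*}\|_{W} \;\le\; C\,\|\delta d\|, \qquad J[u] \;\le\; C'\,\|\delta d\|^{2} \ \text{ at any stationary point } u. $$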

Assume until further notice that the data is free of noise.

The Key Lemma proved in the last section then implies that the Hessian takes the form

While the expression for above is not easily computable, the approximation is simply the stack of the squared prestack reflectivity estimates, and therefore an inexpensive byproduct of the computation.
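As a small numerical illustration of that byproduct (the array layout `r(t0, offset)` and the function name are my assumptions, not the paper's):

```python
import numpy as np

def diagonal_hessian_estimate(reflectivity):
    """Stack the squared prestack reflectivity estimates.

    reflectivity : array of shape (n_t0, n_offset) holding the prestack
    reflectivity estimates r(t0, h).  Summing r**2 over offsets gives a
    cheap diagonal approximation to the Hessian, one value per t0 sample.
    """
    return np.sum(reflectivity ** 2, axis=1)
```

For instance, reflectivity estimates `[[1, 2], [3, 4]]` stack to `[5, 25]`.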
Now suppose that *u ^{*}* differs from a reference square slowness

Since the interval velocities, hence the RMS square slownesses, are
supposed to vary over a bounded set in , membership in
entails a bound on the *W* norm of *u*-*u ^{0}*.

Let *g*(*t _{0}*

Next suppose that *H*[*u*] is uniformly
positive definite for all . That is, there exist
for which
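The inequality intended here is presumably of the standard form (the constants $c_{-}$, $c_{+}$ are my notation, since the original display is not reproduced):

$$ c_{-}\,\|w\|^{2} \;\le\; \langle H[u]\,w,\; w\rangle \;\le\; c_{+}\,\|w\|^{2} \quad \text{for all } w \in W, $$

with $0 < c_{-} \le c_{+}$ independent of $u$ over the admissible set.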

Then there exists a similar uniform bound for , since the latter differs from *H*[*u*] by a diagonal scaling operator with
uniform upper and lower bounds over . For the same reason,

That is: within *u ^{0}*+

Moreover, consulting the estimates of the last section, you see that if
the search is limited to *u ^{0}*+

Finally, how does one lay hands on such a paragon of a function space as
*W* with the properties supposed here? The operator *H*[*u*] is symmetric positive
semidefinite on . An optimal choice for *W* is
the direct sum of eigenspaces of corresponding to the eigenvalues above the
cutoff level *h _{*}*. A computable estimate of this space is the corresponding
direct sum of eigenspaces of
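A numerical sketch of this recipe, assuming the (approximate) Hessian is in hand as a symmetric positive semidefinite matrix `H` and writing `h_star` for the cutoff *h _{*}* (function names and the projection helper are mine):

```python
import numpy as np

def eigenspace_subspace(H, h_star):
    """Orthonormal basis for the direct sum of eigenspaces of the
    symmetric PSD matrix H whose eigenvalues exceed the cutoff h_star."""
    # eigh returns eigenvalues of a symmetric matrix in ascending order
    evals, evecs = np.linalg.eigh(H)
    keep = evals > h_star
    return evecs[:, keep]          # columns span the retained subspace

def project(B, v):
    """Orthogonal projection of v onto the column span of B
    (B assumed to have orthonormal columns, as returned above)."""
    return B @ (B.T @ v)
```

Restricting the velocity update to `project(B, v)` confines the search to the well-determined part of model space, which is the parsimonious parametrization the objective itself supplies.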

Note that if there is little data in a *t _{0}* interval,

It remains to analyse this "picking" effect, and to devise good algorithms
for choosing the eigenvalue cutoff as a function of data quality and success
in fitting moveout (i.e. minimizing *J _{0}*), so as to justify the assumption
that . But that's another story...

4/20/1999