Write the data as S(t,x) = S*(t,x) + E(t,x), where ``model-consistent'' means as before and the error E(t,x) is arbitrary, but of finite ``energy'' (mean square).
Since several data sets appear in this part of the discussion, include the name of the data in the notation for the differential semblance objective: J0[v,S].
Here and in the following, C will stand for a uniform constant (though the precise value may vary from display to display).
Similarly, the gradients with respect to RMS square slowness u satisfy
Suppose that u (or its corresponding v) is a stationary point of J0[v,S], i.e. the gradient of J0[v,S] with respect to u vanishes there. Then
If you presume that the data error is less than 100%, i.e. that the mean square of E is at most that of S*, which seems reasonable (or pick any other fixed percentage, if 100% seems wrong to you - the ratio is simply absorbed in C), then this becomes
Theorem: At a stationary point of the differential semblance objective, its value is bounded by a uniform multiple of the distance of the data to the set of model-consistent data.
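In symbols, the theorem asserts an estimate of the following shape (the notation $\mathcal{S}$ for the set of model-consistent data, and the explicit distance function, are introduced here for convenience and are a reading of the statement, not the paper's own notation):

```latex
% Sketch of the theorem's content: at any stationary point v of J0[.,S],
\[
  J_0[v,S] \;\le\; C\,\operatorname{dist}\left(S,\mathcal{S}\right)
    \;+\; \text{(asymptotic errors)},
\]
% where \mathcal{S} denotes the set of model-consistent data and the
% constant C is uniform over the models and data under consideration.
```

Note that the bound is first order in the distance; the contrast with the second-order behavior of the global minimum is taken up below.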
Thus for a family of data converging to model-consistent data, the corresponding stationary points of J0 must have J0 values that converge to zero, modulo asymptotic errors.
This result may well not imply that stationary points for noisy data are global minima. Indeed, substitute the ``target'' velocity v* in the expression for J0[v,S]: from the expansion and estimates above you easily see that
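The expansion referred to can be sketched as follows, under the assumption that J0 is quadratic in the data, as a squared seminorm of a linear image of S; the operator notation P(v) is introduced here purely for illustration:

```latex
% Sketch, assuming J0[v,S] = ||P(v)S||^2 with P(v) linear.
% Substitute S = S* + E and v = v*, and use the triangle inequality
% together with (a+b)^2 <= 2a^2 + 2b^2:
\[
  J_0[v^*,S] = \|P(v^*)(S^* + E)\|^2
    \le 2\left( J_0[v^*,S^*] + \|P(v^*)E\|^2 \right)
    \le C\left( J_0[v^*,S^*] + \|E\|^2 \right),
\]
% where J0[v*,S*] is the asymptotic error (it would vanish for exactly
% model-consistent data at the target model), and C absorbs the operator
% norm of P(v*).
```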
Certainly one hopes that the asymptotic error is no worse than other errors, in particular than the data error E, so this inequality effectively implies that the global minimum value of J0 is proportional to the square of the data error for near-consistent data, whereas the theorem shows only that the stationary values are proportional to the data error itself, so presumably larger, at least in some cases.
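The quadratic scaling of the global minimum value can be checked on a toy problem. The following sketch is entirely hypothetical (the names d, J0, u_star, E and the line-fitting model are illustrative, not the paper's operators): the model-consistent data are scalar multiples of a unit vector d(u), and the objective, quadratic in the data like differential semblance, penalizes the part of S off that line.

```python
import numpy as np

# Hypothetical toy model: model-consistent data = {a * d(u)} for a
# family of unit vectors d(u) in the horizontal plane.
def d(u):
    return np.array([np.cos(u), np.sin(u), 0.0])

def J0(u, S):
    # Residual: component of S off the model line through d(u).
    r = S - np.dot(S, d(u)) * d(u)
    return np.dot(r, r)

u_star = 0.7
S_star = 3.0 * d(u_star)          # model-consistent "target" data
E = np.array([1.0, -2.0, 0.5])    # fixed noise direction

grid = np.linspace(0.0, np.pi, 4001)
for eps in [1e-1, 1e-2, 1e-3]:
    S = S_star + eps * E          # noisy data at error level eps
    Jmin = min(J0(u, S) for u in grid)
    # The global minimum value is bounded by the value at the target
    # model, which is eps^2 * ||E_perp||^2 <= eps^2 * ||E||^2:
    assert Jmin <= J0(u_star, S) + 1e-12
    assert Jmin <= eps**2 * np.dot(E, E) + 1e-12
    print(f"eps={eps:.0e}  min J0 <= {Jmin:.3e}")
```

Reducing the error level by a factor of 10 reduces the global minimum value by a factor of about 100, i.e. the global minimum scales with the square of the error level, as the inequality above predicts.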
In the next section I will show that when the differential semblance minimization is supplemented with proper constraints on the velocity model, in addition to those already imposed, the error in the RMS square slowness is proportional to the error in the data. It then follows from the estimates above that stationary values conforming to these constraints are indeed proportional to the square of the error level, hence essentially global minima. It would be interesting to know whether relaxing these constraints actually permits anomalously large stationary values.