
Noise: General Case

Suppose that the data S(t,x) is the sum of model-consistent data and another field, regarded as noise or error:

\begin{displaymath}
S(t,x)=S^*(t,x)+E(t,x)\end{displaymath}

where ``model-consistent'' means as before

\begin{displaymath}
S^*(t,x)=r^*(T^*_0(t,x))+O(\lambda)\end{displaymath}

and E(t,x) is arbitrary, but of finite ``energy'' (mean square).

Since several data fields appear in this part of the discussion, include the name of the data in the notation for the differential semblance objective:

\begin{displaymath}
J_0[v,S]=\frac{1}{2}\Vert H\phi F[v]WG[v]S\Vert^2\end{displaymath}

etc. Then

\begin{displaymath}
J_0[v,S]=J_0[v,S^*]+J_0[v,E]+K[v,S^*,E]\end{displaymath}
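
This is just bilinearity of the quadratic form. Abbreviating the composite operator as $A=H\phi F[v]WG[v]$ (a shorthand introduced here only for these estimates),

\begin{displaymath}
J_0[v,S^*+E]=\frac{1}{2}\Vert A(S^*+E)\Vert^2
=\frac{1}{2}\Vert AS^*\Vert^2+\frac{1}{2}\Vert AE\Vert^2+\langle AS^*,\,AE\rangle\end{displaymath}

in which the three terms on the right are $J_0[v,S^*]$, $J_0[v,E]$, and $K[v,S^*,E]$ respectively.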

where

\begin{displaymath}
K[v,S^*,E]=\langle H\phi F[v]WG[v]S^*,\,H\phi F[v]WG[v]E\rangle\end{displaymath}

\begin{displaymath}
=\int\int\,dx\,dt\,\left[\phi\left(
\frac{\partial}{\partial x}+p\frac{\partial}{\partial t}
\right)F[v]WG[v]S^*\right]\left[\phi\left(
\frac{\partial}{\partial x}+p\frac{\partial}{\partial t}
\right)F[v]WG[v]E\right]\end{displaymath}

satisfies

\begin{displaymath}
\vert K[v,S^*,E]\vert\le C\Vert S^*\Vert\Vert E\Vert\end{displaymath}
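
This estimate is just the Cauchy-Schwarz inequality combined with uniform boundedness of the composite operator over $\cal{A}$; in the shorthand $A=H\phi F[v]WG[v]$ introduced above,

\begin{displaymath}
\vert K[v,S^*,E]\vert=\vert\langle AS^*,\,AE\rangle\vert
\le\Vert AS^*\Vert\,\Vert AE\Vert
\le\Vert A\Vert^2\,\Vert S^*\Vert\,\Vert E\Vert\end{displaymath}

with $C=\sup_{v\in\cal{A}}\Vert A\Vert^2$; the same operator bound gives the estimate for $J_0[v,E]$ below.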

Here and in the following, C will stand for a constant uniform over $v \in \cal{A}$ (though the precise value may vary from display to display).

Likewise,

\begin{displaymath}
J_0[v,E]\le C\Vert E\Vert^2\end{displaymath}

Similarly, the gradients with respect to the RMS square slowness u satisfy

\begin{displaymath}
\nabla J_0[v,S]=\nabla J_0[v,S^*]+\nabla J_0[v,E]+\nabla K[v,S^*,E]\end{displaymath}

and

\begin{displaymath}
\Vert\nabla K[v,S^*,E]\Vert\le C\Vert S^*\Vert\Vert E\Vert\end{displaymath}

\begin{displaymath}
\Vert\nabla J_0[v,E]\Vert\le C\Vert E\Vert^2\end{displaymath}
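
These gradient estimates follow the same pattern. Here is a sketch, under the assumption (implicit in the uniformity over $\cal{A}$) that $A=A(u)$ depends smoothly on the RMS square slowness with a uniformly bounded derivative $D_uA$: for any perturbation $\delta u$,

\begin{displaymath}
\langle\nabla J_0[v,E],\,\delta u\rangle
=\langle A(u)E,\,(D_uA(u)\,\delta u)E\rangle
\le\Vert A\Vert\,\Vert D_uA\Vert\,\Vert E\Vert^2\,\Vert\delta u\Vert\end{displaymath}

and similarly for $\nabla K[v,S^*,E]$, which is bilinear in $(S^*,E)$.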

Suppose that u (or its corresponding v) is a stationary point of $J_0[v,S]$, i.e. $\nabla J_0[v,S]=0$. Then

\begin{displaymath}
J_0[v,S] \le J_0[v,S^*]+C\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert)\end{displaymath}

\begin{displaymath}
\le C(\langle u-u^*,\nabla J_0[v,S^*]\rangle +\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert))+O(\lambda)\end{displaymath}

\begin{displaymath}
= C(\langle u-u^*,\nabla J_0[v,S] - \nabla J_0[v,E] - \nabla K[v,S^*,E]\rangle
+\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert))+O(\lambda)\end{displaymath}

\begin{displaymath}
\le C\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert)+O(\lambda)\end{displaymath}
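
To spell out the last step: $\nabla J_0[v,S]=0$ by assumption, so only the noise terms survive in the inner product. Bounding them with the gradient estimates above and Cauchy-Schwarz (assuming, as uniformity over $\cal{A}$ suggests, that $\Vert u-u^*\Vert$ is bounded) gives

\begin{displaymath}
\vert\langle u-u^*,\,\nabla J_0[v,E]+\nabla K[v,S^*,E]\rangle\vert
\le\Vert u-u^*\Vert\,C\left(\Vert E\Vert^2+\Vert S^*\Vert\,\Vert E\Vert\right)
\le C\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert)\end{displaymath}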

If we assume that the data error is less than 100%, i.e. $\Vert E\Vert\le \Vert S^*\Vert$, which seems reasonable (or pick any other fixed percentage, if 100% seems wrong to you; the choice is simply absorbed into C), then $\Vert E\Vert(\Vert S^*\Vert+\Vert E\Vert)\le 2\Vert S^*\Vert\,\Vert E\Vert$, and absorbing the (bounded) factor $\Vert S^*\Vert$ into C as well, this becomes

\begin{displaymath}
J_0[v,S] \le C\Vert E\Vert + O(\lambda)\end{displaymath}

That is,

Theorem: At a stationary point of the differential semblance objective, its value is bounded by an $\cal{A}$-uniform multiple of the distance of the data from the set of model-consistent data.

Thus for a family of data converging to model-consistent data, any set of corresponding stationary points of $J_0$ must have $J_0$ values which converge to zero, modulo asymptotic errors.

This result does not necessarily imply that stationary points for noisy data are global minima. Indeed, substitute the ``target'' velocity $v^*$ into the expression for $J_0[v,S]$: from the expansion and estimates above one easily sees that

\begin{displaymath}
J_0[v^*,S] \le C(\Vert E\Vert^2 + O(\lambda)\Vert E\Vert)\end{displaymath}
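
This follows from the expansion above evaluated at $v=v^*$: model consistency presumably makes $\Vert H\phi F[v^*]WG[v^*]S^*\Vert=O(\lambda)$, as in the consistent-data analysis, so (a sketch, with the resulting $O(\lambda^2)$ term absorbed into the other asymptotic errors)

\begin{displaymath}
J_0[v^*,S]=J_0[v^*,S^*]+J_0[v^*,E]+K[v^*,S^*,E]
\le O(\lambda^2)+C\Vert E\Vert^2+O(\lambda)\Vert E\Vert\end{displaymath}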

One certainly hopes that the asymptotic error is no worse than the other errors, in particular the data error E. In that case this inequality effectively implies that the global minimum value of $J_0[\cdot,S]$ is proportional to $\Vert E\Vert^2$ for nearly consistent data, whereas the theorem shows only that the stationary values are proportional to $\Vert E\Vert$, hence presumably larger, at least in some cases.

In the next section I will show that when the differential semblance minimization is supplemented with proper constraints on the velocity model, in addition to those already imposed, the error in the RMS square slowness is proportional to the error in the data. It then follows from the estimates above that stationary values conforming to these constraints are indeed proportional to the square of the error level, hence essentially global minima. It would be interesting to know whether relaxing these constraints actually permits anomalously large stationary values.

