
Data-driven model parametrization and optimal error estimates

Up to this point I have imposed only minimal constraints on the RMS velocity, namely those necessary to justify use of the convolutional model. Most velocity analysis imposes far more stringent constraints, either explicitly or implicitly, in the form of parsimonious parametrization or regularization. In the former case, the choice of parameters (e.g. how many spline nodes, where to place them) is ad hoc. In the latter, the type of regularization (first derivative, second derivative, ...) and the choice of penalty weight are also obscure.

In this section I suggest that the differential semblance objective itself supplies a mechanism for constraining the velocity to a parsimoniously parametrized space. I'll propose a choice of subspace, built from eigenfunctions of a computable approximation to the Hessian of the objective, within which the stationary point of J0 is unique and obeys an optimal error estimate.

Assume until further notice that the data is free of noise:

\begin{displaymath}
S(t,x)=r^*(T_0^*(t,x)) + O(\lambda)\end{displaymath}

The Key Lemma proved in the last section then implies that the Hessian $\nabla \nabla J_0$ takes the form

\begin{displaymath}
\nabla \nabla J_0[u] \delta u(t_0) = \delta u(t_0) \int \,dx\, b[u](t_0,x)\,(r^*(\Gamma^{-1}(t_0,x)))^2 + O(\lambda, \Vert u-u^*\Vert)\end{displaymath}

\begin{displaymath}
= \tilde{R}[u](t_0)\delta u(t_0)\end{displaymath}

While the expression for $\tilde{R}$ above is not easily computable, the approximation

\begin{displaymath}
R[u](t_0) = \int \,dx\,(r^*(\Gamma^{-1}(t_0,x)))^2\end{displaymath}

is simply the stack of the squared prestack reflectivity estimates, and therefore an inexpensive byproduct of the computation. At u=u*,

\begin{displaymath}
\nabla \nabla J_0[u^*] \delta u(t_0) = b[u^*](t_0)R[u^*](t_0)\delta u(t_0)\end{displaymath}

i.e. the Hessian is precisely the approximation R composed with a positive diagonal scaling.
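
As a concrete illustration, here is a minimal numerical sketch of the approximation R, assuming the prestack reflectivity estimates are available as an array sampled on a regular (t0, x) grid; the function and array names are hypothetical, not part of the development above.

\begin{verbatim}
import numpy as np

# Hypothetical panel: reflectivity[i, j] ~ r*(Gamma^{-1}(t0_i, x_j)),
# the prestack reflectivity estimate produced as a byproduct of the
# differential semblance computation.
def stacked_squared_reflectivity(reflectivity, dx):
    """R[u](t0): the stack (integral over offset x) of the squared
    prestack reflectivity estimates."""
    return np.sum(reflectivity**2, axis=1) * dx
\end{verbatim}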

Now suppose that u* differs from a reference square slowness u0 (in practice, an initial estimate) by a member of a space W. Introduce an inner product on W by

\begin{displaymath}
\left<w_1,w_2\right>_2 = \int \frac{d^2 w_1}{dt_0^2}\,\frac{d^2 w_2}{dt_0^2}\,dt_0\end{displaymath}

To make this inner product positive definite, thus defining a Hilbert space structure, assume furthermore that

\begin{displaymath}
w(0)=\frac{dw}{dt_0}(0)=w(t_0^{\rm max})=\frac{dw}{dt_0}(t_0^{\rm max})=0\end{displaymath}

Thus W is a subspace of the Sobolev space $H^2_0([0,t_0^{\rm max}])$.
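
In discrete form this inner product is a sum of products of second differences. The sketch below is one crude realization, assuming uniform sampling of $[0,t_0^{\rm max}]$; the zero padding supplies the Dirichlet part of the end conditions, while the slope conditions are left to the discretization of the fourth-derivative operator further on.

\begin{verbatim}
import numpy as np

def h20_inner(w1, w2, h):
    """Discrete <w1, w2>_2 = int (d^2 w1/dt0^2)(d^2 w2/dt0^2) dt0
    for interior samples w1, w2 on a uniform grid of spacing h.
    Zero padding supplies the boundary values w(0) = w(t0max) = 0."""
    d2w1 = np.diff(np.pad(w1, 1), 2) / h**2
    d2w2 = np.diff(np.pad(w2, 1), 2) / h**2
    return float(np.sum(d2w1 * d2w2) * h)
\end{verbatim}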

Since the interval velocities, hence the RMS square slownesses, are supposed to vary over a bounded set in $C^{\infty}$, membership in $\cal{A}$ entails a bound on the W norm of u-u0.

Let g(t01,t02) be the Green's function for the operator

\begin{displaymath}
\frac{d^4}{dt_0^4}\end{displaymath}

with the boundary conditions stated above. Then the W gradient of J0 restricted to u0+W is

\begin{displaymath}
\nabla_W J_0[u] = {\cal G} \nabla J_0[u]\end{displaymath}

in which ${\cal G}$ denotes the operator with kernel g. Similarly,

\begin{displaymath}
\nabla_W \nabla_W J_0[u] = {\cal G} \nabla \nabla J_0[u]\end{displaymath}
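
In practice the kernel g need not be formed: applying ${\cal G}$ is a linear solve with a discretization of $d^4/dt_0^4$ under the clamped end conditions. The sketch below uses a standard pentadiagonal finite-difference matrix; the grid and function names are my assumptions.

\begin{verbatim}
import numpy as np

def clamped_biharmonic(n, h):
    """Finite-difference d^4/dt0^4 on n interior nodes with spacing h
    and clamped ends w = dw/dt0 = 0: stencil [1,-4,6,-4,1]/h^4, corner
    diagonal entries raised to 7 by the ghost points w_{-1} = w_1 and
    w_{n+1} = w_{n-1} that encode the zero-slope conditions."""
    K = (6.0*np.eye(n)
         - 4.0*(np.eye(n, k=1) + np.eye(n, k=-1))
         + np.eye(n, k=2) + np.eye(n, k=-2)) / h**4
    K[0, 0] += 1.0 / h**4
    K[-1, -1] += 1.0 / h**4
    return K

def w_gradient(grad_J0, h):
    """nabla_W J0 = G nabla J0: solve d^4 w/dt0^4 = nabla J0 with
    clamped end conditions, rather than convolving with the kernel g."""
    K = clamped_biharmonic(len(grad_J0), h)
    return np.linalg.solve(K, grad_J0)
\end{verbatim}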

Next suppose that H[u], the computable approximation to the projected Hessian defined by $H[u]w = {\cal G}(R[u]w)$, is uniformly positive definite for all $u \in {\cal A}$. That is, there exist $0 < h_* \le h^*$ for which

\begin{displaymath}
h_*\Vert w\Vert _2^2 \le \left<w,H[u]w\right>_2 \le h^*\Vert w\Vert _2^2\end{displaymath}

for all $w\in W, u \in {\cal A}$.

Then there exists a similar uniform bound for $\nabla_W \nabla_W J_0[u]$, since the latter differs from H[u] by a diagonal scaling operator with uniform upper and lower bounds over $\cal{A}$. For the same reason,

\begin{displaymath}
\Vert\nabla_W J_0[u]\Vert _2=\Vert{\cal G} b[u]R[u](u-u^*)\Vert _2 \ge l_*\Vert u-u^*\Vert _2\end{displaymath}

for a suitable l*>0.

That is: within u0+W, u* is the unique stationary point of J0.

Moreover, consulting the estimates of the last section, you see that if the search is limited to u0+W, then at a stationary point u,

\begin{displaymath}
J_0[u,S] \le C\,\Vert E\Vert^2\end{displaymath}

as claimed, since the cross-term K[u,S*,E] in the notation of that section is bounded by a multiple of $\Vert u-u^*\Vert _2$.

Finally, how does one lay hands on such a paragon of a function space as W with the properties supposed here? The operator H[u] is symmetric positive semidefinite on $H^2_0([0,t_0^{\rm max}])$. An optimal choice for W is the direct sum of eigenspaces of $\nabla_W \nabla_W J_0[u^*]$ corresponding to the eigenvalues above the cutoff level $h_*$. A computable estimate of this space is the corresponding direct sum of eigenspaces of H[u]. A basis consists of eigenfunctions of the Sturm-Liouville problem

\begin{displaymath}
\frac{d^4w}{dt_0^4}=\frac{R[u]}{\lambda}w,\end{displaymath}

\begin{displaymath}
w(0)=\frac{dw}{dt_0}(0)=w(t_0^{\rm max})=\frac{dw}{dt_0}(t_0^{\rm max})=0\end{displaymath}

To construct W, find the eigenfunctions of this problem, and choose those whose eigenvalues lie above a ``suitable'' cutoff.
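
As a sketch of how this might be done numerically (the discretization is an assumption, reusing the clamped-biharmonic matrix of the gradient sketch above): since eigenfunctions of ${\cal G}R[u]$ satisfy $R[u]w = \lambda\, d^4w/dt_0^4$, they solve a generalized symmetric eigenproblem, and W is spanned by the eigenvectors whose eigenvalues clear the cutoff.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def build_w_basis(R, h, cutoff):
    """Discrete eigenfunctions of G R[u]: solve diag(R) w = lambda K w,
    K the clamped-biharmonic matrix, and keep the eigenvectors whose
    eigenvalues reach the cutoff (the h_* of the text)."""
    n = len(R)
    K = (6.0*np.eye(n)
         - 4.0*(np.eye(n, k=1) + np.eye(n, k=-1))
         + np.eye(n, k=2) + np.eye(n, k=-2)) / h**4
    K[0, 0] += 1.0 / h**4   # clamped-end ghost-point corrections
    K[-1, -1] += 1.0 / h**4
    # eigh(A, B) solves A w = lambda B w, A symmetric, B positive definite.
    lam, W = eigh(np.diag(R), K)
    keep = lam >= cutoff
    return lam[keep], W[:, keep]
\end{verbatim}

Where R[u] is small, these eigenfunctions locally resemble solutions of $d^4w/dt_0^4=0$, i.e. cubics, which produces the interpolation behavior noted below.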

Note that if there is little data in a t0 interval, R[u] will be small in that interval and eigenfunctions of the 4th derivative operator will smoothly interpolate values to either side. Thus my suggested space implicitly ``picks events'' with significant energy, pins the RMS velocity down at those places, and interpolates between ``events'' - just as a human velocity analyst would.

It remains to analyse this ``picking'' effect, and to devise good algorithms for choosing the eigenvalue cutoff as a function of data quality and success in fitting moveout (i.e. minimizing J0), so as to justify the assumption that $u^* \in u^0+W$. But that's another story...

