

Given a relative noise level $\sigma$, we seek the velocity function v and the data perturbation of root mean square relative size at most $\sigma$ which together yield the flattest image gather. Transformation of this idea into an implementable algorithm requires definition of operators, functions, and optimization methods. This section gives a sketch of these mathematical details.

The forward map F[v] is a linear operator depending on a velocity function v. The velocity function depends on all or part of the subsurface coordinates; the examples presented below use depth-dependent velocity. F[v] is a prestack forward modeling operator; it takes an image volume or bin-dependent reflectivity as input, and outputs a seismic data volume. In the examples presented below, the data will be a common midpoint gather, each bin will contain a single trace, and the bin parameter is offset. Thus the input reflectivity also has the appearance of a common midpoint gather, and can be identified with the image gather in this setting.
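As a concrete toy, the action of F[v] for a single constant-velocity layer can be sketched as inverse moveout: each zero-offset reflectivity trace is spread along the hyperbola $t_h=\sqrt{t_0^2+(h/v)^2}$. This is a minimal numerical illustration under a constant-velocity assumption, not the modeling operator used in the experiments; the function name and the linear-interpolation implementation are ours.

```python
import numpy as np

def forward_moveout(reflectivity, offsets, t, v):
    """Toy stand-in for F[v] in a constant-velocity layered medium:
    map zero-offset reflectivity to recorded traces along the
    hyperbola t_h = sqrt(t0**2 + (h/v)**2), one trace per offset bin."""
    out = np.zeros_like(reflectivity)
    for i, h in enumerate(offsets):
        # recorded time t corresponds to zero-offset time t0
        t0 = np.sqrt(np.maximum(t**2 - (h / v)**2, 0.0))
        # read the reflectivity at t0 by linear interpolation
        out[:, i] = np.interp(t0, t, reflectivity[:, i], left=0.0, right=0.0)
    return out
```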

The inverse map G[v] is an approximate inverse to F[v]. That is, if data d and reflectivity r satisfy d=F[v]r, then $r \simeq G[v]d$. For multioffset data and multidimensional models, Beylkin (1985) showed how to build such operators as weighted diffraction sums. For layered modeling, G[v] is essentially moveout correction, after compensation for amplitude and wavelet deconvolution; F[v] inverts these steps.
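In the same constant-velocity toy setting, G[v] reduces to normal moveout correction: each sample at zero-offset time $t_0$ is read from the recorded trace at $\sqrt{t_0^2+(h/v)^2}$, flattening hyperbolic events when v is correct. Again a sketch, omitting the amplitude compensation and wavelet deconvolution mentioned above.

```python
import numpy as np

def nmo_correct(gather, offsets, t, v):
    """Toy stand-in for G[v]: normal moveout correction, trace by trace."""
    out = np.zeros_like(gather)
    for i, h in enumerate(offsets):
        # the sample at zero-offset time t0 is read from the recorded
        # trace at the moveout time sqrt(t0**2 + (h/v)**2)
        t_src = np.sqrt(t**2 + (h / v)**2)
        out[:, i] = np.interp(t_src, t, gather[:, i], left=0.0, right=0.0)
    return out
```

With the correct v, a hyperbolic event becomes flat across offsets (up to the well-known moveout stretch).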

Differential semblance measures nonflatness by comparing neighboring image bins. That is, if the image is r = G[v]d and the bin index is i, then the differential semblance is the mean square power of $(Dr)_i = {\rm const.}\times(r_{i+1}-r_i)$. A convenient notation for the root mean square of a field, say r, is $\Vert r\Vert$. The dot product of two fields (viewed as vectors of samples), say $r_1$ and $r_2$, is $\langle r_1,r_2\rangle$. Thus $\Vert r\Vert^2=\langle r,r\rangle$. The basic (``raw'') semblance operator is W[v] = F[v]DG[v]. Applying the modeling operator F[v] after forming the bin difference makes the power of the output independent of amplitude, up to an error which decays with increasing signal frequency. [This trick was discovered by Hua Song (Song, 1994).] Since the data is differenced in forming W[v]d, we bring its high-frequency content back into consistency with that of the data via a smoothing operator H of order -2 (a $k^{-2}$ filter).

The dual regularization objective function $J_{\sigma}$ is then

\begin{displaymath}
J_{\sigma}[v;d] = \min_r \,\frac{1}{2}\langle W[v]r,\, H W[v]r\rangle
\quad {\rm subject\ to} \quad \Vert r-d\Vert \le \sigma
\end{displaymath}

Note that the differential semblance objective explored in the above cited references is the special case of this one with $\sigma=0$. In general, a Lagrange multiplier $\lambda$ exists for which the solution r satisfies the normal and secular equations:

\begin{displaymath}
W[v]^T H W[v]\, r + \lambda (r-d) = 0, \qquad \Vert r-d\Vert=\sigma
\end{displaymath}
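For a small dense stand-in problem, this pair of equations can be solved by searching for the multiplier $\lambda$ at which the residual norm equals $\sigma$: the residual $\Vert r-d\Vert$ decreases monotonically as $\lambda$ grows, so a bracketing search suffices. The matrices below are random stand-ins for W[v] and H, and the geometric bisection is a naive substitute for the safeguarded Newton iteration of the Moré-Hebden algorithm used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = 0.3 * rng.standard_normal((n, n))   # random stand-in for W[v]
H = np.eye(n)                           # identity in place of the k^-2 smoother
d = rng.standard_normal(n)
sigma = 0.5 * np.sqrt(np.mean(d**2))    # constraint level, half the rms of d

A = W.T @ H @ W                         # normal operator W^T H W (positive semidefinite)

def solve_normal(lam):
    # normal equation A r + lam (r - d) = 0  <=>  (A + lam I) r = lam d
    r = np.linalg.solve(A + lam * np.eye(n), lam * d)
    return r, np.sqrt(np.mean((r - d)**2))

# residual rms falls from ~rms(d) (lam -> 0) toward 0 (lam -> infinity),
# so bisect geometrically on lambda until the residual equals sigma
lo, hi = 1e-8, 1e8
for _ in range(200):
    lam = np.sqrt(lo * hi)
    _, res = solve_normal(lam)
    if res > sigma:
        lo = lam
    else:
        hi = lam
lam = np.sqrt(lo * hi)
r, res = solve_normal(lam)
```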

These two equations together determine $\lambda=\lambda[v;\sigma]$ and $r=r[v;\sigma]$. Thus $J_{\sigma}$ is

\begin{displaymath}
J_{\sigma}[v;d] = \frac{1}{2}\langle W[v]r[v;\sigma],\, H W[v]r[v;\sigma]\rangle
\end{displaymath}

First order perturbation with respect to v, $v \rightarrow v + \delta v$, gives

\begin{displaymath}
\delta J_{\sigma}[v;d] =
\langle W[v]r[v;\sigma],\, H\, \delta W[v]\, r[v;\sigma]\rangle
\end{displaymath}

after simplifications due to the normal and secular equations. From this expression follows a formula for the gradient of $J_{\sigma}$ in terms of the first order perturbation of W[v] and its adjoint operator, which may in turn be expressed as products of the operators F[v], D, and G[v], their first order perturbations, and the adjoints of these.
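The perturbation formula can be checked on a scalar toy problem in which W[v] depends linearly on a single parameter v: we compare the predicted first-order change $\langle W[v]r,\, H\,\delta W\, r\rangle$ against a finite difference of the objective. The one-parameter form $W[v]=vW_0$ and all names below are illustrative, not the operators of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
W0 = rng.standard_normal((n, n))   # fixed matrix defining the toy family W[v] = v * W0
H = np.eye(n)
r = rng.standard_normal(n)

def J(v):
    """Toy objective J[v] = (1/2) <W[v] r, H W[v] r> with W[v] = v * W0."""
    Wr = v * (W0 @ r)
    return 0.5 * Wr @ (H @ Wr)

v, dv = 1.3, 1e-6
# first-order formula: delta J = <W[v] r, H (delta W) r>, with delta W = dv * W0
predicted = (v * (W0 @ r)) @ (H @ (dv * (W0 @ r)))
finite_diff = J(v + dv) - J(v)
```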

Besides accurate numerical implementations of the operators described above, we require methods for solving the system of normal and secular equations. For the latter, we use the Moré-Hebden algorithm (Björk, 1997), with the linear systems occurring in this method solved approximately by conjugate gradient iteration. The best estimate of v results from gradient-based optimization (a quasi-Newton method) applied to $J_{\sigma}$. In the experiments reported here, we have used the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method as developed in Nocedal (1980).
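The inner conjugate gradient iteration can be written matrix-free, which matters here because the operator $W[v]^T H W[v] + \lambda I$ is available only through the actions of F[v], D, and G[v], never as an assembled matrix. A textbook sketch; the tolerance and iteration limit are illustrative.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, maxiter=500):
    """Plain CG for a symmetric positive definite system A x = b,
    matrix-free: apply_A(x) must return A @ x."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x
```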

Stanford Exploration Project