
Estimating and Handling Correlated Data Errors

We assume that the measured VSP depths, $\boldsymbol{\zeta}_{vsp}$, consist of the ``true'' depths, $\tilde{\boldsymbol{\zeta}}$, plus a random error vector, $\mathbf{e}_{vsp}$:
\begin{displaymath}
\boldsymbol{\zeta}_{vsp} = \tilde{\boldsymbol{\zeta}} + \mathbf{e}_{vsp}
\end{displaymath} (11)
Furthermore, we assume that the measured seismic depths, $\boldsymbol{\zeta}_{seis}$, are the sum of the true depths, a random error vector, $\mathbf{e}_{seis}$, and a smooth perturbation, $\Delta\boldsymbol{\zeta}$:
\begin{displaymath}
\boldsymbol{\zeta}_{seis} = \tilde{\boldsymbol{\zeta}} + \mathbf{e}_{seis} + \Delta\boldsymbol{\zeta}
\end{displaymath} (12)
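For concreteness, the following numpy sketch generates synthetic depths that satisfy equations (11) and (12); the sample count, noise level, and the sinusoidal form of the smooth perturbation are illustrative assumptions, not values from this study.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # number of depth samples (assumed)

# A monotonically increasing stand-in for the "true" depths.
zeta_true = np.cumsum(rng.uniform(20.0, 40.0, size=n))

# Equation (11): VSP depths = true depths + random error.
e_vsp = rng.normal(0.0, 2.0, size=n)
zeta_vsp = zeta_true + e_vsp

# Equation (12): seismic depths add a random error and a smooth
# perturbation (here an arbitrary slowly varying sinusoid).
e_seis = rng.normal(0.0, 2.0, size=n)
delta_zeta = 10.0 * np.sin(np.linspace(0.0, np.pi, n))
zeta_seis = zeta_true + e_seis + delta_zeta
\end{verbatim}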
When the data residuals are correlated, the optimal choice for $\mathbf{W}$ in equation (10) is the square root of the inverse data covariance matrix. Guitton (2000) noted that after applying a hyperbolic Radon transform (HRT) to a CMP gather, coherent noise events that are not modeled by the HRT appear in the data residual. He iteratively estimated a prediction-error filter from the data residual and used it as the (normalized) square root of the inverse data covariance.
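As a minimal sketch of this weighting, assuming the data covariance is available as a dense matrix (here an arbitrary AR(1)-style stand-in, not the PEF-derived estimate described above), $\mathbf{W}$ can be taken as the inverse of a Cholesky factor of the covariance, so that $\mathbf{W}^T\mathbf{W}$ equals the inverse covariance:

\begin{verbatim}
import numpy as np

def inv_cov_sqrt(cov):
    """Return W with W.T @ W = inv(cov), via a Cholesky factor.

    Applying W to the data and to the rows of A whitens correlated
    residuals before least squares.
    """
    L = np.linalg.cholesky(cov)          # cov = L @ L.T
    # W = inv(L); solve L @ W = I instead of forming inv(L) explicitly.
    return np.linalg.solve(L, np.eye(cov.shape[0]))

# Stand-in covariance with exponentially decaying correlation (assumed).
n = 50
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov = 4.0 * 0.8 ** lags
W = inv_cov_sqrt(cov)
\end{verbatim}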

If we subtract $\Delta\boldsymbol{\zeta}$ from $\boldsymbol{\zeta}_{seis}$, then the error is random, as desired. Unfortunately, $\Delta\boldsymbol{\zeta}$ is unknown. We iteratively estimate $\Delta\boldsymbol{\zeta}$, along with the velocity perturbation $\Delta\mathbf{v}$ that is assumed to produce it, using the following tomography-like iteration:


$\Delta\boldsymbol{\zeta} = \mathbf{0}$
iterate {
    Solve equation (10) for $\mathbf{v} = [\mathbf{v}_0 \;\; \mathbf{k}]^T$:
    $\left[\begin{array}{c} \mathbf{W}\mathbf{A} \\ \epsilon\,\mathbf{C} \end{array}\right] \mathbf{v} \approx \left[\begin{array}{c} \mathbf{W}(\boldsymbol{\zeta} + \Delta\boldsymbol{\zeta}) \\ \mathbf{0} \end{array}\right]$

    Solve equation (10) for $\Delta\mathbf{v}$:
    $\left[\begin{array}{c} \mathbf{W}\mathbf{A} \\ \epsilon\,\mathbf{C} \end{array}\right] \Delta\mathbf{v} \approx \left[\begin{array}{c} \mathbf{W}(\mathbf{A}\mathbf{v} - \boldsymbol{\zeta}) \\ \mathbf{0} \end{array}\right]$

    $\Delta\boldsymbol{\zeta} = \mathbf{A}\,\Delta\mathbf{v}$
}
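A minimal numpy sketch of this iteration follows, assuming dense matrices for $\mathbf{A}$, $\mathbf{W}$, and $\mathbf{C}$ and solving each instance of equation (10) as one stacked least-squares system; the solver, shapes, and fixed iteration count are assumptions for illustration, not the implementation used here.

\begin{verbatim}
import numpy as np

def correction_iteration(A, W, C, zeta, eps=0.1, n_iter=5):
    """Hedged sketch of the tomography-like iteration above.

    A    : forward operator mapping a velocity model to depths
    W    : square root of the inverse data covariance
    C    : regularization operator from equation (10)
    zeta : measured depths (VSP and seismic combined)
    """
    stacked = np.vstack([W @ A, eps * C])    # shared left-hand side
    zeros = np.zeros(C.shape[0])
    delta_zeta = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # Stage 1: velocity model from the corrected data.
        rhs = np.concatenate([W @ (zeta + delta_zeta), zeros])
        v = np.linalg.lstsq(stacked, rhs, rcond=None)[0]
        # Stage 2: correction velocity from the weighted residual.
        rhs = np.concatenate([W @ (A @ v - zeta), zeros])
        dv = np.linalg.lstsq(stacked, rhs, rcond=None)[0]
        # Map the correction velocity to a correction depth.
        delta_zeta = A @ dv
    return v, dv, delta_zeta
\end{verbatim}

A convergence test on the change in $\Delta\mathbf{v}$ between iterations could replace the fixed count.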

The first stage of the iteration solves equation (10) for an interval velocity function using the VSP data and the corrected seismic data. In the second stage, we estimate a correction velocity function from the residual, which is similar to Guitton's approach. By forcing the correction velocity to obey equation (10), we force it to be ``reasonable'', and hence ensure that the estimated correction depth results from a reasonable velocity. One future feature we envision is the ability to force the correction velocity to zero in one or more layers. For example, if we had strong evidence that most of the seismic/VSP mistie was caused by anisotropy in one prominent shale layer, we might want nonzero correction velocity only in that layer.
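This layer constraint is only envisioned, but one way it might be imposed, assuming each model component corresponds to one layer, is to solve the second-stage system over the columns of the active layers only, so the correction velocity is identically zero elsewhere; `solve_masked` below is a hypothetical helper, not part of our method.

\begin{verbatim}
import numpy as np

def solve_masked(stacked, rhs, active):
    """Least squares over active layers only (a hypothetical helper).

    Columns of `stacked` for inactive layers are dropped before the
    solve, and the reduced solution is scattered back into a
    full-length vector, leaving zeros in the masked layers.
    """
    active = np.asarray(active, dtype=bool)
    dv = np.zeros(active.size)
    dv[active] = np.linalg.lstsq(stacked[:, active], rhs, rcond=None)[0]
    return dv
\end{verbatim}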

Because we force the correction velocity to be reasonable (continuous, for example), the estimated depth correction may not fully decorrelate the residual in the first iteration. For this reason, we must do more than one iteration. For this example, we find that the correction velocity changes very little after 15 nonlinear iterations, and that 5 nonlinear iterations already give an acceptable result.

We admit that our nonlinear iteration may be risky. We have solved a very simple analog of the classic reflection tomography problem, in which traveltimes depend both on reflector position and on velocity. Our approach was to completely decouple the optimization of velocity from that of the correction depth. Modern tomography approaches optimize reflector position and velocity simultaneously, and we should improve our method along the same lines.

