Qualitatively, we can express the desired attributes of the output model: at the well locations it should honor the well log measurements exactly, while away from the wells it should follow the shape of the seismic horizon.
This is sound logic; the well log data is (for the sake of simplicity)
assumed to be perfectly accurate, so at the well locations the output model
is rigidly constrained to match these measurements. On the other hand, when
we are far from any well location, the only source of information is
the seismic horizon itself.
This method is strictly a vertical correction. It is naïve to assume that the misfit between the seismic horizon and the well logs is purely vertical, but well log measurements place no direct horizontal constraints on the output model. However, in the ``Future Work'' section I discuss how simple anisotropy can manifest itself in seismic data through vertical misfit, and how, through a simple inversion scheme, we may be able to quantify the degree of anisotropy.
We can directly measure the vertical discrepancy between the seismic horizon
$h_{\mathrm{seis}}(x,y)$ and the well log picks $h_{\mathrm{well}}(x,y)$
at the well locations $(x_w, y_w)$:
\begin{equation}
\Delta(x_w, y_w) = h_{\mathrm{well}}(x_w, y_w) - h_{\mathrm{seis}}(x_w, y_w).
\tag{3}
\end{equation}
Now solve a missing data problem to obtain a smooth 2-D function
$\Delta(x,y)$, whose value is already known at the well locations,
$\Delta(x_w, y_w)$:
\begin{equation}
\mathbf{0} \approx \mathbf{K}\,\boldsymbol{\Delta} - \boldsymbol{\Delta}_w,
\tag{4}
\end{equation}
\begin{equation}
\mathbf{0} \approx \epsilon\,\nabla^2\,\boldsymbol{\Delta}.
\tag{5}
\end{equation}
Here $\mathbf{K}$ is the ``well location selector'' operator, similar to
its counterpart in Equation (1).
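The fitting goals (4) and (5) can be stacked into a single least-squares system. Here is a minimal NumPy/SciPy sketch, not the paper's implementation: the grid size, the four well locations, and the measured discrepancies are all made up for illustration, and SciPy's generic LSQR solver stands in for SEP's conjugate-direction code.

```python
import numpy as np
from scipy.sparse import eye, kron, diags, vstack, csr_matrix
from scipy.sparse.linalg import lsqr

n = 20                                       # toy n x n grid (hypothetical)
N = n * n

# 1-D second difference, combined into a 2-D Laplacian via Kronecker products
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
lap = kron(eye(n), D2) + kron(D2, eye(n))    # regularization operator of Eq. (5)

# "Well location selector" K: one row per well, picking that grid point
wells = [(3, 4), (15, 2), (7, 16), (12, 12)]         # hypothetical well locations
cols = [ix * n + iy for ix, iy in wells]
K = csr_matrix((np.ones(len(wells)), (np.arange(len(wells)), cols)),
               shape=(len(wells), N))

d_w = np.array([1.0, -0.5, 0.3, 0.8])        # measured discrepancies (made up)

# Stack the fitting goals: 0 ~ K d - d_w  and  0 ~ eps * lap d
eps = 0.05
A = vstack([K, eps * lap])
rhs = np.concatenate([d_w, np.zeros(N)])
delta = lsqr(A, rhs, iter_lim=3000)[0].reshape(n, n)
```

The solution `delta` honors the well values closely and varies smoothly in between; as the text notes next, for small `eps` the unpreconditioned iteration converges slowly.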
I choose the 2-D Laplacian, $\nabla^2$, as the regularization operator,
because the vertical misfit is in general not dependent on the geology of the
horizon we are trying to update. Therefore we opt for the isotropic
smoothing qualities of $\nabla^2$, rather than those of a PEF. However,
as a future project I suggest implementing this methodology in a recursive
layer-stripping scheme. In that case, making the regularization operator
dependent on the geometrical and physical properties of the overlying horizons
makes sense, though the details have not yet been formalized.
This problem is grossly underdetermined (15 known values vs. 1600
points in model space), so even for a good choice of regularization operator,
convergence is slow; too slow, in fact, to make this method practical for
application to real-world problems. To solve this problem, I use
preconditioning, defined as any change of variables which speeds convergence
of an iterative scheme.
In general, the preconditioning operator $\mathbf{P}$ is defined by
\begin{equation}
\mathbf{m} = \mathbf{P}\,\mathbf{p},
\tag{6}
\end{equation}
where $\mathbf{m}$ is the model variable and $\mathbf{p}$ is the so-called
``preconditioned variable.'' The choice of $\mathbf{P}$ is arbitrary.
However, Fomel et al. (1997) propose a simple, yet highly effective
choice for $\mathbf{P}$: the inverse of the regularization operator.
Our regularization operator ($\nabla^2$) is a 2-D convolution, so its
inverse is obtained by 2-D deconvolution, in general an ill-defined
process. But the Helix Transform (Claerbout, 1997a) comes to the
rescue, allowing us to use simple 1-D algorithms to perform 2-D
deconvolution.
First wrap $\nabla^2$ onto a helix, then unroll it to create a long 1-D filter,
as shown in Figure 4.
Through the process of spectral factorization, we compute a minimum-phase
filter $\mathbf{A}$ whose autocorrelation is $\nabla^2$:
$\mathbf{A}^{\!\top}\mathbf{A} = \nabla^2$.
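As an illustration of spectral factorization, here is a minimal sketch of one classical approach, the Kolmogorov (FFT/cepstrum) method; the function name and parameters are my own, and Claerbout (1997b) describes the methods actually used at SEP.

```python
import numpy as np

def kolmogorov_factor(r, nlag, nfft=1024):
    """Kolmogorov spectral factorization: given one side of a symmetric
    autocorrelation r[0..nlag], return a minimum-phase filter whose
    autocorrelation matches it."""
    # Wrap the two-sided autocorrelation around the FFT grid
    rfull = np.zeros(nfft)
    rfull[:nlag + 1] = r
    rfull[nfft - nlag:] = r[1:nlag + 1][::-1]
    S = np.fft.fft(rfull).real               # power spectrum; must be positive
    # Half the log-spectrum, taken to the cepstral domain
    c = np.fft.ifft(np.log(S)).real / 2.0
    # Keep only the causal part of the cepstrum: this enforces minimum phase
    c[1:nfft // 2] *= 2.0
    c[nfft // 2 + 1:] = 0.0
    A = np.exp(np.fft.fft(c))                # frequency response of the factor
    return np.fft.ifft(A).real               # minimum-phase filter coefficients

# Sanity check on a filter whose factorization we know: (1, -0.5) has
# autocorrelation (1.25, -0.5) at lags 0 and 1
a_est = kolmogorov_factor(np.array([1.25, -0.5]), nlag=1)
```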
Since $\mathbf{A}$ is minimum phase, successive convolution with
$\mathbf{A}$ and $\mathbf{A}^{\!\top}$ is multiplication with lower and
upper triangular matrices, respectively.
Therefore, the inverse operation, deconvolution, is a triangular matrix
inversion, $(\nabla^2)^{-1} = \mathbf{A}^{-1}\mathbf{A}^{-\top}$,
which can be done very quickly in this case with simple back-substitution,
an $O(n)$ operation! For more
details on the spectral factorization methods used at SEP, see
Claerbout (1997b), or more recently, Sava et al. (1998).
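To make the back-substitution concrete, here is a minimal 1-D sketch: convolution with a short minimum-phase filter is multiplication by a lower-triangular Toeplitz matrix, and its inverse is polynomial division. On a helix, the same recursion performs 2-D deconvolution. The filter coefficients below are hypothetical, not the factored Laplacian.

```python
import numpy as np

def causal_convolve(a, x):
    """y = A x: multiply by the lower-triangular Toeplitz matrix
    built from the short causal filter a."""
    y = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(min(len(a), i + 1)):
            y[i] += a[j] * x[i - j]
    return y

def causal_deconvolve(a, y):
    """x = A^{-1} y by back-substitution (polynomial division).
    Cost is O(n * len(a)): effectively O(n) for a short helix filter."""
    x = np.zeros_like(y)
    for i in range(len(y)):
        acc = y[i]
        for j in range(1, min(len(a), i + 1)):
            acc -= a[j] * x[i - j]
        x[i] = acc / a[0]
    return x

# A short minimum-phase filter standing in for the factored helix Laplacian
a = np.array([1.0, -0.6, 0.15])
x = np.random.default_rng(0).normal(size=64)
x_rec = causal_deconvolve(a, causal_convolve(a, x))
```

Because the filter is minimum phase, `a[0]` is nonzero and the recursion is stable, so the round trip recovers `x` exactly (up to floating-point error).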
[Figure 4: The 2-D Laplacian wrapped onto a helix and unrolled into a long 1-D filter.]
Now that we can compute the preconditioner $\mathbf{P}$ with confidence,
repose the original problem. First apply the following change of variables:
\begin{equation}
\boldsymbol{\Delta} = \mathbf{P}\,\mathbf{p}.
\tag{7}
\end{equation}
Substitute Equation (7) into Equations (4) and (5) to obtain
\begin{equation}
\begin{aligned}
\mathbf{0} &\approx \mathbf{K}\,\mathbf{P}\,\mathbf{p} - \boldsymbol{\Delta}_w, \\
\mathbf{0} &\approx \epsilon\,\mathbf{p}.
\end{aligned}
\tag{8}
\end{equation}
We solve for $\mathbf{p}$ iteratively, then convert back to
$\boldsymbol{\Delta}$ via Equation (7). Figure 5 shows the resultant
$\Delta(x,y)$ for both the shallow and deep horizons. Does preconditioning
work? Yes! In this example, preconditioning increases the rate of convergence
by two orders of magnitude: roughly 10 CD iterations instead of 1000.
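The effect can be reproduced on a toy problem. The following is a minimal 1-D analogue, not the paper's 2-D setup: regularization by a first-difference operator, preconditioning by its exact inverse (causal integration), and SciPy's LSQR in place of SEP's conjugate-direction solver. All sizes and values are made up.

```python
import numpy as np
from scipy.sparse import diags, vstack, csr_matrix
from scipy.sparse.linalg import lsqr

n = 200
k_idx = np.array([10, 60, 110, 150, 190])            # "well" locations (made up)
d_w = np.array([0.0, 1.0, -0.5, 0.7, 0.2])

K = csr_matrix((np.ones(5), (np.arange(5), k_idx)), shape=(5, n))
D = diags([1.0, -1.0], [0, -1], shape=(n, n))        # first difference (regularizer)
C = np.tril(np.ones((n, n)))                         # causal integration, C = D^{-1}
eps = 0.01

# Plain regularized system: 0 ~ K m - d_w, 0 ~ eps D m
A_plain = vstack([K, eps * D]).tocsr()
res_plain = lsqr(A_plain, np.r_[d_w, np.zeros(n)], iter_lim=100000)

# Preconditioned system: m = C p, so 0 ~ K C p - d_w, 0 ~ eps p
A_pre = np.vstack([K.toarray() @ C, eps * np.eye(n)])
res_pre = lsqr(A_pre, np.r_[d_w, np.zeros(n)], iter_lim=100000)

print("plain iterations:", res_plain[2], " preconditioned:", res_pre[2])
```

The two systems minimize the same objective, so the models agree, but the preconditioned normal operator is an identity plus a low-rank term and converges in a handful of iterations.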
[Figure 5: The interpolated vertical correction $\Delta(x,y)$ for the shallow and deep horizons.]
Now we compute the final output model, $h_{\mathrm{new}}(x,y)$:
\begin{equation}
h_{\mathrm{new}}(x,y) = h_{\mathrm{seis}}(x,y) + \Delta(x,y).
\tag{9}
\end{equation}
The results for both the shallow and deep horizons are shown in Figures 6 and 7, respectively.
[Figure 6: The final output model for the shallow horizon.]
[Figure 7: The final output model for the deep horizon.]