People who are already familiar
with ``geophysical inverse theory''
may wonder what new material they can gain
from a book focused on ``estimation of images.''
Given a matrix relation $\mathbf{d} = \mathbf{F}\mathbf{m}$
between model $\mathbf{m}$ and data $\mathbf{d}$,
common sense suggests that
practitioners should find $\mathbf{m}$
in order to minimize the length $|\mathbf{r}|$
of the residual $\mathbf{r} = \mathbf{F}\mathbf{m} - \mathbf{d}$.
A theory of Gauss suggests that a better
(minimum variance, unbiased) estimate
results from minimizing the quadratic form
$\mathbf{r}' \, \sigma_{rr}^{-1} \, \mathbf{r}$, where
$\sigma_{rr}$ is the noise covariance matrix.
I have never seen an application in which
the noise covariance matrix was given,
but practitioners often find ways to estimate it:
they regard various sums as ensemble averages.
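As a small sketch of the two estimates (my own illustration, not code from this book): ordinary least squares minimizes $|\mathbf{r}|$, while the Gauss estimate with a diagonal noise covariance is equivalent to weighting each residual by the inverse noise standard deviation before solving. The matrix sizes and noise levels below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small linear relation d = F m, with noise of unequal variance.
F = rng.standard_normal((20, 3))
m_true = np.array([1.0, -2.0, 0.5])
sigma = np.where(np.arange(20) < 10, 0.1, 2.0)   # noise std per data sample
d = F @ m_true + sigma * rng.standard_normal(20)

# Ordinary least squares: minimize |F m - d|^2.
m_ols, *_ = np.linalg.lstsq(F, d, rcond=None)

# Gauss / minimum-variance estimate: minimize r' C^{-1} r with C = diag(sigma^2),
# i.e. scale each row of F and each datum by 1/sigma, then solve least squares.
W = 1.0 / sigma
m_wls, *_ = np.linalg.lstsq(F * W[:, None], d * W, rcond=None)

print(m_ols, m_wls)
```

The weighted estimate discounts the noisy second half of the data, which is exactly the role the covariance matrix plays in the quadratic form above.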
Additional features of inverse theory are exhibited by the partitioned matrix
$$
\begin{bmatrix} \mathbf{d}_{\rm incons} \\ \mathbf{d}_{\rm consis} \end{bmatrix}
\;=\;
\begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{B} & \mathbf{0} \end{bmatrix}
\begin{bmatrix} \mathbf{m}_{\rm fit} \\ \mathbf{m}_{\rm null} \end{bmatrix}
\eqno{(0.1)}
$$
Simple inverse theory suggests we should minimize $|\mathbf{m}|$, which amounts to setting the null space $\mathbf{m}_{\rm null}$ to zero.
Bayesian inverse theory says we should use
the model covariance matrix $\sigma_{mm}$
and minimize $\mathbf{m}' \, \sigma_{mm}^{-1} \, \mathbf{m}$
for a better answer,
although it would include some nonzero portion of the null space.
Never have I seen an application in which
the model-covariance matrix was a given prior.
Specifying or estimating it is a puzzle for experimentalists.
For example, when a model space $\mathbf{m}$
is a signal $m(t)$
(having components that are a function of time) or
a stratified earth model $m(z)$ (with components that are a function of depth $z$),
we might supplement the fitting goal
$\mathbf{0} \approx \mathbf{r} = \mathbf{F}\mathbf{m} - \mathbf{d}$
with a
``minimum wiggliness'' goal like
$\mathbf{0} \approx d^2 m(z)/dz^2$.
Neither
the model covariance matrix $\sigma_{mm}$
nor
the null space $\mathbf{m}_{\rm null}$
seems learnable from the data and equation (0.1).
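A minimal sketch of such a supplemented fitting goal (my own illustration; the grid size, observation depths, and weight $\epsilon$ are arbitrary assumptions): a few scattered samples of a depth profile underdetermine the model, and stacking a scaled second-difference operator under the data-fitting rows fills the gaps with a smooth, minimum-wiggliness profile.

```python
import numpy as np

# Underdetermined fit: only a few depths of a profile m(z) are observed.
nz = 50
idx = np.array([3, 17, 30, 44])              # observed depths (hypothetical)
F = np.zeros((idx.size, nz))
F[np.arange(idx.size), idx] = 1.0            # F picks out the observed samples
d = np.sin(2 * np.pi * idx / nz)             # observed values

# Second-difference operator D expresses the goal 0 ~ d^2 m / dz^2.
D = np.zeros((nz - 2, nz))
for i in range(nz - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Stack the fitting goal and the (weighted) wiggliness goal; solve jointly.
eps = 0.1
A = np.vstack([F, eps * D])
b = np.concatenate([d, np.zeros(nz - 2)])
m, *_ = np.linalg.lstsq(A, b, rcond=None)

print(m[idx])   # close to d; between observations, m varies smoothly
```

Without the $\epsilon \mathbf{D}$ rows the system would have a huge null space (every unobserved depth); the regularization chooses one particular member of it.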
In fact, both the null space and the model covariance matrix
can be estimated from the data and that is one of the novelties of this book.
To convince you it is possible
(without launching into the main body of the book),
I offer a simple example of an operator and data set
from which your human intuition
will immediately tell you
what you want
for the whole model space, including the null space.
Consider the data $\mathbf{d}$ to be a sinusoidal function of time (or depth)
and take $\mathbf{B} = \mathbf{I}$, so that
the operator $\mathbf{F}$
is
a delay operator with truncation
of the signal shifted off the end of the space.
Solving for $\mathbf{m}_{\rm fit}$,
the findable part of the model,
you get a back-shifted sinusoid.
Your human intuition, not any mathematics here,
tells you that the truncated part of the model,
$\mathbf{m}_{\rm null}$, should be a logical continuation of the sinusoid
at the same frequency.
It should not have a different frequency, nor become a square wave,
nor be a sinusoid abruptly truncated to zero.
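The delay-with-truncation experiment can be sketched numerically (my own illustration; the signal length, period, and shift are arbitrary choices). The last `shift` columns of the operator are zero, so those model samples form the null space, and a minimum-norm least-squares solution returns them as zero rather than as the sinusoidal continuation intuition demands.

```python
import numpy as np

n, shift = 40, 10
t = np.arange(n)
m_true = np.sin(2 * np.pi * t / 20.0)        # a sinusoidal model

# Delay operator with truncation: d[i] = m[i - shift], and the last
# `shift` samples of m are shifted off the end of the space.
F = np.eye(n, k=-shift)
d = F @ m_true

# lstsq returns the minimum-norm solution: m_fit is recovered exactly,
# but the null-space part (the last `shift` samples) comes back as zero.
m_est, *_ = np.linalg.lstsq(F, d, rcond=None)

print(m_est[n - shift:])   # zeros, not the continued sinusoid
```

This is exactly the abrupt truncation to zero that the text says intuition rejects; estimating the missing continuation is what the rest of the book is about.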
Prior knowledge exploited in this book is that unknowns are functions of time and space (so the covariance matrix has known structure). This structure gives them predictability. Predictable functions in 1-D are tides; in 2-D, lines on images (lineaments); in 3-D, sedimentary layers; and in 4-D, wavefronts. The tool we need to best handle this predictability is the multidimensional ``prediction-error filter'' (PEF), a central theme of this book.
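A minimal 1-D sketch of the prediction idea (my own illustration, not the book's multidimensional machinery): fit coefficients that predict each sample of a sinusoid from its two predecessors. The prediction error is essentially zero, because a sinusoid obeys an exact two-term recursion; the PEF $(1, -a_1, -a_2)$ annihilates the predictable signal.

```python
import numpy as np

# A sinusoid is perfectly predictable from its past two samples:
# s[t] = 2 cos(w) s[t-1] - s[t-2].
n = 200
s = np.sin(0.3 * np.arange(n))

# Least-squares prediction of s[t] from s[t-1] and s[t-2].
X = np.column_stack([s[1:-1], s[:-2]])   # past samples
y = s[2:]                                # sample to predict
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The prediction error is the output of the prediction-error filter.
error = y - X @ coef
print(np.max(np.abs(error)))
```

For noisy or multidimensional data the error would not vanish, and the residual energy is exactly what PEF estimation minimizes.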