Next: Inverse Theory for signal
Up: Background and definitions
Previous: An example of calculating
If the convolution is expressed in matrix form
as $\mathbf{d} = \mathbf{B}\,\mathbf{f}$, where $\mathbf{B}$ is the convolution matrix of $\mathbf{b}$, the filter $\mathbf{f}$ can be solved for to get the least-squares minimum
of $\|\mathbf{B}\,\mathbf{f} - \mathbf{d}\|^2$. The normal-equations expression for the least-squares inverse
is $(\mathbf{B}^T\mathbf{B})\,\mathbf{f} = \mathbf{B}^T\mathbf{d}$, or $\mathbf{f} = (\mathbf{B}^T\mathbf{B})^{-1}\mathbf{B}^T\mathbf{d}$. This expression for $\mathbf{f}$ may be decomposed into simpler expressions
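The normal equations can be sketched numerically. The wavelet $\mathbf{b}$, desired output $\mathbf{d}$, and filter length below are purely illustrative choices, not values from the text:

```python
import numpy as np

# Hypothetical wavelet b and desired output d (a delayed spike),
# chosen only to illustrate the normal equations.
b = np.array([1.0, 0.5, 0.25])
d = np.zeros(8)
d[1] = 1.0

nf = 6  # filter length (illustrative choice)
# Build the convolution matrix B of b, so that B @ f == np.convolve(b, f)
B = np.zeros((len(b) + nf - 1, nf))
for j in range(nf):
    B[j:j + len(b), j] = b

# Normal equations: (B^T B) f = B^T d
f = np.linalg.solve(B.T @ B, B.T @ d)

# |Bf - d|^2 is the least-squares minimum
residual = np.sum((B @ f - d) ** 2)
```

Note that `B.T @ B` is the Toeplitz autocorrelation matrix of `b`, which is what makes the frequency-domain decomposition below possible.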
in the frequency domain, since $\mathbf{B}^T\mathbf{B}$ may be expressed as $\bar{B}(\omega)B(\omega)$. The expression may be transformed into the frequency domain as
$\bar{B}(\omega)B(\omega)\,F(\omega) = \bar{B}(\omega)D(\omega)$, or
$F(\omega) = \bar{B}(\omega)D(\omega)/\bigl(\bar{B}(\omega)B(\omega)\bigr)$. (Here $D(\omega)$ indicates a component of the Fourier transform
of the data, and
$\bar{B}(\omega)$ indicates the complex conjugate of $B(\omega)$.)
Canceling out $\bar{B}(\omega)$ gives
$F(\omega) = D(\omega)/B(\omega)$. Thus in the frequency domain,
where filtering is described as a multiplication such as
$D(\omega) = B(\omega)F(\omega)$, inversion is simply division, or
$F(\omega) = D(\omega)/B(\omega)$. The values of
$B(\omega)$, $D(\omega)$, and $F(\omega)$ are scalars (although they
are complex numbers).
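The division can be checked numerically. A minimal sketch, assuming illustrative values for the wavelet and filter and a wavelet whose spectrum has no zeros, showing that dividing the spectra recovers the filter:

```python
import numpy as np

# Hypothetical wavelet b and filter f (illustrative values only)
b = np.array([1.0, 0.5, 0.25])
f_true = np.array([0.8, -0.3, 0.1, 0.05])
d = np.convolve(b, f_true)      # filtering: d = b * f

n = len(d)                      # pad FFTs so linear convolution matches
Bw = np.fft.fft(b, n)           # B(omega): one complex scalar per frequency
Dw = np.fft.fft(d, n)
Fw = Dw / Bw                    # inversion is simply division
f_est = np.fft.ifft(Fw).real[:len(f_true)]
```

Here `f_est` matches `f_true` because this particular `b` has no spectral zeros; the stabilizer discussed below is what handles the case where it does.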
The term $\bar{B}(\omega)B(\omega)$ in the denominator
is the Fourier transform of the autocorrelation of $\mathbf{b}$. If $\mathbf{B}^T\mathbf{B}$ is the identity matrix $\mathbf{I}$, $\bar{B}(\omega)B(\omega)$ will be constant.
This corresponds to an input with a white spectrum.
If all the terms of $\mathbf{B}^T\mathbf{B}$ are constant,
$\bar{B}(\omega)B(\omega)$ will be non-zero only at $\omega = 0$, and the inversion will be unstable.
This corresponds to a data series containing a constant.
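The constant-series case is easy to verify directly: the spectrum of a constant series is non-zero only at the zero frequency, so every other frequency carries no information to divide by. A small check (series length is an arbitrary choice):

```python
import numpy as np

# Spectrum of a constant series: all energy sits at omega = 0
c = np.ones(8)
Cw = np.fft.fft(c)
# Cw[0] equals the sum of the series; every other component is zero
```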
It can be seen that $\bar{B}(\omega)B(\omega)$ is a measure
of the information available at $\omega$, and
$1/\bigl(\bar{B}(\omega)B(\omega)\bigr)$ is a function of the uncertainty, or variance,
at $\omega$. The original autocorrelation matrix $\mathbf{B}^T\mathbf{B}$ is the information matrix,
and its inverse is
the covariance matrix (Strang, 1986).
The expression for $F(\omega)$
will generally have a stabilizer $\epsilon$ in the denominator,
$F(\omega) = \bar{B}(\omega)D(\omega)/\bigl(\bar{B}(\omega)B(\omega) + \epsilon\bigr)$, to avoid
having $F(\omega)$ approach infinity
when $\bar{B}(\omega)B(\omega)$ gets small.
Adding this stabilizer in the frequency domain corresponds to
adding a small value to the diagonal of the autocorrelation matrix.
In the cases discussed here,
the stabilizer will seldom be needed since random noise in the data
generally keeps $\bar{B}(\omega)B(\omega)$ from going to zero.
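A sketch of the stabilized division, using a wavelet whose spectrum vanishes at $\omega = 0$ so that plain division would fail; the wavelet, data, and value of $\epsilon$ are all illustrative assumptions:

```python
import numpy as np

# A differencing wavelet: its spectrum B(omega) is zero at omega = 0,
# so unstabilized division would produce 0/0 at that frequency.
b = np.array([1.0, -1.0])
d = np.convolve(b, np.array([0.5, 0.25, 0.1]))

n = len(d)
Bw = np.fft.fft(b, n)
Dw = np.fft.fft(d, n)

# Stabilizer: a small fraction of the peak power (illustrative choice),
# equivalent to adding to the diagonal of the autocorrelation matrix.
eps = 1e-3 * np.max(np.abs(Bw) ** 2)
Fw = np.conj(Bw) * Dw / (np.conj(Bw) * Bw + eps)
```

With `eps` in the denominator every component of `Fw` stays finite, at the cost of a small bias where $\bar{B}(\omega)B(\omega)$ is large.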
Stanford Exploration Project
2/9/2001