Maximum entropy spectral analysis
Next: Computing the Prediction Error | Up: Berryman: MESA | Previous: Introduction
Given a discrete (possibly complex) time series $x_t$ of $N$ values with sampling interval $\Delta t$ (and Nyquist frequency $W = 1/2\Delta t$), we wish to compute an estimate of the power spectrum $P(f)$, where $f$ is the frequency. It is well known that

$\displaystyle P(f) = \Delta t\sum_{n=-\infty}^{\infty} R_n\, e^{-2\pi i f n\Delta t},$    (A-1)

where the autocorrelation function $R_n$ is defined by (for $n \ge 0$)

$\displaystyle R_n = \lim_{K\rightarrow\infty}\frac{1}{2K+1}\sum_{t=-K}^{K} x_{t+n}\, x_t^*, \qquad R_{-n} = R_n^*.$    (A-2)
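As a concrete illustration (our own sketch, not part of the original text), the infinite time average in (2) can be replaced by a finite sum over the available samples; the function name and the biased divide-by-$N$ normalization are our choices here.

```python
import numpy as np

def autocorr_estimate(x, M):
    """Biased estimate of the first M+1 autocorrelation lags R_0..R_M of a
    (possibly complex) series x: the infinite time average in the definition
    above is replaced by a finite sum divided by the series length N."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    return np.array([np.sum(x[n:] * np.conj(x[:N - n])) / N
                     for n in range(M + 1)])
```

Negative lags follow from $R_{-n} = R_n^*$; the biased normalization is one common choice because it keeps the resulting Toeplitz matrix of lags positive semidefinite.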
Now suppose that we use the finite sequence $x_1, x_2, \ldots, x_N$ to estimate the first $M+1$ autocorrelation values $R_0, R_1, \ldots, R_M$. (Methods of obtaining these estimates are discussed in the section on Computing the Prediction Error Filter.) Then, Burg has shown that maximizing the average entropy (see Appendix A for a derivation)

$\displaystyle h = \frac{1}{4W}\int_{-W}^W \ln\left[2WP(f)\right] df,$    (A-3)

subject to the constraint that the known autocorrelation values satisfy (1) is equivalent to extrapolating the autocorrelation for $\vert n\vert > M$ in the most random possible manner.
Doing the math, we find that

$\displaystyle h' = \frac{1}{4W}\int_{-W}^W \ln\left[2WP(f)\right] df - \sum_{n=-M}^{M}\lambda_n\left[\int_{-W}^W P(f)\, e^{2\pi i f n\Delta t}\, df - R_n\right].$    (A-4)

The $\lambda_n$'s are Lagrange multipliers to be determined. That the variation of $h'$ with respect to $P(f)$ for $-W \le f \le W$ should be zero is the essence of the variational principle. The value of $h'$ is then stationary with respect to changes in the $\lambda_n$'s, which are unknown. We can infer from Equation (4) that

$\displaystyle P^{-1}(f) = 4W\sum_{n=-M}^{M}\lambda_n\, e^{2\pi i f n\Delta t}.$    (A-5)

Making the $z$-transform to $Z = e^{2\pi i f\Delta t}$, Equation (5) becomes a polynomial of the complex parameter $Z$:

$\displaystyle P^{-1}(f) = 4W\sum_{n=-M}^{M}\lambda_n Z^n.$    (A-6)
Since $P(f)$ is necessarily real and nonnegative, Equation (6) can be uniquely factored as

$\displaystyle P^{-1}(f) = 2WE_M^{-1}\left[\sum_m a_mZ^m\right]\left[\sum_n a_n^* Z^{-n}\right] = 2WE_M^{-1}\left\vert\sum_n a_nZ^n\right\vert^2,$    (A-7)

with $a_0 = 1$, $E_M > 0$, and both sums running over $0, 1, \ldots, M$. The first sum in (7) has all of its zeroes outside the unit circle (minimum phase) and the second sum has its zeroes inside the unit circle (maximum phase).
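As a quick numerical sanity check (our own illustration with a made-up filter, not an example from the text), one can verify the minimum phase property of the first factor in (7): all zeroes of $\sum_m a_m Z^m$ lie outside the unit circle.

```python
import numpy as np

# Hypothetical prediction-error filter (a_0, a_1, a_2) with a_0 = 1; any
# filter obtained from a valid autocorrelation via the Levinson recursion
# shares this minimum phase property.
a = np.array([1.0, -0.9, 0.2])

# np.roots wants coefficients ordered from highest power down, so reverse
# a to get the zeroes of the polynomial a_0 + a_1 Z + a_2 Z^2.
zeros = np.roots(a[::-1])

# Minimum phase in the convention of Eq. (7): all zeroes outside |Z| = 1.
print(np.all(np.abs(zeros) > 1.0))
```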
Fourier transforming Equation (1), we find that

$\displaystyle R_n = \int_{-W}^W P(f)\, e^{2\pi i f n\Delta t}\, df.$    (A-8)

Substituting (7) into (8), we find (after a few more transformations, and using $2W\Delta t = 1$) that $R_n$ is given by the contour (complex) integral

$\displaystyle R_n = \frac{E_M}{2\pi i}\oint_{\vert Z\vert = 1} \frac{Z^{\,n-1}\, dZ}{\left[\sum_m a_m Z^m\right]\left[\sum_k a_k^* Z^{-k}\right]}.$    (A-9)

The integrand of (9) can have simple poles inside the contour of integration at $Z = 0$ and at any zero of the maximum phase factor. The poles for $Z \neq 0$ can be eliminated by taking a linear combination of Equation (9) ``for various values of $n$.'' Using the Cauchy integral theorem, we find that

$\displaystyle \sum_{n=0}^{M} a_n^*\, R_{m-n} = E_M\,\delta_{m0}, \qquad 0 \le m \le M,$    (A-10)

since $a_0 = 1$. Equation (10) and its complex conjugate for the $a_n$'s are exactly the standard equations for the maximum and minimum phase spike deconvolution operators $(a_0^*, a_1^*, \ldots, a_M^*)$ and $(a_0, a_1, \ldots, a_M)$, respectively.
Notice that, if we define the $N \times N$ matrix $T_{N-1}$ as the equidiagonal matrix of autocorrelation values whose elements are given by

$\displaystyle \left[T_{N-1}\right]_{ij} \equiv R_{i-j},$    (A-11)

then Equation (10) may be seen as a problem of inverting the matrix $T_M$ to find the $(M+1)$-component vector $(a_0^*, a_1^*, \ldots, a_M^*)^T$. Equation (10) can be solved using the well-known Levinson algorithm for inverting a Toeplitz matrix. Therefore, a power spectral estimate can be computed by using (10) to find the $a_n$'s, and (7) to compute the spectrum.
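To make the recipe concrete, here is a sketch (our own code, not from the original text) of the Levinson recursion for the normal equations (10), followed by evaluation of the spectrum via (7). The function names are ours; the recursion returns the filter $c$ normalized with $c_0 = 1$, which in the notation above is the vector $(a_0^*, \ldots, a_M^*)$ (for real data the conjugation is immaterial).

```python
import numpy as np

def levinson(R, M):
    """Solve the normal equations (Eq. 10),
        sum_n c_n R_{m-n} = E_M * delta_{m0},  m = 0..M,
    for a Hermitian Toeplitz autocorrelation (R_{-n} = R_n^*) by the
    Levinson recursion.  R holds the lags R_0..R_M; returns (c, E_M)
    with c[0] = 1 and E_M the prediction error power."""
    R = np.asarray(R)
    c = np.zeros(M + 1, dtype=complex)
    c[0] = 1.0
    E = R[0].real
    for m in range(1, M + 1):
        # Reflection coefficient from the order-(m-1) solution.
        k = -np.dot(c[:m], R[m:0:-1]) / E
        c[:m + 1] = c[:m + 1] + k * np.conj(c[:m + 1][::-1])
        E *= 1.0 - abs(k) ** 2
    return c, E

def mesa_spectrum(c, E, f, dt):
    """Evaluate Eq. (7): P(f) = E_M / (2W |sum_n c_n Z^n|^2), with
    Z = exp(2 pi i f dt) and 1/(2W) = dt."""
    n = np.arange(len(c))
    A = np.exp(2j * np.pi * np.outer(f, n) * dt) @ c
    return E * dt / np.abs(A) ** 2
```

For example, the lags $R = (1, 0.5, 0.25)$ give the filter $(1, -0.5, 0)$ with $E_2 = 0.75$, from which `mesa_spectrum` evaluates $P(f)$ on any frequency grid in $[-W, W]$.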
One gap in the analysis should be filled before we proceed. That the variational principle is a stationary principle (i.e., $\delta h' = 0$) is obvious. That it is truly a maximum principle, however, requires some proof. First note that the average entropy computed from substituting (7) into (3) is exactly

$\displaystyle h = \tfrac{1}{2}\ln E_M.$    (A-12)

This fact can be proven by writing (3) as

$\displaystyle h = \frac{1}{2}\ln E_M + \frac{M}{4W}\int_{-W}^W \ln Z\, df - \frac{1}{4W}\int_{-W}^W \ln\left[\sum_{m=0}^{M} a_m Z^m\right] df - \frac{1}{4W}\int_{-W}^W \ln\left[Z^M\sum_{n=0}^{M} a_n^* Z^{-n}\right] df.$    (A-13)

The first integral in (13) vanishes identically as is shown in Appendix B. The second integral vanishes because (when rewritten as a contour integral over $dZ/2\pi i Z$) its argument is analytic for all $\vert Z\vert \le 1$ except for $Z = 0$, and the residue there is $\ln a_0 = 0$. The third integral can be rewritten as

$\displaystyle \frac{1}{4W}\int_{-W}^W \ln\left[Z^M\sum_{n=0}^{M} a_n^* Z^{-n}\right] df = \sum_{k=1}^{M}\frac{1}{4W}\int_{-W}^W \ln\left(Z - Z_k\right) df,$    (A-14)

where the $Z_k$'s are the $M$ zeroes of the maximum phase factor $\sum_n a_n^* Z^{-n}$. Each of the $M$ integrals on the right side of (14) vanishes because of the identities proven in Appendix B.
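The entropy identity (12) is easy to confirm numerically. The following check (our own illustration, with made-up filter values) builds $P(f)$ from (7) on a frequency grid and compares the quadrature of (3) with $\tfrac{1}{2}\ln E_M$.

```python
import numpy as np

# Numerical check of Eq. (12): h = (1/4W) * integral of ln[2W P(f)] df
# should equal (1/2) ln E_M exactly.
dt = 1.0                       # sampling interval, so W = 1/(2 dt) = 0.5
W = 0.5
a = np.array([1.0, -0.5])      # hypothetical filter with a_0 = 1
E = 0.75                       # corresponding prediction error power

F = 4096                       # frequency grid over [-W, W)
f = np.linspace(-W, W, F, endpoint=False)
Z = np.exp(2j * np.pi * f * dt)
P = E * dt / np.abs(np.polyval(a[::-1], Z)) ** 2   # Eq. (7), 1/(2W) = dt

df = 2 * W / F
h = np.sum(np.log(2 * W * P)) * df / (4 * W)       # Eq. (3) by Riemann sum
print(abs(h - 0.5 * np.log(E)))                    # ~0 up to quadrature error
```

The agreement is essentially exact because the integrand is smooth and periodic, for which an equispaced Riemann sum converges spectrally.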
For small deviations from the constraining values of $R_n$, and from the values of $\lambda_n$ computed from (8) once the $R_n$'s are known, we can expand $h'$ in a Taylor series:

$\displaystyle h' = \frac{1}{2}\ln E_M - \frac{1}{2}\sum_{m,n=-M}^{M}\delta\lambda_m^*\, H_{mn}\,\delta\lambda_n + \cdots.$    (A-15)

The $\delta\lambda_n$'s are small deviations in the $\lambda_n$'s. The $\lambda_n$'s are defined by (4). The matrix elements of $H$ are given by

$\displaystyle H_{mn} = 4W\int_{-W}^W P^2(f)\, Z^{\,n-m}\, df,$    (A-16)

with $Z = e^{2\pi i f\Delta t}$. $H$ is obviously Hermitian and is seen to be positive definite because

$\displaystyle \sum_{m,n=-M}^{M} c_m^*\, H_{mn}\, c_n = 4W\int_{-W}^W P^2(f)\left\vert\sum_{n=-M}^{M} c_n Z^n\right\vert^2 df \ge 0,$    (A-17)

where $\mathbf{c}$ is an arbitrary complex vector and the equality in (17) holds only when $\mathbf{c}$ is identically zero. The result (17) is sufficient to prove that $h$ is not only stationary, but actually a maximum.
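The positive definiteness of $H$ can also be seen numerically. The sketch below (our own, with the same made-up filter values as earlier illustrations) approximates the matrix of (16) by a Riemann sum over a frequency grid and checks that its eigenvalues are all positive.

```python
import numpy as np

dt = 1.0                       # sampling interval, so W = 1/(2 dt) = 0.5
W = 0.5
a = np.array([1.0, -0.5])      # hypothetical filter with a_0 = 1
E = 0.75                       # corresponding prediction error power
M = len(a) - 1

# P(f) from Eq. (7) on a uniform frequency grid over [-W, W).
F = 2048
f = np.linspace(-W, W, F, endpoint=False)
Z = np.exp(2j * np.pi * f * dt)
P = E * dt / np.abs(np.polyval(a[::-1], Z)) ** 2

# H_mn = 4W * integral of P(f)^2 Z^(n-m) df  (Eq. 16), by Riemann sum,
# for indices m, n = -M..M.
df = 2 * W / F
idx = np.arange(-M, M + 1)
H = 4 * W * np.array([[np.sum(P ** 2 * Z ** (n - m)) * df
                       for n in idx] for m in idx])

eig = np.linalg.eigvalsh(H)    # H is Hermitian, so eigenvalues are real
print(np.all(eig > 0))
```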
The analysis given in this section has at least two weak points: (a) For real data, we never measure the autocorrelation function directly. Rather, a finite time series is obtained and an autocorrelation estimate is computed. Given the autocorrelation estimate, an estimate of the minimum phase operator must then be inferred. A discussion of various estimates of the autocorrelation is given in the next section on Computing the Prediction Error Filter, along with a method of estimating the prediction error filter without computing an autocorrelation estimate. (b) Even assuming we could compute the ``best'' estimate of the autocorrelation, that estimate is still subject to random error. The probability of error increases as we compute values of $R_n$ with greater lag $n$. Since there is a one-to-one correspondence between the $R_n$'s and the $a_n$'s, the length of the operator can strongly affect the accuracy of the estimated MESA power spectrum. A method of estimating the optimum operator length for a given sample length $N$ will be discussed in the subsequent section on Choosing the Operator Length.
2009-04-13