
Maximum-entropy imaging

One approach to imaging with corrupted data or relatively small data sets is known as Maximum-Entropy Imaging (Gull and Daniell, 1978; Schmidt, 1979; Gull and Skilling, 1984). I will now describe this method briefly, although another method that is also based on the maximum entropy concept is the main focus of this section.

Perhaps the most common use of maximum-entropy imaging arises in astronomy, and especially in radioastronomy (Gull and Daniell, 1978). The goal is to find a map of the distribution of radio brightness across the sky. But the data collected and the methods used can produce a series of such maps, each having different resolutions and noise levels. There does not appear to be any preferred or ``best'' sky map associated with the data from this point of view. To resolve this ambiguity, we can choose to use an image that satisfies a maximum-entropy objective criterion (to be explained soon). The resulting maximum-entropy image should not be thought of as the ``true'' map of the sky in this context, but rather as a map that does not lead to conclusions for which there is either very little or no evidence in the data. The analogy to our problem in acoustics of finding relatively isolated targets in an otherwise homogeneous background is also apparent.

The reader interested in understanding the general context of entropy and its role in statistical mechanics and information theory would do well to read the paper by Jaynes (1957). The history of maximum-entropy methods in astronomy is also beyond our scope here, but I will point out that Gull and Daniell (1978) introduced this algorithm in part as an alternative to the CLEAN algorithm (Högbom, 1974), another popular imaging algorithm (although now less popular than the maximum-entropy approach) for use with incomplete and noisy data.

Following Gull and Daniell (1978), we define an intensity $m_{ij}$ at the pixel in the (i,j) position of a test map and let $\hat{m}_{kl}$ be the Fourier transform of $m_{ij}$. Suppose further that the data come to us in the form of measurements $\hat{n}_{kl}$ of the Fourier transform $\hat{m}_{kl}$. Assuming that these measurements have Gaussian errors with standard deviations $\sigma_{kl}$, the data-fitting part of the algorithm is a weighted least-squares error term of the form $\sum_{kl} \vert\hat{m}_{kl}-\hat{n}_{kl}\vert^2/\sigma^2_{kl}$. The objective constraint applied is that the entropy functional $S = -\sum_{ij} m_{ij}\ln m_{ij}$, determined by the nonnegative intensities in the final map, is a maximum. This maximum-entropy objective is natural for imaging a sparsely occupied field because, in the absence of data, its maximum is attained when all the intensities have the same value, thus providing a sky map that contains no information. Using these two terms to form an overall objective functional, including a Lagrange multiplier $\lambda$ for the data constraints, gives

\begin{displaymath}
Q(m_{ij},\lambda) = -\sum_{ij} m_{ij}\ln m_{ij} - \frac{\lambda}{2}\sum_{kl} \vert\hat{m}_{kl}-\hat{n}_{kl}\vert^2/\sigma^2_{kl}.
\end{displaymath}
One advantage of the second sum in $Q$ is that it follows a $\chi^2$ distribution, and therefore its value can be used as a measure of the goodness of the data fit. In particular, the expected value of the sum is equal to the number of terms in the sum. The disadvantage of this sum is that it requires prior knowledge of the standard deviations. For other imaging applications, the second term in $Q$ could be replaced by an output least-squares functional together with a tolerance value, as in Morozov's discrepancy principle (Morozov, 1967; 1984; Tikhonov and Arsenin, 1979; Groetsch, 1984; Hanke, 1997; Haber et al., 2000).
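To make the construction concrete, the following is a small numerical sketch in Python with NumPy. It is not the algorithm of Gull and Daniell; the function names, the starting map, the simple gradient-ascent scheme, and the step-size and multiplier values are all illustrative assumptions. It evaluates the entropy functional $S$, the weighted misfit sum, and a crude maximizer of $Q$ for a fixed multiplier $\lambda$:

```python
import numpy as np

def entropy(m):
    """Entropy functional S = -sum_ij m_ij ln m_ij (positive pixels only)."""
    m = m[m > 0]
    return -np.sum(m * np.log(m))

def chi2(m, n_hat, sigma):
    """Weighted least-squares misfit sum |m_hat - n_hat|^2 / sigma^2,
    where m_hat is the Fourier transform of the test map m."""
    return np.sum(np.abs(np.fft.fft2(m) - n_hat) ** 2 / sigma ** 2)

def max_entropy_map(n_hat, sigma=1.0, lam=10.0, step=2e-4, iters=1000):
    """Toy gradient ascent on Q = S - (lam/2) * chi2.

    The gradient of the misfit term uses F^H F = N I for the unnormalized
    numpy FFT, so F^H x = N * ifft2(x). The step size and iteration count
    are ad hoc choices for this illustration, not tuned values.
    """
    ny, nx = n_hat.shape
    m = np.full((ny, nx), 1.0 / (ny * nx))            # start from the flat map
    for _ in range(iters):
        resid = (np.fft.fft2(m) - n_hat) / sigma ** 2
        g_fit = np.real(np.fft.ifft2(resid)) * (ny * nx)   # F^H residual
        g = -(np.log(m) + 1.0) - lam * g_fit          # gradient of Q
        m = np.clip(m + step * g, 1e-12, None)        # keep intensities positive
    return m
```

As a sanity check, for noiseless data the iteration should drive the misfit far below that of the flat starting map while the entropy term keeps every pixel strictly positive; the multiplier lam controls the trade-off, with larger values enforcing a tighter data fit at the cost of a less uniform map.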

We will not pursue this approach further here, but instead introduce another approach that makes use of the maximum-entropy imaging concept.


Stanford Exploration Project
9/18/2001