
Extended MUSIC for inversion

MUSIC as described so far is what I would call an ``imaging method,'' in the sense that it attempts to locate objects but not to quantify their properties (such as the scattering strength $q$). We will show now that, with only a little more computational effort, we can also determine the scattering coefficients. The resulting algorithm will then be distinguished as an ``inversion method'' rather than an imaging method.

Our data are again the matrix elements of the response matrix ${\bf K}$. We know that ${\bf K}$ can be represented equivalently in two ways, by its scattering expansion and by its SVD, so that

$$
{\bf K} = \sum_{n=1}^{N} q_n {\bf H}_n {\bf H}_n^T = \sum_{m=1}^{M} \sigma_m {\bf V}_m {\bf V}_m^T ,
$$
recalling that there are $M$ sensors in our array, $N$ scatterers, and that the singular vectors are assumed to be normalized to unity.
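As a concrete, purely illustrative check of this dual representation, the following small NumPy sketch builds a synthetic ${\bf K}$ from a few point scatterers and compares it with its SVD. The linear sensor array, the wavenumber, and the form of the Green's-function vectors ${\bf H}_n$ in the function green are assumptions made only for this example, not choices taken from the text.
\begin{verbatim}
# Minimal synthetic sketch of the two representations of K.  The array
# geometry, wavenumber, and Green's-function vectors are illustrative
# assumptions only.
import numpy as np

M = 16                                    # number of sensors
k = 2.0 * np.pi                           # wavenumber (unit wavelength)
sensors = np.stack([np.linspace(-4.0, 4.0, M), np.zeros(M)], axis=1)

def green(x):
    """Hypothetical 2D Green's-function vector from point x to all sensors."""
    r = np.linalg.norm(sensors - x, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)

targets = np.array([[-1.0, 6.0], [0.5, 7.0], [2.0, 5.5]])   # N = 3 scatterers
q = np.array([1.0 + 0.5j, 2.0 + 0.0j, 0.7 - 0.3j])          # strengths q_n

# Scattering expansion: K = sum_n q_n H_n H_n^T (transpose, not conjugate).
H = np.array([green(x) for x in targets])
K = sum(qn * np.outer(Hn, Hn) for qn, Hn in zip(q, H))

# SVD representation: K = U diag(sigma) W^H.
U, sigma, Wh = np.linalg.svd(K)
print(np.allclose(K, (U * sigma) @ Wh))   # exact reconstruction -> True
print(np.round(sigma, 4))                 # only ~N significant singular values
\end{verbatim}
For well-separated scatterers the sketch should print $N$ singular values well above the remaining, numerically negligible ones, consistent with the rank argument used in the MUSIC discussion.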

One formulation of the inversion problem that is very similar in spirit to MUSIC is therefore to find a matrix

$$
\tilde{\bf K} = \sum_{r=1}^{L^2} \tilde{q}_r {\bf H}_r {\bf H}_r^T , \qquad \mbox{(Ktilde)}
$$
whose elements agree with those of the measured ${\bf K}$ according to some criterion such as least-squares. The sum in (Ktilde) is taken over grid points in an $L\times L$ (pixel units) model space in 2D. Generalization to model spaces having other shapes and/or to 3D is straightforward, as no use is made here of any symmetries in the choice of sensor array arrangement, or of the locations of the test pixels chosen as possible targets and indexed by $r$. We will want to take advantage of the orthogonality of the singular vectors ${\bf V}_m$. So we suppose that it is possible to solve an equation of the form

$$
\sum_{r=1}^{L^2} \tilde{q}_r {\bf H}_r {\bf H}_r^T \simeq \sum_{m=1}^{M} \sigma_m {\bf V}_m {\bf V}_m^T , \qquad \mbox{(KeqK)}
$$
in the least-squares sense for the coefficients $\tilde{q}_r$ associated with some set of points (pixel or voxel centers) in the model space.

Applying the singular vectors ${\bf V}^*_m$ to both the right and left sides of (KeqK), we find

$$
\sum_{r=1}^{L^2} \tilde{q}_r \left({\bf H}_r^T {\bf V}^*_m\right)^2 = \sigma_m ,
$$
which can then be rewritten in matrix notation as

$$
\left(
\begin{array}{ccc}
({\bf H}_1^T {\bf V}^*_1)^2 & \cdots & ({\bf H}_{L^2}^T {\bf V}^*_1)^2 \\
({\bf H}_1^T {\bf V}^*_2)^2 & \cdots & ({\bf H}_{L^2}^T {\bf V}^*_2)^2 \\
\vdots & & \vdots \\
({\bf H}_1^T {\bf V}^*_M)^2 & \cdots & ({\bf H}_{L^2}^T {\bf V}^*_M)^2
\end{array}
\right)
\left(
\begin{array}{c}
\tilde{q}_1 \\ \tilde{q}_2 \\ \vdots \\ \tilde{q}_{L^2}
\end{array}
\right)
=
\left(
\begin{array}{c}
\sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_M
\end{array}
\right) . \qquad \mbox{(bigmatrixeqn)}
$$
We see that estimates of the coefficients can now be found by solving this rather large ($M\times L^2$) and underdetermined least-squares system. This form is similar to various kinds of tomography, with the singular values taking the role of the data and the singular vectors taking the role of the discrete scanning modality. (For comparison, in traveltime tomography the data are the traveltimes, the matrix elements are the ray-path lengths through the cells, and the model parameters are the slownesses in the cells.) Note that the elements of the present matrix are in general complex, as are the scattering coefficients $\tilde{q}_r$, while the singular values $\sigma_m$ are all nonnegative real numbers.
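To make the construction concrete, here is a sketch, again in NumPy and with the same kind of hypothetical array, grid, and Green's-function vectors as above, of building this $M\times L^2$ system and solving it by least squares. One caveat: the text writes the SVD in the symmetric form ${\bf K}=\sum_m \sigma_m {\bf V}_m {\bf V}_m^T$, whereas the code uses NumPy's generic factorization ${\bf K}={\bf U}\,\mbox{diag}(\sigma)\,{\bf W}^H$; sandwiching ${\bf K}$ between ${\bf u}_m^H$ and ${\bf w}_m$ then plays the role of the $({\bf H}_r^T{\bf V}_m^*)^2$ entries, and for the symmetric ${\bf K}$ the two agree up to phases.
\begin{verbatim}
# Sketch of the (bigmatrixeqn) system on a synthetic example.  Grid, array,
# and Green's-function vectors are illustrative assumptions; variables defined
# here (Hgrid, A, sigma, true_idx, ...) are reused in the later sketches.
import numpy as np

M, L, k = 16, 12, 2.0 * np.pi             # sensors, grid size, wavenumber
sensors = np.stack([np.linspace(-4.0, 4.0, M), np.zeros(M)], axis=1)

def green(x):                             # hypothetical Green's-function vector
    r = np.linalg.norm(sensors - x, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)

# L x L grid of candidate pixels; the scatterers sit exactly on grid nodes.
gx, gy = np.meshgrid(np.linspace(-3.0, 3.0, L), np.linspace(4.0, 8.0, L))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
Hgrid = np.array([green(x) for x in grid])            # shape (L^2, M)

true_idx = np.array([5, 40, 77])                      # pixel indices of targets
q_true = np.array([1.0 + 0.5j, 2.0 + 0.0j, 0.7 - 0.3j])
K = sum(qn * np.outer(Hgrid[i], Hgrid[i]) for qn, i in zip(q_true, true_idx))

U, sigma, Wh = np.linalg.svd(K)
W = Wh.conj().T

# A[m, r] = (u_m^H H_r)(H_r^T w_m), so that  A @ qtilde = sigma.
A = np.empty((M, L * L), dtype=complex)
for r in range(L * L):
    A[:, r] = (U.conj().T @ Hgrid[r]) * (Hgrid[r] @ W)

# Underdetermined (M equations, L^2 unknowns): lstsq returns the minimum-norm
# solution, which is spread over many pixels and does not yet equal q_true.
qtilde, *_ = np.linalg.lstsq(A, sigma.astype(complex), rcond=None)
print(np.round(np.abs(qtilde[true_idx]), 3))
\end{verbatim}
The true, sparse $\tilde{q}$ is one solution of this system, but not the one a generic least-squares solver returns; the pixel selection described next removes that ambiguity.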

The computational situation can be improved significantly by noting the similarity of the matrix elements to the direction cosines already treated in MUSIC. In fact, if we take the magnitude of each matrix element, we have (up to the factor $\vert{\bf H}_r\vert^2$) the square of a direction cosine. Furthermore, if we sum these magnitudes along the columns of the matrix, then we have obtained exactly the quantity

$$
\vert{\bf H}_r\vert^2 \sum_{m=1}^{M} \cos^2\left({\bf V}_m , {\bf H}_r\right) ,
$$
which -- except for the normalization factor $\vert{\bf H}_r\vert^2$ -- is the sum of all the squared direction cosines associated with the vector ${\bf H}_r$. This column sum is therefore a measure of what we might call the ``coverage'' of the pixel centered at ${\bf r}$. If there is no coverage, this column sum is zero and the pixel does not contain a measurable target. If the normalized coverage is close to unity, then this pixel is one that MUSIC will clearly identify as a target. For intermediate values, the situation is somewhat ambiguous, as it is in the normal MUSIC algorithm, but clearly the closer the normalized sum is to unity, the more likely it is that the pixel contains a target.
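In code these column sums are immediate. The sketch below continues the synthetic example above (reusing A, Hgrid, sigma, and true_idx); one added assumption is that the sum is restricted to the rows belonging to the numerically significant singular values, since summing the squared direction cosines over the complete orthonormal set of all $M$ singular vectors would give exactly $\vert{\bf H}_r\vert^2$ for every pixel.
\begin{verbatim}
# Column-sum "coverage" of each candidate pixel (continues the sketch above).
sig = sigma > 1e-8 * sigma[0]                 # rows with significant sigma_m
coverage = np.abs(A[sig, :]).sum(axis=0)      # column sums, length L^2
normalized = coverage / np.sum(np.abs(Hgrid) ** 2, axis=1)   # lies in [0, 1]

print(np.round(normalized[true_idx], 3))      # close to 1 at the true pixels
print(np.round(np.median(normalized), 3))     # typically much smaller elsewhere
\end{verbatim}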

Now we have a clear pixel-selection criterion available, based on these column sums. Thus, we can reduce the size of the inversion problem posed in (bigmatrixeqn) very quickly by throwing out all pixels that have small column sums, or, equivalently, those whose values are $\ll \vert{\bf H}_r\vert^2$. We can rank the remaining pixels by the magnitudes of their normalized sums, choosing to keep only (say) those pixels having the $M$ largest normalized column sums. Or, if there are clearly only $N \ll M$ singular values that are nonzero, then we could reduce the problem still further and keep only those pixels having the $N$ largest normalized column sums. If we have an $M\times M$ matrix when we have finished this selection process, then we can simply invert the matrix to find the values of the remaining $\tilde{q}$'s. If, on the other hand, we are left with an $M\times N$ matrix that is not square after this process of elimination, then we again have to solve the inversion problem, now by overdetermined least-squares.
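Continuing the same sketch (and reusing normalized from the block above), the selection and the reduced least-squares solve might look as follows; keeping as many pixels as there are significant singular values is just one of the choices mentioned above.
\begin{verbatim}
# Pixel selection by normalized coverage, then the reduced inversion.
n_keep = int(np.sum(sigma > 1e-8 * sigma[0]))          # here: N pixels kept
keep = np.argsort(normalized)[::-1][:n_keep]           # largest column sums

# Reduced system: M equations, n_keep unknowns -> overdetermined least squares.
q_keep, *_ = np.linalg.lstsq(A[:, keep], sigma.astype(complex), rcond=None)

print(sorted(keep.tolist()) == sorted(true_idx.tolist()))  # true pixels kept?
print(np.round(q_keep[np.argsort(keep)], 3))               # compare with q_true
\end{verbatim}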

Another possibility, when the singular values fall off gradually in magnitude as $m \to M$ but show no precipitous drop to zero, is to multiply (bigmatrixeqn) on the left by a diagonal matrix whose nonzero elements are the singular values raised to some power $p$. The resulting equation is

$$
\sum_{r=1}^{L^2} \tilde{q}_r \, \sigma_m^p \left({\bf H}_r^T {\bf V}^*_m\right)^2 = \sigma_m^{p+1} ,
$$
or, in vector/matrix notation,

$$
\left(
\begin{array}{ccc}
\sigma_1^p ({\bf H}_1^T {\bf V}^*_1)^2 & \cdots & \sigma_1^p ({\bf H}_{L^2}^T {\bf V}^*_1)^2 \\
\sigma_2^p ({\bf H}_1^T {\bf V}^*_2)^2 & \cdots & \sigma_2^p ({\bf H}_{L^2}^T {\bf V}^*_2)^2 \\
\vdots & & \vdots \\
\sigma_M^p ({\bf H}_1^T {\bf V}^*_M)^2 & \cdots & \sigma_M^p ({\bf H}_{L^2}^T {\bf V}^*_M)^2
\end{array}
\right)
\left(
\begin{array}{c}
\tilde{q}_1 \\ \tilde{q}_2 \\ \vdots \\ \tilde{q}_{L^2}
\end{array}
\right)
=
\left(
\begin{array}{c}
\sigma_1^{p+1} \\ \sigma_2^{p+1} \\ \vdots \\ \sigma_M^{p+1}
\end{array}
\right) .
$$
We can apply a MUSIC-like processing scheme to this matrix (which could then be called ``weighted MUSIC,'' similar to methods suggested by other authors). This approach permits the singular values to determine in a natural way the cutoff (if any) in the contributions from those singular vectors of negligible importance to the inversion. For example, when $p=2$, the column sums of the magnitudes of these elements are just $\vert{\bf K}^*{\bf H}_r\vert^2$, which are the matrix elements of the time-reversal operator with each vector ${\bf H}_r$.
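A sketch of this weighting, continuing the same synthetic example (reusing A, Hgrid, sigma, keep, and q_true): scaling row $m$ by $\sigma_m^p$ lets the singular values themselves suppress the unimportant rows, so no explicit signal/noise threshold is required. The choice $p=2$ and the final reduced solve are illustrative.
\begin{verbatim}
# sigma^p-weighted version of the system (continues the sketch above).
p = 2
Aw = (sigma[:, None] ** p) * A                    # row m scaled by sigma_m^p
bw = (sigma ** (p + 1)).astype(complex)

# Weighted, normalized column sums; for p = 2 these correspond (up to the SVD
# convention used here) to |K^* H_r|^2 / |H_r|^2.
w_coverage = np.abs(Aw).sum(axis=0) / np.sum(np.abs(Hgrid) ** 2, axis=1)
print(np.round(w_coverage[keep], 3))              # weighted coverage at kept pixels

# Solve the weighted system on the previously selected pixels; in this
# noiseless example the same strengths should be recovered.
q_w, *_ = np.linalg.lstsq(Aw[:, keep], bw, rcond=None)
print(np.round(q_w[np.argsort(keep)], 3))         # compare with q_true
\end{verbatim}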

