Resolution matrices for linear inversion problems can be understood most easily by considering a matrix equation of the form

$$\mathbf{M}\,\mathbf{s} = \mathbf{t},$$

and asking the question: Given $\mathbf{M}$ and $\mathbf{t}$, what $\mathbf{s}$ solves this equation? When the matrix $\mathbf{M}$ is square and invertible, the answer to the question is relatively easy: $\mathbf{s} = \mathbf{M}^{-1}\mathbf{t}$. However, it often happens in geophysical inversion problems that $\mathbf{M}$ is not square, or not invertible even if it is square. In these situations, I need to introduce an approximate inverse called the Moore-Penrose pseudoinverse $\mathbf{M}^{\dagger}$ (Moore, 1920; Penrose, 1955a; Penrose, 1955b). (Although other choices of approximate inverse are known [see Rao (1965)], I will restrict discussion here to this best-known approximate inverse.) Then, when I multiply $\mathbf{M}\,\mathbf{s} = \mathbf{t}$ on the left by $\mathbf{M}^{\dagger}$, I find

$$\mathbf{M}^{\dagger}\mathbf{M}\,\mathbf{s} = \mathbf{M}^{\dagger}\mathbf{t}.$$

If it were true that $\mathbf{M}^{\dagger}\mathbf{M} = \mathbf{I}$ (the identity matrix), then I would have solved the inversion problem exactly. But it is precisely because no such inverse exists in some problems that I need to consider the analysis that follows. Following Backus and Gilbert (1968), I define the matrix coefficient of $\mathbf{s}$ in this equation as the resolution matrix

$$\mathbf{R} \equiv \mathbf{M}^{\dagger}\mathbf{M}.$$

The deviations of $\mathbf{R}$ from the identity matrix $\mathbf{I}$, given by the matrix $\mathbf{I} - \mathbf{R}$, determine the degree of distrust I should place on the components of the solution vector $\mathbf{s}$ that are most poorly resolved.
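The role of $\mathbf{I} - \mathbf{R}$ can be made concrete with a minimal numerical sketch, assuming NumPy; the random matrix below is an arbitrary stand-in for a real inversion operator, not data from any actual problem.

```python
import numpy as np

# Hypothetical underdetermined system: 3 equations, 5 unknowns,
# so M has no exact inverse and a pseudoinverse is needed.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 5))
t = rng.standard_normal(3)

M_pinv = np.linalg.pinv(M)        # Moore-Penrose pseudoinverse
s_hat = M_pinv @ t                # approximate solution
R = M_pinv @ M                    # resolution matrix R = M^+ M

# R is not the identity here: its trace equals the rank of M (3),
# not the number of model parameters (5), so some components of
# s_hat are poorly resolved.
print(np.round(np.trace(R)))      # -> 3.0
print(np.allclose(R, np.eye(5)))  # -> False
```

The trace of $\mathbf{R}$ counts the number of independently resolved model parameters, which is why it equals the rank rather than the model dimension.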

To be more explicit, consider a crosshole tomography problem. Then, $\mathbf{M}$ is an $m \times n$ ray-path matrix, $\mathbf{t}$ is an $m$-vector of first-arrival traveltimes, and $\mathbf{s}$ is the slowness (inverse velocity) $n$-vector. I seek the slownesses $\mathbf{s}$ given the measured traveltimes in $\mathbf{t}$ and the estimates of the ray paths between source and receiver locations contained in the matrix $\mathbf{M}$ [see Berryman (1991)]. Then, the resolution matrix defined above is the model resolution, since the slowness vector $\mathbf{s}$ is the desired model of acoustic wave slowness.
I can also define a data resolution matrix. First, multiply the equation $\mathbf{M}^{\dagger}\mathbf{M}\,\mathbf{s} = \mathbf{M}^{\dagger}\mathbf{t}$ on the left by $\mathbf{M}$, so

$$\mathbf{M}\mathbf{M}^{\dagger}\mathbf{M}\,\mathbf{s} = \mathbf{M}\mathbf{M}^{\dagger}\,\mathbf{t},$$

then compare this result to the original equation $\mathbf{M}\,\mathbf{s} = \mathbf{t}$ and note that the matrix product $\mathbf{M}\mathbf{M}^{\dagger}$ multiplying $\mathbf{t}$ should equal the identity matrix if the approximate inverse is a true inverse. Again, deviations of this matrix from the identity provide information about the degree to which the solution I compute makes use of all the data in $\mathbf{t}$. Thus, the data resolution matrix (Wiggins, 1972; Jackson, 1972) is

$$\mathbf{R}_{\rm data} \equiv \mathbf{M}\mathbf{M}^{\dagger},$$

and the model resolution matrix (Backus and Gilbert, 1968; Jackson, 1972) defined previously is

$$\mathbf{R}_{\rm model} \equiv \mathbf{M}^{\dagger}\mathbf{M}.$$
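The contrast between the two matrices shows up already in a toy sketch, assuming NumPy; here a random rectangular matrix stands in for the ray-path matrix, with more traveltime measurements than slowness cells.

```python
import numpy as np

# Toy stand-in for a ray-path matrix: more traveltime data (m = 6)
# than slowness cells (n = 4); entries are arbitrary, not real ray lengths.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
M_pinv = np.linalg.pinv(M)

R_model = M_pinv @ M    # n x n model resolution
R_data  = M @ M_pinv    # m x m data resolution

# With full column rank, the model is perfectly resolved,
# but not all the data are independently used.
print(np.allclose(R_model, np.eye(4)))  # -> True
print(np.allclose(R_data, np.eye(6)))   # -> False
```

Swapping the shape to $m < n$ reverses the situation: the data resolution becomes the identity while the model resolution does not.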

I can clarify the significance of these two matrices by considering the singular value decomposition (SVD) of the matrix $\mathbf{M}$, given by

$$\mathbf{M} = \sum_{i=1}^{r} \lambda_i\,\mathbf{u}_i\mathbf{v}_i^{T},$$

where the vectors $\mathbf{u}_i$ and $\mathbf{v}_i$ are the eigenvectors of $\mathbf{M}$ determined by

$$\mathbf{M}\,\mathbf{v}_i = \lambda_i\,\mathbf{u}_i \quad\text{and}\quad \mathbf{u}_i^{T}\mathbf{M} = \lambda_i\,\mathbf{v}_i^{T},$$

and the $\lambda_i$s are the eigenvalues. The eigenvectors also satisfy the orthonormality conditions $\mathbf{u}_i^{T}\mathbf{u}_j = \delta_{ij}$ and $\mathbf{v}_i^{T}\mathbf{v}_j = \delta_{ij}$. The rank $r$ of $\mathbf{M}$ (the number of nonzero eigenvalues) satisfies $r \le \min(m,n)$. The Moore-Penrose pseudoinverse is then known to be given by

$$\mathbf{M}^{\dagger} = \sum_{i=1}^{r} \lambda_i^{-1}\,\mathbf{v}_i\mathbf{u}_i^{T},$$

so the resolution matrices are

$$\mathbf{R}_{\rm data} = \sum_{i=1}^{r} \mathbf{u}_i\mathbf{u}_i^{T} \quad\text{and}\quad \mathbf{R}_{\rm model} = \sum_{i=1}^{r} \mathbf{v}_i\mathbf{v}_i^{T}.$$

When expressed in this form, it is clear that the resolution matrices simply determine the completeness of the resolved data and model spaces, respectively.
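These SVD expressions can be checked directly in a short sketch, assuming NumPy; the rectangular matrix is arbitrary.

```python
import numpy as np

# Verify the SVD forms of the resolution matrices against the
# pseudoinverse definitions, for a generic rectangular M.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))

U, lam, Vt = np.linalg.svd(M, full_matrices=False)
r = int(np.sum(lam > 1e-12 * lam.max()))     # numerical rank

# Sums of outer products over the r nonzero singular values:
R_data_svd  = U[:, :r] @ U[:, :r].T          # sum_i u_i u_i^T
R_model_svd = Vt[:r, :].T @ Vt[:r, :]        # sum_i v_i v_i^T

M_pinv = np.linalg.pinv(M)
print(np.allclose(R_data_svd, M @ M_pinv))   # -> True
print(np.allclose(R_model_svd, M_pinv @ M))  # -> True
```

Both resolution matrices are thus orthogonal projectors onto the spans of the first $r$ left and right singular vectors, respectively.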

Now it is important to recognize that, although the resolution matrices are *defined* by $\mathbf{R}_{\rm model} = \mathbf{M}^{\dagger}\mathbf{M}$ and $\mathbf{R}_{\rm data} = \mathbf{M}\mathbf{M}^{\dagger}$, these matrices may nevertheless be *computed* in other ways. For example, consider the SVD of the normal matrix (from least-squares analysis)

$$\mathbf{M}^{T}\mathbf{M} = \sum_{i=1}^{r} \lambda_i\,\mathbf{v}_i\mathbf{u}_i^{T} \sum_{j=1}^{r} \lambda_j\,\mathbf{u}_j\mathbf{v}_j^{T} = \sum_{i=1}^{r} \lambda_i^{2}\,\mathbf{v}_i\mathbf{v}_i^{T}.$$

Then, I find easily that

$$\mathbf{R}_{\rm model} = \left(\mathbf{M}^{T}\mathbf{M}\right)^{\dagger}\mathbf{M}^{T}\mathbf{M} = \sum_{i=1}^{r} \mathbf{v}_i\mathbf{v}_i^{T}.$$

It is also straightforward to show that

$$\mathbf{R}_{\rm model} = \mathbf{M}^{T}\left(\mathbf{M}\mathbf{M}^{T}\right)^{\dagger}\mathbf{M}.$$

Similarly, I find that the data resolution is given by

$$\mathbf{R}_{\rm data} = \mathbf{M}\left(\mathbf{M}^{T}\mathbf{M}\right)^{\dagger}\mathbf{M}^{T}.$$

In the following discussion, I will show how these equivalent formulas for the resolution matrices aid in their computation during the process of finding an approximate inverse of $\mathbf{M}$ using iterative methods.
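The equivalence of the normal-matrix formulas with the pseudoinverse definitions can be confirmed numerically; this is a minimal sketch assuming NumPy, with a random matrix in place of a real ray-path matrix.

```python
import numpy as np

# Check that the normal-matrix formulas reproduce the defining
# expressions M^+ M and M M^+ for a generic rectangular M.
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 6))
M_pinv = np.linalg.pinv(M)

R_model_alt = M.T @ np.linalg.pinv(M @ M.T) @ M
R_data_alt  = M @ np.linalg.pinv(M.T @ M) @ M.T

print(np.allclose(R_model_alt, M_pinv @ M))  # -> True
print(np.allclose(R_data_alt, M @ M_pinv))   # -> True
```

These alternative forms matter in practice because iterative least-squares solvers work with the normal matrix $\mathbf{M}^{T}\mathbf{M}$ (or $\mathbf{M}\mathbf{M}^{T}$) rather than with $\mathbf{M}^{\dagger}$ itself.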

11/17/1997