
Review of resolution matrices

The model resolution operator $\bold{R}$ defines the connection between the true model $\bold{m}$ and the model estimate $\hat{\bold{m}}$ from least-squares inversion, as follows:
\begin{displaymath}
\bold{\hat{m} = R m}\;.
\end{displaymath} (1)
In the case of least-squares Kirchhoff migration, $\bold{m}$ corresponds to the true reflectivity, $\hat{\bold{m}}$ is the output image, and the estimation process amounts to minimizing the least-squares norm of the residual $\bold{r = d - L m}$, where $\bold{d}$ is the observed data and $\bold{L}$ is the Kirchhoff modeling operator. Recalling the well-known formula
\begin{displaymath}
\bold{\hat{m} = (L' L)^{\dag} L' d}\;,
\end{displaymath} (2)
where $\bold{L'}$ stands for the adjoint operator (Kirchhoff migration) and the dagger symbol denotes the pseudo-inverse, we can deduce from formulas (1) and (2) that
\begin{displaymath}
\bold{R = (L' L)^{\dag} (L' L)}\;.
\end{displaymath} (3)
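
As a concrete illustration of formula (2), the following Python/NumPy sketch (not part of the original paper) substitutes a small, hypothetical matrix for the Kirchhoff modeling operator $\bold{L}$ and computes the least-squares estimate through the pseudo-inverse; all matrix and model values are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Hypothetical toy problem (not the paper's operator): 2 data values and
# 3 model parameters, so L'L is singular and the model is underdetermined.
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
m_true = np.array([1.0, 2.0, 0.5])   # "true reflectivity" (illustrative values)
d = L @ m_true                       # noise-free observed data

# Formula (2): m_hat = (L'L)^+ L'd ; numpy.linalg.pinv supplies the pseudo-inverse
LtL = L.T @ L
m_hat = np.linalg.pinv(LtL) @ (L.T @ d)
print(m_hat)   # differs from m_true along the unresolved null-space direction
\end{verbatim}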

In the ideal case, when all model components are perfectly resolved, the model resolution matrix is equal to the identity. If the model is not perfectly constrained, the matrix $\bold{L' L}$ being inverted is singular, and the model resolution departs from the identity: the model contains null-space components that are not constrained by the data. The diagonal elements of the resolution matrix are then less than one at the positions of unresolved model components.
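
The same toy setup illustrates formula (3) and the behavior of the diagonal; again, this is a hedged sketch with assumed values, not the authors' implementation.
\begin{verbatim}
import numpy as np

# Same hypothetical rank-deficient operator as above.
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
m_true = np.array([1.0, 2.0, 0.5])
d = L @ m_true
LtL = L.T @ L

# Formula (3): R = (L'L)^+ (L'L). Here R projects onto the resolved
# (row) space of L, so it is not the identity.
R = np.linalg.pinv(LtL) @ LtL
print(np.round(np.diag(R), 3))   # diagonal entries below one mark unresolved components

# Consistency with formula (1): for noise-free data, R applied to the
# true model reproduces the least-squares estimate of formula (2).
m_hat = np.linalg.pinv(LtL) @ (L.T @ d)
print(np.allclose(R @ m_true, m_hat))
\end{verbatim}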

Berryman and Fomel (1996) derive the following remarkably simple formula for the model resolution matrix:
\begin{displaymath}
\bold{R} = \sum_{i=1}^{N} \frac{\bold{g}_i \bold{g}'_i}{\bold{g}'_i \bold{g}_i}\;,
\end{displaymath} (4)
where $N$ corresponds to the model size, and the $\bold{g}_i$'s are the model-space gradient vectors that appear in the conjugate-gradient process (Hestenes and Stiefel, 1952). In large-scale problems, such as a typical Kirchhoff migration, we cannot afford to perform all $N$ steps of the conjugate-gradient process required for the theoretical convergence of the model estimate to the one defined in formula (2). However, formula (4) remains valid in this case if we replace $N$ with the actual number of steps; the matrix $\bold{R}$ then corresponds to the actual resolution of our estimate. To reduce the computational effort further, we can apply formula (4) with only a few significant gradient vectors $\bold{g}_i$ to obtain an effective approximation of the model resolution. The most significant $\bold{g}_i$'s turn out to be those that have large components in the directions of eigenvectors with large eigenvalues (or singular vectors with large singular values).
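
The gradient-based form of formula (4) can be sketched with an ordinary conjugate-gradient solver for the normal equations; the routine below is an illustrative reimplementation, not the authors' code, and the toy operator and step count are assumptions. On this small example, the accumulated outer products of the first few gradients should reproduce the resolution matrix of formula (3) up to rounding.
\begin{verbatim}
import numpy as np

def resolution_from_cg(L, d, nsteps):
    """Accumulate formula (4) over nsteps conjugate-gradient iterations
    on the normal equations L'L m = L'd (illustrative sketch)."""
    n = L.shape[1]
    m = np.zeros(n)
    g = L.T @ (d - L @ m)            # model-space gradient (CG residual)
    s = g.copy()                     # search direction
    R_approx = np.zeros((n, n))
    for _ in range(nsteps):
        gg = np.dot(g, g)
        if gg == 0.0:                # converged: no more gradient directions
            break
        R_approx += np.outer(g, g) / gg      # formula (4) term g g'/(g'g)
        Ls = L @ s
        alpha = gg / np.dot(Ls, Ls)
        m = m + alpha * s
        g = g - alpha * (L.T @ Ls)
        beta = np.dot(g, g) / gg
        s = g + beta * s
    return R_approx

# Hypothetical rank-2 toy operator: two CG steps already span the resolved space.
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
d = L @ np.array([1.0, 2.0, 0.5])
R_cg = resolution_from_cg(L, d, nsteps=2)
R_direct = np.linalg.pinv(L.T @ L) @ (L.T @ L)   # formula (3) for comparison
print(np.allclose(R_cg, R_direct))               # should agree on this toy problem
\end{verbatim}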

The next section exemplifies this approach with synthetic and real data tests.

