
Iterative methods

The iterative methods for obtaining the generalized inverse are based on the SVD method. Available methods include, among others, Conjugate Gradients (CG) (Hestenes and Stiefel, 1952) and LSQR (Paige and Saunders, 1982). Berryman (2001b) provides a complete overview of these iterative methods and an analysis of their capabilities. All of them lead directly to approximations of the generalized inverse $\mathbf{A}^{\dagger}$; in general, carrying the iterative process to completion produces a result that closely approximates $\mathbf{A}^{\dagger}$.
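
To make the idea concrete, the following minimal sketch builds an approximate generalized inverse column by column with SciPy's LSQR routine. The matrix sizes, the test matrix, and the iteration limit K are illustrative assumptions and are not taken from this paper.

import numpy as np
from scipy.sparse.linalg import lsqr

# Assumed, illustrative problem: a small dense matrix A with N data and
# M model parameters. Real tomographic operators would be large and sparse.
rng = np.random.default_rng(0)
N, M = 50, 33
A = rng.standard_normal((N, M))

K = 25                               # iteration limit (assumed)
A_dagger = np.empty((M, N))
for j in range(N):
    e_j = np.zeros(N)
    e_j[j] = 1.0
    # LSQR returns the least-squares solution of A x = e_j after at most
    # K iterations, i.e. an approximation of the j-th column of A^+.
    A_dagger[:, j] = lsqr(A, e_j, iter_lim=K)[0]

# Carried to (near) completion, the result closely matches the pseudoinverse.
print(np.linalg.norm(A_dagger - np.linalg.pinv(A)))

With a large enough iteration limit the columns converge to those of the exact pseudoinverse; stopping earlier gives the kind of approximate generalized inverse discussed above.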

Yao et al. (1999) and Berryman (2001b) provide methods for calculating the a posteriori covariance for SVD-based iterative methods; for a thorough description of the methods I refer to these papers. The principle is as follows: each iteration provides one extra vector in the solution space, and after $K$ iterations these $K$ vectors are used to calculate the a posteriori covariance. Consequently, performing more iterations makes not only the approximate inverse but also the a posteriori covariance more accurate. Figure 9b,c shows a posteriori covariance matrices for $M=33$ model parameters obtained with CG and LSQR, respectively. Both matrices resemble the true a posteriori covariance matrix (Figure 9a). Note that both methods compute the complete covariance matrix, including the off-diagonal elements.
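
The sketch below illustrates this principle with SciPy's Lanczos-based svds routine as a stand-in for the specific procedures of Yao et al. (1999) and Berryman (2001b): the $K$ right singular vectors accumulated by the iterative solver are combined into an approximate a posteriori covariance, and the approximation improves as $K$ grows. The sizes, the values of $K$, the unit data variance, and the test matrix are all assumptions made for illustration.

import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
N, M = 50, 33
A = rng.standard_normal((N, M))

def approx_posterior_covariance(A, K):
    # svds computes K singular triplets with an iterative (Lanczos) method;
    # the K right singular vectors give C_K = V_K S_K^{-2} V_K^T.
    _, s, vt = svds(A, k=K)
    return vt.T @ np.diag(1.0 / s**2) @ vt

# Reference: covariance from the full SVD (unit data variance assumed).
_, s_full, vt_full = np.linalg.svd(A, full_matrices=False)
C_exact = vt_full.T @ np.diag(1.0 / s_full**2) @ vt_full

# More iterations (larger K) yield a more accurate covariance matrix,
# including its off-diagonal elements.
for K in (5, 15, 30):
    err = np.linalg.norm(approx_posterior_covariance(A, K) - C_exact)
    print(f"K={K:2d}  ||C_K - C_exact|| = {err:.3e}")

As in the figure discussed above, the approximation built from the accumulated vectors reproduces the full covariance matrix, not just its diagonal.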

