
Inversion of the Hessian

Using the results above, we now derive the least-squares estimate of ${\bf m}$ in equation ([*]). Assuming that ${\bf C_d}=\sigma_d^2{\bf I}$, the fitting goal is
\begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf Lm}-{\bf d},
\end{displaymath} (97)
with ${\bf L}=({\bf L_s} \;\; {\bf L_n})$ and ${\bf m'}=({\bf m_s'} \;\; {\bf m_n'})$. The normal equations are given by
\begin{displaymath}
\left( \begin{array}{cc}
{\bf L_s'L_s} & {\bf L_s'L_n} \\
{\bf L_n'L_s} & {\bf L_n'L_n}
\end{array} \right)
\left( \begin{array}{c}
{\bf m_s} \\
{\bf m_n}
\end{array} \right)
=
\left( \begin{array}{c}
{\bf L_s'd} \\
{\bf L_n'd}
\end{array} \right),
\end{displaymath} (98)
where ${\bf m_s}$ and ${\bf m_n}$ are the unknowns. The least-squares estimate $\hat{{\bf m_s}}$ of ${\bf m_s}$ can be derived by using the bottom row of equation ([*]) to eliminate ${\bf m_n}$, and the least-squares estimate $\hat{{\bf m_n}}$ of ${\bf m_n}$ by using the top row to eliminate ${\bf m_s}$. We have, then,
\begin{eqnarray}
\hat{{\bf m_s}}&=&({\bf L_s'L_s}-{\bf L_s'L_n}({\bf L_n'L_n})^{-1}{\bf L_n'L_s})^{-1}({\bf L_s'd}-{\bf L_s'L_n}({\bf L_n'L_n})^{-1}{\bf L_n'd}), \\
\hat{{\bf m_n}}&=&({\bf L_n'L_n}-{\bf L_n'L_s}({\bf L_s'L_s})^{-1}{\bf L_s'L_n})^{-1}({\bf L_n'd}-{\bf L_n'L_s}({\bf L_s'L_s})^{-1}{\bf L_s'd}),
\end{eqnarray} (99) (100)
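Explicitly, the bottom row of equation ([*]) gives
\begin{displaymath}
{\bf m_n}=({\bf L_n'L_n})^{-1}({\bf L_n'd}-{\bf L_n'L_s}\,{\bf m_s});
\end{displaymath}
substituting this into the top row and collecting the terms in ${\bf m_s}$ yields the expression for $\hat{{\bf m_s}}$, and exchanging the roles of the signal and noise blocks yields $\hat{{\bf m_n}}$.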
which can be simplified, by factoring ${\bf L_s'}$ (respectively ${\bf L_n'}$) out on the left of each parenthesized term, as follows:
\begin{eqnarray}
\hat{{\bf m_s}}&=&({\bf L_s'}({\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}){\bf L_s})^{-1}{\bf L_s'}({\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}){\bf d}, \\
\hat{{\bf m_n}}&=&({\bf L_n'}({\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}){\bf L_n})^{-1}{\bf L_n'}({\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}){\bf d}.
\end{eqnarray} (101) (102)
${\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}$ is the coherent-noise resolution matrix, whereas ${\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}$ is the signal resolution matrix (Tarantola, 1987). Denoting $\overline{{\bf R_s}} = {\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}$ and $\overline{{\bf R_n}} = {\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}$ yields the following simplified expressions for $\hat{{\bf m_s}}$ and $\hat{{\bf m_n}}$:
\begin{displaymath}
\left( \begin{array}{c}
\hat{{\bf m_s}} \\
\hat{{\bf m_n}}
\end{array} \right)
=
\left( \begin{array}{c}
({\bf L_s'\overline{R_n}L_s})^{-1}{\bf L_s'\overline{R_n}} \\
({\bf L_n'\overline{R_s}L_n})^{-1}{\bf L_n'\overline{R_s}}
\end{array} \right){\bf d}.
\end{displaymath} (103)
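As a numerical sanity check (an illustration added here, not code from the original report), the following numpy sketch builds small random stand-ins for ${\bf L_s}$ and ${\bf L_n}$, evaluates equation (103), and confirms that the stacked estimates match a direct least-squares solve with ${\bf L}=({\bf L_s} \;\; {\bf L_n})$; all sizes and names are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nd, ns, nn = 50, 4, 3                  # data size, signal/noise model sizes (arbitrary)

L_s = rng.standard_normal((nd, ns))    # signal modeling operator (random stand-in)
L_n = rng.standard_normal((nd, nn))    # coherent-noise modeling operator
d   = rng.standard_normal(nd)          # observed data

I = np.eye(nd)
Rbar_n = I - L_n @ np.linalg.inv(L_n.T @ L_n) @ L_n.T   # I minus noise resolution matrix
Rbar_s = I - L_s @ np.linalg.inv(L_s.T @ L_s) @ L_s.T   # I minus signal resolution matrix

# Equation (103): separate estimates via the complementary projectors
m_s_hat = np.linalg.solve(L_s.T @ Rbar_n @ L_s, L_s.T @ Rbar_n @ d)
m_n_hat = np.linalg.solve(L_n.T @ Rbar_s @ L_n, L_n.T @ Rbar_s @ d)

# Reference: one joint least-squares solve with L = (L_s  L_n)
m_ref, *_ = np.linalg.lstsq(np.hstack([L_s, L_n]), d, rcond=None)

assert np.allclose(np.concatenate([m_s_hat, m_n_hat]), m_ref)
\end{verbatim}
The check holds whenever the stacked operator has full column rank, which is the same condition that makes the Hessian invertible.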
By the properties of the resolution operators, ${\bf \overline{R_n}}$ and ${\bf \overline{R_s}}$ perform noise and signal filtering: for data ${\bf d}={\bf s}+{\bf n}$ composed of signal and coherent noise,
\begin{displaymath}
\begin{array}{ccc}
{\bf \overline{R_n}d}&=&{\bf \overline{R_n}s}, \\
{\bf \overline{R_s}d}&=&{\bf \overline{R_s}n},
\end{array}
\end{displaymath} (104)
if the noise and signal are well predicted by the noise and signal modeling operators. Nemeth (1996) demonstrates that the inverse of the Hessian in equation ([*]) is well conditioned if the noise and signal operators are orthogonal, meaning that they predict distinct, non-overlapping parts of the data space. If the predictions overlap, a model regularization term can improve the signal/noise separation.
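To make the filtering property concrete, here is a second numpy illustration (again a sketch with assumed random operators, not code from the original report): it builds data ${\bf d}={\bf s}+{\bf n}$ whose signal and noise are exactly predictable by ${\bf L_s}$ and ${\bf L_n}$, checks equation (104), and ends with a damped solve as one common form of the model regularization mentioned above; the damping $\epsilon$ is an ad hoc choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nd = 60
L_s = rng.standard_normal((nd, 4))     # signal modeling operator (random stand-in)
L_n = rng.standard_normal((nd, 3))     # coherent-noise modeling operator

# Data composed of operator-predictable signal and noise: d = s + n
s = L_s @ rng.standard_normal(4)
n = L_n @ rng.standard_normal(3)
d = s + n

I = np.eye(nd)
Rbar_n = I - L_n @ np.linalg.inv(L_n.T @ L_n) @ L_n.T
Rbar_s = I - L_s @ np.linalg.inv(L_s.T @ L_s) @ L_s.T

# Equation (104): Rbar_n annihilates the noise, Rbar_s annihilates the signal
assert np.allclose(Rbar_n @ d, Rbar_n @ s)
assert np.allclose(Rbar_s @ d, Rbar_s @ n)

# Damped normal equations: one common regularization when the operators overlap
L = np.hstack([L_s, L_n])
eps = 0.1                              # ad hoc damping
m_damped = np.linalg.solve(L.T @ L + eps**2 * np.eye(L.shape[1]), L.T @ d)
\end{verbatim}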

