Next: THE WINNERS Up: Claerbout: Unitary operators in Previous: Introduction

REVIEW

We define abstract vectors for data $\bold d$ and model $\bold m$, and a linear operator $\bold B$, and we define ``modeling'' by

\begin{displaymath}
\bold d \quad = \quad \bold B \bold m
\end{displaymath} (1)
Many practical problems stem from nonlinearity, but nonlinearity is not the focus of this short note, which ignores it.
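As a concrete sketch of equation (1), a small dense matrix can stand in for the abstract operator $\bold B$. The matrix and vectors below are hypothetical examples, not taken from the text:

```python
import numpy as np

# Hypothetical example: B maps a 3-component model to 4 data points.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])
m = np.array([1.0, 2.0, 3.0])

d = B @ m          # modeling: d = B m
print(d)           # -> [7. 5. 3. 5.]
```

In practice $\bold B$ is usually implemented as a subroutine rather than stored as an explicit matrix, but the matrix picture fixes the notation.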

Conventional theoretical analysis generally involves singular-value decomposition, which begins from an eigenvalue analysis of $\bold B'\bold B$, where $\bold B'$ is the conjugate transpose of $\bold B$. Such analysis is meaningless unless the components of $\bold d$ have identical physical dimensions, and likewise for the components of $\bold m$. When data components have various physical dimensions, the arbitrariness can be resolved by dividing each component by its estimated variance. This still leaves the model scaling unspecified. Variance scaling leads to questions of covariances, and soon other questions arise.

Setting aside eigen analysis, the well-known theoretical inverse to equation (1) is
\begin{displaymath}
\bold m_{\rm theoretical} \quad = \quad ( \bold B'\bold B)^{-1} \bold B' \bold d
\end{displaymath} (2)
The well-recognized problem with equation (2) is that the matrix may not be invertible. This problem is acknowledged by the so-called ``damped'' solution.  
\begin{displaymath}
\bold m_{\rm damped} \quad = \quad ( \bold B'\bold B+\epsilon )^{-1} \bold B' \bold d
\end{displaymath} (3)
where $\epsilon$ hides a multitude of possible models of increasing sophistication, beginning from a small scalar times an identity matrix, proceeding to a diagonal matrix of inverse variances, and going on to an inverse covariance matrix that requires considerable imagination to conjure up, or considerable skill to estimate.
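The simplest choice in equation (3), a small scalar times an identity matrix, already rescues the solve when $\bold B'\bold B$ is singular. A minimal sketch with a hypothetical rank-deficient operator:

```python
import numpy as np

# Hypothetical rank-deficient B: identical columns make B'B singular,
# so equation (2) fails but equation (3) does not.
B = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [1.0, 1.0]])
d = np.array([1.0, 1.0, 1.0])

eps = 1e-3                          # small scalar; epsilon = eps * I
BtB = B.conj().T @ B
m_damped = np.linalg.solve(BtB + eps * np.eye(BtB.shape[0]),
                           B.conj().T @ d)
print(m_damped)                     # damping splits d evenly: ~[0.5, 0.5]
```

Replacing `eps * np.eye(...)` by a diagonal matrix of inverse variances, or by a full inverse covariance matrix, gives the more sophisticated models the text mentions.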

Beginning instead from the practical side of things, the first step in going from data space to model space is given by the conjugate operator
\begin{displaymath}
\bold m_{\rm image} \quad = \quad \bold B' \bold d
\end{displaymath} (4)
I teach in Claerbout (1992b) that computation of the conjugate is a straightforward adjunct to the operation itself, and I advocate that no program that embodies a linear operator is complete without the few extra steps required to make the conjugate option available. There exist, in competitive industrial practice, imaging operators that amount to nothing more than the conjugate operator. The conjugate operation, equation (4), is always a starting point for data analysis; how much further we should go toward equation (2) is an issue that needs reexamination in each application.
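The programming style advocated above can be sketched as a single routine carrying both the operation and its conjugate, selected by a flag. The function name, the choice of transient convolution as the example operator, and the random-vector check are all illustrative assumptions, not the author's code; the check itself is the standard dot-product test, which verifies that $\langle \bold B \bold m, \bold d\rangle = \langle \bold m, \bold B' \bold d\rangle$:

```python
import numpy as np

def convolve_op(adj, filt, x):
    """Transient convolution as a linear operator with its conjugate.

    adj=False: forward,   returns d = B m  (x plays the role of m)
    adj=True : conjugate, returns m = B' d (x plays the role of d)
    """
    nf = len(filt)
    if not adj:                            # forward: d[i+j] += f[j]*m[i]
        m = x
        d = np.zeros(len(m) + nf - 1)
        for j in range(nf):
            d[j:j + len(m)] += filt[j] * m
        return d
    else:                                  # conjugate: m[i] += f[j]*d[i+j]
        d = x
        nm = len(d) - nf + 1
        m = np.zeros(nm)
        for j in range(nf):
            m += filt[j].conjugate() * d[j:j + nm]
        return m

# Dot-product test on random vectors: <B m, d> must equal <m, B' d>.
rng = np.random.default_rng(0)
f = rng.standard_normal(3)
m = rng.standard_normal(5)
d = rng.standard_normal(7)
lhs = np.dot(convolve_op(False, f, m), d)
rhs = np.dot(m, convolve_op(True, f, d))
print(abs(lhs - rhs) < 1e-10)              # True: the pair is consistent
```

The few extra lines in the `adj=True` branch are exactly the "straightforward adjunct" referred to in the text.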

The unadorned conjugate operator gives an answer that is generally incorrect in its scale factor. In imaging, however, the scale factor is generally chosen according to the image variance, so this is not an important issue. In more general applications, scaling can be very important, and to be meaningful, a minimum requirement is that the components of $\bold d$ be scaled to each have the same physical dimension, and likewise for $\bold m$. More generally, scaling can involve statistical models of increasing complexity. We cannot hope to pursue this complexity through all its logical ramifications, yet we cannot ignore it because of the many opportunities to learn more from our data that may be hidden there, particularly in the covariance structure.


Stanford Exploration Project
11/17/1997