$$\mathbf{d} \;=\; \mathbf{F}\,\mathbf{m} \tag{1}$$
Conventional theoretical analysis generally involves singular-value decomposition, which begins from an eigenvalue analysis of $\mathbf{F}^*\mathbf{F}$, where $\mathbf{F}^*$ is the conjugate transpose of $\mathbf{F}$. Such analysis is meaningless unless the components of $\mathbf{m}$ have identical physical dimensions, and likewise for the components of $\mathbf{d}$. When the data components have various physical dimensions, the arbitrariness can be resolved by dividing each component by its estimated variance. This still leaves the model scaling unspecified. Variance scaling leads to questions of covariances, and soon other questions arise.
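As a concrete illustration, the sketch below builds a small dense operator in NumPy, divides each data component by a user-supplied variance estimate (the numbers are purely illustrative), and confirms that the eigenvalues of $\mathbf{F}^*\mathbf{F}$ are the squared singular values of $\mathbf{F}$. The operator, the scale estimates, and all variable names are assumptions made for this example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operator d = F m, with data components imagined to carry mixed units.
F = rng.standard_normal((6, 4))
m_true = rng.standard_normal(4)
d = F @ m_true

# Nondimensionalize: divide each data component by an estimated variance
# (illustrative numbers), so the eigen analysis of F*F is not dominated by units.
d_var = np.array([1.0, 1.0, 10.0, 10.0, 0.1, 0.1])  # assumed per-component estimates
W = np.diag(1.0 / d_var)             # data weighting
Fw = W @ F                           # scaled operator
dw = W @ d                           # scaled data

# Eigenvalue analysis of F*F, the starting point of the SVD.
eigvals = np.linalg.eigvalsh(Fw.conj().T @ Fw)
U, s, Vh = np.linalg.svd(Fw, full_matrices=False)
print("eigenvalues of F*F     :", np.sort(eigvals)[::-1])
print("squared singular values:", s**2)   # agrees with the eigenvalues above
```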
Setting aside eigen analysis, the well-known theoretical inverses to equation (1) are
$$\hat{\mathbf{m}} \;=\; (\mathbf{F}^*\mathbf{F})^{-1}\,\mathbf{F}^*\,\mathbf{d} \tag{2}$$
$$\hat{\mathbf{m}} \;=\; \mathbf{F}^*\,(\mathbf{F}\,\mathbf{F}^*)^{-1}\,\mathbf{d} \tag{3}$$
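A minimal numerical check of equations (2) and (3), assuming small dense NumPy matrices and noise-free toy problems; the matrices and variable names are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined toy problem: more data than model parameters.
F = rng.standard_normal((8, 3))
m_true = rng.standard_normal(3)
d = F @ m_true

# Equation (2): least-squares inverse, m = (F*F)^{-1} F* d.
FhF = F.conj().T @ F
m_ls = np.linalg.solve(FhF, F.conj().T @ d)

# Underdetermined toy problem: fewer data than model parameters.
G = rng.standard_normal((3, 8))
d2 = G @ rng.standard_normal(8)

# Equation (3): minimum-norm inverse, m = G* (G G*)^{-1} d.
GGh = G @ G.conj().T
m_mn = G.conj().T @ np.linalg.solve(GGh, d2)

print(np.allclose(m_ls, m_true))   # True: exact recovery without noise
print(np.allclose(G @ m_mn, d2))   # True: the minimum-norm solution fits the data
```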
Beginning instead from the practical side of things, the first step going from data space to model space is given by the conjugate operator
$$\hat{\mathbf{m}} \;=\; \mathbf{F}^*\,\mathbf{d} \tag{4}$$
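In practice the conjugate operator is often coded as a function rather than an explicit matrix. The sketch below uses a running-sum operator chosen purely for illustration (an assumption, not from the text) and verifies the defining property $\langle \mathbf{F}\mathbf{m}, \mathbf{d}\rangle = \langle \mathbf{m}, \mathbf{F}^*\mathbf{d}\rangle$ with a dot-product test.

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward operator F as a function (a causal running sum, for illustration only),
# and its conjugate (adjoint) F*.
def forward(m):
    return np.cumsum(m)                      # d = F m

def conjugate(d):
    return np.cumsum(d[::-1])[::-1]          # m_hat = F* d  (anticausal running sum)

# Dot-product test: <F m, d> should equal <m, F* d> for any m and d.
m = rng.standard_normal(50)
d = rng.standard_normal(50)
lhs = np.dot(forward(m), d)
rhs = np.dot(m, conjugate(d))
print(np.isclose(lhs, rhs))                  # True when the pair is a true adjoint pair

# The "unadorned" conjugate image: right structure, scale not yet meaningful.
m_hat = conjugate(forward(m))
```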
The unadorned conjugate operator gives an answer that is generally incorrect in its scale factor. In imaging, the scale factor is usually chosen according to the image variance, so there it is not an important issue. In more general applications, however, scaling can be very important, and to be meaningful, a minimum requirement is that the components of $\hat{\mathbf{m}}$ be scaled so that each has the same physical dimension, and likewise for $\mathbf{d}$. More generally, scaling can involve statistical models of increasing complexity. We cannot hope to pursue this complexity through all its logical ramifications, yet we cannot ignore it, because of the many opportunities to learn more from our data that may be hidden there, particularly in the covariance structure.
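One common practical repair for the overall scale, offered here as an assumption rather than as the text's prescription, is to fit a single scalar $\alpha$ to the data by least squares after forming the conjugate image $\hat{\mathbf{m}} = \mathbf{F}^*\mathbf{d}$:

```python
import numpy as np

rng = np.random.default_rng(3)

F = rng.standard_normal((20, 10))
m_true = rng.standard_normal(10)
d = F @ m_true

# Unadorned conjugate image: useful structure, arbitrary overall scale.
m_hat = F.conj().T @ d

# Least-squares scale factor (an illustrative choice, not the text's method):
# alpha = <F m_hat, d> / <F m_hat, F m_hat> minimizes || d - alpha F m_hat ||^2.
Fm = F @ m_hat
alpha = np.dot(Fm, d) / np.dot(Fm, Fm)
m_scaled = alpha * m_hat

print("residual, unadorned:", np.linalg.norm(d - F @ m_hat))
print("residual, rescaled :", np.linalg.norm(d - F @ m_scaled))
```

Replacing the single scalar with component-wise weights on $\mathbf{d}$ and $\hat{\mathbf{m}}$ is one route toward the variance and covariance scalings discussed above.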