Strictly speaking, a rectangular matrix does not have an inverse. Surprising things often happen, but commonly, when $\mathbf{F}$ is a tall matrix (more data values than model values) the matrix $\mathbf{F}'\mathbf{F}$ for finding the model $\mathbf{m}$ is invertible while $\mathbf{F}\mathbf{F}'$ is not, and when the matrix is wide instead of tall (the number of data values is less than the number of model values) it is the other way around. In many applications neither $\mathbf{F}'\mathbf{F}$ nor $\mathbf{F}\mathbf{F}'$ is invertible. This difficulty is solved by ``damping'' as we will see in later chapters. The point to notice in this chapter on adjoints is that in any application where $\mathbf{F}\mathbf{F}'$ or $\mathbf{F}'\mathbf{F}$ equals $\mathbf{I}$ (unitary operator), the adjoint operator is the inverse, by either equation (32) or (33).
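The tall/wide pattern above is easy to verify numerically. The following sketch (not from the text; it uses NumPy, random test matrices, and the ordinary transpose in place of the general adjoint) checks the rank of $\mathbf{F}'\mathbf{F}$ and $\mathbf{F}\mathbf{F}'$ for a tall and a wide matrix, and confirms that for a unitary (here, orthogonal) operator the adjoint acts as the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall matrix: 6 data values, 3 model values.
F = rng.standard_normal((6, 3))
# F'F is 3x3 and generically full rank (invertible);
# FF' is 6x6 but its rank cannot exceed 3, so it is singular.
print(np.linalg.matrix_rank(F.T @ F))   # 3  -> invertible
print(np.linalg.matrix_rank(F @ F.T))   # 3  < 6, singular

# Wide matrix: 3 data values, 6 model values -- the other way around.
F = rng.standard_normal((3, 6))
print(np.linalg.matrix_rank(F.T @ F))   # 3  < 6, singular
print(np.linalg.matrix_rank(F @ F.T))   # 3  -> invertible

# Unitary operator: Q'Q = I, so the adjoint is the inverse.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.allclose(Q.T @ Q, np.eye(4)))  # True
```

For a generic random matrix both products have rank equal to the smaller dimension, which is why only the smaller of the two products is invertible.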
Theoreticians like to study inverse problems where the model $\mathbf{m}$ is drawn from the space of continuous functions. This is like the vector $\mathbf{m}$ having infinitely many components. Such problems are hopelessly intractable unless we find, or assume, that the operator $\mathbf{F}$ is an identity or diagonal matrix.
In practice, theoretical considerations may have little bearing on how we proceed. Current computational power limits matrix inversion jobs to about $10^4$ variables. This book specializes in big problems, those with more than about $10^4$ variables, but the methods we learn are also excellent for smaller problems.