Although a linear operator does not have defined subscripts,
you can determine what its value would be at any subscript:
applying the operator to an impulse function yields one column of the equivalent matrix.
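A minimal sketch of this idea: the operator below (a first-difference filter, chosen arbitrarily for illustration) is a procedure with no visible subscripts, yet probing it with an impulse recovers any column of its hidden matrix.

```python
import numpy as np

def column_of(operator, n, j):
    """Recover column j of the matrix hidden inside `operator`
    by applying it to an impulse (a unit vector of length n)."""
    impulse = np.zeros(n)
    impulse[j] = 1.0
    return operator(impulse)

# A hypothetical subscript-free operator: first differences.
diff = lambda x: np.diff(x, prepend=0.0)

# Column 2 of the equivalent matrix.
col = column_of(diff, 5, 2)
```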
The adjoint operator is one from which we can extract the transpose matrix.
For large spaces this extraction is unwieldy,
so to test the validity of adjoints,
we probe them with random vectors,
say $\mathbf{x}$ and $\mathbf{y}$,
to see whether
$\mathbf{y}^\top (\mathbf{A}\mathbf{x}) = (\mathbf{A}^\top\mathbf{y})^\top \mathbf{x}$.
Mathematicians define adjoints by this test,
except that instead of using random vectors,
they say ``for all functions,'' which includes the continuum.
This defining test makes adjoints look mysterious.
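The random-vector probe is easy to carry out in practice. Here is a minimal sketch: the operator pair (a causal running sum and its claimed adjoint, a reversed running sum) is a hypothetical example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# A forward operator and its claimed adjoint, written as
# subscript-free procedures rather than explicit matrices.
def causal_sum(x):
    return np.cumsum(x)

def causal_sum_adjoint(y):
    # Adjoint of a causal running sum: an anticausal running sum.
    return np.cumsum(y[::-1])[::-1]

# Dot-product test: y'(Ax) should equal (A'y)'x for random x and y.
n = 100
x = rng.standard_normal(n)
y = rng.standard_normal(n)
lhs = y @ causal_sum(x)
rhs = causal_sum_adjoint(y) @ x
```

If the two inner products disagree beyond rounding error, the claimed adjoint is wrong.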
Careful inspection of operator adjoints,
however, generally reveals that they are built up from simple matrices.
Given adjoints
$\mathbf{A}^\top$,
$\mathbf{B}^\top$,
and $\mathbf{C}^\top$,
the adjoint of $\mathbf{A}\mathbf{B}\mathbf{C}$
is $\mathbf{C}^\top\mathbf{B}^\top\mathbf{A}^\top$.
Fourier transforms and linear-differential-equation solvers
are chains of matrices,
so their adjoints can be assembled
by applying the adjoint components in reverse order.
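The reverse-order rule can be confirmed with the dot-product test; the matrices and sizes below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

# The chain ABC applied forward...
forward = lambda x: A @ (B @ (C @ x))
# ...and its adjoint, built from the component adjoints in reverse order.
adjoint = lambda y: C.T @ (B.T @ (A.T @ y))

x, y = rng.standard_normal(n), rng.standard_normal(n)
```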
The other way we often see complicated operators built from simple ones
is when operators are placed into the components of matrices,
typically a $1\times 2$ or $2\times 1$ matrix containing two operators.
An example of the adjoint of a two-component column operator is
\[
\begin{bmatrix} \mathbf{A} \\ \mathbf{B} \end{bmatrix}^\top
= \begin{bmatrix} \mathbf{A}^\top & \mathbf{B}^\top \end{bmatrix}
\tag{29}
\]
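The pattern for a two-component column operator can itself be checked with the dot-product test. The shapes and names below are illustrative assumptions: stacking two outputs in the forward direction corresponds to summing two adjoint applications in the reverse direction.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((4, 5))

# Forward: the column operator stacks the two outputs.
stack = lambda m: np.concatenate([A @ m, B @ m])
# Adjoint: the row of adjoints splits the data and sums the results.
stack_adj = lambda d: A.T @ d[:3] + B.T @ d[3:]

m = rng.standard_normal(5)
d = rng.standard_normal(7)
```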
Although in practice an operator might be built from matrices, fundamentally a matrix is a data structure, whereas an operator is a procedure. A matrix becomes an operator when its subscripts are hidden and it is simply applied to one space to produce another.
As matrices have inverses, so do linear operators.
You don't need subscripts to find an inverse.
The conjugate-gradient and conjugate-direction methods
explained in the next chapter
are attractive methods of finding them.
They merely apply
$\mathbf{A}$ and $\mathbf{A}^\top$
and use inner products to find the coefficients
of a polynomial in $\mathbf{A}^\top\mathbf{A}$
that represents the inverse operator.
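A minimal sketch of this idea, assuming a conjugate-gradient iteration on the normal equations: only applications of $\mathbf{A}$ and $\mathbf{A}^\top$ are needed, never the subscripts of $\mathbf{A}$ itself. The function name and loop structure are illustrative, not a library API.

```python
import numpy as np

def cg_solve(op, adj, d, n, niter):
    """Approximately solve A m = d using only applications of
    the operator `op` (A) and its adjoint `adj` (A')."""
    m = np.zeros(n)
    r = d - op(m)            # data-space residual
    g = adj(r)               # gradient, A' r
    s = g.copy()             # search direction
    gg = g @ g
    for _ in range(niter):
        q = op(s)
        alpha = gg / (q @ q)     # step size from inner products
        m += alpha * s
        r -= alpha * q
        g = adj(r)
        gg_new = g @ g
        s = g + (gg_new / gg) * s
        gg = gg_new
    return m

# Hypothetical use: invert a small, well-conditioned matrix operator.
rng = np.random.default_rng(3)
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
m_true = rng.standard_normal(8)
d = A @ m_true
m_est = cg_solve(lambda v: A @ v, lambda v: A.T @ v, d, 8, 50)
```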
Whenever we encounter a positive-definite matrix, we should recognize
its likely origin as a nonsymmetric matrix
times its adjoint, $\mathbf{A}^\top\mathbf{A}$.
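A short sketch of why such a product is symmetric and positive semidefinite (the matrix here is an arbitrary nonsymmetric, nonsquare example): the quadratic form $\mathbf{x}^\top\mathbf{A}^\top\mathbf{A}\,\mathbf{x}$ is just the squared length of $\mathbf{A}\mathbf{x}$.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))   # nonsymmetric, not even square
N = A.T @ A                       # the normal-equations matrix

x = rng.standard_normal(4)
quad = x @ N @ x                  # equals ||A x||^2, hence nonnegative
```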
Those in the
natural sciences often work on solving simultaneous equations but fail
to realize that they should return to the origin of the equations,
which is often a fitting goal; i.e.,
applying an operator to a model should yield data,
i.e., $\mathbf{A}\mathbf{m} \approx \mathbf{d}$,
where the operator $\mathbf{A}$
is a matrix of partial derivatives
(and there are potential underlying nonlinearities).
This begins another story with new ingredients,
weighting functions and statistics.