Sylvester's theorem provides a rapid way to calculate functions of a matrix.
Some simple functions of a matrix of frequent occurrence are $A^2$, $A^{-1}$, and (for $N$ large) $A^N$.
Two more matrix functions which are very important in wave propagation
are $e^{Ax}$ and $A^{1/2}$. Before going into the somewhat abstract proof of Sylvester's theorem,
we will take up a numerical example.
Consider the matrix

    A = \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix}        (26)
It will be necessary to have the column eigenvectors and the eigenvalues of
this matrix; they are given by

    A c_1 = \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = 5 \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \lambda_1 c_1        (27)

    A c_2 = \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = 1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \lambda_2 c_2        (28)
Since the matrix A is not symmetric,
it has row eigenvectors which
differ from the column vectors.
These are

    r_1 A = (1, 1) \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix} = 5\,(1, 1) = \lambda_1 r_1        (29)

    r_2 A = (1, -3) \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix} = 1\,(1, -3) = \lambda_2 r_2        (30)
We may abbreviate equations (27) through (30) by

    A c_1 = \lambda_1 c_1        (31a)
    A c_2 = \lambda_2 c_2        (31b)
    r_1 A = \lambda_1 r_1        (31c)
    r_2 A = \lambda_2 r_2        (31d)
The reader will observe that r or c could be multiplied by an
arbitrary scale factor and (31) would still be valid.
The eigenvectors are said to be normalized if scale factors have been chosen
so that $r_1 c_1 = 1$ and $r_2 c_2 = 1$; here this may be done by dividing each of $r_1$ and $r_2$ by 4. It will be observed
that $r_1 c_2 = 0$ and $r_2 c_1 = 0$, a general result to be established in the exercises.
Let us consider the behavior of the matrix

    c_1 r_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \frac{1}{4} (1, 1) = \frac{1}{4} \begin{pmatrix} 3 & 3 \\ 1 & 1 \end{pmatrix}

Any power of this matrix is the matrix itself; for example, its square is

    (c_1 r_1)(c_1 r_1) = c_1 (r_1 c_1) r_1 = c_1 r_1

This property is called idempotence (Latin for self-power).
It arises because $r_1 c_1 = 1$. The same thing is of course true of
$c_2 r_2$. Now notice that the matrix $c_1 r_1$
is ``perpendicular'' to the matrix $c_2 r_2$, that is

    (c_1 r_1)(c_2 r_2) = c_1 (r_1 c_2) r_2 = 0

since $r_1$ and $c_2$ are perpendicular.
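As a concrete check, here is a short NumPy sketch (not part of the original text; it assumes the example numbers above, with $r_1$ and $r_2$ each scaled by 1/4 so that $r_i c_i = 1$):

    import numpy as np

    # Eigenvectors of the example matrix (26), normalized so r1 c1 = r2 c2 = 1.
    c1 = np.array([[3.0], [1.0]])        # column eigenvector for lambda1 = 5
    c2 = np.array([[1.0], [-1.0]])       # column eigenvector for lambda2 = 1
    r1 = np.array([[1.0, 1.0]]) / 4.0    # row eigenvector for lambda1
    r2 = np.array([[1.0, -3.0]]) / 4.0   # row eigenvector for lambda2

    P1 = c1 @ r1                         # the matrix c1 r1
    P2 = c2 @ r2                         # the matrix c2 r2

    print(np.allclose(P1 @ P1, P1))                 # True: c1 r1 is idempotent
    print(np.allclose(P2 @ P2, P2))                 # True: c2 r2 is idempotent
    print(np.allclose(P1 @ P2, np.zeros((2, 2))))   # True: they are "perpendicular"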
Sylvester's theorem says that any function f of the matrix A may be
written

    f(A) = f(\lambda_1)\, c_1 r_1 + f(\lambda_2)\, c_2 r_2

The simplest example is $f(A) = A$ itself:

    A = \lambda_1 c_1 r_1 + \lambda_2 c_2 r_2 = \frac{5}{4} \begin{pmatrix} 3 & 3 \\ 1 & 1 \end{pmatrix} + \frac{1}{4} \begin{pmatrix} 1 & -3 \\ -1 & 3 \end{pmatrix} = \begin{pmatrix} 4 & 3 \\ 1 & 2 \end{pmatrix}        (32)
Another example is

    A^2 = \lambda_1^2 c_1 r_1 + \lambda_2^2 c_2 r_2 = \frac{25}{4} \begin{pmatrix} 3 & 3 \\ 1 & 1 \end{pmatrix} + \frac{1}{4} \begin{pmatrix} 1 & -3 \\ -1 & 3 \end{pmatrix} = \begin{pmatrix} 19 & 18 \\ 6 & 7 \end{pmatrix}

The inverse is

    A^{-1} = \frac{1}{\lambda_1} c_1 r_1 + \frac{1}{\lambda_2} c_2 r_2 = \frac{1}{20} \begin{pmatrix} 3 & 3 \\ 1 & 1 \end{pmatrix} + \frac{1}{4} \begin{pmatrix} 1 & -3 \\ -1 & 3 \end{pmatrix} = \frac{1}{5} \begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix}

The identity matrix may be expanded in terms of the eigenvectors of the
matrix A:

    I = c_1 r_1 + c_2 r_2
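These expansions are easy to verify numerically. The sketch below (again not from the original; it reuses the idempotent matrices $c_1 r_1$ and $c_2 r_2$ of the example) evaluates $f(A) = f(\lambda_1) c_1 r_1 + f(\lambda_2) c_2 r_2$ for several functions f and compares with direct computation:

    import numpy as np

    A = np.array([[4.0, 3.0], [1.0, 2.0]])
    lam1, lam2 = 5.0, 1.0
    P1 = np.array([[3.0, 3.0], [1.0, 1.0]]) / 4.0    # c1 r1
    P2 = np.array([[1.0, -3.0], [-1.0, 3.0]]) / 4.0  # c2 r2

    def sylvester(f):
        """Evaluate f(A) from f(A) = f(lam1) c1 r1 + f(lam2) c2 r2."""
        return f(lam1) * P1 + f(lam2) * P2

    print(np.allclose(sylvester(lambda l: l), A))                   # A, eq. (32)
    print(np.allclose(sylvester(lambda l: l**2), A @ A))            # A^2
    print(np.allclose(sylvester(lambda l: 1/l), np.linalg.inv(A)))  # A^{-1}
    print(np.allclose(sylvester(lambda l: 1.0), np.eye(2)))         # identity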
Before illustrating some more complicated functions let us see what it takes
to prove Sylvester's theorem.
We will need one basic result which is in all the books on matrix theory,
namely, that most matrices (see exercises) can be diagonalized.
In terms of our example this takes the form

    \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} A \begin{pmatrix} c_1 & c_2 \end{pmatrix}        (33)

where

    \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \begin{pmatrix} c_1 & c_2 \end{pmatrix} = \begin{pmatrix} r_1 c_1 & r_1 c_2 \\ r_2 c_1 & r_2 c_2 \end{pmatrix} = I        (34)

Since a matrix commutes with its inverse, (34) implies

    \begin{pmatrix} c_1 & c_2 \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} = c_1 r_1 + c_2 r_2 = I        (35)
Postmultiply (33) by the row matrix $\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}$
and premultiply it by the column matrix $\begin{pmatrix} c_1 & c_2 \end{pmatrix}$.
Using (35),
we get

    A = \begin{pmatrix} c_1 & c_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}        (36)

Equation (36) is (32) in disguise,
as we can see by writing (36) as

    A = \lambda_1 c_1 r_1 + \lambda_2 c_2 r_2

Now to get $A^2$ we have

    A^2 = \begin{pmatrix} c_1 & c_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \begin{pmatrix} c_1 & c_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}

Using the orthonormality (34) of the $r_i$ and $c_j$, this reduces to

    A^2 = \begin{pmatrix} c_1 & c_2 \end{pmatrix} \begin{pmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{pmatrix} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} = \lambda_1^2 c_1 r_1 + \lambda_2^2 c_2 r_2
It is clear how (36) can be used to prove Sylvester's theorem
for any polynomial function of A.
Clearly, there is nothing peculiar about $2 \times 2$ matrices either;
the same argument works for $N \times N$ matrices.
Likewise, one may consider infinite series functions in A.
Since almost any function can be built up from infinite series,
we can also consider transcendental functions
such as the sine, cosine, and exponential.
Exponentials arise naturally as the solutions to differential equations.
Consider the matrix differential equation

    \frac{d \mathbf{y}}{dx} = A\, \mathbf{y}        (37)

One may readily verify the power series solution

    \mathbf{y}(x) = \left( I + A x + \frac{A^2 x^2}{2!} + \frac{A^3 x^3}{3!} + \cdots \right) \mathbf{y}(0) = e^{Ax}\, \mathbf{y}(0)

This is the power series definition of an exponential function.
If the matrix A is one of that vast majority which can be diagonalized,
then the exponential can be more simply expressed by Sylvester's theorem.
For the numerical example we have been considering, we have

    e^{Ax} = e^{\lambda_1 x} c_1 r_1 + e^{\lambda_2 x} c_2 r_2 = \frac{e^{5x}}{4} \begin{pmatrix} 3 & 3 \\ 1 & 1 \end{pmatrix} + \frac{e^{x}}{4} \begin{pmatrix} 1 & -3 \\ -1 & 3 \end{pmatrix}
The exponential matrix is a solution
to the differential equation (37) without regard to boundaries.
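As a numerical aside (a sketch, not in the original; `scipy.linalg.expm` evaluates the series definition of the matrix exponential directly), Sylvester's form of $e^{Ax}$ can be checked against direct computation:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[4.0, 3.0], [1.0, 2.0]])
    P1 = np.array([[3.0, 3.0], [1.0, 1.0]]) / 4.0    # c1 r1
    P2 = np.array([[1.0, -3.0], [-1.0, 3.0]]) / 4.0  # c2 r2

    x = 0.3
    E_sylvester = np.exp(5 * x) * P1 + np.exp(1 * x) * P2
    print(np.allclose(E_sylvester, expm(A * x)))     # True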
It frequently happens that physics gives one a differential equation

    \frac{d}{dx} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = A \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}        (38)

subject to two boundary conditions on $y_1$ or $y_2$ or a
combination of the two.
One may verify that

    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = k_1 e^{\lambda_1 x} c_1 + k_2 e^{\lambda_2 x} c_2

is the solution to (38) for arbitrary constants $k_1$ and $k_2$.
Boundary conditions are then used to determine
the numerical values of k1 and k2.
Note that, with the normalization $r_i c_i = 1$, the constants are just $k_1 = r_1 \mathbf{y}(x=0)$ and $k_2 = r_2 \mathbf{y}(x=0)$.
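For illustration (a sketch with an assumed boundary value $\mathbf{y}(0)$ that is not taken from the original), the constants follow from the row eigenvectors, and the resulting solution matches $e^{Ax}\, \mathbf{y}(0)$:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[4.0, 3.0], [1.0, 2.0]])
    c1, c2 = np.array([3.0, 1.0]), np.array([1.0, -1.0])
    r1, r2 = np.array([1.0, 1.0]) / 4.0, np.array([1.0, -3.0]) / 4.0

    y0 = np.array([2.0, 1.0])     # assumed boundary values y1(0), y2(0)
    k1, k2 = r1 @ y0, r2 @ y0     # k_i = r_i y(0), using the normalization r_i c_i = 1

    x = 0.5
    y = k1 * np.exp(5 * x) * c1 + k2 * np.exp(1 * x) * c2
    print(np.allclose(y, expm(A * x) @ y0))   # True: same solution as e^{Ax} y(0)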
An interesting situation arises with the square root of a matrix.
A matrix like A will have four square roots,
because there are four possible combinations for choice
of plus or minus signs on $\sqrt{\lambda_1}$ and $\sqrt{\lambda_2}$:

    A^{1/2} = \pm \sqrt{\lambda_1}\; c_1 r_1 \pm \sqrt{\lambda_2}\; c_2 r_2

In general, an $N \times N$ matrix has $2^N$ square roots.
An important application arises in a later chapter,
where we will deal with the differential operator
$(k^2 + \partial^2/\partial x^2)^{1/2}$. The square root of an operator is explained in very few books, and few people
even know what it means.
The best way to visualize the square root of this differential operator
is to relate it to the square root of the matrix M
where

    M = k^2 I + T = k^2 I + \begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & \ddots \\ & & \ddots & \ddots \end{pmatrix}

The right-hand matrix T is a second difference approximation
to a second partial derivative.
Let us define

    M = k^2 \left( I + k^{-2} T \right)
Clearly we wish to consider M generalized to a very large size so that
the end effects may be minimized.
In concept, we can make M as large
as we like and for any size we can get square roots.
In practice there will be only two square roots of interest,
one with the plus roots of all the eigenvalues
and the other with all the minus roots.
How can we find these ``principal value'' square roots?
An important case of interest is where $k^2$ is large compared to the elements of T; then we can use the binomial theorem, so that

    M^{1/2} = k \left( I + k^{-2} T \right)^{1/2} = k \left( I + \frac{1}{2} k^{-2} T - \frac{1}{8} k^{-4} T^2 + \cdots \right)

The result is justified by merely squaring the assumed square root.
Alternatively, it may be justified by means of Sylvester's theorem.
It should be noted that on squaring the assumed square root
one utilizes the fact that I and T commute.
We are led to the idea that the square root of
the differential operator may be interpreted as

    \left( k^2 + \frac{\partial^2}{\partial x^2} \right)^{1/2} = k \left( 1 + \frac{1}{2 k^2} \frac{\partial^2}{\partial x^2} - \frac{1}{8 k^4} \frac{\partial^4}{\partial x^4} + \cdots \right)

provided that k is not a function of x.
If k is a function of x,
the square root of the differential operator still has meaning but is not
so simply computed with the binomial theorem.
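To see the binomial square root at work, here is a sketch (not from the original; the size N = 50 and the constant k = 4 are arbitrary assumptions, chosen so that $k^2$ dominates T) that builds the second difference matrix, forms the truncated series for $M^{1/2}$, and squares it:

    import numpy as np

    N, k = 50, 4.0                  # assumed size and constant; k**2 is large vs. T
    I = np.eye(N)
    ones = np.ones(N - 1)
    T = -2.0 * I + np.diag(ones, 1) + np.diag(ones, -1)   # second difference matrix
    M = k**2 * I + T

    # Truncated binomial series: M^(1/2) ~ k (I + T/(2 k^2) - T^2/(8 k^4))
    S = k * (I + T / (2 * k**2) - T @ T / (8 * k**4))

    print(np.abs(S @ S - M).max())  # small residual; shrinks as more terms are kept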
EXERCISES:
- Premultiply (31b) by $r_1$ and postmultiply
(31c) by $c_2$, then subtract.
Is $\lambda_1 \neq \lambda_2$ a necessary condition for $r_1$ and $c_2$ to be perpendicular?
Is it a sufficient condition?
- Show the Cayley-Hamilton theorem, that is, if

    (\lambda - \lambda_1)(\lambda - \lambda_2) = 0

is the characteristic equation of A, then

    (A - \lambda_1 I)(A - \lambda_2 I) = 0
- Verify that, for a general $2 \times 2$ matrix A,

    A^2 = (\lambda_1 + \lambda_2) A - \lambda_1 \lambda_2 I

where $\lambda_1$ and $\lambda_2$ are eigenvalues of A. What is the
general form for $A^n$?
- For a symmetric matrix it can be shown that there is always a complete
set of eigenvectors.
A problem sometimes arises with nonsymmetric matrices.
Study the matrix

    \begin{pmatrix} 1 & 1 \\ \epsilon & 1 \end{pmatrix}

as $\epsilon \to 0$ to see why one eigenvector is lost.
This is called a defective matrix.
(This example is from T. R. Madden.)
- A wide variety of wave-propagation problems in a stratified medium
reduce to the equation

    \frac{d}{dx} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
What is the x dependence of the solution when ab is positive?
When ab is negative?
Assume a and b are independent of x.
Use Sylvester's theorem.
What would it take to get a defective matrix?
What are the solutions in the case of a defective matrix?
- Consider a matrix of the form $I + \mathbf{v} \mathbf{v}^T$, where $\mathbf{v}$
is a column vector and $\mathbf{v}^T$ is its transpose.
Find $(I + \mathbf{v} \mathbf{v}^T)^{-1}$ in terms of a power series in $\mathbf{v} \mathbf{v}^T$. [Note that $(\mathbf{v} \mathbf{v}^T)^2$ collapses to $\mathbf{v} \mathbf{v}^T$ times a scaling factor $\mathbf{v}^T \mathbf{v}$,
so the power series reduces considerably.]
- The following ``cross-product'' matrix often arises in electrodynamics.
Let $\mathbf{u}$ be a unit vector and define the matrix U by

    U \mathbf{v} = \mathbf{u} \times \mathbf{v} \quad \text{for every vector } \mathbf{v}
- (a)
- Write out the elements of U in terms of the components of $\mathbf{u}$.
- (b)
- Show that $U^2 = \mathbf{u} \mathbf{u}^T - I$ or, equivalently,
$U^3 = -U$.
- (c)
- Let $\mathbf{v}$ be an arbitrary vector. In what geometrical
directions do $U \mathbf{v}$,
$U^2 \mathbf{v}$, and $U^3 \mathbf{v}$ point?
- (d)
- What are the eigenvalues of U?
[Hint: Use part (b).]
- (e)
- Why cannot U be canceled from $U^3 = -U$?
- (f)
- Verify that the idempotent matrices of U are $\mathbf{u} \mathbf{u}^T$ and $\frac{1}{2} \left( I - \mathbf{u} \mathbf{u}^T \mp i U \right)$.