
Introduction

A singular value decomposition (SVD) of a real $m \times n$ matrix $A$ is its factorization into the product of three matrices:

 
\begin{equation}
A = U \Sigma V^{T}
\end{equation}
where $U$ ($m \times m$) and $V$ ($n \times n$) are orthogonal matrices and $\Sigma$ ($m \times n$) is a diagonal matrix whose diagonal elements are the singular values of $A$.
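
As a concrete check of the factorization (1), the following small NumPy example (a modern convenience used here for illustration, not one of the algorithms discussed in this report) computes the three factors and verifies that their product reconstructs $A$:

\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])              # a 3 x 2 example

U, sigma, Vt = np.linalg.svd(A)         # U is 3 x 3, Vt is V^T (2 x 2)
Sigma = np.zeros(A.shape)               # embed the singular values in a
Sigma[:len(sigma), :len(sigma)] = np.diag(sigma)  # 3 x 2 diagonal matrix

assert np.allclose(A, U @ Sigma @ Vt)   # A = U Sigma V^T
\end{verbatim}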

There are several ways of computing the SVD. With serial processing, variants of the QR algorithm (used to compute the eigenvalues of a matrix) can be used to compute the SVD. The basic approach involves the following steps (Golub and Van Loan, 1989):

1. Form $C = A^{T}A$.
2. Use the symmetric QR algorithm to compute an orthogonal $V_1$ such that $V_1^{T}CV_1 = \mathrm{diag}(\sigma_i^{2})$.
3. Apply QR factorization with column pivoting to $AV_1$, obtaining $U^{T}(AV_1)\Pi = R$.

With some manipulations, it can be shown that $R$ is diagonal, so we get $\Sigma = R$ and $V = V_1\Pi$, completing the SVD computation.
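
To make the three steps concrete, here is a minimal serial sketch in NumPy/SciPy, assuming $m \geq n$ and full column rank (the function name svd_via_ata is ours, not from the reference):

\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def svd_via_ata(A):
    # Step 1: form the cross-product matrix.  This squares the condition
    # number of A, which is the loss of information noted below.
    C = A.T @ A
    # Step 2: symmetric eigendecomposition of C; eigenvalues are sigma_i^2.
    lam, V1 = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1]          # sort descending
    V1 = V1[:, order]
    # Step 3: QR with column pivoting applied to A V1, so (A V1) Pi = U R.
    U, R, piv = qr(A @ V1, mode='economic', pivoting=True)
    # R is diagonal up to roundoff and column signs; absorb signs into U.
    sigma = np.abs(np.diag(R))
    U = U * np.sign(np.diag(R))
    V = V1[:, piv]                         # V = V1 Pi
    return U, sigma, V
\end{verbatim}

For well-conditioned matrices this agrees with a library SVD; for nearly rank-deficient ones the squared condition number of $A^{T}A$ degrades the small singular values, which motivates the alternative described next.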

The drawback with this approach is that the formation of $A^{T}A$ can lead to a loss of information. A preferable method for computing the SVD is described in Golub and Kahan (1965). The symmetric QR algorithm is implicitly applied to $A^{T}A$ (in other words, the product $A^{T}A$ is never calculated) in order to find $U$ and $V$ simultaneously.

The approaches described above are very effective on serial processors, but they are not readily amenable to parallel implementation. With parallel processing, the preferred methods are variations of the Jacobi algorithm used to compute the eigenvalues of a symmetric matrix. The classical Jacobi algorithm (Jacobi, 1846) involves pre-multiplication and post-multiplication of $A$ by rotations of the form (Jacobi rotations):

\begin{displaymath}
J(p,q,\theta) =
\left(
\begin{array}{ccccccc}
1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & & \vdots & & \vdots \\
0 & \cdots & c & \cdots & s & \cdots & 0 \\
\vdots & & \vdots & \ddots & \vdots & & \vdots \\
0 & \cdots & -s & \cdots & c & \cdots & 0 \\
\vdots & & \vdots & & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \cdots & 0 & \cdots & 1 \\
\end{array}
\right)
\end{displaymath}

$c$ and $s$ are the cosine and sine of an angle $\theta$ determined in accordance with the following objective:

\begin{displaymath}
D =
\left(
\begin{array}{cc}
c & s \\
-s & c \\
\end{array}
\right)^{T}
\left(
\begin{array}{cc}
a_{pp} & a_{pq} \\
a_{qp} & a_{qq} \\
\end{array}
\right)
\left(
\begin{array}{cc}
c & s \\
-s & c \\
\end{array}
\right)
\end{displaymath}

where $D$ is a diagonal matrix and $a_{pp}, a_{pq}, a_{qp}, a_{qq}$ are elements of $A$. With the resulting $\theta$, $B = J^{T}AJ$ will agree with $A$ except in rows and columns $p$ and $q$, and is closer to diagonal form than $A$. Continuing this pre-multiplication and post-multiplication by matrices $J(p,q,\theta)$, using different $p$ and $q$, we can obtain an orthogonal diagonalization of $A$; a serial sketch of this iteration is given below. This algorithm can be modified to obtain the SVD of an arbitrary $m \times n$ matrix $A$.

A parallel implementation involves the storage of two columns of the matrix per processor. The determination of the $\frac{n}{2}$ matrices $J(p,q,\theta)$ and the post-multiplication can be done in parallel. The problem with this approach is that the transformation matrices $J$ have to move between processors in order to perform the pre-multiplication that affects the rows.
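
The following serial NumPy sketch (the function names sym_schur2 and jacobi_eig are ours) makes the iteration explicit for a symmetric matrix: each rotation angle is chosen so that the $2 \times 2$ objective above is met, and $B = J^{T}AJ$ is applied repeatedly until the off-diagonal norm is negligible:

\begin{verbatim}
import numpy as np

def sym_schur2(A, p, q):
    # Choose c = cos(theta), s = sin(theta) so that rotating rows and
    # columns p and q of the symmetric matrix A annihilates A[p, q].
    if A[p, q] == 0.0:
        return 1.0, 0.0
    tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
    if tau >= 0.0:
        t = 1.0 / (tau + np.sqrt(1.0 + tau * tau))
    else:
        t = 1.0 / (tau - np.sqrt(1.0 + tau * tau))
    c = 1.0 / np.sqrt(1.0 + t * t)
    return c, t * c

def jacobi_eig(A, tol=1e-12, max_sweeps=30):
    # Cyclic Jacobi diagonalization of a symmetric matrix (serial form).
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(A * A) - np.sum(np.diag(A) ** 2))
        if off <= tol * np.linalg.norm(A):
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                c, s = sym_schur2(A, p, q)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J        # B = J^T A J zeroes A[p, q]
                V = V @ J
    return np.diag(A), V               # eigenvalues and eigenvectors
\end{verbatim}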

A parallel implementation using mesh-connected processors is described in Brent et al. (1985). In order to compute the SVD of an $m \times n$ matrix, $O(n^{2})$ processors are needed. This requirement seriously limits the size of the matrices that can be decomposed on the Connection Machine (CM).

The method that is most suitable for implementation on the CM is described in Brent and Luk (1985). It requires $O(n)$ processors and $O(mnS)$ time, where $S \approx O(\log n)$ (typically $S \leq 10$). With this method there is no successive pre-multiplication of the matrix $A$, so the transformation matrices do not have to move between processors. This approach to the SVD computation is an implementation of Hestenes' method (Hestenes, 1958).
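
As a preview of the next section, here is a minimal serial sketch of the one-sided idea: rotations are applied only from the right, orthogonalizing pairs of columns in place, so nothing analogous to the row-affecting pre-multiplication occurs. The function name hestenes_svd is ours; the sketch assumes full column rank and does not sort the singular values:

\begin{verbatim}
import numpy as np

def hestenes_svd(A, tol=1e-12, max_sweeps=30):
    U = A.astype(float).copy()
    m, n = U.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                alpha = U[:, i] @ U[:, i]
                beta  = U[:, j] @ U[:, j]
                gamma = U[:, i] @ U[:, j]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                 # columns already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta == 0.0:
                    t = 1.0                  # 45-degree rotation
                else:
                    t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [i, j]] = U[:, [i, j]] @ rot   # rotate column pair
                V[:, [i, j]] = V[:, [i, j]] @ rot   # accumulate V
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)        # singular values = column norms
    return U / sigma, sigma, V               # so that A = U diag(sigma) V^T
\end{verbatim}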

