
HESTENES' METHOD

To compute the SVD of A, an orthogonal matrix V is generated such that the transformed matrix AV = W has orthogonal columns. Scaling each column of W so that its 2-norm is unity, we get:

\begin{displaymath}
W = \tilde{U} \tilde{\Sigma}
\end{displaymath}

where $\tilde{U}$ is an $m \times n$ matrix and $\tilde{\Sigma}$ is an $n \times n$ nonnegative diagonal matrix. The corresponding SVD of A is given by:

 
\begin{displaymath}
A = \tilde{U} \tilde{\Sigma} V^{T}
\end{displaymath} (2)

Comparing equation (2) with equation (1), we notice the different sizes of U and $\tilde{U}$ as well as $\Sigma$ and $\tilde{\Sigma}$. However, since the null columns of U and $\tilde{U}$ are associated with zero diagonal elements of $\Sigma$ and $\tilde{\Sigma}$, respectively, the non-null columns of U and $\tilde{U}$ form orthonormal bases for range(A), and there is no essential difference between equations (1) and (2). Successive Jacobi rotations are applied to A in order to obtain W, giving the following iteration:

\begin{displaymath}
A_{k+1} = A_{k} J_{k}
\end{displaymath}

where $J_{k} = J(p,q,\theta)$ and $\theta$ is determined so that columns $p$ and $q$ of $A_{k+1}$ are orthogonal. Denoting column $p$ of $A_{k}$ by $a^{(k)}_{p}$, the requirement on $\theta$ can be stated as:

\begin{displaymath}
\left(
\begin{array}{cc}
a^{(k+1)}_{p} & a^{(k+1)}_{q}
\end{array}
\right)
=
\left(
\begin{array}{cc}
a^{(k)}_{p} & a^{(k)}_{q}
\end{array}
\right)
\left(
\begin{array}{cc}
c & s \\
-s & c
\end{array}
\right)
\end{displaymath}

where $c$ and $s$ are the cosine and sine of $\theta$, such that $a^{(k+1)T}_{p} a^{(k+1)}_{q} = 0$, leading to the following determination of $c$ and $s$:

\begin{displaymath}
a^{(k+1)}_{p} = c \, a^{(k)}_{p} - s \, a^{(k)}_{q}
\end{displaymath}

\begin{displaymath}
a^{(k+1)}_{q} = s \, a^{(k)}_{p} + c \, a^{(k)}_{q}
\end{displaymath}

\begin{displaymath}
a^{(k+1)T}_{p} a^{(k+1)}_{q} = c s \, a^{(k)T}_{p} a^{(k)}_{p} + c^{2} a^{(k)T}_{p} a^{(k)}_{q} - s^{2} a^{(k)T}_{q} a^{(k)}_{p} - c s \, a^{(k)T}_{q} a^{(k)}_{q} = 0
\end{displaymath}

Denoting $\alpha = a^{(k)T}_{p} a^{(k)}_{p}, \beta = a^{(k)T}_{q} a^{(k)}_{q}, \gamma = a^{(k)T}_{p} a^{(k)}_{q} = a^{(k)T}_{q} a^{(k)}_{p}$, we get:

\begin{displaymath}
c s \alpha + (c^{2} - s^{2}) \gamma - c s \beta = 0\end{displaymath}

Dividing by $c s$:

\begin{displaymath}
\alpha + 
\left(
\frac{c}{s}-\frac{s}{c}
\right) \gamma - \beta = 0\end{displaymath}

Denoting $t = \frac{s}{c}$ and $\xi = \frac{\beta - \alpha}{2 \gamma}$, we get:

\begin{displaymath}
t^{2} + 2 \xi t - 1 = 0\end{displaymath}
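
As a side note (not in the original derivation), the choice of root in the formula that follows can be motivated. The quadratic has the two real roots

\begin{displaymath}
t = -\xi \pm \sqrt{1+\xi^{2}}
\end{displaymath}

and rationalizing the root of smaller magnitude gives $t = \frac{sign(\xi)}{\vert\xi\vert+\sqrt{1+\xi^{2}}}$. This root keeps $\vert t \vert \leq 1$, so the rotation angle never exceeds $45^{\circ}$, the standard choice for numerical stability of Jacobi rotations.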

giving us $t = \frac{sign(\xi)}{\vert\xi\vert+\sqrt{1+\xi^{2}}}$, $c = \frac{1}{\sqrt{1+t^{2}}}$, and $s = t c$.
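
The derivation above translates directly into a small routine. The sketch below (NumPy; the function name and tolerance are our own choices, not from the original) computes $c$ and $s$ for one column pair and applies the rotation:

```python
import numpy as np

def jacobi_rotation(ap, aq, tol=1e-12):
    """Return (c, s) orthogonalizing the column pair (ap, aq),
    using the alpha/beta/gamma/xi formulas derived above.
    (Sketch only; name and tolerance are our own choices.)"""
    alpha = ap @ ap              # a_p^T a_p
    beta = aq @ aq               # a_q^T a_q
    gamma = ap @ aq              # a_p^T a_q
    if abs(gamma) < tol:         # columns already (numerically) orthogonal
        return 1.0, 0.0
    xi = (beta - alpha) / (2.0 * gamma)
    sgn = 1.0 if xi >= 0 else -1.0   # take sign(0) = +1 so t stays a root
    t = sgn / (abs(xi) + np.sqrt(1.0 + xi * xi))
    c = 1.0 / np.sqrt(1.0 + t * t)
    s = t * c
    return c, s

# Rotating a sample pair makes the new columns orthogonal:
ap = np.array([1.0, 2.0, 0.5])
aq = np.array([2.0, 1.0, 1.5])
c, s = jacobi_rotation(ap, aq)
ap_new = c * ap - s * aq         # a^(k+1)_p
aq_new = s * ap + c * aq         # a^(k+1)_q
```

By construction $a^{(k+1)T}_{p} a^{(k+1)}_{q} = 0$ up to rounding, and the rotation preserves $\Vert a_{p} \Vert^{2} + \Vert a_{q} \Vert^{2}$.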

For every iteration we choose a different column pair (p,q) and obtain a matrix Jk that orthogonalizes those columns. As we compute W, we can also compute V, beginning with the $n \times n$ identity matrix and performing the following at each iteration:

\begin{displaymath}
V_{k+1} = V_{k} J_{k}
\end{displaymath}

There are several ways of choosing the pairs of columns (p,q) that are orthogonalized at every step. The approach used here is one that is suitable for parallel implementation on the CM.
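
Putting the pieces together, the sketch below (NumPy) implements the full one-sided Hestenes iteration with a plain cyclic pair ordering, which is simpler than the parallel ordering used on the CM; the function name, sweep limit, and tolerance are our own assumptions:

```python
import numpy as np

def hestenes_svd(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi (Hestenes) SVD sketch: rotate column pairs of A
    until all columns are mutually orthogonal, accumulating V, then
    scale columns of W to obtain U~ and the singular values."""
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):      # cyclic-by-rows pair ordering
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue               # pair already orthogonal
                converged = False
                xi = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if xi >= 0 else -1.0
                t = sgn / (abs(xi) + np.sqrt(1.0 + xi * xi))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                # A_{k+1} = A_k J_k and V_{k+1} = V_k J_k
                J = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ J
                V[:, [p, q]] = V[:, [p, q]] @ J
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)      # column 2-norms of W
    U = A / np.where(sigma > 0, sigma, 1.0)  # leave null columns as zeros
    return U, sigma, V

# Example on a small fixed matrix:
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
U, sigma, V = hestenes_svd(A)
```

The returned singular values are not sorted, since each is simply the norm of the corresponding orthogonalized column; a final permutation can order them if needed.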


Stanford Exploration Project
12/18/1997