To compute the SVD of A, an orthogonal matrix V is generated such that the transformed matrix AV = W has orthogonal columns. Scaling each column of W so that its 2-norm is unity, we get:

W = Ũ Σ̃

where Ũ is an m × n matrix and Σ̃ is an n × n nonnegative diagonal matrix. The corresponding SVD of A is given by:

A = W Vᵀ = Ũ Σ̃ Vᵀ    (2)
Comparing equation (2) with equation (1), we notice the different sizes of U and Ũ, as well as Σ and Σ̃. However, since the null columns of U and Ũ are associated with zero diagonal elements of Σ and Σ̃, respectively, the non-null columns of U and Ũ form orthogonal bases for range(A), and there is no essential difference between equations (1) and (2). Successive Jacobi rotations are applied to A in order to obtain W, giving us the following iteration:
A_{k+1} = A_k J_k

where J_k is a plane rotation through an angle θ in the (p, q) plane, and θ is determined so that columns p and q of A_{k+1} are orthogonal. Denoting column p of A_k as a_p^(k), the rotation acts on the column pair as:

a_p^(k+1) = c a_p^(k) - s a_q^(k)
a_q^(k+1) = s a_p^(k) + c a_q^(k)

where c and s are the cosine and sine of θ, chosen such that a_p^(k+1)T a_q^(k+1) = 0, leading to the following determination of c and s:

a_p^(k+1)T a_q^(k+1) = c s a_p^(k)T a_p^(k) + c² a_p^(k)T a_q^(k) - s² a_q^(k)T a_p^(k) - c s a_q^(k)T a_q^(k) = 0

Denoting α = a_p^(k)T a_p^(k), β = a_q^(k)T a_q^(k), and γ = a_p^(k)T a_q^(k), we get:

c s (α - β) + (c² - s²) γ = 0
Dividing by c s:

(α - β) + ((c² - s²)/(c s)) γ = 0
Denoting ζ = (β - α)/(2γ) and t = s/c, we get:

t² + 2 ζ t - 1 = 0

whose root of smaller magnitude is

t = sign(ζ)/(|ζ| + √(1 + ζ²))

giving us c = 1/√(1 + t²) and s = t c. (The smaller root corresponds to a rotation angle |θ| ≤ π/4, the standard choice, which helps ensure convergence.)
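The determination of c and s above can be sketched in NumPy as follows (a minimal illustration; the function name jacobi_rotation and the handling of the already-orthogonal case are ours, not from the original):

```python
import numpy as np

def jacobi_rotation(a_p, a_q):
    """Compute (c, s) orthogonalizing columns a_p and a_q, using the
    formulas above: zeta = (beta - alpha)/(2*gamma), t the smaller-
    magnitude root of t^2 + 2*zeta*t - 1 = 0, c = 1/sqrt(1 + t^2), s = t*c."""
    alpha = a_p @ a_p   # alpha = a_p^T a_p
    beta = a_q @ a_q    # beta  = a_q^T a_q
    gamma = a_p @ a_q   # gamma = a_p^T a_q
    if gamma == 0.0:    # columns already orthogonal: no rotation needed
        return 1.0, 0.0
    zeta = (beta - alpha) / (2.0 * gamma)
    # take sign(0) = +1 so the zeta == 0 case still rotates (t = 1)
    sign = 1.0 if zeta >= 0.0 else -1.0
    t = sign / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
    c = 1.0 / np.sqrt(1.0 + t * t)
    return c, t * c

# Rotate a random column pair and verify the new columns are orthogonal.
rng = np.random.default_rng(0)
a_p = rng.standard_normal(5)
a_q = rng.standard_normal(5)
c, s = jacobi_rotation(a_p, a_q)
new_p = c * a_p - s * a_q
new_q = s * a_p + c * a_q
print(abs(new_p @ new_q))  # ~0, up to rounding
```

Note that the rotation leaves the combined column energy unchanged (c² + s² = 1); only the inner product between the pair is driven to zero.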
For every iteration we choose a different column pair (p, q) and obtain a matrix J_k that orthogonalizes those columns. As we compute W, we can also accumulate V, beginning with the identity matrix and performing the following update at each iteration:

V_{k+1} = V_k J_k
There are several ways of choosing the pairs of columns (p, q) that are orthogonalized at each step. The approach used here is suitable for parallel implementation on the CM.
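Putting the pieces together, the whole method can be sketched as below. For simplicity this sketch sweeps serially over all column pairs in cyclic order, which is only a stand-in for the parallel pair ordering discussed here; the function name, tolerance, and sweep limit are illustrative assumptions, not from the original:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD sketch: A_{k+1} = A_k J_k, V_{k+1} = V_k J_k,
    then scale the columns of W to unit norm to split W = U diag(sigma).
    Returns U, sigma, V with A ~= U @ diag(sigma) @ V.T."""
    W = np.array(A, dtype=float)
    m, n = W.shape
    V = np.eye(n)                       # accumulate V, starting from I
    for _ in range(max_sweeps):
        off = 0.0                       # largest inner product seen this sweep
        for p in range(n - 1):
            for q in range(p + 1, n):
                gamma = W[:, p] @ W[:, q]
                alpha = W[:, p] @ W[:, p]
                beta = W[:, q] @ W[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue            # pair already (nearly) orthogonal
                zeta = (beta - alpha) / (2.0 * gamma)
                sign = 1.0 if zeta >= 0.0 else -1.0
                t = sign / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                # A_{k+1} = A_k J_k: rotate columns p and q of W
                W[:, p], W[:, q] = c * W[:, p] - s * W[:, q], s * W[:, p] + c * W[:, q]
                # V_{k+1} = V_k J_k: apply the same rotation to V
                V[:, p], V[:, q] = c * V[:, p] - s * V[:, q], s * V[:, p] + c * V[:, q]
        if off <= tol:
            break
    sigma = np.linalg.norm(W, axis=0)        # column 2-norms are the singular values
    U = W / np.where(sigma > 0, sigma, 1.0)  # scale columns of W to unit norm
    return U, sigma, V

A = np.array([[4.0, 0.0], [3.0, -5.0]])
U, sigma, V = one_sided_jacobi_svd(A)
print(np.allclose(U * sigma @ V.T, A))  # True
```

A parallel version would instead apply n/2 rotations to disjoint column pairs simultaneously at each step, since rotations on disjoint pairs commute.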