
KRYLOV SUBSPACE ITERATIVE METHODS

The solution time for simultaneous linear equations grows cubically with the number of unknowns. There are three regimes for solution; which one is applicable depends on the number of unknowns m. For m of three or fewer, we use analytical methods. We also sometimes use analytical methods on matrices of size $4\times 4$ when the matrix contains enough zeros. Today, in year 2001, a deskside workstation working for an hour solves about a $4000\times 4000$ set of simultaneous equations. A square image packed into a 4096-point vector is a $64\times 64$ array. The computer power for linear algebra to give us solutions that fit in a $k\times k$ image is thus proportional to $k^6$, which means that even though computer power grows rapidly, imaging resolution using ``exact numerical methods'' hardly grows at all from our $64\times 64$ current practical limit.
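The $k^6$ scaling can be checked with a little arithmetic: a $k\times k$ image gives $m=k^2$ unknowns, and the direct-solve cost grows as $m^3 = k^6$. A minimal sketch (not from the text, illustration only):

```python
# Direct-solve cost grows as m^3 for m unknowns; a k x k image has m = k^2
# unknowns, so the cost relative to the 64 x 64 baseline grows as k^6.
def relative_cost(k, baseline=64):
    m = k * k                      # unknowns in a k x k image
    m0 = baseline * baseline       # unknowns in the baseline image
    return (m / m0) ** 3           # cubic growth of direct solvers

for k in (64, 128, 256, 1000):
    print(f"{k:5d} x {k:<5d} image: relative direct-solve cost = {relative_cost(k):12.0f}")
```

Doubling the image side multiplies the work by $2^6 = 64$; the $1000\times 1000$ retina-sized image of the next paragraph would cost roughly $15^6 \approx 10^7$ times the hour-long $64\times 64$ baseline.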

The retina in our eyes captures an image of size about $1000\times 1000$, which is a lot bigger than $64\times 64$. Life offers us many occasions where final images exceed the 4096 points of a $64\times 64$ array. To make linear algebra (and inverse theory) relevant to such problems, we investigate special techniques. A numerical technique known as the ``conjugate-direction method'' works well for all values of m and is our subject here. As with most simultaneous equation solvers, an exact answer (assuming exact arithmetic) is attained in a finite number of steps. And if n and m are too large to allow enough iterations, the iterative methods can be interrupted at any stage, the partial result often proving useful. Whether or not a partial result actually is useful is the subject of much research; naturally, the results vary from one application to the next.
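The interruptible character of these methods can be seen in the classic conjugate-gradient iteration, a member of the conjugate-direction family for symmetric positive-definite systems. The sketch below is an illustration, not the text's own implementation; every iteration leaves a usable partial solution x, and in exact arithmetic the answer is exact after m steps:

```python
import numpy as np

def conjugate_gradient(A, b, niter):
    """Minimal conjugate-gradient sketch for a symmetric positive-definite
    matrix A. Interruptible: stopping after any iteration yields a usable
    partial solution; exact arithmetic gives the exact answer in m steps."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # first search direction
    for _ in range(niter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # step length along p
        x = x + alpha * p              # update the solution estimate
        r_new = r - alpha * Ap         # update the residual
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new direction, conjugate to the old
        r = r_new
    return x

# Tiny 2-unknown example: two iterations suffice (m = 2).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, niter=2)
```

Each iteration costs only one matrix-vector product, which is why the method remains practical when m is far beyond the reach of cubic-cost direct solvers.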