When we use the steepest-descent method, we iteratively find solutions by this updating:
\begin{align}
\mathbf{x}_{k+1} &= \mathbf{x}_k + \alpha\,\Delta\mathbf{x} \tag{50}\\
&= \mathbf{x}_k + \alpha\,\mathbf{F}'\,\mathbf{r}_k \tag{51}\\
&= \mathbf{x}_k + \alpha\,\mathbf{F}'\,(\mathbf{F}\,\mathbf{x}_k - \mathbf{d}) \tag{52}
\end{align}
Suppose that by adding a huge amount of $\mathbf{x}_{\rm null}$ we now change $\mathbf{x}$ and continue iterating. Notice that the component of every update along $\mathbf{x}_{\rm null}$ remains zero: the residual $\mathbf{r}=\mathbf{F}\mathbf{x}-\mathbf{d}$ is blind to the added part because $\mathbf{F}\,\mathbf{x}_{\rm null}$ vanishes, and the gradient $\mathbf{F}'\mathbf{r}$ is orthogonal to $\mathbf{x}_{\rm null}$. Thus we conclude that any null-space component in the initial guess will remain there, unaffected by the gradient-descent process.
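To make this concrete, here is a minimal NumPy sketch (an illustration, not from the text): a first-difference operator, here called F, has the constant vector as its null space; running the update of equations (50)--(52) from a guess that carries a huge constant component leaves that component exactly where it started, while the residual is still driven to its minimum. The operator, the data, and the helper steepest_descent are all names assumed for the illustration.
\begin{verbatim}
import numpy as np

# Illustrative setup (assumed, not from the text): F is a first-difference
# operator on a 3-point model, so its null space is spanned by the constant
# vector [1, 1, 1].
F = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])
x_null = np.ones(3) / np.sqrt(3.0)        # unit vector spanning the null space
d = F @ np.array([1.0, 3.0, 6.0])         # synthetic data from a known model

def steepest_descent(F, d, x0, niter=200):
    """Minimize ||F x - d||^2 by the update of equations (50)-(52)."""
    x = x0.copy()
    for _ in range(niter):
        r  = F @ x - d                    # residual
        dx = F.T @ r                      # gradient direction F' r
        dr = F @ dx
        if dr @ dr == 0.0:                # gradient vanished: converged
            break
        alpha = -(r @ dr) / (dr @ dr)     # scalar step from an exact line search
        x = x + alpha * dx
    return x

x0 = 100.0 * x_null                       # a huge null-space component in the guess
x  = steepest_descent(F, d, x0)

print(x_null @ x0, x_null @ x)            # both about 100: the null-space part survives
print(np.linalg.norm(F @ x - d))          # about 0: the residual is still minimized
\end{verbatim}
The line search makes $\alpha$ negative here because $\Delta\mathbf{x}=\mathbf{F}'\mathbf{r}$ points uphill; any descent variant leads to the same conclusion, since every update is orthogonal to $\mathbf{x}_{\rm null}$.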
Linear algebra theory enables us to dig up the entire null space should we so desire. On the other hand, the computational demands might be vast. Even the memory for holding the many null-space vectors could be prohibitive. A much simpler and more practical goal is to find out whether the null space has any members and, if so, to view some of them. To try to see a member of the null space, we take two starting guesses and run our iterative solver from each of them. If the two solutions, $\mathbf{x}_1$ and $\mathbf{x}_2$, are the same, there is no null space. If the solutions differ, the difference is a member of the null space. Let us see why: suppose that after iterating to minimum residual we find
\begin{align}
\mathbf{r}_1 &= \mathbf{F}\,\mathbf{x}_1 - \mathbf{d} \tag{53}\\
\mathbf{r}_2 &= \mathbf{F}\,\mathbf{x}_2 - \mathbf{d} \tag{54}
\end{align}
At the minimum the two residuals are equal, because the best fit to $\mathbf{d}$ within the range of $\mathbf{F}$ is unique. Subtracting gives $\mathbf{F}(\mathbf{x}_1 - \mathbf{x}_2) = \mathbf{r}_1 - \mathbf{r}_2 = \mathbf{0}$, so the difference $\mathbf{x}_1 - \mathbf{x}_2$ is a member of the null space.
A practical way to learn about the existence of null spaces and their general appearance is simply to try gradient-descent methods beginning from various starting guesses.
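Here is a sketch of that recipe under the same assumed first-difference example (the operator F, data d, and the simple solver are illustrative names, redefined so the snippet runs on its own): two random starting guesses give two answers whose difference is essentially a constant vector, a member of the null space, while their residuals agree.
\begin{verbatim}
import numpy as np

# Same assumed setup as the previous sketch, repeated so this runs on its own.
F = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])
d = F @ np.array([1.0, 3.0, 6.0])

def steepest_descent(F, d, x0, niter=500):
    x = x0.copy()
    for _ in range(niter):
        r  = F @ x - d
        dx = F.T @ r
        dr = F @ dx
        if dr @ dr == 0.0:
            break
        x = x - ((r @ dr) / (dr @ dr)) * dx   # x + alpha*dx with line-search alpha
    return x

rng = np.random.default_rng(0)
x1 = steepest_descent(F, d, rng.standard_normal(3))   # first starting guess
x2 = steepest_descent(F, d, rng.standard_normal(3))   # second starting guess

r1, r2 = F @ x1 - d, F @ x2 - d                       # equations (53)-(54)
print(np.linalg.norm(r1 - r2))        # about 0: both runs reached minimum residual
print(x1 - x2)                        # roughly constant: a null-space member
print(np.linalg.norm(F @ (x1 - x2)))  # about 0: F annihilates the difference
\end{verbatim}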
``Did I fail to run my iterative solver long enough?'' is a question you might have. If the two residuals from the two starting solutions are not equal, $\mathbf{r}_1 \ne \mathbf{r}_2$, then you should run your solver through more iterations.
If two different starting solutions produce two different residuals, then you didn't run your solver through enough iterations.
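The same toy setup also illustrates this diagnostic (every name here is, again, assumed for illustration): stopped after a single iteration, the two residuals still disagree; after many iterations they match to machine precision even though the two answers continue to differ by a null-space member.
\begin{verbatim}
import numpy as np

# Assumed first-difference example again, used only to show the stopping test.
F = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])
d = F @ np.array([1.0, 3.0, 6.0])

def steepest_descent(F, d, x0, niter):
    x = x0.copy()
    for _ in range(niter):
        r  = F @ x - d
        dx = F.T @ r
        dr = F @ dx
        if dr @ dr == 0.0:
            break
        x = x - ((r @ dr) / (dr @ dr)) * dx
    return x

rng = np.random.default_rng(1)
x0a = 10.0 * rng.standard_normal(3)
x0b = 10.0 * rng.standard_normal(3)

for niter in (1, 100):
    r1 = F @ steepest_descent(F, d, x0a, niter) - d
    r2 = F @ steepest_descent(F, d, x0b, niter) - d
    # ||r1 - r2|| is typically well away from zero after 1 iteration
    # and at machine precision after 100.
    print(niter, np.linalg.norm(r1 - r2))
\end{verbatim}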