Iterative solvers for the LS problem search the solution space for a better solution at each iteration, either along the gradient direction (in steepest-descent algorithms) or in the plane spanned by the current gradient vector and the previous descent-step vector (in conjugate-gradient algorithms). Following Claerbout (1992), a conjugate-gradient algorithm for the LS solution can be summarized as follows:
iterate {
    $\Delta m_i = L^T r$
    $\Delta r = L\,\Delta m_i$
    $(m, r) \leftarrow \mathrm{cgstep}(m, r, \Delta m_i, \Delta r)$
}

where the subroutine cgstep() remembers the descent vector of the previous iteration, $\Delta m_{i-1}$ (where $i$ is the iteration step), and determines the step sizes by minimizing the quadratic function of the residual composed from $\Delta m_i$ (the conjugate gradient) and $\Delta m_{i-1}$ (the previous iteration descent vector), as follows (Claerbout, 1992):

$\min_{\alpha,\beta} \left\| r + \alpha\,L\,\Delta m_i + \beta\,L\,\Delta m_{i-1} \right\|^2 .$

Notice that the gradient vector ($\Delta m_i = L^T r$) in the CG method for the LS solution is the gradient of the squared residual: it is determined by taking the derivative of the squared residual (i.e., the L2-norm of the residual, $\|r\|^2 = \|Lm - d\|^2$) with respect to the model $m$:
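The step-size computation inside cgstep() can be sketched as a small routine. This is a minimal illustration, not Claerbout's actual code: it assumes we are given the current residual $r$, the residual update $\Delta r = L\,\Delta m_i$ for the new descent direction, and the residual update from the previous step, and it solves the 2x2 normal equations that minimize $\|r + \alpha\,\Delta r + \beta\,\Delta r_{i-1}\|^2$ over $\alpha$ and $\beta$. All names (`step_sizes`, `dr`, `dr_prev`) are illustrative.

```python
def dot(u, v):
    """Inner product of two vectors given as plain Python lists."""
    return sum(a * b for a, b in zip(u, v))

def step_sizes(r, dr, dr_prev):
    """Return (alpha, beta) minimizing ||r + alpha*dr + beta*dr_prev||^2.

    Setting the partial derivatives with respect to alpha and beta to zero
    gives a 2x2 linear system, solved here by Cramer's rule.
    """
    a11, a12, a22 = dot(dr, dr), dot(dr, dr_prev), dot(dr_prev, dr_prev)
    b1, b2 = -dot(dr, r), -dot(dr_prev, r)
    det = a11 * a22 - a12 * a12
    alpha = (b1 * a22 - b2 * a12) / det
    beta = (a11 * b2 - a12 * b1) / det
    return alpha, beta
```

On the very first iteration there is no previous descent vector, so cgstep degenerates to a plain steepest-descent step with a single line-search scalar $\alpha$.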
$\frac{\partial\,\|r\|^2}{\partial m} = \frac{\partial}{\partial m}\,(Lm - d)^T (Lm - d) = 2\,L^T (Lm - d) = 2\,L^T r .$ (4)
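The whole iteration can be sketched as a small self-contained solver. The sketch below uses the standard CGLS form (conjugate gradients on the normal equations), which folds the two-term minimization into a single step size $\alpha$ plus a conjugacy coefficient $\beta$; it is mathematically equivalent in exact arithmetic but is not Claerbout's literal cgstep code. The operator is given as a dense matrix for illustration, and all names (`cgls`, `L`, `d`) are assumptions of this sketch.

```python
def matvec(A, x):
    """Dense matrix-vector product on nested lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cgls(L, d, niter=10):
    """Minimize ||L m - d||^2 by conjugate gradients on the normal equations."""
    Lt = transpose(L)
    m = [0.0] * len(L[0])
    r = [-di for di in d]              # residual r = L m - d, with m = 0
    g = matvec(Lt, r)                  # gradient of ||r||^2 (factor 2 dropped)
    s = [-gi for gi in g]              # first descent direction: steepest descent
    gg = dot(g, g)
    for _ in range(niter):
        Ls = matvec(L, s)
        alpha = gg / dot(Ls, Ls)       # step size minimizing ||r + alpha L s||^2
        m = [mi + alpha * si for mi, si in zip(m, s)]
        r = [ri + alpha * qi for ri, qi in zip(r, Ls)]
        g = matvec(Lt, r)
        gg_new = dot(g, g)
        if gg_new < 1e-28:             # gradient vanished: converged
            break
        beta = gg_new / gg             # conjugacy coefficient
        s = [-gi + beta * si for gi, si in zip(g, s)]
        gg = gg_new
    return m

# Overdetermined 3x2 example; the LS solution of the normal equations is m = [1, 2].
L = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d = [1.0, 2.0, 3.0]
m = cgls(L, d, niter=5)
```

For an $n$-dimensional model, conjugate gradients reach the exact LS solution in at most $n$ iterations in exact arithmetic, which is why the two-variable example above converges in two steps.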