
CGG with iteratively reweighted residual

If we apply the same weight $\bold W$ used in the IRLS but leave the operator $\bold L$ unchanged (rather than replacing it with $\bold W \bold L$), the weight affects only the gradient direction. This corresponds to guiding the gradient direction with a weighted residual, and the resulting weighted gradient is the same gradient used in the IRLS method. This algorithm can be implemented as follows:


$\bold r \quad\longleftarrow\quad \bold L \bold m - \bold d$
iterate {
    $\bold W \quad\longleftarrow\quad {\bf diag}[f(\bold r)]$
    $\Delta\bold m \quad\longleftarrow\quad \bold L^T \bold W^T \bold r$
    $\Delta\bold r \quad\longleftarrow\quad \bold L \, \Delta\bold m$
    $(\bold m, \bold r) \quad\longleftarrow\quad {\rm cgstep}(\bold m, \bold r, \Delta\bold m, \Delta\bold r)$
}.
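The loop above can be sketched in NumPy as follows. This is only an illustration, not the SEP implementation: the L1-style weight $f(r_i) = 1/(|r_i| + \epsilon)$ is an assumed choice, and `cgstep` here is a simple two-term conjugate-direction update that picks the step lengths minimizing the L2 norm of the updated residual.

```python
import numpy as np

def cgstep(m, r, dm, dr, s_m, s_r, first):
    """One conjugate-direction step (sketch of a cgstep-style update).

    Chooses (alpha, beta) minimizing ||r + alpha*dr + beta*s_r||_2,
    where (s_m, s_r) is the previous step, then updates (m, r).
    """
    if first:
        alpha = -(dr @ r) / (dr @ dr)
        beta = 0.0
    else:
        # 2x2 normal equations for (alpha, beta); lstsq guards against
        # a (near-)singular system when dr is nearly parallel to s_r
        A = np.array([[dr @ dr, dr @ s_r],
                      [s_r @ dr, s_r @ s_r]])
        b = -np.array([dr @ r, s_r @ r])
        alpha, beta = np.linalg.lstsq(A, b, rcond=None)[0]
    s_m = alpha * dm + beta * s_m      # new step in model space
    s_r = alpha * dr + beta * s_r      # new step in residual space
    return m + s_m, r + s_r, s_m, s_r

def cgg_reweighted_residual(L, d, niter, eps=1e-8):
    """CGG with iteratively reweighted residual (illustrative sketch).

    L : (nd, nm) matrix, d : (nd,) data vector.
    The weight f(r) = 1/(|r| + eps) is an assumed L1-style choice.
    """
    nd, nm = L.shape
    m = np.zeros(nm)
    r = L @ m - d                      # r <- L m - d
    s_m, s_r = np.zeros(nm), np.zeros(nd)
    for it in range(niter):
        w = 1.0 / (np.abs(r) + eps)    # W <- diag[f(r)]
        dm = L.T @ (w * r)             # dm <- L^T W^T r
        dr = L @ dm                    # dr <- L dm
        m, r, s_m, s_r = cgstep(m, r, dm, dr, s_m, s_r, first=(it == 0))
    return m
```

Note that only the residual fed into the gradient is weighted; the step lengths are still chosen against the unweighted residual, which is what makes this a guided CG iteration rather than a CG solve of a reweighted system.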

Notice that the above algorithm differs from the original CG algorithm only in the gradient-computation step: the gradient is modified by reweighting the residual before the gradient is computed from it. By choosing the weight as a function of the residual of the previous iteration, as in the IRLS, we can guide the gradient toward the gradient of the Lp-norm. The result obtained by weighting the residual can therefore be interpreted as an LS solution located along the direction of the Lp-norm gradient determined by the applied weight. If, during the iterations, an intermediate solution reaches the minimum-L2-norm location in model space, it becomes the final solution of the algorithm and coincides with the solution of the conventional LS problem. However, the minimum-L2-norm location is unlikely to lie along the gradient of the different Lp-norm determined by the applied weight, so the solution is more likely to end up close to the minimum-Lp-norm location that the weight defines.
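The connection between the weight and the Lp-norm gradient can be made explicit; the following is the standard IRLS weight derivation, stated here for reference. Differentiating $\frac{1}{p}\Vert\bold r\Vert_p^p$ with $\bold r = \bold L \bold m - \bold d$ gives

$\nabla_{\bold m} \, \frac{1}{p}\Vert\bold r\Vert_p^p \;=\; \bold L^T \, {\bf diag}\left[\vert r_i\vert^{p-2}\right] \bold r ,$

so choosing $f(r_i) = \vert r_i\vert^{p-2}$ makes the weighted gradient $\bold L^T \bold W^T \bold r$ the gradient of the Lp-norm objective. For $p=1$ this gives $f(r_i) = 1/\vert r_i\vert$, stabilized in practice as $1/(\vert r_i\vert + \epsilon)$ with a small $\epsilon$ to avoid division by zero.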


Stanford Exploration Project
5/23/2004