
Conclusions

Convex optimization methods show promise for solving least-squares problems. As exemplified by the least-squares Dix equation, convex optimization can yield results similar to those obtained with conjugate gradient methods, while also allowing geologically motivated bounds to be imposed to further constrain the solution. The $\ell_1$ $\bf cvx$ solution also resolves the faults slightly better than the conjugate gradient method does. As stated above, this could simply be a function of the choice of $\epsilon$.
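To make the bounded, $\ell_1$-regularized formulation concrete, a minimal $\bf cvx$ sketch follows. The operator, data, regularization weight, and bounds shown here are illustrative stand-ins, not the values used in this paper: a causal integration matrix plays the role of the Dix operator, and the velocity bounds and $\lambda$ are assumed for demonstration only.

    % Minimal sketch: bounded l1-regularized least squares in cvx.
    % A, d, lambda, vmin, vmax are illustrative assumptions, not the
    % operator, data, or parameters from this paper.
    n      = 100;                  % number of model points (illustrative)
    A      = tril(ones(n));        % causal integration, stand-in for the Dix operator
    d      = A*linspace(1500,3000,n)' + 10*randn(n,1);  % synthetic data (assumed)
    D      = diff(eye(n));         % first-difference roughening operator
    lambda = 1;                    % regularization weight (assumed)
    vmin   = 1400;  vmax = 3200;   % geologic bounds on the model (assumed)

    cvx_begin
        variable m(n)
        minimize( norm(A*m - d, 2) + lambda*norm(D*m, 1) )
        subject to
            m >= vmin
            m <= vmax
    cvx_end

Replacing the $\ell_1$ norm on $D\,m$ with an $\ell_2$ norm gives the smoother regularized solution discussed in the text; the bound constraints are what conjugate gradients cannot impose directly.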

The convex optimization solver may not be as fast as conjugate gradient methods, but its solution comes with a guaranteed accuracy. The conjugate gradient solution is usually obtained by iterating until we are tired. The convex solver, on the other hand, works until a preset accuracy is achieved; in these problems, that precision was set to $10^{-9}$. The difference in efficiency between $\ell_2$ and $\ell_1$ regularization is quite striking. The $\ell_1$ problem takes eight times as many iterations as the $\ell_2$ problem, but only three times as many when both are bounded. This difference is comparable to that of conjugate gradients.
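For reference, the stopping tolerance can be tightened directly in the model. This is a sketch only; it assumes the current $\bf cvx$ behavior in which cvx_precision accepts a numeric tolerance inside a model, and reuses the hypothetical A, d, and n from the sketch above.

    cvx_begin
        cvx_precision( 1e-9 )     % solve to the preset accuracy (assumed numeric form)
        variable m(n)
        minimize( norm(A*m - d, 2) )
    cvx_end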

To apply convex optimization to larger problems and more complex operators, a convex optimization solver that does not rely on MATLAB is needed. While the $\bf cvx$ software is efficient and easy to use, it is limited by MATLAB's speed and memory constraints. If such a solver can be developed, convex optimization could prove successful in future endeavors.

