
Distractions

For simplicity, I omitted the noise variance/covariance above. Theoretically, they are easy to include, and in practice I would do so. Practically, however, they are unknown and often erratic, so it is a happy coincidence that the L1 norm reduces the sensitivity to the noise variance.

Using a preconditioner does not change any of the above conclusions. Write $\bold m=\bold B\bold x$, where $\bold A\bold B=\bold I$, so the usual problem $\min_{\bold m} \ ( \vert\bold F\bold m-\bold d\vert + \vert\bold A\bold m \vert^2 )$ becomes simply $\min_{\bold x} \ ( \vert\bold F\bold B\bold x-\bold d\vert + \vert\bold x\vert^2 )$.
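To make the intermediate step explicit, the regularization term collapses because $\bold A\bold B=\bold I$:
\begin{displaymath}
\bold A\bold m \;=\; \bold A\bold B\bold x \;=\; \bold x
\qquad\Longrightarrow\qquad
\min_{\bold x} \ \left( \vert\bold F\bold B\bold x-\bold d\vert + \vert\bold x\vert^2 \right) .
\end{displaymath}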
