Nonlinear least squares and regularization

James G. Berryman

berryman@sep.stanford.edu

ABSTRACT

I present and discuss some general ideas about iterative nonlinear output least-squares methods. The main result is this: if forward modeling on a physical problem can be carried out in a way that permits both the output (i.e., the predicted values of some measurable physical parameter) and the first derivative of that output with respect to the model parameters (whatever they may be) to be computed numerically, then the inverse problem can be solved, at least in principle, by the method described. The main trick learned from this analysis is the realization that the model-update steps may need to be quite small in some cases for the implied convergence guarantees to be realized.
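The iteration the abstract describes can be sketched as a damped Gauss-Newton loop: the forward model supplies predicted data, its derivative supplies a sensitivity matrix, and each model update is scaled by a small step factor. The following is a minimal illustration of that idea, not the author's implementation; the function names `forward`, `jacobian`, and the `step` parameter are assumptions introduced here for clarity.

```python
import numpy as np

def gauss_newton(forward, jacobian, m0, d_obs, step=0.5, iters=100):
    """Sketch of iterative nonlinear output least squares.

    forward(m)  -> predicted data for model parameters m
    jacobian(m) -> derivative of the predicted data w.r.t. m
    step        -> damping of each model update; kept small,
                   reflecting the point that updates may need to
                   be quite small for convergence.
    (Hypothetical interface, for illustration only.)
    """
    m = np.asarray(m0, dtype=float)
    for _ in range(iters):
        r = d_obs - forward(m)          # data residual
        J = jacobian(m)                 # sensitivity matrix
        # Solve the linearized least-squares problem J dm ~ r
        dm, *_ = np.linalg.lstsq(J, r, rcond=None)
        m = m + step * dm               # damped model update
    return m

# Toy usage: recover the rate a in d = exp(a * x) from noiseless data.
x = np.linspace(0.0, 1.0, 20)
a_true = 1.5
d_obs = np.exp(a_true * x)
fwd = lambda m: np.exp(m[0] * x)
jac = lambda m: (x * np.exp(m[0] * x)).reshape(-1, 1)
m_est = gauss_newton(fwd, jac, m0=[1.0], d_obs=d_obs)
```

For this well-behaved toy problem a moderate damping factor suffices; for harder nonlinear problems the abstract's caution applies, and `step` may need to be much smaller.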



 
Stanford Exploration Project
11/12/1997