To estimate two local slopes p_{1} and p_{2}, we treat the vector in
equation (10) as a familiar prediction error, and find the
p_{1} and p_{2} which minimize the squared norm of the prediction error.
First we define the following shorthand:
Expanding from equation (10) and
collecting terms yields a nonlinear function of p_{1} and p_{2}, which we
denote Q(p_{1},p_{2}):
    Q(p_{1},p_{2}) = || e(p_{1},p_{2}) ||^{2}                          (11)

where e(p_{1},p_{2}) denotes the prediction-error vector of equation (10).
To find the least-squares-optimal p_{1} and p_{2}, we compute the partial
derivatives of Q(p_{1},p_{2}), set them equal to zero, and solve the
resulting system of two equations:
    f(p_{1},p_{2}) = \partial Q / \partial p_{1} = 0                   (12)

    g(p_{1},p_{2}) = \partial Q / \partial p_{2} = 0                   (13)
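To make the stationarity conditions concrete: since Q is the squared norm of the prediction-error vector, which I write here as e(p_{1},p_{2}), the chain rule gives

    f(p_{1},p_{2}) = 2 e^{T} (\partial e / \partial p_{1}),
    g(p_{1},p_{2}) = 2 e^{T} (\partial e / \partial p_{2}).

Each condition thus requires the error to be orthogonal to the corresponding sensitivity of the error, the usual normal-equation structure of least-squares problems.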
I use Newton's method for two variables to compute the optimal slopes by
updating estimates of p_{1} and p_{2} with the following iteration:
    p_{1,k+1} = p_{1,k} - [g_{p2} f - f_{p2} g] / [f_{p1} g_{p2} - f_{p2} g_{p1}]   (14)

    p_{2,k+1} = p_{2,k} - [f_{p1} g - g_{p1} f] / [f_{p1} g_{p2} - f_{p2} g_{p1}]   (15)
The estimated slopes at iteration k are p_{1,k} and p_{2,k}.
f_{p1}(p_{1,k},p_{2,k}) is, for example, the partial derivative of
f(p_{1},p_{2}) with respect to p_{1}, evaluated at the current estimate.
While intimidating, equations
(14) and (15) result simply from the inversion
of a 2-by-2 matrix of second derivatives of Q(p_{1},p_{2}), the so-called
Hessian matrix. Since the partial derivatives of f and g are nonconstant,
the problem is nonquadratic, which implies that Newton's method may diverge for
certain initial guesses (p_{1,0},p_{2,0}) and, furthermore, may converge to
a local minimum. In practice, however, the method converges to machine
precision within 3-5 iterations.
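The iteration above can be sketched in a few lines of Python. The objective Q below is a hypothetical stand-in for the Q(p_{1},p_{2}) of the text (whose exact form depends on equation (10)), and the partial derivatives f and g are approximated here by central differences rather than derived analytically; the Newton update itself follows equations (14) and (15) exactly.

```python
import math

def Q(p1, p2):
    # Toy prediction-error energy: squared norm of two residuals.
    # The mild sin() nonlinearity makes the problem nonquadratic,
    # as in the text, while keeping a unique minimum.
    r1 = p1 - 1.0 + 0.1 * math.sin(p2)
    r2 = p2 - 2.0
    return r1 * r1 + r2 * r2

def f(p1, p2, h=1e-6):
    # f = dQ/dp1, approximated by a central difference
    return (Q(p1 + h, p2) - Q(p1 - h, p2)) / (2.0 * h)

def g(p1, p2, h=1e-6):
    # g = dQ/dp2, approximated by a central difference
    return (Q(p1, p2 + h) - Q(p1, p2 - h)) / (2.0 * h)

def newton_step(p1, p2, h=1e-4):
    # Partial derivatives of f and g: the entries of the
    # 2-by-2 Hessian of Q, again by central differences.
    f1 = (f(p1 + h, p2) - f(p1 - h, p2)) / (2.0 * h)
    f2 = (f(p1, p2 + h) - f(p1, p2 - h)) / (2.0 * h)
    g1 = (g(p1 + h, p2) - g(p1 - h, p2)) / (2.0 * h)
    g2 = (g(p1, p2 + h) - g(p1, p2 - h)) / (2.0 * h)
    det = f1 * g2 - f2 * g1            # Hessian determinant
    fv, gv = f(p1, p2), g(p1, p2)
    dp1 = (g2 * fv - f2 * gv) / det    # increment of equation (14)
    dp2 = (f1 * gv - g1 * fv) / det    # increment of equation (15)
    return p1 - dp1, p2 - dp2

p1, p2 = 0.0, 0.0                      # initial guess (p_{1,0}, p_{2,0})
for _ in range(6):
    p1, p2 = newton_step(p1, p2)
```

For this well-behaved toy objective the iterates settle near the minimum within a handful of steps, matching the convergence behavior the text describes; a poorly chosen initial guess or a stronger nonlinearity could instead send the same iteration to a saddle point or off to divergence.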
Stanford Exploration Project
11/11/2002