To estimate the two local slopes p1 and p2, we treat the vector in equation (10) as a familiar prediction error, and find the p1 and p2 which minimize the squared norm of that prediction error.
First we define the following shorthand:
Expanding the prediction error from equation (10) and collecting terms yields a nonlinear function of p1 and p2, which we denote Q(p1,p2):
\begin{displaymath}
  Q(p_1,p_2) \;=\; \bigl\|\, \mbox{prediction error of equation (10)} \,\bigr\|^{2}
  \qquad (11)
\end{displaymath}
To find the least-squares-optimal p1 and p2, we compute the partial
derivatives of Q(p1,p2), set them equal to zero, and solve a system of
two equations.
\begin{displaymath}
  f(p_1,p_2) \;\equiv\; \frac{\partial Q(p_1,p_2)}{\partial p_1} \;=\; 0
  \qquad (12)
\end{displaymath}
\begin{displaymath}
  g(p_1,p_2) \;\equiv\; \frac{\partial Q(p_1,p_2)}{\partial p_2} \;=\; 0
  \qquad (13)
\end{displaymath}
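For orientation, equations (12) and (13) form a nonlinear 2-by-2 system in p1 and p2. As a standard restatement (with Delta p1 and Delta p2 denoting the updates, symbols introduced here for illustration), one step of the Newton iteration described next linearizes this system about the current estimate and solves
\begin{displaymath}
  \left( \begin{array}{cc}
    f_{p_1} & f_{p_2} \\
    g_{p_1} & g_{p_2}
  \end{array} \right)
  \left( \begin{array}{c} \Delta p_1 \\ \Delta p_2 \end{array} \right)
  \;=\;
  -\left( \begin{array}{c} f \\ g \end{array} \right) ,
\end{displaymath}
with all quantities evaluated at the current estimate. The coefficient matrix is the Hessian of Q(p1,p2), and writing out its inverse explicitly gives equations (14) and (15) below.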
I use Newton's method for two variables to compute the optimal slopes by
updating estimates of p1 and p2 with the following iteration:
\begin{displaymath}
  p_{1,k+1} \;=\; p_{1,k} \;-\;
  \frac{ f(p_{1,k},p_{2,k})\, g_{p_2}(p_{1,k},p_{2,k}) \;-\; g(p_{1,k},p_{2,k})\, f_{p_2}(p_{1,k},p_{2,k}) }
       { f_{p_1}(p_{1,k},p_{2,k})\, g_{p_2}(p_{1,k},p_{2,k}) \;-\; f_{p_2}(p_{1,k},p_{2,k})\, g_{p_1}(p_{1,k},p_{2,k}) }
  \qquad (14)
\end{displaymath}
\begin{displaymath}
  p_{2,k+1} \;=\; p_{2,k} \;-\;
  \frac{ g(p_{1,k},p_{2,k})\, f_{p_1}(p_{1,k},p_{2,k}) \;-\; f(p_{1,k},p_{2,k})\, g_{p_1}(p_{1,k},p_{2,k}) }
       { f_{p_1}(p_{1,k},p_{2,k})\, g_{p_2}(p_{1,k},p_{2,k}) \;-\; f_{p_2}(p_{1,k},p_{2,k})\, g_{p_1}(p_{1,k},p_{2,k}) }
  \qquad (15)
\end{displaymath}
The estimated slopes at iteration k are p1,k and p2,k.
fp1(p1,k,p2,k) is, for example, the partial derivative of
f(p1,p2) with respect to p1. While intimidating, equations
(14) and (15) result simply from the inversion
of a 2-by-2 matrix of second derivatives (of Q(p1,p2)), the so-called
Hessian matrix. Since the partial derivatives of f and g are non-constant, the problem is non-quadratic, which implies that Newton's method may diverge for certain initial guesses (p1,0, p2,0) and, furthermore, may converge to a local minimum. In practice, however, the method converges to machine precision within 3-5 iterations.
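To make the iteration concrete, the following is a minimal Python sketch of the two-variable Newton update, assuming a placeholder objective Q in place of the expanded prediction-error norm of equation (11), and centered finite differences in place of analytic derivatives; the names newton_slopes and Q_test, the step h, and the tolerance are illustrative only, not the implementation behind the results reported here.

def newton_slopes(Q, p1, p2, niter=10, h=1e-6, tol=1e-8):
    """Two-variable Newton iteration for minimizing an objective Q(p1, p2).

    Illustrative sketch: Q stands in for the expanded prediction-error
    norm of equation (11), and derivatives are approximated by centered
    finite differences rather than analytic expressions.
    """
    def grad(a, b):
        # f = dQ/dp1 and g = dQ/dp2, i.e. equations (12) and (13)
        f = (Q(a + h, b) - Q(a - h, b)) / (2 * h)
        g = (Q(a, b + h) - Q(a, b - h)) / (2 * h)
        return f, g

    for _ in range(niter):
        f, g = grad(p1, p2)
        if abs(f) < tol and abs(g) < tol:        # stationary point reached
            break
        # Entries of the Hessian of Q: partial derivatives of f and g
        fp1 = (grad(p1 + h, p2)[0] - grad(p1 - h, p2)[0]) / (2 * h)
        fp2 = (grad(p1, p2 + h)[0] - grad(p1, p2 - h)[0]) / (2 * h)
        gp1 = (grad(p1 + h, p2)[1] - grad(p1 - h, p2)[1]) / (2 * h)
        gp2 = (grad(p1, p2 + h)[1] - grad(p1, p2 - h)[1]) / (2 * h)
        det = fp1 * gp2 - fp2 * gp1              # Hessian determinant
        # Newton updates, equations (14) and (15)
        p1 -= (f * gp2 - g * fp2) / det
        p2 -= (g * fp1 - f * gp1) / det
    return p1, p2

# Hypothetical smooth objective with its minimum near (0.3, -0.2),
# standing in for the true prediction-error norm
Q_test = lambda a, b: (a - 0.3)**2 + (b + 0.2)**2 + 0.1 * (a - 0.3)**2 * (b + 0.2)**2
print(newton_slopes(Q_test, 0.0, 0.0))

Each pass of the loop evaluates f, g, and the four Hessian entries at the current estimate and applies the updates of equations (14) and (15); the division by det mirrors the inversion of the 2-by-2 Hessian.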