
Solution by weighting functions

Examining Figure 1, we realize that our goals are really centered in the quiet regions, so we need to boost the importance of those regions in the analysis. What we need is a weighting function. Denote the $i$-th component of a vector by the subscript $i$, say $v_i$. When we minimize the sum of squares of $v_i - \alpha h_i$, the weighting function for the $i$-th component is
\begin{displaymath}
w_i = \frac{1}{v_i^2 + \sigma^2}
\end{displaymath} (7)
and the minimization itself is  
\begin{displaymath}
\min_\alpha \left[ \sum_i w_i (v_i - \alpha h_i)^2 \right]
\end{displaymath} (8)
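Since the weights $w_i$ do not depend on $\alpha$, setting the derivative of the bracketed sum with respect to $\alpha$ to zero gives the explicit minimizer $\alpha = \sum_i w_i h_i v_i \,/\, \sum_i w_i h_i^2$.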
To find $\alpha'$, the weighting function would be $w_i = 1/(h_i^2+\sigma^2)$.
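
As a rough sketch of how the closed form above can be evaluated (not part of the original paper: the use of numpy, the function name weighted_fit, the value of $\sigma$, and the sample arrays are assumptions for illustration), the weights of equation (7) plug directly into the weighted least-squares estimate:
\begin{verbatim}
import numpy as np

def weighted_fit(v, h, w):
    """Minimize sum_i w_i*(v_i - alpha*h_i)**2 over the scalar alpha.
    Closed form: alpha = sum(w*h*v) / sum(w*h*h)."""
    v, h, w = (np.asarray(x, dtype=float) for x in (v, h, w))
    return np.sum(w * h * v) / np.sum(w * h * h)

# Hypothetical data: two quiet samples and two strong-signal samples.
sigma = 0.1                   # the constant sigma discussed in the text
v = np.array([0.02, 1.30, -0.90, 0.01])
h = np.array([0.01, 0.60, -0.40, 0.02])

w = 1.0 / (v**2 + sigma**2)   # weights of equation (7)
alpha = weighted_fit(v, h, w)
\end{verbatim}
With uniform weights this reduces to ordinary least squares; the weights of equation (7) simply deemphasize samples where $v_i$ is large, so the quiet regions control the fit.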

The detailed form of these weighting functions is not important here. The form I chose is somewhat arbitrary and may be far from optimal. The choice of the constant $\sigma$ is discussed on page [*]. What is more important is the idea that instead of minimizing the sum of the errors themselves, we are minimizing something like the sum of relative errors. Weighting makes any region of the data plane as important as any other region, regardless of whether a letter (big signal) is present or absent (small signal). It is like saying a zero-valued signal is just as important as a signal with any other value. A zero-valued signal carries information.

When signal strength varies over a large range, a nonuniform weighting function should give better regressions. The task of weighting-function design may require some experimentation and judgment.



 