
WEIGHTED PEF ESTIMATION

There are both practical and physical reasons for wanting weighting functions in fitting goals. When data points are missing for accidental reasons, they spoil the fitting equations they touch, and it is best simply to weight those equations by zero.

Physically, echoes get weaker with time, though the information content is unrelated to the signal strength. Echoes also vary in strength as different materials are encountered by the outgoing wave. Programs for echo analysis typically divide the data by a scaling factor that is a smoothed average of the signal strength. This practice is nearly universal, although it is fraught with hazards. A common form of automatic gain control (AGC) computes the divisor by taking the absolute value of the signal and smoothing it by convolution with a triangle or a damped exponential. Pitfalls are strange amplitude behavior surrounding the water bottom, and the overall loss of the information contained in amplitudes. Personally, I have found that the gain function $t^2$ nearly always eliminates the need for AGC on raw field data, but without doubt, AGC is occasionally needed. (A theoretical explanation for $t^2$ is given in IEI.)
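The following is a small illustrative sketch in Python/NumPy (not from the original text) of the two gain strategies just described: dividing by a smoothed absolute-value envelope (AGC) versus simply multiplying by $t^2$. The function names, the triangle length, and the sample interval dt are assumptions made only for the example.

    import numpy as np

    def agc(signal, triangle_len=51, eps=1e-10):
        # Automatic gain control: divide by a smoothed absolute-value envelope.
        envelope = np.abs(signal)
        triangle = np.bartlett(triangle_len)      # triangle smoothing window
        triangle /= triangle.sum()
        smoothed = np.convolve(envelope, triangle, mode='same')
        return signal / (smoothed + eps)          # eps guards against division by zero

    def t_squared_gain(signal, dt=0.004):
        # Multiply raw field data by t^2 instead of applying AGC.
        t = np.arange(len(signal)) * dt
        return signal * t**2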

Equation (27) shows a tiny example of a weighted least-squares filtering goal. In real-life applications, the output typically has 1000-2000 points and the filter 5-50 points.
\begin{displaymath}
\left[
\begin{array}{c}
0 \\ 0 \\ 0 \\ 0 \\ 0
\end{array}
\right]
\;\approx\;
\left[
\begin{array}{ccccc}
w_1 &     &     &     &     \\
    & w_2 &     &     &     \\
    &     & w_3 &     &     \\
    &     &     & w_4 &     \\
    &     &     &     & w_5
\end{array}
\right]
\left(
\left[
\begin{array}{ccc}
y_3 & y_2 & y_1 \\
y_4 & y_3 & y_2 \\
y_5 & y_4 & y_3 \\
y_6 & y_5 & y_4 \\
y_7 & y_6 & y_5
\end{array}
\right]
\left[
\begin{array}{c}
1 \\ a_1 \\ a_2
\end{array}
\right]
\right)
\end{displaymath} (27)
The weights $w_t$ are any positive numbers we choose. In ``accidental'' applications, we set the weights to zero where data points are missing or bad. In physical applications, we choose the $w_t$ so that the residual components are about equal in magnitude. (A generalization of weighting is to let the weighting matrix be a filtering matrix, or a product of weighting and filtering.)
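As an illustrative sketch (Python/NumPy, not from the text), the two kinds of weights might be assembled as follows; the smoothing length and the inverse-envelope scaling are assumptions chosen only to show the idea.

    import numpy as np

    def choose_weights(data, missing, smooth_len=25, eps=1e-10):
        # missing: boolean array, True where a data point is absent or bad.
        envelope = np.convolve(np.abs(data),
                               np.ones(smooth_len) / smooth_len, mode='same')
        w = 1.0 / (envelope + eps)   # roughly equalizes residual magnitudes
        w[missing] = 0.0             # zero-weight the spoiled fitting equations
        return w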

Equation (27) is of the form $\bold 0 \approx \bold W ( \bold Y\bold a - \bold d)$. To convert this to a new problem without weights, we define a new data vector $\bold W\bold d$ and a new operator $\bold W\bold Y$ simply by carrying $\bold W$ through the parentheses to $\bold 0 \approx (\bold W \bold Y)\bold a - \bold W \bold d$.
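A minimal sketch of that substitution (Python/NumPy, not the book's solver): because $\bold W$ is diagonal, it never needs to be formed explicitly; scaling the rows of $\bold Y$ and the components of $\bold d$ by the weights yields the equivalent unweighted problem, which any least-squares solver can handle. The function name is hypothetical.

    import numpy as np

    def fold_in_weights(Y, d, w):
        # Carry W = diag(w) through the parentheses: solve (W Y) a ~ W d.
        WY = Y * w[:, None]   # scale each row of Y by its weight
        Wd = d * w            # scale each data value by its weight
        a, *_ = np.linalg.lstsq(WY, Wd, rcond=None)
        return a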

An earlier version of this book used a weighting function equal to zero for fitting equations that lacked inputs. In this version of the book, we accomplish the same goal with logical masks, so that values to be multiplied by zero are never even computed.
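For contrast, a hypothetical mask-based sketch (again Python/NumPy, not the book's code): instead of multiplying spoiled equations by zero, a logical mask selects the live fitting equations up front, so the zeroed values are never computed at all.

    import numpy as np

    def live_equations(Y, d, missing):
        # Keep only the fitting equations whose inputs all exist;
        # rows that would have been multiplied by zero are never formed.
        live = ~missing
        return Y[live], d[live]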

