
Wilson-Burg theory

Newton's iteration for square roots  
\begin{displaymath}
a_{t+1} \eq {1\over 2} \ \left( a_t \ +\ {s\over a_t} \right)
\end{displaymath} (16)
converges quadratically starting from any real initial guess a0 except zero. When a0 is negative, Newton's iteration converges to the negative square root.

Quadratic convergence means that the square of the error $a_t-\sqrt{s}$ at one iteration is proportional to the error at the next iteration
\begin{eqnarray}
a_{t+1} - \sqrt{s} \quad\sim\quad ( a_t-\sqrt{s})^2 \eq a_t^2 - 2 a_t \sqrt{s} + s \quad > \quad 0
\end{eqnarray} (17)
so, for example, if the answer is accurate to one significant digit at one iteration, at the next iteration it is accurate to two digits, then four, etc. We cannot use equation (17) in place of the Newton iteration itself, because it uses the answer $\sqrt{s}$ to get the next iterate $a_{t+1}$, and we would also need the factor of proportionality. Notice, however, that if we take the factor to be $1/(2a_t)$, then $\sqrt{s}$ cancels and equation (17) becomes the Newton iteration (16) itself.

Another interesting feature of the Newton iteration is that all iterates (except possibly the initial guess) lie above the ultimate square root. This is evident from equation (17), whose right side is positive.
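A minimal numerical sketch of iteration (16), assuming nothing beyond the formula itself (the function name is mine); both the doubling of correct digits and the overestimate property show up immediately:

```python
def newton_sqrt(s, a0, niter=6):
    """Newton's iteration a_{t+1} = (a_t + s/a_t)/2 for sqrt(s)."""
    a = a0
    for _ in range(niter):
        a = 0.5 * (a + s / a)
        # every iterate after the initial guess overestimates, since
        # a_{t+1}^2 - s = (a_t - s/a_t)^2 / 4 >= 0
    return a

print(newton_sqrt(2.0, 1.0))   # converges to sqrt(2) = 1.41421356...
print(newton_sqrt(2.0, -1.0))  # negative start: converges to -sqrt(2)
```

Six iterations from $a_0=1$ already reach machine precision, the errors running roughly $10^{-1}, 10^{-3}, 10^{-6}, 10^{-12}$, as the quadratic-convergence argument predicts.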

We can insert spectral functions in the Newton square-root iteration, for example $s(\omega)$ and $a(\omega)$. Where the first guess $a_0$ happens to match $\sqrt{s}$, it will match $\sqrt{s}$ at all iterations. The Newton iteration is
\begin{displaymath}
2\ {a_{t+1}\over a_t} \eq 1 \ +\ {s\over a_t^2}
\end{displaymath} (18)
Something inspires Wilson to express the spectrum $S=\bar A A$ as a Z-transform and then write the iteration
\begin{displaymath}
{\bar A_{t+1}(1/Z) \over \bar A_t(1/Z)}
 \ +\
 {A_{t+1}(Z) \over A_t(Z)}
\eq
1 \ +\ {S(Z) \over \bar A_t(1/Z)\ A_t(Z)}
\end{displaymath} (19)

Now we are ready for the algorithm: Compute the right side of (19) by polynomial division forwards and backwards and then add 1. Then abandon negative lags and take half of the zero lag. Now you have At+1(Z) / At(Z). Multiply out (convolve) the denominator At(Z), and you have the desired result At+1(Z). Iterate as long as you wish.
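A sketch of these steps in Python, under assumptions of my own: the divisions and the final multiplication are carried out with FFTs on a circle of `nfft` points rather than by the polynomial division the text describes, which amounts to the same algebra when `nfft` is large compared to the filters. The function and variable names (`wilson_burg`, `s`, `g`) are mine, not the book's:

```python
import numpy as np

def wilson_burg(s, niter=20, nfft=64):
    """FFT sketch of Wilson-Burg factorization.
    s : one-sided autocorrelation [s_0, s_1, ...] of the desired spectrum
    returns a causal minimum-phase a with \bar A(1/Z) A(Z) ~= S(Z)."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    r = np.zeros(nfft)                 # two-sided autocorrelation on a circle
    r[:n] = s
    r[nfft - n + 1:] = s[:0:-1]        # negative lags
    Sw = np.fft.fft(r).real            # spectrum S(omega) >= 0
    a = np.zeros(n)
    a[0] = np.sqrt(s[0])               # initial guess: a white (constant) factor
    for _ in range(niter):
        Aw = np.fft.fft(a, nfft)
        # right side of (19), brought back to the lag domain
        g = np.fft.ifft(1.0 + Sw / np.abs(Aw) ** 2).real
        g[0] *= 0.5                    # take half of the zero lag ...
        g[nfft // 2 + 1:] = 0.0        # ... and abandon the negative lags
        # g now approximates A_{t+1}(Z)/A_t(Z); multiply back by A_t(Z)
        a = np.fft.ifft(np.fft.fft(g) * Aw).real[:n]
    return a

# spectrum of A(Z) = 2 + Z: s_0 = 4 + 1 = 5, s_1 = 2
print(wilson_burg([5.0, 2.0]))         # -> approximately [2. 1.]
```

Truncating the update to `n` coefficients keeps the filter length fixed from one iteration to the next, the usual practice when the spectrum does not factor exactly at that length.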

(Parenthetically, for those people familiar with the idea of minimum phase (if not, see FGDP or PVI), we show that $A_{t+1}(Z)$ is minimum phase: Both terms on the right side of (19) are positive. Since the Newton iteration always overestimates, $a_t^2 > s$, so the 1 dominates the rightmost term. After masking off the negative powers of $Z$ (and half the zero power), the right side of (19) is the sum of two wavelets: the 1 becomes a 1/2, which is wholly real, and its real part dominates the real part of the masked rightmost term at every frequency. Thus the wavelet on the masked right side of (19) has a positive real part, so its phase cannot loop about the origin. This wavelet multiplies $A_t(Z)$ to give the final wavelet $A_{t+1}(Z)$, and the product of two minimum-phase wavelets is minimum phase.)
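The minimum-phase property is easy to check numerically: with this book's Z-transform convention, $A(Z)=a_0+a_1Z+\cdots$ is minimum phase when all its zeros in $Z$ lie outside the unit circle. A small check of my own devising (the function name is not from the book):

```python
import numpy as np

def is_minimum_phase(a, tol=1e-9):
    """True when every zero of A(Z) = a[0] + a[1] Z + ... lies outside |Z| = 1."""
    roots = np.roots(a[::-1])      # np.roots expects the highest power first
    return bool(np.all(np.abs(roots) > 1.0 + tol))

print(is_minimum_phase([2.0, 1.0]))  # A(Z) = 2 + Z,  zero at Z = -2   -> True
print(is_minimum_phase([1.0, 2.0]))  # A(Z) = 1 + 2Z, zero at Z = -1/2 -> False
```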

The input of the program is the spectrum $S(Z)$ and the output is the factor $A(Z)$, a function with the spectrum $S(Z)$. I mention here that in later chapters of this book, the factor $A(Z)$ is known as the inverse Prediction-Error Filter (PEF). In the Wilson-Burg code below, $S(Z)$ and $A(Z)$ are Z-transform polynomials, but their lead coefficients are split off, so, for example, $A(Z) = (a_0) + (a_1 Z + a_2 Z^2 + \cdots)$ is broken into the two parts a0 and aa.

[module wilson: Wilson-Burg spectral factorization]
\begin{exer}
You hear from three different people that a more isotropic re... ...n involves just three convolutions (could even be done without the helix).
\end{exer}

Stanford Exploration Project