Lattice filter

The lattice filter can be understood from the point of view of the Levinson-Durbin recursion. The filter coefficients are updated in the following way:

$$\mathbf{a}_n = \begin{bmatrix} \mathbf{a}_{n-1} \\ 0 \end{bmatrix} - k_n \begin{bmatrix} 0 \\ \tilde{\mathbf{a}}_{n-1} \end{bmatrix} \quad (1)$$

where $\mathbf{a}_{n-1}$ is the prediction-error filter of order $n-1$ and size $n$ from the previous iteration, $\tilde{\mathbf{a}}_{n-1}$ has the same coefficients in reverse order, $k_n$ are the reflection coefficients, and $\mathbf{a}_n$ is the updated prediction-error filter of order $n$ and size $n+1$. Let us consider the action of the PEF of order $n$ on the input $\mathbf{x}_t$ of size $n+1$ with corresponding lags $t, t-1, \ldots, t-n$. This produces a forward prediction error of order $n$:

$$f_n(t) = \sum_{i=0}^{n} a_n(i)\, x(t-i)$$
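In code, applying an order-$n$ PEF to a trace is just a convolution of the filter with the input. A minimal NumPy sketch (the function name is my own):

```python
import numpy as np

def forward_error(a, x):
    """Forward prediction error f_n(t) = sum_{i=0}^{n} a[i] * x[t-i]:
    the prediction-error filter a (with a[0] = 1) convolved with the
    input trace x, truncated to the input length."""
    return np.convolve(a, x)[:len(x)]
```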

Looking at the terms of Equation (1) separately:

$$\begin{bmatrix} \mathbf{a}_{n-1} \\ 0 \end{bmatrix}^T \mathbf{x}_t = f_{n-1}(t)$$

$$\begin{bmatrix} 0 \\ \tilde{\mathbf{a}}_{n-1} \end{bmatrix}^T \mathbf{x}_t = b_{n-1}(t-1)$$

Therefore, to estimate the forward prediction error $f_n(t)$ (where the subscript $n$ denotes the order of a filter of length $n+1$ and $t$ the time sample), we need the forward prediction error $f_{n-1}(t)$ at time $t$ corresponding to the lower-order filter of order $n-1$, and the backward prediction error $b_{n-1}(t-1)$ at the previous sample $t-1$:

$$f_n(t) = f_{n-1}(t) - k_n\, b_{n-1}(t-1), \qquad b_n(t) = b_{n-1}(t-1) - k_n\, f_{n-1}(t) \quad (2)$$

For this procedure and the resulting filter to be stable (and minimum-phase), the prediction-error power should decrease or at least stay the same. The update equations for the prediction-error power in the Levinson recursion (Yilmaz, 2001) show that this requires $|k_n| \le 1$ at every iteration $n$.
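A single lattice stage implementing the recursion of Equation (2) can be sketched as follows (a minimal NumPy sketch; function and variable names are my own, and the delayed backward error is padded with a zero at the start):

```python
import numpy as np

def lattice_stage(f_prev, b_prev, k):
    """One lattice stage: from the order-(n-1) forward error f_prev(t)
    and backward error b_prev(t), produce the order-n errors using
        f_n(t) = f_{n-1}(t) - k * b_{n-1}(t-1)
        b_n(t) = b_{n-1}(t-1) - k * f_{n-1}(t)
    """
    b_delayed = np.concatenate(([0.0], b_prev[:-1]))  # b_{n-1}(t-1)
    f_new = f_prev - k * b_delayed
    b_new = b_delayed - k * f_prev
    return f_new, b_new
```

Cascading such stages for $n = 1, 2, \ldots$ builds up the full prediction-error filter without ever forming its coefficients explicitly.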

To find the coefficients $k_n$, we should find the minimum of the objective function:

$$J_n = \mathbf{f}_n^T \mathbf{f}_n + \mathbf{b}_n^T \mathbf{b}_n$$

where $\mathbf{f}_n$ and $\mathbf{b}_n$ are now the vectors containing the forward and backward prediction errors of order $n$ accumulated up to the $t$-th sample:

$$\mathbf{f}_n = \mathbf{f}_{n-1} - k_n\, \mathbf{b}_{n-1}, \qquad \mathbf{b}_n = \mathbf{b}_{n-1} - k_n\, \mathbf{f}_{n-1},$$

where the entries of $\mathbf{b}_{n-1}$ are the delayed backward errors $b_{n-1}(t-1)$, as in Equation (2).

Taking the derivative of the previous expression with respect to $k_n$:

$$\frac{\partial J_n}{\partial k_n} = -4\, \mathbf{f}_{n-1}^T \mathbf{b}_{n-1} + 2 k_n \left( \mathbf{f}_{n-1}^T \mathbf{f}_{n-1} + \mathbf{b}_{n-1}^T \mathbf{b}_{n-1} \right)$$

Therefore, the optimal $k_n$ minimizing the power of the forward and backward errors is obtained by equating the derivative to zero:

$$k_n = \frac{2\, \mathbf{f}_{n-1}^T \mathbf{b}_{n-1}}{\mathbf{f}_{n-1}^T \mathbf{f}_{n-1} + \mathbf{b}_{n-1}^T \mathbf{b}_{n-1}}$$
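This estimate (Burg's formula) can be evaluated directly; note that it automatically satisfies $|k_n| \le 1$, since $|2\,\mathbf{f}^T\mathbf{b}| \le \mathbf{f}^T\mathbf{f} + \mathbf{b}^T\mathbf{b}$. A minimal sketch, with names of my own choosing:

```python
import numpy as np

def burg_k(f_prev, b_prev):
    """Reflection coefficient minimizing the summed forward and
    backward error powers; b_prev holds the already-delayed
    backward errors b_{n-1}(t-1)."""
    num = 2.0 * np.dot(f_prev, b_prev)
    den = np.dot(f_prev, f_prev) + np.dot(b_prev, b_prev)
    return num / den
```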

What this says is that the reflection coefficients (controlling the PEF and the prediction error) are computed based on all the previous samples. The idea for making this process adaptive is to introduce a "forgetting" parameter $\beta$ so that the denominator estimate at the current sample $t$ becomes (taking a single-pole average):

$$E_{n-1}(t) = \beta\, E_{n-1}(t-1) + (1-\beta) \left[ f_{n-1}^2(t) + b_{n-1}^2(t-1) \right] \quad (3)$$

Treating the numerator in the same manner and allowing the reflection coefficients to depend on the sample number $t$, it is possible to obtain the following recursive formula (Haykin, 2002):

$$k_n(t) = k_n(t-1) + \frac{\tilde{\mu}}{E_{n-1}(t)} \left[ f_n(t)\, b_{n-1}(t-1) + b_n(t)\, f_{n-1}(t) \right] \quad (4)$$

which is similar to the update equation of the normalized LMS algorithm (Paleologu et al., 2008). This allows the algorithm to respond in accordance with the powers of the forward and backward errors: if the errors are small ($E_{n-1}(t)$ is small), then the step size $\tilde{\mu}/E_{n-1}(t)$ is large and the filter adapts rapidly; if the errors are large, the step size is small and the filter does not respond to variations as quickly.
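One time sample of the adaptive update for a single lattice stage might be sketched as follows (the step-size scaling `mu` and the small guard constant `eps` are assumptions of this sketch, not values from the text):

```python
def gal_step(k, f_prev, b_prev_delayed, E, beta, mu=0.1, eps=1e-12):
    """One sample of the adaptive lattice update for a single stage:
    E is the single-pole power estimate of Equation (3), and k is the
    time-varying reflection coefficient updated as in Equation (4)."""
    # Eq. (3): forgetting-factor average of the error powers
    E = beta * E + (1.0 - beta) * (f_prev ** 2 + b_prev_delayed ** 2)
    # Eq. (2): order-n forward and backward errors
    f = f_prev - k * b_prev_delayed
    b = b_prev_delayed - k * f_prev
    # Eq. (4): normalized update of the reflection coefficient
    k = k + (mu / (E + eps)) * (f * b_prev_delayed + b * f_prev)
    return k, f, b, E
```

Looping this function over the samples of a trace, stage by stage, gives the adaptive lattice filter; smaller $\beta$ makes the power estimate, and hence the coefficients, track the data more aggressively.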

To imitate nonstationarity, I created a simple single trace with four damped sinusoids whose four distinct frequencies increase with time (Figure 1(a)). We can see that after the filter is applied, the signal has indeed been compressed; the amplitudes, however, are not really reliable. The effect of the forgetting parameter $\beta$ is also obvious: the smaller its value, the more adaptive the filter is, which results in overfitting (Figure 1(d)).
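A synthetic of this kind can be generated along the following lines (all numbers here, such as the frequencies and the damping constant, are placeholders, not the values used for Figure 1):

```python
import numpy as np

def nonstationary_trace(n=2000, dt=0.002, freqs=(10.0, 20.0, 30.0, 40.0),
                        damp=10.0):
    """Four damped sinusoids, one per quarter of the trace, with
    frequency increasing with time to imitate nonstationarity."""
    seg = n // 4
    tau = np.arange(seg) * dt          # time axis restarted per segment
    trace = np.zeros(n)
    for i, f in enumerate(freqs):
        trace[i * seg:(i + 1) * seg] = (np.exp(-damp * tau)
                                        * np.sin(2.0 * np.pi * f * tau))
    return trace
```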

Figure 1.
Lattice filter applied to a single nonstationary trace (with a fixed filter size; $\beta$ is the forgetting parameter): (a) original trace, (b) $\beta = 0.9$, (c) $\beta = 0.5$, (d) $\beta = 0.1$. ER

I then applied this filter to the field stacked data in Figure 2. Again, after filtering, we can see increased resolution. However, we have to be careful with the forgetting parameter so as not to create "fake" reflections.

Figure 2.
Lattice filter applied to a raw stack (with a fixed filter size; $\beta$ is the forgetting parameter): (a) original stack, (b) $\beta = 0.9$, (c) $\beta = 0.7$. ER




2018-06-10