The first step in making the RLS filter adaptive is to introduce a forgetting parameter \(\lambda\) into the error-norm estimate at a given sample \(n\):
\(E(n) = \sum_{i=1}^{n} \lambda^{n-i}\, |e(i)|^{2}\)    (5)
Because the problem is ill-posed, a regularization term is added to the cost function:
\(E(n) = \sum_{i=1}^{n} \lambda^{n-i}\, |e(i)|^{2} + \delta\, \lambda^{n}\, \|\mathbf{w}(n)\|^{2}\)
It is straightforward to see that the regularization term has essentially the effect of adding white noise to the data: it makes the correlation matrix diagonally dominant, ensuring a stable inverse. It is also easy to verify the following recursion relations (Haykin, 2002):
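The effect of the regularization term can be checked directly: adding \(\delta\) to the diagonal shifts every eigenvalue of the correlation matrix by \(\delta\), so a rank-deficient matrix becomes invertible. A minimal NumPy sketch with an illustrative toy matrix of our own (not data from this paper):

```python
import numpy as np

# Toy illustration of diagonal loading: with fewer samples than filter
# taps the sample correlation matrix is singular, but adding delta*I
# (the effect of the regularization term) makes it full rank.
rng = np.random.default_rng(0)
taps, nsamp = 5, 3                    # more taps than samples
U = rng.standard_normal((nsamp, taps))
Phi = U.T @ U                         # sample correlation, rank <= nsamp

delta = 0.1
Phi_reg = Phi + delta * np.eye(taps)  # diagonally loaded matrix

print(np.linalg.matrix_rank(Phi))     # rank deficient
print(np.linalg.matrix_rank(Phi_reg)) # full rank, safely invertible
```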
\(\mathbf{\Phi}(n) = \lambda\, \mathbf{\Phi}(n-1) + \mathbf{u}(n)\, \mathbf{u}^{T}(n), \qquad \mathbf{z}(n) = \lambda\, \mathbf{z}(n-1) + \mathbf{u}(n)\, d(n)\)

where \(\mathbf{\Phi}(n)\) is the regularized correlation matrix of the input \(\mathbf{u}(n)\), and \(\mathbf{z}(n)\) is its cross-correlation with the desired signal \(d(n)\).
Computing the optimal filter weights requires the inverse correlation matrix \(\mathbf{P}(n) = \mathbf{\Phi}^{-1}(n)\). An efficient way of finding it is to combine the recursion above with the Woodbury matrix inversion lemma. The resulting formulas are:

\(\mathbf{k}(n) = \dfrac{\lambda^{-1}\, \mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \lambda^{-1}\, \mathbf{u}^{T}(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)}\)
\(\xi(n) = d(n) - \mathbf{w}^{T}(n-1)\, \mathbf{u}(n)\)
\(\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\, \xi(n)\)
\(\mathbf{P}(n) = \lambda^{-1}\, \mathbf{P}(n-1) - \lambda^{-1}\, \mathbf{k}(n)\, \mathbf{u}^{T}(n)\, \mathbf{P}(n-1)\)    (6)
Here, \(\xi(n)\) is the a priori estimation error: the error at the current sample \(n\) estimated using the old filter weights from the previous sample \(n-1\); \(\mathbf{k}(n)\) is the gain vector, i.e., the input \(\mathbf{u}(n)\) transformed by the inverse correlation matrix \(\mathbf{P}(n)\). We see that the filter weights are updated by adding the a priori prediction error scaled by \(\mathbf{k}(n)\).
Equations (6) constitute the RLS algorithm. We start from the initial filter \(\mathbf{w}(0) = \mathbf{0}\) and the initial inverse correlation matrix \(\mathbf{P}(0) = \delta^{-1}\mathbf{I}\). Then, after choosing the forgetting parameter \(\lambda\), the algorithm proceeds recursively, producing the prediction errors.
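The recursion can be sketched in a few lines of NumPy. This is a generic implementation of the exponentially weighted RLS equations (after Haykin, 2002), not the authors' code; the argument names and the synthetic chirp standing in for a nonstationary trace are our own choices.

```python
import numpy as np

def rls_filter(u, d, taps=4, lam=0.99, delta=10.0):
    """Exponentially weighted RLS. u holds the tap-input vector per
    sample, d the desired signal; lam is the forgetting parameter and
    delta the initialization constant. Returns a priori errors xi(n)."""
    w = np.zeros(taps)                  # initial filter w(0) = 0
    P = np.eye(taps) / delta            # initial P(0) = delta^{-1} I
    errors = np.zeros(len(d))
    for n in range(len(d)):
        un = u[n]
        k = P @ un / (lam + un @ P @ un)      # gain vector k(n)
        xi = d[n] - w @ un                    # a priori error xi(n)
        w = w + k * xi                        # weight update
        P = (P - np.outer(k, un @ P)) / lam   # Riccati update of P(n)
        errors[n] = xi
    return errors

# Usage sketch: one-step prediction of a signal from its past samples
# (prediction-error filtering) on a synthetic chirp, which is
# nonstationary because its frequency drifts with time.
N, taps = 500, 4
t = np.arange(N)
trace = np.sin(2 * np.pi * t * (0.01 + 0.00005 * t))
u = np.stack([np.roll(trace, i + 1) for i in range(taps)], axis=1)
u[:taps] = 0.0                          # no valid past before the start
err = rls_filter(u, trace, taps=taps, lam=0.99, delta=10.0)
# After the initial transient the prediction-error energy should be
# well below the energy of the trace itself.
print(np.mean(err[100:] ** 2) < np.mean(trace[100:] ** 2))
```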
Figure 5. RLS filter applied to a single nonstationary trace, with forgetting parameter \(\lambda = 0.9\) and three choices of the initialization constant \(\delta\): (a) original trace; (b) \(\delta = 100\); (c) \(\delta = 10\); (d) \(\delta = 1\).
We applied the RLS filter to a single nonstationary trace (Figure 5). Different choices of the initialization constant \(\delta\) give different results: decreasing it leads to faster convergence, but in a high signal-to-noise environment this may cause instabilities.
Applying the filter to the raw stacked data (Figure 6) increased resolution throughout the section, since the filter coefficients changed along the time axis and adapted to changes in the signal.
Figure 6. 1D RLS filter applied to the raw stack, with initialization constant \(\delta = 10\) and two choices of the forgetting parameter \(\lambda\): (a) original stack; (b) \(\lambda = 1\); (c) \(\lambda = 0.9\).
Adding filter coefficients in the spatial direction suppresses spatially correlated events (mostly reflections). Ideally, the output of the PEF should be uncorrelated white noise; this can be approached by decreasing the forgetting parameter so that the filter adapts faster.
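As a sketch of the 2D case, the tap vector can be drawn from a small spatio-temporal template around each sample, so that spatially coherent events become predictable and are removed by the prediction-error output. The template, the scan order, and the synthetic plane-wave section below are our own illustrative choices, not the paper's:

```python
import numpy as np

# Synthetic section: random noise plus a dipping plane wave, i.e., a
# spatially correlated event that a 2D PEF should be able to predict.
rng = np.random.default_rng(1)
nt, nx = 200, 20
section = rng.standard_normal((nt, nx))
for ix in range(nx):
    section[:, ix] += 3.0 * np.sin(2 * np.pi * 0.02 * (np.arange(nt) - 2 * ix))

# Prediction template: (dt, dx) offsets of the taps relative to the
# predicted sample (one-sided in time, two-sided in space; our choice).
template = [(-1, 0), (-2, 0), (-1, -1), (0, -1), (-1, 1), (0, 1)]

lam, delta = 0.9, 10.0
taps = len(template)
w, P = np.zeros(taps), np.eye(taps) / delta
out = np.zeros_like(section)
for it in range(2, nt):
    for ix in range(1, nx - 1):
        un = np.array([section[it + dt, ix + dx] for dt, dx in template])
        k = P @ un / (lam + un @ P @ un)
        xi = section[it, ix] - w @ un     # a priori prediction error
        w = w + k * xi
        P = (P - np.outer(k, un @ P)) / lam
        out[it, ix] = xi

# The plane wave is predictable, so the prediction-error output should
# carry much less energy than the input section.
inner = (slice(50, nt), slice(1, nx - 1))
print(np.mean(out[inner] ** 2) < np.mean(section[inner] ** 2))
```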
Figure 7. 2D RLS filter applied to the raw stack, with initialization constant \(\delta = 10\) and two choices of the forgetting parameter \(\lambda\): (a) original stack; (b) \(\lambda = 0.99\); (c) \(\lambda = 0.9\).