If we sacrifice regularizing along the time axis, the problem becomes more manageable. We can redefine our data as
$$\mathbf{d} = \begin{bmatrix} \mathbf{d}_{t_1} & \mathbf{d}_{t_2} & \cdots & \mathbf{d}_{t_{n_t}} \end{bmatrix} \qquad (8)$$
We can now split the data along the time axis and regularize along any of the remaining axes. For this paper I chose to regularize only along the offset axes. A 4-D prediction error filter would be preferable, but it would require simultaneously infilling the data and estimating the filter, or some ad-hoc scheme that is beyond the scope of this paper.
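As an illustration, here is a minimal sketch of the decomposition, assuming a hypothetical axis ordering of (t, h_y, h_x), an invented mask of recorded traces, and a placeholder infill routine standing in for the real inversion. Because no regularization couples time samples, every slice can be solved on its own:

```python
import numpy as np

# Hypothetical axis order (t, hy, hx); shapes are illustrative only.
nt, nhy, nhx = 64, 8, 16
rng = np.random.default_rng(0)
data = rng.standard_normal((nt, nhy, nhx))
known = rng.random((nt, nhy, nhx)) > 0.3   # invented mask of recorded traces

def infill_offsets(d, k):
    """Stand-in for a regularized infill of one time slice over (hy, hx)."""
    out = d.copy()
    out[~k] = d[k].mean()   # placeholder: the real code would run an inversion
    return out

# No regularization couples time samples, so every slice solves independently.
model = np.stack([infill_offsets(data[it], known[it]) for it in range(nt)])
```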
Regularizing only over offset also allows additional cost savings: we can pull the regularization operator outside our filtering operation. For the cascaded derivative operation used in Biondi and Vlad (2001), this saves six operator applications per iteration step (approximately a 67% reduction in cost).
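The identity being exploited here is simple linearity: a derivative applied inside each term of a filtering cascade gives the same answer as a single derivative pulled outside it. The sketch below checks this with generic dense matrices standing in for the filter passes; the exact bookkeeping behind the six saved applications in Biondi and Vlad (2001) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# First-difference matrix as a stand-in for the derivative operator.
D = np.eye(n) - np.eye(n, k=-1)
filters = [rng.standard_normal((n, n)) for _ in range(3)]  # invented filter passes
m = rng.standard_normal(n)

# Derivative applied inside every filtering term: three applications of D.
inside = sum(D @ (F @ m) for F in filters)

# Derivative pulled outside the cascade: one application, same answer.
outside = D @ sum(F @ m for F in filters)

assert np.allclose(inside, outside)
```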
Our expanded model space (the $h_y$ axis) requires a new regularization scheme. Two obvious choices come to mind. The first is to cascade derivative regularization along the $h_y$ axis. Our new regularization operator becomes
$$\mathbf{R} = \mathbf{D}_{h_y}\,\mathbf{D}_{h_x} \qquad (9)$$
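A minimal sketch of this cascaded operator, assuming simple first-difference stencils for $\mathbf{D}_{h_x}$ and $\mathbf{D}_{h_y}$ (the actual stencils and boundary handling may differ):

```python
import numpy as np

def first_diff(a, axis):
    """First difference along one axis; the boundary sample is left at zero."""
    d = np.zeros_like(a)
    hi = [slice(None)] * a.ndim
    lo = [slice(None)] * a.ndim
    hi[axis] = slice(1, None)
    lo[axis] = slice(None, -1)
    d[tuple(hi)] = a[tuple(hi)] - a[tuple(lo)]
    return d

def cascaded_reg(m, hx_axis=0, hy_axis=1):
    """Apply R m = D_hy D_hx m: derivatives cascaded along both offset axes."""
    return first_diff(first_diff(m, hx_axis), hy_axis)
```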
Even on a small problem, the current formulation is still problematic on SEP's current architecture. Having to store six copies of the model exceeds our node's disk capacity, even after splitting the data along the time axis. The final simplification is, instead of solving a single global inversion problem, to solve for each frequency independently. This final simplification makes the problem manageable, but at a price. Because we perform only a small number of conjugate gradient iterations, our solution step size (and direction after the first iteration) will differ between the global problem and the individual local problems.
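This effect is easy to demonstrate on a toy version of the problem. The sketch below, with invented block sizes, builds a block-diagonal system whose blocks stand in for independent frequency slices, then compares a few global conjugate-gradient iterations against the same number of per-block iterations; the two agree only at full convergence.

```python
import numpy as np

def cg(A, b, niter):
    """Plain conjugate gradients for an SPD system A x = b."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(niter):
        Ap = A @ p
        alpha = rr / (p @ Ap)          # step size set by this system's residual
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(2)
nb, n = 3, 8                            # three invented "frequency" blocks
blocks = []
for _ in range(nb):
    G = rng.standard_normal((n, n))
    blocks.append(G @ G.T + n * np.eye(n))   # SPD block for one frequency
A = np.zeros((nb * n, nb * n))
for i, B in enumerate(blocks):
    A[i * n:(i + 1) * n, i * n:(i + 1) * n] = B
b = rng.standard_normal(nb * n)

x_global = cg(A, b, niter=3)
x_local = np.concatenate([cg(B, b[i * n:(i + 1) * n], 3) for i, B in enumerate(blocks)])

# After only a few iterations the answers differ, because each run chooses its
# own step sizes; at full convergence both reach the same solution.
print(np.linalg.norm(x_global - x_local))
```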