It would be nice if we could forget about the goal (10), but without it the goal (9) would simply set the signal equal to the data. The choice of $\epsilon$ determines how the data energy is partitioned between the signal and the noise. The last thing we will do is choose the value of $\epsilon$, and if we do not find a theory for it, we will experiment.
The operators $S$ and $N$ can be thought of as ``leveling'' operators. The method of least squares sees mainly big things, and spectral zeros in $\bar SS$ and $\bar NN$ tend to cancel spectral lines and plane waves in $s$ and $n$. (Here we assume that power levels remain fairly level in time. Were power levels to fluctuate in time, the operators $S$ and $N$ should be designed to level them out too.)
None of this is new or exciting in one dimension, but I find it exciting in more dimensions. In seismology, quasisinusoidal signals and noises are quite rare, whereas local plane waves are abundant. Just as a short one-dimensional filter can absorb a sinusoid of any frequency, a compact two-dimensional filter can absorb a wavefront of any dip.
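To make the two-dimensional claim concrete, here is a minimal numpy sketch (the unit-slope dip, the waveform, and all names are my own illustrative choices, not from the text): a two-point dip filter that subtracts from each sample its neighbor one trace over and one time sample earlier annihilates a local plane wave of the matching dip, just as a two-point prediction filter annihilates a sinusoid of matching frequency.

```python
import numpy as np

nt, nx = 60, 20
it = np.arange(nt)[:, None]
ix = np.arange(nx)[None, :]
f = np.cos(2 * np.pi * 0.07 * np.arange(nt + nx))  # arbitrary waveform
u = f[it - ix + nx]                                # plane wave: u[t, x] = f(t - x)

# two-point dip filter matched to slope 1: r[t, x] = u[t, x] - u[t-1, x-1]
r = u[1:, 1:] - u[:-1, :-1]
print(np.max(np.abs(r)))   # 0.0: the filter absorbs the wavefront exactly
```

A wavefront of a different dip would leave a nonzero residual, which is what lets such filters discriminate between signal and noise dips.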
To review basic concepts, suppose we are in the one-dimensional frequency domain. Then the solution to the fitting goals (10) and (9) amounts to minimizing a quadratic form by setting to zero its derivative, say
$$
0 \;=\; \frac{\partial Q}{\partial \bar{s}} \;=\; \bar{N}N\,(s-d) \;+\; \epsilon^2\,\bar{S}S\,s
\tag{11}
$$

$$
s \;=\; \frac{\bar{N}N}{\bar{N}N+\epsilon^2\,\bar{S}S}\; d
\tag{12}
$$

$$
n \;=\; d-s \;=\; \frac{\epsilon^2\,\bar{S}S}{\bar{N}N+\epsilon^2\,\bar{S}S}\; d
\tag{13}
$$
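The frequency-domain split of equations (12) and (13) can be sketched in a few lines of numpy. The spectra below are invented placeholders (any positive functions of frequency would do), and `eps` plays the role of $\epsilon$; the point is only that the two Wiener-style weights partition the data spectrum and sum back to it exactly.

```python
import numpy as np

nfreq = 256
w = np.linspace(0.0, np.pi, nfreq)
eps = 1.0                                  # trade-off parameter epsilon

# invented example power spectra (placeholders, not from the text)
NN = 1.0 + np.cos(w) ** 2                  # stands in for \bar N N
SS = 1.0 + np.sin(w) ** 2                  # stands in for \bar S S

rng = np.random.default_rng(0)
D = rng.standard_normal(nfreq) + 1j * rng.standard_normal(nfreq)  # data spectrum

S_hat = NN / (NN + eps**2 * SS) * D            # equation (12)
N_hat = eps**2 * SS / (NN + eps**2 * SS) * D   # equation (13)

assert np.allclose(S_hat + N_hat, D)       # the two estimates sum to the data
```

Raising `eps` shifts energy from the signal estimate into the noise estimate, which is the partitioning role of $\epsilon$ described above.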
The analytic solutions in equations (12) and (13) are valid in 2-D Fourier space or dip space too. I prefer to compute them in the time and space domain to give me tighter control on window boundaries, but the Fourier solutions give insight and offer a computational speed advantage.
Let us express the fitting goal in the form needed in computation.
$$
\begin{bmatrix} \mathbf{0} \\ \mathbf{0} \end{bmatrix}
\;\approx\;
\begin{bmatrix} N \\ \epsilon S \end{bmatrix} s
\;-\;
\begin{bmatrix} N d \\ \mathbf{0} \end{bmatrix}
\tag{14}
$$
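A small dense sketch of this stacked system, assuming short one-dimensional prediction-error filters for $N$ and $S$ (the filter coefficients, `conv_matrix` helper, and $\epsilon$ value are all hypothetical choices for illustration):

```python
import numpy as np

def conv_matrix(filt, n):
    """Dense matrix applying the causal filter `filt` to a length-n signal."""
    A = np.zeros((n, n))
    for lag, c in enumerate(filt):
        A += c * np.eye(n, k=-lag)
    return A

n = 100
rng = np.random.default_rng(0)
d = rng.standard_normal(n)             # data

# hypothetical two-term PEFs for noise and signal (placeholders)
N = conv_matrix([1.0, -0.9], n)
S = conv_matrix([1.0, 0.9], n)
eps = 0.1

# stacked least-squares system of equation (14):  [N; eps*S] s  ≈  [N d; 0]
A = np.vstack([N, eps * S])
b = np.concatenate([N @ d, np.zeros(n)])
s, *_ = np.linalg.lstsq(A, b, rcond=None)
noise = d - s                           # the complementary noise estimate
```

In practice the operators are applied as convolutions rather than stored as dense matrices, and the system is solved iteratively, but the residual being minimized is the same.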
As with the missing-data subroutines, the potential number of iterations is large, because the dimensionality of the space of unknowns is much larger than the number of iterations we would find acceptable. Thus, changing the number of iterations niter can sometimes create a larger change than changing $\epsilon$. Experience shows that helix preconditioning saves the day.
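The sensitivity to the iteration count can be seen even on a toy version of the stacked system, here solved with scipy's LSQR under two different iteration limits (the filters, sizes, and `eps` are again illustrative placeholders; this sketch does not include the helix preconditioning the text recommends):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def conv_matrix(filt, n):
    """Dense matrix applying the causal filter `filt` to a length-n signal."""
    A = np.zeros((n, n))
    for lag, c in enumerate(filt):
        A += c * np.eye(n, k=-lag)
    return A

n = 200
rng = np.random.default_rng(1)
d = rng.standard_normal(n)

N = conv_matrix([1.0, -0.8], n)
S = conv_matrix([1.0, 0.8], n)
eps = 0.1
A = np.vstack([N, eps * S])
b = np.concatenate([N @ d, np.zeros(n)])

s_few  = lsqr(A, b, iter_lim=5)[0]      # stopped early: far from converged
s_many = lsqr(A, b, iter_lim=500)[0]
print(np.linalg.norm(s_few - s_many))   # the gap left by too few iterations
```

With 200 unknowns and only 5 iterations, the early-stopped answer differs substantially from the converged one; a preconditioner that compresses the spectrum of the system shrinks that gap at a fixed niter.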