I notice 2-3 dozen computational operations for each triangle wavelet. While so many operations are of no concern in demonstrations and experimental work, for production work it is desirable to eliminate them where possible. The midpoint axis is the key. With velocity transformations, the inner loop can then be a sum over midpoint. With migration, this speedup is possible only if the trace spacing and the velocity are constant along the x-axis. Subroutine aamig() implements this convolution over midpoint outside the loop on t, but the midpoint loop could be brought inside the inner loops of the spotw() subroutine. Then the overhead outside the midpoint loop would be of little consequence, and the migration should speed up by a factor of 2-3 dozen.
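Here is a minimal sketch of the loop reordering, not Claerbout's aamig() or spotw() themselves; the function name aamig_sketch and all details are my own illustration. The point is that when velocity and trace spacing are constant in x, the hyperbola time, the triangle half-width, and the three spike pulls into the doubly integrated data depend only on the offset index, not on the output midpoint, so all the wavelet bookkeeping hoists out and the inner "loop" over midpoint is a bare sum.

```python
import numpy as np

def aamig_sketch(dat, v, dt, dx):
    """Constant-velocity antialiased migration sketch.  All triangle-wavelet
    setup is done once per (time, offset); the midpoint sum is a bare add."""
    nt, nx = dat.shape
    img = np.zeros((nt, nx))
    # Doubly integrated data: convolving with a triangle of half-width h
    # reduces to three spike pulls, since the second difference of a
    # triangle is spikes (+1, -2, +1) at (-h, 0, +h).
    dd = np.cumsum(np.cumsum(dat, axis=0), axis=0)
    for it in range(1, nt):                      # vertical (image) time
        z = v * it * dt / 2.0
        for ix in range(1, nx):                  # offset from output point
            x = ix * dx
            rt = np.hypot(z, x)
            t = 2.0 * rt / v                     # hyperbola traveltime
            slope = 2.0 * x / (v * rt)           # dt/dx along the hyperbola
            half = max(1, int(round(slope * dx / dt)))  # antialias half-width
            i0 = int(t / dt)
            im, ip = i0 - half, i0 + half
            if im < 0 or ip >= nt:
                continue
            # Three spike pulls, identical for every midpoint at this offset:
            pulls = (dd[ip] - 2.0 * dd[i0] + dd[im]) / half**2   # shape (nx,)
            # Inner loop over midpoint is now a bare sum (vectorized here);
            # traces at distance ix on either side contribute to each image point.
            img[it, ix:] += pulls[: nx - ix]
            img[it, : nx - ix] += pulls[ix:]
    return img
```

For brevity the sketch omits the zero-offset (ix=0) contribution and any wavelet tapering; it only illustrates where the speedup comes from.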
It would be fun to have movies of migration versus the antialiasing parameter, to see whether the distinction between antialias=1 and antialias=2 has any practical importance.
Given the ample computer power we enjoy nowadays, it would also be interesting to adapt the program to variable trace spacing and missing data. Notice that the basic concept of wavelet overlap from one trace to the next adapts easily to drastically variable spacing or missing traces.
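One plausible reading of the overlap idea, sketched below with hypothetical names (triangle_feet; x holding trace coordinates relative to the output point): let each trace's triangle reach to its neighbors' hyperbola times. The triangles then behave like hat functions on an irregular grid, overlapping and summing to roughly unity, and a missing trace simply widens the triangles on either side of the gap.

```python
def triangle_feet(x, j, z, v):
    """Return (early, apex, late) times of the antialias triangle for trace j,
    with its feet on the neighboring traces' hyperbola times, so irregular
    spacing or gaps automatically widen or narrow the wavelet."""
    t = lambda xm: 2.0 * (z * z + xm * xm) ** 0.5 / v   # hyperbola traveltime
    t0 = t(x[j])
    lo = t(x[j - 1]) if j > 0 else t0                   # edge traces get
    hi = t(x[j + 1]) if j + 1 < len(x) else t0          # one-sided triangles
    return min(lo, hi, t0), t0, max(lo, hi, t0)
```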
Most seismic data is recorded with the low frequencies strongly filtered; however, I have seen recorded data with substantial zero-frequency energy. This energy seems to come from pressure sensors that stay at a constant depth off the bottom as surface water waves roll by, effectively at zero velocity. I conjecture that the double integral of such data would be so strongly dominated by the low frequencies that a precision problem would arise. One way to solve this problem is to prefilter the data. Another would be to use tapered rectangles and tapered triangles, starting from something like $(1-(.99Z)^n)/(1-.99Z)$.
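A minimal sketch of that taper, with names of my own choosing: expanding $(1-(rZ)^n)/(1-rZ)$ as a geometric series gives the coefficients $1, r, r^2, \ldots, r^{n-1}$, a "leaky" rectangle. In a recursive implementation the running sum $y_t = x_t + r\,y_{t-1}$ replaces the pure integrator, so the zero-frequency gain is bounded by $1/(1-r)$ (100 for $r=.99$) instead of growing without limit. The tapered triangle is then this rectangle convolved with its time reverse.

```python
import numpy as np

def tapered_rect(n, r=0.99):
    """Impulse response of (1-(rZ)^n)/(1-rZ): a geometrically tapered
    rectangle, 1, r, r**2, ..., r**(n-1)."""
    return r ** np.arange(n)

def tapered_triangle(n, r=0.99):
    """Symmetric tapered triangle (length 2n-1): the tapered rectangle
    convolved with its time reverse, normalized to unit peak."""
    rect = tapered_rect(n, r)
    tri = np.convolve(rect, rect[::-1])
    return tri / tri.max()
```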