Scaling on the time axis before migration can be advantageous. What about scaling on the space axis? The traditional scaling methods, known as automatic gain control (AGC), deduce a scaling divisor by smoothing an envelope of the data (its square or its absolute value) over some window. Such scaling can vary rapidly from trace to trace, so concern is justified that lateral jumps in the scaling function might create spurious diffractions. On the other hand, there can be good reasons for the scale to jump rapidly from trace to trace. The shots and geophones used to collect land data normally have variable strength and coupling, and these problems affect the entire trace.
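A minimal sketch of the AGC idea described above: divide each sample by a local envelope estimate obtained by smoothing the squared data over a sliding window. The window length, the epsilon guard, and the function name are illustrative choices, not prescribed here.

```python
import numpy as np

def agc(trace, window=64, eps=1e-10):
    """Automatic gain control (illustrative sketch).

    Smooth the squared data over a sliding window to form an
    envelope estimate, then divide the trace by that envelope.
    """
    # local mean power via a boxcar smoother over `window` samples
    power = np.convolve(trace**2, np.ones(window) / window, mode="same")
    envelope = np.sqrt(power) + eps  # eps guards against division by zero
    return trace / envelope
```

Because the divisor follows the local amplitude, a decaying trace comes out with roughly uniform strength, which is exactly why rapid trace-to-trace variation of such a divisor can be worrisome before migration.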
A model must be found that respects both physics and statistics. I suggest allowing for gain that is slowly time-variable and shots and geophones of arbitrarily variable strength, but I also prefer to regard an impulse as evidence that the earth really can focus. For example, data processing with this model can be implemented by smoothing the scaling envelope with the filter
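One way to honor the "arbitrarily variable shot and geophone strength" part of such a model is a surface-consistent decomposition: fit each trace's overall amplitude as the product of a shot factor and a geophone factor. The sketch below, with illustrative names and an alternating least-squares fit of log amplitudes (my choice of solver, not taken from the text), recovers those per-trace scalars without letting the scale vary within a trace.

```python
import numpy as np

def surface_consistent_scales(amp, n_iter=50):
    """Illustrative sketch: decompose per-trace RMS amplitudes
    amp[i, j] (shot i, geophone j) into shot and geophone strengths,
    modeling log amp[i, j] = s[i] + g[j] and fitting by alternating
    least squares (each update is the best mean given the other)."""
    loga = np.log(amp)
    s = np.zeros(amp.shape[0])  # log shot strengths
    g = np.zeros(amp.shape[1])  # log geophone strengths
    for _ in range(n_iter):
        s = (loga - g).mean(axis=1)            # best shot terms given geophones
        g = (loga - s[:, None]).mean(axis=0)   # best geophone terms given shots
    return np.exp(s), np.exp(g)
```

The factors are determined only up to a shared constant (a stronger shot set can trade off against weaker geophones), which is harmless since only the product scales the data.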