
Implementation

Implementing this method proves to be more problematic than the adjoint case. The added difficulty is caused by the 2-D filter: to parallelize over offset we would need significant inter-processor communication. This is problematic both for stability, since we must rely on all of the nodes remaining up, and for efficiency, because of the cost of sending the data and the delay caused by machine A needing data from machine B, which in turn needs data from machine C. As a result I decided to parallelize over frequency. As mentioned before, this also has drawbacks. An entire (cmpx,cmpy,hx,hy) hypercube cannot be held in memory for large problems, so we must do patching along some other axes.
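Parallelizing over frequency works because each frequency slice can be regularized independently, so no filter data needs to cross node boundaries. A minimal sketch of the decomposition, in Python rather than the actual SEP code, with nfreq and nnodes as illustrative names:

    def assign_frequencies(nfreq, nnodes):
        """Split the frequency axis into contiguous blocks, one per node,
        so that each node can process its slices with no communication."""
        base, extra = divmod(nfreq, nnodes)
        blocks, start = [], 0
        for node in range(nnodes):
            count = base + (1 if node < extra else 0)  # spread the remainder
            blocks.append(range(start, start + count))
            start += count
        return blocks

    # Example: assign_frequencies(1000, 12) gives the first 4 nodes 84 slices
    # each and the remaining 8 nodes 83; no slice is shared between nodes.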

The necessary transposes (to move from an inner time axis to an outer frequency axis and back) complicate matters. The data are initially broken along the trace axis. The local datasets are NMOed, FFTed, and transposed. The transposed data are then recombined with frequency as the outer axis. This procedure is significantly faster than performing the transpose on a single machine, where disk I/O dominates. By using multiple nodes that can each do the transpose in-core, or nearly in-core, the total cost drops to little more than that of distributing and collecting the data.
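A rough sketch of the per-node forward step, assuming numpy and treating nmo() as a stand-in for the actual moveout routine (this is illustrative, not the SEP implementation):

    import numpy as np

    def nmo(traces):
        # Placeholder for the real NMO correction applied to each local block.
        return traces

    def forward_prepare(traces):
        """Per-node forward step: NMO, FFT over the inner time axis, then
        transpose so that frequency becomes the outer axis.
        'traces' is this node's block of the data, shaped (ntrace, nt)."""
        moved = nmo(traces)
        spectra = np.fft.rfft(moved, axis=-1)  # time -> frequency on the inner axis
        return spectra.T                       # (nfreq, ntrace): frequency outermost

    # The (nfreq, ntrace_local) blocks from all nodes are then recombined along
    # the trace axis and re-split along frequency, so that each node ends up
    # holding complete slices for its assigned frequencies.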

Frequency blocks are then distributed to the nodes and equation (8) is applied. The data are then collected and re-split along the cmpx axis. The new regularized frequency slices are transposed, inverse FFTed, inverse NMOed, and recombined to form the output volume.
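A sketch of this output stage, continuing the same illustrative Python, with regularize() standing in for equation (8) and inverse_nmo() for the inverse moveout (both are assumed names, not SEP routines):

    import numpy as np

    def regularize_block(freq_slices, regularize):
        """Apply equation (8) independently to each frequency slice held by
        this node; 'regularize' is a stand-in for that operator."""
        return [regularize(s) for s in freq_slices]

    def finish_output(spectra, inverse_nmo, nt):
        """After the regularized data are collected and re-split along cmpx,
        each node holds all frequencies for its cmpx range ('spectra',
        shaped (nfreq, ntrace)): transpose back, inverse FFT, undo NMO."""
        traces = np.fft.irfft(spectra.T, n=nt, axis=-1)  # frequency -> time
        return inverse_nmo(traces)                       # back to the original domain

    # The per-node trace blocks are finally gathered and recombined to form
    # the regularized output volume.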

