** Next:** Subsurface offset imaging condition
** Up:** Pell and Clapp: Accelerating
** Previous:** Computing with FPGAs

FPGAs require that data be accessed through the host processor and
transferred to the FPGA. To obtain a meaningful speed advantage,
a large number of operations must be performed for each data point.
The density of arithmetic operations per data item is the key to the potential for acceleration.
Algorithms that use a transferred data item only once (such as the vector-add example above) are unlikely to accelerate,
since the overhead of transferring the data across the bus is significant. Algorithms such as an offset gather,
which use each data item many times, will accelerate significantly.
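The trade-off described above can be sketched as an operations-per-item calculation. The following is a minimal illustration, not taken from the paper: the reuse factor `k` for the gather-style kernel is a hypothetical value chosen only to show the contrast with vector add.

```python
def arithmetic_intensity(ops, items_transferred):
    """Operations performed per data item moved across the bus."""
    return ops / items_transferred

n = 1_000_000  # number of data points (illustrative)

# Vector add c = a + b: one add per point, but three items
# (two reads, one write) cross the bus per point.
vec_add = arithmetic_intensity(ops=n, items_transferred=3 * n)

# A gather-style kernel that reuses each transferred sample
# against k outputs performs roughly k operations per item moved
# (k = 100 is a hypothetical reuse factor).
k = 100
gather = arithmetic_intensity(ops=k * n, items_transferred=n)

print(vec_add)  # low intensity: bus transfer dominates
print(gather)   # high intensity: a good candidate for FPGA speedup
```

Under this rough model, the vector add performs fewer than one operation per transferred item, while the reuse-heavy kernel performs `k`, which is why only the latter can amortise the bus-transfer overhead.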
Because FPGA accelerators dedicate specific resources to each operation executed, there is a maximum size to the code segment
that can be executed on-chip. This depends not only on the size of the FPGA but also on the complexity of the operations. Additions and multiplications can be implemented more densely than divisions, square roots, and complex functions (*sin*, *cos*, etc.), so algorithms in which adds and multiplies are dominant will accelerate particularly well. This is common in seismic applications.
In contrast to conventional processors, which support a fixed set of data representations (typically integer and IEEE floating point), FPGAs allow the data representation to be customised to the application. This allows acceleration to be maximised subject to desired accuracy constraints. The dynamic range of migration data is such that floating-point representations are not necessary, so our FPGA implementation uses fixed-point data. Fixed-point arithmetic can be implemented more densely and with lower latency on FPGAs than floating point, thus allowing increased acceleration without loss of accuracy.
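To make the fixed-point idea concrete, here is a minimal sketch of fixed-point multiplication, assuming a Q16.16-style signed format with 16 fractional bits. The word lengths are illustrative only; the paper does not specify the precision used in the actual FPGA implementation.

```python
FRAC_BITS = 16  # hypothetical number of fractional bits

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantise a real value to a signed fixed-point integer."""
    return int(round(x * (1 << frac_bits)))

def to_float(x, frac_bits=FRAC_BITS):
    """Convert a fixed-point integer back to a real value."""
    return x / (1 << frac_bits)

def fixed_mul(a, b, frac_bits=FRAC_BITS):
    """Multiply two fixed-point values, rescaling the product.

    On an FPGA this is a single integer multiply plus a shift,
    which maps to hardware far more densely than a floating-point
    multiplier.
    """
    return (a * b) >> frac_bits

a, b = 1.5, -0.25
product = to_float(fixed_mul(to_fixed(a), to_fixed(b)))
print(product)  # -0.375, exact for these operands
```

Provided the data's dynamic range fits within the chosen word length, such a representation loses no accuracy relative to floating point, while each multiplier consumes far fewer FPGA resources.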

Stanford Exploration Project

5/6/2007