
Data Compression

In contrast to CPUs, implementing decompression and compression on streaming architectures only consumes extra hardware resources: the computational performance is not affected, because the fully pipelined circuit still delivers the same throughput. Moreover, since the performance of the design is in most cases constrained by how fast data can be pushed in and out, i.e., by the memory bandwidth, adding decompression and compression stages allows more data to be pushed through the processing circuits and improves the performance of the design.
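The following C sketch illustrates the idea in software terms: decompression and compression become extra stages on the datapath, while only the narrower 16-bit stream crosses the memory interface. The function names, the per-sample scale, and the trivial kernel are hypothetical placeholders, not the actual design.

  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical per-sample stages; in a fully pipelined FPGA design each
   * stage costs extra hardware on the datapath rather than extra cycles. */
  static float   decompress_sample(int16_t v, float scale) { return (float)v / scale; }
  static int16_t compress_sample(float v, float scale)     { return (int16_t)(v * scale); }
  static float   kernel_update(float v) { return v; /* placeholder for the real kernel */ }

  /* Stream: 16-bit data in, 16-bit data out; the wider 32-bit values exist
   * only inside the pipeline, so memory traffic is halved. */
  void process_stream(const int16_t *in, int16_t *out, size_t n, float scale)
  {
      for (size_t i = 0; i < n; i++) {
          float x = decompress_sample(in[i], scale);  /* decompression stage */
          float y = kernel_update(x);                 /* computation stage   */
          out[i]  = compress_sample(y, scale);        /* compression stage   */
      }
  }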

For typical seismic applications, the similar value ranges of neighboring data items allow both the wavefield and the velocity to be compressed by a significant ratio. For wavefield and velocity data represented as 32-bit single-precision floating-point numbers, a compression factor of two is usually achievable by converting the values into a 16-bit fixed-point representation; the performance of the design can then be doubled in cases where memory bandwidth is the bottleneck.
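A minimal sketch of such a conversion is given below, assuming a block-wise scale factor derived from the local value range; the block size, the scaling rule, and the function names are illustrative assumptions, not the scheme used in the actual design.

  #include <stdint.h>
  #include <stddef.h>
  #include <math.h>

  /* Compress a block of 32-bit floats into 16-bit fixed-point values.
   * A single scale factor per block exploits the similar value range of
   * neighboring samples, giving a 2x reduction in data volume. */
  void compress_block(const float *in, int16_t *out, size_t n, float *scale)
  {
      float max_abs = 0.0f;
      for (size_t i = 0; i < n; i++) {
          float a = fabsf(in[i]);
          if (a > max_abs)
              max_abs = a;
      }
      /* Map the largest magnitude in the block to the full 16-bit signed range. */
      *scale = (max_abs > 0.0f) ? 32767.0f / max_abs : 1.0f;
      for (size_t i = 0; i < n; i++)
          out[i] = (int16_t)lrintf(in[i] * *scale);
  }

  /* Decompress 16-bit fixed-point values back to 32-bit floats. */
  void decompress_block(const int16_t *in, float *out, size_t n, float scale)
  {
      for (size_t i = 0; i < n; i++)
          out[i] = (float)in[i] / scale;
  }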

