
Bit-accurate Value Simulator

As discussed earlier, based on the range information, we are able to determine the integer bit-width of fixed-point and LNS numbers and the exponent bit-width of floating-point numbers. The remaining bit-widths, such as the fractional bit-width of fixed-point and LNS numbers and the mantissa bit-width of floating-point numbers, are predominantly related to the precision of the calculation. To find the minimum acceptable values for these precision bit-widths, we need a mechanism to determine whether a given set of bit-width values produces satisfactory results for the application.

In our previous work on function evaluation and other arithmetic designs, we set a requirement on the absolute error of the whole calculation and used a conservative error model to determine whether the current bit-width values meet that requirement (Lee et al., 2006). However, a specified requirement on absolute error does not work for seismic processing. To determine whether the current configuration of precision bit-widths is accurate enough, we need to run the whole program to produce the resulting image and check whether that image contains the correct pattern information. Thus, to enable exploration of different bit-width values, a value simulator for the different number representations is needed to provide bit-accurate simulation results for the hardware designs.

Besides producing results that are bit-accurate with respect to the corresponding hardware design, the simulator also needs to be efficiently implemented, as we need to run the whole application (which takes days on the whole input dataset) to produce the image.

In our approach, the simulator works with ASC-format C++ code. It re-implements the hardware data types, such as HWfix, HWfloat and HWlns, and overloads their arithmetic operators with the corresponding simulation code.

For HWfix variables, the value is stored in a 64-bit signed integer, while another integer records the position of the fractional point. The basic arithmetic operations are mapped into shifts and integer arithmetic on the 64-bit values.
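
As an illustration (not the actual HWfix implementation), the following sketch shows how such a fixed-point simulation type might be structured in C++; the names SimFix, raw and frac are ours.

    #include <cstdint>

    struct SimFix {
        int64_t raw;   // value scaled by 2^frac
        int     frac;  // number of fractional bits (position of the point)

        SimFix(double v, int frac_bits)
            : raw(static_cast<int64_t>(v * (double)(1LL << frac_bits))),
              frac(frac_bits) {}

        double to_double() const {
            return (double)raw / (double)(1LL << frac);
        }
    };

    // Addition: align the fractional points with shifts, then add the integers.
    inline SimFix operator+(const SimFix& a, const SimFix& b) {
        int f = a.frac > b.frac ? a.frac : b.frac;
        SimFix r(0.0, f);
        r.raw = (a.raw << (f - a.frac)) + (b.raw << (f - b.frac));
        return r;
    }

    // Multiplication: multiply the integers; the fractional bit-widths add up,
    // so shift back down to the chosen output precision.
    inline SimFix operator*(const SimFix& a, const SimFix& b) {
        SimFix r(0.0, a.frac);
        r.raw = (a.raw * b.raw) >> b.frac;
        return r;
    }

Keeping the fractional point as a separate integer lets the overloaded operators align operands with plain shifts, which mirrors the hardware behaviour as long as intermediate results fit in 64 bits.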

For HWfloat variables, the value is stored in a 64-bit double-precision floating-point number, with two other integers used to record the exponent and mantissa bit-widths. To keep the simulation simple and fast, the arithmetic operations are performed on double-precision floating-point values. However, to keep the result bit-accurate, during each assignment the double-precision value is decomposed into mantissa and exponent using the functions frexp and ldexp, truncated according to the exponent and mantissa bit-widths, and combined back into a double value.
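
One possible realisation of this per-assignment truncation step is sketched below; the helper name truncate_to_hwfloat and the handling of exponent overflow and underflow are illustrative assumptions rather than the actual HWfloat code.

    #include <cmath>

    double truncate_to_hwfloat(double v, int exp_bits, int mant_bits) {
        if (v == 0.0) return 0.0;
        int e;
        double m = std::frexp(v, &e);             // v = m * 2^e, 0.5 <= |m| < 1

        // Truncate the mantissa to mant_bits bits.
        double scale = std::ldexp(1.0, mant_bits);
        m = std::trunc(m * scale) / scale;

        // Saturate or flush if the exponent does not fit into exp_bits bits
        // (assuming a symmetric exponent range; the real format may differ).
        int e_max = (1 << (exp_bits - 1)) - 1;
        if (e > e_max)  return std::ldexp(m < 0 ? -1.0 : 1.0, e_max); // overflow
        if (e < -e_max) return 0.0;                                   // underflow

        return std::ldexp(m, e);                  // combine back into a double
    }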

The arithmetic operations of HWlns are implemented using HWfix numbers. Thus, we call the HWfix simulation code to perform the calculations of HWlns.
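
For illustration, and assuming the SimFix type from the sketch above, the fragment below shows the simplest case: an LNS value stores the base-2 logarithm of its magnitude as a fixed-point number, so an LNS multiplication reduces to a call to the fixed-point addition code. Zero handling and the LNS addition/subtraction functions are omitted from this sketch.

    #include <cmath>

    struct SimLns {
        bool   sign;   // sign of the represented value
        SimFix log2x;  // log2(|x|) held as a fixed-point number

        SimLns(double v, int frac_bits)
            : sign(v < 0.0), log2x(std::log2(std::fabs(v)), frac_bits) {}

        double to_double() const {
            double mag = std::exp2(log2x.to_double());
            return sign ? -mag : mag;
        }
    };

    // Multiplication in the log domain: add the fixed-point logarithms,
    // reusing the fixed-point simulation code for the addition.
    inline SimLns operator*(const SimLns& a, const SimLns& b) {
        SimLns r = a;
        r.sign  = a.sign != b.sign;
        r.log2x = a.log2x + b.log2x;
        return r;
    }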

