
Open Source Synthesis and Verification Tool for Fixed-to-Floating and Floating-to-Fixed Points Conversions

06 Sep 2016-Circuits and Systems (Scientific Research Publishing)-Vol. 07, Iss: 11, pp 3874-3885


Circuits and Systems, 2016, 7, 3874-3885
http://www.scirp.org/journal/cs
ISSN Online: 2153-1293; ISSN Print: 2153-1285
DOI: 10.4236/cs.2016.711323
Published: September 23, 2016

Open Source Synthesis and Verification Tool for Fixed-to-Floating and Floating-to-Fixed Points Conversions

Semih Aslan¹, Ekram Mohammad¹, Azim Hassan Salamy²
¹Ingram School of Engineering, Electrical Engineering, Texas State University, San Marcos, Texas, USA
²School of Engineering, Electrical Engineering, University of St. Thomas, St. Paul, Minnesota, USA
Abstract
An open source high level synthesis fixed-to-floating and floating-to-fixed conversion tool is presented for embedded design, communication systems, and signal processing applications. Many systems use a fixed point number system. Fixed point numbers often need to be converted to floating point numbers for higher accuracy, dynamic range, fixed-length transmission limitations or end user requirements. A similar conversion system is needed to convert floating point numbers to fixed point numbers due to the advantages that fixed point numbers offer when compared with floating point number systems, such as compact hardware, reduced verification time and design effort. The latest embedded and SoC designs use both number systems together to improve accuracy or reduce required hardware in the same design. The proposed open source design and verification tool converts fixed point numbers to floating point numbers, and floating point numbers to fixed point numbers, using the IEEE-754 floating point number standard. This open source design tool generates HDL code and its test bench that can be implemented in FPGA and VLSI systems. The design can be compiled and simulated using open source Iverilog/GTKWave and verified using Octave. A high level synthesis tool and GUI are designed using C#. The proposed design tool can increase productivity by reducing the design and verification time, as well as reduce the development cost due to the open source nature of the design tool. The proposed design tool can be used as a standalone block generator or implemented into current designs to improve range and accuracy and reduce the development cost. The generated design has been implemented on Xilinx FPGAs.
Keywords
FPGA, VLSI, RTL, Iverilog, GTKWave, OCTAVE, HLS, C#, Open Source
How to cite this paper: Aslan, S., Mohammad, E. and Salamy, A.H. (2016) Open Source Synthesis and Verification Tool for Fixed-to-Floating and Floating-to-Fixed Points Conversions. Circuits and Systems, 7, 3874-3885.
http://dx.doi.org/10.4236/cs.2016.711323

Received: May 18, 2016
Accepted: May 30, 2016
Published: September 23, 2016

Copyright © 2016 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/

Open Access

S. Aslan et al.
3875
1. Introduction
Most embedded systems, System-on-Chip (SoC) and transmission systems are implemented using either fixed point, floating point or hybrid number systems wherein fixed [1] [2] and floating point numbers [3] [4] can be used together in the same chip [5]-[7]. The IEEE 754-1985 standard was released for binary floating point arithmetic with some new features such as better precision, range and accuracy [8]. The current standard includes half, single, double, double-extended and quadruple precisions [8].
The single precision binary 32-bit floating and 32-bit fixed point numbers are shown
in Figure 1. The most significant bit (index number 31) of the floating point number is
the sign bit. The next eight bits are biased exponents (index numbers 30 - 23) and the
last 23 bits are fraction bits [8]-[10]. The representation and decimal calculation of the
single precision floating-point number are shown in Equations (1) and (2), respectively:
$x_{float} = (-1)^{s} \cdot \left(1.x_{22}x_{21}x_{20}x_{19}\cdots x_{1}x_{0}\right)_{2} \cdot 2^{\varepsilon-127}$   (1)

$X_{decimal} = (-1)^{s} \left(1 + \sum_{i=0}^{22} x_{i}\,2^{i-23}\right) \cdot 2^{\varepsilon-127}$   (2)
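As a sanity check on Equations (1) and (2), the fraction-weighted sum can be evaluated directly on a 32-bit pattern and compared against the machine's own IEEE 754 interpretation. This short sketch is illustrative and not part of the paper's tool; the bit pattern is an arbitrary example.

```python
import struct

def decode_float32(bits):
    """Evaluate Equation (2) on a 32-bit IEEE 754 pattern (normalized case only)."""
    s = (bits >> 31) & 0x1                      # sign bit (index 31)
    e = (bits >> 23) & 0xFF                     # biased exponent (indices 30-23)
    f = bits & 0x7FFFFF                         # fraction field (indices 22-0)
    frac = sum(((f >> i) & 1) * 2.0 ** (i - 23) for i in range(23))
    return (-1) ** s * (1.0 + frac) * 2.0 ** (e - 127)

pattern = 0x41C80000                            # arbitrary example pattern
reference = struct.unpack('>f', pattern.to_bytes(4, 'big'))[0]
assert decode_float32(pattern) == reference == 25.0
```

Both paths agree exactly because every term in the sum is a power of two and the arithmetic stays within double precision.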
The 32-bit fixed point number shown in Figure 1 is an integer signed fixed-point number; its representation and decimal equivalent are shown in Equations (3) and (4), respectively:

$x_{fixed} = \left(x_{31}x_{30}x_{29}x_{28}x_{27}\cdots x_{1}x_{0}\right)$   (3)

$X_{decimal} = -x_{31}\,2^{31} + \sum_{i=0}^{30} x_{i}\,2^{i}$   (4)
The numbers used in many DSP and communication systems are scaled to [−1, 1). The [−1, 1) scaled versions of Equations (3) and (4) can be written as Equations (5) and (6), respectively:

$x_{fixed} = \left(x_{31}.x_{30}x_{29}x_{28}x_{27}\cdots x_{1}x_{0}\right)$   (5)

$X_{decimal} = \left(-x_{31}\,2^{31} + \sum_{i=0}^{30} x_{i}\,2^{i}\right) \cdot 2^{-31}$   (6)
Figure 1. IEEE 754 floating and 32-bit fixed point numbers. [Figure: two 32-bit words with bits indexed 31 down to 0 — the IEEE 754 single precision word divided into sign S (bit 31), Exponent (8 bits, indices 30-23) and Fractions (23 bits, indices 22-0), and a 32-bit fixed point word with sign bit S (bit 31).]
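Equations (4) and (6) can be exercised with a few lines of code; this is an illustrative sketch, not part of the paper's tool, and the test word is an arbitrary example.

```python
def fixed32_to_int(bits):
    """Equation (4): two's-complement value of a 32-bit word."""
    return -((bits >> 31) & 1) * 2 ** 31 + sum(((bits >> i) & 1) * 2 ** i for i in range(31))

def fixed32_to_scaled(bits):
    """Equation (6): the same word read as a fraction in [-1, 1)."""
    return fixed32_to_int(bits) * 2.0 ** -31

word = 0xC0000000                    # sign bit and bit 30 set
assert fixed32_to_int(word) == -2 ** 30
assert fixed32_to_scaled(word) == -0.5
```

The scaled form simply divides the integer interpretation by 2^31, which is why the range is exactly [−1, 1).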

The hardware implementation of fixed point number systems requires less hardware than floating point number systems. The implementation of addition in a floating point number system is particularly difficult and consumes more hardware than its fixed point counterpart. Fixed point numbers are limited by the number of bits used, as shown in Figure 1. For example, representing the current U.S. national debt, which is 19,246,968,550,852 dollars [11], requires a 45-bit fixed point number system. Representing the same debt in Mexican pesos, which is 337,858,398,896,373, requires a 49-bit fixed point number system, and 44,385,703,632 bitcoins require a 36-bit fixed point number system. These numbers can be represented using single precision 32-bit floating point numbers 0x558C0A46, 0x5799A3E5, and 0x51255981, respectively.
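The bit widths and hexadecimal encodings quoted above can be reproduced in a few lines; here Python's `struct` module supplies the round-to-nearest IEEE 754 single precision conversion. This is an illustrative check, not part of the paper's tool.

```python
import struct

def float32_hex(value):
    """Hex of the nearest IEEE 754 single-precision encoding (big-endian)."""
    return struct.pack('>f', value).hex().upper()

amounts = [19_246_968_550_852, 337_858_398_896_373, 44_385_703_632]
print([a.bit_length() for a in amounts])   # [45, 49, 36] -> minimum unsigned widths
print([float32_hex(a) for a in amounts])   # ['558C0A46', '5799A3E5', '51255981']
```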
Another advantage of a floating-point number system is in the representation of smaller numbers [9]. For example, the charge and mass of a proton are 1.60217646 × 10^−19 C and 1.6726219 × 10^−27 kg, which can be represented as 32-bit floating point numbers 0x203D26D0 and 0x130484CD, respectively. The error in this representation is 3.9999 × 10^−27. To represent the charge of a proton using a fixed-point number system [12], 80 bits are required, and the error in that representation is 6.5653 × 10^−25. Increasing the width to 88 bits reduces the error to 3.4346 × 10^−25. This shows that floating point numbers offer better dynamic range.
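The single-precision encodings of the two physical constants, and the order of magnitude of the representation error, can be checked directly. This is an illustrative sketch (not from the paper); the error bound used below is half a unit in the last place at the charge's magnitude, about 6.5 × 10^−27, which is consistent with the 3.9999 × 10^−27 figure quoted above.

```python
import struct

def float32_roundtrip(x):
    """Round x to the nearest IEEE 754 single-precision value."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

charge, mass = 1.60217646e-19, 1.6726219e-27
assert struct.pack('>f', charge).hex().upper() == '203D26D0'
assert struct.pack('>f', mass).hex().upper() == '130484CD'
# Representation error: distance to the nearest single-precision neighbour,
# bounded by half a ulp (2**-87, about 6.5e-27, at this magnitude).
assert abs(charge - float32_roundtrip(charge)) < 2.0 ** -86
```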
To improve design accuracy and range and to reduce design time and cost, a custom fixed-to-floating and floating-to-fixed-point conversion tool is proposed. This conversion tool is described in Section 2. FPGA implementation and results are discussed in Section 3, and the conclusion and future work are given in Section 4.
2. Fixed-to-Floating Point Conversion System
Most hardware blocks in the latest processors, microcontrollers, microprocessors, and many special purpose arithmetic logic units (ALUs) use fixed and floating point arithmetic operations. Some of these hardware blocks need to communicate with each other, or I/O needs to be represented in either fixed and/or floating point numbers. Fixed-to-floating and floating-to-fixed point conversion blocks are often needed to transfer the data properly, as seen in Figure 2 below. Designing and verifying these blocks can be time consuming and lengthen the time-to-market (TTM) cycle.
Figure 2. Conversion blocks communication. [Figure: fixed point and floating point hardware blocks exchanging data through Fixed/Float and Float/Fixed conversion blocks, each with fixed and floating point inputs and outputs.]

Figure 3. Fixed-to-floating point conversion GUI.

The proposed conversion tool shown in Figure 3 can accelerate TTM [13] by reducing the verification time when IEEE-754 or custom floating point arithmetic is required. The GUI of the proposed conversion system is developed in C# with Windows Presentation Foundation (WPF). The software runs on any PC with Windows 7 or greater and requires no special runtime beyond .NET, which is available on every Windows PC. The GUI follows the Model-View-ViewModel (MVVM) pattern for improved interaction and an easy user interface; it is modular, refactorable and easily extendable. Because MVVM separates the GUI code from the logic, developing the UI does not interfere with the main function of the software.
This conversion tool takes an n-bit fixed point input and creates a 32-bit floating point number, and also takes a 32-bit floating point number and creates an n-bit fixed point output. The tool creates IEEE 754-based hardware and a test bench using Verilog HDL. The test bench is created using C#, and functional verification is done using Iverilog [14], GTKWave [15] and OCTAVE [16]. The code is also checked using LEDA [17] for ASIC and FPGA compatibility. The components of this conversion system are:

Float-to-Fixed: An optional check box to generate a float-to-fixed conversion block. The system converts a 32-bit single precision floating point number to any n-bit fixed point number.

o Fixed Point Total Bits: A user defined field for the fixed point width n. The current system supports any value of n; the default is 32 bits.
o Total Test Vectors Generate: A user defined field for the number of test vectors. The current system supports any number of test vectors; the default is 100.
Fixed-to-Float: An optional check box to generate a fixed-to-float conversion block. The system converts any n-bit fixed point number to a 32-bit single precision floating point number.
o Fixed Point Total Bits: A user defined field for the fixed point width n. The current system supports any value of n; the default is 32 bits.
o Total Test Vectors Generate: A user defined field for the number of test vectors. The current system supports any number of test vectors; the default is 100.
Select Iverilog Directory: The design and verification software supports both the Iverilog and Modelsim compilers and simulators. Due to the open source nature of the software, only Iverilog verification is performed; the generated RTL code can be compiled and simulated using third party EDA tools if necessary. The correct Iverilog path must be entered for verification to run; the standard Iverilog installation path is used as the default.
Generate Octave File for Error Analysis: During the Iverilog verification process, computed values are compared against expected values and the error is displayed in a plot window. This is an easy way to verify a large number of test vectors.
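The paper does not list the generated RTL, but the job of a fixed-to-float block — sign extraction, leading-one detection, normalization, and rounding of the discarded bits — can be sketched behaviorally. The model below treats the n-bit input as a signed integer per Equation (4) and returns the 32-bit IEEE 754 encoding; it is an illustration of the conversion principle, not the tool's actual hardware.

```python
import struct

def fixed_to_float32(word, n):
    """Behavioral model of an n-bit signed fixed-point to IEEE 754 single
    precision conversion. Returns the 32-bit encoding as an integer."""
    if word == 0:
        return 0
    sign = (word >> (n - 1)) & 1
    mag = (1 << n) - word if sign else word   # two's-complement magnitude
    msb = mag.bit_length() - 1                # leading-one position
    exponent = msb + 127                      # bias the exponent
    if msb <= 23:
        mant = mag << (23 - msb)              # shift leading one up to bit 23
    else:
        shift = msb - 23                      # too many bits: round them off
        mant = mag >> shift
        rem = mag & ((1 << shift) - 1)
        half = 1 << (shift - 1)
        if rem > half or (rem == half and mant & 1):
            mant += 1                         # round to nearest, ties to even
            if mant >> 24:                    # carry out of the mantissa
                mant >>= 1
                exponent += 1
    return (sign << 31) | (exponent << 23) | (mant & 0x7FFFFF)

# Agreement with the machine's own float conversion for a few 64-bit inputs:
for v in (1, 25, -1, 19246968550852, -123456789):
    bits = fixed_to_float32(v & ((1 << 64) - 1), 64)
    assert bits.to_bytes(4, 'big') == struct.pack('>f', v)
```

A hardware version of the same flow would typically implement the leading-one detection as a priority encoder and the normalization as a barrel shifter.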
The idea behind the fixed-to-floating point conversion system is similar to the High Level Synthesis (HLS) [18] design flow shown in Figure 4. This design flow model is used to create the proposed conversion tool design flow shown in Figure 5 below. The conversion tool generates the RTL design files and verifies them using the generated test bench file with user-defined test vectors. All of the test vectors are verified
Figure 4. HLS design flow.
Figure 5. The proposed conversion tool design flow.

References (partial list, as captured from the publisher page)

• Standard, 1 January 2008. [IEEE standard for binary floating-point arithmetic; cited for the current standard's half, single, double, double-extended and quadruple precisions.]
• Book, 12 November 2009. Handbook of Floating-Point Arithmetic — a complete overview of modern floating-point arithmetic, including a detailed treatment of the revised IEEE 754-2008 standard, algorithms for implementing and using floating-point arithmetic, fused multiply-add based techniques, and extensions such as certification, verification and big precision. Intended for programmers of numerical applications, compiler designers, designers of arithmetic operators, and students and researchers in numerical analysis.
• Book, 1 January 2001. A broad overview of numerical computing in a historical context, with a special focus on the IEEE standard for binary floating point arithmetic, covering floating point representation, correctly rounded arithmetic, exceptions, conditioning and stability.
• Journal article. VFloat: a variable precision floating-point library for reconfigurable hardware that supports general floating-point formats, including the IEEE standard formats, with hardware modules for format control, arithmetic operations, and conversions between fixed-point and floating-point formats; demonstrated on a K-means clustering implementation for multispectral satellite images.
• Book chapter, Enrico Bocchieri, 1 January 2008. Fixed-point methods for automatic speech recognition that match the accuracy of floating-point algorithms, covering the beam-search Viterbi decoder, N-gram language models, HMM likelihood computation and mel-cepstrum front-end, with real-time LVCSR results on StrongARM-1100 and XScale PXA270 embedded CPUs.