
High-Level Synthesis for FPGAs: From Prototyping to Deployment

TL;DR: AutoESL's AutoPilot HLS tool, coupled with domain-specific system-level implementation platforms developed by Xilinx, is used as an example to demonstrate the effectiveness of state-of-art C-to-FPGA synthesis solutions targeting multiple application domains.
Abstract: Escalating system-on-chip design complexity is pushing the design community to raise the level of abstraction beyond register transfer level. Despite the unsuccessful adoptions of early generations of commercial high-level synthesis (HLS) systems, we believe that the tipping point for transitioning to HLS methodology is happening now, especially for field-programmable gate array (FPGA) designs. The latest generation of HLS tools has made significant progress in providing wide language coverage and robust compilation technology, platform-based modeling, advancement in core HLS algorithms, and a domain-specific approach. In this paper, we use AutoESL's AutoPilot HLS tool coupled with domain-specific system-level implementation platforms developed by Xilinx as an example to demonstrate the effectiveness of state-of-art C-to-FPGA synthesis solutions targeting multiple application domains. Complex industrial designs targeting Xilinx FPGAs are also presented as case studies, including comparison of HLS solutions versus optimized manual designs. In particular, the experiment on a sphere decoder shows that the HLS solution can achieve an 11-31% reduction in FPGA resource usage with improved design productivity compared to hand-coded design.

Summary (8 min read)

I. INTRODUCTION

  • The rapid increase of complexity in System-on-a-Chip (SoC) design has encouraged the design community to seek design abstractions with better productivity than RTL.
  • In addition to the line-count reduction in design specifications, behavioral synthesis has the added value of allowing efficient reuse of behavioral IPs.
  • The wide availability of SystemC functional models directly drives the need for SystemC-based HLS solutions, which can automatically generate RTL code through a series of formal constructive transformations.
  • These pre-defined building blocks can be modeled precisely ahead of time for each FPGA platform and, to a large extent, confine the design space.
  • In Sections IV-VIII, using a state-of-art HLS tool as an example, the authors discuss some key reasons for the wider adoption of HLS solutions in the FPGA design community, including wide language coverage and robust compilation technology, platform-based modeling, advancement in core HLS algorithms, improvements on simulation and verification flow, and the availability of domain-specific design templates.

II. EVOLUTION OF HIGH-LEVEL SYNTHESIS FOR FPGA

  • Compilers for high-level languages have been successful in practice since the 1950s.
  • The idea of automatically generating circuit implementations from high-level behavioral specifications arises naturally with the increasing design complexity of integrated circuits.
  • Most of those tools, however, made rather simplistic assumptions about the target platform and were not widely used.
  • Early commercialization efforts in the 1990s and early 2000s attracted considerable interest among designers, but also failed to gain wide adoption, due in part to usability issues and poor quality of results.
  • More recent efforts in high-level synthesis have improved usability by increasing input language coverage and platform integration, as well as improving quality of results.

A. Early Efforts

  • Since the history of HLS is considerably longer than that of FPGAs, most early HLS tools targeted ASIC designs.
  • In the subsequent years in the 1980s and early 1990s, a number of similar high-level synthesis tools were built, mostly for research.
  • The list scheduling algorithm and its variants are widely used to solve scheduling problems with resource constraints [70]; the force-directed scheduling algorithm developed in HAL [73] is able to optimize resource requirements under a performance constraint; the path-based scheduling algorithm in the Yorktown Silicon Compiler is useful to optimize performance with conditional branches [12].
  • The Silage language, along with the Cathedral-II tool, represented an early domain-specific approach in high-level synthesis.
  • These tools received wide attention, but failed to widely replace RTL design.

B. Recent efforts

  • Since 2000, a new generation of high-level synthesis tools has been developed in both academia and industry.
  • The use of C-based languages also makes it easy to leverage the newest technologies in software compilers for parallelization and optimization in the synthesis tools.
  • (ii) C and C++ have complex language constructs, such as pointers, dynamic memory management, recursion, polymorphism, etc., which do not have efficient hardware counterparts and lead to difficulty in synthesis.
  • Handel-C allows the user to specify clock boundaries explicitly in the source code.
  • FPGAs have continually improved in capacity and speed in recent years, and their programmability makes them an attractive platform for many applications in signal processing, communication, and high-performance computing.

C. Lessons Learned

  • The authors believe that past failures are due to one or several of the following reasons:
  • The first generation of the HLS synthesis tools could not synthesize high-level programming languages.
  • Instead, untimed or partially timed behavioral HDL was used.
  • C and C++ lack the necessary constructs and semantics to represent hardware attributes such as design hierarchy, timing, synchronization, and explicit concurrency.

Lack of reusable and portable design specification:

  • Many HLS tools have required users to embed detailed timing and interface information as well as the synthesis constraints into the source code.
  • Lack of satisfactory quality of results (QoR):
  • There was no dependable RTL to GDSII foundation to support HLS, which made it difficult to consistently measure, track, and enhance HLS results.
  • As a result, the final implementation often fails to meet timing/power requirements.
  • Another major factor limiting quality of result was the limited capability of HLS tools to exploit performance-optimized and power-efficient IP blocks on a specific platform, such as the versatile DSP blocks and on-chip memories on modern FPGA platforms.

Lack of a compelling reason/event to adopt a new design methodology:

  • The first-generation HLS tools were clearly ahead of their time, as the design complexity was still manageable at the register transfer level in late 1990s.
  • Like any major transition in the EDA industry, designers needed a compelling reason or event to push them over the "tipping point," i.e., to adopt the HLS design methodology.
  • Although a designer might wish for a tool that takes any input program and generates the "best" hardware architecture, this goal is not generally practical for HLS to achieve.
  • It is critical that these optimizations be carefully implemented using scalable and predictable algorithms, keeping tool runtimes acceptable for large programs and the results understandable by designers.
  • The code should be readable by algorithm specialists.

2. Effectively generate efficient parallel architectures

  • For parallelizable algorithms, the tool should generate efficient parallel architectures with only minimal modification of the C code.
  • Allow an optimization-oriented design process, where a designer can improve the performance of the resulting implementation by successive code modification and refactoring.
  • Generate implementations that are competitive with synthesizable RTL designs after automatic and manual optimization.
  • Moreover, the authors are pleased to see that the latest generation of HLS tools has made significant progress in providing wide language coverage and robust compilation technology, platform-based modeling, and advanced core HLS algorithms.
  • The authors shall discuss these advancements in more detail in the next few sections.

III. CASE STUDY OF STATE-OF-ART OF HIGH-LEVEL SYNTHESIS FOR FPGAS

  • AutoPilot is one of the most recent HLS tools, and is representative of the capabilities of the state-of-art commercial HLS tools available today.
  • AutoPilot outputs RTL in Verilog, VHDL or cycle-accurate SystemC for simulation and verification.
  • These SystemC wrappers connect high-level interfacing objects in the behavioral test bench with pin-level signals in RTL.
  • The reports include a breakdown of performance and area metrics by individual modules, functions and loops in the source code.
  • Finally, the generated HDL files and design constraints feed into the Xilinx RTL tools for implementation.

Improved design quality:

  • Comprehensive language support allows designers to take full advantage of rich C/C++ constructs to maximize simulation speed, design modularity and reusability, as well as synthesis QoR.
  • In fact, many early C-based synthesis tools only handle a very limited language subset, which typically includes the native integer data types (e.g., char, short, int, etc.), one-dimensional arrays, if-then-else conditionals, and for loops.
  • The arbitrary-precision fixed-point (ap_fixed) data types support all common algorithmic operations.
  • Designers can explore the accuracy and cost tradeoff by modifying the resolution and fixed-point location and experimenting with various quantization and saturation modes (see the sketch after this list).
  • AutoPilot also supports the OSCI synthesizable subset [113] for SystemC synthesis.
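
As a minimal sketch of how these types are used (the ap_fixed template parameters follow the total-width/integer-bits/quantization-mode/overflow-mode convention of AutoPilot and its successors; the filter itself is illustrative, not from the paper):

    #include <ap_fixed.h>

    // 12-bit words with 4 integer bits; round-to-nearest quantization (AP_RND)
    // and saturation on overflow (AP_SAT). Adjusting the width and integer-bit
    // parameters is how a designer explores the accuracy/cost tradeoff.
    typedef ap_fixed<12, 4, AP_RND, AP_SAT> coef_t;
    typedef ap_fixed<16, 6, AP_RND, AP_SAT> acc_t;

    acc_t mac3(const coef_t c[3], const coef_t x[3]) {
        acc_t acc = 0;
        for (int i = 0; i < 3; ++i)
            acc += c[i] * x[i];  // fixed-point multiply-accumulate
        return acc;
    }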

B. Use of state-of-the-art compiler technologies

  • AutoPilot tightly integrates the LLVM compiler infrastructure [59][110] to leverage leading-edge compiler technologies.
  • AutoPilot uses the llvm-gcc front end to obtain an intermediate representation (IR) based on the LLVM instruction set.
  • In particular, the following classes of transformations and analyses have proven very useful for hardware synthesis: SSA-based code optimizations such as constant propagation, dead code elimination, and redundant code elimination based on global value numbering [2] (a small before/after example follows this list).
  • Memory optimizations such as memory reuse, array scalarization, and array partitioning [19] to reduce the number of memory accesses and improve memory bandwidth.
  • In other words, the code can be optimized without considering the source language.
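
A hand-written before/after illustration (not actual tool output) of why such passes matter for synthesis: constant propagation and dead code elimination shrink the datapath before scheduling and binding ever see it.

    // Before optimization: 'scale' is a compile-time constant and 'dbg' is unused.
    int filter_tap(int x) {
        const int scale = 4;
        int dbg = x * 37;   // dead code: eliminated, saving one multiplier
        return x * scale;   // constant propagation turns this into x * 4,
                            // which strength reduction lowers to x << 2
    }

After the SSA-based passes, the synthesized datapath is a single 2-bit shift rather than two multipliers.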

A. Platform modeling for Xilinx FPGAs

  • AutoPilot uses detailed target platform information to carry out informed and target-specific synthesis and optimization.
  • The resulting characterization data is then used to make implementation choices during synthesis.
  • Notably, the cost of implementing hardware on FPGAs is often different from that for ASIC technology.
  • On FPGAs, multiplexors typically have the same cost and delay as an adder (approximately one LUT/output).
  • FPGA technology also features heterogeneous on-chip resources, including not only LUTs and flip flops but also other prefabricated architecture blocks such as DSP48s and Block RAMs.

B. Integration with Xilinx toolset

  • In order to raise the level of design abstraction more completely, AutoPilot attempts to hide details of the downstream RTL flow from users as much as possible.
  • Otherwise, a user may be overwhelmed by the details of vendor-specific tools such as the formats of constraint and configuration files, implementation and optimization options, or directory structure requirements.
  • As shown in Figure 1, AutoPilot instantiates these interfaces along with adapter logic and appropriate EDK meta-information, so that a generated module can be quickly connected into an EDK system.

A. Efficient mathematical programming formulations for scheduling

  • Classical approaches to the scheduling problem in high-level synthesis use either conventional heuristics such as list scheduling [1] and force-directed scheduling [73], which often lead to sub-optimal solutions due to the nature of local optimization methods, or exact formulations such as integer-linear programming [45], which can be difficult to scale to large designs.
  • Unlike previous approaches, which use O(m×n) binary variables to encode a scheduling solution with n operations and m steps [45], the SDC (system of difference constraints) formulation uses a continuous representation of time with only O(n) variables: for each operation i, a scheduling variable s_i is introduced to represent the time step at which the operation is scheduled.
  • A linear program with a totally unimodular constraint matrix is guaranteed to have integral solutions.
  • Many commonly encountered constraints in high-level synthesis can be expressed in the form of integer-difference constraints (sketched after this list).
  • Other complex constraints can be handled in similar ways, using approximations or other heuristics.
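
As a sketch of these constraint forms (notation follows the scheduling variables s_i above; the exact inequalities emitted vary by tool and are illustrative here), a data dependence from operation i with latency l_i to operation j, and a relative timing bound of c cycles, are both integer-difference constraints:

    \begin{align*}
    s_j - s_i &\ge l_i  && \text{(data dependence } i \to j\text{)} \\
    s_j - s_i &\le c    && \text{(relative timing bound)}
    \end{align*}

Each such row of the constraint matrix has a single +1 and a single -1 entry, which is what makes the matrix totally unimodular and the LP relaxation integral.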

B. Soft constraints and applications for platform-based optimization

  • In a typical synthesis tool, design intentions are often expressed as constraints.
  • While some of these constraints are essential for the design to function correctly, many others are not.
  • It is possible that a solution with a slight nominal timing violation can still meet the frequency requirement, considering inaccuracy in interconnect delay estimation and various timing optimization procedures in later design stages, such as logic refactoring, retiming, and interconnect optimization.
  • The approach is based on the SDC formulation discussed in the preceding subsection, but allows some constraints to be violated.
  • Consider the scheduling problem with both hard constraints and soft constraints formulated as follows.

Gs ≤ p (hard constraints)
Hs ≤ q (soft constraints)

  • Here G and H correspond to the matrices representing hard constraints and soft constraints, respectively, and both are totally unimodular as shown in [15] (the overall program is sketched after this list).
  • Hard constraints and soft constraints are generated based on the functional specification and QoR targets.
  • This approach offers a powerful yet flexible framework to address various considerations in scheduling.
  • Take the DSP48E block in Xilinx Virtex 5 FPGAs for example: each of the DSP48E blocks contains a multiplier and a post-adder, allowing efficient implementations of multiplication and multiply-accumulation.
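
A hedged sketch of the overall mathematical program (the slack encoding and penalty term are illustrative of the approach in [15], not a verbatim reproduction): violation variables v relax the soft constraints, and the objective penalizes them.

    \begin{align*}
    \min_{s,\,v}\quad & \Phi(v) && \text{(penalty on soft-constraint violations)} \\
    \text{s.t.}\quad  & Gs \le p && \text{(hard constraints)} \\
                      & Hs - v \le q,\; v \ge 0 && \text{(relaxed soft constraints)}
    \end{align*}

With this form, a schedule that cannot honor every preference (e.g., packing a multiplication and its post-add into the same DSP48E) still remains feasible, and the solver trades off which preferences to give up.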

C. Pattern mining for efficient sharing

  • A typical target architecture for HLS may introduce multiplexers when functional units, storage units or interconnects are shared by multiple operations/variables in a time-multiplexed manner.
  • Multiplexers (especially large ones) can be particularly expensive on FPGA platforms.
  • Thus, careless decisions on resource sharing could introduce more overhead than benefit.
  • The method tries to extract common structures or patterns in the data-flow graph, so that different instances of the same pattern can share resources with little overhead (see the small example after this list).
  • Pruning techniques are proposed based on characteristic vectors and locality-sensitive hashing.
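
A tiny hand-written illustration of the idea (not from the paper): both statements below instantiate the same multiply-add pattern, so a pattern-aware binder could time-multiplex one multiply-add unit (e.g., a single DSP48E) across them, paying for one set of input multiplexers per pattern rather than per operation.

    // Two instances of the pattern (x * y + z) in one function. Sharing the
    // whole two-operation pattern amortizes multiplexer cost better than
    // sharing the multiplier and the adder independently.
    int pattern_demo(int a, int b, int c, int d, int e, int f) {
        int u = a * b + c;  // pattern instance 1
        int v = d * e + f;  // pattern instance 2
        return u - v;
    }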

D. Memory analysis and optimizations

  • While application-specific computation platforms such as FPGAs typically have considerable computational capability, their performance is often limited by available communication or memory bandwidth.
  • Typical FPGAs, such as the Xilinx Virtex series, have a considerable number of block RAMs.
  • Consider a loop that accesses array A with subscripts i, 2×i+1, and 3×i+1, in the ith iteration.
  • If the loop is targeted to be pipelined with an initiation interval of one, i.e., a new loop iteration starts every clock cycle, a naive two-bank (odd/even) partitioning leads to port conflicts, because (i+1) mod 2 = (2×(i+1)+1) mod 2 = (3×i+1) mod 2 when i is even; overlapped iterations then make three simultaneous accesses to the same bank (a code sketch follows this list).
  • Then, an iterative algorithm is used to perform both scheduling and memory partitioning guided by the conflict graph.
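
A hedged C sketch of that loop (the pragma spellings follow later Vivado HLS conventions and are illustrative rather than AutoPilot's exact syntax): a naive two-bank cyclic partition still conflicts at II=1, which is precisely what the conflict-graph-driven partitioning is designed to avoid.

    #define N 1024

    void kernel(int A[N], int B[N]) {
        // Naive banking: bank = address mod 2. As noted above, overlapped
        // iterations i (reading A[3*i+1]) and i+1 (reading A[i+1] and
        // A[2*(i+1)+1]) can hit the same bank three times in one cycle.
    #pragma HLS ARRAY_PARTITION variable=A cyclic factor=2
        for (int i = 0; i + 1 < N / 3; ++i) {
    #pragma HLS PIPELINE II=1
            B[i] = A[i] + A[2 * i + 1] + A[3 * i + 1];  // three reads per iteration
        }
    }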

VII. ADVANCES IN SIMULATION AND VERIFICATION

  • Besides the many advantages of automated synthesis, such as quick design space exploration and automatic complex architectural changes like pipelining, resource sharing and scheduling, HLS also enables a more efficient debugging and verification flow at the higher abstraction levels.
  • Since HLS provides an automatic path to implementable RTL from behavioral/functional models, designers do not have to wait for manual RTL models to become available to conduct verification.
  • Instead, they can develop, debug and functionally verify a design at an earlier stage with high-level programming languages and tools.
  • This can significantly reduce the verification effort due to the following reasons: (i) It is easier to trace, identify and fix bugs at higher abstraction levels with more compact and readable design descriptions.
  • (ii) Simulation at the higher level is typically orders of magnitude faster than RTL simulation, allowing more comprehensive tests and greater coverage.

A. Automatic co-simulation

  • At present, simulation is still the prevalent technique for checking whether the resulting RTL complies with the high-level specification.
  • To reduce effort spent on RTL simulation, the latest HLS technologies have made important improvements on automatic co-simulation [86].
  • A C-to-RTL transactor is created to connect high-level interfacing constructs (such as parameters and global variables) with pin-level signals in RTL (a rough sketch follows this list).
  • This wrapper also includes additional control logic to manage the communication between the testing module and the RTL design under test (DUT).
  • A pipelined design may require that the test bench feed input data into the DUT at a fixed rate.
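
A rough sketch of the transactor idea (entirely illustrative; AutoPilot generates this glue automatically rather than exposing an API like the one below): the RTL DUT is hidden behind the design's original C signature, so the unmodified C test bench drives the RTL simulation.

    // Assumed co-simulation hook, not a real AutoPilot API: serializes the
    // arguments onto the DUT's pin-level ports, advances the clock, and
    // collects the outputs.
    extern void rtl_dut_transact(const int* in, int* out, int n);

    // Same signature as the original C function, so the existing test bench
    // calls it unchanged; the body forwards each call to the RTL design.
    void fir64(const int in[64], int out[64]) {
        rtl_dut_transact(in, out, 64);
    }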

VIII. DOMAIN-SPECIFIC HLS PLATFORMS

  • The time-to-market of an FPGA system design depends on many factors, such as the availability of reference designs, development boards, and the FPGA devices themselves.
  • This integration often includes a wide variety of system-level design concerns, including embedded software, system integration, and verification [104] .
  • As a result, these cores are not easily amenable to high-level synthesis and form part of the system infrastructure of a design.
  • The processor subsystem (PSS) is responsible for executing the relatively low-performance processing in the system.
  • The portion of a design generated using HLS represents the bulk of the FPGA design and communicates with the system infrastructure through standardized wire-level interfaces, such as the AXI4 memory-mapped and streaming interfaces [96] shown in Figure 7.

A. High-level design of cognitive radios project

  • Cognitive radio systems typically contain computationally intensive, high-data-rate radio processing, along with complex but relatively low-rate processing to control the radio processing.
  • Efficient interaction with the processor is an important part of the overall system complexity.
  • The processor subsystem contains standard hardware modules and is capable of running a standard embedded operating system, such as Linux.
  • The accelerator subsystem is used for implementing components with high computational requirements in hardware.
  • Components also expose a configuration interface with multiple parameters, allowing them to be reconfigured in an executing system by user-defined control code executing in the processor subsystem.

B. Video Starter Kit

  • Video processing systems implemented in FPGA include a wide variety of applications from embedded computer-vision and picture quality improvement to image and video compression.
  • Typically these systems include two significant pieces of complexity.
  • This platform is derived from the Xilinx EDK-based reference designs provided with the Xilinx Spartan-3A DSP Video Starter Kit and has been ported to several Xilinx Virtex 5 and Spartan 6 based development boards, targeting high-definition (HD) video processing with pixel clocks up to 150 MHz.
  • The incoming video data is analyzed by the Frame Decoder block to determine the frame size of the incoming video, which is passed to the application block, enabling different video formats to be processed.
  • The interface to external memory used for frame buffers is implemented using the Xilinx Multi-ported Memory Controller (MPMC) [118] which provides access to external memory to the Application Block and to the Microblaze control processor, if necessary.

A. Summary of BDTI HLS Certification

  • Xilinx has worked with BDTI Inc. [99] to implement an HLS Tool Certification Program [100] .
  • This program was designed to compare the results of an HLS tool targeting the Xilinx Spartan 3 FPGA that is part of the Video Starter Kit with the results of a conventional DSP processor and of a good manual RTL implementation.
  • Two applications were used in this Certification Program: an optical flow algorithm, characteristic of a demanding image processing application, and a wireless application for which a very representative RTL implementation was available.
  • The DSP processor implementation rated "fair", while the AutoPilot implementation rated "good", indicating that less source code modification was necessary to achieve high performance when using AutoPilot.
  • BDTI also assessed overall ease of use of the DSP tool flow and the FPGA tool flow, combining HLS with the low-level implementation tools.

B. Sphere Decoder

  • Xilinx has implemented a sphere decoder for a multi-input multi-output (MIMO) wireless communication system using AutoPilot [67][85].
  • The application exhibits a large amount of parallelism, since the operations must be executed on each of 360 independent subcarriers which form the overall communication channel and the processing for each channel can generally be pipelined.
  • The resulting HLS code for the application makes heavy use of C++ templates to describe arbitrary-precision integer data types and parameterized code blocks used to process different matrix sizes at different points in the application (a flavor of this style is sketched after this list).
  • Both designs were implemented as standalone cores using ISE 12.1, targeting Xilinx Virtex 5 speed grade 2 at 225 MHz.
  • Using AutoPilot Version 2010.07.ft, the authors were able to generate a design that was smaller than the reference implementation in less time than a hand RTL implementation by refactoring and optimizing the algorithmic C model.
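
A flavor of this template-parameterized style, as a hedged sketch (names and types are illustrative of the description above, not the paper's actual source; ap_int is the arbitrary-precision integer class mentioned earlier):

    #include <ap_int.h>

    // One code block instantiated for several matrix sizes: ROWS is a
    // compile-time parameter, so the tool can unroll and size the datapath
    // separately for each instantiation point in the application.
    template <int ROWS, int WBITS>
    void column_norm(const ap_int<WBITS> col[ROWS],
                     ap_int<2 * WBITS + 4>& norm) {
        ap_int<2 * WBITS + 4> acc = 0;
        for (int r = 0; r < ROWS; ++r)
            acc += ap_int<2 * WBITS>(col[r]) * col[r];  // full-precision square
        norm = acc;
    }

    // Different points of the design instantiate different sizes, e.g.:
    //   column_norm<4, 16>(...);  column_norm<8, 16>(...);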

Top-level Block Diagram

  • Design time for the RTL design was estimated from work logs by the original authors of [28] , and includes only the time for an algorithm expert and experienced tool user to enter and verify the RTL architecture in System Generator.
  • Given the significant time the authors spent familiarizing themselves with the application and the structure of the code, they believe that an application expert familiar with the code would be able to create such a design at least twice as fast.
  • To meet the required throughput, one row of the systolic array is instantiated, consisting of one diagonal cell and 8 off-diagonal cells, and the remaining rows are time multiplexed over the single row.
  • In the 4x4 case, the off-diagonal cell implements fine-grained resource sharing, with one resource-shared complex multiplier.
  • The authors do observe that AutoPilot uses additional BRAM to implement this block relative to the RTL implementation, because AutoPilot requires tool-implemented double-buffers to only be read or written in a single loop.

X. CONCLUSIONS AND CHALLENGES AHEAD

  • It seems clear that the latest generation of FPGA HLS tools has made significant progress in providing wide language coverage, robust compilation technology, platform-based modeling, and domain-specific system-level integration.
  • As a result, they can quickly provide highly competitive quality of results, in many cases comparable to or better than manual RTL designs.
  • For the FPGA design community, it appears that HLS technology may be transitioning from research and investigation to selected deployment.
  • The authors also see many opportunities for HLS tools to further improve.


High-Level Synthesis for FPGAs: From Prototyping to Deployment
Jason Cong¹,², Fellow, IEEE, Bin Liu¹,², Stephen Neuendorffer³, Member, IEEE, Juanjo Noguera³, Kees Vissers³, Member, IEEE, and Zhiru Zhang¹, Member, IEEE
¹ AutoESL Design Technologies, Inc.  ² University of California, Los Angeles  ³ Xilinx, Inc.
Abstract—Escalating System-on-Chip design complexity is pushing the design community to raise the level of abstraction beyond RTL. Despite the unsuccessful adoptions of early generations of commercial high-level synthesis (HLS) systems, we believe that the tipping point for transitioning to HLS methodology is happening now, especially for FPGA designs. The latest generation of HLS tools has made significant progress in providing wide language coverage and robust compilation technology, platform-based modeling, advancement in core HLS algorithms, and a domain-specific approach. In this paper we use AutoESL's AutoPilot HLS tool coupled with domain-specific system-level implementation platforms developed by Xilinx as an example to demonstrate the effectiveness of state-of-art C-to-FPGA synthesis solutions targeting multiple application domains. Complex industrial designs targeting Xilinx FPGAs are also presented as case studies, including comparison of HLS solutions versus optimized manual designs.

Index Terms—Domain-specific design, field-programmable gate array (FPGA), high-level synthesis (HLS), quality of results (QoR).
I. INTRODUCTION
THE RAPID INCREASE of complexity in System-on-a-Chip (SoC) design has encouraged the design community to seek design abstractions with better productivity than RTL.
Electronic system-level (ESL) design automation has been
widely identified as the next productivity boost for the
semiconductor industry, where HLS plays a central role,
enabling the automatic synthesis of high-level, untimed or
partially timed specifications (such as in C or SystemC) to
low-level, cycle-accurate register-transfer level (RTL)
specifications for efficient implementation in ASICs or
FPGAs. This synthesis can be optimized taking into account
the performance, power, and cost requirements of a particular
system.
Despite the past failure of the early generations of
commercial HLS systems (started in the 1990s), we see a
rapidly growing demand for innovative, high-quality HLS
solutions for the following reasons:
Embedded processors are in almost every SoC: With
the coexistence of micro-processors, DSPs, memories
and custom logic on a single chip, more software
elements are involved in the process of designing a
modern embedded system. An automated HLS flow
allows designers to specify design functionality in high-
level programming languages such as C/C++ for both
embedded software and customized hardware logic on
the SoC. This way, they can quickly experiment with
different hardware/software boundaries and explore
various area/power/performance tradeoffs from a single
common functional specification.
Huge silicon capacity requires a higher level of
abstraction: Design abstraction is one of the most
effective methods for controlling complexity and
improving design productivity. For example, the study
from NEC [90] shows that a 1M-gate design typically
requires about 300K lines of RTL code, which cannot be
easily handled by a human designer. However, the code
density can be easily reduced by 7X to 10X when moved
to high-level specification in C, C++, or SystemC. In this
case, the same 1M-gate design can be described in 30K
to 40K lines of behavioral description, resulting
in a much reduced design complexity.
Behavioral IP reuse improves design productivity: In
addition to the line-count reduction in design
specifications, behavioral synthesis has the added value
of allowing efficient reuse of behavioral IPs. As opposed
to RTL IP which has fixed microarchitecture and
interface protocols, behavioral IP can be retargeted to
different implementation technologies or system
requirements.
Verification drives the acceptance of high-level
specification: Transaction-level modeling (TLM) with
SystemC [107] or similar C/C++ based extensions has
become a very popular approach to system-level
verification [35]. Designers commonly use SystemC
TLMs to describe virtual software/hardware platforms,
which serve three important purposes: early embedded
software development, architectural modeling and
exploration, and functional verification. The wide
availability of SystemC functional models directly drives
the need for SystemC-based HLS solutions, which can
automatically generate RTL code through a series of
formal constructive transformations. This avoids slow
and error-prone manual RTL re-coding, which is the
standard practice in the industry today.
Trend towards extensive use of accelerators and
heterogeneous SoCs: Many SoCs, or even CMPs (chip
multi-processors) move towards inclusion of many
accelerators (or algorithmic blocks), which are built with
custom architectures, largely to reduce power compared
to using multiple programmable processors. According
to ITRS prediction [109], the number of on-chip
accelerators will reach 3000 by 2024. In FPGAs, custom
architecture for algorithmic blocks provides higher
performance in a given amount of FPGA resources than
synthesized soft processors. These algorithmic blocks
are particularly appropriate for HLS.
Although these reasons for adopting HLS design
methodology are common to both ASIC and FPGA designers,
we also see additional forces that push the FPGA designers for
faster adoption of HLS tools.
Less pressure for formal verification: The ASIC
manufacturing cost in nanometer IC technologies is well
over $1M [109]. There is tremendous pressure for the
ASIC designers to achieve first tape-out success. Yet
formal verification tools for HLS are not mature, and
simulation coverage can be limited for multi-million gate
SOC designs. This is a significant barrier for HLS
adoption in the ASIC world. However, for FPGA
designs, in-system simulation is possible with much
wider simulation coverage. Design iterations can be
done quickly and inexpensively without huge
manufacturing costs.
Ideal for platform-based synthesis: Modern FPGAs
embed many pre-defined/fabricated IP components, such
as arithmetic function units, embedded memories,
embedded processors, and embedded system buses.
These pre-defined building blocks can be modeled
precisely ahead of time for each FPGA platform and, to
a large extent, confine the design space. As a result, it is
possible for modern HLS tools to apply a platform-based
design methodology [51] and achieve higher quality of
results (QoR).
More pressure for time-to-market: FPGA platforms
are often selected for systems where time-to-market is
critical, in order to avoid long chip design and
manufacturing cycles. Hence, designers may accept
increased performance, power, or cost in order to reduce
design time. As shown in Section IX, modern HLS tools
put this tradeoff in the hands of a designer allowing
significant reduction in design time or, with additional
effort, quality of result comparable to hand-written RTL.
Accelerated or reconfigurable computing calls for
C/C++ based compilation/synthesis to FPGAs: Recent
advances in FPGAs have made reconfigurable
computing platforms feasible to accelerate many high-
performance computing (HPC) applications, such as
image and video processing, financial analytics,
bioinformatics, and scientific computing applications.
Since RTL programming in VHDL or Verilog is
unacceptable to most application software developers, it
is essential to provide a highly automated
compilation/synthesis flow from C/C++ to FPGAs.
As a result, a growing number of FPGA designs are
produced using HLS tools. Some example application
domains include 3G/4G wireless systems [38][81], aerospace
applications [75], image processing [27], lithography
simulation [13], and cosmology data analysis [52]. Xilinx is
also in the process of incorporating HLS solutions in their
Video Development Kit [116] and DSP Development Kit [97] for
all Xilinx customers.
This paper discusses the reasons behind the recent success
in deploying HLS solutions to the FPGA community. In
Section II we review the evolution of HLS systems and
summarize the key lessons learned. In Sections IV-VIII, using
a state-of-art HLS tool as an example, we discuss some key
reasons for the wider adoption of HLS solutions in the FPGA
design community, including wide language coverage and
robust compilation technology, platform-based modeling,
advancement in core HLS algorithms, improvements on
simulation and verification flow, and the availability of
domain-specific design templates. Then, in Section IX, we
present the HLS results on several real-life industrial designs
and compare with manual RTL implementations. Finally, in
Section X, we conclude the paper with discussions of future
challenges and opportunities.
II. EVOLUTION OF HIGH-LEVEL SYNTHESIS FOR FPGA
In this section we briefly review the evolution of high-level
synthesis by looking at representative tools. Compilers for
high-level languages have been successful in practice since the
1950s. The idea of automatically generating circuit
implementations from high-level behavioral specifications
arises naturally with the increasing design complexity of
integrated circuits. Early efforts (in the 1980s and early 1990s)
on high-level synthesis were mostly research projects, where
multiple prototype tools were developed to call attention to the
methodology and to experiment with various algorithms. Most
of those tools, however, made rather simplistic assumptions
about the target platform and were not widely used. Early
commercialization efforts in the 1990s and early 2000s
attracted considerable interest among designers, but also failed
to gain wide adoption, due in part to usability issues and poor
quality of results. More recent efforts in high-level synthesis
have improved usability by increasing input language
coverage and platform integration, as well as improving
quality of results.
A. Early Efforts
Since the history of HLS is considerably longer than that of
FPGAs, most early HLS tools targeted ASIC designs. A
pioneering high-level synthesis tool, CMU-DA, was built by
researchers at Carnegie Mellon University in the 1970s
[29][71]. In this tool the design is specified at behavior level
using the ISPS (Instruction Set Processor Specification)
language [4]. It is then translated into an intermediate data-
flow representation called the Value Trace [79] before
producing RTL. Many common code-transformation
techniques in software compilers, including dead-code
elimination, constant propagation, redundant sub-expression
elimination, code motion, and common sub-expression
extraction could be performed. The synthesis engine also
included many steps familiar in hardware synthesis, such as
datapath allocation, module selection, and controller
generation. CMU-DA also supported hierarchical design and
included a simulator of the original ISPS language. Although
many of the methods used were very preliminary, the innovative flow and the design of toolsets in CMU-DA
quickly generated considerable research interest.
In the subsequent years in the 1980s and early 1990s, a
number of similar high-level synthesis tools were built, mostly
for research. Examples of academic efforts include the ADAM
system developed at the University of Southern California
[37][46], HAL developed at Bell-Northern Research [72],
MIMOLA developed at University of Kiel, Germany [62], the
Hercules/Hebe high-level synthesis system (part of the
Olympus system) developed at Stanford University [24][25]
[55], the Hyper/Hyper-LP system developed at University of
California, Berkeley [10][77]. Industry efforts include
Cathedral/Cathedral-II and their successors developed at
IMEC [26], the IBM Yorktown Silicon Compiler [11] and the
GM BSSC system [92], among many others. Like CMU-DA,
these tools typically decompose the synthesis task into a few
steps, including code transformation, module selection,
operation scheduling, datapath allocation, and controller
generation. Many fundamental algorithms addressing these
individual problems were also developed. For example, the list
scheduling algorithm and its variants are widely used to solve
scheduling problems with resource constraints [70]; the force-
directed scheduling algorithm developed in HAL [73] is able
to optimize resource requirements under a performance
constraint; the path-based scheduling algorithm in the
Yorktown Silicon Compiler is useful to optimize performance
with conditional branches [12]. The Sehwa tool in ADAM is
able to generate pipelined implementations and explore the
design space by generating multiple solutions [69]. The
relative scheduling technique developed in Hebe is an elegant
way to handle operations with unbounded delay [56]. Conflict-
graph coloring techniques were developed and used in several
systems to share resources in the datapath [57][72].
These early high-level tools often used custom languages
for design specification. Besides the ISPS language used in
CMU-DA, a few other languages were notable. HardwareC is
a language designed for use in the Hercules system [54].
Based on the popular C programming language, it supports
both procedural and declarative semantics and has built-in
mechanisms to support design constraints and interface
specifications. This is one of the earliest C-based hardware
synthesis languages for high-level synthesis and is interesting
to compare with similar languages later. The Silage language
used in Cathedral/Cathedral-II was specifically designed for
the synthesis of digital signal processing hardware [26]. It has
built-in support for customized data types, and allows easy
transformations [77][10]. The Silage language, along with the
Cathedral-II tool, represented an early domain-specific
approach in high-level synthesis.
These early research projects helped to create a basis for
algorithmic synthesis with many innovations, and some were
even used to produce real chips. However, these efforts did
not lead to wide adoption among designers. A major reason is
that the methodology of using RTL synthesis was not yet
widely accepted at that time and RTL synthesis tools were not
yet mature. Thus, high-level synthesis, built on top of RTL
synthesis, did not have a sound foundation in practice. In
addition, simplistic assumptions were often made in these
early systems—many of them were “technology independent”
(such as Olympus), and inevitably led to suboptimal results.
With improvements in RTL synthesis tools and the wide
adoption of RTL-based design flows in the 1990s, industrial
deployment of high-level synthesis tools became more
practical. Proprietary tools were built in major semiconductor
design houses including IBM [5], Motorola [58], Philips [61],
and Siemens [6]. Major EDA vendors also began to provide
commercial high-level synthesis tools. In 1995, Synopsys
announced Behavioral Compiler [88], which generates RTL
implementations from behavioral HDL code and connects to
downstream tools. Similar tools include Monet from Mentor
Graphics [33] and Visual Architect from Cadence [43]. These
tools received wide attention, but failed to widely replace RTL
design. One reason is due to the use of behavioral HDLs as the
input language, which is not popular among algorithm and system designers.
B. Recent efforts
Since 2000, a new generation of high-level synthesis tools
has been developed in both academia and industry. Unlike
many predecessors, most of these tools focus on using C/C++
or C-like languages to capture design intent. This makes the
tools much more accessible to algorithm and system designers
compared to previous tools that only accept HDL languages. It
also enables hardware and software to be built using a
common model, facilitating software/hardware co-design and
co-verification. The use of C-based languages also makes it
easy to leverage the newest technologies in software compilers
for parallelization and optimization in the synthesis tools.
In fact, there has been an ongoing debate on whether C-
based languages are proper choices for HLS [31][78]. Despite
the many advantages of using C-based languages, opponents
often criticize C/C++ as languages only suitable for describing
sequential software that runs on microprocessors. Specifically,
the deficiencies of C/C++ include the following:
(i) Standard C/C++ lack built-in constructs to explicitly
specify bit accuracy, timing, concurrency, synchronization,
hierarchy, etc., which are critical to hardware design.
(ii) C and C++ have complex language constructs, such as pointers, dynamic memory management, recursion, polymorphism, etc., which do not have efficient hardware counterparts and lead to difficulty in synthesis.
To address these deficiencies, modern C-based HLS tools
have introduced additional language extensions and
restrictions to make C inputs more amenable to hardware
synthesis. Common approaches include both restriction to a
synthesizable subset that discourages or disallows the use of
dynamic constructs (as required by most tools) and
introduction of hardware-oriented language extensions
(HardwareC [54], SpecC [34], Handel-C [95]), libraries
(SystemC [107]), and compiler directives to specify
concurrency, timing, and other constraints. For example,
Handel-C allows the user to specify clock boundaries
explicitly in the source code. Clock edges and events can also
be explicitly specified in SpecC and SystemC. Pragmas and directives along with a subset of ANSI C/C++ are used in
many commercial tools. An advantage of this approach is that
the input program can be compiled using standard C/C++
compilers without change, so that such a program or a module
of it can be easily moved between software and hardware and
co-simulation of hardware and software can be performed
without code rewriting. At present, most commercial HLS
tools use some form of C-based design entry, although tools
using other input languages (e.g., BlueSpec [102], Esterel [30],
Matlab [42], etc.) also exist.
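
As a hedged illustration of this pragma-plus-subset style (the directive spellings below follow later Vivado HLS conventions and are illustrative; each tool defines its own), the function remains plain ANSI C that any software compiler accepts, while hardware concerns ride along as pragmas the HLS tool interprets:

    // Compiles unchanged with gcc/clang for software verification; software
    // compilers ignore the pragmas, the HLS tool reads them as constraints.
    void vadd(const int a[256], const int b[256], int c[256]) {
    #pragma HLS INTERFACE ap_fifo port=a   // stream-style port (illustrative)
    #pragma HLS INTERFACE ap_fifo port=b
    #pragma HLS INTERFACE ap_fifo port=c
        for (int i = 0; i < 256; ++i) {
    #pragma HLS PIPELINE II=1              // request one iteration per cycle
            c[i] = a[i] + b[i];
        }
    }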
Another notable difference between the new generation of
high-level synthesis tools and their predecessors is that many
tools are built targeting implementation on FPGA. FPGAs
have continually improved in capacity and speed in recent
years, and their programmability makes them an attractive
platform for many applications in signal processing,
communication, and high-performance computing. There has
been a strong desire to make FPGA programming easier, and
many high-level synthesis tools are designed to specifically
target FPGAs, including ASC [64], CASH [9], C2H from
Altera [98], DIME-C from Nallatech [112], GAUT [22],
Handel-C compiler (now part of Mentor Graphics DK Design
Suite) [95], Impulse C [74], ROCCC [87][39], SPARK
[41][40], Streams-C compiler [36], and Trident [82][83].
ASIC tools also commonly provide support for targeting an
FPGA tool flow in order to enable system emulation.
Among these high-level synthesis tools, many are designed
to focus on a specific application domain. For example, the
Trident compiler, developed at Los Alamos National Lab, is
an open-source tool focusing on the implementation of
floating-point scientific computing applications on FPGA.
Many tools, including GAUT, Streams-C, ROCCC, ASC, and
Impulse C, target streaming DSP applications. Following the
tradition of Cathedral, these tools implement architectures
consisting of a number of modules connected using FIFO
channels. Such architectures can be integrated either as a
standalone DSP pipeline, or integrated to accelerate code
running on a processor (as in ROCCC).
As of 2010, major commercial C-based high-level synthesis
tools include AutoESL’s AutoPilot [94] (originated from
UCLA xPilot project [17]), Cadence’s C-to-Silicon Compiler
[3][103], Forte’s Cynthesizer [65], Mentor’s Catapult C [7],
NEC’s Cyber Workbench [89][91], and Synopsys Synphony C
[115] (formerly Synfora’s PICO Express, originated from a
long range research effort in HP Labs [49]).
C. Lessons Learned
Despite extensive development efforts, most commercial
HLS efforts have failed. We believe that past failures are due
to one or several of the following reasons:
Lack of comprehensive design language support: The
first generation of the HLS synthesis tools could not
synthesize high-level programming languages. Instead,
untimed or partially timed behavioral HDL was used.
Such design entry marginally raised the abstraction
level, while imposing a steep learning curve on both
software and hardware developers.
Although early C-based HLS technologies have
considerably improved the ease of use and the level of
design abstraction, many C-based tools still have glaring
deficiencies. For instance, C and C++ lack the necessary
constructs and semantics to represent hardware attributes
such as design hierarchy, timing, synchronization, and
explicit concurrency. SystemC, on the other hand, is
ideal for system-level specification with
software/hardware co-design. However, it is foreign to
algorithmic designers and has slow simulation speed
compared to pure ANSI C/C++ descriptions.
Unfortunately, most early HLS solutions commit to only
one of these input languages, restricting their usage to
niche application domains.
Lack of reusable and portable design specification:
Many HLS tools have required users to embed detailed
timing and interface information as well as the synthesis
constraints into the source code. As a result, the
functional specification became highly tool-dependent,
target-dependent, and/or implementation-platform
dependent. Therefore, it could not be easily ported to
alternative implementation targets.
Narrow focus on datapath synthesis: Many HLS tools
focus primarily on datapath synthesis, while leaving
other important aspects unattended, such as interfaces to
other hardware/software modules and platform
integration. Solving the system integration problem then
becomes a critical design bottleneck, limiting the value
in moving to a higher-level design abstraction for IP in a
design.
Lack of satisfactory quality of results (QoR): When
early generations of HLS tools were introduced in the
mid-1990s to early 2000s, the EDA industry was still
struggling with timing closure between logic and
physical designs. There was no dependable RTL to
GDSII foundation to support HLS, which made it
difficult to consistently measure, track, and enhance
HLS results. Highly automated RTL to GDSII solutions
only became available in late 2000s (e.g., provided by
the IC Compiler from Synopsys [114] or the
BlastFusion/Talus from Magma [111]). Moreover, many
HLS tools are weak in optimizing real-life design
metrics. For example, the commonly used algorithms
mainly focus on reducing functional unit count and
latency, which do not necessarily correlate to actual
silicon area, power, and performance. As a result, the
final implementation often fails to meet timing/power
requirements. Another major factor limiting quality of
result was the limited capability of HLS tools to exploit
performance-optimized and power-efficient IP blocks on
a specific platform, such as the versatile DSP blocks and
on-chip memories on modern FPGA platforms. Without
the ability to match the QoR achievable with an RTL
design flow, most designers were unwilling to explore
potential gains in design productivity.
Lack of a compelling reason/event to adopt a new
design methodology: The first-generation HLS tools were clearly ahead of their time, as the design
complexity was still manageable at the register transfer
level in late 1990s. Even as the second-generation of
HLS tools showed interesting capabilities to raise the
level of design abstraction, most designers were
reluctant to take the risk of moving away from the
familiar RTL design methodology to embrace a new
unproven one, despite its potential large benefits. Like
any major transition in the EDA industry, designers
needed a compelling reason or event to push them over
the “tipping point,” i.e., to adopt the HLS design
methodology.
Another important lesson learned is that tradeoffs must be
made in the design of the tool. Although a designer might
wish for a tool that takes any input program and generates the
“best” hardware architecture, this goal is not generally
practical for HLS to achieve. Whereas compilers for
processors tend to focus on local optimizations with the sole
goal of increasing performance, HLS tools must automatically
balance performance and implementation cost using global
optimizations. However, it is critical that these optimizations
be carefully implemented using scalable and predictable
algorithms, keeping tool runtimes acceptable for large
programs and the results understandable by designers.
Moreover, in the inevitable case that the automatic
optimizations are insufficient, there must be a clear path for a
designer to identify further optimization opportunities and
execute them by rewriting the original source code.
Hence, it is important to focus on several design goals for a
high-level synthesis tool:
1. Capture designs at a bit-accurate, algorithmic level in
C code. The code should be readable by algorithm
specialists.
2. Effectively generate efficient parallel architectures
with minimal modification of the C code, for
parallelizable algorithms.
3. Allow an optimization-oriented design process, where
a designer can improve the performance of the
resulting implementation by successive code
modification and refactoring.
4. Generate implementations that are competitive with
synthesizable RTL designs after automatic and manual
optimization.
We believe that the tipping point for transitioning to HLS
methodology is happening now, given the reasons discussed in
Section I and the conclusions by others [14][84]. Moreover,
we are pleased to see that the latest generation of HLS tools
has made significant progress in providing wide language
coverage and robust compilation technology, platform-based
modeling, and advanced core HLS algorithms. We shall
discuss these advancements in more detail in the next few
sections.
III. CASE STUDY OF STATE-OF-ART OF HIGH-LEVEL SYNTHESIS FOR FPGAS
AutoPilot is one of the most recent HLS tools, and is
representative of the capabilities of the state-of-art commercial
HLS tools available today. Figure 1 shows the AutoESL
AutoPilot development flow targeting Xilinx FPGAs.
AutoPilot accepts synthesizable ANSI C, C++, and OSCI
SystemC (based on the synthesizable subset of the IEEE-1666
standard [113]) as input and performs advanced platform-
based code transformations and synthesis optimizations to
generate optimized synthesizable RTL.
AutoPilot outputs RTL in Verilog, VHDL or cycle-accurate
SystemC for simulation and verification. To enable automatic
co-simulation, AutoPilot creates test bench wrappers and
transactors in SystemC so that designers can leverage the
original test framework in C/C++/SystemC to verify the
correctness of the RTL output. These SystemC wrappers
connect high-level interfacing objects in the behavioral test
bench with pin-level signals in RTL. AutoPilot also generates
appropriate simulation scripts for use with 3rd-party RTL
simulators. Thus designers can easily use their existing
simulation environment to verify the generated RTL.
[Figure 1. AutoESL and Xilinx C-to-FPGA design flow: a high-level spec (C/C++/SystemC design plus test bench) enters AutoPilot synthesis, simulation, and module generation; the outputs (RTL in SystemC/VHDL/Verilog, wrappers, synthesis directives, and simulation/implementation scripts) feed the Xilinx ISE/EDK tools, CoreGen, and RTL simulators, producing the FPGA bitstream.]
In addition to generating RTL, AutoPilot also creates
synthesis reports that estimate FPGA resource utilization, as
well as the timing, latency and throughput of the synthesized
design. The reports include a breakdown of performance and
area metrics by individual modules, functions and loops in the
source code. This allows users to quickly identify specific
areas for QoR improvement and then adjust synthesis
directives or refine the source design accordingly.
Finally, the generated HDL files and design constraints feed
into the Xilinx RTL tools for implementation. The Xilinx ISE
tool chain (such as CoreGen, XST, PAR, etc.) and Embedded
Development Kit (EDK) are used to transform that RTL
implementation into a complete FPGA implementation in the
form of a bitstream for programming the target FPGA
platform.

Citations
Journal ArticleDOI
TL;DR: This work uses a first-published methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at performing an in-depth evaluation in terms of performance and the use of resources.
Abstract: High-level synthesis (HLS) is increasingly popular for the design of high-performance and energy-efficient heterogeneous systems, shortening time-to-market and addressing today’s system complexity. HLS allows designers to work at a higher-level of abstraction by using a software program to specify the hardware functionality. Additionally, HLS is particularly interesting for designing field-programmable gate array circuits, where hardware implementations can be easily refined and replaced in the target device. Recent years have seen much activity in the HLS research community, with a plethora of HLS tool offerings, from both industry and academia. All these tools may have different input languages, perform different internal optimizations, and produce results of different quality, even for the very same input description. Hence, it is challenging to compare their performance and understand which is the best for the hardware to be implemented. We present a comprehensive analysis of recent HLS tools, as well as overview the areas of active interest in the HLS research community. We also present a first-published methodology to evaluate different HLS tools. We use our methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at performing an in-depth evaluation in terms of performance and the use of resources.

433 citations


Cites methods from "High-Level Synthesis for FPGAs: Fro..."

  • ...We first introduce the academic HLS tools evaluated in this study, before moving onto highlight features of other HLS tools available in the community (either commercial or academic)....


Proceedings ArticleDOI
22 Feb 2017
TL;DR: The design of a BNN accelerator is presented that is synthesized from C++ to FPGA-targeted Verilog and outperforms existing FPGA-based CNN accelerators in GOPS as well as energy and resource efficiency.
Abstract: Convolutional neural networks (CNN) are the current stateof-the-art for many computer vision tasks. CNNs outperform older methods in accuracy, but require vast amounts of computation and memory. As a result, existing CNN applications are typically run on clusters of CPUs or GPUs. Studies into the FPGA acceleration of CNN workloads has achieved reductions in power and energy consumption. However, large GPUs outperform modern FPGAs in throughput, and the existence of compatible deep learning frameworks give GPUs a significant advantage in programmability. Recent research in machine learning demonstrates the potential of very low precision CNNs -- i.e., CNNs with binarized weights and activations. Such binarized neural networks (BNNs) appear well suited for FPGA implementation, as their dominant computations are bitwise logic operations and their memory requirements are reduced. A combination of low-precision networks and high-level design methodology may help address the performance and productivity gap between FPGAs and GPUs. In this paper, we present the design of a BNN accelerator that is synthesized from C++ to FPGA-targeted Verilog. The accelerator outperforms existing FPGA-based CNN accelerators in GOPS as well as energy and resource efficiency.

379 citations


Cites methods from "High-Level Synthesis for FPGAs: Fro..."

  • ...Our HLS implementation leverages these optimizations, and further propose novel BNN-specific hardware constructs to ensure full throughput and hardware utilization across the different input feature sizes....

    [...]

  • ...We make use of Xilinx SDSoC 2016.1 as the primary design tool, which leverages Vivado HLS and Vivado to perform the actual HLS compilation and FPGA implementation....

    [...]

  • ...It invokes Vivado HLS under the hood to synthesize the “hardware” portion into RTL....

    [...]

  • ...The rest of this paper is organized as follows: Section 2 gives a primer on CNNs and BNNs; Section 3 describes our BNN accelerator design; Section 4 provides some details on our HLS code; Section 5 reports our experimental findings, Section 6 reviews previous work on FPGA-based CNN accelerators; and we conclude the paper in Section 7....

    [...]

  • ...Zhang et al. [27] describe how to optimize an HLS design by reordering and tiling loops, inserting the proper pragmas, and organizing external memory transfers.... (a hedged loop-tiling sketch follows this list)

    [...]
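
To make the last excerpt concrete, here is a minimal sketch of loop tiling combined with a pipeline pragma in Vivado-HLS-style C++ (an illustration of the general technique, not code from [27]; the array names, problem size N, and tile size T are assumptions):

#define N 64   // problem size (assumed for the example)
#define T 8    // tile size (assumed for the example)

// Tiled matrix multiply: the ii/jj loops walk over tiles so that a
// T-by-T block of C stays in on-chip memory, and the innermost loop
// is pipelined to start one multiply-accumulate per cycle.
// C is assumed zero-initialized by the caller.
void matmul_tiled(const int A[N][N], const int B[N][N], int C[N][N]) {
    for (int ii = 0; ii < N; ii += T)
        for (int jj = 0; jj < N; jj += T)
            for (int k = 0; k < N; ++k)
                for (int i = ii; i < ii + T; ++i)
                    for (int j = jj; j < jj + T; ++j) {
#pragma HLS PIPELINE II=1
                        C[i][j] += A[i][k] * B[k][j];
                    }
}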

Journal ArticleDOI
TL;DR: This work proposes a new approximate matrix inversion algorithm relying on a Neumann series expansion, which substantially reduces the complexity of linear data detection in single-carrier frequency-division multiple access (SC-FDMA)-based large-scale MIMO systems.
Abstract: Large-scale (or massive) multiple-input multiple-output (MIMO) is expected to be one of the key technologies in next-generation multi-user cellular systems based on the upcoming 3GPP LTE Release 12 standard, for example. In this work, we propose, to the best of our knowledge, the first VLSI design enabling high-throughput data detection in single-carrier frequency-division multiple access (SC-FDMA)-based large-scale MIMO systems. We propose a new approximate matrix inversion algorithm relying on a Neumann series expansion, which substantially reduces the complexity of linear data detection. We analyze the associated error, and we compare its performance and complexity to those of an exact linear detector. We present corresponding VLSI architectures, which perform exact and approximate soft-output detection for large-scale MIMO systems with various antenna/user configurations. Reference implementation results for a Xilinx Virtex-7 XC7VX980T FPGA show that our designs are able to achieve more than 600 Mb/s for a 128-antenna, 8-user 3GPP LTE-based large-scale MIMO system. We finally provide a performance/complexity trade-off comparison using the presented FPGA designs, which reveals that the detector circuit of choice is determined by the ratio between BS antennas and users, as well as the desired error-rate performance.
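
For reference, the Neumann series expansion behind such approximate inversion takes the following standard form (a common formulation in the massive-MIMO detection literature; the paper's exact matrix split may differ in detail). Writing the matrix to invert as A = D + E, with D its main diagonal:

% Neumann series for the inverse, convergent when the spectral
% radius of D^{-1}E is below one:
\[
  A^{-1} = \sum_{n=0}^{\infty} \left(-D^{-1}E\right)^{n} D^{-1},
  \qquad \rho\left(D^{-1}E\right) < 1 .
\]
% Truncating after K terms gives the low-complexity approximation
% used in place of an exact inverse:
\[
  \widetilde{A}_{K}^{-1} = \sum_{n=0}^{K-1} \left(-D^{-1}E\right)^{n} D^{-1} .
\]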

363 citations

Journal ArticleDOI
TL;DR: The techniques investigated in this paper represent the recent trends in the FPGA-based accelerators of deep learning networks and are expected to direct the future advances on efficient hardware accelerators and to be useful for deep learning researchers.
Abstract: Due to recent advances in digital technologies and the availability of credible data, an area of artificial intelligence, deep learning, has emerged and has demonstrated its ability and effectiveness in solving complex learning problems not possible before. In particular, convolutional neural networks (CNNs) have demonstrated their effectiveness in image detection and recognition applications. However, they require intensive CPU operations and memory bandwidth that make general CPUs fail to achieve the desired performance levels. Consequently, hardware accelerators that use application-specific integrated circuits, field-programmable gate arrays (FPGAs), and graphics processing units have been employed to improve the throughput of CNNs. More precisely, FPGAs have recently been adopted for accelerating the implementation of deep learning networks due to their ability to maximize parallelism and their energy efficiency. In this paper, we review the recent existing techniques for accelerating deep learning networks on FPGAs. We highlight the key features employed by the various techniques for improving the acceleration performance. In addition, we provide recommendations for enhancing the utilization of FPGAs for CNN acceleration. The techniques investigated in this paper represent the recent trends in FPGA-based accelerators for deep learning networks. Thus, this paper is expected to guide future advances in efficient hardware accelerators and to be useful for deep learning researchers.

308 citations

References
Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper presents a scalable, low-power low-density parity-check (LDPC) decoder design for next-generation wireless handset SoCs and proposes two parallel LDPC decoder architectures: a per-layer decoding architecture with scalable parallelism, and a multi-layer pipelined decoding architecture that achieves higher throughput.
Abstract: This paper presents a scalable and low-power low-density parity-check (LDPC) decoder design for the next-generation wireless handset SoC. The methodology is based on high-level synthesis: the PICO (program-in, chip-out) tool was used to produce efficient RTL directly from a sequential untimed C algorithm. We propose two parallel LDPC decoder architectures: (1) a per-layer decoding architecture with scalable parallelism, and (2) a multi-layer pipelined decoding architecture to achieve higher throughput. Based on the PICO technology, we have implemented a two-layer pipelined decoder in a TSMC 65 nm 0.9 V 8-metal-layer CMOS technology with a core area of 1.2 mm². The maximum achievable throughput is 415 Mbps when operating at a 400 MHz clock frequency, and the estimated peak power consumption is 180 mW.
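
As an illustration of what "per-layer decoding" computes, the sketch below models one row-layer update of the widely used layered min-sum schedule (a simplified floating-point C++ model for exposition only; the actual decoder is fixed-point RTL generated from C, and all names here are assumptions):

#include <cmath>
#include <vector>

// One row-layer update in layered min-sum LDPC decoding. L holds the
// posterior LLR per variable node; R holds this check node's stored
// check-to-variable messages from the previous iteration; nbr lists
// the variable nodes attached to this check.
void update_layer(std::vector<double>& L,
                  std::vector<double>& R,
                  const std::vector<int>& nbr) {
    const int d = (int)nbr.size();
    std::vector<double> Q(d);
    double min1 = 1e30, min2 = 1e30, sign_prod = 1.0;
    int min_pos = -1;

    // Peel off last iteration's message; track the two smallest
    // magnitudes and the product of signs.
    for (int k = 0; k < d; ++k) {
        Q[k] = L[nbr[k]] - R[k];
        double mag = std::fabs(Q[k]);
        sign_prod *= (Q[k] < 0) ? -1.0 : 1.0;
        if (mag < min1)      { min2 = min1; min1 = mag; min_pos = k; }
        else if (mag < min2) { min2 = mag; }
    }

    // Min-sum check-node update with immediate LLR write-back, which
    // is what lets the next layer reuse the freshly updated LLRs.
    for (int k = 0; k < d; ++k) {
        double s = sign_prod * ((Q[k] < 0) ? -1.0 : 1.0); // sign excluding k
        R[k] = s * ((k == min_pos) ? min2 : min1);        // min excluding k
        L[nbr[k]] = Q[k] + R[k];
    }
}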

27 citations


"High-Level Synthesis for FPGAs: Fro..." refers methods in this paper

  • ...The path-based scheduling algorithm in the Yorktown Silicon Compiler is useful to optimize performance with conditional branches [12]....

    [...]

Journal ArticleDOI
TL;DR: This paper proposes an IR specifically designed for reconfigurable fabrics: CIRRF (Compiler Intermediate Representation for Reconfigurable Fabrics), and describes the design and initial implementation as part of the ROCCC compiler for translating C code to VHDL.
Abstract: Configurable computing relies on the expression of a computation as a circuit. Its main purpose is the hardware-based acceleration of programs. Configurable computing has received renewed interest with the recent rapid increase in both size and speed of FPGAs. One of the major obstacles in the way of wider adoption of (re)configurable computing is the lack of high-level tools that support the efficient mapping of programs expressed in high-level languages (HLLs) to reconfigurable fabrics. The major difficulty in such a mapping is the translation from a temporal execution model to a spatial execution model. An intermediate representation (IR) is the central structure around which tools such as compilers and synthesis tools are built. In this paper, we propose an IR specifically designed for reconfigurable fabrics: CIRRF (Compiler Intermediate Representation for Reconfigurable Fabrics). We describe the design of CIRRF and its initial implementation as part of the ROCCC compiler for translating C code to VHDL. CIRRF is designed to support the creation of a datapath and the scheduling of operations on it. It provides support for buffers, look-up tables, predication, and pipelining in the datapath. One of the important features of CIRRF, and ROCCC, is its support for the import of pre-designed IP cores into the original C source code, allowing the user to leverage the huge wealth of existing IP cores while programming the configurable platform using an HLL. Using experiments and examples, we show that CIRRF is a solid foundation to generate high-performance hardware.
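
Predication, mentioned above, is what lets a spatial IR replace control flow with pure dataflow. A minimal illustrative sketch (not ROCCC/CIRRF output; the function names are assumptions): both branch sides are evaluated in parallel and a predicate selects the result, which maps to a 2:1 multiplexer in hardware.

// Original C with a branch (temporal execution model):
int branchy(int a, int b, int c) {
    int r;
    if (a > 0) r = b + c;
    else       r = b - c;
    return r;
}

// If-converted, predicated form (spatial execution model): both
// values are computed every cycle and the predicate drives a mux.
int predicated(int a, int b, int c) {
    int p = (a > 0);     // predicate
    int t = b + c;       // "then" value
    int f = b - c;       // "else" value
    return p ? t : f;    // 2:1 select
}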

27 citations


"High-Level Synthesis for FPGAs: Fro..." refers background in this paper

  • ...There has been a strong desire to make FPGA programming easier, and many HLS tools are designed to specifically target FPGAs, including ASC [65], CASH [9], C2H [99], DIME-C [114], GAUT [23], Handel-C Compiler (now part of Mentor Graphics DK Design Suite) [96], Impulse C [75], ROCCC [88], [40], SPARK [41], [42], Streams-C [37], and Trident [83], [84]....

    [...]

Journal ArticleDOI
TL;DR: Matisse is an architectural design tool that increases productivity without sacrificing area, performance, or power and supports the diverse design practices required for commodity IC design by giving the designer fine-grain control of behavioral synthesis tasks.
Abstract: To accelerate industrial adoption of behavioral synthesis, we have developed Matisse, an architectural design tool that increases productivity without sacrificing area, performance, or power. Matisse's main difference from traditional behavioral synthesis tools is that it lets the designer play a key role. It allows the designer to make major decisions about styles, protocols, parallelism, delays, and partial or even complete architectures before the behavioral synthesis phase starts. Then it enables the designer to incorporate these decisions into the architecture using behavioral synthesis. Matisse supports the diverse design practices required for commodity IC design by giving the designer fine-grain control of behavioral synthesis tasks.

27 citations


"High-Level Synthesis for FPGAs: Fro..." refers methods in this paper

  • ...Proprietary tools were built in major semiconductor design houses including IBM [5], Motorola [59], Philips [62], and Siemens [6]....

    [...]

Journal ArticleDOI
TL;DR: The article provides an overview of sequential equivalence checking techniques, their challenges, and their successes in real-world designs.
Abstract: High-level synthesis facilitates the use of formal verification methodologies that check the equivalence of the generated RTL model against the original source specification. The article provides an overview of sequential equivalence checking techniques, their challenges, and their successes in real-world designs.

26 citations


"High-Level Synthesis for FPGAs: Fro..." refers background in this paper

  • ...9, is divided into two subsystems: a processor subsystem and an accelerator subsystem....

    [...]

Proceedings ArticleDOI
02 Nov 2009
TL;DR: An automatic memory partitioning technique which can efficiently improve throughput and reduce the energy consumption of pipelined loop kernels under given throughput constraints and platform requirements, and which can statically compute memory access frequencies in polynomial time with little to no profiling.
Abstract: Hardware acceleration is crucial in modern embedded system design to meet the explosive demands on performance and cost. Selected computation kernels for acceleration are usually captured by nested loops, which are optimized by state-of-the-art techniques like loop tiling and loop pipelining. However, memory bandwidth bottlenecks prevent designs from reaching optimal throughput with respect to the available parallelism. In this paper we present an automatic memory partitioning technique which can efficiently improve throughput and reduce the energy consumption of pipelined loop kernels for given throughput constraints and platform requirements. Our partitioning scheme consists of two steps: the first step uses cycle-accurate scheduling information to meet the hard constraints on memory bandwidth requirements, specifically for synchronized hardware designs. Experimental results show an average 6X throughput improvement on a set of real-world designs with a moderate area increase (about 45% on average), given that fewer resource-sharing opportunities exist at higher throughput in optimized designs. The second step further partitions the memory banks to reduce the dynamic power consumption of the final design. In contrast with previous approaches, our technique can statically compute memory access frequencies in polynomial time with little to no profiling. Experimental results show about 30% power reduction on the same set of benchmarks.
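
In today's HLS flows the same idea is exposed through array-partitioning directives. A minimal sketch in Vivado-HLS-style C++ (an illustration of the general technique, not the paper's own framework; array names and sizes are assumptions): cyclic partitioning across two banks supplies the two reads per cycle that the pipelined loop needs.

// Cyclic partitioning with factor 2 places even indices in bank 0
// and odd indices in bank 1, so a[2*i] and a[2*i+1] can be read in
// the same cycle and the loop pipelines at II=1.
void sum_pairs(const int a_in[1024], int out[512]) {
    int a[1024];
#pragma HLS ARRAY_PARTITION variable=a cyclic factor=2 dim=1
    for (int i = 0; i < 1024; ++i) a[i] = a_in[i];

    for (int i = 0; i < 512; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = a[2 * i] + a[2 * i + 1];  // one read per bank per cycle
    }
}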

26 citations