High-Level Synthesis for FPGAs: From Prototyping to Deployment
Summary
- The rapid increase of complexity in System-on-a-Chip (SoC) design has encouraged the design community to seek design abstractions with better productivity than RTL.
- In addition to the line-count reduction in design specifications, behavioral synthesis has the added value of allowing efficient reuse of behavioral IPs.
- The wide availability of SystemC functional models directly drives the need for SystemC-based HLS solutions, which can automatically generate RTL code through a series of formal constructive transformations.
- These pre-defined building blocks can be modeled precisely ahead of time for each FPGA platform and, to a large extent, confine the design space.
- In Sections IV-VIII, using a state-of-the-art HLS tool as an example, the authors discuss some key reasons for the wider adoption of HLS solutions in the FPGA design community, including wide language coverage and robust compilation technology, platform-based modeling, advancement in core HLS algorithms, improvements on simulation and verification flow, and the availability of domain-specific design templates.
II. EVOLUTION OF HIGH-LEVEL SYNTHESIS FOR FPGA
- Compilers for high-level languages have been successful in practice since the 1950s.
- The idea of automatically generating circuit implementations from high-level behavioral specifications arises naturally with the increasing design complexity of integrated circuits.
- Most of those tools, however, made rather simplistic assumptions about the target platform and were not widely used.
- Early commercialization efforts in the 1990s and early 2000s attracted considerable interest among designers, but also failed to gain wide adoption, due in part to usability issues and poor quality of results.
- More recent efforts in high-level synthesis have improved usability by increasing input language coverage and platform integration, as well as improving quality of results.
A. Early Efforts
- Since the history of HLS is considerably longer than that of FPGAs, most early HLS tools targeted ASIC designs.
- In the subsequent years in the 1980s and early 1990s, a number of similar high-level synthesis tools were built, mostly for research.
- The list scheduling algorithm and its variants are widely used to solve scheduling problems with resource constraints; the force-directed scheduling algorithm developed in HAL is able to optimize resource requirements under a performance constraint; the path-based scheduling algorithm in the Yorktown Silicon Compiler is useful for optimizing performance with conditional branches.
- The Silage language, along with the Cathedral-II tool, represented an early domain-specific approach in high-level synthesis.
- These tools received wide attention, but failed to widely replace RTL design.
B. Recent efforts
- Since 2000, a new generation of high-level synthesis tools has been developed in both academia and industry.
- The use of C-based languages also makes it easy to leverage the newest technologies in software compilers for parallelization and optimization in the synthesis tools.
- C and C++ have complex language constructs, such as pointers, dynamic memory management, recursion, and polymorphism, which do not have efficient hardware counterparts and lead to difficulty in synthesis.
- Handel-C allows the user to specify clock boundaries explicitly in the source code.
- FPGAs have continually improved in capacity and speed in recent years, and their programmability makes them an attractive platform for many applications in signal processing, communication, and high-performance computing.
C. Lessons Learned
- The authors believe that past failures are due to one or several of the following reasons:
- The first generation of the HLS synthesis tools could not synthesize high-level programming languages.
- Instead, untimed or partially timed behavioral HDL was used.
- C and C++ lack the necessary constructs and semantics to represent hardware attributes such as design hierarchy, timing, synchronization, and explicit concurrency.
Lack of reusable and portable design specification:
- Many HLS tools have required users to embed detailed timing and interface information as well as the synthesis constraints into the source code.
Lack of satisfactory quality of results (QoR):
- There was no dependable RTL to GDSII foundation to support HLS, which made it difficult to consistently measure, track, and enhance HLS results.
- As a result, the final implementation often fails to meet timing/power requirements.
- Another major factor limiting quality of result was the limited capability of HLS tools to exploit performance-optimized and power-efficient IP blocks on a specific platform, such as the versatile DSP blocks and on-chip memories on modern FPGA platforms.
Lack of a compelling reason/event to adopt a new design methodology:
- The first-generation HLS tools were clearly ahead of their time, as the design complexity was still manageable at the register transfer level in late 1990s.
- Like any major transition in the EDA industry, designers needed a compelling reason or event to push them over the "tipping point," i.e., to adopt the HLS design methodology.
- This goal is not generally practical for HLS to achieve.
- It is critical that these optimizations be carefully implemented using scalable and predictable algorithms, keeping tool runtimes acceptable for large programs and the results understandable by designers.
- The code should be readable by algorithm specialists.
2. Effectively generate efficient parallel architectures, with minimal modification of the C code, for parallelizable algorithms.
- Allow an optimization-oriented design process, where a designer can improve the performance of the resulting implementation by successive code modification and refactoring.
- Generate implementations that are competitive with synthesizable RTL designs after automatic and manual optimization.
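The "minimal modification" goal above can be made concrete with a small sketch: a compute loop written in plain C++ where the only hardware-specific change is a single tool directive. The pragma below follows the AutoPilot/Vivado HLS directive style (the exact syntax is an assumption here); an ordinary software compiler simply ignores the unknown pragma, so the same source remains a valid software model.

```cpp
// A dot-product loop in ordinary C++. For an HLS tool, the only change
// needed to request a pipelined architecture is a directive such as the
// one below (AutoPilot/Vivado HLS-style syntax, assumed for illustration);
// a software compiler ignores the unknown pragma and the code still runs.
int dot8(const int a[8], const int b[8]) {
    int acc = 0;
    for (int i = 0; i < 8; ++i) {
#pragma HLS pipeline II=1
        acc += a[i] * b[i];
    }
    return acc;
}
```
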
- Moreover, the authors are pleased to see that the latest generation of HLS tools has made significant progress in providing wide language coverage and robust compilation technology, platform-based modeling, and advanced core HLS algorithms.
- The authors shall discuss these advancements in more detail in the next few sections.
III. CASE STUDY OF STATE-OF-THE-ART HIGH-LEVEL SYNTHESIS FOR FPGAS
- AutoPilot is one of the most recent HLS tools, and is representative of the capabilities of the state-of-the-art commercial HLS tools available today.
- AutoPilot outputs RTL in Verilog, VHDL or cycle-accurate SystemC for simulation and verification.
- These SystemC wrappers connect high-level interfacing objects in the behavioral test bench with pin-level signals in RTL.
- The reports include a breakdown of performance and area metrics by individual modules, functions and loops in the source code.
- Finally, the generated HDL files and design constraints feed into the Xilinx RTL tools for implementation.
Improved design quality:
- Comprehensive language support allows designers to take full advantage of rich C/C++ constructs to maximize simulation speed, design modularity and reusability, as well as synthesis QoR.
- In fact, many early C-based synthesis tools handled only a very limited language subset, which typically included the native integer data types (e.g., char, short, int, etc.), one-dimensional arrays, if-then-else conditionals, and for loops.
- The arbitrary-precision fixed-point (ap_fixed) data types support all common algorithmic operations.
- Designers can explore the accuracy and cost tradeoff by modifying the resolution and fixed-point location and experimenting with various quantization and saturation modes.
- AutoPilot also supports the OSCI synthesizable subset for SystemC synthesis.
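The accuracy/cost exploration described above can be sketched with a simplified stand-in for an arbitrary-precision fixed-point type. The class below only mimics the general behavior of an ap_fixed-style type (W total bits, I integer bits, truncation and wrap-around); it is an illustrative model, not the actual library, which also offers configurable rounding and saturation modes.

```cpp
#include <cstdint>
#include <cmath>

// Simplified stand-in for an ap_fixed<W, I>-style type: W total bits,
// I integer bits, truncation on quantization, wrap-around on overflow.
// (Real HLS fixed-point libraries also provide rounding/saturation modes.)
template <int W, int I>
struct Fixed {
    static_assert(W <= 32 && I <= W, "unsupported width");
    static constexpr int F = W - I;     // fractional bits
    int32_t raw;                        // value stored in W-bit two's complement

    explicit Fixed(double v) {
        int64_t q = static_cast<int64_t>(std::floor(v * (1LL << F))); // truncate
        q &= (1LL << W) - 1;                                          // wrap to W bits
        if (q & (1LL << (W - 1))) q -= (1LL << W);                    // sign-extend
        raw = static_cast<int32_t>(q);
    }
    double to_double() const { return raw / double(1LL << F); }
};
```

Changing W and I in one place moves the accuracy/cost point, which is exactly the kind of experiment the text describes.
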
B. Use of state-of-the-art compiler technologies
- AutoPilot tightly integrates the LLVM compiler infrastructure to leverage leading-edge compiler technologies.
- AutoPilot uses the llvm-gcc front end to obtain an intermediate representation (IR) based on the LLVM instruction set.
- In particular, the following classes of transformations and analyses have proven very useful for hardware synthesis: SSA-based code optimizations such as constant propagation, dead code elimination, and redundant code elimination based on global value numbering.
- Memory optimizations such as memory reuse, array scalarization, and array partitioning to reduce the number of memory accesses and improve memory bandwidth.
- In other words, the code can be optimized without considering the source language.
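The array partitioning mentioned above can be illustrated by applying it by hand: splitting one array cyclically across banks so that several elements are available in the same cycle from single-ported memories. The sketch below shows the transformation in software form; an HLS memory optimizer performs the equivalent rewrite automatically, and the names here are illustrative.

```cpp
#include <vector>
#include <cstddef>

// Hand-applied 2-way cyclic partitioning of an array: bank b holds
// elements a[b], a[b+2], a[b+4], ... . After this rewrite, two elements
// with different parity indices can be fetched in the same cycle from
// two separate single-ported memories.
constexpr int NB = 2;

struct Partitioned {
    std::vector<int> bank[NB];
    explicit Partitioned(const std::vector<int>& a) {
        for (std::size_t i = 0; i < a.size(); ++i)
            bank[i % NB].push_back(a[i]);       // cyclic distribution
    }
    int get(std::size_t i) const { return bank[i % NB][i / NB]; }
};
```
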
A. Platform modeling for Xilinx FPGAs
- AutoPilot uses detailed target platform information to carry out informed and target-specific synthesis and optimization.
- The resulting characterization data is then used to make implementation choices during synthesis.
- Notably, the cost of implementing hardware on FPGAs is often different from that for ASIC technology.
- On FPGAs, multiplexors typically have the same cost and delay as an adder (approximately one LUT/output).
- FPGA technology also features heterogeneous on-chip resources, including not only LUTs and flip flops but also other prefabricated architecture blocks such as DSP48s and Block RAMs.
B. Integration with Xilinx toolset
- In order to raise the level of design abstraction more completely, AutoPilot attempts to hide details of the downstream RTL flow from users as much as possible.
- Otherwise, a user may be overwhelmed by the details of vendor-specific tools such as the formats of constraint and configuration files, implementation and optimization options, or directory structure requirements.
- As shown in Figure 1, AutoPilot instantiates these interfaces along with adapter logic and appropriate EDK meta-information so that a generated module can be quickly connected into an EDK system.
A. Efficient mathematical programming formulations for scheduling
- Classical approaches to the scheduling problem in high-level synthesis use either conventional heuristics such as list scheduling and force-directed scheduling, which often lead to sub-optimal solutions due to the nature of local optimization methods, or exact formulations such as integer-linear programming, which can be difficult to scale to large designs.
- Unlike previous approaches, which use O(m×n) binary variables to encode a scheduling solution with n operations and m steps, SDC uses a continuous representation of time with only O(n) variables: for each operation i, a scheduling variable s_i is introduced to represent the time step at which the operation is scheduled.
- A linear program with a totally unimodular constraint matrix is guaranteed to have integral solutions.
- Many commonly encountered constraints in high-level synthesis can be expressed in the form of integer-difference constraints.
- Other complex constraints can be handled in similar ways, using approximations or other heuristics.
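As a minimal sketch of why the difference-constraint form is attractive (this is not AutoPilot's implementation): when every constraint has the shape s_j - s_i >= d, an ASAP schedule can be computed by simple longest-path relaxation, and integer solutions come for free, consistent with the total-unimodularity argument above.

```cpp
#include <vector>
#include <algorithm>

struct Diff { int i, j, d; };   // integer-difference constraint: s[j] - s[i] >= d

// ASAP schedule for a system of integer-difference constraints via
// Bellman-Ford-style longest-path relaxation from time 0 (an acyclic
// constraint system is assumed). Because the constraint matrix of the
// corresponding LP is totally unimodular, this integral solution is
// also an optimum of the LP relaxation.
std::vector<int> asap(int n, const std::vector<Diff>& cs) {
    std::vector<int> s(n, 0);
    for (int pass = 0; pass < n; ++pass)
        for (const Diff& c : cs)
            s[c.j] = std::max(s[c.j], s[c.i] + c.d);
    return s;
}
```

For example, a two-cycle operation 0 feeding operation 1, with both feeding operation 2, yields start times 0, 2, and 3.
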
B. Soft constraints and applications for platform-based optimization
- In a typical synthesis tool, design intentions are often expressed as constraints.
- While some of these constraints are essential for the design to function correctly, many others are not.
- It is possible that a solution with a slight nominal timing violation can still meet the frequency requirement, considering inaccuracy in interconnect delay estimation and various timing optimization procedures in later design stages, such as logic refactoring, retiming, and interconnect optimization.
- The approach is based on the SDC formulation discussed in the preceding subsection, but allows some constraints to be violated.
- Consider the scheduling problem with both hard constraints and soft constraints formulated as follows.
Gs ≤ p (hard constraints)
Hs ≤ q (soft constraints)
- Here G and H correspond to the matrices representing hard constraints and soft constraints, respectively; both are totally unimodular.
- Hard constraints and soft constraints are generated based on the functional specification and QoR targets.
- This approach offers a powerful yet flexible framework to address various considerations in scheduling.
- Take the DSP48E block in Xilinx Virtex 5 FPGAs for example: each of the DSP48E blocks contains a multiplier and a post-adder, allowing efficient implementations of multiplication and multiply-accumulation.
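A C model of the multiply-accumulate pattern mentioned above may help: when the multiply and the post-add appear as one expression, a platform-aware binder can map the pair onto a single DSP48E (multiplier plus post-adder) rather than separate LUT logic. The function below is only an illustrative behavioral model of that pattern, not tool output.

```cpp
#include <cstdint>

// Multiply-accumulate written as a single expression: the multiplier and
// the post-adder form one pattern that platform-aware binding can map
// onto one DSP48E block instead of discrete LUT-based logic.
int64_t mac(int64_t acc, int32_t a, int32_t b) {
    return acc + static_cast<int64_t>(a) * b;   // widen before multiply
}
```
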
C. Pattern mining for efficient sharing
- A typical target architecture for HLS may introduce multiplexers when functional units, storage units or interconnects are shared by multiple operations/variables in a time-multiplexed manner.
- Multiplexers (especially large ones) can be particularly expensive on FPGA platforms.
- Thus, careless decisions on resource sharing could introduce more overhead than benefit.
- The method tries to extract common structures or patterns in the data-flow graph, so that different instances of the same pattern can share resources with little overhead.
- Pruning techniques are proposed based on characteristic vectors and locality-sensitive hashing.
D. Memory analysis and optimizations
- While application-specific computation platforms such as FPGAs typically have considerable computational capability, their performance is often limited by available communication or memory bandwidth.
- Typical FPGAs, such as the Xilinx Virtex series, have a considerable number of block RAMs.
- Consider a loop that accesses array A with subscripts i, 2×i+1, and 3×i+1, in the ith iteration.
- If the loop is targeted to be pipelined with an initiation interval of one, i.e., a new loop iteration starts every clock cycle, the schedule in (b) will lead to port conflicts, because (i+1) mod 2 = (2×(i+1)+1) mod 2 = (3×i+1) mod 2 when i is even; this will lead to three simultaneous accesses to the first bank.
- Then, an iterative algorithm is used to perform both scheduling and memory partitioning guided by the conflict graph.
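The port-conflict claim above can be checked directly. Under the mod-2 banking described in the text, the pipelined schedule overlaps iteration i's access A[3i+1] with iteration i+1's accesses A[i+1] and A[2(i+1)+1]; a small helper (names are illustrative) confirms all three land in the same bank whenever i is even.

```cpp
// Bank of an address under 2-way cyclic partitioning (bank = addr mod 2).
int bank_of(long addr) { return static_cast<int>(addr % 2); }

// True when the three overlapped references of the pipelined schedule,
// A[i+1], A[2*(i+1)+1], and A[3*i+1], hit the same bank in iteration i.
bool three_way_conflict(long i) {
    int b0 = bank_of(i + 1);
    int b1 = bank_of(2 * (i + 1) + 1);
    int b2 = bank_of(3 * i + 1);
    return b0 == b1 && b1 == b2;
}
```

For even i all three addresses are odd, so they collide in bank 1; for odd i they do not.
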
VII. ADVANCES IN SIMULATION AND VERIFICATION
- Besides the many advantages of automated synthesis, such as quick design space exploration and automatic complex architectural changes like pipelining, resource sharing and scheduling, HLS also enables a more efficient debugging and verification flow at the higher abstraction levels.
- Since HLS provides an automatic path to implementable RTL from behavioral/functional models, designers do not have to wait for manual RTL models to become available before conducting verification.
- Instead, they can develop, debug and functionally verify a design at an earlier stage with high-level programming languages and tools.
- This can significantly reduce the verification effort due to the following reasons: (i) It is easier to trace, identify and fix bugs at higher abstraction levels with more compact and readable design descriptions.
- (ii) Simulation at the higher level is typically orders of magnitude faster than RTL simulation, allowing more comprehensive tests and greater coverage.
A. Automatic co-simulation
- At present, simulation is still the prevalent technique for checking whether the resulting RTL complies with the high-level specification.
- To reduce effort spent on RTL simulation, the latest HLS technologies have made important improvements in automatic co-simulation.
- A C-to-RTL transactor is created to connect high-level interfacing constructs (such as parameters and global variables) with pin-level signals in RTL.
- This wrapper also includes additional control logic to manage the communication between the testing module and the RTL design under test (DUT).
- A pipelined design may require that the test bench feed input data into the DUT at a fixed rate.
- Ultimately, the time-to-market of an FPGA system design depends on many factors, such as the availability of reference designs, development boards, and the FPGA devices themselves.
- This integration often includes a wide variety of system-level design concerns, including embedded software, system integration, and verification.
- As a result, these cores are not easily amenable to high-level synthesis and form part of the system infrastructure of a design.
- The processor subsystem (PSS) is responsible for executing the relatively low-performance processing in the system.
- The portion of a design generated using HLS represents the bulk of the FPGA design and communicates with the system infrastructure through standardized wire-level interfaces, such as the AXI4 memory-mapped and streaming interfaces shown in Figure 7.
A. High-level design of cognitive radios project
- Cognitive radio systems typically contain both computationally intensive processing with high data rates in the radio processing, along with complex, but relatively lowrate processing to control the radio processing.
- Efficient interaction with the processor is an important part of the overall system complexity.
- The processor subsystem contains standard hardware modules and is capable of running a standard embedded operating system, such as Linux.
- The accelerator subsystem is used for implementing components with high computational requirements in hardware.
- Components also expose a configuration interface with multiple parameters, allowing them to be reconfigured in an executing system by user-defined control code executing in the processor subsystem.
B. Video Starter Kit
- Video processing systems implemented in FPGA include a wide variety of applications from embedded computer-vision and picture quality improvement to image and video compression.
- Typically these systems include two significant pieces of complexity.
- This platform is derived from the Xilinx EDK-based reference designs provided with the Xilinx Spartan-3A DSP Video Starter Kit and has been ported to several Xilinx Virtex 5 and Spartan 6 based development boards, targeting high-definition (HD) video processing with pixel clocks up to 150 MHz.
- The incoming video data is analyzed by the Frame Decoder block to determine the frame size of the incoming video, which is passed to the application block, enabling different video formats to be processed.
- The interface to external memory used for frame buffers is implemented using the Xilinx Multi-ported Memory Controller (MPMC)  which provides access to external memory to the Application Block and to the Microblaze control processor, if necessary.
A. Summary of BDTI HLS Certification
- Xilinx has worked with BDTI Inc. to implement an HLS Tool Certification Program.
- This program was designed to compare the results of an HLS tool targeting the Xilinx Spartan 3 FPGA that is part of the Video Starter Kit with the results of a conventional DSP processor and of a good manual RTL implementation.
- Two applications were used in this Certification Program: an optical flow algorithm, which is characteristic of a demanding image processing application, and a wireless application for which a very representative RTL implementation was available.
- The DSP processor implementation rated "fair", while the AutoPilot implementation rated "good", indicating that less source code modification was necessary to achieve high performance when using AutoPilot.
- BDTI also assessed overall ease of use of the DSP tool flow and the FPGA tool flow, combining HLS with the low-level implementation tools.
B. Sphere Decoder
- Xilinx has implemented a sphere decoder for a multi-input multi-output (MIMO) wireless communication system using AutoPilot.
- The application exhibits a large amount of parallelism, since the operations must be executed on each of 360 independent subcarriers which form the overall communication channel and the processing for each channel can generally be pipelined.
- The resulting HLS code for the application makes heavy use of C++ templates to describe arbitrary-precision integer data types and parameterized code blocks used to process different matrix sizes at different points in the application.
- Both designs were implemented as standalone cores using ISE 12.1, targeting Xilinx Virtex 5 speed grade 2 at 225 MHz.
- Using AutoPilot Version 2010.07.ft, the authors were able to generate a design that was smaller than the reference implementation, in less time than the hand RTL implementation took, by refactoring and optimizing the algorithmic C model.
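The template-heavy coding style described above can be sketched as follows; the function here is an illustrative stand-in, not code from the sphere decoder. A single templated routine is parameterized over the matrix size, so each instantiation can be synthesized as its own appropriately sized hardware block while sharing one source description.

```cpp
#include <array>

// Illustrative parameterized block: one templated routine covers the
// different matrix sizes used at different points in a design, so each
// instantiation (trace<2>, trace<4>, ...) can become its own sized
// hardware block from a single source description.
template <int N>
int trace(const std::array<std::array<int, N>, N>& m) {
    int t = 0;
    for (int i = 0; i < N; ++i)
        t += m[i][i];       // sum of diagonal elements
    return t;
}
```

In the same spirit, the actual design also uses templated arbitrary-precision integer types, per the text.
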
Toplevel Block Diagram
- Design time for the RTL design was estimated from work logs by the original authors, and includes only the time for an algorithm expert and experienced tool user to enter and verify the RTL architecture in System Generator.
- Given the significant time spent familiarizing themselves with the application and the structure of the code, the authors believe that an application expert familiar with the code would be able to create such a design at least twice as fast.
- To meet the required throughput, one row of the systolic array is instantiated, consisting of one diagonal cell and 8 off-diagonal cells, and the remaining rows are time multiplexed over the single row.
- In the 4x4 case, the off-diagonal cell implements fine-grained resource sharing, with one resource-shared complex multiplier.
- The authors do observe that AutoPilot uses additional BRAM to implement this block relative to the RTL implementation, because AutoPilot requires tool-implemented double-buffers to only be read or written in a single loop.
X. CONCLUSIONS AND CHALLENGES AHEAD
- It seems clear that the latest generation of FPGA HLS tools has made significant progress in providing wide language coverage, robust compilation technology, platform-based modeling, and domain-specific system-level integration.
- As a result, they can quickly provide highly competitive quality of results, in many cases comparable or better than manual RTL designs.
- For the FPGA design community, it appears that HLS technology may be transitioning from research and investigation to selected deployment.
- The authors also see many opportunities for HLS tools to further improve.