FPGA-based Implementation of Signal Processing Systems

Abstract
Field programmable gate arrays (FPGAs) are an increasingly popular technology for implementing digital signal processing (DSP) systems. By allowing designers to create circuit architectures tailored to specific applications, high levels of performance can be achieved for many DSP applications, providing considerable improvements over conventional microprocessor and dedicated DSP processor solutions. The book addresses the key issues in this process: specifically, the methods and tools needed for the design, optimization and implementation of DSP systems in programmable FPGA hardware. It presents a review of the leading-edge techniques in this field, analyzing advanced DSP-based design flows for both signal flow graph (SFG)-based and dataflow-based implementation, system-on-chip (SoC) aspects, and future trends and challenges for FPGAs. The automation of techniques for component architectural synthesis, computational models, and the reduction of energy consumption to help improve FPGA performance are covered in detail. Written from a system-level design perspective and with a DSP focus, the authors present many practical application examples of complex DSP implementation, involving: high-performance computing, e.g. matrix operations such as matrix multiplication; high-speed filtering, including finite impulse response (FIR) filters and wave digital filters (WDFs); adaptive filtering, e.g. recursive least squares (RLS) filtering; and transforms such as the fast Fourier transform (FFT). FPGA-based Implementation of Signal Processing Systems is an important reference for practising engineers and researchers working on the design and development of DSP systems for radio, telecommunication, information, audio-visual and security applications. Senior-level electrical and computer engineering graduates taking courses in signal processing or DSP will also find this volume of interest.

Introduction to Field Programmable Gate Arrays
1.1 Introduction
Electronics continues to make an impact in the twenty-first century and has given birth to the computer industry, mobile telephony and the personal digital entertainment and services industries, to name but a few. These markets have been driven by developments in silicon technology as described by Moore's law (Moore 1965), which is represented pictorially in Figure 1.1. This has seen the number of transistors double every 18 months. Moreover, not only has the number of transistors doubled at this rate, but costs have also decreased, thereby reducing the cost per transistor at every technology advance.
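To put the doubling rule in concrete terms, the short C sketch below projects transistor counts per chip under an 18-month doubling period. The 1971 baseline (Intel's 4004, roughly 2,300 transistors) is chosen for illustration only, and the strict 18-month rule overshoots the counts actually achieved at recent technology nodes.

```c
#include <stdio.h>
#include <math.h>

/* Back-of-the-envelope Moore's law projection: transistor count per
 * chip doubles every 18 months. The baseline (Intel 4004, ~2,300
 * transistors in 1971) is illustrative; the simple rule overstates
 * the counts reached at recent technology nodes. */
int main(void) {
    const double base_year = 1971.0;
    const double base_transistors = 2300.0;
    const double doubling_period_years = 1.5; /* 18 months */

    for (int year = 1980; year <= 2020; year += 10) {
        double doublings = (year - base_year) / doubling_period_years;
        double count = base_transistors * pow(2.0, doublings);
        printf("%d: ~%.2e transistors/chip\n", year, count);
    }
    return 0;
}
```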
In the 1970s and 1980s, electronic systems were created by aggregating standard components such as microprocessors and memory chips with digital logic components, e.g. dedicated integrated circuits along with dedicated input/output (I/O) components, on printed circuit boards (PCBs). As levels of integration grew, manufacturing working PCBs became more complex, largely due to greater component complexity in terms of the increase in the number of transistors and I/O pins. In addition, the development of multi-layer boards with as many as 20 separate layers increased the design complexity. Thus, the probability of incorrectly connecting components grew, particularly as the task of successfully designing and testing a working system before production came under ever greater time pressure.
The problem became more challenging as system descriptions evolved during product development. Pressure to create systems to meet evolving standards, or that could change after board construction due to system alterations or changes in the design specification, meant that the concept of having a "fully specified" design, in terms of physical system construction and processor software code development, was becoming increasingly difficult to realize. Whilst the use of programmable processors such as microcontrollers and microprocessors gave the designer some freedom to make alterations in order to correct or modify the system after production, this was limited; changes to the interconnections of the components on the PCB were restricted to the I/O connectivity of the processors themselves. Thus the attraction of programmable interconnection or "glue" logic offered considerable potential, and so the concept of field programmable logic (FPL), specifically field programmable gate array (FPGA) technology, was born.
Figure 1.1 Moore's law: transistor count per chip, 1950–2020, with density rising from around 500 T/mm² on a 4 mm² chip to some 8,400,000 T/mm² (* based on Intel's 22 nm, 5.5 billion transistor 18-core Xeon Haswell-EP processor, 2015)
From this unassuming start, though, FPGAs have grown into a powerful technology for implementing digital signal processing (DSP) systems. This emergence is due to the integration of increasingly complex computational units into the fabric, along with an increasing complexity and number of levels in the memory hierarchy. Coupled with a high level of programmable routing, this provides an impressive heterogeneous platform for improved levels of computing. For the first time, we have seen heterogeneous FPGA-based platforms emerge from Microsoft, Intel and IBM. FPGA technology has had an increasing impact on the creation of DSP systems. Many FPGA-based solutions exist for wireless base station designs, image processing and radar systems; these are, of course, the major focus of this text.
Microsoft has used FPGAs to accelerate its Bing web search engine, demonstrating improved ranking throughput in a production search infrastructure. IBM and Xilinx have worked closely together to show that they can accelerate the reading of data from web servers into databases by applying an accelerated Memcache2, a general-purpose distributed memory caching system used to speed up dynamic database-driven searches (Blott and Vissers 2014). Intel has developed a multicore die incorporating Altera FPGAs, and its recent purchase of the company (Clark 2015) clearly indicates the emergence of FPGAs as a core component in heterogeneous computing, with a clear target of the data center.
1.2 Field Programmable Gate Arrays
The FPGA concept emerged in 1985 with the XC2064™ FPGA family from Xilinx. At the same time, a company called Altera was also developing a programmable device,
later to become the EP1200, which was the first high-density programmable logic device (PLD). Altera's technology was manufactured using 3-μm complementary metal oxide semiconductor (CMOS) electrically programmable read-only memory (EPROM) technology and required ultraviolet light to erase the programming, whereas Xilinx's technology was based on conventional static random access memory (SRAM) technology and required an EPROM to store the programming.
The co-founder of Xilinx, Ross Freeman, argued that with continuously improving silicon technology, transistors were going to become cheaper and cheaper and could be used to offer programmability. This approach allowed system design errors which had only been recognized at a late stage of development to be corrected. By using an FPGA to connect the system components, the interconnectivity of the components could be changed as required by simply reprogramming the device. Whilst this approach introduced additional delays due to the programmable interconnect, it avoided a costly and time-consuming PCB redesign and considerably reduced the design risks.
At this stage, the FPGA market was populated by a number of vendors, including Xilinx, Altera, Actel, Lattice, Crosspoint, Prizm, Plessey, Toshiba, Motorola, Algotronix and IBM. However, the costs of developing technologies not based on conventional integrated circuit design processes, together with the need for programming tools, saw the demise of many of these vendors and a reduction in the number of FPGA families. SRAM has now emerged as the dominant technology, largely because it does not require a specialist process and is therefore cheaper. The market is now dominated by Xilinx and Altera and, more importantly, the FPGA has grown from a simple glue logic component to a complete system on programmable chip (SoPC) comprising on-board physical processors, soft processors, dedicated DSP hardware, memory and high-speed I/O.
The FPGA evolution was neatly described by Steve Trimberger in his FPL2007 plenary talk (see the summary in Table 1.1). The evolution of the FPGA can be divided into three eras. The age of invention was when FPGAs started to emerge and were being used as system components, typically to provide programmable interconnect giving protection against design evolutions and variations. At this stage, design tools were primitive, but designers were quite happy to extract the best performance by dealing with lookup tables (LUTs) or single transistors.
As highlighted above, there was a rationalization of the technologies in the early 1990s, referred to by Trimberger as the great architectural shakedown. The age of expansion was when the FPGA size started to approach the problem size, and thus ease of design became key. This meant that it was no longer sufficient for FPGA vendors to just produce place and route tools; it became critical that hardware description languages (HDLs) and associated synthesis tools were created. The final period of evolution was the age of accumulation, when FPGAs started to incorporate processors and high-speed interconnection. Of course, this is very relevant now and is described in more detail in Chapter 5, where the recent FPGA offerings are reviewed.

Table 1.1 Three ages of FPGAs

1984–1991 (Invention): Technology is limited; FPGAs are much smaller than the application problem size. Design automation is secondary; architecture efficiency is key.
1992–1999 (Expansion): FPGA size approaches the problem size. Ease of design becomes critical.
2000–present (Accumulation): FPGAs are larger than the typical problem size. Logic capacity is limited by I/O bandwidth.
This has meant that the FPGA market has grown from nothing in just over 20 years to become a key player in the IC industry, worth some $3.9 billion in 2014 and expected to be worth around $7.3 billion in 2022 (MarketsandMarkets 2016). It has been driven by the growth in the automotive sector, mobile devices in the consumer electronics sector and the number of data centers.
1.2.1 Rise of Heterogeneous Computing Platforms
Whilst Moore's law is presented here as the cornerstone driving FPGA evolution, and indeed electronics, it has also been the driving force for computing. However, all is not well with computing's reliance on silicon technology. Whilst the number of transistors continues to double, the scaling of clock speed has not continued at the same rate. This is due to the increase in power consumption, particularly the increase in static power. The limited heat dissipation capability of packaging means that computing platform providers such as Intel have limited their processor power to 30 W. This resulted in an adjustment of the prediction for clock rates between 2005 and 2011 (as illustrated in Figure 1.2), as clock rate is a key contributor to power consumption (ITRS 2005).
In 2005, the International Technology Roadmap for Semiconductors (ITRS) predicted that a 100 GHz clock would be achieved in 2020, but this estimate had to be revised, first in 2007 and then again in 2011. Under the original forecast a clock rate of some 30 GHz was expected by 2015, yet in practice speeds have been restricted to 3–4 GHz. This has meant that performance per gigahertz has effectively stalled since 2005, and it has generated interest among major computing companies in exploring different architectures that employ FPGA technology (Putnam et al. 2014; Blott and Vissers 2014).

Figure 1.2 Change in ITRS scaling prediction for clock frequencies: the predicted annual growth rate was revised from 18% (2005) to 8% (2007) and 4% (2011)
1.2.2 Programmability and DSP
On many occasions, the growth indicated by Moore's law has led people to argue that transistors are essentially free and can therefore be exploited, as in the case of programmable hardware, to provide additional flexibility. This could be backed up by the observation that the cost of a transistor has dropped from one-tenth of a cent in the 1980s to one-thousandth of a cent in the 2000s. Thus we have seen the introduction of hardware programmability into electronics in the form of FPGAs.
In order to make a single transistor programmable in an SRAM technology, the programmability is controlled by storing a "1" or a "0" on the gate of the transistor, thereby making it conduct or not. This value is held in an SRAM cell which, at six transistors per cell, means that seven transistors are needed to achieve one programmable equivalent in an FPGA. In reality, the penalty in an overall FPGA implementation is nowhere near as harsh as this, but it has to be taken into consideration in terms of ultimate system cost.
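The same stored-bit principle scales up to the lookup tables (LUTs) mentioned earlier: 2^K configuration bits, each held in an SRAM cell, define an arbitrary K-input logic function. The C sketch below is a minimal behavioral model of this idea; the function and variable names are illustrative, not any vendor's API.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal model of SRAM-based programmability: each configuration
 * bit (held in a ~6-transistor SRAM cell) drives one programmable
 * point, so one programmable "transistor" costs roughly seven
 * transistors in total, as discussed above. A K-input LUT
 * generalizes this: 2^K stored bits select the output. */

/* Evaluate a 2-input LUT: 'config' holds the 4-entry truth table,
 * one bit per input combination, just as the SRAM cells would. */
static int lut2(uint8_t config, int a, int b) {
    int index = (b << 1) | a;        /* inputs act as the address */
    return (config >> index) & 1;    /* stored bit is the output  */
}

int main(void) {
    uint8_t xor_cfg = 0x6;  /* truth table 0110 programs an XOR */
    for (int b = 0; b <= 1; b++)
        for (int a = 0; a <= 1; a++)
            printf("a=%d b=%d -> %d\n", a, b, lut2(xor_cfg, a, b));
    return 0;
}
```

Reprogramming the device amounts to rewriting these configuration bits, which is why an SRAM-based FPGA can be altered after fabrication at no manufacturing cost.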
It is the ability to program the FPGA hardware after fabrication that is the main appeal of the technology; this provides a new level of reassurance in an increasingly competitive market where "right first time" system construction is becoming more difficult to achieve. That assessment appeared to be vindicated in the late 1990s and early 2000s: during a major market downturn, the FPGA market remained fairly constant while other microelectronic technologies were suffering. Of course, the importance of programmability had already been demonstrated by the microprocessor, but the FPGA represented a change in how programmability was delivered.
The argument developed in the previous section presents a clear advantage of FPGA technology in overcoming PCB design errors and manufacturing faults. Whilst this might have been true in the early days of FPGA technology, evolution in silicon technology has moved the FPGA from being a programmable interconnection technology into a system component in its own right. If the microprocessor or microcontroller is viewed as a programmable system component, current FPGA devices must also be viewed in this vein, giving us a different perspective on system implementation.
In electronic system design, the main attraction of the microprocessor is that it considerably lessens the risk of system development. As the hardware is fixed, all of the design effort can be concentrated on developing the code. This situation has been complemented by the development of efficient software compilers, which have largely removed the need for the designer to write assembly language; to some extent, this can even absolve the designer from having a detailed knowledge of the microprocessor architecture (although many practitioners would argue that such knowledge is essential to produce good code). This concept has grown in popularity, and embedded microprocessor courses are now essential parts of any electrical/electronic or computer engineering degree course.
A lot of this process has been down to the software developer’s ability to exploit
an underlying processor architecture, the von Neumann architecture. However, this
advantage has also been the limiting factor in its application to the topic of this text,
namely DSP. In the von Neumann architecture, operations are processed sequentially,
which allows relatively straightforward interpretation of the hardware for programming
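As a concrete illustration of this sequential model, the C sketch below computes a finite impulse response (FIR) filter, one of the DSP kernels covered later in the book, as a plain loop of multiply-accumulate operations. On a von Neumann processor these execute one after another, whereas an FPGA can instantiate all the taps as parallel, pipelined hardware; the coefficients and input data here are illustrative only.

```c
#include <stdio.h>

/* Direct-form FIR filter, y[n] = sum_k h[k] * x[n-k], written as a
 * sequential loop: this is how a von Neumann processor executes it,
 * one multiply-accumulate per step. On an FPGA, all NTAPS
 * multiplications can run concurrently as dedicated hardware. */
#define NTAPS 4

static double fir(const double h[NTAPS], const double x[], int n) {
    double y = 0.0;
    for (int k = 0; k < NTAPS; k++)   /* sequential MACs */
        if (n - k >= 0)               /* treat x[m] = 0 for m < 0 */
            y += h[k] * x[n - k];
    return y;
}

int main(void) {
    const double h[NTAPS] = {0.25, 0.25, 0.25, 0.25}; /* moving average */
    const double x[] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
    for (int n = 0; n < 6; n++)
        printf("y[%d] = %.3f\n", n, fir(h, x, n));
    return 0;
}
```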
