Journal Article

Fast algorithms for digital signal processing

TL;DR
This new textbook by R. E. Blahut contains perhaps the most comprehensive coverage of fast algorithms to date, with an emphasis on implementing the two canonical signal processing operations of convolution and discrete Fourier transformation.
Abstract
This new textbook by R. E. Blahut, which deals with the theory and design of efficient algorithms for digital signal processing, contains perhaps the most comprehensive coverage of fast algorithms to date. A large collection of algorithms is treated, with an emphasis on implementing the two canonical signal processing operations of convolution and discrete Fourier transformation. In recent years, much work has been done on fast algorithms, and Blahut does a fine job of blending material from diverse sources to form a coherent and self-contained approach to his subject.

The mathematical level of this book is high, reflecting the rather abstract nature of the theoretical underpinnings of fast computational techniques. Although electrical engineers are for the most part mathematically sophisticated, they tend to lack training in abstract algebra and number theory, both of which are essential to any thorough discussion of fast algorithms. This audience should therefore find the tutorial chapters which the text provides on these topics quite helpful. An additional feature of the text, which the nonspecialist should find useful, is that each new algorithm is described through three different formats: a simple example, a flowchart, and a set of matrix equations. This use of repetition assists the reader in grasping subject matter which for the most part is nonintuitive.

Operation counts (as measured by the number of multiplications and the number of additions) for each algorithm are tabulated for a variety of block lengths (i.e., lengths of data segments), making performance comparisons easy. As the author points out, run-time comparisons may be quite different. Each chapter concludes with a set of problems of varying difficulty. These problems are well integrated with the text and serve to supplement the many examples worked out in the text. The book is devoted to how one rapidly computes various mathematical operators such as transforms and convolutions.
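As a rough illustration of why such multiplication counts matter (a sketch of this reviewer's own, not an example from the book), one can compare the multiplications needed for a length-n cyclic convolution computed directly against a simple radix-2 FFT-based cost model:

```python
import math

def direct_mults(n):
    # naive cyclic convolution: n output samples, each a sum of n products
    return n * n

def fft_mults(n):
    # crude model: a radix-2 FFT of length n costs (n/2)*log2(n) complex
    # multiplications; the convolution needs two forward transforms, one
    # inverse transform, and n pointwise products
    assert n & (n - 1) == 0, "model assumes n is a power of two"
    return 3 * (n // 2) * int(math.log2(n)) + n

for n in (8, 64, 1024):
    print(n, direct_mults(n), fft_mults(n))
```

Even this coarse model shows the direct count growing quadratically while the transform-based count grows only as n log n, which is the gap the book's tabulated counts quantify precisely.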
For a deeper understanding of the meaning of these operators, one must consult other sources in which their use is discussed. The text emphasizes algorithms which employ a reduced, or minimum, number of multiplications, although addition counts are also taken into consideration. However, an algorithm which is the "fastest" as measured in arithmetic operation counts may not be the fastest in execution time, particularly if dedicated hardware is employed. Indeed, in practice other considerations frequently propel one away from the computationally "optimal" algorithm. Much work has been done on the theory and application of signal processing algorithms which are "efficient" in terms other than multiply/add counts, such as roundoff noise, limit cycles, coefficient quantization, memory access, hardware costs, etc. It is clearly necessary to limit the scope of any treatise, and the exclusion of differing performance measures is certainly appropriate.

A description of the contents of the book will now be given, followed by some concluding remarks of a more general nature. Chapter 2 is a tutorial on abstract algebra. It is quite readable and is liberally laced with examples. In addition to the standard modern algebra fare (groups, rings, fields, vector spaces, matrices), the ubiquitous Chinese remainder theorem is discussed in detail.

Chapters 3 and 4, and their extensions in Chapters 7 and 8, form the core of the text. The third chapter addresses fast algorithms for short convolutions. The Cook-Toom convolution algorithm is discussed, followed by the Winograd convolution algorithm. A proof of the optimality of the Winograd algorithm, with respect to multiplications, for performing cyclic convolutions is presented at the close of the chapter. The fourth chapter addresses fast algorithms for computing the discrete Fourier transform. The Cooley-Tukey algorithm is considered first.
The approach taken is to view this algorithm as a means of mapping a one-dimensional Fourier transform into a multidimensional transform. Variations of the algorithm are discussed, including the Rader-Brenner algorithm. Next, the Good-Thomas algorithm is discussed. This algorithm is again presented as a means of mapping a one-dimensional transform into a higher dimensional transform, this time based on the Chinese remainder theorem. Rader's algorithm for computing prime-length Fourier transforms by use of convolution is presented next. Extensions of the algorithm to block lengths which are a power of an odd prime are considered. The chapter closes with the Winograd Fourier transform algorithm, which builds upon the Rader prime algorithm. Certain short block lengths are considered in detail, and the corresponding algorithms are compiled into an appendix.

Chapter 5 is a mathematical interlude, tutorially covering items from number theory and algebraic field theory which are needed in later chapters. Topics include the totient function, Euler's theorem, Fermat's theorem, minimal polynomials, and cyclotomic polynomials. Chapter 6 is devoted to number theoretic transforms. These transforms proceed by representing the data values themselves in the field of integers modulo a prime. Convolution in integer fields is also covered.

Chapters 7 and 8 extend the convolution and transform methods of Chapters 3 and 4 to higher dimensions. Multidimensional transforms (convolutions) are used both to efficiently compute one-dimensional transforms (convolutions) and to process data which are inherently higher dimensional. Both applications are treated in these chapters. Topics include the Agarwal-Cooley convolution algorithm, polynomial transforms, the family of Johnson-Burrus transforms, and the Nussbaumer-Quandalle FFT. Chapter 9 discusses architectures for transforms and digital filters and includes treatments of FFT butterfly networks and overlap-add convolution.
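The Chinese-remainder index mapping that underlies such one-to-multidimensional reformulations can be sketched in a few lines (an illustrative fragment of this review, not code from the book): for n = n1·n2 with gcd(n1, n2) = 1, each index k pairs bijectively with its residues (k mod n1, k mod n2), so a length-n transform can be re-indexed as an n1 × n2 array.

```python
from math import gcd

# illustrative choice of coprime factor lengths
n1, n2 = 3, 5
assert gcd(n1, n2) == 1
n = n1 * n2

# CRT re-indexing: each 1-D index k becomes a 2-D index (k mod n1, k mod n2)
mapping = {k: (k % n1, k % n2) for k in range(n)}

# the mapping is a bijection onto the n1 x n2 index grid, so no index is lost
assert len(set(mapping.values())) == n
print(mapping[7])  # index 7 -> (1, 2)
```

The bijectivity is exactly what lets a length-15 transform be organized as a 3 × 5 two-dimensional transform with no index collisions.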
The remaining three chapters are mostly independent of the rest of the book. Chapter 10 covers fast algorithms based on doubling strategies. Computational tasks for which such fast algorithms are derived include sorting, matrix transposition, matrix multiplication, polynomial division, computation of trigonometric functions, and coordinate rotation. Many of these operations arise as steps in the solution of one or more signal processing problems. Fast algorithms for solving Toeplitz systems are the theme of Chapter 11. A variety of fast algorithms is discussed, the proper choice of which depends on the specific structure of the Toeplitz system at hand (such as whether or not the system is symmetric and whether or not the right-hand vector is arbitrary). The final chapter addresses fast algorithms for trellis and tree search and includes the Viterbi, stack, and Fano algorithms.
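As a minimal instance of the doubling idea (again an illustration, not one of the book's algorithms), repeated squaring computes x^n with O(log n) multiplications instead of the n - 1 needed naively:

```python
def power_by_doubling(x, n):
    # square-and-multiply: each squaring doubles the exponent covered by
    # `base`, so the loop runs only O(log n) times
    result, base = 1, x
    while n:
        if n & 1:          # include this power of two if the bit is set
            result *= base
        base *= base       # base now represents x**(2**i)
        n >>= 1
    return result

print(power_by_doubling(3, 13))  # 3**13 = 1594323
```

The doubling algorithms in Chapter 10 apply the same recursive halving principle to more structured tasks such as matrix transposition and polynomial division.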


Citations
Journal Article

Effective erasure codes for reliable computer communication protocols

TL;DR: A very basic description of erasure codes is provided, an implementation of a simple but very flexible erasure code to be used in network protocols is described, and its performance and possible applications are discussed.
Posted Content

Wavelet transforms versus Fourier transforms

TL;DR: The wavelet transform maps each $f(x)$ to its coefficients with respect to an orthogonal basis of piecewise constant functions, constructed by dilation and translation.
Journal Article

Low complexity bit parallel architectures for polynomial basis multiplication over GF(2^m)

TL;DR: A new formulation for polynomial basis multiplication in terms of the reduction matrix Q is derived and a generalized architecture for the multiplier is developed and the time and gate complexities of the proposed multiplier are analyzed as a function of degree m and the Reduction matrix Q.
Book

The Theory of Linear Prediction

TL;DR: The text is self-contained for readers with introductory exposure to signal processing, random processes, and the theory of matrices, and a historical perspective and detailed outline are given in the first chapter.
Journal Article

Challenges in Indoor Global Navigation Satellite Systems: Unveiling its core features in signal processing

TL;DR: The science and technology for positioning and navigation has experienced a dramatic evolution and the observation of celestial bodies for navigation purposes has been replaced today by the use of electromagnetic waveforms emitted from reference sources.