Author

C.S. Burrus

Bio: C.S. Burrus is an academic researcher from Rice University. The author has contributed to research in topics: Fast Fourier transform & Wavelet transform. The author has an h-index of 36 and has co-authored 112 publications receiving 10,028 citations. Previous affiliations of C.S. Burrus include Information Technology Institute.


Papers
Book
14 Aug 1997
TL;DR: This book develops the basic multiresolution wavelet system and its filter-bank realization, along with techniques for designing wavelet systems and computing the discrete wavelet transform.
Abstract:
1 Introduction to Wavelets
2 A Multiresolution Formulation of Wavelet Systems
3 Filter Banks and the Discrete Wavelet Transform
4 Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases
5 The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients
6 Regularity, Moments, and Wavelet System Design
7 Generalizations of the Basic Multiresolution Wavelet System
8 Filter Banks and Transmultiplexers
9 Calculation of the Discrete Wavelet Transform
10 Wavelet-Based Signal Processing and Applications
11 Summary Overview
12 References
Bibliography
Appendix A: Derivations for Chapter 5 on Scaling Functions
Appendix B: Derivations for Section on Properties
Appendix C: Matlab Programs
Index

2,339 citations

Book
01 Jan 1987
TL;DR: A textbook covering the properties, design, and implementation of finite and infinite impulse-response digital filters, concluding with a comparison of filtering alternatives.
Abstract:
Introduction to Digital Filters
Properties of Finite Impulse-Response Filters
Design of Linear-Phase Finite Impulse-Response Filters
Minimum Phase and Complex Approximation
Implementation of Finite Impulse-Response Filters
Properties of Infinite Impulse-Response Filters
Design of Infinite Impulse-Response Filters
Implementation of Infinite Impulse-Response Filters
Comparison of Filtering Alternatives
Appendix
Index

908 citations

Journal ArticleDOI
M. Lang, H. Guo, J.E. Odegard, C.S. Burrus, Raymond O. Wells
TL;DR: A new nonlinear noise reduction method is presented that thresholds in the domain of an undecimated, shift-invariant wavelet transform instead of the usual orthogonal one, resulting in significantly improved noise reduction compared to the original wavelet-based approach.
Abstract: A new nonlinear noise reduction method is presented that uses the discrete wavelet transform. Similar to Donoho (1995) and Donoho and Johnstone (1994, 1995), the authors employ thresholding in the wavelet transform domain but, following a suggestion by Coifman, they use an undecimated, shift-invariant, nonorthogonal wavelet transform instead of the usual orthogonal one. This new approach can be interpreted as a repeated application of the original Donoho and Johnstone method for different shifts. The main feature of the new algorithm is a significantly improved noise reduction compared to the original wavelet-based approach. This holds for a large class of signals, both visually and in the l2 sense, and is shown theoretically as well as by experimental results.
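The shift-averaging reading of the method lends itself to a compact illustration. Below is a minimal sketch, not the authors' code: it assumes the PyWavelets package (pywt) and a 1-D signal, and the helper name cycle_spin_denoise is invented for the example. It approximates the undecimated, shift-invariant transform by averaging the ordinary decimated soft-threshold estimate over circular shifts, the "repeated application for different shifts" interpretation given in the abstract.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def cycle_spin_denoise(x, wavelet="db4", level=4, n_shifts=8):
    """Shift-invariant wavelet denoising by cycle spinning (illustrative).

    Averages the standard decimated soft-threshold estimate over circular
    shifts of the input signal.
    """
    # Universal threshold with sigma estimated from finest-scale coefficients.
    d1 = pywt.wavedec(x, wavelet, level=1)[-1]
    sigma = np.median(np.abs(d1)) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(x)))

    acc = np.zeros(len(x))
    for s in range(n_shifts):
        # Shift, soft-threshold the detail coefficients, reconstruct, unshift.
        coeffs = pywt.wavedec(np.roll(x, s), wavelet, level=level)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[:len(x)]
        acc += np.roll(rec, -s)
    return acc / n_shifts

t = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * t) + 0.3 * np.random.randn(t.size)
clean = cycle_spin_denoise(noisy)
```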

516 citations

Journal ArticleDOI
TL;DR: A new implementation of the real-valued split-radix FFT is presented, an algorithm that uses fewer operations than any other real-valued power-of-2-length FFT.
Abstract: This tutorial paper describes the methods for constructing fast algorithms for the computation of the discrete Fourier transform (DFT) of a real-valued series. The application of these ideas to all the major fast Fourier transform (FFT) algorithms is discussed, and the various algorithms are compared. We present a new implementation of the real-valued split-radix FFT, an algorithm that uses fewer operations than any other real-valued power-of-2-length FFT. We also compare the performance of inherently real-valued transform algorithms such as the fast Hartley transform (FHT) and the fast cosine transform (FCT) to real-valued FFT algorithms for the computation of power spectra and cyclic convolutions. Comparisons of these techniques reveal that the alternative techniques always require more additions than a method based on a real-valued FFT algorithm and result in computer code of equal or greater length and complexity.
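The saving for real-valued input comes from conjugate symmetry of the DFT, X[N-k] = conj(X[k]), so only about half the outputs need be computed and stored. The sketch below is not the paper's split-radix algorithm; it uses NumPy's rfft, which exploits the same symmetry, to form a one-sided power spectrum.

```python
import numpy as np

# For real input the DFT is conjugate-symmetric, so a real-input FFT
# computes and stores only N//2 + 1 outputs instead of N.
fs = 1000.0                               # sample rate in Hz
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

X = np.fft.rfft(x)                        # 513 outputs for 1024 real samples
power = np.abs(X) ** 2 / x.size           # one-sided (unnormalized) periodogram
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
print(freqs[np.argmax(power[1:]) + 1])    # peak near 50 Hz (DC bin skipped)
```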

489 citations

Journal ArticleDOI
TL;DR: The algorithm developed by Cooley and Tukey clearly had its roots in, though perhaps not a direct influence from, the early twentieth century, and remains the most widely used method of computing Fourier transforms.
Abstract: The fast Fourier transform (FFT) has become well known as a very efficient algorithm for calculating the discrete Fourier transform (DFT) of a sequence of N numbers. The DFT is used in many disciplines to obtain the spectrum or frequency content of a signal, and to facilitate the computation of discrete convolution and correlation. Indeed, the publication of the FFT algorithm as a means of calculating the DFT, by J. W. Cooley and J. W. Tukey in 1965 [1], was a turning point in digital signal processing and in certain areas of numerical analysis. They showed that the DFT, which was previously thought to require N^2 arithmetic operations, could be calculated by the new FFT algorithm using only N log N operations. This algorithm had a revolutionary effect on many digital processing methods, and it remains the most widely used method of computing Fourier transforms [2]. In their original paper, Cooley and Tukey referred only to I. J. Good's work published in 1958 [3] as having influenced their development. However, it was soon discovered that there are major differences between the Cooley-Tukey FFT and the algorithm described by Good, which is now commonly referred to as the prime factor algorithm (PFA). Soon after the appearance of the Cooley-Tukey paper, Rudnick [4] demonstrated a similar algorithm based on the work of Danielson and Lanczos [5], which had appeared in 1942. This discovery prompted an investigation into the history of the FFT algorithm by Cooley, Lewis, and Welch [6]. They discovered that the Danielson-Lanczos paper referred to work by Runge published at the turn of the century [7, 8]. The algorithm developed by Cooley and Tukey therefore clearly had its roots in, though perhaps not a direct influence from, the early twentieth century. In a recently published history of numerical analysis [9], H. H. Goldstine attributes to Carl Friedrich Gauss, the eminent German mathematician, an algorithm similar to the FFT for the computation of the coefficients of a finite Fourier series. Gauss' treatise describing the algorithm was not published in his lifetime; it appeared only in his collected works [10] as an unpublished manuscript. The presumed year of the composition of this treatise is 1805, thereby suggesting that efficient algorithms for evaluating ...
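For context on the N^2 versus N log N claim, the following is a textbook radix-2 decimation-in-time sketch in Python (a modern restatement, not Cooley and Tukey's original formulation): each level splits the DFT into even- and odd-indexed halves, giving about N log N operations in total.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time DFT; len(x) must be a power of 2.

    Splitting into even- and odd-indexed halves at each level gives the
    N log N operation count, versus N^2 for the direct DFT sum.
    """
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.randn(256)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```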

451 citations


Cited by
Book
01 Jan 1998
TL;DR: An introduction to wavelet signal processing, from time-frequency analysis and wavelet bases through wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract:
Introduction to a Transient World
Fourier Kingdom
Discrete Revolution
Time Meets Frequency
Frames
Wavelet Zoom
Wavelet Bases
Wavelet Packet and Local Cosine Bases
An Approximation Tour
Estimations are Approximations
Transform Coding
Appendix A: Mathematical Complements
Appendix B: Software Toolboxes

17,693 citations

Journal ArticleDOI
24 Jan 2005
TL;DR: It is shown that such an approach can yield an implementation of the discrete Fourier transform that is competitive with hand-optimized libraries, and the software structure that makes the current FFTW3 version flexible and adaptive is described.
Abstract: FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.
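Stripped to its essence, FFTW's adaptive idea is: time candidate strategies on the actual machine, then reuse the winner. The toy sketch below mimics only that planner-and-cache behavior; the two-candidate set and the names _dft_direct and planned_fft are illustrative inventions, and real FFTW instead searches over compositions of machine-generated C codelets.

```python
import time
import numpy as np

# Toy planner in the spirit of FFTW: time each candidate DFT strategy for a
# given size once, cache the fastest, and reuse it on later calls.
_plan_cache = {}

def _dft_direct(x):
    """Direct O(N^2) DFT by matrix multiplication."""
    n = x.size
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x

def planned_fft(x):
    n = x.size
    if n not in _plan_cache:
        candidates = {"direct": _dft_direct, "library": np.fft.fft}
        probe = np.random.randn(n) + 1j * np.random.randn(n)
        timings = {}
        for name, fn in candidates.items():
            start = time.perf_counter()
            fn(probe)
            timings[name] = time.perf_counter() - start
        _plan_cache[n] = candidates[min(timings, key=timings.get)]
    return _plan_cache[n](x)

x = np.random.randn(128) + 0j
assert np.allclose(planned_fft(x), np.fft.fft(x))
```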

5,172 citations

Book
01 Jan 2001
TL;DR: The complexity class P is formally defined as the set of concrete decision problems that are polynomial-time solvable, and encodings are used to map abstract problems to concrete problems.
Abstract: To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a "problem" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution.

This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder.

Encodings

If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4, ...} as the strings {0, 1, 10, 11, 100, ...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs: all can be encoded as binary strings. Thus, a computer algorithm that "solves" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T(n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T(n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n^k) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable.

We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i ∈ I is Q(i) ∈ {0, 1}, then the solution to the concrete-problem instance e(i) ∈ {0, 1}* is also Q(i). As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances.

We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary (a string of k 1's), then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time.

The encoding of an abstract problem is therefore quite important to our understanding of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out "expensive" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time.

We say that a function f : {0, 1}* → {0, 1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x ∈ {0, 1}*, produces as output f(x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows.

Lemma 34.1. Let Q be an abstract decision problem on an instance set I, and let e1 and e2 be polynomially related encodings on I. Then, e1(Q) ∈ P if and only if e2(Q) ∈ P.

Proof. We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(n^k) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n^c) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n^c), and therefore |e1(i)| = O(n^c), since the output of a serial computer cannot be longer than its running time. Solving the problem on e1(i) takes time O(|e1(i)|^k) = O(n^ck), which is polynomial since both c and k are constants.

Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its "complexity," that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a "standard" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, ⟨G⟩ denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference.

A formal-language framework

One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001, ...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, ...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L as Σ* − L. The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 ∈ L1 and x2 ∈ L2}. The closure or Kleene star of a language L is the language L* = {ε} ∪ L ∪ L^2 ∪ L^3 ∪ ···, where L^k is the language obtained by concatenating L to itself k times.
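The unary-versus-binary point above is easy to make concrete. A small illustrative sketch in Python, comparing the input lengths that the same integer k presents to an algorithm under the two encodings:

```python
# Input length under two encodings of the same integer k. An algorithm doing
# Theta(k) work is polynomial-time relative to a unary encoding (n = k) but
# exponential relative to binary (n = floor(lg k) + 1, so k = Theta(2^n)).
def unary(k: int) -> str:
    return "1" * k

def binary(k: int) -> str:
    return bin(k)[2:]

k = 1_000_000
print(len(unary(k)))   # 1000000: Theta(k) work is linear in this input length
print(len(binary(k)))  # 20: the same Theta(k) work is Theta(2^n) in this one
```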

2,817 citations

Book
01 Mar 1995
TL;DR: Wavelets and Subband Coding offered a unified view of wavelets and their discrete-time cousins, filter banks (subband coding), and developed the theory in both continuous and discrete time.
Abstract: First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding. The book developed the theory in both continuous and discrete time, and presented important applications. During the past decade, it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007, the authors have retained the copyright and allow open access to the book.

2,793 citations

Book
01 Feb 2006
TL;DR: Wavelet analysis of time series and finite-energy signals, covering random variables and stochastic processes, the wavelet variance, and the analysis and synthesis of long-memory processes.
Abstract:
1. Introduction to wavelets
2. Review of Fourier theory and filters
3. Orthonormal transforms of time series
4. The discrete wavelet transform
5. The maximal overlap discrete wavelet transform
6. The discrete wavelet packet transform
7. Random variables and stochastic processes
8. The wavelet variance
9. Analysis and synthesis of long memory processes
10. Wavelet-based signal estimation
11. Wavelet analysis of finite energy signals
Appendix: Answers to embedded exercises
References
Author index
Subject index

2,734 citations