Journal ArticleDOI

Restoring the Nyquist Barrier [SP Forum]

01 Jan 1996-IEEE Signal Processing Magazine (IEEE)-Vol. 13, Iss: 1, pp 24
TL;DR: Signal processing analysis and simulation software tools should be used knowledgeably for purposes of productivity enhancement, and should not be used blindly without the capability to determine when the answer provided by the tool “looks right.”
Abstract: Restoring the Nyquist Barrier. “Results of data analyzed by software simulation tools are meaningless.” This was my first impression after reading the SP Lite article “Breaking the Nyquist Barrier” by Lynn Smith in the July 1995 issue [1]. This article contains a number of fundamental conceptual errors upon which I shall comment. The author has also rediscovered filter banks, despite extensive published art on this topic. However, beyond these conceptual and rediscovery issues, I was most struck by the author's dependence on a software simulation tool to justify his erroneous conclusions, without an apparent full understanding of the graphical results that the tool produced.

The Smith article reinforces a concern that I have been expressing to my colleagues in academia regarding the extensive use of DSP software simulation tools in virtual signal environments as a means for teaching signal processing. A selection of DSP software tools was highlighted in the article by Ebel and Younan [7] that appeared in the November 1995 IEEE Signal Processing Magazine, an issue dedicated, coincidentally, to signal processing education. Specifically, there appears to be a growing dependence on these tools, with canned experiments, that fails to adequately prepare many students for solving real-world signal processing problems. This is most manifest during technical interviews that I often conduct with new graduates who are candidates for employment. Without access to software tools during the interview, I have observed with increasing incidence that these graduates, when presented with situations involving typical signal processing applications of importance to my employer, are unable to confidently propose signal processing operations using only knowledge of basic signal processing principles. The most evident difficulty has been their inability to relate properties of continuous time-domain and spatial-domain signals with discrete-domain digital representations of, and operations on, those signals. Mathematical normalization of parameters (for example, the assumption of a unity sampling rate, or expressing frequency in radian units), often utilized in academic treatments of signal processing operations, also handicaps students in forming an intuitive sense of time and frequency scale when confronted with actual signals and their transforms.

Signal processing analysis and simulation software tools should be used knowledgeably for purposes of productivity enhancement, and should not be used blindly without the capability to determine when the answer provided by the tool “looks right.” This viewpoint is reminiscent of the debate concerning the introduction of hand calculators in public schools, in which it was argued whether hand calculators should be used by students as a substitute before learning the mathematical operations performed by the calculators, or should be used only as productivity aids after they had substantial experience with those operations.

I would now like to demonstrate, by use of first principles, “restoration” of the Nyquist barrier for the demonstration signal used in the Smith article [1] by showing that it was never broken in the first place. I will do this armed only with four basic waveforms (depicted in Fig. 1), their transforms, and two variants of the convolution theorem.
Specifically, if x(t) ↔ X(f) designates the Fourier transform relationship between the temporal waveform x(t) and its Fourier transform X(f), while y(t) ↔ Y(f) designates the Fourier transform relationship between …
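To see concretely what the restored barrier forbids, consider sampling a tone above the Nyquist frequency: its energy folds back to an alias inside the band. The following minimal NumPy sketch (an illustration added here, not code from the article; all rates are arbitrary choices) demonstrates the fold:

import numpy as np

fs = 100.0        # sampling rate in Hz (illustrative choice, not from the article)
f_true = 70.0     # tone above the Nyquist frequency fs/2 = 50 Hz
n = np.arange(256)
x = np.cos(2 * np.pi * f_true * n / fs)   # discrete samples of the 70 Hz tone

# The DFT peak lands at the alias, not at 70 Hz:
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n.size, d=1.0 / fs)
print(freqs[np.argmax(spectrum)])         # ~30 Hz, since 70 Hz folds to fs - 70 Hz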
Citations
Journal ArticleDOI
TL;DR: The experimental tutorial software described in this paper is designed specifically for engineering education at the university level, and it is significantly different in purpose and structure from other educational software.
Abstract: The experimental tutorial software described in this paper is designed specifically for engineering education at the university level, and it is significantly different in purpose and structure from other educational software. Its goal is to help students understand, visualize, and connect basic concepts. It is not a design tool or a wealth of hyperlinked text or an infinite source of drill problems. It is intended to be a dynamic and flexible resource for instructors as well as students and to be used by groups as well as individuals. It takes advantage of currently available high-resolution graphics capability to go beyond what can reasonably be expected of textbooks or computer systems with small low-resolution displays. Since the software is written as an X Windows Motif application, it achieves a wide degree of platform independence for workstations and PCs. In addition, the tutorial set can be distributed without license fees or additional software acquisition costs for the user.

43 citations

Book ChapterDOI
01 Jan 1998
TL;DR: A number of recent innovations in the application of ANNs as character classifiers for word recognition, including integrated multiple representations, normalized output error, negative training, stroke warping, frequency balancing, error emphasis, and quantized weights are presented.
Abstract: While on-line handwriting recognition is an area of long-standing and ongoing research, the recent emergence of portable, pen-based computers has focused urgent attention on usable, practical solutions. We discuss a combination and improvement of classical methods to produce robust recognition of hand-printed English text, for a recognizer shipping in new models of Apple Computer’s Newton MessagePad® and eMate® . Combining an artificial neural network (ANN), as a character classifier, with a context-driven search over segmentation and word recognition hypotheses provides an effective recognition system. Long-standing issues relative to training, generalization, segmentation, models of context, probabilistic formalisms, etc., need to be resolved, however, to get excellent performance. We present a number of recent innovations in the application of ANNs as character classifiers for word recognition, including integrated multiple representations, normalized output error, negative training, stroke warping, frequency balancing, error emphasis, and quantized weights. User-adaptation and extension to cursive recognition pose continuing challenges.

19 citations

Journal ArticleDOI
TL;DR: The primary purpose of this contribution is to expose a fundamental misconception regarding the universality of the sampling theorem as taught in most digital signal processing textbooks.
Abstract: The author comments that Smith (IEEE Signal Processing Magazine, Forum Feedback, May 1996) continues to proclaim the novelty of an approach (Smith, 1995) that purportedly "breaks the Nyquist barrier," in spite of the revelation (Marple Jr., 1996) that his approach is simply a special two-filter case of well-known analysis and synthesis filter banks performed with sample-and-hold waveforms. Fig. 4(b) in Smith (1995) can be compared with the conventional filter banks of Fig. 4 in Marple Jr. (1996); they are identical. Smith also makes further observations, to which the present author responds with additional commentary. However, the primary purpose of this contribution is to expose a fundamental misconception regarding the universality of the sampling theorem as taught in most digital signal processing textbooks. It is this misconception that led Smith to prematurely claim victory over a perceived impenetrable Nyquist barrier.
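Marple's filter-bank identification can be made tangible with the simplest two-channel analysis/synthesis bank, the Haar pair: each channel runs at half the input rate, so the aggregate sample rate, and with it the Nyquist limit on total information, is unchanged. A minimal sketch (an illustration added here, not from either article):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)               # any even-length test signal

# Analysis: two channels, each at HALF the input sample rate
avg  = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass channel
diff = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass channel
# Two channels at fs/2 each still total fs samples per second:
# no information has crossed the Nyquist barrier.

# Synthesis: invert the butterfly to reconstruct exactly
y = np.empty_like(x)
y[0::2] = (avg + diff) / np.sqrt(2)
y[1::2] = (avg - diff) / np.sqrt(2)
print(np.allclose(x, y))                  # True: perfect reconstruction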
Journal ArticleDOI
TL;DR: The proposed method is nothing more than an analog form of the well-known filter banks, whose theory is consistent with the (correct) definition of the Nyquist frequency; in general, the method cannot be used with the zero-order hold operation unless the filters are designed with frequency-dependent magnitude to compensate for the spectrum alteration produced by the sample-and-hold operation.
Abstract: The discussion focuses only on those remarks of J. Lynn Smith (see ibid., p.14, May 1996) that were directly related to earlier comments (see ibid., p.41, July 1995). It complements Marple's (see ibid., p.24, January 1996) filter-bank analysis and tries to remove some clutter due to the improper test signals used. The frame of this discussion consists of the following three statements: the "successful" use of a sampling rate lower than the Nyquist rate is an illusion; the proposed method is nothing more than an analog form of the well-known filter banks, whose theory is consistent with the (correct) definition of the Nyquist frequency; and, in general, the method cannot be used with the zero-order hold operation unless the filters are designed with frequency-dependent magnitude to compensate for the spectrum alteration produced by the sample-and-hold operation.
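The zero-order-hold caveat above can be quantified: a sample-and-hold stage multiplies the spectrum by a sinc envelope, so the compensating filters need the inverse magnitude. A short sketch of the required boost (illustrative, normalized values; not from the article):

import numpy as np

fs = 1.0                        # normalized sampling rate (assumed for illustration)
f = np.linspace(1e-6, 0.5, 6)   # frequencies up to the Nyquist frequency fs/2

# A zero-order hold shapes the spectrum by |sinc(f/fs)|,
# where np.sinc(x) = sin(pi*x)/(pi*x)
zoh_gain = np.abs(np.sinc(f / fs))

# Frequency-dependent compensation: boost by the inverse so the cascade is flat
comp_db = -20.0 * np.log10(zoh_gain)
for fi, g in zip(f, comp_db):
    print(f"f = {fi:.2f}*fs: compensate by {g:+.2f} dB")   # ~+3.92 dB at fs/2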
References
Journal ArticleDOI
S. Biyiksiz1
01 Mar 1985
TL;DR: This book by Elliott and Rao is a valuable contribution to the general areas of signal processing and communications and can be used for a graduate level course in perhaps two ways.
Abstract: There has been a great deal of material in the area of discrete-time transforms that has been published in recent years. This book does an excellent job of presenting important aspects of such material in a clear manner. The book has 11 chapters and a very useful appendix. Seven of these chapters are essentially devoted to the Fourier series/transform, discrete Fourier transform, fast Fourier transform (FFT), and applications of the FFT in the area of spectral estimation. Chapters 8 through 10 deal with many other discrete-time transforms and algorithms to compute them. Of these transforms, the Karhunen-Loève, the discrete cosine, and the Walsh-Hadamard transform are perhaps the most well-known. A lucid discussion of number theoretic transforms is presented in Chapter 11. This reviewer feels that the authors have done a fine job of compiling the pertinent material and presenting it in a concise and clear manner. There are a number of problems at the end of each chapter, an appreciable number of which are challenging. The authors have included a comprehensive set of references at the end of the book. In brief, this book is a valuable contribution to the general areas of signal processing and communications. It can be used for a graduate level course in perhaps two ways. One would be to cover the first seven chapters in great detail. The other would be to cover the whole book by focusing on different topics in a selective manner. This book by Elliott and Rao is extremely useful to researchers/engineers who are working in the areas of signal processing and communications. It is also an excellent reference book, and hence a valuable addition to one’s library.

843 citations

Journal ArticleDOI
TL;DR: Time domain techniques, in particular the theory of orthogonal expansions, are here used to derive the quadrature sampling theorem as well as the uniform sampling theorem for bandpass signals, a result usually derived from frequency (spectral) considerations.
Abstract: Deterministic bandpass signals are considered in which the nonzero portions of the signal spectrum are confined to the frequency region 0 ≤ ω₀ − Ω/2 ≤ |ω| ≤ ω₀ + Ω/2, where Ω > 0 is the "bandwidth" of the signal. Quadrature sampling, as introduced by O. D. Grace and S. P. Pitt, requires uniform sampling of both the bandpass signal and its quarter-wavelength (based on the nominal frequency ω₀) translation, each at a common sampling rate depending on the exact relationship between ω₀ and Ω. When the intersample spacing is properly chosen, the bandpass signal can be reconstructed in its entirety from knowledge of the sample values; moreover, with quadrature sampling, the (low-pass) in-phase and quadrature components of the bandpass signal have a simple explicit representation in terms of samples of the original bandpass signal. Time-domain techniques, in particular the theory of orthogonal expansions, are here used to derive the quadrature sampling theorem as well as the uniform sampling theorem for bandpass signals, a result usually derived from frequency (spectral) considerations. The resulting minimum sampling rate for the quadrature sampling theorem provides a reduction in the sampling rate previously announced by Grace and Pitt.

61 citations
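For a tone with constant in-phase and quadrature amplitudes, the quarter-wavelength pairing described above reduces to evaluating the signal at two instants a quarter period apart. A toy sketch under that simplifying assumption (names and values are illustrative; time-varying envelopes would require repeating such pairs at a rate set by Ω):

import numpy as np

# Hypothetical narrowband signal x(t) = I*cos(w0*t) - Q*sin(w0*t)
f0 = 10.0                    # nominal carrier frequency (illustrative)
I, Q = 0.8, -0.3             # in-phase and quadrature amplitudes to recover
w0 = 2 * np.pi * f0
x = lambda t: I * np.cos(w0 * t) - Q * np.sin(w0 * t)

# Quadrature sampling: a pair of samples spaced a quarter wavelength apart
T0 = 1.0 / f0
i_hat = x(0.0)               # cos term is 1, sin term is 0 -> recovers I
q_hat = -x(T0 / 4.0)         # cos term is 0, sin term is 1 -> recovers Q
print(i_hat, q_hat)          # 0.8 and -0.3, up to floating-point error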

Journal ArticleDOI
TL;DR: The article highlights the roles played by software in DSP education, including symbolic and numeric processing as well as high-level programming of DSPs.
Abstract: Global competition is forcing universities to rethink their approach to undergraduate education. Advanced hardware and software tools make it possible for students to grasp fundamental engineering concepts quicker, ultimately enhancing the undergraduate educational experience. The article highlights the roles played by software in DSP education, including symbolic and numeric processing as well as high-level programming of DSPs. The authors include a survey of some of the more popular software environments and identify important logistical issues.

12 citations

Journal ArticleDOI
TL;DR: A simple signal processing scheme to extend resolution to nearly twice the Nyquist limit without the problem of aliasing emerged by placing one-dimensional signals into a simulated sample-and-hold process in a Mathcad application.
Abstract: The author discusses the concept that data sampled beyond the Nyquist frequency is meaningless. A simple signal processing scheme to extend resolution to nearly twice the Nyquist limit without the problem of aliasing emerged. It starts by placing one-dimensional signals into a simulated sample-and-hold process in a Mathcad application.

4 citations
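Smith's scheme begins with a simulated sample-and-hold stage (in Mathcad); a NumPy analogue of that stage (an illustration added here; the rates are not Smith's) simply holds each coarse sample across a fine grid:

import numpy as np

def sample_and_hold(samples, hold):
    # Hold each coarse sample constant over `hold` fine-grid points
    return np.repeat(samples, hold)

fs, f_tone = 8.0, 3.0                         # illustrative rates, not Smith's
n = np.arange(32)
coarse = np.cos(2 * np.pi * f_tone * n / fs)
staircase = sample_and_hold(coarse, hold=16)  # the held, staircase waveform

# Holding adds no information beyond the original 32 samples; it only
# shapes the sampled spectrum with the sinc envelope noted above.
print(staircase.shape)                        # (512,)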