Showing papers in "IEEE Signal Processing Magazine in 1996"


Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.

4,410 citations
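
As a concrete companion to the subspace-based (spectral) methods the review emphasizes, here is a minimal Python/NumPy sketch of a MUSIC pseudo-spectrum computed from a sample covariance matrix for a uniform linear array. The array size, source angles, noise level, and search grid are illustrative assumptions, not values from the article.

```python
import numpy as np

def music_spectrum(X, n_sources, n_grid=361):
    """MUSIC pseudo-spectrum from array data X (n_sensors x n_snapshots)."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvecs[:, :n_sensors - n_sources]      # noise-subspace eigenvectors
    angles = np.linspace(-90.0, 90.0, n_grid)
    k = np.arange(n_sensors)
    P = np.empty(n_grid)
    for i, th in enumerate(angles):
        a = np.exp(1j * np.pi * k * np.sin(np.deg2rad(th)))   # half-wavelength ULA steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, P

# Synthetic data: two uncorrelated narrowband emitters at -20 and +25 degrees.
rng = np.random.default_rng(0)
m, n = 8, 200
doas = np.deg2rad([-20.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
X = A @ S + noise

angles, P = music_spectrum(X, n_sources=2)
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
print("estimated DOAs (deg):", [round(angles[i], 1) for i in top])
```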


Journal ArticleDOI
T.K. Moon1
TL;DR: The EM (expectation-maximization) algorithm is ideally suited to problems of parameter estimation, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation.
Abstract: A common task in signal processing is the estimation of the parameters of a probability distribution function. Perhaps the most frequently encountered estimation problem is the estimation of the mean of a signal in noise. In many parameter estimation problems the situation is more complicated because direct access to the data necessary to estimate the parameters is impossible, or some of the data are missing. Such difficulties arise when an outcome is a result of an accumulation of simpler outcomes, or when outcomes are clumped together, for example, in a binning or histogram operation. There may also be data dropouts or clustering in such a way that the number of underlying data points is unknown (censoring and/or truncation). The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation. The EM algorithm is presented at a level suitable for signal processing practitioners who have had some exposure to estimation theory.

2,573 citations
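
To make the many-to-one mapping concrete, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture, where the "missing data" are the component labels. The mixture model, initialization, and iteration count are illustrative choices, not taken from the tutorial.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """ML estimates of the means, variances, and weight of a 2-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    w = 0.5
    for _ in range(n_iter):
        # E-step: posterior probability that each sample came from component 1
        p1 = w * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(var[0])
        p2 = (1 - w) * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(var[1])
        r = p1 / (p1 + p2)
        # M-step: re-estimate parameters from the "completed" data
        w = r.mean()
        mu = np.array([np.sum(r * x) / r.sum(),
                       np.sum((1 - r) * x) / (1 - r).sum()])
        var = np.array([np.sum(r * (x - mu[0])**2) / r.sum(),
                        np.sum((1 - r) * (x - mu[1])**2) / (1 - r).sum()])
    return mu, var, w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
print(em_gmm_1d(x))
```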


Journal ArticleDOI
TL;DR: The problem of blind deconvolution for images is introduced, the basic principles and methodologies behind the existing algorithms are provided, and the current trends and the potential of this difficult signal processing problem are examined.
Abstract: The goal of image restoration is to reconstruct the original scene from a degraded observation. This recovery process is critical to many image processing applications. Although classical linear image restoration has been thoroughly studied, the more difficult problem of blind image restoration has numerous research possibilities. We introduce the problem of blind deconvolution for images, provide an overview of the basic principles and methodologies behind the existing algorithms, and examine the current trends and the potential of this difficult signal processing problem. A broad review of blind deconvolution methods for images is given to portray the experience of the authors and of the many other researchers in this area. We first introduce the blind deconvolution problem for general signal processing applications. The specific challenges encountered in image-related restoration applications are explained. Analytic descriptions of the structure of the major blind deconvolution approaches for images then follow. The application areas, convergence properties, complexity, and other implementation issues are addressed for each approach. We then discuss the strengths and limitations of various approaches based on theoretical expectations and computer simulations.

1,332 citations
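
The classical linear restoration that the article contrasts with the blind problem can be summarized in a few lines; the sketch below applies a frequency-domain Wiener filter to a 1-D signal blurred by a known kernel. The kernel, noise level, and regularization constant are illustrative assumptions, and the blur is modeled as circular so the FFT formulation is exact; in the blind case the kernel itself would also have to be estimated.

```python
import numpy as np

def wiener_deconvolve(y, h, noise_to_signal=1e-2):
    """Restore x from y = h (*) x + n when the blur h is known (circular model)."""
    H = np.fft.fft(h, len(y))
    W = np.conj(H) / (np.abs(H)**2 + noise_to_signal)        # Wiener restoration filter
    return np.real(np.fft.ifft(W * np.fft.fft(y)))

rng = np.random.default_rng(2)
x = np.zeros(256)
x[[40, 120, 200]] = [1.0, -0.7, 0.5]                         # sparse "scene"
h = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
h /= h.sum()                                                 # Gaussian blur kernel
y = np.real(np.fft.ifft(np.fft.fft(h, 256) * np.fft.fft(x))) # circular blur
y += 0.01 * rng.standard_normal(256)                         # additive noise

x_hat = wiener_deconvolve(y, h)
print("relative restoration error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```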


Journal ArticleDOI
TL;DR: The genetic algorithm is introduced as an emerging optimization algorithm for signal processing and a number of applications, such as IIR adaptive filtering, time delay estimation, active noise control, and speech processing, that are being successfully implemented are described.
Abstract: This article introduces the genetic algorithm (GA) as an emerging optimization algorithm for signal processing. After a discussion of traditional optimization techniques, it reviews the fundamental operations of a simple GA and discusses procedures to improve its functionality. The properties of the GA that relate to signal processing are summarized, and a number of applications, such as IIR adaptive filtering, time delay estimation, active noise control, and speech processing, that are being successfully implemented are described.

1,093 citations
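
To make the fundamental operations concrete, here is a minimal sketch of a simple real-coded GA (tournament selection, arithmetic crossover, Gaussian mutation) minimizing a multimodal test function. The population size, rates, and cost function are illustrative choices, not parameters from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Multimodal cost with global minimum at x = 0; negated so higher is fitter.
    return -(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

pop = rng.uniform(-5.12, 5.12, size=40)                      # real-coded population
for gen in range(100):
    f = fitness(pop)
    # Tournament selection: each parent is the fitter of two random individuals
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = np.where(f[idx[:, 0]] > f[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
    # Arithmetic crossover between consecutive parents
    partners = np.roll(parents, 1)
    alpha = rng.random(len(pop))
    children = alpha * parents + (1 - alpha) * partners
    # Mutation: small Gaussian perturbation with probability 0.1
    mutate = rng.random(len(pop)) < 0.1
    children[mutate] += rng.normal(0, 0.3, mutate.sum())
    pop = children

print("best solution found:", pop[np.argmax(fitness(pop))])
```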


Journal ArticleDOI
TL;DR: This work presents a comprehensive review of FIR and allpass filter design techniques for bandlimited approximation of a fractional digital delay, focusing on simple and efficient methods that are well suited for fast coefficient update or continuous control of the delay value.
Abstract: A fractional delay filter is a device for bandlimited interpolation between samples. It finds applications in numerous fields of signal processing, including communications, array processing, speech processing, and music technology. We present a comprehensive review of FIR and allpass filter design techniques for bandlimited approximation of a fractional digital delay. Emphasis is on simple and efficient methods that are well suited for fast coefficient update or continuous control of the delay value. Various new approaches are proposed and several examples are provided to illustrate the performance of the methods. We also discuss the implementation complexity of the algorithms. We focus on four applications where fractional delay filters are needed: synchronization of digital modems, incommensurate sampling rate conversion, high-resolution pitch prediction, and sound synthesis of musical instruments.

1,014 citations
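
One of the simplest FIR designs covered by such reviews is Lagrange interpolation; the sketch below computes the coefficients for a given fractional delay D (in samples) and checks them on a low-frequency sinusoid. The filter order and test delay are illustrative.

```python
import numpy as np

def lagrange_fd_fir(D, order=3):
    """FIR coefficients approximating a delay of D samples (Lagrange interpolation)."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (D - m) / (k - m)
    return h

D = 1.4                                     # integer part keeps the filter causal
h = lagrange_fd_fir(D, order=3)
t = np.arange(200)
x = np.sin(2 * np.pi * 0.05 * t)            # low-frequency test signal
y = np.convolve(x, h)[:200]                 # filtered (delayed) signal
ref = np.sin(2 * np.pi * 0.05 * (t - D))    # ideally delayed signal
print("max approximation error (steady state):", np.max(np.abs(y[10:] - ref[10:])))
```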



Journal ArticleDOI
TL;DR: This article describes conventional A/D conversion, as well as its performance modeling, and examines the use of sigma-delta converters to convert narrowband bandpass signals with high resolution.
Abstract: Using sigma-delta A/D methods, high resolution can be obtained for only low to medium signal bandwidths. This article describes conventional A/D conversion, as well as its performance modeling. We then look at the technique of oversampling, which can be used to improve the resolution of classical A/D methods. We discuss how sigma-delta converters use the technique of noise shaping in addition to oversampling to allow high resolution conversion of relatively low bandwidth signals. We examine the use of sigma-delta converters to convert narrowband bandpass signals with high resolution. Several parallel sigma-delta converters, which offer the potential of extending high resolution conversion to signals with higher bandwidths, are also described.

680 citations
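
A first-order sigma-delta loop makes the oversampling-plus-noise-shaping idea concrete; the sketch below modulates a slow sine into a 1-bit stream and recovers it with a crude boxcar decimator. The oversampling ratio, input amplitude, and decimation filter are illustrative simplifications, not a design from the article.

```python
import numpy as np

def sigma_delta_1st_order(x):
    """1-bit, first-order sigma-delta modulation of the input sequence x."""
    integrator, out = 0.0, np.empty_like(x)
    for n, xn in enumerate(x):
        integrator += xn - (out[n - 1] if n else 0.0)   # feed back the previous 1-bit output
        out[n] = 1.0 if integrator >= 0 else -1.0       # 1-bit quantizer
    return out

osr = 64                                                # oversampling ratio
n = np.arange(osr * 256)
x = 0.5 * np.sin(2 * np.pi * n / (osr * 32))            # slow sine, well inside the signal band
bits = sigma_delta_1st_order(x)

# Crude decimation: average blocks of `osr` samples (a boxcar lowpass)
decimated = bits.reshape(-1, osr).mean(axis=1)
target = x.reshape(-1, osr).mean(axis=1)
print("reconstruction error after decimation:", np.max(np.abs(decimated - target)))
```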


Journal ArticleDOI
TL;DR: Linear predictive (LP) analysis, the first step of feature extraction, is discussed, and various robust cepstral features derived from LP coefficients are described, including the affine transform, which is a feature transformation approach that integrates mismatch to simultaneously combat both channel and noise distortion.
Abstract: The future commercialization of speaker- and speech-recognition technology is impeded by the large degradation in system performance due to environmental differences between training and testing conditions. This is known as the "mismatched condition." Studies have shown [1] that most contemporary systems achieve good recognition performance if the conditions during training are similar to those during operation (matched conditions). Frequently, mismatched conditions are present in which the performance is dramatically degraded as compared to the ideal matched conditions. A common example of this mismatch is when training is done on clean speech and testing is performed on noise- or channel-corrupted speech. Robust speech techniques [2] attempt to maintain the performance of a speech processing system under such diverse conditions of operation. This article presents an overview of current speaker-recognition systems and the problems encountered in operation, and it focuses on the front-end feature extraction process of robust speech techniques as a method of improvement. Linear predictive (LP) analysis, the first step of feature extraction, is discussed, and various robust cepstral features derived from LP coefficients are described. Also described is the affine transform, which is a feature transformation approach that integrates mismatch to simultaneously combat both channel and noise distortion.

344 citations
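
To make the front end concrete, the sketch below performs LP analysis by the autocorrelation method on a synthetic frame and converts the LP coefficients to LP-derived cepstral coefficients with the standard recursion. The model order, frame length, and test signal are illustrative; this is the conventional feature chain, not the affine-transform method itself.

```python
import numpy as np

def lpc(frame, order):
    """LP coefficients a_1..a_p of A(z) = 1 + a_1 z^-1 + ... (autocorrelation method)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, -r[1:order + 1])

def lpc_to_cepstrum(a, n_ceps):
    """Cepstral coefficients of the all-pole model 1/A(z) via the standard recursion."""
    p, c = len(a), np.zeros(n_ceps)
    for m in range(1, n_ceps + 1):
        acc = -(a[m - 1] if m <= p else 0.0)
        for k in range(max(1, m - p), m):
            acc -= (k / m) * c[k - 1] * a[m - k - 1]
        c[m - 1] = acc
    return c

rng = np.random.default_rng(4)
# Synthetic voiced-like frame: white noise through a resonant all-pole filter
frame = rng.standard_normal(400)
for n in range(2, len(frame)):
    frame[n] += 1.3 * frame[n - 1] - 0.8 * frame[n - 2]

a = lpc(frame, order=10)
print("LP-derived cepstral features:", lpc_to_cepstrum(a, n_ceps=12))
```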


Journal ArticleDOI
TL;DR: Discusses the application of neural networks, both general and radial basis function networks, to adaptive equalization and interference rejection problems; neural-network-based algorithms show promise in spread spectrum systems.
Abstract: Discusses the application of neural networks, both general and radial basis function networks, to adaptive equalization and interference rejection problems. Neural-network-based algorithms strike a good balance between performance and complexity in adaptive equalization, and show promise in spread spectrum systems.

228 citations
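
The RBF structure mentioned above can be sketched directly as a symbol-by-symbol equalizer: Gaussian kernels centred on a subset of training observations, with output weights fit by least squares to known training symbols. The channel, kernel width, and training length below are illustrative assumptions, not the article's experiments.

```python
import numpy as np

rng = np.random.default_rng(5)
channel = np.array([1.0, 0.5])                    # simple ISI channel (assumed for the demo)

def transmit(symbols, noise_std=0.2):
    return np.convolve(symbols, channel)[:len(symbols)] + noise_std * rng.standard_normal(len(symbols))

def equalizer_inputs(received, order=2):
    """Stack `order` consecutive received samples as the equalizer input vector."""
    return np.stack([received[i:len(received) - order + 1 + i] for i in range(order)], axis=1)

train_sym = rng.choice([-1.0, 1.0], 500)
X = equalizer_inputs(transmit(train_sym))
d = train_sym[:len(X)]                            # decide the symbol aligned with the first sample

centres = X[::10]                                 # subsample training vectors as RBF centres
width = 0.5

def rbf_features(Z):
    dist2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-dist2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(rbf_features(X), d, rcond=None)      # output-layer weights

test_sym = rng.choice([-1.0, 1.0], 2000)
Xt = equalizer_inputs(transmit(test_sym))
decisions = np.sign(rbf_features(Xt) @ w)
print("symbol error rate:", np.mean(decisions != test_sym[:len(Xt)]))
```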


Journal ArticleDOI
TL;DR: The data communications problem is described, the rationale for introducing fractionally spaced equalizers, new results, and their implications are described, and results are applied to actual transmission channels.
Abstract: Modern digital transmission systems commonly use an adaptive equalizer as a key part of the receiver. The design of this equalizer is important since it determines the maximum quality attainable from the system, and represents a high fraction of the computation used to implement the demodulator. Analytical results offer a new way of looking at fractionally spaced equalizers and have some surprising practical implications. This article describes the data communications problem, the rationale for introducing fractionally spaced equalizers, new results, and their implications. We then apply those results to actual transmission channels.

212 citations
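
The defining feature of a fractionally spaced equalizer is that its taps span samples taken faster than the symbol rate while adaptation and decisions happen once per symbol. The sketch below is a T/2-spaced LMS equalizer trained against known symbols; the channel, tap count, step size, and decision delay are illustrative choices, not results from the article.

```python
import numpy as np

rng = np.random.default_rng(6)
symbols = rng.choice([-1.0, 1.0], 4000)

# T/2-rate received signal: upsample the symbols by 2 and pass through a channel
up = np.zeros(2 * len(symbols))
up[::2] = symbols
channel = np.array([0.2, 0.9, 0.5, 0.1])                  # fractionally spaced channel response
received = np.convolve(up, channel)[:len(up)] + 0.05 * rng.standard_normal(len(up))

n_taps, mu, delay = 12, 0.01, 3                           # equalizer length, LMS step, decision delay
w = np.zeros(n_taps)
errors = []
for k in range(n_taps // 2, len(symbols)):
    x = received[2 * k - n_taps + 1: 2 * k + 1][::-1]     # most recent n_taps T/2-spaced samples
    y = w @ x                                             # symbol-rate equalizer output
    e = symbols[k - delay] - y                            # training error against the delayed symbol
    w += mu * e * x                                       # LMS update once per symbol
    errors.append(e**2)

print("MSE, first vs last 500 symbols:", np.mean(errors[:500]), np.mean(errors[-500:]))
```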


Journal ArticleDOI
TL;DR: The article discusses the major approaches, such as projection based blind deconvolution and maximum likelihood restoration, which were overlooked previously (see ibid., no.5, 1996).
Abstract: The article discusses the major approaches, such as projection-based blind deconvolution and maximum likelihood restoration, that we overlooked previously (see ibid., no.5, 1996). We discuss them for completeness along with some other works found in the literature. As the area of blind image restoration is a rapidly growing field of research, new methods are constantly being developed.

Journal ArticleDOI
TL;DR: The principles and architecture of current LVR systems are discussed and the key issues affecting their future deployment are identified; to illustrate the various points raised, the Cambridge University HTK system is described.
Abstract: Considerable progress has been made in speech-recognition technology over the last few years and nowhere has this progress been more evident than in the area of large-vocabulary recognition (LVR). Current laboratory systems are capable of transcribing continuous speech from any speaker with average word-error rates between 5% and 10%. If speaker adaptation is allowed, then after 2 or 3 minutes of speech, the error rate will drop well below 5% for most speakers. LVR systems had been limited to dictation applications since the systems were speaker dependent and required words to be spoken with a short pause between them. However, the capability to recognize natural continuous-speech input from any speaker opens up many more applications. As a result, LVR technology appears to be on the brink of widespread deployment across a range of information technology (IT) systems. This article discusses the principles and architecture of current LVR systems and identifies the key issues affecting their future deployment. To illustrate the various points raised, the Cambridge University HTK system is described. This system is a modern design that gives state-of-the-art performance, and it is typical of the current generation of recognition systems.

Journal ArticleDOI
Simon Haykin1
TL;DR: The article examines the use of neural networks as an engineering tool for signal processing applications to articulate a new philosophy in the approach to statistical signal processing using neural networks, which account for the practical realities of nonlinearity, nonstationarity, and non-Gaussianity.
Abstract: Advanced algorithms for signal processing simultaneously account for nonlinearity, nonstationarity, and non-Gaussianity. The article examines the use of neural networks as an engineering tool for signal processing applications. The aim is three fold: to articulate a new philosophy in the approach to statistical signal processing using neural networks, which (either by themselves or in combination with other suitable techniques) account for the practical realities of nonlinearity, nonstationarity, and non-Gaussianity; to describe three case studies using real-life data, which clearly demonstrate the superiority of this new approach over the classical approaches to statistical signal processing; and to discuss mutual information as a criterion for designing unsupervised neural networks, thus moving away from the mean-square error criterion.

Journal ArticleDOI
TL;DR: This article provides a tutorial description as well as presenting new results on many of the fundamental higher-order concepts used in deconvolution, with the emphasis on maximizing the deconvolved signal's normalized cumulant.
Abstract: Classical deconvolution is concerned with the task of recovering an excitation signal, given the response of a known time-invariant linear operator to that excitation. Deconvolution is discussed along with its more challenging counterpart, blind deconvolution, where no knowledge of the linear operator is assumed. This discussion focuses on a class of deconvolution algorithms based on higher-order statistics, and more particularly, cumulants. These algorithms offer the potential of superior performance in both the noise free and noisy data cases relative to that achieved by other deconvolution techniques. This article provides a tutorial description as well as presenting new results on many of the fundamental higher-order concepts used in deconvolution, with the emphasis on maximizing the deconvolved signal's normalized cumulant.
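
The quantity being maximized can be illustrated directly: the sketch below computes the normalized fourth-order cumulant of an i.i.d. binary excitation and of the same sequence after convolution with a channel. Filtering shrinks the cumulant's magnitude (the signal becomes "more Gaussian"), which is why driving it back up tends to undo the convolution. The excitation and channel are illustrative assumptions.

```python
import numpy as np

def normalized_cumulant(x):
    """Normalized 4th-order cumulant c4 / c2^2 of a zero-mean sequence (i.e. its kurtosis)."""
    x = x - x.mean()
    c2 = np.mean(x**2)
    c4 = np.mean(x**4) - 3 * c2**2
    return c4 / c2**2

rng = np.random.default_rng(7)
source = rng.choice([-1.0, 1.0], 50000)            # i.i.d. binary excitation (kurtosis -2)
channel = np.array([1.0, -0.6, 0.3])
observed = np.convolve(source, channel, mode="same")

print("source   :", normalized_cumulant(source))     # close to -2
print("observed :", normalized_cumulant(observed))   # smaller in magnitude after filtering
```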

Journal ArticleDOI
TL;DR: A new hybrid search methodology is developed in which the genetic-type search is embedded into gradient-descent algorithms (such as the LMS algorithm), which has the characteristics of faster convergence, global search capability, less sensitivity to the choice of parameters, and simple implementation.
Abstract: An "evolutionary" approach called the genetic algorithm (GA) was introduced for multimodal optimization in adaptive IIR filtering. However, the disadvantages of using such an algorithm are slow convergence and high computational complexity. Initiated by the merits and shortcomings of the gradient-based algorithms and the evolutionary algorithms, we developed a new hybrid search methodology in which the genetic-type search is embedded into gradient-descent algorithms (such as the LMS algorithm). The new algorithm has the characteristics of faster convergence, global search capability, less sensitivity to the choice of parameters, and simple implementation. The basic idea of the new algorithm is that the filter coefficients are evolved in a random manner once the filter is found to be stuck at a local minimum or to have a slow convergence rate. Only the fittest coefficient set survives and is adapted according to the gradient-descent algorithm until the next evolution. As the random perturbation will be subject to the stability constraint, the filter can always minimum in a stable manner and achieve a smaller error performance with a fast rate. The article reviews adaptive IIR filtering and discusses common learning algorithms for adaptive filtering. It then presents a new learning algorithm based on the genetic search approach and shows how it can help overcome the problems associated with gradient-based and GA algorithms.

Journal ArticleDOI
TL;DR: A case study of the design of a computationally intensive system to do adaptive nulling of interfering signals for a phased-array radar with many antenna elements, and another DSP computation that might benefit from similar architecture, technology, or algorithms: the solution of Toeplitz linear equations.
Abstract: Presents a case study of the design of a computationally intensive system to do adaptive nulling of interfering signals for a phased-array radar with many antenna elements. The goal of the design was to increase the computational horsepower available for this problem by about three orders of magnitude under the tight constraints of size, weight and power which are typical of an orbiting satellite. By combining the CORDIC rotation algorithm, systolic array concepts, Givens transformations, and restructurable VLSI, we built a system as small as a package of cigarettes, but capable of the equivalent of almost three billion operations per second. Our work was motivated by the severe limitations of size, weight and power which apply to computation aboard a spacecraft, although the same factors impose costs which are worth reducing in other circumstances. For an array of N antennas, the cost of the adaptive nulling computation grows as N/sup 3/, so simply using more resources when N is large is not practical. The architecture developed, called MUSE (matrix update systolic experiment), determines the nulling weights for N=64 antenna elements in a sidelobe cancelling configuration. After explaining the antenna nulling system, we discuss another DSP computation that might benefit from similar architecture, technology, or algorithms: the solution of Toeplitz linear equations.
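
The core numerical step such a systolic design performs, whether by CORDIC or otherwise, is folding each new data snapshot into an upper-triangular factor with Givens rotations. The sketch below shows that recursive QR update in plain floating point; the array size and data are illustrative, and no CORDIC arithmetic or satellite constraints are modeled.

```python
import numpy as np

def givens_update(R, row):
    """Fold one new data row into the upper-triangular factor R (in place)."""
    r = row.astype(float).copy()
    n = R.shape[0]
    for i in range(n):
        a, b = R[i, i], r[i]
        rad = np.hypot(a, b)
        if rad == 0.0:
            continue
        c, s = a / rad, b / rad                      # rotation that zeroes r[i]
        Ri, ri = R[i, i:].copy(), r[i:].copy()
        R[i, i:] = c * Ri + s * ri
        r[i:] = -s * Ri + c * ri
    return R

rng = np.random.default_rng(9)
n = 6
R = np.zeros((n, n))
X = rng.standard_normal((40, n))                     # 40 snapshots from 6 "antenna" channels
for row in X:
    givens_update(R, row)

# The accumulated triangular factor reproduces the data Gram matrix X^T X:
print(np.allclose(R.T @ R, X.T @ X))
```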

Journal ArticleDOI
TL;DR: The purpose is to reveal the capabilities, limits, and effectiveness of massively parallel processors, compared with symmetric multiprocessors and clusters of workstations, in signal processing.
Abstract: We assess the state-of-the-art technology in massively parallel processors (MPPs) and their variations in different architectural platforms. Architectural and programming issues are identified in using MPPs for time-critical applications such as adaptive radar signal processing. We review the enabling technologies. These include high-performance CPU chips and system interconnects, distributed memory architectures, and various latency hiding mechanisms. We characterize the concept of scalability in three areas: resources, applications, and technology. Scalable performance attributes are analytically defined. Then we compare MPPs with symmetric multiprocessors (SMPs) and clusters of workstations (COWs). The purpose is to reveal their capabilities, limits, and effectiveness in signal processing. We evaluate the IBM SP2 at MHPCC, the Intel Paragon at SDSC, the Cray T3D at the Cray Eagan Center, and the Cray T3E and ASCI TeraFLOP system proposed by Intel. On the software and programming side, we evaluate existing parallel programming environments, including the models, languages, compilers, software tools, and operating systems. Some guidelines for program parallelization are provided. We examine data-parallel, shared-variable, message-passing, and implicit programming models. Communication functions and their performance overhead are discussed. Available software tools and communication libraries are also introduced.


Journal ArticleDOI
TL;DR: The charts and tables presented reflect up-to-date information on the most widely used programmable DSP chips, DSP board products, major software tools in wide use, types of commercial A-D converters, advanced A- D converters in research, available FIR filters (standard and weighted), IC frequency synthesizers, and integrated FFT chipsets.
Abstract: The explosive growth of digital signal processing techniques has given way to a myriad of high performance DSP devices and tools for today's hardware designer and software specialist. The charts and tables presented reflect up-to-date information on the most widely used programmable DSP chips, DSP board products, major software tools in wide use, types of commercial A-D converters, advanced A-D converters in research, available FIR filters (standard and weighted), IC frequency synthesizers, and integrated FFT chipsets.

Journal ArticleDOI
TL;DR: Signal processing analysis and simulation software tools should be used knowledgably for purposes of productivity enhancement, and should not be used blindly without the capability to determine when the answer provided by the tool “looks right.”
Abstract: Restoring the Nyquist Barrier. "Results of data analyzed by software simulation tools are meaningless." This was my first impression after reading the SP Lite article "Breaking the Nyquist Barrier" by Lynn Smith in the July 1995 issue [1]. This article contains a number of fundamental conceptual errors upon which I shall comment. The author has also rediscovered filter banks, despite extensive published art on this topic. However, beyond these conceptual and rediscovery issues, I was most struck by the dependence of the author on the use of a software simulation tool to justify the author's erroneous conclusions without an apparent full understanding of the graphical results that the tool produced. The Smith article reinforces a concern that I have been expressing to my colleagues in academia regarding the extensive use of DSP software simulation tools in virtual signal environments as a means for teaching signal processing. A selection of DSP software tools was highlighted in the article by Ebel and Younan [7] that appeared in the November 1995 IEEE Signal Processing Magazine, an issue dedicated, coincidentally, to signal processing education. Specifically, there appears to be a growing dependence on these tools with canned experiments that fails to adequately prepare many students for solving real-world signal processing problems. This is most manifest during technical interviews that I often conduct with new graduates who are candidates for employment. Without access to software tools during the interview, I have observed with increasing incidence that these graduates, when presented with situations involving typical signal processing applications of importance to my employer, are unable to confidently propose signal processing operations using only knowledge of basic signal processing principles. The most evident difficulty has been their inability to relate properties of continuous time-domain and spatial-domain signals with discrete-domain digital representations of and operations on those signals. Mathematical normalization of parameters (for example, the assumption of a unity sampling rate, or expressing frequency in radian units) often utilized in academic treatments of signal processing operations also handicaps students in forming an intuitive sense of time and frequency scale when confronted with actual signals and their transforms. Signal processing analysis and simulation software tools should be used knowledgeably for purposes of productivity enhancement, and should not be used blindly without the capability to determine when the answer provided by the tool "looks right." This viewpoint is reminiscent of the debate concerning the introduction of hand calculators in public schools, in which it was argued whether hand calculators should be used by students as a substitute before learning the mathematical operations performed by the calculators or should be used only as productivity aids after they had substantial experience with the mathematical operations. I would now like to demonstrate, by use of first principles, "restoration" of the Nyquist barrier of the demonstration signal used in the Smith article [1] by showing that it was never broken in the first place. I will do this armed only with four basic waveforms (depicted in Fig. 1), their transforms, and two variants of the convolution theorem.
Specifically, if x(t) ↔ X(f) designates the Fourier transform relationship between the temporal waveform x(t) and its Fourier transform X(f), while y(t) ↔ Y(f) designates the Fourier transform relationship between
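
The "restoration" argument rests on the basic sampling fact that two sinusoids mirrored about the Nyquist frequency yield identical samples, so no processing of those samples alone can tell them apart. The sketch below checks this numerically; the sampling rate and test frequencies are illustrative choices, not those of the Smith article.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), so Nyquist frequency is 500 Hz
n = np.arange(64)
f_low, f_high = 300.0, 700.0     # 700 Hz aliases onto 300 Hz when sampled at 1 kHz

x_low = np.cos(2 * np.pi * f_low * n / fs)
x_high = np.cos(2 * np.pi * f_high * n / fs)

# The two sample sequences are numerically identical (difference at machine precision).
print("maximum difference between the two sampled signals:", np.max(np.abs(x_low - x_high)))
```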

Journal ArticleDOI
TL;DR: Individuals from all related scientific disciplines and specialties are encouraged to participate to provide insight into issues pertinent to the area of amplification and signal processing and to formulate the future directions of hearing aid research and development.
Abstract: Individuals from all related scientific disciplines and specialties are encouraged to participate to provide insight into issues pertinent to the area of amplification and signal processing and to formulate the future directions of hearing aid research and development. Scientific abstracts emphasizing current research findings are due March 15, 1997. The conference format will include both podium presentations and poster sessions, with considerable time allotted for audience discussion. Abstracts will be peer reviewed for scientific merit and relevance. Exhibit space will be available.

Journal Article
TL;DR: In this article, Radial basis function (RBF) and Volterra series (VS) nonlinear predictors are examined with a view to reducing their complexity while maintaining prediction performance.
Abstract: Radial basis function (RBF) and Volterra series (VS) nonlinear predictors are examined with a view to reducing their complexity while maintaining prediction performance. A geometrical interpretation is presented. This interpretation indicates that while a multiplicity of choices of reduced state predictors exists, some choices are better than others in terms of the numerical conditioning of the solution. Two algorithms are developed using signal subspace concepts to find reduced state solutions which are ‘close to’ the minimum norm solution and which share its numerical properties. The performance of these algorithms is assessed using chaotic time series as test signals. The conclusion is drawn that the so-called Direct Method, which only uses the eigenstructure to characterise the signal subspace, offers the best performance.
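
A minimal sketch of the general setting: an RBF one-step predictor for a chaotic series whose output weights come from a truncated SVD of the design matrix, i.e. a reduced-rank solution close to the minimum-norm least-squares solution. This only illustrates the subspace notion; it is not the paper's Direct Method, and the series, kernel width, centre selection, and rank are assumptions.

```python
import numpy as np

# Chaotic test signal: the logistic map x_{k+1} = 4 x_k (1 - x_k)
x = np.empty(600)
x[0] = 0.37
for k in range(599):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])

# Predictor input: the pair (x_k, x_{k-1}); target: x_{k+1}
inputs = np.stack([x[1:500], x[:499]], axis=1)
targets = x[2:501]

centres = inputs[::20]                       # 25 RBF centres taken from the training inputs
width = 0.2

def design(Z):
    d2 = ((Z[:, None, :] - centres[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# Reduced-rank least squares via a truncated SVD of the design matrix
Phi = design(inputs)
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
rank = 12
w = Vt[:rank].T @ ((U[:, :rank].T @ targets) / s[:rank])

# One-step prediction on held-out data
test_in = np.stack([x[501:598], x[500:597]], axis=1)
test_out = x[502:599]
pred = design(test_in) @ w
print("one-step prediction RMSE:", np.sqrt(np.mean((pred - test_out)**2)))
```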

Journal ArticleDOI
TL;DR: The primary purpose of this contribution is to expose a fundamental misconception regarding the universality of the sampling theorem as taught in most digital signal processing textbooks.
Abstract: The author comments that Smith (IEEE Signal Processing Magazine, Forum Feedback, May 1996) continues to proclaim the novelty of an approach (Smith, 1995) with which he purports to "break the Nyquist barrier," in spite of the revelation (Marple Jr., 1996) that his approach is simply a special two-filter case of well-known analysis and synthesis filter banks performed with sample-and-hold waveforms. Smith's Fig.4(b) in Smith (1995) can be compared with the conventional filter banks of Fig.4 in Marple Jr. (1996) and it is observed that they are identical. Smith also makes further observations to which the present author responds with additional commentary. However, the primary purpose of this contribution is to expose a fundamental misconception regarding the universality of the sampling theorem as taught in most digital signal processing textbooks. It is this misconception that led Smith to prematurely claim victory over a perceived impenetrable Nyquist barrier.

Journal ArticleDOI
TL;DR: The proposed method is nothing other than an analog form of the well-known filter banks, whose theory is consistent with the (correct) definition of the Nyquist frequency, and in general, the method cannot be used with the zero-order hold operation, unless the filters are designed with frequency-dependent magnitude to compensate for the spectrum alteration produced by the sample-and-hold operation.
Abstract: The discussion focuses only on those remarks of J. Lynn Smith (see ibid., p.14, May 1996) that were directly related to earlier comments (see ibid., p.41, July 1995). It complements Marple's (see ibid., p.24, January 1996) filter-bank analysis and tries to remove some clutter due to the improper test signals used. The frame of this discussion consists of the following three statements: the "successful" use of a sampling rate lower than the Nyquist rate is an illusion; the proposed method is nothing other than an analog form of the well-known filter banks, whose theory is consistent with the (correct) definition of the Nyquist frequency; and in general, the method cannot be used with the zero-order hold operation, unless the filters are designed with frequency-dependent magnitude to compensate for the spectrum alteration produced by the sample-and-hold operation.
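
The "spectrum alteration produced by the sample-and-hold operation" referred to above is the familiar sinc-shaped rolloff of a zero-order hold; the sketch below tabulates that rolloff and the in-band boost a compensating filter would need. The sampling rate and evaluation frequencies are illustrative assumptions.

```python
import numpy as np

fs = 8000.0                                   # sampling rate (Hz), assumed for the demo
f = np.linspace(0.0, fs / 2, 5)               # a few frequencies up to the Nyquist frequency
zoh_gain = np.abs(np.sinc(f / fs))            # |H_zoh(f)| = |sin(pi f/fs) / (pi f/fs)|
compensation = 1.0 / zoh_gain                 # magnitude a compensating filter needs in-band

for fi, g, c in zip(f, zoh_gain, compensation):
    print(f"{fi:7.1f} Hz   ZOH gain {g:5.3f}   required boost {c:5.3f}")
```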