
Showing papers in "IEEE Transactions on Signal Processing in 2000"


Journal ArticleDOI
TL;DR: This link facilitates the derivation of powerful identifiability results for MI-SAP, shows that the uniqueness of single- and multiple-invariance ESPRIT stems from the uniqueness of low-rank decomposition of three-way arrays, and allows tapping into the available expertise for fitting the PARAFAC model.
Abstract: This paper links multiple invariance sensor array processing (MI-SAP) to parallel factor (PARAFAC) analysis, which is a tool rooted in psychometrics and chemometrics. PARAFAC is a common name for low-rank decomposition of three- and higher-way arrays. This link facilitates the derivation of powerful identifiability results for MI-SAP, shows that the uniqueness of single- and multiple-invariance ESPRIT stems from the uniqueness of low-rank decomposition of three-way arrays, and allows tapping into the available expertise for fitting the PARAFAC model. The results are applicable to both data-domain and subspace MI-SAP formulations. The paper also includes a constructive uniqueness proof for a special PARAFAC model.
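
As a rough illustration of the PARAFAC fitting the abstract refers to, here is a minimal numpy sketch of alternating least squares for a rank-R three-way array; the array sizes, rank, and random data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def parafac_als(X, R, n_iter=200):
    """Fit a rank-R PARAFAC model X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((dim, R)) for dim in (I, J, K))

    # Mode-n unfoldings (rows indexed by the kept mode, last original index fastest)
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)

    def khatri_rao(U, V):
        # Column-wise Kronecker product: row (u, v) -> U[u, :] * V[v, :]
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic check: a noiseless rank-3 three-way array is recovered almost exactly
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal(s) for s in [(8, 3), (7, 3), (6, 3)])
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac_als(X, 3)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```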

625 citations


Journal ArticleDOI
TL;DR: This work proposes a novel solution called partial encryption, in which a secure encryption algorithm is used to encrypt only part of the compressed data, resulting in a significant reduction in encryption and decryption time.
Abstract: The increased popularity of multimedia applications places a great demand on efficient data storage and transmission techniques. Network communication, especially over a wireless network, can easily be intercepted and must be protected from eavesdroppers. Unfortunately, encryption and decryption are slow, and it is often difficult, if not impossible, to carry out real-time secure image and video communication and processing. Methods have been proposed to combine compression and encryption together to reduce the overall processing time, but they are either insecure or too computationally intensive. We propose a novel solution called partial encryption, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied to several image and video compression algorithms in this paper. Only 13-27% of the output from quadtree compression algorithms is encrypted for typical images, and less than 2% is encrypted for 512×512 images compressed by the set partitioning in hierarchical trees (SPIHT) algorithm. The results are similar for video compression, resulting in a significant reduction in encryption and decryption time. The proposed partial encryption schemes are fast, secure, and do not reduce the compression performance of the underlying compression algorithm.
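
A toy sketch of the partial-encryption idea follows. Everything here is a stand-in: zlib replaces the quadtree/SPIHT codecs, the 20% split point is arbitrary, and the SHA-256-based keystream is only a placeholder for the secure cipher the paper assumes; in the paper it is the structurally critical portion of the bitstream that gets encrypted.

```python
import zlib, hashlib, os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Illustrative keystream from SHA-256; a real deployment would use a proper cipher (e.g., AES-CTR)."""
    out, counter = bytearray(), 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def partial_encrypt(data: bytes, key: bytes, fraction: float = 0.2):
    """Compress, then encrypt only the leading `fraction` of the compressed stream."""
    compressed = zlib.compress(data)
    cut = int(len(compressed) * fraction)
    nonce = os.urandom(16)
    head = bytes(a ^ b for a, b in zip(compressed[:cut], keystream(key, nonce, cut)))
    return nonce, head + compressed[cut:], cut

def partial_decrypt(nonce: bytes, blob: bytes, cut: int, key: bytes) -> bytes:
    head = bytes(a ^ b for a, b in zip(blob[:cut], keystream(key, nonce, cut)))
    return zlib.decompress(head + blob[cut:])

nonce, blob, cut = partial_encrypt(b"example image payload " * 100, b"secret key")
print(partial_decrypt(nonce, blob, cut, b"secret key")[:22])
```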

612 citations


Journal ArticleDOI
TL;DR: This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite-Gaussian functions, and is exactly unitary, index additive, and reduces to the DFT for unit order.
Abstract: We propose and consolidate a definition of the discrete fractional Fourier transform that generalizes the discrete Fourier transform (DFT) in the same sense that the continuous fractional Fourier transform generalizes the continuous ordinary Fourier transform. This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite-Gaussian functions. The definition is exactly unitary, index additive, and reduces to the DFT for unit order. The fact that this definition satisfies all the desirable properties expected of the discrete fractional Fourier transform supports our confidence that it will be accepted as the definitive definition of this transform.
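
A numpy sketch of this eigenvector-based construction is given below, using the nearly tridiagonal matrix that commutes with the DFT and whose eigenvectors approximate the Hermite-Gaussians. The careful index assignment (and the extra handling of degenerate eigenvalues when 4 divides N) needed to reduce exactly to the DFT at unit order is glossed over here; unitarity and index additivity hold by construction.

```python
import numpy as np

def dfrft_matrix(N, a):
    """Discrete fractional Fourier transform matrix of order a (a = 1 corresponds to the DFT)."""
    n = np.arange(N)
    # Nearly tridiagonal matrix commuting with the DFT; its eigenvectors
    # approximate samples of the Hermite-Gaussian functions.
    S = np.diag(2 * np.cos(2 * np.pi * n / N) - 4)
    S[n, (n + 1) % N] += 1
    S[n, (n - 1) % N] += 1
    evals, evecs = np.linalg.eigh(S)
    V = evecs[:, ::-1]            # descending eigenvalue ~ increasing Hermite index
    # Hermite indices: 0..N-1 for odd N; 0..N-2 plus N for even N
    k = np.arange(N) if N % 2 else np.append(np.arange(N - 1), N)
    return (V * np.exp(-1j * np.pi * a * k / 2)) @ V.T

# Sanity checks: exactly unitary and index additive
N = 17
F_half = dfrft_matrix(N, 0.5)
print(np.allclose(F_half @ F_half.conj().T, np.eye(N)))    # unitary
print(np.allclose(F_half @ F_half, dfrft_matrix(N, 1.0)))  # orders 0.5 + 0.5 = 1
```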

604 citations


Journal ArticleDOI
TL;DR: This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics, and derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to non-blind minimum mean-squared error (MMSE).
Abstract: This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to non-blind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity-combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (interchip interference), DOA-calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (1977, 1988) (and generalized herein to the complex-valued case) that guarantees identifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.

590 citations


Journal ArticleDOI
TL;DR: A new theoretical framework is introduced for analyzing the performance of a finite length minimum-mean-square error decision feedback equalizer (MMSE-DFE) in a multi-input multi-output (MIMO) environment and quantifies the diversity performance improvement as a function of the number of transmit/receive antennas and equalizer taps.
Abstract: A new theoretical framework is introduced for analyzing the performance of a finite length minimum-mean-square error decision feedback equalizer (MMSE-DFE) in a multi-input multi-output (MIMO) environment. The framework includes transmit and receive diversity systems as special cases and quantifies the diversity performance improvement as a function of the number of transmit/receive antennas and equalizer taps. Fast and parallelizable algorithms for computing the finite-length MIMO MMSE-DFE are presented for three common multi-user detection scenarios.

360 citations


Journal ArticleDOI
TL;DR: An upper bound for the number of detectable chirp components using the DCFT is provided in terms of signal length and signal and noise powers, and it is shown that the N-point DCFT performs optimally when N is a prime.
Abstract: The discrete Fourier transform (DFT) has found tremendous applications in almost all fields, mainly because it can be used to match the multiple frequencies of a stationary signal with multiple harmonics. In many applications, wideband and nonstationary signals, however, often occur. One of the typical examples of such signals is chirp-type signals that are usually encountered in radar signal processing, such as synthetic aperture radar (SAR) and inverse SAR imaging. Due to the motion of a target, the radar return signals are usually chirps, and their chirp rates include the information about the target, such as the location and the velocity. In this paper, we study the discrete chirp-Fourier transform (DCFT), which is analogous to the DFT. Besides the multiple frequency matching similar to the DFT, the DCFT can be used to match the multiple chirp rates in a chirp-type signal with multiple chirp components. We show that when the signal length N is prime, the magnitudes of all the sidelobes of the DCFT of a quadratic chirp signal are 1, whereas the magnitude of the mainlobe of the DCFT is √N. With this result, an upper bound for the number of detectable chirp components using the DCFT is provided in terms of signal length and signal and noise powers. We also show that the N-point DCFT performs optimally when N is a prime.
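
The mainlobe/sidelobe claim is easy to check numerically; the sketch below uses the DCFT definition with 1/√N normalization (the one the stated magnitudes refer to), and the prime length N and chirp parameters are arbitrary choices.

```python
import numpy as np

N, k0, l0 = 31, 5, 7                                  # prime length and chirp parameters (arbitrary)
n = np.arange(N)
x = np.exp(2j * np.pi * (l0 * n**2 + k0 * n) / N)     # quadratic chirp signal

# DCFT: X[k, l] = (1/sqrt(N)) * sum_n x[n] * exp(-j 2 pi (l n^2 + k n) / N)
k = np.arange(N)[:, None, None]
l = np.arange(N)[None, :, None]
kernel = np.exp(-2j * np.pi * (l * n**2 + k * n) / N)
X = (kernel * x).sum(axis=2) / np.sqrt(N)

mag = np.abs(X)
print("mainlobe |X[k0,l0]| =", mag[k0, l0])           # close to sqrt(N)
mag[k0, l0] = 0
print("largest sidelobe    =", mag.max())             # close to 1 for prime N
```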

329 citations


Journal ArticleDOI
TL;DR: The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems and yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter.
Abstract: The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors. Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.
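
A minimal scikit-learn sketch of the basic idea (an SVM classifying symbols from a window of received samples) is shown below; the channel taps, the quadratic nonlinearity, the two-sample feature window, and the RBF-SVM hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_data(n, snr_db=15):
    """BPSK through a short ISI channel with a mild memoryless nonlinearity and noise."""
    s = rng.choice([-1.0, 1.0], size=n)
    r = np.convolve(s, [1.0, 0.5], mode="full")[:n]   # ISI channel h = [1, 0.5] (illustrative)
    r = r + 0.2 * r**2                                # memoryless nonlinearity (illustrative)
    r += rng.normal(scale=10 ** (-snr_db / 20), size=n)
    X = np.column_stack([r[1:], r[:-1]])              # feature = current and previous received sample
    y = s[1:]                                         # label = transmitted symbol
    return X, y

X_train, y_train = make_data(2000)
X_test, y_test = make_data(5000)

clf = SVC(kernel="rbf", C=10.0, gamma=1.0)            # nonlinear decision boundary in feature space
clf.fit(X_train, y_train)
print("symbol error rate:", np.mean(clf.predict(X_test) != y_test))
```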

313 citations


Journal ArticleDOI
TL;DR: An algorithm is proposed that models images by two-dimensional (2-D) hidden Markov models (HMMs) and outperforms CART™, LVQ, and Bayes VQ in classification by context.
Abstract: For block-based classification, an image is divided into blocks, and a feature vector is formed for each block by grouping statistics extracted from the block. Conventional block-based classification algorithms decide the class of a block by examining only the feature vector of this block and ignoring context information. In order to improve classification by context, an algorithm is proposed that models images by two-dimensional (2-D) hidden Markov models (HMMs). The HMM considers feature vectors statistically dependent through an underlying state process assumed to be a Markov mesh, which has transition probabilities conditioned on the states of neighboring blocks from both horizontal and vertical directions. Thus, the dependency in two dimensions is reflected simultaneously. The HMM parameters are estimated by the EM algorithm. To classify an image, the classes with maximum a posteriori probability are searched jointly for all the blocks. Applications of the HMM algorithm to document and aerial image segmentation show that the algorithm outperforms CART™, LVQ, and Bayes VQ.

296 citations


Journal ArticleDOI
TL;DR: A new type of DFRFT is introduced that is unitary, reversible, and flexible; it performs similarly to the continuous fractional Fourier transform (FRFT) and can be efficiently calculated by the FFT.
Abstract: The discrete fractional Fourier transform (DFRFT) is the generalization of the discrete Fourier transform. Many types of DFRFT have been derived and are useful for signal processing applications. We introduce a new type of DFRFT, which is unitary, reversible, and flexible; in addition, a closed-form analytic expression can be obtained. Its performance is similar to that of the continuous fractional Fourier transform (FRFT), and it can be efficiently calculated by the FFT. Since the continuous FRFT can be generalized into the continuous affine Fourier transform (AFT) (the so-called canonical transform), we also extend the DFRFT into the discrete affine Fourier transform (DAFT). We derive two types of the DFRFT and DAFT. Type 1 is similar to the continuous FRFT and AFT and can be used for computing the continuous FRFT and AFT. Type 2 is the improved form of type 1 and can be used for other applications of digital signal processing. Meanwhile, many important properties of the continuous FRFT and AFT are kept in the closed-form DFRFT and DAFT, and some applications, such as filter design and pattern recognition, are also discussed. The closed-form DFRFT we introduce has the lowest complexity among all current DFRFTs that are still similar to the continuous FRFT.

287 citations


Journal ArticleDOI
TL;DR: A real-valued (unitary) formulation of the popular root-MUSIC direction-of-arrival (DOA) estimation technique is considered; the analysis shows identical asymptotic properties of the two algorithms in the case of uncorrelated sources and better performance of unitary root-MUSIC in scenarios with partially correlated or fully coherent sources.
Abstract: A real-valued (unitary) formulation of the popular root-MUSIC direction-of-arrival (DOA) estimation technique is considered. This unitary root-MUSIC algorithm reduces the computational complexity in the eigenanalysis stage of root-MUSIC because it exploits the eigendecomposition of a real-valued covariance matrix. The asymptotic performance of unitary root-MUSIC is analyzed and compared with that of conventional root-MUSIC. The results of this comparison show identical asymptotic properties of both algorithms in the case of uncorrelated sources and a better performance of unitary root-MUSIC in scenarios with partially correlated or fully coherent sources. Additionally, our simulations and the results of sonar and ultrasonic real data processing demonstrate an improved threshold performance of unitary root-MUSIC relative to conventional root-MUSIC. It can then be recommended that, as a rule, the unitary root-MUSIC technique should be preferred by the user to the conventional root-MUSIC algorithm.
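
The computational point, that a unitary transformation turns the complex covariance into a real one whose eigendecomposition is cheaper and implicitly forward-backward averaged, can be sketched as below. The scenario, array size, and the spectral (grid) search are illustrative assumptions; the paper itself roots a polynomial (root-MUSIC) rather than searching a grid.

```python
import numpy as np

def unitary_Q(N):
    """Sparse unitary matrix Q with J Q* = Q; it maps centro-Hermitian matrices to real ones."""
    m = N // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    if N % 2 == 0:
        Q = np.vstack([np.hstack([I, 1j * I]), np.hstack([J, -1j * J])])
    else:
        z = np.zeros((m, 1))
        mid = np.zeros((1, N), dtype=complex)
        mid[0, m] = np.sqrt(2)
        Q = np.vstack([np.hstack([I, z, 1j * I]), mid, np.hstack([J, z, -1j * J])])
    return Q / np.sqrt(2)

def steer(N, theta):
    return np.exp(1j * np.pi * np.arange(N)[:, None] * np.sin(theta))  # half-wavelength ULA

# Two partially correlated sources observed by an 8-element ULA (illustrative scenario)
rng = np.random.default_rng(1)
N, T, L = 8, 200, 2
doas = np.deg2rad([-10.0, 20.0])
s0 = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
s1 = 0.9 * s0 + 0.1 * (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
X = steer(N, doas) @ np.vstack([s0, s1])
X += 0.3 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
R = X @ X.conj().T / T

# Real-valued covariance: Re(Q^H R Q) = Q^H R_fb Q, so forward-backward averaging comes for free
Q = unitary_Q(N)
C = np.real(Q.conj().T @ R @ Q)
w, V = np.linalg.eigh(C)          # real eigendecomposition: the computational saving of the method
En = V[:, :N - L]                 # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 1801))
a = Q.conj().T @ steer(N, grid)   # transformed steering vectors
spec = 1.0 / np.sum(np.abs(En.T @ a) ** 2, axis=0)
pk = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
best = pk[np.argsort(spec[pk])[-L:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))
```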

254 citations


Journal ArticleDOI
TL;DR: A novel viewpoint to the collision resolution problem is introduced for wireless slotted random access networks based on signal separation principles borrowed from signal processing problems, and the protocol's parameters are optimized to maximize the system throughput.
Abstract: A novel viewpoint to the collision resolution problem is introduced for wireless slotted random access networks. This viewpoint is based on signal separation principles borrowed from signal processing problems. The received collided packets are not discarded in this approach but are exploited to extract each individual user packet information. In particular, if k users collide in a given time slot, they repeat their transmission for a total of k times so that k copies of the collided packets are received. Then, the receiver has to resolve a k/spl times/k source mixing problem and separate each individual user. The proposed method does not introduce throughput penalties since it requires only k slots to transmit k colliding packets. Performance issues that are related to the implementation of the collision detection algorithm are studied. The protocol's parameters are optimized to maximize the system throughput.
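
A schematic numpy illustration of the k×k mixing view follows; for brevity the mixing coefficients are assumed known and inverted directly, whereas the paper recovers the packets blindly with signal-separation techniques. The packet format, gains, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_bits = 3, 64                                      # k colliding users, packet length (illustrative)
packets = rng.choice([-1.0, 1.0], size=(k, n_bits))    # BPSK packets of the k colliding users

# Each of the k retransmission slots sees a different complex gain per user
A = (rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((k, n_bits)) + 1j * rng.standard_normal((k, n_bits)))
Y = A @ packets + noise                                # the k received (collided) slots

# With the k x k mixing known (or separated blindly, as in the paper), the k packets are
# recovered from the k collided copies, so only k slots are spent on k packets.
recovered = np.sign(np.real(np.linalg.solve(A, Y)))
print("bit errors:", int(np.sum(recovered != packets)))
```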

Journal ArticleDOI
TL;DR: Bayes-optimal binary quantization for the detection of a shift in mean in a pair of dependent Gaussian random variables is studied, and it is seen that in certain situations, an XOR fusion rule is optimal, and in these cases, the implied decision rule is bizarre.
Abstract: Most results about quantized detection rely strongly on an assumption of independence among random variables. With this assumption removed, little is known. Thus, in this paper, Bayes-optimal binary quantization for the detection of a shift in mean in a pair of dependent Gaussian random variables is studied. This is arguably the simplest meaningful problem one could consider. If results and rules are to be found, they ought to make themselves plain in this problem. For certain problem parametrizations (meaning the signals and correlation coefficient), optimal quantization is achievable via a single threshold applied to each observation-the same as under independence. In other cases, one observation is best ignored or is quantized with two thresholds; neither behavior is seen under independence. Further, and again in distinction from the case of independence, it is seen that in certain situations, an XOR fusion rule is optimal, and in these cases, the implied decision rule is bizarre. The analysis is extended to the multivariate Gaussian problem.
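
The problem set-up lends itself to a brute-force numerical study, sketched below: one threshold per observation (the paper also considers two-threshold quantizers and ignoring an observation) and an exhaustive search over all 16 binary fusion rules, including XOR. The correlation, mean shift, threshold grid, and Monte Carlo size are arbitrary choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
rho, shift, n = 0.6, 1.0, 100_000                       # correlation, mean shift, sample size (illustrative)
Lc = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z0 = rng.standard_normal((n, 2)) @ Lc.T                 # H0: zero-mean correlated Gaussian pair
z1 = z0 + shift                                         # H1: both means shifted (equal priors assumed)

def quad_probs(z, t1, t2):
    """Probabilities of the four quantizer output pairs for thresholds (t1, t2)."""
    b1, b2 = (z[:, 0] > t1).astype(int), (z[:, 1] > t2).astype(int)
    return np.array([[np.mean((b1 == i) & (b2 == j)) for j in (0, 1)] for i in (0, 1)])

best = None
for t1 in np.linspace(-1.0, 2.0, 21):
    for t2 in np.linspace(-1.0, 2.0, 21):
        p0, p1 = quad_probs(z0, t1, t2), quad_probs(z1, t1, t2)
        for rule in product((0, 1), repeat=4):          # all 16 fusion rules, including AND, OR, XOR
            f = np.array(rule).reshape(2, 2)
            pe = 0.5 * (p0 * f).sum() + 0.5 * (p1 * (1 - f)).sum()
            if best is None or pe < best[0]:
                best = (pe, (t1, t2), rule)

print("min Bayes error %.4f at thresholds %s with fusion rule %s" % best)
```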

Journal ArticleDOI
TL;DR: The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique.
Abstract: A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique.
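
The central claim, that suitably chosen dither makes the total-error moments independent of the input even though the error itself cannot be made independent, is easy to see numerically. The sketch below uses the textbook choices (a uniform midtread quantizer, rectangular-PDF dither of one LSB, triangular-PDF dither of two LSBs); it is a demo, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                                          # quantizer step (1 LSB)
quantize = lambda v: delta * np.round(v / delta)     # uniform midtread quantizer
dc_levels = np.linspace(0.0, delta, 41)              # constant inputs swept across one step
n = 200_000

def error_moment_spread(dither):
    """How much the total-error mean and power vary as the DC input moves across one LSB."""
    m1 = [np.mean(quantize(x + dither) - x) for x in dc_levels]
    m2 = [np.mean((quantize(x + dither) - x) ** 2) for x in dc_levels]
    return np.ptp(m1), np.ptp(m2)

rpdf = rng.uniform(-delta / 2, delta / 2, n)                 # rectangular PDF, 1 LSB wide
tpdf = rpdf + rng.uniform(-delta / 2, delta / 2, n)          # triangular PDF, 2 LSB wide

for name, d in [("no dither", np.zeros(n)), ("RPDF", rpdf), ("TPDF", tpdf)]:
    dm, dp = error_moment_spread(d)
    print(f"{name:9s}: mean-error spread {dm:.4f}, error-power spread {dp:.4f}")
```

With no dither both moments track the input; RPDF dither fixes the mean but not the error power; TPDF dither renders both (up to Monte Carlo noise) input-independent.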

Journal ArticleDOI
TL;DR: A new adaptive short-time Fourier transform algorithm with chirping windows, tailored for near-optimal time-frequency-based IF estimation, demonstrates modest improvements in the threshold SNR over the best fixed-window STFTs.
Abstract: Instantaneous frequency estimation (IFE) arises in a variety of important applications, including FM demodulation. We present here a new time-frequency representation (TFR)-based approach to IFE based on an adaptive short-time Fourier transform (ASTFT). This TFR leads naturally to a type of short-term ML estimator of the IF. To further improve the performance, we apply a multistate hidden Markov model (HMM)-based post-estimation tracker. The end result is up to a 16-dB reduction in the threshold SNR over the frequency discriminator (FD) and an 8-dB improvement over the phase-locked loop (PLL) for a Rayleigh fading channel.

Journal ArticleDOI
TL;DR: An adaptive beamformer that is robust to uncertainty in source direction-of-arrival (DOA) is derived using a Bayesian approach and compared with linearly constrained minimum variance (LCMV) beamformers and with data-driven approaches that attempt to estimate signal characteristics or the steering vector from the data.
Abstract: An adaptive beamformer that is robust to uncertainty in source direction-of-arrival (DOA) is derived using a Bayesian approach. The DOA is assumed to be a discrete random variable with a known a priori probability density function (PDF) that reflects the level of uncertainty in the source DOA. The resulting beamformer is a weighted sum of minimum variance distortionless response (MVDR) beamformers pointed at a set of candidate DOAs, where the relative contribution of each MVDR beamformer is determined from the a posteriori PDF of the DOA conditioned on previously observed data. A simple approximation to the a posteriori PDF results in a straightforward implementation. Performance of the approximate Bayesian beamformer is compared with linearly constrained minimum variance (LCMV) beamformers and data-driven approaches that attempt to estimate signal characteristics or the steering vector from the data.
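
The structure of the beamformer, a posterior-weighted sum of MVDR beamformers pointed at candidate DOAs, is sketched below. The candidate grid, flat prior, and especially the simple single-source-plus-white-noise Gaussian likelihood used to form the posterior are assumptions of this sketch; the paper derives its own approximation to the a posteriori PDF.

```python
import numpy as np

def steer(N, theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))   # half-wavelength ULA

rng = np.random.default_rng(0)
N, T, sigma2 = 10, 100, 0.5
true_doa = np.deg2rad(12.0)                                    # unknown to the beamformer
a_true = steer(N, true_doa)
s = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = np.outer(a_true, s) + noise

# Discrete candidate DOAs with a flat prior expressing the pointing uncertainty
cands = np.deg2rad(np.linspace(5.0, 15.0, 11))
log_post = np.full(len(cands), -np.log(len(cands)))

# Posterior from a simple single-source-plus-white-noise Gaussian likelihood (sketch assumption)
for i, th in enumerate(cands):
    a = steer(N, th)
    Rth = sigma2 * np.eye(N) + np.outer(a, a.conj())           # assumed data covariance at this DOA
    _, logdet = np.linalg.slogdet(Rth)
    log_post[i] += -T * logdet - np.real(np.einsum('it,ij,jt->', X.conj(), np.linalg.inv(Rth), X))
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Bayesian beamformer: posterior-weighted sum of MVDR beamformers at the candidate DOAs
Rhat = X @ X.conj().T / T + 1e-3 * np.eye(N)                   # diagonally loaded sample covariance
w = np.zeros(N, dtype=complex)
for p, th in zip(post, cands):
    a = steer(N, th)
    Ria = np.linalg.solve(Rhat, a)
    w += p * Ria / (a.conj() @ Ria)                            # MVDR weights at this candidate DOA
print("posterior mode (deg):", np.rad2deg(cands[np.argmax(post)]))
print("response toward the true DOA:", abs(w.conj() @ a_true))
```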

Journal ArticleDOI
TL;DR: A filterbank interpretation of various sampling strategies is introduced, which leads to efficient interpolation and reconstruction methods; an identity is developed that leads to new sampling strategies, including an extension of Papoulis' (1977) generalized sampling expansion.
Abstract: This paper introduces a filterbank interpretation of various sampling strategies, which leads to efficient interpolation and reconstruction methods. An identity, which is referred to as the interpolation identity, is developed and is used to obtain particularly efficient discrete-time systems for interpolation of generalized samples as well as a class of nonuniform samples, to uniform Nyquist samples, either for further processing in that form or for conversion to continuous time. The interpolation identity also leads to new sampling strategies including an extension of Papoulis' (1977) generalized sampling expansion.

Journal ArticleDOI
TL;DR: A new approach to spectral estimation is presented, which is based on the use of filter banks as a means of obtaining spectral interpolation data, which replaces standard covariance estimates.
Abstract: Traditional maximum entropy spectral estimation determines a power spectrum from covariance estimates. Here, we present a new approach to spectral estimation, which is based on the use of filter banks as a means of obtaining spectral interpolation data. Such data replaces standard covariance estimates. A computational procedure for obtaining suitable pole-zero (ARMA) models from such data is presented. The choice of the zeros (MA-part) of the model is completely arbitrary. By suitable choices of filter bank poles and spectral zeros, the estimator can be tuned to exhibit high resolution in targeted regions of the spectrum.

Journal ArticleDOI
TL;DR: This work analyzes the convergence behavior of the generalized APA class of algorithms (allowing for arbitrary delay between input vectors) using a simple model for the input signal vectors and shows that the convergence rate is exponential and that it improves as the number of input signal vector used for adaptation is increased.
Abstract: A class of equivalent algorithms that accelerate the convergence of the normalized LMS (NLMS) algorithm, especially for colored inputs, has previously been discovered independently. The affine projection algorithm (APA) is the earliest and most popular algorithm in this class that inherits its name. The usual APA algorithms update weight estimates on the basis of multiple, unit delayed, input signal vectors. We analyze the convergence behavior of the generalized APA class of algorithms (allowing for arbitrary delay between input vectors) using a simple model for the input signal vectors. Conditions for convergence of the APA class are derived. It is shown that the convergence rate is exponential and that it improves as the number of input signal vectors used for adaptation is increased. However, the rate of improvement in performance (time-to-steady-state) diminishes as the number of input signal vectors increases. For a given convergence rate, APA algorithms are shown to exhibit less misadjustment (steady-state error) than NLMS. Simulation results are provided to corroborate the analytical results.
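
A compact numpy sketch of one common form of the APA update (unit delay between the P most recent input vectors, with a small regularization term) is shown below; the filter length, projection order, step size, and AR(1) colored-input model are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, mu, delta = 16, 4, 0.5, 1e-4        # filter length, projection order, step size, regularization
n = 5000

# Colored (AR(1)) input through an unknown FIR system, plus measurement noise
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()
h = rng.standard_normal(M)
x_mat = np.array([u[t - M + 1:t + 1][::-1] if t >= M - 1 else np.zeros(M) for t in range(n)])
d = x_mat @ h + 0.01 * rng.standard_normal(n)

w = np.zeros(M)
for t in range(M - 1 + P, n):
    U = x_mat[t - P + 1:t + 1].T          # M x P matrix of the P most recent input vectors
    e = d[t - P + 1:t + 1] - U.T @ w      # a-priori errors for the P most recent samples
    w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(P), e)

print("weight error norm:", np.linalg.norm(w - h))
```

Setting P = 1 reduces the update to NLMS; increasing P speeds convergence for colored inputs, at the cost of the misadjustment behavior the abstract describes.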

Journal ArticleDOI
TL;DR: A natural measure of the "distance" between two ARMA processes is given, and it is suggested that the metric can be used in at least two circumstances: (i) where the authors have signals arising from various unknown models and (ii) where there are several possible models M_i, all of which are known.
Abstract: Autoregressive-moving-average (ARMA) models seek to express a system function of a discretely sampled process as a rational function in the z-domain. Treating an ARMA model as a complex rational function, we discuss a metric defined on the set of complex rational functions. We give a natural measure of the "distance" between two ARMA processes. The paper concentrates on the mathematics behind the problem and shows that the various algebraic structures endow the choice of metric with some interesting and remarkable properties, which we discuss. We suggest that the metric can be used in at least two circumstances: (i) in which we have signals arising from various models that are unknown (so we construct the distance matrix and perform cluster analysis) and (ii) where there are several possible models M_i, all of which are known, and we wish to find which of these is closest to an observed data sequence modeled as M.

Journal ArticleDOI
TL;DR: Simulations demonstrate the significant performance gain realizable by this novel ESPRIT-based two-dimensional arrival angle estimation scheme for radar and wireless mobile fading-channel communications.
Abstract: Aperture extension (interferometry baseline extension) is achieved in this novel ESPRIT-based two-dimensional (2-D) arrival angle estimation scheme using a sparse (a.k.a., thin or thinned) uniform rectangular array of electromagnetic vector sensors spaced much farther apart than a half-wavelength. An electromagnetic vector sensor is composed of six spatially co-located, orthogonally oriented, diversely polarized antennas, distinctly measuring all six electromagnetic-field components of an incident multisource wavefield. Each incident source's direction of arrival (DOA) is estimated from the source's electromagnetic-field vector component and serves as a coarse reference to disambiguate the cyclic phase ambiguities in ESPRIT's eigenvalues when the intervector sensor spacing exceeds a half wavelength. Simulations demonstrate the significant performance gain realizable by this method for radar and wireless mobile fading-channel communications.

Journal ArticleDOI
TL;DR: A two-step procedure that decouples the estimation of the DOA from that of the angular spread is proposed; it combines a covariance matching algorithm with the extended invariance principle (EXIP).
Abstract: In mobile communications, local scattering in the vicinity of the mobile results in angular spreading as seen from a base station antenna array. In this paper, we consider the problem of estimating the parameters [direction-of-arrival (DOA) and angular spread] of a spatially distributed source, using a uniform linear array (ULA). A two-step procedure that decouples the estimation of the DOA from that of the angular spread is proposed. This method combines a covariance matching algorithm with the use of the extended invariance principle (EXIP). More exactly, the first step makes use of an unstructured model for the part of the covariance matrix that depends on the angular spread. Then, the solution is refined by invoking EXIP. Instead of a 2-D search, the proposed scheme requires two successive 1-D searches. Additionally, the DOA estimate is robust to mismodeling the spatial distribution of the scatterers. A statistical analysis is carried out, and a formula for the asymptotic variance of the estimates is derived. Numerical examples illustrate the performance of the method.

Journal ArticleDOI
TL;DR: The Cramer-Rao bound on the variance of angle-of-arrival estimates for arbitrary additive, independent, identically distributed, symmetric, non-Gaussian noise is presented; the proposed EM approach yields improvement over initial robust estimates and is valid for a wide SNR range.
Abstract: Many approaches have been studied for the array processing problem when the additive noise is modeled with a Gaussian distribution, but these schemes typically perform poorly when the noise is non-Gaussian and/or impulsive. This paper is concerned with maximum likelihood array processing in non-Gaussian noise. We present the Cramer-Rao bound on the variance of angle-of-arrival estimates for arbitrary additive, independent, identically distributed (iid), symmetric, non-Gaussian noise. Then, we focus on non-Gaussian noise modeling with a finite Gaussian mixture distribution, which is capable of representing a broad class of non-Gaussian distributions that include heavy tailed, impulsive cases arising in wireless communications and other applications. Based on the Gaussian mixture model, we develop an expectation-maximization (EM) algorithm for estimating the source locations, the signal waveforms, and the noise distribution parameters. The important problems of detecting the number of sources and obtaining initial parameter estimates for the iterative EM algorithm are discussed in detail. The initialization procedure by itself is an effective algorithm for array processing in impulsive noise. Novel features of the EM algorithm and the associated maximum likelihood formulation include a nonlinear beamformer that separates multiple source signals in non-Gaussian noise and a robust covariance matrix estimate that suppresses impulsive noise while also performing a model-based interpolation to restore the low-rank signal subspace. The EM approach yields improvement over initial robust estimates and is valid for a wide SNR range. The results are also robust to PDF model mismatch and work well with infinite variance cases such as the symmetric stable distributions. Simulations confirm the optimality of the EM estimation procedure in a variety of cases, including a multiuser communications scenario. We also compare with existing array processing algorithms for non-Gaussian noise.

Journal ArticleDOI
TL;DR: The proposed MUSIC-based or MODE-based algorithm improves and generalizes previous disambiguation schemes that populate the thin array grid with identical subarrays, such as electromagnetic vector sensors, underwater acoustic vector hydrophones, or half-wavelength spaced subarrays.
Abstract: A sparse uniform Cartesian-grid array suffers cyclic ambiguity in its Cartesian direction-cosine estimates due to the spatial Nyquist sampling theorem. The proposed MUSIC-based or MODE-based algorithm improves and generalizes previous disambiguation schemes that populate the thin array grid with identical subarrays, such as electromagnetic vector sensors, underwater acoustic vector hydrophones, or half-wavelength spaced subarrays.

Journal ArticleDOI
TL;DR: Two sets of equations are developed that allow the wavelet to be designed directly from the signal of interest; a byproduct is the result that Meyer's spectrum amplitude construction for an orthonormal bandlimited wavelet is not only sufficient but necessary.
Abstract: Algorithms for designing a mother wavelet ψ(x) such that it matches a signal of interest and such that the family of wavelets {2^(-j/2) ψ(2^(-j)x - k)} forms an orthonormal Riesz basis of L²(ℝ) are developed. The algorithms are based on a closed form solution for finding the scaling function spectrum from the wavelet spectrum. Many applications require wavelets that are matched to a signal of interest. Most current design techniques, however, do not design the wavelet directly. They either build a composite wavelet from a library of previously designed wavelets, modify the bases in an existing multiresolution analysis or design a scaling function that generates a multiresolution analysis with some desired properties. In this paper, two sets of equations are developed that allow us to design the wavelet directly from the signal of interest. Both sets impose bandlimitedness, resulting in closed form solutions. The first set derives expressions for continuous matched wavelet spectrum amplitudes. The second set of equations provides a direct discrete algorithm for calculating close approximations to the optimal complex wavelet spectrum. The discrete solution for the matched wavelet spectrum amplitude is identical to that of the continuous solution at the sampled frequencies. An interesting byproduct of this work is the result that Meyer's spectrum amplitude construction for an orthonormal bandlimited wavelet is not only sufficient but necessary. Specific examples are given which demonstrate the performance of the wavelet matching algorithms for both known orthonormal wavelets and arbitrary signals.

Journal ArticleDOI
TL;DR: An adaptive algorithm for blind source separation, called the multiuser kurtosis (MUK) algorithm, is derived; it combines a stochastic gradient update and a Gram-Schmidt orthogonalization procedure to satisfy the criterion's whiteness constraints.
Abstract: We consider the problem of recovering blindly (i.e., without the use of training sequences) a number of independent and identically distributed source (user) signals that are transmitted simultaneously through a linear instantaneous mixing channel. The received signals are, hence, corrupted by interuser interference (IUI), and we can model them as the outputs of a linear multiple-input-multiple-output (MIMO) memoryless system. Assuming the transmitted signals to be mutually independent, i.i.d., and to share the same non-Gaussian distribution, a set of necessary and sufficient conditions for the perfect blind recovery (up to scalar phase ambiguities) of all the signals exists and involves the kurtosis as well as the covariance of the output signals. We focus on a straightforward blind constrained criterion stemming from these conditions. From this criterion, we derive an adaptive algorithm for blind source separation, which we call the multiuser kurtosis (MUK) algorithm. At each iteration, the algorithm combines a stochastic gradient update and a Gram-Schmidt orthogonalization procedure in order to satisfy the criterion's whiteness constraints. A performance analysis of its stationary points reveals that the MUK algorithm is free of any stable undesired local stationary points for any number of sources; hence, it is globally convergent to a setting that recovers them all.
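
A schematic numpy version of the two-step update (a kurtosis-gradient step followed by Gram-Schmidt re-orthonormalization, applied to prewhitened instantaneous mixtures) is sketched below. The source model, step size, and batch-gradient form are illustrative assumptions; this is not a line-by-line reproduction of the MUK algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_samp = 3, 20_000

# Sub-Gaussian i.i.d. sources (BPSK-like), mixed by an unknown instantaneous MIMO channel
S = rng.choice([-1.0, 1.0], size=(n_src, n_samp))
H = rng.standard_normal((n_src, n_src))
X = H @ S

# Prewhiten the received signals so that an orthonormal W suffices for separation
Rx = X @ X.T / n_samp
d, E = np.linalg.eigh(Rx)
Xw = (E / np.sqrt(d)) @ E.T @ X

W = np.linalg.qr(rng.standard_normal((n_src, n_src)))[0]
mu = 0.05
sgn = np.sign(np.mean(S[0] ** 4) - 3)   # sign of the (common) source kurtosis; in a blind
                                        # setting this comes from the assumed source distribution

def gram_schmidt(W):
    for i in range(W.shape[0]):
        for j in range(i):
            W[i] -= (W[i] @ W[j]) * W[j]
        W[i] /= np.linalg.norm(W[i])
    return W

for _ in range(200):
    Y = W @ Xw
    grad = (Y ** 3) @ Xw.T / n_samp               # gradient of E[y^4] w.r.t. the rows of W
    W = gram_schmidt(W + mu * sgn * grad)         # kurtosis step, then re-orthonormalize

Y = W @ Xw
# Each output should match one source up to sign (permutation/sign ambiguity is inherent)
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:n_src, n_src:])
print("max |correlation| of each output with the sources:", corr.max(axis=1))
```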

Journal ArticleDOI
TL;DR: The proposed recursive version of the Levenberg-Marquardt algorithm for on-line training of neural nets for nonlinear adaptive filtering has better convergence properties than the other algorithms.
Abstract: The Levenberg-Marquardt algorithm is often superior to other training algorithms in off-line applications. This motivates the proposal of using a recursive version of the algorithm for on-line training of neural nets for nonlinear adaptive filtering. The performance of the suggested algorithm is compared with other alternative recursive algorithms, such as the recursive version of the off-line steepest-descent and Gauss-Newton algorithms. The advantages and disadvantages of the different algorithms are pointed out. The algorithms are tested on some examples, and it is shown that generally the recursive Levenberg-Marquardt algorithm has better convergence properties than the other algorithms.

Journal ArticleDOI
TL;DR: A lattice structure for an M-channel linear-phase perfect reconstruction filter bank (LPPRFB) based on the singular value decomposition (SVD) is introduced, which can be proven to use a minimal number of delay elements and to completely span a large class of LPPRFBs.
Abstract: A lattice structure for an M-channel linear-phase perfect reconstruction filter bank (LPPRFB) based on the singular value decomposition (SVD) is introduced. The lattice can be proven to use a minimal number of delay elements and to completely span a large class of LPPRFBs: all analysis and synthesis filters have the same FIR length, sharing the same center of symmetry. The lattice also structurally enforces both linear-phase and perfect reconstruction properties, is capable of providing fast and efficient implementation, and avoids the costly matrix inversion problem in the optimization process. From a block transform perspective, the new lattice can be viewed as representing a family of generalized lapped biorthogonal transform (GLBT) with an arbitrary number of channels M and arbitrarily large overlap. The relaxation of the orthogonal constraint allows the GLBT to have significantly different analysis and synthesis basis functions, which can then be tailored appropriately to fit a particular application. Several design examples are presented along with a high-performance GLBT-based progressive image coder to demonstrate the potential of the new transforms.

Journal ArticleDOI
TL;DR: The amplitude estimation techniques discussed in this paper do not model the observation noise, and yet, they are all asymptotically statistically efficient and provide a computationally simple and statistically accurate solution to the problem of system identification.
Abstract: This paper considers the problem of amplitude estimation of sinusoidal signals from observations corrupted by colored noise. A relatively large number of amplitude estimators, which encompass least squares (LS) and weighted least squares (WLS) methods, are described. Additionally, filterbank approaches, which are widely used for spectral analysis, are extended to amplitude estimation; more exactly, we consider the matched-filterbank (MAFI) approach and show that by appropriately designing the prefilters, the MAFI approach to amplitude estimation includes the WLS approach. The amplitude estimation techniques discussed in this paper do not model the observation noise, and yet, they are all asymptotically statistically efficient. It is, however, their different finite-sample properties that are of particular interest to this study. Numerical examples are provided to illustrate the differences among the various amplitude estimators. Although amplitude estimation applications are numerous, we focus herein on the problem of system identification using sinusoidal probing signals for which we provide a computationally simple and statistically accurate solution.
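
The simplest member of this family, least squares amplitude estimation at known frequencies, is sketched below; the frequencies, AR(1) colored-noise model, and data length are illustrative, and the WLS/MAFI variants additionally weight by, or prefilter with, an estimate of the noise covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
freqs = np.array([0.1, 0.23])                 # known normalized frequencies (cycles/sample)
amps = np.array([1.0, 0.5])
phases = np.array([0.3, -1.1])

t = np.arange(n)
signal = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))

# Colored (AR(1)) observation noise
e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.8 * e[k - 1] + 0.3 * rng.standard_normal()
y = signal + e

# LS: regress y on cosine/sine pairs at the known frequencies; amplitude = norm of each pair
H = np.column_stack([np.cos(2 * np.pi * f * t) for f in freqs] +
                    [np.sin(2 * np.pi * f * t) for f in freqs])
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
amp_hat = np.hypot(coef[:len(freqs)], coef[len(freqs):])
print("true amplitudes:", amps, " LS estimates:", amp_hat)
```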

Journal ArticleDOI
TL;DR: In antenna array applications, the propagation environment is often more complicated than the ordinarily assumed model of plane wavefronts, so a low-complexity algorithm is suggested for estimating both the DOA and the spread angle of a source subject to local scattering, using a uniform linear array.
Abstract: In antenna array applications, the propagation environment is often more complicated than the ordinarily assumed model of plane wavefronts. Here, a low-complexity algorithm is suggested for estimating both the DOA and the spread angle of a source subject to local scattering, using a uniform linear array. The parameters are calculated from the estimates obtained using a standard algorithm such as root-MUSIC to fit a two-ray model to the data. The algorithm is shown to give consistent estimates, and the statistical performance is studied analytically and through simulations.

Journal ArticleDOI
TL;DR: It is demonstrated through theoretic analysis that in the presence of undernulled interference, the ASB is a pliable false alarm regulatory (FAR) detector that maintains good target sensitivity.
Abstract: A two-dimensional (2-D) adaptive sidelobe blanker (ASB) detection algorithm was developed through experimentation as a means of mitigating false alarms caused by undernulled interference encountered when applying the adaptive matched filter (AMF) in nonhomogeneous environments. The algorithm's utility has been demonstrated empirically. Theoretic performance analyses of the ASB detection algorithm, as well as of the AMF generalized likelihood ratio test (GLRT) and the adaptive cosine estimator (ACE), under nonideal conditions can become fairly intractable rather quickly, especially in an adaptive processing context involving covariance estimation. In this paper, however, we have developed and exploited a theoretic framework through which the performance of these algorithms under nonhomogeneous conditions can be examined theoretically. It is demonstrated through theoretic analysis that in the presence of undernulled interference, the ASB is a pliable false alarm regulatory (FAR) detector that maintains good target sensitivity. A viable method of ASB threshold selection is also presented and demonstrated.