Showing papers in "EURASIP Journal on Advances in Signal Processing in 2004"


Journal ArticleDOI
TL;DR: An introduction proposes a modular scheme of the training and test phases of a speaker verification system, and the most commonly used speech parameterization in speaker verification, namely, cepstral analysis, is detailed.
Abstract: This paper presents an overview of a state-of-the-art text-independent speaker verification system. First, an introduction proposes a modular scheme of the training and test phases of a speaker verification system. Then, the most commonly used speech parameterization in speaker verification, namely, cepstral analysis, is detailed. Gaussian mixture modeling, which is the speaker modeling technique used in most systems, is then explained. A few speaker modeling alternatives, namely, neural networks and support vector machines, are mentioned. Normalization of scores is then explained, as this is a very important step to deal with real-world data. The evaluation of a speaker verification system is then detailed, and the detection error trade-off (DET) curve is explained. Several extensions of speaker verification are then enumerated, including speaker tracking and segmentation by speakers. Then, some applications of speaker verification are proposed, including on-site applications, remote applications, applications relative to structuring audio information, and games. Issues concerning the forensic area are then recalled, as we believe it is very important to inform people about the actual performance and limitations of speaker verification systems. This paper concludes by giving a few research trends in speaker verification for the next couple of years.
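
A minimal sketch of the GMM scoring idea described above, using scikit-learn; the GMM sizes, feature dimension, and the random stand-in data are illustrative assumptions, and a real front end would supply actual cepstral (e.g., MFCC) frames:

```python
# Hypothetical sketch: GMM-based speaker verification scoring. A target
# model and a universal background model (UBM) are fit to cepstral feature
# vectors; the score is the average per-frame log-likelihood ratio.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=32, seed=0):
    """Fit a diagonal-covariance GMM to (n_frames, n_ceps) feature rows."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(features)
    return gmm

def verification_score(test_features, target_gmm, ubm):
    """Average log-likelihood ratio; accept when above a tuned threshold."""
    return target_gmm.score(test_features) - ubm.score(test_features)

# Toy demo with random vectors standing in for real cepstral frames.
rng = np.random.default_rng(0)
ubm = train_gmm(rng.normal(size=(2000, 12)))
target = train_gmm(rng.normal(loc=0.5, size=(500, 12)))
print(verification_score(rng.normal(loc=0.5, size=(200, 12)), target, ubm))
```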

874 citations


Journal ArticleDOI
TL;DR: The overall paradigm for this multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application is introduced.
Abstract: We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.

460 citations


Journal ArticleDOI
TL;DR: This work investigates the problem of bearings-only tracking of manoeuvring targets using particle filters (PFs) and confirms the superiority of the PFs for this difficult nonlinear tracking problem.
Abstract: We investigate the problem of bearings-only tracking of manoeuvring targets using particle filters (PFs). Three different PFs are proposed for this problem, which is formulated as a multiple model tracking problem in a jump Markov system (JMS) framework. The proposed filters are (i) the multiple model PF (MMPF), (ii) the auxiliary MMPF (AUX-MMPF), and (iii) the jump Markov system PF (JMS-PF). The performance of these filters is compared with that of standard interacting multiple model (IMM)-based trackers such as IMM-EKF and IMM-UKF for three separate cases: (i) the single-sensor case, (ii) the multisensor case, and (iii) tracking with hard constraints. A conservative CRLB applicable to this problem is also derived and compared with the RMS error performance of the filters. The results confirm the superiority of the PFs for this difficult nonlinear tracking problem.
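
For readers unfamiliar with the basic machinery, the following is a minimal bootstrap particle filter step for bearings-only tracking under a single constant-velocity model; it is a simplified stand-in for the paper's MMPF/AUX-MMPF/JMS-PF variants, which additionally switch between manoeuvre models, and all noise levels here are assumed for illustration:

```python
# Bootstrap PF step for bearings-only tracking (illustrative parameters).
# State per particle: [x, y, vx, vy]; a sensor at the origin measures the
# bearing z = atan2(y, x) corrupted by Gaussian noise.
import numpy as np

def bootstrap_pf_step(particles, weights, z, rng,
                      dt=1.0, q=0.01, bearing_std=0.02):
    # Predict: constant-velocity motion with process noise on the velocities.
    particles[:, 0] += dt * particles[:, 2]
    particles[:, 1] += dt * particles[:, 3]
    particles[:, 2:] += rng.normal(scale=np.sqrt(q), size=particles[:, 2:].shape)
    # Update: reweight by the bearing likelihood (angle error wrapped).
    pred = np.arctan2(particles[:, 1], particles[:, 0])
    err = np.angle(np.exp(1j * (z - pred)))
    weights = weights * np.exp(-0.5 * (err / bearing_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```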

289 citations


Journal ArticleDOI
TL;DR: Newly developed resampling algorithms for particle filters suitable for real-time implementation that reduce the complexity of both hardware and DSP realization through addressing common issues such as decreasing the number of operations and memory access are described.
Abstract: Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization through addressing common issues such as decreasing the number of operations and memory access. Moreover, the algorithms allow for use of higher sampling frequencies by overlapping in time the resampling step with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and joint detection and estimation in wireless communications. We have demonstrated that the proposed algorithms reduce the complexity without performance degradation.
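
As context for the kind of step being optimized, here is plain systematic resampling, a standard O(N) scheme with a single random draw and sequential memory access; the paper's contribution lies in restructuring and scheduling this step for hardware/DSP realization, which this sketch does not reproduce:

```python
# Systematic resampling: one uniform offset, O(N) work, sequential access.
import numpy as np

def systematic_resample(weights, rng):
    """Return indices of the particles selected by systematic resampling."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                      # guard against floating-point drift
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(0)
w = rng.random(8); w /= w.sum()
print(systematic_resample(w, rng))
```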

248 citations


Journal ArticleDOI
TL;DR: It is argued that the similarity plot encodes a projection of gait dynamics, which is also correspondence-free, robust to segmentation noise, and works well with low-resolution video.
Abstract: Gait is one of the few biometrics that can be measured at a distance, and is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features which at once have sufficient discrimination power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, where a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
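
A sketch of the self-similarity plot at the heart of the method, assuming silhouette frames have already been tracked, cropped, and size-normalized; the distance measure here (mean absolute difference) is one plausible choice, not necessarily the paper's exact one:

```python
# Self-similarity plot of a walking sequence: S[i, j] compares frames i and
# j; periodic gait produces a characteristic lattice of diagonals in S.
import numpy as np

def self_similarity_plot(frames):
    """frames: (T, H, W) array of normalized silhouettes; returns (T, T)."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(float)
    S = np.zeros((T, T))
    for i in range(T):
        S[i] = np.mean(np.abs(flat - flat[i]), axis=1)  # distance to frame i
    return -S   # negate so that larger values mean more similar
```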

162 citations


Journal ArticleDOI
TL;DR: A system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM) and achieves satisfactory results, which compare well with the results of other algorithms that consider only global features.
Abstract: We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
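
A hedged sketch of the DRT front end using scikit-image's radon(); the angle count and the per-projection normalization are illustrative assumptions, and the HMM modeling stage is omitted:

```python
# Discrete Radon transform features for a static signature image: one
# normalized 1D projection profile per angle, suitable as an HMM
# observation sequence.
import numpy as np
from skimage.transform import radon

def drt_observation_sequence(signature_image, n_angles=64):
    """signature_image: 2D array; returns (n_angles, n_bins) profiles."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(signature_image.astype(float), theta=theta, circle=False)
    profiles = sinogram.T                        # one row per angle
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    return profiles / np.maximum(norms, 1e-12)   # per-angle normalization
```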

159 citations


Journal ArticleDOI
TL;DR: A parametric signal processing approach for DNA sequence analysis based on autoregressive (AR) modeling is presented, indicating a high specificity of coding DNA sequences, while AR feature-based analysis helps distinguish between coding and noncoding DNA sequences.
Abstract: A parametric signal processing approach for DNA sequence analysis based on autoregressive (AR) modeling is presented. AR model residual errors and AR model parameters are used as features. The AR residual error analysis indicates a high specificity of coding DNA sequences, while AR feature-based analysis helps distinguish between coding and noncoding DNA sequences. An AR model-based string searching algorithm is also proposed. The effect of several types of numerical mapping rules in the proposed method is demonstrated.
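
A minimal numeric illustration of the AR-modeling step: the base-to-number mapping below is one simple hypothetical rule (the paper compares several), and the AR fit is done by ordinary least squares with the residual error used as a feature:

```python
# AR modeling of a numerically mapped DNA sequence; the mean squared
# residual serves as a coding/noncoding discriminating feature.
import numpy as np

def fit_ar(x, order=8):
    """Least-squares AR fit; returns (coefficients, mean squared residual)."""
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    return a, float(np.mean(resid ** 2))

mapping = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}   # hypothetical rule
seq = "ATGGCGTACGTTAGCATCGATCGATTGACCGATCGAACGT" * 5
x = np.array([mapping[c] for c in seq])
coeffs, err = fit_ar(x)
print("AR residual error:", err)
```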

123 citations


Journal ArticleDOI
TL;DR: A new time-frequency-based EEG seizure detection technique that uses an estimate of the distribution function of the singular vectors associated with the time-frequency distribution of an EEG epoch to characterise the patterns embedded in the signal.
Abstract: The nonstationary and multicomponent nature of newborn EEG seizures tends to increase the complexity of the seizure detection problem. In dealing with this type of problem, time-frequency-based techniques have been shown to outperform classical techniques. This paper presents a new time-frequency-based EEG seizure detection technique. The technique uses an estimate of the distribution function of the singular vectors associated with the time-frequency distribution of an EEG epoch to characterise the patterns embedded in the signal. The estimated distribution functions related to seizure and nonseizure epochs were used to train a neural network to discriminate between seizure and nonseizure patterns.
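
A sketch of the characterization step, with a plain spectrogram standing in for the paper's time-frequency distribution; the number of singular vectors and histogram bins are assumed values:

```python
# Distribution-function features from the singular vectors of an EEG
# epoch's time-frequency image; the feature vector would feed a neural net.
import numpy as np
from scipy.signal import spectrogram

def tfd_singular_features(epoch, fs=256.0, n_vectors=4, n_bins=20):
    _, _, S = spectrogram(epoch, fs=fs, nperseg=64, noverlap=48)
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    feats = []
    for k in range(n_vectors):
        # Histogram estimate of each singular vector's distribution.
        hist, _ = np.histogram(Vt[k], bins=n_bins, range=(-1, 1), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```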

100 citations


Journal ArticleDOI
TL;DR: This paper presents a new class of particle filtering methods that do not assume explicit mathematical forms of the probability distributions of the noise in the system, and are simpler, more robust, and more flexible than standard particle filters.
Abstract: In recent years, particle filtering has become a powerful tool for tracking signals and time-varying parameters of random dynamic systems. These methods require a mathematical representation of the dynamics of the system evolution, together with assumptions of probabilistic models. In this paper, we present a new class of particle filtering methods that do not assume explicit mathematical forms of the probability distributions of the noise in the system. As a consequence, the proposed techniques are simpler, more robust, and more flexible than standard particle filters. Apart from the theoretical development of specific methods in the new class, we provide computer simulation results that demonstrate the performance of the algorithms in the problem of autonomous positioning of a vehicle in a 2-dimensional space.

98 citations


Journal ArticleDOI
TL;DR: An algorithm for the segmentation of fingerprints and a criterion for evaluating the block feature are presented and experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
Abstract: An algorithm for the segmentation of fingerprints and a criterion for evaluating the block feature are presented. The segmentation uses three block features: the block clusters degree, the block mean information, and the block variance. An optimal linear classifier has been trained for the per-block classification, using the criterion of the minimal number of misclassified samples. Morphology has been applied as postprocessing to reduce the number of classification errors. The algorithm is tested on the FVC2002 database: only 2.45% of the blocks are misclassified, and the postprocessing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
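
Two of the three block features are straightforward to compute; the sketch below shows block mean and block variance with an assumed block size and threshold (the clusters-degree feature and the trained linear classifier are omitted):

```python
# Blockwise mean/variance features for fingerprint segmentation; very
# low-variance blocks are typically featureless background.
import numpy as np

def block_features(img, block=16):
    h, w = (d - d % block for d in img.shape)
    tiles = img[:h, :w].astype(float).reshape(
        h // block, block, w // block, block).swapaxes(1, 2)
    return tiles.mean(axis=(2, 3)), tiles.var(axis=(2, 3))

def crude_foreground_mask(img, block=16, var_thresh=100.0):
    """Keep blocks whose gray-level variance exceeds a chosen threshold."""
    _, var = block_features(img, block)
    return var > var_thresh
```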

97 citations


Journal ArticleDOI
TL;DR: This work proposes a two-tier group-oriented fingerprinting scheme where users likely to collude with each other are assigned correlated fingerprints and extends the construction to represent the natural social and geographic hierarchical relationships between users by developing a more flexible tree-structure-based fingerprinting system.
Abstract: Digital fingerprinting of multimedia data involves embedding information in the content signal and offers protection to the digital rights of the content by allowing illegitimate usage of the content to be identified by authorized parties. One potential threat to fingerprinting is collusion, whereby a group of adversaries combine their individual copies in an attempt to remove the underlying fingerprints. Former studies indicate that collusion attacks based on a few dozen independent copies can confound a fingerprinting system that employs orthogonal modulation. However, in practice an adversary is more likely to collude with some users than with other users due to geographic or social circumstances. To take advantage of prior knowledge of the collusion pattern, we propose a two-tier group-oriented fingerprinting scheme where users likely to collude with each other are assigned correlated fingerprints. Additionally, we extend our construction to represent the natural social and geographic hierarchical relationships between users by developing a more flexible tree-structure-based fingerprinting system. We also propose a multistage colluder identification scheme by taking advantage of the hierarchical nature of the fingerprints. We evaluate the performance of the proposed fingerprinting scheme by studying the collusion resistance of a fingerprinting system employing Gaussian-distributed fingerprints. Our results show that the group-oriented fingerprinting system provides superior collusion resistance compared with a system employing orthogonal modulation when knowledge of the potential collusion pattern is available.

Journal ArticleDOI
TL;DR: A study to compare the performance of bearing fault detection using three types of artificial neural networks (ANNs), namely, multilayer perceptron (MLP), radial basis function (RBF) network, and probabilistic neural network (PNN).
Abstract: A study is presented to compare the performance of bearing fault detection using three types of artificial neural networks (ANNs), namely, multilayer perceptron (MLP), radial basis function (RBF) network, and probabilistic neural network (PNN). The time-domain vibration signals of a rotating machine with normal and defective bearings are processed for feature extraction. The extracted features from original and preprocessed signals are used as inputs to all three ANN classifiers: MLP, RBF, and PNN for two-class (normal or fault) recognition. The characteristic parameters, such as the number of hidden-layer nodes of the MLP and the width of the radial basis functions of the RBF network and PNN, are optimized, along with the selection of input features, using genetic algorithms (GA). For each trial, the ANNs are trained with a subset of the experimental data for known machine conditions. The ANNs are tested using the remaining set of data. The procedure is illustrated using the experimental vibration data of a rotating machine with and without bearing faults. The results show the relative effectiveness of the three classifiers in detection of the bearing condition.

Journal ArticleDOI
TL;DR: A general digital signal processing (DSP)-oriented framework where the functional equivalence of these two approaches is systematically elaborated and the conditions of building mixed models are studied.
Abstract: Digital waveguides and finite difference time domain schemes have been used in physical modeling of spatially distributed systems. Both of them are known to provide exact modeling of ideal one-dimensional (1D) band-limited wave propagation, and both of them can be composed to approximate two-dimensional (2D) and three-dimensional (3D) mesh structures. Their equal capabilities in physical modeling have been shown for special cases and have been assumed to cover generalized cases as well. The ability to form mixed models by joining substructures of both classes through converter elements has been proposed recently. In this paper, we formulate a general digital signal processing (DSP)-oriented framework where the functional equivalence of these two approaches is systematically elaborated and the conditions of building mixed models are studied. An example of mixed modeling of a 2D waveguide is presented.
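
The 1D case of the equivalence can be checked numerically: at the Courant limit, the standard FDTD recursion for the ideal string is exact and reproduces the travelling-wave behaviour that a digital waveguide models directly. A small assumed setup:

```python
# Ideal-string FDTD at the Courant limit: y[n+1,m] = y[n,m-1] + y[n,m+1]
# - y[n-1,m], with fixed ends; exact for band-limited 1D wave propagation.
import numpy as np

M, N = 64, 200                               # spatial points, time steps
y = np.zeros(M)
y[M // 2 - 4: M // 2 + 4] = np.hanning(8)    # initial "pluck" displacement
y_prev = y.copy()                            # start at rest

for _ in range(N):
    y_next = np.zeros(M)
    y_next[1:-1] = y[:-2] + y[2:] - y_prev[1:-1]
    y_prev, y = y, y_next

print("displacement energy proxy after %d steps: %.4f" % (N, np.sum(y ** 2)))
```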

Journal ArticleDOI
TL;DR: It is shown that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates of location, and the derived algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
Abstract: Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
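
The weighted-median building block is easy to exhibit. The sketch below solves LAD regression by plain coordinate descent, where each coordinate update is exactly an MLE of location computed as a weighted median; the paper's edge-line coordinate transformation, which accelerates this, is not reproduced here:

```python
# LAD regression via coordinate-wise weighted medians.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def lad_regression(X, y, n_sweeps=50):
    """Minimize sum |y - X b| one coordinate at a time."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # residual without column j
            m = X[:, j] != 0
            b[j] = weighted_median(r[m] / X[m, j], np.abs(X[m, j]))
    return b

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_cauchy(200)
print(lad_regression(X, y))    # close to [1, 2] despite heavy-tailed noise
```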

Journal ArticleDOI
TL;DR: An image retrieval methodology suited for search in large collections of heterogeneous images is presented, which employs a fully unsupervised segmentation algorithm to divide images into regions and endow the indexing and retrieval system with content-based functionalities.
Abstract: An image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions and endow the indexing and retrieval system with content-based functionalities. Low-level descriptors for the color, position, size, and shape of each region are subsequently extracted. These arithmetic descriptors are automatically associated with appropriate qualitative intermediate-level descriptors, which form a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and their relations in a human-centered fashion. When querying for a specific semantic object (or objects), the intermediate-level descriptor values associated with both the semantic object and all image regions in the collection are initially compared, resulting in the rejection of most image regions as irrelevant. Following that, a relevance feedback mechanism, based on support vector machines and using the low-level descriptors, is invoked to rank the remaining potentially relevant image regions and produce the final query results. Experimental results and comparisons demonstrate, in practice, the effectiveness of our approach.

Journal ArticleDOI
TL;DR: A network framework for evaluating the theoretical performance limits of wireless data communication and addresses the problem of providing the best possible service to new users joining the system without affecting existing users, dubbed PhantomNet.
Abstract: We present a network framework for evaluating the theoretical performance limits of wireless data communication. We address the problem of providing the best possible service to new users joining the system without affecting existing users. Since, interference-wise, new users are required to be invisible to existing users, the network is dubbed PhantomNet. The novelty is the generality obtained in this context. Namely, we can deal with multiple users, multiple antennas, and multiple cells on both the uplink and the downlink. The solution for the uplink is effectively the same as for a single-cell system since all the base stations (BSs) simply amount to one composite BS with centralized processing. The optimum strategy, following directly from known results, is successive decoding (SD), where the new user is decoded before the existing users so that the new user's signal can be subtracted out to meet its invisibility requirement. Only the BS needs to modify its decoding scheme in the handling of new users, since existing users continue to transmit their data exactly as they did before the new arrivals. The downlink, even with the BSs operating as one composite BS, is more problematic. With multiple antennas at each BS site, the optimal coding scheme and the capacity region for this channel are unsolved problems. SD and dirty paper (DP) are two schemes previously reported to achieve capacity in special cases. For PhantomNet, we show that DP coding at the BS is equal to or better than SD. The new user is encoded before the existing users so that the interference caused by its signal to existing users is known to the transmitter. Thus the BS modifies its encoding scheme to accommodate new users so that existing users continue to operate as before: they achieve the same rates as before and they decode their signal in precisely the same way as before. The solutions for the uplink and the downlink are particularly interesting in the way they exhibit a remarkable simplicity and an unmistakable, near-perfect, up-down symmetry.

Journal ArticleDOI
TL;DR: Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method, which assumes specific characteristics of the noise-contaminated image component.
Abstract: The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
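
For reference, the MAD baseline the three new methods are compared against can be written in a few lines; the sketch computes a one-level Haar diagonal detail band directly with numpy rather than calling a wavelet library:

```python
# MAD estimate of the noise standard deviation from the finest-scale
# diagonal (HH) wavelet coefficients: sigma_hat = median(|HH|) / 0.6745.
import numpy as np

def mad_noise_std(img):
    img = img.astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    h = min(x.shape[0] for x in (a, b, c, d))
    w = min(x.shape[1] for x in (a, b, c, d))
    hh = (a[:h, :w] - b[:h, :w] - c[:h, :w] + d[:h, :w]) / 2.0  # Haar HH
    return np.median(np.abs(hh)) / 0.6745

rng = np.random.default_rng(0)
print(mad_noise_std(100.0 + rng.normal(scale=5.0, size=(256, 256))))  # ~5
```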

Journal ArticleDOI
TL;DR: A comparison with other windows has shown that a difference in performance exists between the ultraspherical and Kaiser windows, which depends critically on the required specifications.
Abstract: A method for the design of ultraspherical window functions that achieves prescribed spectral characteristics is proposed. The method comprises a collection of techniques that can be used to determine the three independent parameters of the ultraspherical window such that a specified ripple ratio and main-lobe width or null-to-null width along with a user-defined side-lobe pattern can be achieved. Other known two-parameter windows can achieve a specified ripple ratio and main-lobe width; however, their side-lobe pattern cannot be controlled as in the proposed method. A comparison with other windows has shown that a difference in performance exists between the ultraspherical and Kaiser windows, which depends critically on the required specifications. The paper also highlights some applications of the proposed method in the areas of digital beamforming and image processing.

Journal ArticleDOI
TL;DR: A novel algebraic method for the simultaneous estimation of MIMO channel parameters from channel sounder measurements is developed and a multidimensional extension of the RARE algorithm is developed, analyzed, and applied to measurement data recorded with the RUSK vector channel sounder in the 2 GHz band.
Abstract: A novel algebraic method for the simultaneous estimation of MIMO channel parameters from channel sounder measurements is developed. We consider a parametric multipath propagation model with P discrete paths where each path is characterized by its complex path gain, its directions of arrival and departure, time delay, and Doppler shift. This problem is treated as a special case of the multidimensional harmonic retrieval problem. While the well-known ESPRIT-type algorithms exploit shift-invariance between specific partitions of the signal matrix, the rank reduction estimator (RARE) algorithm exploits their internal Vandermonde structure. A multidimensional extension of the RARE algorithm is developed, analyzed, and applied to measurement data recorded with the RUSK vector channel sounder in the 2 GHz band.

Journal ArticleDOI
TL;DR: Gaussian distributed input signals, and PAs that can be modeled by memoryless or memory polynomials are considered, and closed-form expressions of the PA output power spectral density are derived, for an arbitrary nonlinear order, based on the so-called Leonov-Shiryaev formula.
Abstract: The majority of the nonlinearity in a communication system is attributed to the power amplifier (PA) present at the final stage of the transmitter chain. In this paper, we consider Gaussian distributed input signals (such as OFDM), and PAs that can be modeled by memoryless or memory polynomials. We derive closed-form expressions of the PA output power spectral density, for an arbitrary nonlinear order, based on the so-called Leonov-Shiryaev formula. We then apply these results to answer practical questions such as the contribution of AM/PM conversion to spectral regrowth and the relationship between memory effects and spectral asymmetry.
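
The effect is easy to reproduce numerically. The sketch below pushes a band-limited complex Gaussian signal through an assumed memoryless third-order polynomial PA and compares input and output PSDs; the coefficients are illustrative, and the paper's closed-form Leonov-Shiryaev expressions predict this regrowth analytically:

```python
# Spectral regrowth from a memoryless polynomial PA model y = a1*x + a3*x|x|^2.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
n = 2 ** 16
x = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
X = np.fft.fft(x)
X[n // 8: -n // 8] = 0.0                 # band-limit the Gaussian input
x = np.fft.ifft(X)

a1, a3 = 1.0, -0.2 + 0.05j               # hypothetical PA coefficients
y = a1 * x + a3 * x * np.abs(x) ** 2

f, Pxx = welch(x, nperseg=1024, return_onesided=False)
_, Pyy = welch(y, nperseg=1024, return_onesided=False)
oob = np.abs(f) > 0.2                    # frequencies outside the input band
print("mean out-of-band regrowth (dB): %.1f"
      % (10 * np.log10(Pyy[oob].mean() / max(Pxx[oob].mean(), 1e-30))))
```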

Journal ArticleDOI
TL;DR: This paper analyzes one of the recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash, and introduces a new methodology based on three components: the intrapersonal scatter, the interpersonal entropy, and the correlation between both measures.
Abstract: In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

Journal ArticleDOI
TL;DR: This paper explores the use of particle filters for beat tracking in musical audio examples with two alternative algorithms: one performs Rao-Blackwellisation to produce an almost deterministic formulation, while the second models tempo as a Brownian motion process.
Abstract: This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms are presented, one which performs Rao-Blackwellisation to produce an almost deterministic formulation while the second is a formulation which models tempo as a Brownian motion process. The algorithms have been tested on a large and varied database of examples and results are comparable with the current state of the art. The deterministic algorithm gives the better performance of the two algorithms.

Journal ArticleDOI
TL;DR: In this paper, frequency-domain analysis of the genomes of various organisms is performed using tricolor spectrograms, identifying several types of distinct visual patterns characterizing specific DNA regions.
Abstract: We perform frequency-domain analysis in the genomes of various organisms using tricolor spectrograms, identifying several types of distinct visual patterns characterizing specific DNA regions. We relate patterns and their frequency characteristics to the sequence characteristics of the DNA. At times, the spectrogram patterns can be related to the structure of the corresponding protein region by using various public databases such as GenBank. Some patterns are explained from the biological nature of the corresponding regions, which relate to chromosome structure and protein coding, and some patterns have yet unknown biological significance. We found biologically meaningful patterns at scales ranging from millions of base pairs down to a few hundred base pairs. Chromosome-wide patterns include periodicities ranging from 2 to 300. The color of the spectrogram depends on the nucleotide content at specific frequencies, and therefore can be used as a local indicator of CG content and other measures of relative base content. Several smaller-scale patterns are found to represent different types of domains made up of various tandem repeats.
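
A sketch of how such a spectrogram can be assembled: binary indicator sequences for three of the four bases drive the red/green/blue channels of a short-time Fourier magnitude image. The base-to-color assignment is an assumption; the familiar period-3 signature of coding regions appears at normalized frequency 1/3:

```python
# Tricolor DNA spectrogram from nucleotide indicator sequences.
import numpy as np
from scipy.signal import stft

def tricolor_spectrogram(seq, nperseg=256):
    channels = []
    for base in "AGC":                       # hypothetical color assignment
        ind = np.array([1.0 if c == base else 0.0 for c in seq])
        _, _, Z = stft(ind - ind.mean(), nperseg=nperseg)
        channels.append(np.abs(Z))
    rgb = np.stack(channels, axis=-1)        # (freq, time, 3)
    return rgb / rgb.max()                   # scaled to [0, 1] for display
```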

Journal ArticleDOI
TL;DR: Simulation studies reported in this paper indicate that the proposed generalized selection weighted vector filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise.
Abstract: This paper introduces a class of nonlinear multichannel filters capable of removing impulsive noise in color images. The generalized selection weighted vector filter class proposed here constitutes a powerful filtering framework for multichannel signal processing. Previously defined multichannel filters such as the vector median filter, the basic vector directional filter, the directional-distance filter, weighted vector median filters, and weighted vector directional filters are treated from a global viewpoint using the proposed framework. Robust order-statistic concepts and an increased degree of freedom in filter design make the proposed method attractive for a variety of applications. The introduced multichannel sigmoidal adaptation of the filter parameters, together with its modifications, allows the filter parameters to be accommodated to varying signal and noise statistics. Simulation studies reported in this paper indicate that the proposed filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise. This paper is an extended version of the paper by Lukac et al. presented at the 2003 IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03) in Grado, Italy.
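
The simplest member of the family being generalized is the classical vector median filter, sketched below for a square window; the weighted, directional, and sigmoidally adaptive variants modify the distance measure and the per-sample weights:

```python
# Classical vector median filter for color images: within each window,
# output the pixel minimizing the sum of distances to all window pixels.
import numpy as np

def vector_median_filter(img, radius=1):
    """img: (H, W, 3) float array; returns the filtered image."""
    h, w, _ = img.shape
    out = img.copy()
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            win = img[i - radius: i + radius + 1,
                      j - radius: j + radius + 1].reshape(-1, 3)
            dists = np.linalg.norm(win[:, None] - win[None, :], axis=2).sum(1)
            out[i, j] = win[np.argmin(dists)]
    return out
```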

Journal ArticleDOI
TL;DR: This paper investigates partial crosstalk cancellation for upstream VDSL with significantly reduced run-time complexity and presents a number of algorithms which exploit these properties to reduce the complexity of crosstalk cancellation.
Abstract: Crosstalk is a major problem in modern DSL systems such as VDSL. Many crosstalk cancellation techniques have been proposed to help mitigate crosstalk, but whilst they lead to impressive performance gains, their complexity grows with the square of the number of lines within a binder. In binder groups which can carry up to hundreds of lines, this complexity is outside the scope of current implementation. In this paper, we investigate partial crosstalk cancellation for upstream VDSL. The majority of the detrimental effects of crosstalk are typically limited to a small subset of lines and tones. Furthermore, significant crosstalk is often only seen from neighbouring pairs within the binder configuration. We present a number of algorithms which exploit these properties to reduce the complexity of crosstalk cancellation. These algorithms are shown to achieve the majority of the performance gains of full crosstalk cancellation with significantly reduced run-time complexity.

Journal ArticleDOI
TL;DR: Novel algorithms are proposed to select and separately extract all the components of a given multicomponent frequency-modulated (FM) signal using the time-frequency distribution (TFD), and the results are compared with those of the higher-order ambiguity function (HAF) algorithm.
Abstract: We propose novel algorithms to select and separately extract all the components of a given multicomponent frequency-modulated (FM) signal using the time-frequency distribution (TFD). These algorithms do not use any a priori information about the various components. However, their performance depends strongly on the cross-term suppression ability and the high time-frequency resolution of the considered TFD. To illustrate the usefulness of the proposed algorithms, we applied them to the estimation of the instantaneous frequency coefficients of a multicomponent signal, and the results are compared with those of the higher-order ambiguity function (HAF) algorithm. Monte Carlo simulation results show the superiority of the proposed algorithms over the HAF.

Journal ArticleDOI
TL;DR: An effective method is proposed for the estimation of the signal subspace dimension which is able to operate against colored noise with performance superior to that exhibited by the classical information theoretic criteria of Akaike and Rissanen.
Abstract: In order to operate properly, superresolution methods based on orthogonal subspace decomposition, such as multiple signal classification (MUSIC) or estimation of signal parameters by rotational invariance techniques (ESPRIT), need an accurate estimate of the signal subspace dimension, that is, of the number of harmonic components that are superimposed and corrupted by noise. This estimation is particularly difficult when the S/N ratio is low and the statistical properties of the noise are unknown. Moreover, in some applications such as radar imagery, it is very important to avoid underestimating the number of harmonic components which are associated with the target scattering centers. In this paper, we propose an effective method for the estimation of the signal subspace dimension which is able to operate against colored noise with performance superior to that exhibited by the classical information theoretic criteria of Akaike and Rissanen. The capabilities of the new method are demonstrated through computer simulations, and it is proved that, compared with three other methods, it achieves the best trade-off from four points of view: S/N ratio in white noise, frequency band of colored noise, dynamic range of the harmonic component amplitudes, and computing time.
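
For comparison, the classical information-theoretic baseline (here the Rissanen/MDL form of the Wax-Kailath criterion) estimates the dimension from the sorted eigenvalues of the sample covariance matrix; it is exactly its white-noise assumption that degrades under colored noise:

```python
# MDL estimate of the signal subspace dimension from covariance eigenvalues.
import numpy as np

def mdl_order(eigvals, n_snapshots):
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p = len(lam)
    costs = []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))        # geometric mean
        arith = np.mean(tail)                      # arithmetic mean
        costs.append(n_snapshots * (p - k) * np.log(arith / geo)
                     + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(costs))

# Toy check: 3 strong "signal" eigenvalues over a noise floor near 1.
print(mdl_order([50.0, 20.0, 10.0, 1.1, 1.0, 0.95, 0.9, 1.05], 500))  # -> 3
```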

Journal ArticleDOI
TL;DR: A class of linear quasi-orthogonal space-time block codes is constructed that achieves full diversity over quasistatic fading channels for any number of transmit antennas, and an iterative construction of these codes with a practical decoding algorithm is given.
Abstract: We construct a class of linear quasi-orthogonal space-time block codes that achieve full diversity over quasistatic fading channels for any number of transmit antennas. These codes achieve a normalized rate of one symbol per channel use. Constellation rotation is shown to be necessary for the full-diversity feature of these codes. When the number of transmit antennas is a power of 2, these codes are also delay "optimal." The quasi-orthogonal property of the code makes one half of the symbols orthogonal to the other half, and we show that this allows each half to be decoded separately without any loss of performance. We give an iterative construction of these codes with a practical decoding algorithm. Numerical simulations are presented to evaluate the performance of these codes in terms of capacity as well as probability of error versus SNR curves. For some special cases, we compute the pairwise probability of error averaged over all the channel states as a single integral that shows the diversity and coding gain more clearly.
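
The four-antenna, rate-one construction can be written compactly as two Alamouti blocks in an "ABBA" layout, with a constellation rotation on the second symbol pair as the full-diversity condition requires; the rotation angle below is illustrative, not the paper's optimized value:

```python
# Quasi-orthogonal space-time block codeword for four transmit antennas.
import numpy as np

def alamouti(s1, s2):
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def quasi_orthogonal_codeword(symbols, theta=np.pi / 4):
    """symbols: four complex symbols; returns the 4x4 codeword (time x antenna)."""
    s = np.array(symbols, dtype=complex)
    s[2:] *= np.exp(1j * theta)          # rotate the second symbol pair
    A = alamouti(s[0], s[1])
    B = alamouti(s[2], s[3])
    return np.block([[A, B],
                     [B, A]])

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
print(quasi_orthogonal_codeword(qpsk))
```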

Journal ArticleDOI
TL;DR: The use of time-frequency distributions is proposed as a nonlinear signal processing technique that is combined with a pattern recognition approach to identify superimposed transmission modes in a reconfigurable wireless terminal based on software-defined radio techniques.
Abstract: The use of time-frequency distributions is proposed as a nonlinear signal processing technique that is combined with a pattern recognition approach to identify superimposed transmission modes in a reconfigurable wireless terminal based on software-defined radio techniques. In particular, a software-defined radio receiver is described aiming at the identification of two coexistent communication modes: frequency hopping code division multiple access and direct sequence code division multiple access. As a case study, two standards, based on the previous modes and operating in the same band (industrial, scientific, and medical), are considered: IEEE WLAN 802.11b (direct sequence) and Bluetooth (frequency hopping). Neural classifiers are used to obtain identification results. A comparison between two different neural classifiers is made in terms of relative error frequency.

Journal ArticleDOI
TL;DR: A framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm is presented.
Abstract: This paper concerns the conversion of color images to monochromatic form for the purpose of human face recognition. Many face recognition systems operate using monochromatic information alone even when color images are available. In such cases, simple color transformations are commonly used that are not optimal for the face recognition task. We present a framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm. Experimental results are presented for both the well-known eigenface method and for extraction of Gabor-based face features to demonstrate the potential for improved overall system performance. Using a database of 280 images, our experiments using these methods resulted in performance improvements of approximately 4% to 14%.
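
A sketch of the Karhunen-Loève variant of the selection framework: the conversion weights are taken as the first principal component of the RGB pixel distribution gathered from training face images, instead of fixed luminance weights; the sign convention and training data are assumptions:

```python
# Data-driven color-to-grayscale conversion via principal component analysis.
import numpy as np

def kl_gray_weights(training_pixels):
    """training_pixels: (N, 3) RGB rows; returns unit-norm channel weights."""
    centered = training_pixels - training_pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    w = Vt[0]
    return w if w.sum() >= 0 else -w     # fix an arbitrary sign flip

def to_gray(img, w):
    """img: (H, W, 3) array; returns the (H, W) grayscale projection."""
    return img.astype(float) @ w
```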