
Showing papers on "Multidimensional signal processing published in 2010"


Journal ArticleDOI
TL;DR: The effect of different events on the EEG signal, and different signal processing methods used to extract the hidden information from the signal are discussed in detail.
Abstract: The EEG (electroencephalogram) signal reflects the electrical activity of the brain. EEG signals are highly random in nature and may contain useful information about the brain state. However, it is very difficult to extract useful information from these signals directly in the time domain just by observing them, as they are basically non-linear and non-stationary in nature. Important features can therefore be extracted for the diagnosis of different diseases using advanced signal processing techniques. In this paper, the effect of different events on the EEG signal, and the different signal processing methods used to extract hidden information from the signal, are discussed in detail. Linear, frequency-domain, time-frequency, and non-linear techniques such as correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (H), different entropies, fractal dimension (FD), higher-order spectra (HOS), phase space plots, and recurrence plots are discussed in detail using a typical normal EEG signal.

449 citations
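As an illustration of the kind of feature extraction this survey covers, the sketch below computes one such measure, the spectral entropy, for a narrowband and a broadband test signal. The signals, sampling rate, and entropy definition are assumptions chosen for the example; the paper itself covers many more features (CD, LLE, HOS, etc.):

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum (one illustrative EEG feature)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()          # normalize to a probability distribution over bins
    psd = psd[psd > 0]             # drop exact zeros to avoid log(0)
    return -np.sum(psd * np.log2(psd))

fs = 256                           # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
sine = np.sin(2 * np.pi * 10 * t)  # narrowband "rhythmic" signal (10 Hz)
noise = rng.standard_normal(t.size)  # broadband "random" signal

# A rhythmic signal concentrates power in few bins (low entropy);
# broadband activity spreads power widely (high entropy)
assert spectral_entropy(noise) > spectral_entropy(sine)
```

The contrast between these two values is the kind of discriminative behavior that makes entropy measures useful for characterizing brain states.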


Journal ArticleDOI
TL;DR: Tensor algebra and multidimensional HR are shown to be central for target localization in a variety of pertinent MIMO radar scenarios, and compared to the classical radar-imaging-based methods such as Capon or MUSIC, these algebraic techniques yield improved performance, especially for closely spaced targets, at modest complexity.
Abstract: Detection and estimation problems in multiple-input multiple-output (MIMO) radar have recently drawn considerable interest in the signal processing community. Radar has long been a staple of signal processing, and MIMO radar presents challenges and opportunities in adapting classical radar imaging tools and developing new ones. Our aim in this article is to showcase the potential of tensor algebra and multidimensional harmonic retrieval (HR) in signal processing for MIMO radar. Tensor algebra and multidimensional HR are relatively mature topics, albeit still on the fringes of signal processing research. We show they are in fact central for target localization in a variety of pertinent MIMO radar scenarios. Tensor algebra naturally comes into play when the coherent processing interval comprises multiple pulses, or multiple transmit and receive subarrays are used (multistatic configuration). Multidimensional harmonic structure emerges for far-field uniform linear transmit/receive array configurations, also taking into account Doppler shift; and hybrid models arise in-between. This viewpoint opens the door for the application and further development of powerful algorithms and identifiability results for MIMO radar. Compared to the classical radar-imaging-based methods such as Capon or MUSIC, these algebraic techniques yield improved performance, especially for closely spaced targets, at modest complexity.

294 citations


Journal ArticleDOI
TL;DR: It is shown that analog wavelet transform is successfully implemented in biomedical signal processing for design of low-power pacemakers and also in ultra-wideband (UWB) wireless communications.

214 citations


Journal ArticleDOI
TL;DR: Four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for each noise type by the mean filter, median filter and Wiener filter.
Abstract: Image processing is basically the use of computer algorithms to process digital images. Digital image processing is a part of digital signal processing, and it has many significant advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Wavelet transforms have become a very powerful tool for de-noising an image. One of the most popular methods is the Wiener filter. In this work, four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for each noise type by the mean filter, median filter and Wiener filter. The results are then compared across all noise types.

203 citations
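A minimal sketch of the median-filter branch of such a comparison, using a hand-rolled 3×3 filter on a synthetic salt & pepper image. The image, noise density, and error metric here are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; border pixels are left unchanged for simplicity."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(1)
clean = np.full((32, 32), 128.0)               # flat synthetic test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05          # 5% salt & pepper impulses
noisy[mask] = rng.choice([0.0, 255.0], mask.sum())

denoised = median_filter(noisy)
# Median filtering removes isolated impulses, so the mean absolute error drops
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
assert err_denoised < err_noisy
```

The median filter's strength against impulse noise, versus the mean and Wiener filters' behavior on Gaussian noise, is precisely the kind of trade-off the paper's comparison explores.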


Journal ArticleDOI
TL;DR: A practical scheme to perform the fast Fourier transform in the optical domain is introduced, which performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.
Abstract: A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.

186 citations
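The FFT/IFFT duality that the optical receiver exploits can be sketched digitally: the transmitter multiplexes subcarriers with an inverse FFT, and the receiver demultiplexes them with a forward FFT. The 64-subcarrier QPSK signal below is an assumption for the example; in the paper, the receiver-side FFT is performed optically at speeds beyond electronic processing:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sc = 64                                       # assumed number of subcarriers
# Random QPSK symbols, one per subcarrier
bits = rng.integers(0, 4, n_sc)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Transmitter: the inverse FFT multiplexes all subcarriers into one time-domain block
tx = np.fft.ifft(qpsk)

# Receiver: a forward FFT demultiplexes the block back into subcarrier tributaries,
# which can then be processed independently at a lower rate
rx = np.fft.fft(tx)
assert np.allclose(rx, qpsk)
```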


Journal ArticleDOI
TL;DR: This paper considers the data expansion required to pass from the plaintext to the encrypted representation of signals, due to the use of cryptosystems operating on very large algebraic structures, and proposes a general composite signal representation.
Abstract: Signal processing tools working directly on encrypted data could provide an efficient solution to application scenarios where sensitive signals must be protected from an untrusted processing device. In this paper, we consider the data expansion required to pass from the plaintext to the encrypted representation of signals, due to the use of cryptosystems operating on very large algebraic structures. A general composite signal representation allowing us to pack together a number of signal samples and process them as a unique sample is proposed. The proposed representation permits us to speed up linear operations on encrypted signals via parallel processing and to reduce the size of the encrypted signal. A case study on 1-D linear filtering shows the merits of the proposed representation and provides some insights regarding the signal processing algorithms best suited to work on the composite representation.

147 citations
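The idea of a composite representation can be sketched without any cryptography: pack several samples into one large integer so that a single arithmetic operation acts on all of them at once. The digit base and sample ranges below are assumptions; in the paper, the packed value would additionally be encrypted (e.g. with an additively homomorphic cryptosystem), so that one encrypted-domain operation processes many samples in parallel:

```python
B = 1 << 16                 # digit base; must exceed every intermediate sample value

def pack(samples):
    """Pack small non-negative samples into one big integer (base-B digits)."""
    v = 0
    for s in reversed(samples):
        v = v * B + s
    return v

def unpack(v, n):
    """Recover n base-B digits from the packed integer."""
    return [(v >> (16 * i)) & (B - 1) for i in range(n)]

x = [3, 10, 250]
y = [7, 20, 5]
# One addition on the packed integers adds all sample pairs in parallel --
# the property exploited to speed up linear operations on encrypted signals
z = pack(x) + pack(y)
assert unpack(z, 3) == [10, 30, 255]
```

The base must be chosen large enough that no digit ever overflows into its neighbor, which is the same constraint the composite representation must respect inside the cryptosystem's algebraic structure.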


Journal ArticleDOI
TL;DR: An ultra-high speed linear spline interpolation (LSI) method for λ-to-k spectral re-sampling that can be easily integrated into most ultrahigh speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks is realized.
Abstract: We realized graphics processing unit (GPU) based real-time 4D (3D+time) signal processing and visualization on a regular Fourier-domain optical coherence tomography (FD-OCT) system with a nonlinear k-space spectrometer. An ultra-high speed linear spline interpolation (LSI) method for lambda-to-k spectral re-sampling is implemented in the GPU architecture, which gives average interpolation speeds of >3,000,000 line/s for 1024-pixel OCT (1024-OCT) and >1,400,000 line/s for 2048-pixel OCT (2048-OCT). The complete FD-OCT signal processing, including lambda-to-k spectral re-sampling, fast Fourier transform (FFT) and post-FFT processing, has been implemented on the GPU. The maximum complete A-scan processing speeds are measured to be 680,000 line/s for 1024-OCT and 320,000 line/s for 2048-OCT, which correspond to a 1 GByte/s processing bandwidth. In our experiment, a 2048-pixel CMOS camera running up to 70 kHz is used as the acquisition device, so the actual imaging speed is camera-limited to 128,000 line/s for 1024-OCT or 70,000 line/s for 2048-OCT. 3D data sets are continuously acquired in real time in 1024-OCT mode, immediately processed, and visualized at up to 10 volumes/second (12,500 A-scans/volume) by either en face slice extraction or ray-casting based volume rendering from a 3D texture mapped in graphics memory. For standard FD-OCT systems, a GPU is the only additional hardware needed to realize this improvement and no optical modification is needed. This technique is highly cost-effective and can be easily integrated into most ultrahigh speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.

140 citations
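The lambda-to-k re-sampling step at the heart of the pipeline can be sketched on the CPU with linear interpolation. The spectral range and fringe frequency below are assumptions for the example; the paper's contribution is performing this step (plus the FFT) on the GPU at millions of lines per second:

```python
import numpy as np

# Spectrometer pixels are uniform in wavelength, but the FFT needs samples uniform
# in wavenumber k = 2*pi/lambda, so the raw spectrum is re-sampled by interpolation
n = 1024
lam = np.linspace(800e-9, 880e-9, n)       # assumed spectral range (metres)
k = 2 * np.pi / lam                        # non-uniform, decreasing k grid
fringe = np.cos(2e-4 * k)                  # an interference fringe, smooth in k

k_uniform = np.linspace(k[-1], k[0], n)    # uniform k grid (increasing)
# np.interp requires increasing abscissae, hence the [::-1] reversals
resampled = np.interp(k_uniform, k[::-1], fringe[::-1])

# After re-sampling, the fringe matches its analytic form on the uniform grid
assert np.allclose(resampled, np.cos(2e-4 * k_uniform), atol=1e-2)
```

An FFT of `resampled` along k would then yield the depth profile; without the re-sampling step, the nonlinear k spacing would blur the point-spread function at large depths.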


Journal ArticleDOI
TL;DR: GPU-NUFFT provides an accurate approximation to GPU-NUDFT in terms of image quality, but offers >10 times higher processing speed and improved sensitivity roll-off, higher local signal-to-noise ratio and immunity to side-lobe artifacts caused by the interpolation error.
Abstract: We implemented fast Gaussian gridding (FGG)-based non-uniform fast Fourier transform (NUFFT) on the graphics processing unit (GPU) architecture for ultrahigh-speed, real-time Fourier-domain optical coherence tomography (FD-OCT). The Vandermonde matrix-based non-uniform discrete Fourier transform (NUDFT) as well as the linear/cubic interpolation with fast Fourier transform (InFFT) methods are also implemented on GPU to compare their performance in terms of image quality and processing speed. The GPU accelerated InFFT/NUDFT/NUFFT methods are applied to process both the standard half-range FD-OCT and complex full-range FD-OCT (C-FD-OCT). GPU-NUFFT provides an accurate approximation to GPU-NUDFT in terms of image quality, but offers >10 times higher processing speed. Compared with the GPU-InFFT methods, GPU-NUFFT has improved sensitivity roll-off, higher local signal-to-noise ratio and immunity to side-lobe artifacts caused by the interpolation error. Using a high speed CMOS line-scan camera, we demonstrated the real-time processing and display of GPU-NUFFT-based C-FD-OCT at a camera-limited rate of 122 k line/s (1024 pixel/A-scan).

103 citations


Proceedings ArticleDOI
23 Aug 2010
TL;DR: The proposed VAD algorithm demonstrates the simplicity of 1-D LBP processing with low computational complexity and it is shown that distinct LBP features are obtained to identify the voiced and the unvoiced components of speech signals.
Abstract: Local Binary Patterns (LBP) have been used in 2-D image processing for applications such as texture segmentation and feature detection. In this paper a new 1-dimensional local binary pattern (LBP) signal processing method is presented. Speech systems such as hearing aids require fast and computationally inexpensive signal processing. The practical use of LBP based speech processing is demonstrated on two signal processing problems: (i) signal segmentation and (ii) voice activity detection (VAD). Both applications use the underlying features extracted from the 1-D LBP. The proposed VAD algorithm demonstrates the simplicity of 1-D LBP processing with low computational complexity. It is also shown that distinct LBP features are obtained to identify the voiced and the unvoiced components of speech signals.

92 citations
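A minimal 1-D LBP sketch is shown below. The neighbourhood size and the thresholding convention (bit set when a neighbour is greater than or equal to the centre sample) are assumptions for illustration; the paper's exact operator may differ:

```python
import numpy as np

def lbp_1d(x, r=4):
    """1-D LBP: threshold the r preceding and r following neighbours against the
    centre sample and pack the resulting bits into one integer code per sample."""
    codes = []
    for i in range(r, len(x) - r):
        neighbours = np.concatenate([x[i - r:i], x[i + 1:i + r + 1]])
        bits = (neighbours >= x[i]).astype(int)
        codes.append(int(sum(b << j for j, b in enumerate(bits))))
    return np.array(codes)

# A flat (e.g. silent) segment yields the all-ones code, while a sharp peak yields
# code 0 -- distinct codes like these are the features used for segmentation and VAD
flat = np.ones(16)
assert np.all(lbp_1d(flat) == 255)        # all 8 bits set
peak = np.zeros(16)
peak[8] = 1.0
assert lbp_1d(peak)[8 - 4] == 0           # every neighbour is below the peak sample
```

Because each code needs only comparisons and bit shifts, the operator fits the low-complexity constraint of hearing-aid-style hardware that motivates the paper.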


Journal ArticleDOI
TL;DR: An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail, which includes automated peak recognition based on statistical methods.
Abstract: Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra of high dynamic range of peak intensities preserving benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.

80 citations


Journal ArticleDOI
TL;DR: In this paper, a frequency-division fast linear canonical transform algorithm comparable to the Sande-Tukey fast Fourier transform is proposed, and results calculated with an implementation of this algorithm are compared with the corresponding analytic functions.
Abstract: The linear canonical transform provides a mathematical model of paraxial propagation though quadratic phase systems. We review the literature on numerical approximation of this transform, including discretization, sampling, and fast algorithms, and identify key results. We then propose a frequency-division fast linear canonical transform algorithm comparable to the Sande-Tukey fast Fourier transform. Results calculated with an implementation of this algorithm are presented and compared with the corresponding analytic functions.

Book
01 Jan 2010
TL;DR: Methods and Algorithms of Digital Filtering of Signal/Image Processing and Computer Generated Holograms.
Abstract: 1. Introduction.- 2. Optical Signals and Transforms.- 3. Digital Representation of Signals.- 4. Digital Representation of Signal Transformations.- 5. Methods and Algorithms of Digital Filtering.- 6. Fast Algorithms.- 7. Statistical Methods and Algorithms.- 8. Sensor Signal Perfecting, Image Restoration, Reconstruction and Enhancement.- 9. Image Resampling and Geometrical Transformations.- 10. Signal Parameter Estimation and Measurement. Object Localization.- 11. Target Location in Clutter.- 12. Nonlinear Filters in Signal/Image Processing.- 13. Computer Generated Holograms.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a quasicrystals-based irregular sampling strategy to reduce the number of measures needed to recover a signal or an image whose Fourier transform is supported by a compact set with a given measure.
Abstract: This contribution is addressing an issue named in signal processing. Let be a lattice and be the dual lattice. Then the standard Shannon–Nyquist theorem says that any signal f whose Fourier transform is supported by a compact subset can be recovered from the samples if and only if the translated sets are pairwise disjoint. This sufficient condition on K is also necessary. When it is not satisfied may occur. Olevskii and Ulanovskii designed irregular sampling strategies which remedy . Then one can optimally reduce the number of measures needed to recover a signal or an image whose Fourier transform is supported by a compact set K with a given measure. The present contribution is aimed at bridging the gap between this advance on irregular sampling and the theory of quasicrystals.

Journal ArticleDOI
TL;DR: In this paper, a high speed algorithm for computation of frequency-wavenumber (f-k) spectra is developed, and two real-time infrasonic data processing techniques that it makes possible, are described.
Abstract: A high speed algorithm for computation of frequency-wavenumber (f-k) spectra is developed, and two real-time infrasonic data processing techniques that it makes possible are described: (1) Signal detection by a search of f-k space. This process is compared to the N-4 correlator, a broad-band signal detector. The f-k search with a Fisher detector has a theoretical advantage, which we verify in practice. (2) An f-k filter technique for calculating ‘best beam’ estimates. This technique traces the beam containing maximum power, from frequency to frequency through f-k space, and thus allows for wandering of signal velocity and arrival azimuth. This maximum power function is taken as the frequency spectrum of the best beam. In our programs the Fisher statistic of the signal estimate, and the velocity and azimuth, are computed and displayed as functions of frequency. Examples from real data for both processing techniques are discussed.

Book
Jinho Choi1
01 Jan 2010
TL;DR: Various optimal and suboptimal signal combining and detection techniques are explained in the context of multiple-input multiple-output (MIMO) systems, including successive interference cancellation based detection and lattice reduction aided detection.
Abstract: With signal combining and detection methods now representing a key application of signal processing in communication systems, this book provides a range of key techniques for receiver design when multiple received signals are available. Various optimal and suboptimal signal combining and detection techniques are explained in the context of multiple-input multiple-output (MIMO) systems, including successive interference cancellation (SIC) based detection and lattice reduction (LR) aided detection. The techniques are then analyzed using performance analysis tools. The fundamentals of statistical signal processing are also covered, with two chapters dedicated to important background material. With a carefully balanced blend of theoretical elements and applications, this book is ideal for both graduate students and practising engineers in wireless communications.

Journal ArticleDOI
TL;DR: This paper proposes a uniform sampling and reconstruction scheme for a class of signals which are nonbandlimited in FrFT sense, and derives conditions under which exact recovery of parameters of the signal is possible.
Abstract: Sampling theory for continuous time signals which have a bandlimited representation in the fractional Fourier transform (FrFT) domain (a transformation which generalizes the conventional Fourier transform) has blossomed in the recent past. The mechanistic principles behind Shannon's sampling theorem for fractional bandlimited (or fractional Fourier bandlimited) signals are the same as for the Fourier domain case, i.e., sampling (and reconstruction) in the FrFT domain can be seen as an orthogonal projection of a signal onto a subspace of fractional bandlimited signals. As neat as this extension of Shannon's framework is, it inherits the same fundamental limitation that is prevalent in the Fourier regime: what happens if the signals have singularities in the time domain (or the signal has a nonbandlimited spectrum)? In this paper, we propose a uniform sampling and reconstruction scheme for a class of signals which are nonbandlimited in the FrFT sense. Specifically, we assume that samples of a smoothed version of a periodic stream of Diracs (which is sparse in the time domain) are accessible. In its parametric form, this signal has a finite number of degrees of freedom per unit time. Based on the representation of this signal in the FrFT domain, we derive conditions under which exact recovery of the parameters of the signal is possible. Knowledge of these parameters leads to exact reconstruction of the original signal.

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work proposes the use of Kronecker product matrices in CS to use such matrices as sparsifying bases that jointly model the different types of structure present in the signal.
Abstract: Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While CS literature has mostly focused on problems involving 1-D and 2-D signals, many important applications involve signals that are multidimensional. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed measurement settings can be easily expressed as Kronecker products. This new formulation enables the derivation of analytical bounds for sparse approximation and CS recovery of multidimensional signals.
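The separability identity underlying the Kronecker formulation can be checked directly: measuring each dimension of a 2-D signal separately is equivalent to one global Kronecker-structured measurement matrix acting on the vectorized signal. The dimensions below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))        # measurement operator along dimension 1
B = rng.standard_normal((5, 8))        # measurement operator along dimension 2
X = rng.standard_normal((8, 6))        # 2-D signal

# vec(B @ X @ A.T) = kron(A, B) @ vec(X), with column-major (Fortran) vectorization:
lhs = np.kron(A, B) @ X.flatten(order="F")   # global Kronecker measurement
rhs = (B @ X @ A.T).flatten(order="F")       # separable per-dimension measurement
assert np.allclose(lhs, rhs)
```

This identity is what lets distributed, per-dimension measurements be analyzed as a single CS system, enabling the paper's recovery bounds for multidimensional signals.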

Journal ArticleDOI
TL;DR: In this article, a regular acquisition grid that minimizes the mixing between the unknown spectrum of the well-sampled signal and aliasing artifacts is proposed to recover 2D signals that are band-limited in one spatial dimension.
Abstract: Random sampling can lead to algorithms in which the Fourier reconstruction is almost perfect when the underlying spectrum of the signal is sparse or band-limited. Conversely, regular sampling often hampers the Fourier data recovery methods. However, 2D signals that are band-limited in one spatial dimension can be recovered by designing a regular acquisition grid that minimizes the mixing between the unknown spectrum of the well-sampled signal and aliasing artifacts. This concept can be easily extended to higher dimensions and used to define potential strategies for acquisition-guided Fourier reconstruction. The wavenumber response of various sampling operators is derived and sampling conditions for optimal Fourier reconstruction are investigated using synthetic and real data examples.

Journal ArticleDOI
TL;DR: The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing.
Abstract: The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.

Journal ArticleDOI
Sang Bo Han1
TL;DR: In this paper, an effective and simple way to reconstruct displacement signal from a measured acceleration signal is proposed, which utilizes curve-fitting around the significant frequency components of the Fourier transform of the acceleration signal before it is inverse-Fourier transformed.
Abstract: An effective and simple way to reconstruct displacement signal from a measured acceleration signal is proposed in this paper. To reconstruct displacement signal by means of double-integrating the time domain acceleration signal, the Nyquist frequency of the digital sampling of the acceleration signal should be much higher than the highest frequency component of the signal. On the other hand, to reconstruct displacement signal by taking the inverse Fourier transform, the magnitude of the significant frequency components of the Fourier transform of the acceleration signal should be greater than the 6 dB increment line along the frequency axis. With a predetermined resolution in time and frequency domain, determined by the sampling rate to measure and record the original signal, reconstructing high-frequency signals in the time domain and reconstructing low-frequency signals in the frequency domain will produce biased errors. Furthermore, because of the DC components inevitably included in the sampling process, low-frequency components of the signals are overestimated when displacement signals are reconstructed from the Fourier transform of the acceleration signal. The proposed method utilizes curve-fitting around the significant frequency components of the Fourier transform of the acceleration signal before it is inverse-Fourier transformed. Curve-fitting around the dominant frequency components provides much better results than simply ignoring the insignificant frequency components of the signal.
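The frequency-domain double integration the paper builds on can be sketched as follows: divide the acceleration spectrum by (i·2πf)² = -(2πf)² and inverse transform, zeroing the DC bin. The signal and sampling parameters are invented for the example, and the paper's actual contribution, curve-fitting around the dominant frequency components, is not reproduced here:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 4, 1 / fs)               # 4 s record: an integer number of cycles
w = 2 * np.pi * 2.0                        # 2 Hz motion
disp_true = np.sin(w * t)
accel = -w**2 * np.sin(w * t)              # analytic second derivative

A = np.fft.rfft(accel)
f = np.fft.rfftfreq(t.size, 1 / fs)
D = np.zeros_like(A)
D[1:] = -A[1:] / (2 * np.pi * f[1:])**2    # divide by (i*2*pi*f)^2, skipping DC
disp = np.fft.irfft(D, n=t.size)

assert np.allclose(disp, disp_true, atol=1e-6)
```

The zeroed DC bin is exactly where the low-frequency bias the paper discusses arises: any DC offset picked up during sampling would otherwise be amplified without bound by the 1/f² factor.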

Book
01 Jan 2010
TL;DR: This book discusses digital signal processing in the context of continuous time systems, as well as discrete time Fourier series and transform, and some of the techniques used in this area.
Abstract: 1. Introduction to signals 2. Introduction to systems Part I. Continuous Time Signals and Systems: 3. Time domain analysis of systems 4. Signal representation using Fourier series 5. Continuous-time Fourier transform 6. Laplace transform 7. Continuous-time filters 8. Case studies for CT systems Part II. Discrete Time Signals and Systems: 9. Sampling and quantization 10. Time domain analysis 11. Discrete-time Fourier series and transform 12. Discrete Fourier transform 13. Z-transform 14. Digital filters 15. FIR filter design 16. IIR filter design 17. Applications of digital signal processing Bibliography Appendices: A. Mathematical tables B. Introduction to complex numbers C. Linear constant coefficient differential equations D. Partial fraction expansion E. Introduction to MATLAB F. About the CD-ROM.

Proceedings ArticleDOI
01 Jan 2010
TL;DR: Information theoretic analysis of real EEG signals is presented and it can be established generally that compressive sensing not only compresses but also secures while sampling, which may provide multi-pronged solutions to reduce some systems computational complexity.
Abstract: In a traditional signal processing system, sampling is carried out at a frequency which is at least twice the highest frequency component found in the signal, in order to guarantee that complete signal recovery is later possible. The sampled signal can subsequently be subjected to further processing leading to, for example, encryption and compression. This processing can be computationally intensive and, in the case of battery-operated systems, impractically power hungry. Compressive sensing has recently emerged as a new signal sampling paradigm gaining huge attention from the research community. According to this theory, it can potentially be possible to sample certain signals at a lower than Nyquist rate without jeopardizing signal recovery. In practical terms, this may provide multi-pronged solutions to reduce some systems' computational complexity. In this work, information theoretic analysis of real EEG signals is presented that shows the additional benefits of compressive sensing in preserving data privacy. Through this it can be established that compressive sensing not only compresses but also secures while sampling.
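The sub-Nyquist idea can be sketched with a random measurement of a sparse signal. For simplicity, the decoder below solves least squares on the true support; a real CS decoder (OMP, l1 minimization) would have to identify the support from the measurements alone. All sizes and the Gaussian measurement matrix are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 100, 40, 3                   # signal length, measurements (m < n), sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)    # a k-sparse "signal"

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # m < n sub-Nyquist measurements

# Given the support, the coefficients follow from an overdetermined least squares
coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef
assert np.allclose(x_hat, x)           # exact recovery from fewer samples than n
```

The privacy angle the paper studies comes for free in this setup: without knowledge of `Phi`, the measurement vector `y` reveals little about `x`.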

Patent
15 Feb 2010
TL;DR: In this article, a method for signal processing includes distributing an analog input signal to a plurality of processing channels, where the input signal is mixed with a respective periodic waveform including multiple spectral lines, so as to produce a respective baseband signal.
Abstract: A method for signal processing includes distributing an analog input signal to a plurality of processing channels. In each processing channel, the input signal is mixed with a respective periodic waveform including multiple spectral lines, so as to produce a respective baseband signal in which multiple spectral slices of the input signal are superimposed on one another. The baseband signal produced in each of the processing channels is digitized, to produce a set of digital sample sequences that represent the input signal.

Book ChapterDOI
16 Jun 2010
TL;DR: Thorough studies have shown that the estimation and detection tasks in many signal processing and communications applications such as data compression, data filtering, parameter estimation, pattern recognition, and neural analysis can be significantly improved by using the subspace and componentbased methodology.
Abstract: This chapter contains sections titled: Introduction Linear Algebra Review Observation Model and Problem Statement Preliminary Example: Oja's Neuron Subspace Tracking Eigenvectors Tracking Convergence and Performance Analysis Issues Illustrative Examples Concluding Remarks Problems References


Journal ArticleDOI
TL;DR: This work introduces here an extension of Array-OL to deal with states or delays by the way of uniform inter-repetition dependences and shows that this specification language is able to express the main patterns of computation of the intensive signal processing domain.
Abstract: Intensive signal processing applications appear in many application domains such as video processing or detection systems. These applications handle multidimensional data structures (mainly arrays) to deal with the various dimensions of the data (space, time, frequency). A specification language allowing the direct manipulation of these different dimensions with a high level of abstraction is key to handling the complexity of these applications and to benefiting from their massive potential parallelism. The Array-OL specification language is designed to do just that. We introduce here an extension of Array-OL to deal with states or delays by way of uniform inter-repetition dependences. We show that this specification language is able to express the main patterns of computation of the intensive signal processing domain.

Journal ArticleDOI
TL;DR: This paper investigates the design of a field-programmable-gate-array based optical orthogonal frequency-division multiplexing (OFDM) transmitter implementing real-time digital signal processing at 21.4 GSample/s and describes a transmission experiment over 800 and 1600 km of uncompensated standard fiber with negligible optical SNR penalties and bit error rate.
Abstract: In this paper, we investigate the design of a field-programmable-gate-array (FPGA) based optical orthogonal frequency-division multiplexing (OFDM) transmitter implementing real-time digital signal processing at 21.4 GSample/s. The transmitter was utilized to generate 8.34 Gb/s QPSK-OFDM signals for direct detection. We study the impact of the finite resolutions of the inverse fast Fourier transform cores and the digital-to-analog converters on the system performance. Furthermore, we describe a transmission experiment over 800 and 1600 km of uncompensated standard fiber with negligible optical SNR penalties and a bit error rate below 10^-3.

Journal ArticleDOI
TL;DR: The proposed gridding-FFT (GFFT) method increases the processing speed sharply compared with the previously proposed non-uniform Fourier transform, and may speed up application of non-uniform sparse sampling approaches.

Proceedings ArticleDOI
05 May 2010
TL;DR: The self-organizing neural networks are used for pattern vectors classification using a specific statistical criterion proposed to evaluate distances of individual feature vector values from corresponding cluster centers.
Abstract: Signal analysis of multi-channel data forms a specific area of general digital signal processing. The paper is devoted to the application of these methods to electroencephalogram (EEG) signal processing, including signal de-noising, evaluation of its principal components, and segmentation based upon feature detection by both the discrete wavelet transform (DWT) and the discrete Fourier transform (DFT). Self-organizing neural networks are then used for pattern vector classification, using a specific statistical criterion proposed to evaluate the distances of individual feature vector values from the corresponding cluster centers. The results achieved are compared for different data sets and selected mathematical methods to detect and classify signal segment features. The proposed methods are accompanied by an appropriate graphical user interface (GUI) designed in the MATLAB environment.
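The DWT-based feature extraction step can be sketched with a one-level Haar transform, whose detail coefficients localize the abrupt changes used for segmentation. The Haar wavelet and the test signal are assumptions for illustration; the abstract does not specify which wavelet the paper uses:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar DWT."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(sig)
# Detail coefficients are large where the signal changes abruptly -- the kind of
# feature a segmentation stage thresholds -- and the transform is invertible
assert np.allclose(haar_idwt(a, d), sig)
```

In a full pipeline, the per-segment energies of such coefficients would form the feature vectors fed to the self-organizing network for classification.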

Journal ArticleDOI
TL;DR: Estimation procedures are proposed for some mixtures of copula-based densities and are compared in the hidden Markov chain setting, in order to perform statistical unsupervised classification of signals or images.
Abstract: Parametric modeling and estimation of non-Gaussian multidimensional probability density function is a difficult problem whose solution is required by many applications in signal and image processing. A lot of efforts have been devoted to escape the usual Gaussian assumption by developing perturbed Gaussian models such as spherically invariant random vectors (SIRVs). In this work, we introduce an alternative solution based on copulas that enables theoretically to represent any multivariate distribution. Estimation procedures are proposed for some mixtures of copula-based densities and are compared in the hidden Markov chain setting, in order to perform statistical unsupervised classification of signals or images. Useful copulas and SIRV for multivariate signal classification are particularly studied through experiments.