
Showing papers in "EURASIP Journal on Advances in Signal Processing in 2014"



Journal ArticleDOI
TL;DR: This paper covers some of the state-of-the-art seizure detection and prediction algorithms, provides a comparison between these algorithms, and concludes with future research directions and open problems in this topic.
Abstract: Epilepsy patients experience challenges in daily life due to the precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patients or others, especially when they are using heavy machinery, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals in order to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Therefore, automatic detection of such activity is of great importance. Another potential usage of EEG signal analysis is in the prediction of epileptic activities before they occur, as this will enable the patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights into the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between these algorithms. Finally, we conclude with future research directions and open problems in this topic.

215 citations
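Many of the detection algorithms in this area reduce to extracting features from each EEG window and thresholding or classifying them. As a minimal illustration of that idea (not any specific algorithm from the survey), the sketch below flags windows whose 3-30 Hz band energy far exceeds the recording's median; the band, window length, and threshold factor are arbitrary choices for the synthetic example:

```python
import numpy as np

def band_energy(x, fs, lo, hi):
    """Energy of x in the [lo, hi] Hz band, computed via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum() / len(x)

def detect_windows(eeg, fs, win_s=1.0, band=(3.0, 30.0), factor=5.0):
    """Flag 1-s windows whose band energy exceeds `factor` times the median."""
    n = int(win_s * fs)
    wins = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
    e = np.array([band_energy(w, fs, *band) for w in wins])
    return e > factor * np.median(e)

# Synthetic recording: background noise with a strong 10-Hz burst at 4-6 s.
fs = 256
rng = np.random.default_rng(0)
t = np.arange(10 * fs) / fs
eeg = 0.5 * rng.standard_normal(t.size)
eeg[4 * fs:6 * fs] += 3.0 * np.sin(2 * np.pi * 10.0 * t[4 * fs:6 * fs])
flags = detect_windows(eeg, fs)
```

Real detectors use richer features (wavelet coefficients, entropy, line length) and trained classifiers, but the window-feature-decision structure is the same.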


Journal ArticleDOI
TL;DR: The throat polyp prediction procedure, based on the wavelet packet transform and a support vector machine intelligent algorithm, is deduced, and the correct prediction rate was stable under different numbers of samples and different random measurement matrices.
Abstract: Classification of large-scale data is a key problem in the big data domain. The theory of compressive sensing enables the recovery of a sparse signal from a small set of linear, random projections, which provides a compressive classification method operating directly on the compressed data without reconstruction. In this paper, we collected compressed vowel /a:/ and /i:/ voice signals using compressive sensing for throat polyp detection. The throat polyp prediction procedure based on the wavelet packet transform and a support vector machine intelligent algorithm was deduced. Experiments for throat polyp prediction with the proposed classification algorithm were carried out. The results showed that the correct prediction rate was stable under different numbers of samples and different random measurement matrices.

203 citations
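The key point of compressive classification is that the classifier operates on the random projections themselves, never reconstructing the signal. The sketch below illustrates this with a hypothetical two-class "vowel" problem and a nearest-centroid rule in the measurement domain; the paper's actual pipeline uses wavelet packet features and an SVM, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 32                      # signal length, number of measurements

def make_sample(cls):
    """Hypothetical 'vowel' signal: a class-specific tone plus noise."""
    t = np.arange(n)
    tone = np.sin(2 * np.pi * (5 if cls == 0 else 12) * t / n)
    return tone + 0.1 * rng.standard_normal(n)

# Random measurement matrix: each recording is stored only as m projections.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Nearest-centroid classifier trained directly on compressed measurements,
# with no reconstruction step anywhere.
Y = np.array([Phi @ make_sample(c) for c in (0, 1) for _ in range(20)])
labels = np.repeat([0, 1], 20)
centroids = np.array([Y[labels == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    y = Phi @ x                     # compress, then decide in measurement space
    return int(np.argmin(np.linalg.norm(centroids - y, axis=1)))
```

Because random projections approximately preserve pairwise distances (Johnson-Lindenstrauss), a distance-based rule can work on the 32 measurements instead of the 256-sample signals.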


Journal ArticleDOI
TL;DR: The proposed classification scheme obtained promising results on the two medical image sets and was evaluated on the UCI breast cancer dataset (diagnostic), and a competitive result was obtained.
Abstract: Classification of medical images is an important issue in computer-assisted diagnosis. In this paper, a classification scheme based on a one-class kernel principal component analysis (KPCA) model ensemble has been proposed for the classification of medical images. The ensemble consists of one-class KPCA models trained using different image features from each image class, and a proposed product combining rule was used for combining the KPCA models to produce classification confidence scores for assigning an image to each class. The effectiveness of the proposed classification scheme was verified using a breast cancer biopsy image dataset and a 3D optical coherence tomography (OCT) retinal image set. The combination of different image features exploits the complementary strengths of these different feature extractors. The proposed classification scheme obtained promising results on the two medical image sets. The proposed method was also evaluated on the UCI breast cancer dataset (diagnostic), and a competitive result was obtained.

105 citations
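A product combining rule of the kind described can be illustrated in a few lines: each per-feature model outputs a confidence score per class, the scores are multiplied across feature extractors, and the class with the largest combined score wins. The numbers below are made up purely for illustration:

```python
import numpy as np

# Hypothetical per-feature confidence scores for one test image:
# rows = feature extractors, columns = classes.
scores = np.array([
    [0.7, 0.3],   # e.g. a texture-based model
    [0.6, 0.4],   # e.g. a colour-based model
    [0.8, 0.2],   # e.g. a shape-based model
])

# Product combining rule: multiply the per-feature scores for each class,
# then renormalise so the combined confidences sum to one.
combined = scores.prod(axis=0)
combined = combined / combined.sum()
predicted = int(np.argmax(combined))
```

Multiplying scores rewards classes that every feature extractor agrees on, which is how the ensemble exploits the complementary strengths of the different features.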


Journal ArticleDOI
TL;DR: A new pedestrian dead reckoning (PDR)-based navigation algorithm using the magnetic, angular rate, and gravity sensors available in existing commercial smartphones, consisting of step detection, stride length estimation, and heading estimation.
Abstract: The demand for pedestrian navigation using a hand-held mobile device has increased remarkably over the past few years, especially in GPS-denied scenarios. We propose a new pedestrian dead reckoning (PDR)-based navigation algorithm using the magnetic, angular rate, and gravity (MARG) sensors available in existing commercial smartphones. Our proposed navigation algorithm consists of step detection, stride length estimation, and heading estimation. To eliminate miscounted steps caused by random bouncing motions, we designed a reliable step detection algorithm. We developed a BP neural network-based stride length estimation algorithm that adapts to different users. In response to the challenge of magnetic disturbance, a quaternion-based extended Kalman filter (EKF) is introduced to determine the user's heading direction for each step. The performance of our proposed pedestrian navigation algorithm is verified using a smartphone, providing accurate, reliable, and continuous location tracking services.

101 citations
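Step detection with a guard against double-counting bounces can be sketched as a thresholded crossing count on the acceleration magnitude. This is a simplified stand-in for the paper's detector; the threshold, minimum step gap, and the synthetic 2-step-per-second walking signal are all assumptions:

```python
import numpy as np

def count_steps(acc_mag, fs, min_gap_s=0.3, thresh=1.5):
    """Count steps as upward threshold crossings of the acceleration
    magnitude, enforcing a minimum gap to reject bouncing artefacts."""
    min_gap = int(min_gap_s * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(acc_mag)):
        if acc_mag[i - 1] < thresh <= acc_mag[i] and i - last >= min_gap:
            steps += 1
            last = i
    return steps

# Synthetic walk in units of g: gravity plus one positive acceleration
# bump per step, at 2 steps per second for 5 seconds.
fs = 50
t = np.arange(5 * fs) / fs
acc = 1.0 + 0.8 * np.maximum(np.sin(2 * np.pi * 2.0 * t), 0.0)
```

The minimum-gap constraint is the simplest form of the bounce rejection the abstract refers to: two crossings closer together than a plausible step interval are counted once.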


Journal ArticleDOI
TL;DR: Analytical expressions for the number of extra sensors to be added to a CSA to guarantee that the CSA peak side lobe height is less than that of the full ULA with the same aperture are derived.
Abstract: A coprime sensor array (CSA) is a non-uniform linear array obtained by interleaving two uniform linear arrays (ULAs) that are undersampled by coprime factors. A CSA provides the resolution of a fully populated ULA of the same aperture using fewer sensors. However, the peak side lobe level in a CSA is higher than the peak side lobe of the equivalent full ULA with the same resolution. Adding more sensors to a CSA can reduce its peak side lobe level. This paper derives analytical expressions for the number of extra sensors to be added to a CSA to guarantee that the CSA peak side lobe height is less than that of the full ULA with the same aperture. The analytical expressions are derived and compared for the uniform, Hann, Hamming, and Dolph-Chebyshev shadings.

100 citations
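The CSA geometry itself is easy to reproduce: interleave two ULAs undersampled by coprime factors M and N and inspect the resulting beampattern. The sketch below builds the M = 4, N = 5 array (8 sensors spanning the aperture that a 17-sensor full ULA would cover) with uniform shading; the paper's analysis of extra sensors and tapered shadings is not reproduced:

```python
import numpy as np

def csa_positions(M, N):
    """Positions (in half-wavelength units) of a coprime sensor array:
    two ULAs undersampled by coprime factors M and N, interleaved."""
    return np.unique(np.concatenate([M * np.arange(N), N * np.arange(M)]))

def beampattern(pos, u):
    """Normalised magnitude of the array response over direction cosines u."""
    a = np.exp(1j * np.pi * np.outer(pos, u)).sum(axis=0)
    return np.abs(a) / len(pos)

M, N = 4, 5
pos = csa_positions(M, N)            # 8 sensors spanning an aperture of 16
u = np.linspace(-1.0, 1.0, 2001)
bp = beampattern(pos, u)
```

Plotting `bp` shows the mainlobe width of the full 17-sensor ULA but with higher side lobes, which is exactly the trade-off the paper's extra-sensor expressions address.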


Journal ArticleDOI
TL;DR: A new scheme is proposed that aims to maximally overcome the identified drawbacks of its predecessors while still trying to keep their advantages, and Simulation results illustrate the improvement achieved by the proposal.
Abstract: In the vision of future radio systems, where access to information and sharing of data are to be available anywhere and anytime to anyone for anything, a wide variety of applications and services are envisioned. This naturally calls for a more flexible system to support them. Moreover, the demand for drastically increased data traffic, as well as the reality of spectrum scarcity, will eventually force future spectrum access into a more dynamic fashion. To address these challenges, a powerful and flexible physical layer technology must be prepared, which naturally raises the question of whether the legacy OFDM system can still fit in this context. In fact, during the past years, extensive research effort has been made in this area and several enhanced alternatives have been reported in the literature. Nevertheless, to date, all of the proposed schemes have advantages and disadvantages. In this paper, we give a detailed analysis of these well-known schemes from different aspects and point out their open issues. Then, we propose a new scheme that aims to maximally overcome the identified drawbacks of its predecessors while still trying to keep their advantages. Simulation results illustrate the improvement achieved by our proposal.

81 citations


Journal ArticleDOI
TL;DR: This study uses a true Global Navigation Satellite System tide gauge, installed at the Onsala Space Observatory, and finds that the SNR analysis performs better in rough sea surface conditions than the phase delay analysis.
Abstract: Global Positioning System (GPS) tide gauges have been realized in different configurations, e.g., with one zenith-looking antenna, using the multipath interference pattern for signal-to-noise ratio (SNR) analysis, or with one zenith- and one nadir-looking antenna, analyzing the difference in phase delay, to estimate the sea level height. In this study, for the first time, we use a true Global Navigation Satellite System (GNSS) tide gauge, installed at the Onsala Space Observatory. This GNSS tide gauge records both GPS and Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS) signals, making it possible to use both the one- and two-antenna analysis approaches. Both the SNR analysis and the phase delay analysis were evaluated using dual-frequency GPS and GLONASS signals, i.e., frequencies in the L-band, during a 1-month-long campaign. The GNSS-derived sea level results were compared to independent sea level observations from a co-located pressure tide gauge and show a high correlation for both systems and frequency bands, with correlation coefficients of 0.86 to 0.97. The phase delay results show a better agreement with the tide gauge sea level than the SNR results, with root-mean-square differences of 3.5 cm (GPS L1 and L2) and 3.3/3.2 cm (GLONASS L1/L2 bands) compared to 4.0/9.0 cm (GPS L1/L2) and 4.7/8.9 cm (GLONASS L1/L2 bands). GPS and GLONASS show similar performance in the comparison, and the results prove that for the phase delay analysis, it is possible to use both frequencies, whereas for the SNR analysis, the L2 band should be avoided if other signals are available. Note that standard geodetic receivers using code-based tracking, i.e., tracking the un-encrypted C/A-code on L1 and using the manufacturers' proprietary tracking method for L2, were used. Signals with the new C/A-code on L2, the so-called L2C, were not tracked.
Using wind speed as an indicator for sea surface roughness, we find that the SNR analysis performs better in rough sea surface conditions than the phase delay analysis. The SNR analysis is possible even during the highest wind speed observed during this campaign (17.5 m/s), while the phase delay analysis becomes difficult for wind speeds above 6 m/s.

81 citations
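The one-antenna SNR analysis rests on the fact that the multipath interference pattern oscillates in sin(elevation) with frequency 2h/λ, where h is the antenna height above the reflecting sea surface. The sketch below recovers an assumed height from a noise-free synthetic SNR residual via an FFT; real processing uses Lomb-Scargle-type spectral analysis on unevenly sampled satellite arcs, which is not reproduced here:

```python
import numpy as np

lam = 0.19        # approximate GPS L1 wavelength [m]
h_true = 4.3      # assumed antenna height above the sea surface [m]

# Detrended SNR residual on a uniform grid in sin(elevation): the multipath
# interference pattern oscillates with frequency 2*h/lam cycles per unit.
x = np.linspace(np.sin(np.radians(5)), np.sin(np.radians(30)), 2000)
snr = np.cos(4 * np.pi * h_true / lam * x)

# Dominant oscillation frequency via the FFT, converted back to a height.
spec = np.abs(np.fft.rfft(snr - snr.mean()))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
h_est = freqs[np.argmax(spec)] * lam / 2
```

The FFT bin spacing limits the height resolution to roughly λ/(2 × span of sin(e)), about 0.23 m here, which is why the estimate matches the assumed height only to within a couple of decimetres.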


Journal ArticleDOI
TL;DR: A new self-adaptive algorithm for segmenting human skin regions in color images that learns a local skin color model on the fly and takes advantage of textural features for computing local propagation costs that are used in the distance transform.
Abstract: In this paper, we introduce a new self-adaptive algorithm for segmenting human skin regions in color images. Skin detection and segmentation is an active research topic, and many solutions have been proposed so far, especially concerning skin tone modeling in various color spaces. Such models are used for pixel-based classification, but its accuracy is limited due to high variance and low specificity of human skin color. In many works, skin model adaptation and spatial analysis were reported to improve the final segmentation outcome; however, little attention has been paid so far to the possibilities of combining these two improvement directions. Our contribution lies in learning a local skin color model on the fly, which is subsequently applied to the image to determine the seeds for the spatial analysis. Furthermore, we also take advantage of textural features for computing local propagation costs that are used in the distance transform. The results of an extensive experimental study confirmed that the new method is highly competitive, especially for extracting the hand regions in color images.

73 citations


Journal ArticleDOI
TL;DR: A general method based on polynomial fitting of the HPA characteristics is proposed, and theoretical expressions for the BER are given for any HPA model.
Abstract: In this paper, we introduce an analytical study of the impact of high-power amplifier (HPA) nonlinear distortion (NLD) on the bit error rate (BER) of multicarrier techniques. Two schemes of multicarrier modulations are considered in this work: the classical orthogonal frequency division multiplexing (OFDM) and the filter bank-based multicarrier using offset quadrature amplitude modulation (FBMC/OQAM), including different HPA models. According to Bussgang’s theorem, the in-band NLD is modeled as a complex gain in addition to an independent noise term for a Gaussian input signal. The BER performance of OFDM and FBMC/OQAM modulations, transmitting over additive white Gaussian noise (AWGN) and Rayleigh fading channels, is theoretically investigated and compared to simulation results. For simple HPA models, such as the soft envelope limiter, it is easy to compute the BER theoretical expression. However, for other HPA models or for real measured HPA, BER derivation is generally intractable. In this paper, we propose a general method based on a polynomial fitting of the HPA characteristics and we give theoretical expressions for the BER for any HPA model.

67 citations
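The polynomial-fitting step is straightforward to illustrate. Below, the Rapp model (a common solid-state HPA model, used here purely as an example characteristic) is fitted with an ordinary least-squares polynomial; the BER expressions that the paper builds on such a fit are not reproduced:

```python
import numpy as np

def rapp_am_am(r, g=1.0, a_sat=1.0, p=2.0):
    """Rapp AM/AM characteristic, a common solid-state HPA model:
    linear gain g at low input, saturating smoothly at a_sat."""
    return g * r / (1.0 + (g * r / a_sat) ** (2 * p)) ** (1.0 / (2 * p))

# Sample the characteristic and fit a degree-7 polynomial, as a stand-in
# for fitting a measured HPA curve.
r = np.linspace(0.0, 1.2, 200)
y = rapp_am_am(r)
coeffs = np.polyfit(r, y, deg=7)
rms_err = np.sqrt(np.mean((y - np.polyval(coeffs, r)) ** 2))
```

Once the characteristic is polynomial, the Bussgang gain and distortion-noise moments needed for the BER become analytically tractable, which is the motivation for the fit.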


Journal ArticleDOI
TL;DR: Block term decomposition (BTD) is presented, allowing more variability in the data to be modeled than would be possible with CPD, and various real EEG recordings are shown where BTD outperforms CPD in capturing complex seizure characteristics.
Abstract: Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10–i18, 2007; NeuroImage 37:844–854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(L_r,L_r,1) terms, allowing more variability in the data to be modeled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.

Journal ArticleDOI
TL;DR: The results show that this approach performs practically as well as state-of-the-art OFDM schemes known in the literature, while additionally reducing the sidelobes of the spectrum emission.
Abstract: Generalized frequency division multiplexing (GFDM) is a block filtered multicarrier modulation scheme recently proposed for future wireless communication systems. It generalizes the concept of orthogonal frequency division multiplexing (OFDM), featuring multiple circularly pulse-shaped subsymbols per subcarrier. This paper presents an algorithm for GFDM synchronization and investigates the use of a preamble that consists of two identical parts combined with a windowing process in order to satisfy low out-of-band radiation requirements. The performance of time and frequency estimation, with and without windowing, is evaluated in terms of the statistical properties of residual offsets and the impact on symbol error rate over frequency-selective channels. A flexible metric that quantifies the penalty of misalignments is derived. The results show that this approach performs practically as well as state-of-the-art OFDM schemes known in the literature, while it additionally can reduce the sidelobes of the spectrum emission.

Journal ArticleDOI
TL;DR: It is envisaged that with a proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of water marking schemes.
Abstract: While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present, their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state-of-the-art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model showing different winning scenarios depending on the adversary capabilities. It is envisaged that with a proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.

Journal ArticleDOI
TL;DR: Because all the receiver baseband signal processing functionalities are implemented in the frequency domain, the overall architecture is suitable for multiuser asynchronous transmission on fragmented spectrum.
Abstract: Relaxed synchronization and access to fragmented spectrum are considered for future generations of wireless networks. Frequency division multiple access with filter bank multicarrier (FBMC) modulation provides promising performance without the strict synchronization requirements of conventional orthogonal frequency division multiplexing (OFDM). The architecture of an FBMC receiver suitable for this scenario is considered. Carrier frequency offset (CFO) compensation is combined with intercarrier interference (ICI) cancellation and performs well under very large frequency offsets. Channel estimation and interpolation had to be adapted and proved effective even for heavily fragmented spectrum usage. Channel equalization can sustain large delay spreads. Because all the receiver baseband signal processing functionalities are implemented in the frequency domain, the overall architecture is suitable for multiuser asynchronous transmission on fragmented spectrum.

Journal ArticleDOI
TL;DR: An overview of constrained parallel factor (PARAFAC) models where the constraints model linear dependencies among columns of the factor matrices of the tensor decomposition or, alternatively, the pattern of interactions between different modes of the tensor which are captured by the equivalent core tensor.
Abstract: In this paper, we present an overview of constrained parallel factor (PARAFAC) models where the constraints model linear dependencies among columns of the factor matrices of the tensor decomposition or, alternatively, the pattern of interactions between different modes of the tensor which are captured by the equivalent core tensor. Some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors, which simplifies matricization operations, are first introduced. This Kronecker product-based approach is also formulated in terms of an index notation, which provides an original and concise formalism for both matricizing tensors and writing tensor models. Then, after a brief reminder of PARAFAC and Tucker models, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified framework for Nth-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is then established. Finally, new uniqueness properties of PARATUCK models are deduced from sufficient conditions for essential uniqueness of their associated constrained PARAFAC models.
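Mode-n matricization, the basic operation behind all of these models, can be written in two NumPy lines. Column-ordering conventions differ between authors; the sketch below simply moves the chosen mode to the front and flattens the rest, then verifies the defining property that every unfolding of a rank-1 tensor has matrix rank 1:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: bring `mode` to the front, flatten the rest.
    (Column ordering follows NumPy's row-major flattening convention.)"""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A rank-1 third-order tensor: the outer product of three vectors.
a = np.array([1.0, 2.0])
b = np.array([1.0, -1.0, 3.0])
c = np.array([2.0, 5.0])
T = np.einsum('i,j,k->ijk', a, b, c)

# Every unfolding of a rank-1 tensor is a rank-1 matrix -- the property
# that PARAFAC-type decompositions exploit mode by mode.
U0, U1, U2 = unfold(T, 0), unfold(T, 1), unfold(T, 2)
```

In the paper's Kronecker-based notation the same unfoldings are expressed as factor matrices multiplied by Kronecker products of the remaining factors.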

Journal ArticleDOI
TL;DR: This is the first work to distinguish documents produced by laser printers, inkjet printers, and copiers based on features extracted from individual characters in the documents; the method has an average accuracy of 90% and works with JPEG compression.
Abstract: This paper describes a method to distinguish documents produced by laser printers, inkjet printers, and electrostatic copiers, three commonly used document creation devices. The proposed approach can distinguish between documents produced by these sources based on features extracted from the characters in the documents. Hence, it can also be used to detect tampered documents produced by a mixture of these sources. We analyze the characteristics associated with laser/inkjet printers and electrostatic copiers and determine the signatures created by the different physical and technical processes involved in each type of printing. Based on the analysis of these signatures, we computed the features of noise energy, contour roughness, and average gradient. To the best of our knowledge, this is the first work to distinguish documents produced by laser printers, inkjet printers, and copiers based on features extracted from individual characters in the documents. Experimental results show that this method has an average accuracy of 90% and works with JPEG compression.

Journal ArticleDOI
TL;DR: A two-phase algorithm is proposed to reduce the MSE computation of FIC; it achieves performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations.
Abstract: Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, ranges and domains are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

Journal ArticleDOI
TL;DR: This work compares the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier on real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types with a small training set.
Abstract: Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for the classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) were shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier on real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types with a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.
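Of the compared classifiers, the spectral angle mapper (SAM) is the simplest to state: assign each pixel to the class whose reference spectrum subtends the smallest angle, a rule that is insensitive to illumination scaling. A sketch with made-up four-band spectra:

```python
import numpy as np

def spectral_angle(x, r):
    """Spectral angle (radians) between pixel spectrum x and reference r."""
    c = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(c, -1.0, 1.0))

def sam_classify(pixel, references):
    """Assign the class whose reference spectrum makes the smallest angle."""
    return int(np.argmin([spectral_angle(pixel, r) for r in references]))

# Made-up four-band reference spectra for two hypothetical cover types.
refs = [np.array([0.1, 0.5, 0.9, 0.4]),
        np.array([0.8, 0.6, 0.2, 0.1])]
# A brighter (scaled) version of the class-0 spectrum, plus a small offset:
# SAM still matches it to class 0 because the angle barely changes.
pixel = 2.5 * refs[0] + 0.02
```

With hundreds of AVIRIS bands the vectors are simply longer; the rule is unchanged, which is why SAM serves as a natural baseline against the trained classifiers.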

Journal ArticleDOI
TL;DR: This study utilizes a computationally efficient algorithm to maximize the SU link capacity under power and interference constraints; the SU transmission capacity is seen to depend critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11g WLAN scenario.
Abstract: Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11g WLAN scenario.
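The energy detector at the front of such a pipeline is conceptually simple: compare the measured energy in a channel against a threshold tied to the noise level. The sketch below uses an arbitrary fixed threshold factor; practical detectors set the threshold from a target false-alarm probability, which is not done here:

```python
import numpy as np

def energy_detect(x, noise_var, factor=2.0):
    """Declare the band occupied when the measured mean energy exceeds
    `factor` times the expected noise energy (arbitrary fixed threshold)."""
    return np.mean(np.abs(x) ** 2) > factor * noise_var

rng = np.random.default_rng(3)
noise_var = 1.0
n = 1000

# Noise-only band vs. a band containing a primary-user tone of power 2.
noise_only = np.sqrt(noise_var) * rng.standard_normal(n)
with_pu = noise_only + 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(n))
```

In the wideband setting of the paper, a filter bank splits the spectrum into subchannels and this per-channel decision is applied to each of them.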

Journal ArticleDOI
TL;DR: A joint time-delay and channel estimator to assess the achievable positioning performance of the Long Term Evolution (LTE) system in multipath channels with a novel channel parameterization able to characterize close-in multipath.
Abstract: This paper presents a joint time-delay and channel estimator to assess the achievable positioning performance of the Long Term Evolution (LTE) system in multipath channels. LTE is a promising technology for localization in urban and indoor scenarios, but its performance is degraded due to the effect of multipath. In those challenging environments, LTE pilot signals are of special interest because they can be used to estimate the multipath channel and counteract its effect. For this purpose, a channel estimation model based on equi-spaced taps is combined with the time-delay estimation, leading to a low-complexity estimator. This model is enhanced with a novel channel parameterization able to characterize close-in multipath, by introducing an arbitrary tap with variable position between the first two equi-spaced taps. This new hybrid approach is adopted in the joint maximum likelihood (JML) time-delay estimator to improve the ranging performance in the presence of short-delay multipath. The JML estimator is then compared with the conventional correlation-based estimator in usual LTE conditions. These conditions are characterized by the extended typical urban (ETU) multipath channel model, additive white Gaussian noise (AWGN) and LTE signal bandwidths equal to 1.4, 5 and 10 MHz. The resulting time-delay estimation performance is assessed by computing the cumulative density function (CDF) of the errors in the absence of noise and the root-mean-square error (RMSE) and bias for signal-to-noise ratio (SNR) values between −20 and 30 dB.
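The conventional correlation-based estimator that the JML approach is benchmarked against picks the lag maximizing the cross-correlation with the known pilot. A minimal real-valued sketch (with no multipath, so the correlation peak sits at the true delay; it is precisely close-in multipath that breaks this rule and motivates the JML estimator):

```python
import numpy as np

def correlate_delay(rx, pilot):
    """Correlation-based time-delay estimate: the lag maximising the
    cross-correlation of the received signal with the known pilot."""
    corr = np.correlate(rx, pilot, mode='valid')
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(4)
pilot = rng.standard_normal(64)     # stand-in for an LTE pilot sequence
delay = 25                          # true propagation delay in samples

rx = np.zeros(256)
rx[delay:delay + pilot.size] = pilot
rx += 0.1 * rng.standard_normal(256)
```

When a strong reflection arrives within the pilot's autocorrelation width, the correlation peak shifts and biases the range estimate, which is the effect the joint time-delay and channel estimator is designed to counteract.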

Journal ArticleDOI
TL;DR: In this article, the authors present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations.
Abstract: Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

Journal ArticleDOI
TL;DR: This study introduces a generalized tensor rank one discriminant analysis (GTR1DA), which considers the distribution of data points near the classification boundary to calculate better projection tensors and achieves greater classification accuracy than other vector- and tensor-based methods.
Abstract: Applications based on electrocardiogram (ECG) signal feature extraction and classification are of major importance to the autodiagnosis of heart diseases. Most studies on ECG classification methods have targeted only 1- or 2-lead ECG signals. This limitation results from the unavailability of real clinical 12-lead ECG data, which would help train the classification models. In this study, we propose a new tensor-based scheme, which is motivated by the lack of effective feature extraction methods for direct tensor data input. In this scheme, an ECG signal is represented by third-order tensors in the spatial-spectral-temporal domain after using the short-time Fourier transform on the raw ECG data. To overcome the limitations of tensor rank one discriminant analysis (TR1DA) inherited from linear discriminant analysis, we introduce a generalized tensor rank one discriminant analysis (GTR1DA). This approach involves considering the distribution of the data points near the classification boundary to calculate better projection tensors. The experimental results showed that the proposed method achieves greater classification accuracy than other vector- and tensor-based methods. Finally, GTR1DA features a better convergence property than the original TR1DA.
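Building the spatial-spectral-temporal tensor is the first step of such a scheme: each lead's short-time Fourier transform contributes one (frequency × time) slice, and the slices are stacked along the lead mode. The sketch below uses a hand-rolled magnitude STFT and synthetic single-tone "leads", only to fix the shapes involved; the discriminant analysis itself is not reproduced:

```python
import numpy as np

def stft_mag(x, nperseg=128, hop=64):
    """Magnitude STFT with a Hann window; returns a (freq, time) array."""
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win
              for i in range(0, len(x) - nperseg + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# Hypothetical 12-lead recording: one tone per lead, purely for illustration.
fs, n_leads, n_samp = 250, 12, 1000
t = np.arange(n_samp) / fs
ecg = np.array([np.sin(2 * np.pi * (1.0 + k) * t) for k in range(n_leads)])

# Spatial-spectral-temporal tensor: (lead, frequency bin, time frame).
tensor = np.stack([stft_mag(lead) for lead in ecg])
```

Tensor discriminant methods such as TR1DA/GTR1DA then learn projection vectors along each of the three modes instead of vectorizing this array, which is what preserves the lead/frequency/time structure.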

Journal ArticleDOI
TL;DR: An overview of the state of the art of this relatively new and, in some respects, underutilised remote sensing technique is provided.
Abstract: The Global Navigation Satellite System (GNSS) signals are always available, globally, and the signal structures are well known, except for those dedicated to military use. They also have some distinctive characteristics, including the use of L-band frequencies, which are particularly suited for remote sensing purposes. The idea of using GNSS signals for remote sensing of the atmosphere, oceans, or the Earth's surface was first proposed more than two decades ago. Since then, GNSS remote sensing has been intensively investigated in terms of proof of concept studies, signal processing methodologies, theory and algorithm development, and various satellite-borne, airborne and ground-based experiments. It has been demonstrated that GNSS remote sensing can be used as an alternative passive remote sensing technology. Space agencies such as NASA, NOAA, EUMETSAT and ESA have already funded, or will fund in the future, a number of projects/missions which focus on a variety of GNSS remote sensing applications. It is envisaged that GNSS remote sensing can either be exploited to perform remote sensing tasks on an independent basis or combined with other techniques to address more complex applications. This paper provides an overview of the state of the art of this relatively new and, in some respects, underutilised remote sensing technique. Also addressed are relevant challenging issues associated with GNSS remote sensing services and the performance enhancement of GNSS remote sensing to accurately and reliably retrieve a range of geophysical parameters.

Journal ArticleDOI
TL;DR: Due to the better frequency localization of both PHYDYAS and IOTA waveforms, the FBMC technique is demonstrated to be more robust to timing asynchronism than OFDM, making it a potential candidate for the physical layer of future cognitive radio systems.
Abstract: In this paper, we investigate the impact of timing asynchronism on the performance of multicarrier techniques in a spectrum coexistence context. Two multicarrier schemes are considered: cyclic prefix-based orthogonal frequency division multiplexing (CP-OFDM) with a rectangular pulse shape and filter bank-based multicarrier (FBMC) with physical layer for dynamic spectrum access and cognitive radio (PHYDYAS) and isotropic orthogonal transform algorithm (IOTA) waveforms. First, we present the general concept of the so-called power spectral density (PSD)-based interference tables, which are commonly used for multicarrier interference characterization in a spectrum sharing context. After highlighting the limits of this approach, we propose a new family of interference tables called ‘instantaneous interference tables’. The proposed tables give the interference power caused by a given interfering subcarrier on a victim one, not only as a function of the spectral distance separating both subcarriers but also with respect to the timing misalignment between the subcarrier holders. In contrast to the PSD-based interference tables, the accuracy of the proposed tables is validated through various simulation results. Furthermore, due to the better frequency localization of both PHYDYAS and IOTA waveforms, the FBMC technique is demonstrated to be more robust to timing asynchronism than OFDM. Such a result makes FBMC a potential candidate for the physical layer of future cognitive radio systems.
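The classical PSD-based table that the paper starts from can be sketched as follows: integrate the interferer's pulse PSD over the victim subcarrier's band, one entry per spectral distance. This is a minimal sketch for a rectangular CP-OFDM pulse with unit-normalized subcarrier spacing; the paper's instantaneous tables add a timing-offset dimension that is not reproduced here.

```python
import numpy as np

def ofdm_psd(f):
    """PSD of a rectangular (CP-OFDM) pulse, with frequency f normalized
    to the subcarrier spacing: a squared sinc."""
    return np.sinc(f) ** 2

def interference_table(max_dist=8, grid=4096):
    """PSD-based interference table: the power an interfering subcarrier
    leaks into a victim subcarrier l subcarrier spacings away, obtained by
    integrating the interferer's PSD over the victim's unit-width band."""
    table = []
    for l in range(1, max_dist + 1):
        f = np.linspace(l - 0.5, l + 0.5, grid, endpoint=False)
        table.append(ofdm_psd(f).mean())  # mean over a unit-width band = integral
    return np.array(table)

tbl = interference_table()
```

The table depends only on spectral distance, which is exactly the limitation highlighted in the abstract: two subcarriers with the same spectral distance but different timing misalignments get the same entry.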

Journal ArticleDOI
TL;DR: This paper proposes and analyzes a novel probabilistic soft SSDF attack model that generalizes existing models, and identifies an interesting trade-off between destructiveness and stealthiness.
Abstract: In cognitive radio networks, the spectrum sensing data falsification (SSDF) attack is a crucial factor deteriorating the detection performance of cooperative spectrum sensing. In this paper, we propose and analyze a novel probabilistic soft SSDF attack model, which generalizes the existing models. Under this generalized SSDF attack model, we first obtain closed-form expressions of the global sensing performance at the fusion center. Then, we theoretically evaluate the performance of the proposed attack model in terms of destructiveness and stealthiness. Numerical simulations match the analytical results well. Finally, we uncover an interesting trade-off between destructiveness and stealthiness, a fundamental issue in SSDF attacks that most previous studies have ignored.
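The destructive side of a probabilistic soft SSDF attack can be illustrated with a small Monte Carlo sketch. All numbers below (report statistics, threshold, attack strength) are illustrative assumptions under an equal-gain soft fusion rule, not the paper's model or closed-form analysis.

```python
import numpy as np

def fused_detection_prob(n_honest, n_byz, attack_prob, delta, trials=4000):
    """Monte Carlo sketch of a probabilistic soft SSDF attack with the
    primary user present. Each Byzantine node independently falsifies its
    soft energy report (subtracting delta) with probability attack_prob;
    the fusion center averages all reports and compares to a threshold."""
    rng = np.random.default_rng(5)
    n = n_honest + n_byz
    hits = 0
    for _ in range(trials):
        reports = 1.5 + 0.3 * rng.standard_normal(n)   # honest statistics
        attacking = rng.random(n_byz) < attack_prob
        reports[:n_byz] -= delta * attacking            # falsified reports
        hits += reports.mean() > 1.25                   # fusion threshold
    return hits / trials

pd_clean = fused_detection_prob(10, 0, attack_prob=0.0, delta=0.0)
pd_attacked = fused_detection_prob(8, 2, attack_prob=0.8, delta=1.0)
```

Raising `attack_prob` or `delta` makes the attack more destructive but also makes the falsified reports easier to flag, which is the destructiveness/stealthiness trade-off the paper analyzes.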

Journal ArticleDOI
TL;DR: Simulation results indicate, among other findings, the benefit the jammer gains when it employs the spectrum sensing algorithm in proactive frequency hopping and power alteration schemes.
Abstract: Cognitive radio (CR) promises to be a solution for the spectrum underutilization problems. However, security issues pertaining to cognitive radio technology are still an understudied topic. One of the most prevalent such issues is intelligent radio frequency (RF) jamming attacks, where adversaries are able to exploit the on-the-fly reconfigurability potential and learning mechanisms of cognitive radios in order to devise and deploy advanced jamming tactics. In this paper, we use a game-theoretical approach to analyze jamming/anti-jamming behavior between cognitive radio systems. A non-zero-sum game with incomplete information on an opponent’s strategy and payoff is modelled as an extension of a Markov decision process (MDP). Learning algorithms based on adaptive payoff play and fictitious play are considered. A combination of frequency hopping and power alteration is deployed as an anti-jamming scheme. A real-life software-defined radio (SDR) platform is used in order to perform measurements useful for quantifying the jamming impacts, as well as to infer relevant hardware-related properties. Results of these measurements are then used as parameters for the modelled jamming/anti-jamming game and are compared to the Nash equilibrium of the game. Simulation results indicate, among other findings, the benefit provided to the jammer when it employs the spectrum sensing algorithm in proactive frequency hopping and power alteration schemes.
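Fictitious play, one of the learning algorithms the paper considers, can be sketched in a few lines: each player best-responds to the empirical frequency of the opponent's past actions. This is a textbook sketch on a toy two-channel matching-pennies game standing in for the jammer/transmitter interaction; the paper's MDP extension and measured payoffs are not reproduced.

```python
import numpy as np

def best_response(payoff, opponent_freq):
    """Pure-strategy best response to the opponent's empirical mixture."""
    return int(np.argmax(payoff @ opponent_freq))

def fictitious_play(payoff_a, payoff_b, rounds=2000):
    """Both players repeatedly best-respond to the empirical frequencies of
    the other's past actions; in zero-sum games the empirical mixtures
    converge to a Nash equilibrium (Robinson's theorem)."""
    n_a, n_b = payoff_a.shape
    counts_a, counts_b = np.ones(n_a), np.ones(n_b)
    for _ in range(rounds):
        a = best_response(payoff_a, counts_b / counts_b.sum())
        b = best_response(payoff_b.T, counts_a / counts_a.sum())
        counts_a[a] += 1
        counts_b[b] += 1
    return counts_a / counts_a.sum(), counts_b / counts_b.sum()

# Toy jammer/transmitter game over two channels: the jammer gains when it
# lands on the transmitter's channel, the transmitter gains otherwise.
jam = np.array([[1.0, -1.0], [-1.0, 1.0]])
mix_jam, mix_tx = fictitious_play(jam, -jam)
```

The equilibrium here is uniform hopping by both sides, matching the intuition that a predictable hopping pattern is exploitable.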

Journal ArticleDOI
TL;DR: A weighted minimum mean squared error (WMMSE) precoder is proposed to take advantage of non-homogeneous average signal-to-noise ratio (SNR) conditions.
Abstract: This paper is concerned with linear precoding designs for multiuser downlink transmissions. We consider a multiple-input single-output (MISO) system with multiple single-antenna user equipment (UE) experiencing non-homogeneous average signal-to-noise ratio (SNR) conditions. The first part of this work examines different precoding schemes with perfect channel state information (CSI) and average SNR at the base-station (eNB). We then propose a weighted minimum mean squared error (WMMSE) precoder, which takes advantage of the non-homogeneous SNR conditions. Given in a closed-form solution, the proposed WMMSE precoder outperforms other well-known linear precoders, such as zero-forcing (ZF) and regularized ZF (RZF), while achieving a performance close to that of the locally optimal iterative WMMSE (IWMMSE) precoder in terms of the achievable network sum-rate. In the second part of this work, we consider the non-homogeneous multiuser system with limited and quantized channel quality indicator (CQI) and channel direction indicator (CDI) feedback. Based on the CQI and CDI feedback models proposed for the Long-Term Evolution Advanced standard, we then propose a robust WMMSE precoder in a closed-form solution which takes into account the quantization errors. Simulations show a significant improvement in the achievable network sum-rate by the proposed robust WMMSE precoder, compared to non-robust linear precoder designs.
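One of the baselines compared above, regularized zero-forcing, has a simple closed form and makes the structure of such precoders concrete. This is a sketch of RZF only (the proposed WMMSE precoder additionally weights users by their non-homogeneous SNRs, which is not reproduced here); the regularization value and channel dimensions are illustrative assumptions.

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularized zero-forcing precoder for a MISO downlink:
    W = H^H (H H^H + alpha I)^{-1}, with columns normalized to unit power.
    H has one row per single-antenna user (K x Nt)."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(2)
# 4 single-antenna users, 8 transmit antennas, i.i.d. Rayleigh channel
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = rzf_precoder(H, alpha=0.1)
G = H @ W   # effective channel: diagonal ~ signal, off-diagonal ~ interference
```

Setting `alpha = 0` recovers plain ZF; a larger `alpha` trades residual inter-user interference for better noise robustness, which is the knob the MMSE family tunes.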

Journal ArticleDOI
TL;DR: This paper formulates the schedule-based localization problem as an estimation problem in a Bayesian framework, which provides robustness with respect to uncertainty in system parameters such as anchor locations and timing devices, and derives a sequential approximate maximum a posteriori (AMAP) estimator.
Abstract: In this paper, we consider the schedule-based network localization concept, which does not require synchronization among nodes and does not involve communication overhead. The concept makes use of a common transmission sequence, which enables each node to perform self-localization and to localize the entire network, based on noisy propagation-time measurements. We formulate the schedule-based localization problem as an estimation problem in a Bayesian framework. This provides robustness with respect to uncertainty in such system parameters as anchor locations and timing devices. Moreover, we derive a sequential approximate maximum a posteriori (AMAP) estimator. The estimator is fully decentralized and copes with varying noise levels. By studying the fundamental constraints given by the considered measurement model, we provide a system design methodology which enables a scalable solution. Finally, we evaluate the performance of the proposed AMAP estimator by numerical simulations emulating an impulse-radio ultra-wideband (IR-UWB) wireless network.
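The core estimation step, locating a node from noisy propagation-time (hence range) measurements to known anchors, can be sketched with a Gauss-Newton least-squares fix. This is a simplified point-estimate stand-in for the paper's Bayesian AMAP estimator; the anchor layout, noise level, and initial guess are illustrative assumptions.

```python
import numpy as np

def localize(anchors, ranges, x0, iters=20):
    """Gauss-Newton least-squares position fix: linearize the range model
    ||x - a_i|| around the current estimate and solve for the update."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]      # Jacobian of ||x - a_i|| w.r.t. x
        r = ranges - d                       # range residuals
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(6)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.01 * rng.standard_normal(4)
est = localize(anchors, ranges, x0=[5.0, 5.0])
```

The Bayesian formulation in the paper goes further by treating the anchor positions and clock parameters themselves as uncertain, rather than as the known constants assumed here.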

Journal ArticleDOI
TL;DR: This paper addresses the channel estimation problem for multiple-input multiple-output (MIMO) multi-relay systems exploiting measurements collected at the destination only and proposes a joint estimation approach that provides the destination with the instantaneous knowledge of all the channel matrices involved in the communication.
Abstract: In this paper, we address the channel estimation problem for multiple-input multiple-output (MIMO) multi-relay systems exploiting measurements collected at the destination only. Assuming that the source, relays, and destination are multiple-antenna devices and considering a three-hop amplify-and-forward (AF)-based training scheme, new channel estimation algorithms capitalizing on a tensor modeling of the end-to-end communication channel are proposed. Our approach provides the destination with the instantaneous knowledge of all the channel matrices involved in the communication. Instead of using separate estimations for each matrix, we are interested in a joint estimation approach. Two receiver algorithms are formulated to solve the joint channel estimation problem. The first one is an iterative method based on a trilinear alternating least squares (TALS) algorithm, while the second one is a closed-form solution based on a Kronecker least squares (KRLS) factorization. A useful lower-bound on the channel training length is derived from an identifiability study. We also show the proposed tensor-based approach is applicable to two-way MIMO relaying systems. Simulation results corroborate the effectiveness of the proposed estimators and provide a comparison with existing methods in terms of channel estimation accuracy and bit error rate (BER).
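The closed-form KRLS receiver above rests on a Kronecker least-squares factorization. A generic sketch of that building block is the nearest-Kronecker-product problem solved by the Van Loan-Pitsianis rearrangement and a rank-one SVD; the matrix shapes below are chosen for illustration and do not correspond to the paper's channel matrices.

```python
import numpy as np

def nearest_kronecker(X, a_shape, b_shape):
    """Least-squares Kronecker factorization: find A, B minimizing
    ||X - kron(A, B)||_F by rearranging X into a rank-one problem and
    taking the dominant SVD pair (Van Loan-Pitsianis)."""
    m, n = a_shape
    p, q = b_shape
    # Block (i, j) of X, vectorized, becomes row i*n + j of R, so that
    # R = vec(A) vec(B)^T is rank one.
    R = X.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    B = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return A, B

rng = np.random.default_rng(3)
A0 = rng.standard_normal((3, 2))
B0 = rng.standard_normal((4, 5))
X = np.kron(A0, B0)                 # noiseless Kronecker-structured data
A, B = nearest_kronecker(X, A0.shape, B0.shape)
```

The factors are recovered up to a scalar ambiguity (scaling A up and B down by the same factor leaves the Kronecker product unchanged), which is the usual ambiguity resolved by the training design in tensor-based channel estimators.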

Journal ArticleDOI
TL;DR: Four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems are presented and are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
Abstract: This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends current active constellation extension (ACE) PAPR reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as the main contribution. The four techniques introduced can be split into two groups: linear programming (LP) optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The LP-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
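The PAPR metric itself, and the naive clipping baseline that ACE improves upon, can be sketched briefly. This shows plain magnitude clipping of a multicarrier signal only; the ACE and SGP techniques above go further by projecting the clipped signal back onto allowed constellation-extension regions, which is not reproduced here. The carrier count and clipping level are illustrative assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(4)
N = 256
# QPSK symbols on N subcarriers -> multicarrier time-domain signal
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)
x = np.fft.ifft(symbols) * np.sqrt(N)
# Naive PAPR reduction: clip the envelope while preserving the phase
clipped = np.clip(np.abs(x), None, 1.6) * np.exp(1j * np.angle(x))
```

Clipping alone distorts the constellation and spectrum; ACE-style methods recover the PAPR benefit while constraining the distortion to directions that do not degrade decision margins.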