
Showing papers on "Signal-to-noise ratio published in 2019"


Proceedings ArticleDOI
28 May 2019
TL;DR: In the case of a slow Rayleigh fading channel, deep JSCC can learn to communicate without explicit pilot signals or channel estimation, and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
Abstract: We propose a joint source and channel coding (JSCC) technique for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols. We parameterize the encoder and decoder functions by two convolutional neural networks (CNNs), which are trained jointly, and can be considered as an autoencoder with a non-trainable layer in the middle that represents the noisy communication channel. Our results show that the proposed deep JSCC scheme outperforms digital transmission concatenating JPEG or JPEG2000 compression with a capacity-achieving channel code at low signal-to-noise ratio (SNR) and channel bandwidth values in the presence of additive white Gaussian noise (AWGN). More strikingly, deep JSCC does not suffer from the “cliff effect,” and it provides a graceful performance degradation as the channel SNR varies with respect to the SNR value assumed during training. In the case of a slow Rayleigh fading channel, deep JSCC learns noise resilient coded representations and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
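The paper's non-trainable middle layer is easy to reproduce: normalize the encoder output to satisfy an average power constraint, then add complex Gaussian noise at the target SNR. A minimal numpy sketch, with a random vector standing in for the CNN encoder output:

```python
import numpy as np

def awgn_channel(z, snr_db, rng):
    """Non-trainable channel 'layer': enforce an average power constraint
    on the channel input, then add complex AWGN at the given SNR."""
    k = z.size
    z = z * np.sqrt(k / np.sum(np.abs(z) ** 2))   # unit average power
    sigma2 = 10.0 ** (-snr_db / 10.0)             # noise variance (signal power = 1)
    n = rng.normal(scale=np.sqrt(sigma2 / 2.0), size=(k, 2))
    return z + n[:, 0] + 1j * n[:, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=10_000) + 1j * rng.normal(size=10_000)  # stand-in encoder output
y = awgn_channel(z, snr_db=10.0, rng=rng)
rx_power = np.mean(np.abs(y) ** 2)   # for a 10 dB channel, about 1 + 0.1
```

In training, the same layer sits between the encoder and decoder networks, and gradients flow through the additive noise unchanged.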

187 citations


Journal ArticleDOI
TL;DR: The hybrid classification scheme is demonstrated to be effective in classifying a large number of ZigBee devices; its robustness is validated by carrying out the classification process 18 months after training, the longest time gap reported to date.
Abstract: Radio frequency (RF) fingerprints are inherent hardware characteristics that have been employed to classify and identify wireless devices in many Internet of Things applications. This paper extracts novel RF fingerprint features, designs a hybrid, adaptive classification scheme that adjusts to the environmental conditions, and carries out extensive experiments to evaluate its performance. In particular, four modulation features are employed, namely the differential constellation trace figure, carrier frequency offset, modulation offset, and I/Q offset extracted from the constellation trace figure. The feature weights under different channel conditions are calculated at the training stage. At the classification stage, these features are combined with weights selected according to the estimated signal-to-noise ratio. We construct a testbed using a universal software radio peripheral platform as the receiver and 54 ZigBee nodes as the candidate devices to be classified, the largest number of ZigBee devices tested to date. Extensive experiments are carried out to evaluate the classification performance under different channel conditions, namely line-of-sight (LOS) and non-line-of-sight scenarios. We then validate the robustness by carrying out the classification process 18 months after training, the longest time gap reported to date. We also use a different receiver platform for classification for the first time. The classification error rate is as low as 0.048 in the LOS scenario, and 0.1105 even when a different receiver is used for classification 18 months after training. Our hybrid classification scheme has thus been demonstrated effective in classifying a large number of ZigBee devices.
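The SNR-adaptive weighting idea can be sketched compactly. The weights, the two-regime threshold, and the scores below are all hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical per-feature weights learned at training time for two channel
# regimes; feature order: (CTF, CFO, modulation offset, I/Q offset).
WEIGHTS = {"high_snr": np.array([0.4, 0.3, 0.2, 0.1]),
           "low_snr":  np.array([0.1, 0.2, 0.3, 0.4])}

def fuse_scores(feature_scores, est_snr_db, threshold_db=15.0):
    """Combine per-feature device-match scores with weights chosen from
    the estimated SNR (a simplified two-regime version of the paper's
    adaptive scheme)."""
    w = WEIGHTS["high_snr" if est_snr_db >= threshold_db else "low_snr"]
    return feature_scores @ w   # one fused score per candidate device

# scores[i, j]: how well feature j of the test signal matches device i.
scores = np.array([[0.9, 0.8, 0.2, 0.1],
                   [0.2, 0.1, 0.9, 0.8]])
best_high = int(np.argmax(fuse_scores(scores, est_snr_db=25.0)))
best_low = int(np.argmax(fuse_scores(scores, est_snr_db=5.0)))
```

The point of the design is that a feature which is reliable in clean channels (e.g. a fine constellation shape) can be down-weighted when the estimated SNR is low, where a coarser feature degrades more gracefully.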

163 citations


Journal ArticleDOI
TL;DR: A novel sub-optimal scheme is presented which provides a GP formulation to efficiently and globally maximize the minimum uplink user rate and substantially outperforms the existing schemes in the literature.
Abstract: A cell-free massive multiple-input multiple-output system is considered using a max-min approach to maximize the minimum user rate with per-user power constraints. First, an approximated uplink user rate is derived based on channel statistics. Then, the original max-min signal-to-interference-plus-noise ratio problem is formulated for the optimization of receiver filter coefficients at a central processing unit and user power allocation. To solve this max-min non-convex problem, we decouple the original problem into two sub-problems, namely, receiver filter coefficient design and power allocation. The receiver filter coefficient design is formulated as a generalized eigenvalue problem, whereas geometric programming (GP) is used to solve the user power allocation problem. Based on these two sub-problems, an iterative algorithm is proposed, in which both problems are alternately solved while one of the design variables is fixed. This iterative algorithm obtains a globally optimal solution, whose optimality is proved through establishing an uplink-downlink duality. Moreover, we present a novel sub-optimal scheme which provides a GP formulation to efficiently and globally maximize the minimum uplink user rate. The numerical results demonstrate that the proposed scheme substantially outperforms the existing schemes in the literature.
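The receiver-filter sub-problem has a closed form: it is the principal generalized eigenvector of the desired-signal and interference-plus-noise covariance pair. A small numpy sketch with illustrative covariances, not the paper's cell-free system model:

```python
import numpy as np

def max_sinr_filter(A, B):
    """Principal generalized eigenvector of (A, B): the receiver filter u
    maximizing the generalized Rayleigh quotient (u^T A u) / (u^T B u),
    where A is the desired-signal term and B the interference-plus-noise term."""
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    u = vecs[:, np.argmax(vals.real)].real
    return u / np.linalg.norm(u)

def sinr(u, A, B):
    return (u @ A @ u) / (u @ B @ u)

rng = np.random.default_rng(1)
h = rng.normal(size=4)                               # illustrative channel
A = np.outer(h, h)                                   # rank-1 desired-signal term
B = np.eye(4) + 0.5 * np.diag(rng.uniform(size=4))   # interference + noise term

u_opt = max_sinr_filter(A, B)
sinr_opt = sinr(u_opt, A, B)
sinr_rand = sinr(np.ones(4) / 2.0, A, B)             # naive equal-weight filter
```

In the paper's alternating algorithm, this step would be interleaved with the GP power-allocation step until the minimum rate converges.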

154 citations


Journal ArticleDOI
TL;DR: This paper establishes the first practically viable solution for initial access and, hence, the first demonstration of stand-alone mmWave communication in the relevant regime of low (−10 dB to +5 dB) raw SNR.
Abstract: Millimeter wave (mmWave) communication with large antenna arrays is a promising technique to enable extremely high data rates due to large available bandwidth in mmWave frequency bands. In addition, given the knowledge of an optimal directional beamforming vector, large antenna arrays have been shown to overcome both the severe signal attenuation in mmWave as well as the interference problem. However, fundamental limits on achievable learning rate of an optimal beamforming vector remain. This paper considers the problem of adaptive and sequential optimization of the beamforming vectors during the initial access phase of communication. With a single-path channel model, the problem is reduced to actively learning the Angle-of-Arrival (AoA) of the signal sent from the user to the Base Station (BS). Drawing on the recent results in the design of a hierarchical beamforming codebook, sequential measurement dependent noisy search strategies, and active learning from an imperfect labeler, an adaptive and sequential alignment algorithm is proposed. For any given resolution and error probability of the estimated AoA, an upper bound on the expected search time of the proposed algorithm is derived via Extrinsic Jensen-Shannon Divergence. The upper bound demonstrates that the search time of the proposed algorithm asymptotically matches the performance of the noiseless bisection search up to a constant factor, in effect, characterizing the AoA acquisition rate. Furthermore, the upper bound shows that the acquired AoA error probability decays exponentially fast with the search time with an exponent that is a decreasing function of the acquisition rate. Numerically, the proposed algorithm is compared with prior work where a significant improvement of the system communication rate is observed. 
Most notably, in the relevant regime of low (−10 dB to +5 dB) raw SNR, this establishes the first practically viable solution for initial access and, hence, the first demonstration of stand-alone mmWave communication.
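The core idea of noisy sequential search can be illustrated with a much simpler strategy than the paper's posterior-based algorithm: plain bisection over the AoA range, with each directional measurement repeated and majority-voted to combat the measurement noise. All parameters below are illustrative:

```python
import numpy as np

def noisy_bisection(true_aoa, resolution, p_err=0.2, votes=25, rng=None):
    """Locate an AoA in [0, 1) by bisection when each directional probe
    reports the wrong half with probability p_err; each query is repeated
    `votes` times and majority-voted. A simplified stand-in for the
    paper's sequential measurement-dependent strategy."""
    rng = rng or np.random.default_rng()
    lo, hi = 0.0, 1.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        truth = true_aoa < mid                       # ground-truth answer
        correct = rng.random(votes) > p_err          # which probes are right
        reports = np.where(correct, truth, not truth)
        if reports.sum() > votes / 2:                # majority says "left half"
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

est = noisy_bisection(0.37, resolution=1 / 128, rng=np.random.default_rng(0))
```

Repetition coding per query is wasteful; the paper's contribution is precisely to show that an adaptive strategy asymptotically matches noiseless bisection up to a constant factor rather than the larger overhead of fixed repetition.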

102 citations


Journal ArticleDOI
TL;DR: This paper deals with robust waveform design for multiple-input multiple-output radar, improving the detectability of targets embedded in signal-dependent interference by maximizing the worst-case signal-to-interference-plus-noise ratio (SINR) over steering-matrix mismatches.
Abstract: This paper deals with the robust waveform design of multiple-input multiple-output radar to improve the detectability of targets embedded in signal-dependent interference. Two iterative algorithms with guaranteed convergence properties are introduced to maximize the worst-case signal-to-interference-plus-noise ratio (SINR) over steering matrix mismatches under constant modulus and similarity constraints. Each iteration of the proposed algorithms splits the high-dimensional problem into multiple one-dimensional problems, whose optimal solutions can be found in polynomial time. Numerical examples are provided to assess the capabilities of the proposed techniques, in comparison with existing methods, in terms of SINR and computation time for both the continuous and discrete phase cases of the probing signal.

90 citations


Journal ArticleDOI
TL;DR: The proposed beam alignment (BA) scheme is highly robust to fast channel variations caused by the large Doppler spread between the multipath components; moreover, after achieving BA, the beamformed channel is essentially frequency-flat, so single-carrier communication needs no equalization in the time domain.
Abstract: Communication at millimeter wave (mm-wave) bands is expected to become a key ingredient of the next generation (5G) wireless networks. Effective mm-wave communications require fast and reliable methods for beamforming at both the user equipment (UE) and the base station sides, in order to achieve a sufficiently large signal-to-noise ratio after beamforming. We refer to the problem of finding a pair of strongly coupled narrow beams at the transmitter and receiver as the beam alignment problem. In this paper, we propose an efficient BA scheme for single-carrier mm-wave communications. In the proposed scheme, the BS periodically probes the channel in the downlink via a pre-specified pseudo-random beamforming codebook and pseudo-random spreading codes, letting each UE estimate the angle-of-arrival/angle-of-departure (AoA-AoD) pair of the multipath channel for which the energy transfer is maximum. We leverage the sparse nature of mm-wave channels in the AoA-AoD domain to formulate the BA problem as the estimation of a sparse non-negative vector. Based on the recently developed non-negative least squares technique, we efficiently find the strongest AoA-AoD pair connecting each UE to the BS. We evaluate the performance of the proposed scheme under a realistic channel model, where the propagation channel consists of a few multipath components each having different delays, AoAs-AoDs, and Doppler shifts. The channel model parameters are consistent with the experimental channel measurements. The simulation results indicate that the proposed method is highly robust to fast channel variations caused by the large Doppler spread between the multipath components. Furthermore, we also show that after achieving BA, the beamformed channel is essentially frequency-flat, such that single-carrier communication needs no equalization in the time domain.
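The sparse-recovery step can be sketched with a generic projected-gradient NNLS solver on a toy problem; the probing matrix, sizes, and noise level are illustrative, and this is not the specialized non-negative least squares solver used in the paper:

```python
import numpy as np

def nnls_pg(A, y, iters=2000):
    """Non-negative least squares via projected gradient descent: a simple
    stand-in for a dedicated NNLS solver."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the LS gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - y)))
    return x

# Toy beam-alignment instance: G angular bins, M pseudo-random probing
# measurements, channel energy concentrated in a single AoA-AoD bin.
rng = np.random.default_rng(2)
G, M = 64, 80
A = rng.normal(size=(M, G)) / np.sqrt(M)            # pseudo-random probing matrix
x_true = np.zeros(G)
x_true[17] = 1.0                                    # strongest path in bin 17
y = A @ x_true + 0.01 * rng.normal(size=M)
x_hat = nnls_pg(A, y)
best_bin = int(np.argmax(x_hat))                    # estimated AoA-AoD bin
```

The non-negativity constraint matters: received energies cannot be negative, and enforcing that prior sharpens the recovered angular spectrum compared with unconstrained least squares.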

84 citations


Journal ArticleDOI
Jiabao Gao, Xuemei Yi, Caijun Zhong, Xiaoming Chen, Zhaoyang Zhang
TL;DR: A deep learning based signal detector which exploits the underlying structural information of the modulated signals and achieves state-of-the-art detection performance, requiring no prior knowledge of channel state information or background noise.
Abstract: In cognitive radio systems, the ability to accurately detect the primary user’s signal is essential for the secondary user to utilize idle licensed spectrum. The conventional energy detector is a good choice for blind signal detection, but it suffers from the well-known SNR wall due to noise uncertainty. In this letter, we first propose a deep learning based signal detector which exploits the underlying structural information of the modulated signals, and is shown to achieve state-of-the-art detection performance, requiring no prior knowledge of channel state information or background noise. In addition, the impacts of modulation scheme and sample length on performance are investigated. Finally, a deep learning based cooperative detection system is proposed, which is shown to provide a substantial performance gain over conventional cooperative sensing methods.
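For contrast with the learned detector, the conventional energy detector the letter argues against fits in a few lines. The threshold rule below uses a standard Gaussian approximation, and the comment notes where noise uncertainty creates the SNR wall; all parameters are illustrative:

```python
import numpy as np

def energy_detect(y, noise_var, z=2.3263):
    """Classical energy detector: compare mean received energy against a
    threshold derived from the assumed noise variance (z = 2.33 targets a
    nominal 1% false-alarm rate under the Gaussian approximation).
    The SNR wall arises because any error in noise_var shifts this
    threshold by a fixed fraction that no amount of averaging removes."""
    N = y.size
    stat = np.mean(y ** 2)
    thresh = noise_var * (1.0 + z * np.sqrt(2.0 / N))
    return stat > thresh

rng = np.random.default_rng(3)
N, nv = 16_384, 1.0
noise = rng.normal(scale=np.sqrt(nv), size=N)
rx = np.sqrt(0.1) * rng.normal(size=N) + noise      # signal present at -10 dB SNR
detected_h1 = energy_detect(rx, nv)                 # hypothesis H1: signal present
detected_h0 = energy_detect(noise, nv)              # hypothesis H0: noise only
```

With a known noise variance the detector works even at -10 dB given enough samples; the letter's point is that when `noise_var` is only known within some uncertainty, this approach fails below the SNR wall regardless of sample size.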

83 citations


Journal ArticleDOI
TL;DR: An improved feed-forward denoising convolutional neural network (DnCNN) is proposed to suppress random noise in desert seismic data and can open a new direction in the area of seismic data processing.
Abstract: High-quality seismic data are the basis for stratigraphic imaging and interpretation, but the existence of random noise can greatly affect the quality of seismic data. At present, most understanding and processing of random noise still stay at the level of Gaussian white noise. As easily exploitable resources diminish, the acquired seismic data have lower signal-to-noise ratios and more complex noise characteristics. In particular, the random noise in desert areas is low frequency, non-Gaussian, nonstationary, and high energy, with serious aliasing between the effective signal and the random noise in the frequency domain, which has brought great difficulties to the recovery of seismic events by conventional denoising methods. To solve this problem, an improved feed-forward denoising convolutional neural network (DnCNN) is proposed to suppress random noise in desert seismic data. DnCNN has the characteristics of automatic feature extraction and blind denoising. According to the characteristics of desert noise, we modify the original DnCNN in terms of patch size, convolution kernel size, network depth, and training set to make it suitable for low-frequency and non-Gaussian desert noise suppression. Both simulated and practical experiments prove that the improved DnCNN has obvious advantages in terms of desert noise and surface wave suppression as well as effective signal amplitude preservation. In addition, the improved DnCNN, in contrast to existing methods, has considerable potential to benefit from large data sets. Therefore, we believe that it can open a new direction in the area of seismic data processing.

73 citations


Journal ArticleDOI
TL;DR: Extensive simulations demonstrated exceptional classification performance for the new key features based on high-order cumulants, and the robustness of the proposed method under a variety of conditions, such as frequency offset and multipath.
Abstract: By considering the different cumulant combinations of 2FSK, 4FSK, 2PSK, 4PSK, 2ASK, and 4ASK signals, this paper established new identification parameters to achieve recognition of these digital modulations. A deep neural network (DNN) was also employed to improve the recognition rate; it was designed to classify each signal based on the distinct features extracted with high-order cumulants. Extensive simulations demonstrated exceptional classification performance for the new key features based on high-order cumulants. The overall success rate of the proposed algorithm was over 99% at a signal-to-noise ratio (SNR) of -5 dB and 100% at an SNR of -2 dB. The experiments also showed the robustness of the proposed method under a variety of conditions, such as frequency offset and multipath.
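Higher-order cumulants of this kind are straightforward to estimate from baseband samples. As a sketch, the normalized fourth-order cumulant C40 separates BPSK (theoretical value -2) from QPSK (theoretical value -1); this is a generic textbook feature, not necessarily the paper's exact combination:

```python
import numpy as np

def c40(y):
    """Fourth-order cumulant C40 = E[y^4] - 3 E[y^2]^2 of a zero-mean,
    power-normalized complex baseband signal."""
    y = y / np.sqrt(np.mean(np.abs(y) ** 2))   # unit-power normalization
    return np.mean(y ** 4) - 3.0 * np.mean(y ** 2) ** 2

rng = np.random.default_rng(4)
n = 20_000
bpsk = rng.choice([-1.0, 1.0], size=n).astype(complex)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
c40_bpsk = c40(bpsk)   # theory: -2 for BPSK
c40_qpsk = c40(qpsk)   # theory: -1 for QPSK
```

Because the theoretical cumulant values differ across constellations, a vector of such estimates makes a compact, SNR-robust feature for the DNN classifier described above.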

70 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a coverage analysis framework that includes realistic channel model and antenna element radiation patterns to evaluate the system-level performance of various 5G connectivity scenarios in the mmWave band.
Abstract: Millimeter-wave (mmWave) bands will play an important role in 5G wireless systems. The system performance can be assessed by using models from stochastic geometry that cater for the directivity in the desired signal transmissions as well as the interference, and by calculating the signal-to-interference-plus-noise ratio (SINR) coverage. Nonetheless, the accuracy of the existing coverage expressions derived through stochastic geometry may be questioned, as it is not clear whether they would capture the impact of the detailed mmWave channel and antenna features. In this paper, we propose an SINR coverage analysis framework that includes a realistic channel model and antenna element radiation patterns. We introduce and estimate two parameters, aligned gain and misaligned gain, associated with the desired signal beam and the interfering signal beam, respectively. The distributions of these gains are used to determine the distribution of the SINR, which is compared with the corresponding SINR coverage calculated through system-level simulations. The results show that both aligned and misaligned gains can be modeled as exponential-logarithmically distributed random variables with the highest accuracy, and can further be approximated as exponentially distributed random variables with reasonable accuracy. These approximations can be used as a tool to evaluate the system-level performance of various 5G connectivity scenarios in the mmWave band.
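The paper's simpler approximation, exponentially distributed aligned and misaligned gains, makes SINR coverage easy to evaluate by Monte Carlo. All distribution parameters below are illustrative, not fitted values from the paper:

```python
import numpy as np

def sinr_coverage(thresholds_db, n_interf=4, mean_gain=1.0,
                  mean_interf_gain=0.05, noise=0.01, trials=100_000, rng=None):
    """Monte Carlo SINR coverage P(SINR > T) when the aligned (desired)
    and misaligned (interfering) beam gains are approximated as
    exponential random variables."""
    rng = rng or np.random.default_rng(5)
    s = rng.exponential(mean_gain, size=trials)                       # aligned gain
    i = rng.exponential(mean_interf_gain, size=(trials, n_interf)).sum(axis=1)
    sinr = s / (i + noise)
    return np.array([np.mean(sinr > 10 ** (t / 10.0)) for t in thresholds_db])

cov = sinr_coverage(np.array([-5.0, 0.0, 5.0, 10.0]))
```

Under the exponential approximation the coverage also admits closed forms via the Laplace transform of the interference; the Monte Carlo version above is the quickest sanity check of such expressions.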

63 citations


Journal ArticleDOI
TL;DR: A patch-based singular value shrinkage method for diffusion magnetic resonance image estimation, targeted at low signal-to-noise ratio and accelerated acquisitions; it is compared with related approaches, which generally operate on magnitude-only data and use data-based noise level estimation and singular value truncation.

Journal ArticleDOI
TL;DR: The sparse autoencoder method introduced in this article is effective in attenuating the seismic noise and is capable of preserving subtle features of the data, while removing the spatially incoherent random noise.
Abstract: Seismic waves that are recorded by near-surface sensors are usually disturbed by strong noise. Hence, the recorded seismic data are sometimes of poor quality; this phenomenon can be characterized as a low signal-to-noise ratio (SNR). The low SNR of the seismic data may lower the quality of many subsequent seismological analyses, such as inversion and imaging. Thus, the removal of unwanted seismic noise has significant importance. In this article, we intend to improve the SNR of many seismological datasets by developing a new denoising framework that is based on an unsupervised machine-learning technique. We leverage the unsupervised learning philosophy of the autoencoding method to adaptively learn the seismic signals from the noisy observations. This could potentially enable us to better represent the true seismic-wave components. To mitigate the influence of the seismic noise on the learned features and suppress the trivial components associated with low-amplitude neurons in the hidden layer, we introduce a sparsity constraint to the autoencoder neural network. The sparse autoencoder method introduced in this article is effective in attenuating the seismic noise. More importantly, it is capable of preserving subtle features of the data, while removing the spatially incoherent random noise. We apply the proposed denoising framework to a reflection seismic image, a depth-domain receiver function gather, and an earthquake stack dataset. The purpose of this study is to demonstrate the framework’s potential in real-world applications.

INTRODUCTION

Seismic phases from the discontinuities in the Earth’s interior contain significant constraints for high-resolution deep Earth imaging; however, they sometimes arrive as weak-amplitude waveforms (Rost and Weber, 2001; Rost and Thomas, 2002; Deuss, 2009; Saki et al., 2015; Guan and Niu, 2017, 2018; Schneider et al., 2017; Chai et al., 2018).
The detection of these weak-amplitude seismic phases is sometimes challenging because of three main reasons: (1) the amplitude of these phases is very small and can be neglected easily when seen next to the amplitudes of neighboring phases that are much larger; (2) the coherency of the weak-amplitude seismic phases is seriously degraded because of insufficient array coverage and spatial sampling; and (3) the strong random background noise that is even stronger than the weak phases in amplitude makes the detection even harder. As an example of the challenges presented, the failure in detecting the weak reflection phases from mantle discontinuities could result in misunderstanding of the mineralogy or temperature properties of the Earth interior. To conquer the challenges in detecting weak seismic phases, we need to develop specific processing techniques. In earthquake seismology, in order to highlight a specific weak phase, recordings in the seismic arrays are often shifted and stacked for different slowness and back-azimuth values (Rost and Thomas, 2002). Stacking serves as one of the most widely used approaches in enhancing the energy of target signals. Shearer (1991a) stacked long-period seismograms of shallow earthquakes that were recorded from the Global Digital Seismograph Network for 5 yr and obtained a gather that shows typical arrivals clearly from the deep Earth. Morozov and Dueker (2003) investigated the effectiveness of stacking in enhancing the signals of the receiver functions. They defined a signal-to-noise ratio (SNR) metric that was based on the multichannel coherency of the signals and the incoherency of the random noise, and they showed that the stacking can significantly improve the SNR of the stacked seismic trace. However, stacking methods have some drawbacks. First, they do not necessarily remove the noise present in the signal. Second, they require a large array of seismometers. 
Third, they require coherency of arrivals across the array, which is not always the case in earthquake seismology. From this point of view, a single-channel method seems to be a better substitute for improving the SNR of seismograms (Mousavi and Langston, 2016, 2017). In the reflection seismology community, many noise attenuation methods have been proposed and implemented in field applications over the past several decades. Prediction-based methods utilize the predictive property of the seismic signal to construct a predictive filter that rejects noise. Median filters and their variants use statistical principles to reject Gaussian white noise or impulsive noise (Mi et al., 2000; Bonar and Sacchi, 2012). The dictionary-learning-based methods adaptively learn the basis from the data to sparsify the noisy seismic data, which in turn suppresses the noise (Zhang, van der Baan, et al., 2018). These methods require experimenters to solve the dictionary-updating and sparse-coding problems and can be very expensive, computationally speaking. Decomposition-based methods decompose the noisy data into constitutive components, so that one can easily select the components that primarily represent the signal and remove those associated with noise. This category includes singular value decomposition (SVD)-based methods (Bai et al., 2018), empirical-mode decomposition (Chen, 2016), the continuous wavelet transform (Mousavi et al., 2016), morphological decomposition (Huang et al., 2017), and so on. Rank-reduction-based methods assume that seismic data have a low-rank structure (Kumar et al., 2015; Zhou et al., 2017).
If the data consist of κ complex linear events, the constructed Hankel matrix of the frequency-domain data is a matrix of rank κ (Hua, 1992). Noise will increase the rank of the Hankel matrix of the data, and it can therefore be attenuated via rank reduction. Such methods include Cadzow filtering (Cadzow, 1988; Zu et al., 2017) and SVD (Vautard et al., 1992). Most of these denoising methods are largely effective in processing reflection seismic images. Applications to more general seismological datasets are seldom reported, partly because many seismological datasets have extremely low data quality; that is, they are characterized by low SNR and poor spatial sampling. Besides, most traditional denoising algorithms rely on carefully tuned parameters to obtain satisfactory performance. These parameters are usually data dependent and require a great deal of experiential knowledge. Thus, they are not flexible enough for application to many real-world problems. More research efforts have been dedicated to using machine-learning methods for seismological data processing (Chen, 2018a,b; Zhang, Wang, et al., 2018; Bergen et al., 2019; Lomax et al., 2019; McBrearty et al., 2019). Recently, supervised learning (Zhu et al., 2018) has been successfully applied to denoising of seismic signals. However, supervised methods with deep networks require very large training datasets (sometimes on the order of a billion samples) of clean signals and their noisy contaminated realizations. In this article, we develop a new automatic denoising framework for improving the SNR of seismological datasets based on an unsupervised machine-learning (UML) approach, namely the autoencoder method. We leverage the autoencoder neural network to adaptively learn the features from the raw noisy seismological datasets during the encoding process, and then we optimally represent the data using these learned features during the decoding process.
To effectively suppress the random noise, we use the sparsity constraint to regularize the neurons in the hidden layer. We apply the proposed UML-based denoising framework to a group of seismological datasets, including a reflection seismic image, a receiver function gather, and an earthquake stack. We observe a very encouraging performance, which demonstrates its great potential in a wide range of applications.

METHOD

Unsupervised Autoencoder Method

We will first introduce the autoencoder neural network that we use for denoising seismological datasets. Autoencoders are specific neural networks that consist of two connected parts (an encoder and a decoder) that try to copy their input to the output layer. Hence, they can automatically learn the main features of the data in an unsupervised manner. In this article, the network is simply a three-layer architecture with an input layer, a hidden layer, and an output layer. The encoding process in the autoencoder neural network can be expressed as follows:

h = ξ(W1 x + b1), (1)

in which x is the training sample (x ∈ R^n), h is the hidden-layer activation, and ξ is the activation function. The decoding process can be expressed as follows:

x̂ = ξ(W2 h + b2). (2)

In equations (1) and (2), W1 is the weighting matrix between the input layer and the hidden layer; b1 is the forward bias vector; W2 is the weighting matrix between the hidden layer and the output layer; and b2 is the backward bias vector. In this study, we use the softplus function as the activation function:

ξ(x) = log(1 + e^x). (3)

Sparsity Regularized Autoencoder

To mitigate the influence of the seismic noise on the learned features and suppress the trivial components associated with low-amplitude neurons in the hidden layer, we apply a sparsity constraint to the hidden layer; that is, the output or last layer of the encoder.
The sparsity constraint helps drop out extracted features that correspond to noise and have small values in the hidden units. It can thus highlight the most dominant features in the data, that is, the useful signals. The sparse penalty term can be written as follows:

Σ_j R(p_j), (4)

in which R is the penalty function.
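The forward pass and a concrete sparsity penalty can be sketched in numpy. Because the definition of the penalty function R is cut off in the extracted text, an L1 penalty on the hidden activations is used here as a common stand-in, an assumption rather than necessarily the authors' choice:

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + e^x), the activation in equation (3).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def forward(x, W1, b1, W2, b2):
    """One pass of the three-layer autoencoder:
    h = softplus(W1 x + b1), x_hat = softplus(W2 h + b2)."""
    h = softplus(W1 @ x + b1)
    x_hat = softplus(W2 @ h + b2)
    return x_hat, h

def l1_sparsity(h, lam=1e-3):
    # Stand-in sparsity penalty on hidden activations (assumed form of R).
    return lam * np.sum(np.abs(h))

rng = np.random.default_rng(6)
n, m = 64, 16                                    # input size, hidden size
W1 = rng.normal(scale=0.1, size=(m, n)); b1 = np.zeros(m)
W2 = rng.normal(scale=0.1, size=(n, m)); b2 = np.zeros(n)
x = np.sin(np.linspace(0.0, 4.0 * np.pi, n)) + 0.3 * rng.normal(size=n)  # noisy trace
x_hat, h = forward(x, W1, b1, W2, b2)
penalty = l1_sparsity(h)
```

Training would minimize the reconstruction error plus this penalty over many noisy traces; the penalty pushes weak, noise-driven hidden activations toward zero so the reconstruction keeps only the dominant signal features.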

Journal ArticleDOI
TL;DR: The study shows that the BER gap among users decreases with the increase of the modulation order but at the cost of a higher power consumption in order to achieve a better signal to noise ratio.
Abstract: Visible light communication (VLC) and non-orthogonal multiple access (NOMA) are deemed two promising technologies in the next generation wireless communication systems in achieving high capacity and massive connectivity. In this paper, we study the performance of a NOMA-enabled VLC system using different modulation schemes. In particular, the system-level bit error rate (BER) is derived for different modulation schemes. Conventional methods used for analyzing the BER under orthogonal multiple access cannot be directly applied to NOMA. In order to obtain closed-form BER expressions for NOMA-enabled VLC systems, an analytical framework based on bitwise-decision axis and signal space is proposed. Moreover, the analysis method can be extended to any wireless communication network with NOMA. Simulation results demonstrate the accuracy of the theoretical analysis. The study shows that the BER gap among users decreases with the increase of the modulation order, but at the cost of a higher power consumption in order to achieve a better signal-to-noise ratio. It is observed that 8-PSK modulation in NOMA-enabled VLC systems strikes a good tradeoff between the power cost and the achievable BER.
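Superposition coding and successive interference cancellation (SIC) can be sketched for two users. For simplicity this uses bipolar BPSK rather than the unipolar intensity modulation a VLC system would require, and the power split and SNR are illustrative:

```python
import numpy as np

def noma_sic_ber(snr_db=20.0, p_near=0.2, n=50_000, rng=None):
    """Two-user power-domain NOMA with BPSK: superpose the symbols with
    unequal power, decode the high-power (far) user directly, then the
    near user subtracts the far symbol before decoding its own (SIC)."""
    rng = rng or np.random.default_rng(7)
    b_far = rng.choice([-1.0, 1.0], size=n)
    b_near = rng.choice([-1.0, 1.0], size=n)
    x = np.sqrt(1 - p_near) * b_far + np.sqrt(p_near) * b_near  # superposition
    sigma = 10 ** (-snr_db / 20.0)
    y = x + sigma * rng.normal(size=n)
    far_hat = np.sign(y)                          # far user: near signal as noise
    y_sic = y - np.sqrt(1 - p_near) * far_hat     # near user: cancel far symbol
    near_hat = np.sign(y_sic)
    return np.mean(far_hat != b_far), np.mean(near_hat != b_near)

ber_far, ber_near = noma_sic_ber()
```

The BER gap between the two users comes from the unequal power split; the paper's analytical framework derives exactly this kind of per-user error rate in closed form for the VLC constellations it considers.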

Journal ArticleDOI
TL;DR: The developed approach identifies emitters using convolutional neural networks to estimate the in-phase/quadrature (IQ) imbalance parameters of each emitter, using only the received raw IQ data as input; it is shown to outperform a comparable feature-based approach while making fewer assumptions and using fewer data per decision.
Abstract: Specific Emitter Identification is the association of a received signal to a unique emitter, and is made possible by the naturally occurring and unintentional characteristics an emitter imparts onto each transmission, known as its radio frequency fingerprint. This paper presents an approach for identifying emitters using convolutional neural networks to estimate the in-phase/quadrature (IQ) imbalance parameters of each emitter, using only the received raw IQ data as input. Because an emitter’s IQ imbalance parameters do not change as it changes modulation schemes, the proposed approach has the ability to track emitters even as they change modulation scheme. The performance of the developed approach is evaluated using simulated quadrature amplitude modulation and phase-shift keying signals, and the impacts of signal-to-noise ratio, imbalance value, and modulation scheme are considered. Furthermore, the developed approach is shown to outperform a comparable feature-based approach, while making fewer assumptions and using fewer data per decision.
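A common baseline model for IQ imbalance writes the impaired signal as y = a·x + b·conj(x). The sketch below applies that model and recovers (a, b) by least squares from known symbols; this is a feature-style baseline for illustration, not the paper's CNN estimator:

```python
import numpy as np

def apply_iq_imbalance(x, gain_db, phase_deg):
    """Model transmitter IQ imbalance as y = a*x + b*conj(x), with a and b
    derived from the gain (dB) and phase (degrees) mismatch."""
    g = 10 ** (gain_db / 20.0)
    phi = np.deg2rad(phase_deg)
    a = (1 + g * np.exp(1j * phi)) / 2.0
    b = (1 - g * np.exp(1j * phi)) / 2.0
    return a * x + b * np.conj(x)

def estimate_ab(x, y):
    """Least-squares estimate of (a, b) from known symbols x and
    observations y."""
    M = np.column_stack([x, np.conj(x)])
    sol, *_ = np.linalg.lstsq(M, y, rcond=None)
    return sol

rng = np.random.default_rng(8)
x = (rng.choice([-1.0, 1.0], 2000) + 1j * rng.choice([-1.0, 1.0], 2000)) / np.sqrt(2)
y = apply_iq_imbalance(x, gain_db=1.0, phase_deg=5.0) + 0.01 * (
    rng.normal(size=2000) + 1j * rng.normal(size=2000))
a_hat, b_hat = estimate_ab(x, y)
```

Because (a, b) depend only on the emitter's hardware and not on the constellation, the same estimate persists across modulation changes, which is the property the paper exploits for tracking.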

Journal ArticleDOI
TL;DR: ROArray, a RObust Array based system that accurately localizes a target even with low SNRs, is presented; it significantly outperforms state-of-the-art solutions in localization accuracy at low SNRs, and achieves comparable accuracy at medium and high SNRs.
Abstract: With the multi-antenna design of WiFi interfaces, phased array has become a promising mechanism for accurate WiFi localization. State-of-the-art WiFi-based solutions using Angle-of-Arrival (AoA), however, face a number of critical challenges. First, their localization accuracy degrades dramatically due to low Signal-to-Noise Ratio (SNR) and incoherent processing. Second, they tend to produce outliers when the available number of packets is low. Moreover, the prior phase calibration schemes are not multipath robust and accurate enough. All of the above degrade the robustness of localization systems. In this paper, we present ROArray, a RObust Array based system that accurately localizes a target even with low SNRs. The key insight of ROArray is to use sparse recovery and coherent processing across all available domains, including time, frequency, and spatial domains. Specifically, in the spatial domain, ROArray can produce sharp AoA spectrums by parameterizing the steering vector based on a sparse grid. Then, to expand into the frequency domain, it jointly estimates the Time-of-Arrival (ToAs) and AoAs of all the paths using multi-subcarrier OFDM measurements. Furthermore, through a novel multi-packet fusion scheme, ROArray is enabled to perform coherent estimation over multiple packets. Such coherent processing not only increases the virtual aperture size, which enlarges the number of maximum resolvable paths but also improves the system robustness to noise. In addition, ROArray includes an online phase calibration technique that can eliminate random phase offsets while keeping communication uninterrupted. Our implementation using off-the-shelf WiFi cards demonstrates that, with low SNRs, ROArray significantly outperforms state-of-the-art solutions in terms of localization accuracy; when medium or high SNRs are present, it achieves comparable accuracy.

Journal ArticleDOI
TL;DR: The proposed DNN model consists of three modified U-Nets (3U-net), and the results show that the method improves PET SNR without requiring higher-SNR PET images for training.
Abstract: PET images often suffer from a poor signal-to-noise ratio (SNR). Our objective is to improve the SNR of PET images using a deep neural network (DNN) model and MRI images, without requiring any higher-SNR PET images in training. Our proposed DNN model consists of three modified U-Nets (3U-net). The PET training input data and targets were reconstructed using filtered-backprojection (FBP) and maximum likelihood expectation maximization (MLEM), respectively. FBP reconstruction was used because of its computational efficiency, so that the trained network not only removes noise but also accelerates image reconstruction. Digital brain phantoms downloaded from BrainWeb were used to evaluate the proposed method. Poisson noise was added to the sinogram data to simulate a 6 min brain PET scan. The attenuation effect was included and corrected before image reconstruction. Extra Poisson noise was introduced into the training inputs to improve the network's denoising capability. Three independent experiments were conducted to examine reproducibility. A lesion was inserted into the testing data to evaluate the impact of mismatched MRI information using the contrast-to-noise ratio (CNR). The negative impact on noise reduction was also studied when misregistration between PET and MRI images occurs. Compared with a 1U-net trained with only PET images, training with PET/MRI decreased the mean squared error (MSE) by 31.3% and 34.0% for 1U-net and 3U-net, respectively. The MSE reduction is equivalent to an increase in the count level of 2.5-fold and 2.9-fold for 1U-net and 3U-net, respectively. Compared with the MLEM images, the lesion CNR was improved 2.7-fold and 1.4-fold for 1U-net and 3U-net, respectively. The results show that the proposed method can improve PET SNR without requiring higher-SNR PET images.
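The MLEM reconstruction used to form the training targets follows the classical multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1. A toy, noise-free numpy sketch, with a tiny 3×3 matrix standing in for a real PET system matrix (this illustrates only the update rule, not the paper's pipeline):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                                 # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# illustrative 3x3 "system matrix" and a noise-free sinogram
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y, n_iter=500)
```

The multiplicative form keeps the iterates nonnegative, which is one reason MLEM is preferred over FBP for quality despite its higher cost.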

Journal ArticleDOI
TL;DR: An effective approach for peak point detection and localization in noisy electrocardiogram (ECG) signals is presented, and the experimental results show that the proposed method achieves highly satisfactory performance, even when challenging ECG signals are adopted.
Abstract: Cardiac signal processing is usually a computationally demanding task, as signals are heavily contaminated by noise and other artifacts. In this paper, an effective approach for peak point detection and localization in noisy electrocardiogram (ECG) signals is presented. Six stages characterize the implemented method, which adopts the Hilbert transform and a thresholding technique to detect zones inside the ECG signal that could contain a peak. Subsequently, the identified zones are analyzed using the wavelet transform for R point detection and localization. The conceived signal processing technique has been evaluated on ECG signals from the MIT-BIH Noise Stress Test Database, which includes specially selected Holter recordings characterized by baseline wander, muscle artifacts, and electrode motion artifacts as noise sources. The experimental results show that the proposed method achieves highly satisfactory performance, even when challenging ECG signals are adopted. The results obtained are presented, discussed, and compared with other R wave detection algorithms in the literature that adopt the same database as a test bench. In particular, for a signal-to-noise ratio (SNR) equal to 6 dB, results with minimal interference from noise and artifacts have been obtained, since Se and +P reach values of 98.13% and 96.91%, respectively.
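The Hilbert-transform-plus-thresholding stage for flagging candidate peak zones can be sketched in numpy. The toy "ECG" of two Gaussian pulses and the 0.5 relative threshold are illustrative choices; this is not the paper's six-stage method:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a numpy-only Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def candidate_zones(x, thresh_ratio=0.5):
    """Boolean mask of samples whose envelope exceeds a relative threshold."""
    env = np.abs(analytic_signal(x))
    return env > thresh_ratio * env.max()

# toy "ECG": two narrow pulses on a flat baseline, 2 s at 250 Hz
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.exp(-((t - 0.5) ** 2) / 2e-4) + np.exp(-((t - 1.5) ** 2) / 2e-4)
mask = candidate_zones(x)
# count connected above-threshold zones (rising edges of the mask)
zones = np.flatnonzero(np.diff(mask.astype(int)) == 1).size + int(mask[0])
```

In the full method each flagged zone would then be handed to the wavelet stage for precise R point localization.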

Journal ArticleDOI
TL;DR: In this article, a single-ended long-range phase-sensitive optical time domain reflectometer (ϕ-OTDR) sensing system without optical amplification in the sensing fiber is proposed.
Abstract: Long-range phase-sensitive optical time domain reflectometer (ϕ-OTDR) sensing mainly employs distributed amplification in the sensing fiber. This requires light to be injected into both ends of the sensing fiber, which reduces the degree of freedom in embedding the fiber into structures. To overcome this problem, the key factors that affect the signal-to-noise ratio (SNR) in a matched-filter-based ϕ-OTDR system are analyzed thoroughly, and a single-ended long-range ϕ-OTDR that does not require distributed amplification is proposed. In this system, two key techniques are adopted for SNR improvement. To boost the pulse energy and suppress self-phase modulation, the distortion of the amplified pulse is rectified using an iterative predistortion method; to mitigate the influence of interference fading and stimulated Brillouin backscattering, a three-carrier pulse is employed. In combination with the non-linear frequency modulation technique, which yields a 42.7 dB side lobe suppression ratio, these approaches enable a ϕ-OTDR with an 80 km sensing range, 2.7 m spatial resolution, and 49.6 dB dynamic range in the experiment. To the best of the authors' knowledge, this is the first time that a ϕ-OTDR without optical amplification in the sensing fiber has been realized over such a long sensing range.
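The SNR role of the matched filter in such pulse-compression sensing can be illustrated with a basic example. This uses a plain linear chirp, not the paper's three-carrier NLFM pulse with predistortion, and all parameter values are illustrative:

```python
import numpy as np

fs = 1e6          # sample rate, Hz (illustrative)
T = 1e-3          # pulse duration, s
B = 2e5           # sweep bandwidth, Hz
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)    # unit-amplitude LFM pulse

# an echo delayed by 300 samples, then matched filtering:
# correlation with the conjugated, time-reversed pulse
echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])
mf_out = np.abs(np.convolve(echo, np.conj(chirp[::-1])))
peak = mf_out.argmax()
```

The compressed peak concentrates the full pulse energy (here a coherent gain equal to the pulse length), which is why pulse energy and side lobe control, rather than peak power alone, set the achievable range.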

Journal ArticleDOI
TL;DR: In this paper, a signal distortion correction module (CM) is proposed to improve the accuracy of CNN-based modulation recognition schemes, which can be thought of as an estimator of carrier frequency and phase offset introduced by the channel.
Abstract: Modulation recognition is a challenging task while performing spectrum sensing in cognitive radio. Recently, deep learning techniques, such as convolutional neural networks (CNNs), have been shown to achieve state-of-the-art accuracy for modulation recognition. However, CNNs are not explicitly designed to undo distortions caused by wireless channels. To improve the accuracy of CNN-based modulation recognition schemes, we propose a signal distortion correction module (CM). The proposed CM is also based on a neural network that can be thought of as an estimator of the carrier frequency and phase offset introduced by the channel. The CM output is used to shift the signal frequency and phase before modulation recognition, and it is differentiable with respect to its weights. This allows the CM to be co-trained end-to-end in tandem with the CNN used for modulation recognition. For supervision, only the modulation scheme label is used; knowledge of the true frequency or phase offset is not required for co-training the combined network (CM+CNN).
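The correction applied after the CM's estimate, shifting the signal by the estimated frequency and phase offset, is a simple complex rotation. A numpy sketch with an assumed known offset (in the paper the offset is produced by the trained CM rather than given, and the rotation is implemented as a differentiable layer):

```python
import numpy as np

def apply_correction(x, f_off, phi, fs):
    """Undo an estimated carrier frequency/phase offset (the CM's role)."""
    n = np.arange(len(x))
    return x * np.exp(-1j * (2 * np.pi * f_off * n / fs + phi))

# a QPSK burst distorted by a known offset, then corrected
rng = np.random.default_rng(1)
fs = 1e5
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 256)))
f_off, phi = 1234.0, 0.7
n = np.arange(len(syms))
distorted = syms * np.exp(1j * (2 * np.pi * f_off * n / fs + phi))
restored = apply_correction(distorted, f_off, phi, fs)
err = np.max(np.abs(restored - syms))
```

Because the rotation is differentiable in f_off and phi, gradients from the recognition loss can flow back into the estimator, which is what enables end-to-end co-training.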

Proceedings ArticleDOI
12 May 2019
TL;DR: In this article, the authors apply a deep neural network (DNN) approach to the problem of estimating the number of sources and their angles of arrival from a single antenna array observation and analyze its advantages with respect to signal processing algorithms.
Abstract: The problem of estimating the number of sources and their angles of arrival from a single antenna array observation has been an active area of research in the signal processing community for the last few decades. When the number of sources is large, the maximum likelihood estimator is intractable due to its very high complexity, and therefore alternative signal processing methods have been developed at some performance loss. In this paper, we apply a deep neural network (DNN) approach to the problem and analyze its advantages with respect to signal processing algorithms. We show that an appropriately designed network can attain maximum likelihood performance with feasible complexity and outperform other feasible signal processing estimation methods over various signal-to-noise ratios and array response inaccuracies.
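As a point of reference for the signal processing alternatives mentioned above, a classical eigenvalue-based source counter can be sketched in numpy. The 10× noise-floor rule is a crude illustrative threshold, not MDL/AIC and not the paper's DNN; array geometry and powers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ant, n_snap, n_src = 8, 200, 2
angles = np.deg2rad([-10.0, 25.0])
# half-wavelength ULA steering matrix (n_ant x n_src)
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_ant), np.sin(angles)))
S = (rng.standard_normal((n_src, n_snap))
     + 1j * rng.standard_normal((n_src, n_snap))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((n_ant, n_snap))
               + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2)
X = A @ S + noise
R = X @ X.conj().T / n_snap                 # sample covariance
eigs = np.sort(np.linalg.eigvalsh(R))[::-1]
# crude source count: eigenvalues well above the noise floor
k_hat = int(np.sum(eigs > 10 * eigs[-1]))
```

A DNN approach typically consumes the same covariance (or raw snapshots) as input features, but learns the decision boundary instead of relying on an eigenvalue threshold.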

Journal ArticleDOI
TL;DR: In this paper, a new band-pass filter design method based on time-frequency (TF) analysis is proposed, where a function named "max-TF" is constructed from the TF energy distribution of the de-chirped signal, reflecting the changes of the maximum signal component amplitude with respect to time.
Abstract: The interrupted-sampling repeater jamming (ISRJ) is coherent with the emitted signal and significantly limits the radar's ability to detect, track, and recognise targets. This study focuses on ISRJ suppression for linear frequency modulation radars. A new band-pass filter design method based on time-frequency (TF) analysis is proposed. A function named 'max-TF' is constructed from the TF energy distribution of the de-chirped signal, reflecting the changes of the maximum signal component amplitude with respect to time. Based on the 'max-TF' function, jamming-free signal segments are automatically and accurately extracted to generate the filter, which is subsequently smoothed. After filtering, jamming signal peaks in the pulse compression results are suppressed while real targets are retained. Compared with the state-of-the-art filtering method, the proposed method improves jamming suppression and extends the feasible range of signal-to-noise ratio and jamming-to-signal ratio conditions. Simulations have validated the improvements and demonstrated how the parameters affect performance. The average signal-to-jamming improvement and average radar detection rate of the proposed method are about 7.4 dB and 23% higher, respectively, than those of the state-of-the-art filtering method. Directions for further work are outlined.
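The idea of a per-time-slice maximum over the TF energy distribution can be sketched with a short numpy STFT. The frame/hop sizes, the Hann window, and the 0.5 relative threshold are illustrative; this is not the paper's smoothed filter construction:

```python
import numpy as np

def max_tf(x, frame=64, hop=32):
    """'max-TF'-style curve: per-frame peak spectral magnitude (a sketch)."""
    vals = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame] * np.hanning(frame)
        vals.append(np.abs(np.fft.fft(seg)).max())
    return np.array(vals)

# weak de-chirped echo everywhere, strong repeater jamming only in the middle
n = np.arange(4096)
echo = 0.1 * np.exp(2j * np.pi * 0.05 * n)
jam = np.zeros(4096, complex)
jam[1500:2500] = 3.0 * np.exp(2j * np.pi * 0.05 * n[1500:2500])
curve = max_tf(echo + jam)
clean = curve < 0.5 * curve.max()   # frames judged jamming-free
```

Frames flagged as jamming-free would then define the pass regions of the band-pass filter before smoothing.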

Journal ArticleDOI
TL;DR: Digitized transmission of a 20 MHz LTE signal with 64-quadrature-amplitude modulation over 70 km of standard single-mode fiber, for broadband wireless signal transport and distribution, demonstrates a cost- and power-effective solution for next-generation wireless networks.

Journal ArticleDOI
TL;DR: This paper addresses the spectrum sensing problem in an orthogonal frequency-division multiplexing (OFDM) system based on machine learning, and proposes a class-reduction assisted prediction method to reduce spectrum sensing time.
Abstract: This paper addresses the spectrum sensing problem in an orthogonal frequency-division multiplexing (OFDM) system based on machine learning. To adapt to signal-to-noise ratio (SNR) variations, we first formulate the sensing problem as a novel SNR-related multi-class classification problem. Then, we train a naive Bayes classifier (NBC) and propose a class-reduction assisted prediction method to reduce spectrum sensing time. We derive performance bounds by translating the Bayes error rate into the spectrum sensing error rate. Compared with conventional methods, the proposed method is shown by simulation to achieve higher spectrum sensing accuracy, particularly in the critical low-SNR region. It offers a potential solution to the hidden node problem.
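A scalar-feature Gaussian naive Bayes sensor can be sketched in numpy to illustrate the classification view of sensing. This uses a single energy feature at one SNR; the paper's SNR-related multi-class formulation and class reduction are not reproduced, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def energy(x):
    return np.mean(np.abs(x) ** 2)

def sample(h1, snr_lin, n=128):
    """One received burst: complex noise, plus a Gaussian signal if h1."""
    x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    if h1:
        x += np.sqrt(snr_lin / 2) * (rng.standard_normal(n)
                                     + 1j * rng.standard_normal(n))
    return x

# training energies under H0 (noise only) and H1 (signal present), SNR = 0 dB
train0 = np.array([energy(sample(False, 1.0)) for _ in range(500)])
train1 = np.array([energy(sample(True, 1.0)) for _ in range(500)])

def predict(e):
    """Gaussian naive Bayes decision on the scalar energy feature."""
    def loglik(e, mu, var):
        return -0.5 * np.log(2 * np.pi * var) - (e - mu) ** 2 / (2 * var)
    l0 = loglik(e, train0.mean(), train0.var())
    l1 = loglik(e, train1.mean(), train1.var())
    return int(l1 > l0)

correct = np.mean([predict(energy(sample(h, 1.0))) == h for h in [0, 1] * 200])
```

Conditioning the classes on SNR, as in the paper, amounts to training several such class-conditional models and pruning implausible classes before prediction.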

Journal ArticleDOI
TL;DR: A high-precision NLFM signal generator with predistortion compensation capability is developed; this signal generator will be employed in the LuTan-1 (LT-1, i.e., TwinSAR-L) mission, an innovative spaceborne bistatic SAR mission planned for launch in 2020.
Abstract: Generally, a synthetic aperture radar (SAR) system transmits a linear frequency modulation (LFM) signal to obtain high-resolution images, and weighted windowing is usually employed to suppress sidelobes. However, this causes a 1–2-dB signal-to-noise ratio (SNR) loss. The nonlinear frequency modulation (NLFM) signal, which shapes the signal's power spectral density (PSD) to reduce sidelobes without SNR loss, is a promising candidate. However, the real-time generation of a precise NLFM signal is still a technical challenge. In this letter, a high-precision NLFM signal generator with predistortion compensation capability is developed; this signal generator will be employed in the LuTan-1 (LT-1, i.e., TwinSAR-L) mission, an innovative spaceborne bistatic SAR mission planned for launch in 2020. In addition, a two-step error compensation method is developed to compensate for the system error. Finally, a ground experiment is performed to validate the designed signal generator.
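The PSD-shaping idea behind NLFM can be sketched via the stationary-phase principle: the sweep dwells longer at frequencies where the target spectrum should carry more energy, so sidelobes drop without receive-side windowing. A numpy sketch with a Hamming-shaped target (the target shape and all values are illustrative, not the LT-1 design):

```python
import numpy as np

nf = 4096
f = np.linspace(-0.4, 0.4, nf)                   # normalized freq, cycles/sample
w = 0.54 + 0.46 * np.cos(2 * np.pi * f / 0.8)    # Hamming-shaped target PSD
# dwell time at each frequency proportional to the target weight
t_of_f = np.cumsum(w)
t_of_f = (t_of_f - t_of_f[0]) / (t_of_f[-1] - t_of_f[0])
# invert t(f) to get instantaneous frequency vs (normalized) time
tt = np.linspace(0.0, 1.0, nf)
f_inst = np.interp(tt, t_of_f, f)
# integrate frequency to phase and form the constant-envelope NLFM pulse
s = np.exp(2j * np.pi * np.cumsum(f_inst))
```

The constant envelope is the point: amplitude weighting is moved into the frequency sweep, so the transmitter still runs at full power.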

Journal ArticleDOI
TL;DR: A completely reversible data hiding method for electrocardiogram (ECG) data is proposed that can expose false ownership claims, detect tampered regions of the ECG data, and achieve 100% reversibility.

Journal ArticleDOI
TL;DR: A noise suppression method based on noise debiasing is proposed and validated under different ultrasound imaging parameters; it can be easily applied to the accelerated SVD method to bridge the gap between real-time implementation and high imaging quality.
Abstract: Ultrasound microvessel imaging (UMI), based on the combination of singular value decomposition (SVD) clutter filtering and ultrafast plane wave imaging, has recently demonstrated significantly improved Doppler sensitivity, especially to small vessels that are invisible to conventional Doppler imaging. Practical implementation of UMI is hindered by the high computational cost associated with SVD and by the low blood signal-to-noise ratio (SNR) in deep regions of the tissue due to the lack of transmit focusing of plane waves. Concerning the high computational cost, an accelerated SVD clutter filtering method based on randomized SVD (rSVD) and randomized spatial downsampling (rSD) was recently proposed by our group, which showed the feasibility of real-time implementation of UMI. Concerning the low blood flow SNR in deep imaging regions, here we propose a noise suppression method based on noise debiasing that can be easily applied to the accelerated SVD method to bridge the gap between real-time implementation and high imaging quality. The proposed method experimentally measures the noise-induced bias by collecting the noise signal using the same imaging sequence as regular UMI, but with the ultrasound transmission turned off. The estimated bias can then be subtracted from the original power Doppler (PD) image to obtain effective noise suppression. The feasibility of the proposed method was validated under different ultrasound imaging parameters (including transmit voltages and time-gain compensation (TGC) settings) with a phantom experiment. The noise-debiased images showed an increase of up to 15.3 and 13.4 dB in SNR compared to the original PD images on the blood-flow phantom and an in vivo human kidney data set, respectively. The proposed noise suppression method has negligible computational cost and can be conveniently combined with the previously proposed accelerated SVD clutter filtering technique to achieve high-quality, real-time UMI.
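The debiasing step itself is a simple subtraction of the measured noise floor from the power Doppler estimate, sketched here on synthetic data (the flow and noise levels, pixel counts, and ensemble size are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def power_doppler(iq):
    """Power Doppler pixel value: mean |IQ|^2 over the slow-time ensemble."""
    return np.mean(np.abs(iq) ** 2, axis=-1)

n_pix, n_ens = 1000, 100
flow = 0.3 * (rng.standard_normal((n_pix, n_ens))
              + 1j * rng.standard_normal((n_pix, n_ens)))
noise = 1.0 * (rng.standard_normal((n_pix, n_ens))
               + 1j * rng.standard_normal((n_pix, n_ens)))
# "transmit-off" acquisition: same sequence, noise only
noise_only = 1.0 * (rng.standard_normal((n_pix, n_ens))
                    + 1j * rng.standard_normal((n_pix, n_ens)))

pd_raw = power_doppler(flow + noise)
pd_debiased = pd_raw - power_doppler(noise_only)   # subtract measured floor
true_pd = power_doppler(flow)
```

Because additive noise is uncorrelated with the flow signal, its power adds to the PD estimate; measuring that power with the transmitter off and subtracting it removes the bias at essentially no computational cost.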

Journal ArticleDOI
28 Feb 2019
TL;DR: In this paper, the authors established a mathematical link between the probability of success of a sidechannel attack and the minimum number of queries to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side.
Abstract: Using information-theoretic tools, this paper establishes a mathematical link between the probability of success of a side-channel attack and the minimum number of queries needed to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side. This link takes the form of a lower bound on the number of queries that depends strongly on Shannon's mutual information between the traces and the secret key. This leads us to derive upper bounds on the mutual information that are as tight as possible and can be easily calculated. It turns out that, in the case of additive white Gaussian noise, the bound on the probability of success of any attack is directly related to the signal-to-noise ratio. This allows very easy computation and prediction of the success rate in any leakage model.
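The flavor of such a bound can be sketched by combining a crude Fano-inequality rearrangement with the AWGN channel capacity 0.5·log2(1+SNR) as the per-query leakage cap. This is a simplified illustration of the reasoning, not the paper's tighter bound, and the key size and rates below are illustrative:

```python
import math

def awgn_capacity(snr):
    """Upper bound on per-query leakage (bits) under additive Gaussian noise."""
    return 0.5 * math.log2(1.0 + snr)

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def min_queries(n_key_bits, success_rate, snr):
    """Fano-style lower bound on queries to reach a target success rate."""
    # the attacker must acquire roughly Ps*n - h2(Ps) bits through the traces
    needed_bits = success_rate * n_key_bits - h2(success_rate)
    return max(0.0, needed_bits) / awgn_capacity(snr)

q_high = min_queries(128, 0.9, 1.0)   # SNR = 1   (0 dB)
q_low = min_queries(128, 0.9, 0.1)    # SNR = 0.1 (-10 dB)
```

As expected, lowering the SNR shrinks the per-query leakage cap and inflates the minimum number of queries, which is the qualitative prediction the paper makes precise.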

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed MTV and MWTV approaches have better denoising performance, with (average ΔSNR, average η) values of (29.12 dB, 68.56%) and (29.29 dB, 67.51%), respectively, compared to existing techniques.
Abstract: The electroencephalogram (EEG) signal is contaminated by various noise sources and artifacts during recording. For the automated detection of neurological disorders, it is vital to filter out these artifacts from the EEG signal. In this paper, we propose two novel approaches for the removal of motion artifacts from the single-channel EEG signal. These methods are based on the multiresolution total variation (MTV) and multiresolution weighted total variation (MWTV) filtering schemes. The multiresolution analysis using the discrete wavelet transform (DWT) segregates the EEG signal into various subband signals. The total variation (TV) and weighted TV (WTV) filters are applied to the approximation subband signal. The filtered approximation subband signal is computed as the difference between the noisy approximation subband signal and the output of the TV or WTV filter. The processed EEG signal is obtained using multiresolution wavelet-based reconstruction. The difference in the signal-to-noise ratio ($\Delta$SNR) and the percentage of reduction in correlation coefficients ($\eta$) are used for evaluating the diagnostic quality of the processed EEG signal. The experimental results demonstrate that the proposed MTV and MWTV approaches have better denoising performance, with (average $\Delta$SNR, average $\eta$) values of (29.12 dB, 68.56%) and (29.29 dB, 67.51%), respectively, compared to existing techniques.
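The subband-filtering skeleton can be sketched in numpy with a one-level Haar DWT standing in for the multiresolution decomposition and a moving average standing in for the TV/WTV filter. Both substitutions are illustrative placeholders; the paper's TV/WTV solvers are not reproduced here:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR of `noisy` relative to the known clean reference, in dB."""
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - noisy) ** 2))

def haar_decompose(x):
    """One-level orthonormal Haar DWT: approximation and detail subbands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def smooth_approx_denoise(x, k=9):
    """Placeholder pipeline: smooth only the approximation subband."""
    a, d = haar_decompose(x)
    a_s = np.convolve(a, np.ones(k) / k, mode="same")
    return haar_reconstruct(a_s, d)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)                  # toy low-frequency "EEG"
noisy = clean + 0.5 * rng.standard_normal(1024)
den = smooth_approx_denoise(noisy)
delta = snr_db(clean, den) - snr_db(clean, noisy)  # the ΔSNR figure of merit
```

Even this crude stand-in yields a positive ΔSNR, because only the noise falling into the processed subband is attenuated while the rest of the signal passes through unchanged.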

Journal ArticleDOI
TL;DR: A new snow depth estimation method using a combination of pseudorange and carrier phase of GNSS dual-frequency signals is presented, which is geometry-free and is not affected by ionospheric delays.
Abstract: Global navigation satellite system reflectometry (GNSS-R) is a new remote sensing technique that can be used to measure a wide range of geophysical parameters. GNSS-R makes use of the simultaneous reception of the direct transmission and the coherent surface reflections of the GNSS signal with either a single antenna or multiple separate antennas. This paper presents a new snow depth estimation method using a combination of pseudorange and carrier phase of GNSS dual-frequency signals. The proposed method is geometry-free and is not affected by ionospheric delays. The formulas for the amplitude attenuation factor of reflected signals, the multipath-induced carrier-phase error, and the pseudorange error for ground-based GNSS receivers are used to describe the combined signals. Using theoretical formulas instead of in situ measurement data, analytical linear models are established in advance to describe the relationship between snow depth and the main frequency of the combined signal time series. When the main frequency of the combined measurements is obtained by spectrum analysis, the model is used to determine snow depth. Two experimental data sets recorded in two different environments were used to test the proposed method. The results demonstrate good agreement between the proposed method and the ground-truth measurements.
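The "main frequency of the time series → geometry" step rests on the standard GNSS multipath relation: a reflecting surface at height h below the antenna modulates the signal at frequency 2h/λ in the sin(elevation) domain. A numpy sketch on a synthetic, noise-free series (illustrative values; the paper's dual-frequency pseudorange/carrier combination is not reproduced):

```python
import numpy as np

lam = 0.1903          # GPS L1 wavelength, m
h_true = 1.2          # antenna height above the reflecting (snow) surface, m
sin_e = np.linspace(0.1, 0.5, 2000)            # uniform sin(elevation) samples
# detrended multipath oscillation at frequency 2h/lam cycles per unit sin(e)
series = np.cos(4 * np.pi * h_true / lam * sin_e)

# main frequency via a zero-padded FFT over the sin(e) axis
n_fft = 32768
spec = np.abs(np.fft.rfft(series * np.hanning(len(sin_e)), n=n_fft))
freqs = np.fft.rfftfreq(n_fft, d=sin_e[1] - sin_e[0])
f_main = freqs[spec.argmax()]
h_est = f_main * lam / 2
```

Snow accumulation raises the reflecting surface, shrinking h and hence the main frequency; snow depth then follows as the drop in h relative to a snow-free reference.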

Journal ArticleDOI
TL;DR: This method is a hybrid approach that seeks a fully augmentable array optimizing beamformer performance, which proves important for limited apertures that constrain the number of possible uniform grid points for sensor placement.
Abstract: The paper considers sparse array design for receive beamforming achieving maximum signal-to-interference-plus-noise ratio (MaxSINR) for both single and multiple point sources operating in an interference-active environment. Unlike existing sparse design methods, which deal with either structured environment-independent arrays or non-structured environment-dependent arrays, our method is a hybrid approach and seeks a fully augmentable array that optimizes beamformer performance. This approach proves important for a limited aperture, which constrains the number of possible uniform grid points for sensor placement. The problem is formulated as a quadratically constrained quadratic program (QCQP), with the cost function penalized by the weighted $l_1$-norm squared of the beamformer weight vector. Simulation results are presented to show the effectiveness of the proposed algorithms for array configurability in the case of both single- and general-rank signal correlation matrices. Performance comparisons among the proposed sparse array, the commonly used uniform arrays, arrays obtained by other design methods, and arrays designed without the augmentability constraint are provided.
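For a fixed sensor placement, the MaxSINR weights have the familiar closed form w ∝ R⁻¹a, where R is the interference-plus-noise covariance and a the source steering vector. A numpy sketch comparing them against conventional delay-and-sum weights (an assumed 8-element half-wavelength grid and a single interferer; the paper's sparse placement optimization via QCQP is not reproduced):

```python
import numpy as np

def sinr(w, a_s, p_s, R_in):
    """Output SINR of beamformer w: source power over interference-plus-noise."""
    return p_s * np.abs(np.vdot(w, a_s)) ** 2 / np.real(np.vdot(w, R_in @ w))

n = 8
ant = np.arange(n)
steer = lambda deg: np.exp(2j * np.pi * 0.5 * ant * np.sin(np.deg2rad(deg)))
a_s, a_i = steer(0.0), steer(20.0)
R_in = 10.0 * np.outer(a_i, a_i.conj()) + np.eye(n)   # interferer + unit noise

w_max = np.linalg.solve(R_in, a_s)    # w ∝ R_in^{-1} a_s maximizes SINR
w_conv = a_s                          # conventional (delay-and-sum) weights
gain = sinr(w_max, a_s, 1.0, R_in) / sinr(w_conv, a_s, 1.0, R_in)
```

Sparse design adds a layer on top of this: it chooses which grid positions to populate so that the resulting w ∝ R⁻¹a achieves the largest SINR the limited aperture allows.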