
Showing papers on "Signal-to-noise ratio published in 2014"


Journal ArticleDOI
TL;DR: An effective small target detection algorithm inspired by the contrast mechanism of human vision system and derived kernel model is presented, which can improve the SNR of the image significantly.
Abstract: Robust detection of small targets with a low signal-to-noise ratio (SNR) is very important in infrared search and track applications for self-defense or attacks. Consequently, an effective small target detection algorithm inspired by the contrast mechanism of the human vision system and a derived kernel model is presented in this paper. At the first stage, the local contrast map of the input image is obtained using the proposed local contrast measure, which measures the dissimilarity between the current location and its neighborhoods. In this way, target signal enhancement and background clutter suppression are achieved simultaneously. At the second stage, an adaptive threshold is adopted to segment the target. Experiments on two sequences have validated the detection capability of the proposed target detection method. Experimental evaluation results show that our method is simple and effective with respect to detection accuracy. In particular, the proposed method can improve the SNR of the image significantly.
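The local-contrast idea can be sketched in a few lines. This is a simplified stand-in, not the authors' derived-kernel model: for each pixel, the mean of a small centre cell is compared against the brightest of its eight surrounding cells, so a compact bright target produces a ratio above 1 while flat background and large clutter stay near or below 1.

```python
def local_contrast_map(img, cell=3):
    """Simplified local-contrast measure on a 2-D list `img`.

    For each pixel, compare the mean of the centre cell against the
    brightest of the 8 neighbouring cells (offset by `cell` pixels).
    A small bright target yields a ratio > 1; flat background stays ~1.
    """
    h, w = len(img), len(img[0])

    def cell_mean(cy, cx):
        s = n = 0
        for y in range(cy - cell // 2, cy + cell // 2 + 1):
            for x in range(cx - cell // 2, cx + cell // 2 + 1):
                if 0 <= y < h and 0 <= x < w:
                    s += img[y][x]
                    n += 1
        return s / n if n else 0.0

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c = cell_mean(y, x)
            neigh = [cell_mean(y + dy * cell, x + dx * cell)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if not (dy == 0 and dx == 0)]
            m = max(neigh)
            # ratio enhances targets (centre >> surround), suppresses clutter
            out[y][x] = c / m if m > 0 else 0.0
    return out
```

On a flat image with one bright pixel, the map peaks at the target and stays near 1 elsewhere, after which an adaptive threshold (e.g. mean plus a few standard deviations of the map) would segment the target.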

694 citations


Journal ArticleDOI
TL;DR: It is observed that the nonlinear distortion produced by the transmitter PA is a significant issue in a full-duplex transceiver and, when using cheaper and less linear components, also the receiver chain nonlinearities become considerable.
Abstract: Despite the intensive recent research on wireless single-channel full-duplex communications, relatively little is known about the transceiver chain nonidealities of full-duplex devices. In this paper, the effect of nonlinear distortion occurring in the transmitter power amplifier (PA) and the receiver chain is analyzed, alongside the dynamic range requirements of analog-to-digital converters (ADCs). This is done with detailed system calculations, which combine the properties of the individual electronics components to jointly model the complete transceiver chain, including self-interference cancellation. They also quantify the decrease in the dynamic range for the signal of interest caused by self-interference at the analog-to-digital interface. Using these system calculations, we provide comprehensive numerical results for typical transceiver parameters. The analytical results are also confirmed with full waveform simulations. We observe that the nonlinear distortion produced by the transmitter PA is a significant issue in a full-duplex transceiver and, when using cheaper and less linear components, the receiver chain nonlinearities also become considerable. It is also shown that, with digitally intensive self-interference cancellation, the quantization noise of the ADCs is another significant problem.

263 citations


Journal ArticleDOI
TL;DR: The designed sensor enables accurate reconstruction of chest-wall movement caused by cardiopulmonary activities, and the algorithm enables estimation of respiration, heartbeat rate, and some indicators of heart rate variability (HRV).
Abstract: The designed sensor enables accurate reconstruction of chest-wall movement caused by cardiopulmonary activities, and the algorithm enables estimation of respiration, heartbeat rate, and some indicators of heart rate variability (HRV). In particular, quadrature receiver and arctangent demodulation with calibration are introduced for high linearity representation of chest displacement; 24-bit ADCs with oversampling are adopted for radar baseband acquisition to achieve a high signal resolution; continuous-wavelet filter and ensemble empirical mode decomposition (EEMD) based algorithm are applied for cardio/pulmonary signal recovery and separation so that accurate beat-to-beat interval can be acquired in time domain for HRV analysis. In addition, the wireless sensor is realized and integrated on a printed circuit board compactly. The developed sensor system is successfully tested on both simulated target and human subjects. In simulated target experiments, the baseband signal-to-noise ratio (SNR) is 73.27 dB, high enough for heartbeat detection. The demodulated signal has 0.35% mean squared error, indicating high demodulation linearity. In human subject experiments, the relative error of extracted beat-to-beat intervals ranges from 2.53% to 4.83% compared with electrocardiography (ECG) R-R peak intervals. The sensor provides an accurate analysis for heart rate with the accuracy of 100% for p = 2% and higher than 97% for p = 1%.
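The arctangent demodulation step mentioned above maps quadrature baseband I/Q samples to chest displacement: the phase atan2(Q, I) is unwrapped and scaled by wavelength/(4π). A minimal sketch, with an assumed 24 GHz carrier (the wavelength is an illustrative value, not taken from the paper):

```python
import math

WAVELENGTH = 0.0125  # metres; assumed 24 GHz radar, illustrative only


def arctangent_demodulate(i_samples, q_samples, wavelength=WAVELENGTH):
    """Recover target displacement from quadrature baseband I/Q samples.

    phase = atan2(Q, I), unwrapped sample-to-sample, then scaled by
    wavelength / (4*pi) (round-trip propagation doubles the phase shift).
    """
    phases = [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]
    # simple unwrapping: remove 2*pi jumps between successive samples
    unwrapped = [phases[0]]
    for p in phases[1:]:
        d = p - unwrapped[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        unwrapped.append(unwrapped[-1] + d)
    return [wavelength / (4 * math.pi) * p for p in unwrapped]
```

Feeding it I/Q generated from a known 1 mm chest motion returns that motion to within floating-point error, which is the "high demodulation linearity" the abstract quantifies; calibration of DC offsets and I/Q imbalance, which the paper also addresses, is omitted here.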

192 citations


Journal ArticleDOI
TL;DR: A novel artifact reducing approach for the JPEG decompression is proposed via sparse and redundant representations over a learned dictionary, and an effective two-step algorithm is developed that outperforms the total variation and weighted total variation decompression methods.
Abstract: The JPEG compression method is among the most successful compression schemes since it readily provides good compressed results at a rather high compression ratio. However, the decompressed result of the standard JPEG decompression scheme usually contains some visible artifacts, such as blocking artifacts and Gibbs artifacts (ringing), especially when the compression ratio is rather high. In this paper, a novel artifact-reducing approach for JPEG decompression is proposed via sparse and redundant representations over a learned dictionary. Indeed, an effective two-step algorithm is developed. The first step involves dictionary learning and the second step involves total variation regularization for decompressed images. Numerical experiments are performed to demonstrate that the proposed method outperforms the total variation and weighted total variation decompression methods in terms of peak signal-to-noise ratio and structural similarity.

179 citations


Journal ArticleDOI
TL;DR: An implementable sensing protocol is developed that incorporates error correction, and it is shown that measurement precision can be enhanced for both one-directional and general noise.
Abstract: The signal-to-noise ratio of quantum sensing protocols scales with the square root of the coherence time. Thus, increasing this time is a key goal in the field. By utilizing quantum error correction, we present a novel way of prolonging such coherence times beyond the fundamental limits of current techniques. We develop an implementable sensing protocol that incorporates error correction, and discuss the characteristics of these protocols in different noise and measurement scenarios. We examine the use of entangled versus unentangled states, and error correction's reach of the Heisenberg limit. The effects of error correction on coherence times are calculated, and we show that measurement precision can be enhanced for both one-directional and general noise.

178 citations


Journal ArticleDOI
TL;DR: A variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation by minimizing an adaptive total variation with a nonlocal data fidelity term is introduced.
Abstract: Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher, and Fatemi model), which tends to restore piecewise-regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
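The NL-means building block that the paper regularizes can be illustrated in one dimension. This is a toy sketch of plain NL-means only, not of the paper's TV-regularized variant: each sample is replaced by a weighted average of samples whose surrounding patches look similar, with weights decaying as exp(-d²/h²).

```python
import math


def nl_means_1d(signal, patch=3, search=10, h=0.5):
    """Toy 1-D NL-means denoiser.

    Each sample becomes a weighted average of samples within `search`
    positions whose length-`patch` neighbourhoods are similar; `h`
    controls how fast weights decay with patch distance.
    """
    n = len(signal)
    half = patch // 2

    def patch_at(i):
        # clamp indices at the borders
        return [signal[min(max(j, 0), n - 1)]
                for j in range(i - half, i + half + 1)]

    out = []
    for i in range(n):
        pi = patch_at(i)
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            pj = patch_at(j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj)) / patch
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

On a constant signal corrupted by small oscillating "noise", the patch-weighted average pulls every sample back toward the true level, reducing the mean squared error; the paper's contribution is to correct the residual over-smoothing of exactly this averaging step with an adaptive total variation term.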

142 citations


Journal ArticleDOI
TL;DR: A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communications devices to validate the feasibility of using CFO for authentication.
Abstract: A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communications devices. In realistic scenarios, radio frequency oscillators in each transmitter-and-receiver pair always present device-dependent biases to the nominal oscillating frequency. The combination of these biases and mobility-induced Doppler shift, characterized as a time-varying CFO, can be used as a radiometric signature for wireless device authentication. In the proposed authentication scheme, the variable CFO values at different communication times are first estimated. Kalman filtering is then employed to predict the current value by tracking the past CFO variation, which is modeled as an autoregressive random process. To achieve the proposed authentication, the current CFO estimate is compared with the Kalman predicted CFO using hypothesis testing to determine whether the signal has followed a consistent CFO pattern. An adaptive CFO variation threshold is derived for device discrimination according to the signal-to-noise ratio and the Kalman prediction error. In addition, a software-defined radio (SDR) based prototype platform has been developed to validate the feasibility of using CFO for authentication. Simulation results further confirm the effectiveness of the proposed scheme in multipath fading channels.
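The tracking-and-testing loop can be sketched with a scalar Kalman filter under the AR(1) CFO model the abstract describes. The parameter values (AR coefficient, process and measurement variances, 3-sigma gate) are illustrative assumptions, and the paper's adaptive threshold additionally depends on the SNR, which this sketch omits:

```python
def kalman_cfo_authenticate(cfo_estimates, a=0.9, q=1e-4, r=1e-3, k_sigma=3.0):
    """Authenticate frames by tracking CFO with a scalar Kalman filter.

    Model: x_k = a * x_{k-1} + w_k (process var q), z_k = x_k + v_k
    (measurement var r).  A frame is accepted when its CFO estimate lies
    within k_sigma standard deviations of the Kalman one-step prediction.
    """
    x, p = cfo_estimates[0], r   # initialise from the first estimate
    decisions = [True]
    for z in cfo_estimates[1:]:
        x_pred = a * x
        p_pred = a * a * p + q
        innov_var = p_pred + r
        accept = (z - x_pred) ** 2 <= (k_sigma ** 2) * innov_var
        decisions.append(accept)
        k = p_pred / innov_var           # Kalman gain
        x = x_pred + k * (z - x_pred)    # state update
        p = (1 - k) * p_pred
    return decisions
```

A device following a consistent CFO pattern passes the gate at every frame, while an impostor whose oscillator bias jumps by an amount large relative to the prediction-error standard deviation is rejected immediately.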

129 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the first direct measurement of a microelectro-mechanical system (MEMS) cantilever displacement with a noise floor at 40% of the shot noise limit (SNL).
Abstract: The displacement of micro-electro-mechanical-systems (MEMS) cantilevers is used to measure a broad variety of phenomena in devices ranging from force microscopes to biochemical sensors to thermal imaging systems. We demonstrate the first direct measurement of a MEMS cantilever displacement with a noise floor at 40% of the shot noise limit (SNL). By combining multi-spatial-mode quantum light sources with a simple differential measurement, we show that sub-SNL MEMS displacement sensitivity is highly accessible compared to previous efforts that measured the displacement of macroscopic mirrors with very distinct spatial structures crafted with multiple optical parametric amplifiers and locking loops. These results support a new class of quantum MEMS sensor with an ultimate signal to noise ratio determined by quantum correlations, enabling ultra-trace sensing, imaging, and microscopy applications in which signals were previously obscured by shot noise.

123 citations


Journal ArticleDOI
TL;DR: This letter investigates the effect of cochannel interference (CCI) on a hybrid satellite-terrestrial cooperative relay network (HSTCN), where both the satellite-destination and satellite-relay links undergo the shadowed Rician fading, whereas the relay-destinations link follows the Rayleigh fading.
Abstract: This letter investigates the effect of cochannel interference (CCI) on a hybrid satellite–terrestrial cooperative relay network (HSTCN), where both the satellite–destination and satellite–relay links undergo the shadowed Rician fading, whereas the relay–destination link follows the Rayleigh fading. By assuming that decode-and-forward (DF) protocol is adopted at the terrestrial relay, and the destination is corrupted by multiple CCI, we first derive the analytical expression for the moment generating function (MGF) of the output signal-to-interference-plus-noise ratio (SINR). Then, based on the Meijer-G functions, we present an approximated yet accurate method to evaluate the average symbol error rate (ASER) of the cooperative network. Moreover, the asymptotic ASER at high signal-to-noise ratio (SNR) with respect to diversity order and array gain are also given to gain further insights. Finally, numerical results are provided to demonstrate the validity of the performance analysis as well as the impact of CCI on the HSTCN.

96 citations


Journal ArticleDOI
TL;DR: The derivation of exact distribution of the test statistic is revisited, hidden assumptions on the primary user signal model are unveiled, and scope of detection probability results is discussed for identifying various classes of random primary transmissions.
Abstract: Cognitive radio is a promising solution to the current problem of spectrum scarcity. It relies on efficient spectrum sensing. Energy detection is the most dominantly used spectrum sensing approach owing to its low computational complexity and ability to identify spectrum holes without requiring a priori knowledge of primary transmission characteristics. This paper offers a comprehensive tutorial on energy detection based spectrum sensing and presents an in-depth analysis of the test statistic for the energy detector. The general structure of the test statistic and the corresponding threshold are presented to address existing ambiguities in the literature. The derivation of the exact distribution of the test statistic, reported in the literature, is revisited and hidden assumptions on the primary user signal model are unveiled. In addition, the scope of detection probability results is discussed for identifying various classes of random primary transmissions. Gaussian approximations of the test statistic are investigated. Specifically, the roles of signal-to-noise ratio and performance constraint in terms of probability of detection or false alarm are highlighted when normal approximations are used in place of exact expressions.
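Under the normal approximation discussed above, the energy detector for real Gaussian noise can be sketched as follows: the test statistic T = (1/N)Σy² is approximately Normal(σ², 2σ⁴/N) under H0, giving the false-alarm threshold λ = σ²(1 + Q⁻¹(Pfa)·√(2/N)). Note that parameterizations of the test statistic vary across the literature (which is part of the ambiguity this paper addresses); this is one common form:

```python
import math
from statistics import NormalDist


def energy_detector_threshold(n, noise_var, pfa):
    """Normal-approximation threshold for T = (1/N) * sum(y^2).

    Under H0 with real Gaussian noise of variance `noise_var`,
    T ~ Normal(sigma2, 2*sigma4/N), so
    lambda = sigma2 * (1 + Q^-1(Pfa) * sqrt(2/N)).
    """
    qinv = NormalDist().inv_cdf(1 - pfa)   # inverse Q-function
    return noise_var * (1 + qinv * math.sqrt(2.0 / n))


def energy_detect(samples, noise_var, pfa):
    """Declare the channel occupied when the average energy exceeds the
    threshold set for the target false-alarm probability."""
    t = sum(y * y for y in samples) / len(samples)
    return t > energy_detector_threshold(len(samples), noise_var, pfa)
```

With N = 1000, unit noise variance, and Pfa = 0.01 the threshold sits just above 1, so noise-level energy is ignored while a signal that doubles the received energy is flagged.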

94 citations


Proceedings ArticleDOI
10 Jun 2014
TL;DR: A random access procedure where control and data information is transmitted in the same “access” slot and it is shown by simulations that sparse signal processing algorithms are indeed “strong” enough to retrieve the information symbols out of the induced noise.
Abstract: We introduce a random access procedure where control and data information is transmitted in the same “access” slot. The key idea is data-overlayed control signalling together with a dedicated frequency area for compressive measurements exploiting sparse channel profiles and, potentially, sparse user activity. This architecture is resource-efficient since otherwise pilots have to be suitably placed in the time-frequency grid for every potential user. We analyze the achievable rates depending on the key design parameters and show by simulations that sparse signal processing algorithms are indeed “strong” enough to retrieve the information symbols out of the induced noise. Moreover, for the very high dimensional receive space applied in this paper, the number of detected users is only limited by the sheer complexity rather than performance.

Journal ArticleDOI
TL;DR: This study considers a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors, and presents a cumulative density function (CDF) analysis for the end-to-end signal-to-noise ratio.
Abstract: In this study, we consider a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors. The links from the source to the destination are assumed to be all-optical links. Assuming a variable gain relay with amplify-and-forward protocol, the electrical signal at the source is forwarded to the destination with the help of this relay through all-optical links. More specifically, we first present a cumulative density function (CDF) analysis for the end-to-end signal-to-noise ratio. Based on this CDF, the outage probability, bit-error rate, and average capacity of our proposed system are derived. Results show that the system diversity order is related to the minimum value of the channel parameters.

Journal ArticleDOI
21 Jan 2014
TL;DR: New methods for the automatic identification of commonly occurring contaminant types in surface EMG signals are presented and show that the contaminants can readily be distinguished at lower signal to noise ratios, with a growing degree of confusion at higher signal to Noise ratios.
Abstract: The ability to recognize various forms of contaminants in surface electromyography (EMG) signals and to ascertain the overall quality of such signals is important in many EMG-enabled rehabilitation systems. In this paper, new methods for the automatic identification of commonly occurring contaminant types in surface EMG signals are presented. Such methods are advantageous because the contaminant type is typically not known in advance. The presented approach uses support vector machines as the main classification system. Both simulated and real EMG signals are used to assess the performance of the methods. The contaminants considered include: 1) electrocardiogram interference; 2) motion artifact; 3) power line interference; 4) amplifier saturation; and 5) additive white Gaussian noise. Results show that the contaminants can readily be distinguished at lower signal to noise ratios, with a growing degree of confusion at higher signal to noise ratios, where their effects on signal quality are less significant.
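The paper classifies contaminants with support vector machines over several features; as a minimal illustration of one such feature, rather than the authors' full classifier, a Goertzel filter can measure the fraction of signal power at the mains frequency, which separates power-line interference from other contaminant types:

```python
import math


def goertzel_power(samples, fs, f):
    """Power |X(k)|^2 / N^2 at frequency f (Hz) via the Goertzel recursion."""
    n = len(samples)
    k = round(n * f / fs)           # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return (s1 * s1 + s2 * s2 - coeff * s1 * s2) / (n * n)


def powerline_ratio(samples, fs, mains=50.0):
    """Fraction of mean signal power concentrated at the mains frequency;
    a large value suggests power-line contamination of the EMG."""
    total = sum(x * x for x in samples) / len(samples)
    return goertzel_power(samples, fs, mains) / total if total else 0.0
```

A clean low-frequency signal yields a ratio near zero, while the same signal plus a 50 Hz tone yields a clearly elevated ratio; a practical system would combine several such features (and analogues for ECG, motion artifact, saturation, and broadband noise) in the SVM the paper describes.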

Journal ArticleDOI
TL;DR: In this paper, a joint beamforming algorithm for a multiuser wireless information and power transfer (MU-WIPT) system that is compatible with the conventional multi-user MIMO system is proposed.
Abstract: In this paper, we propose a joint beamforming algorithm for a multiuser wireless information and power transfer (MU-WIPT) system that is compatible with the conventional multiuser MIMO system. The proposed joint beamforming vectors are initialized using the well-established MU-MIMO zero-forcing beamforming (ZFBF) and are further updated to maximize the total harvested energy of energy harvesting (EH) users and guarantee the signal to interference plus noise ratio (SINR) constraints of the co-scheduled information decoding (ID) users. When ID and EH users are simultaneously served by joint beamforming vectors, the harvested energy can be increased at the cost of an SINR loss for ID users. To characterize the SINR loss, the target SINR ratio μ is introduced as the target SINR (i.e., SINR constraint) normalized by the received SINR achievable with ZFBF. Based on that ratio, the sum rate and harvested energy obtained from the proposed algorithm are analyzed under perfect/imperfect channel state information at the transmitter (CSIT). Through simulations and numerical results, we validate the derived analyses and demonstrate the EH and ID performance compared to both state-of-the-art and conventional schemes.

Journal ArticleDOI
TL;DR: The simulation results show that the penalized least squares using FORP can improve the Signal to Noise Ratio (SNR) compared to other denoising methods.
Abstract: In this paper, a new denoising method for hyperspectral images is proposed using First Order Roughness Penalty (FORP). FORP is applied in the wavelet domain to exploit the Multi-Resolution Analysis (MRA) property of wavelets. Stein's Unbiased Risk Estimator (SURE) is used to choose the tuning parameters automatically. The simulation results show that the penalized least squares using FORP can improve the Signal to Noise Ratio (SNR) compared to other denoising methods. The proposed method is also applied to a corrupted hyperspectral data set and it is shown that certain classification indices improve significantly.

Journal ArticleDOI
TL;DR: This paper proposes the Bayesian approach to signal detection in compressed sensing (CS) using compressed measurements directly and a new bound of the probability of error is derived in terms of a piecewise function.

Journal ArticleDOI
TL;DR: An improved QRS (Q wave, R wave, S wave) complex detection algorithm is proposed based on the multiresolution wavelet analysis, which presents considerable capability in cases of low signal-to-noise ratio, high baseline wander and abnormal morphologies.
Abstract: The electrocardiogram (ECG) signal is considered one of the most important tools in clinical practice for assessing the cardiac status of patients. In this study, an improved QRS (Q wave, R wave, S wave) complex detection algorithm is proposed based on multiresolution wavelet analysis. In the first step, high frequency noise and baseline wander can be distinguished from ECG data based on their specific frequency contents. Hence, removing the corresponding detail coefficients enhances the performance of the detection algorithm. Next, the authors' method relies on the power spectrum of the decomposition signals to select the detail coefficients corresponding to the frequency band of the QRS complex. Hence, the authors propose a function g as the combination of the selected detail coefficients using two parameters λ1 and λ2, which correspond to the proportion of the frequency ranges of the selected details compared with the frequency range of the QRS complex. The proposed algorithm is evaluated using the whole arrhythmia database. It presents considerable capability in cases of low signal-to-noise ratio, high baseline wander and abnormal morphologies. The results of the evaluation show good detection performance; the authors obtained a global sensitivity of 99.87%, a positive predictivity of 99.79% and a percentage error of 0.34%.
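The multiresolution idea behind such detectors can be sketched with the simplest wavelet, the Haar transform. This toy uses a plain Haar decomposition and a fixed threshold on one detail band to locate spike-like complexes; it is not the authors' power-spectrum-guided coefficient selection:

```python
def haar_step(signal):
    """One Haar analysis step: (approximation, detail), each at half length."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    return approx, detail


def detect_spikes(signal, level=2, k=4.0):
    """Decompose `level` times, threshold the last detail band at k times
    its mean absolute value, and map detections back to sample indices."""
    a, d = list(signal), []
    for _ in range(level):
        a, d = haar_step(a)
    mad = sum(abs(x) for x in d) / len(d)
    thr = k * (mad if mad > 0 else 1e-12)
    scale = 2 ** level
    return [i * scale for i, x in enumerate(d) if abs(x) > thr]
```

On a flat trace with two sharp deflections, the level-2 detail band isolates both events at the correct sample positions; a real QRS detector would add the band selection, adaptive thresholding, and refractory logic the paper describes.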

Proceedings ArticleDOI
01 Sep 2014
TL;DR: A novel real-valued unipolar version of orthogonal frequency division multiplexing (OFDM) that is suitable for direct intensity modulation with direct detection of optical wireless systems including VLC is proposed.
Abstract: In the next major phase of mobile telecommunications standards, “5G,” Visible Light Communication (VLC) technology, or light fidelity (Li-Fi), has great potential to be a breakthrough technology for the future of wireless Internet access. We propose a novel real-valued unipolar version of orthogonal frequency division multiplexing (OFDM) that is suitable for direct intensity modulation with direct detection (IM/DD) optical wireless systems, including VLC. Without additional forms of interference estimation and cancellation to recover the symbols, the Spectral and Energy Efficient OFDM (SEE-OFDM) almost doubles the spectral efficiency of unipolar optical OFDM formats. In our scheme, multiple signals are generated and added/transmitted together, where both odd and even indexed subcarriers of the inverse fast Fourier transform (IFFT) operation carry data and are not affected by any kind of interference (e.g., clipping). Monte Carlo simulations under additive white Gaussian noise (AWGN) show gains of up to 6 dB in signal-to-noise ratio (SNR) compared to the conventional energy-efficient asymmetrically clipped optical OFDM (ACO-OFDM). Moreover, a peak-to-average power ratio (PAPR) reduction of 2.5 dB is obtained as a bonus. Therefore, advantages such as increased data rate and reduced PAPR make the proposed SEE-OFDM very attractive for optical wireless systems.

Journal ArticleDOI
TL;DR: It is shown that the optimized constellations are much more robust with respect to the changes in the phase noise characteristics than the phase shift keying (PSK) modulation and quadrature amplitude modulation (QAM).
Abstract: In this paper, we optimize the constellation sets to be used in communication systems affected by phase noise. The main objective is to find the constellation which maximizes the channel mutual information under given power constraints. For any given constellation, the average mutual information (AMI) and the pragmatic average mutual information (PAMI) of the channel are calculated approximately, assuming that both the additive noise and phase noise are memoryless. Then, a simulated annealing algorithm is used to optimize the constellation. When the objective function is the PAMI, the proposed algorithm jointly optimizes the constellation and the binary labeling. We focus on constellations with 8, 16, 64 and 256 signals. The performances of the optimized constellations are compared with conventional constellations showing considerable gains in all system scenarios. In particular, it is shown that the optimized constellations are much more robust with respect to the changes in the phase noise characteristics than the phase shift keying (PSK) modulation and quadrature amplitude modulation (QAM).
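The simulated-annealing loop at the heart of such constellation optimization can be sketched in pure Python. The objective below is the minimum pairwise Euclidean distance under a unit average-power constraint, used only as a simple stand-in; the paper instead maximizes the (pragmatic) average mutual information of the phase-noise channel:

```python
import math
import random


def optimize_constellation(m=8, iters=3000, seed=0):
    """Toy simulated annealing over an m-point complex constellation
    with unit average power, maximising the minimum pairwise distance."""
    rnd = random.Random(seed)

    def normalise(pts):
        p = math.sqrt(sum(abs(z) ** 2 for z in pts) / len(pts))
        return [z / p for z in pts]

    def min_dist(pts):
        return min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

    cur = normalise([complex(rnd.gauss(0, 1), rnd.gauss(0, 1))
                     for _ in range(m)])
    cur_d = min_dist(cur)
    best, best_d = cur, cur_d
    temp = 0.5
    for _ in range(iters):
        cand = list(cur)
        cand[rnd.randrange(m)] += complex(rnd.gauss(0, 0.3 * temp),
                                          rnd.gauss(0, 0.3 * temp))
        cand = normalise(cand)     # re-impose the power constraint
        d = min_dist(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if d >= cur_d or rnd.random() < math.exp((d - cur_d) / max(temp, 1e-9)):
            cur, cur_d = cand, d
            if d > best_d:
                best, best_d = cand, d
        temp *= 0.995              # cooling schedule
    return best, best_d
```

Swapping the objective for an AMI/PAMI estimate under a memoryless phase-noise model, and perturbing the binary labeling as well, recovers the structure of the paper's joint optimization.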

Journal ArticleDOI
TL;DR: The proposed method showed the superior accuracy in the recovery of Raman spectra from measurements with extremely low SNR, compared with the four commonly used de-noising methods.
Abstract: Raman spectroscopy is a powerful non-destructive technique for qualitatively and quantitatively characterizing materials. However, noise often obscures interesting Raman peaks due to the inherently weak Raman signal, especially in biological samples. In this study, we develop a method based on spectral reconstruction to recover Raman spectra with low signal-to-noise ratio (SNR). The synthesis of narrow-band measurements from low-SNR Raman spectra eliminates the effect of noise by integrating the Raman signal along the wavenumber dimension, which is followed by spectral reconstruction based on Wiener estimation to recover the Raman spectrum with high spectral resolution. Non-negative principal components based filters are used in the synthesis to ensure that most variance contained in the original Raman measurements are retained. A total of 25 agar phantoms and 20 bacteria samples were measured and data were used to validate our method. Four commonly used de-noising methods in Raman spectroscopy, i.e. Savitzky-Golay (SG) algorithm, finite impulse response (FIR) filtration, wavelet transform and factor analysis, were also evaluated on the same set of data in addition to the proposed method for comparison. The proposed method showed the superior accuracy in the recovery of Raman spectra from measurements with extremely low SNR, compared with the four commonly used de-noising methods.

Journal ArticleDOI
TL;DR: An iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution is proposed and achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
Abstract: Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. 
Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

Journal ArticleDOI
TL;DR: A novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor, which demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique.
Abstract: Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC was shown to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
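The phase-correlation principle behind such shift estimators is easy to sketch: the cross-power spectrum of the two spectra is magnitude-normalized, so a shift turns into a pure linear phase whose inverse transform is a sharp peak at the shift. The integer-shift sketch below uses a direct O(N²) DFT for clarity and is a generic phase-correlation estimator, not the paper's exact FPC algorithm:

```python
import cmath


def dft(x):
    """Direct discrete Fourier transform (O(N^2), fine for short spectra)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]


def phase_correlation_shift(ref, shifted):
    """Estimate the (cyclic, integer) shift of `shifted` relative to `ref`.

    Normalise the cross-power spectrum conj(R)*S to unit magnitude and
    locate the peak of its inverse DFT.
    """
    n = len(ref)
    fr, fs = dft(ref), dft(shifted)
    cross = []
    for a, b in zip(fr, fs):
        c = a.conjugate() * b
        cross.append(c / abs(c) if abs(c) > 1e-12 else 0j)
    # inverse DFT of the normalised cross-power spectrum
    corr = [abs(sum(cross[k] * cmath.exp(2j * cmath.pi * k * t / n)
                    for k in range(n)))
            for t in range(n)]
    return max(range(n), key=lambda t: corr[t])
```

For FBG interrogation the recovered index would be mapped to a wavelength shift through the spectrometer's sampling interval, with sub-sample refinement (which the FPC paper addresses) layered on top.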

Journal ArticleDOI
TL;DR: This study analyses the effect on the integration gain caused by range migration and Doppler frequency migration, and proposes a corresponding compensation method according to the different input signal-to-noise ratios (SNRs) of the echo signal.
Abstract: The high-speed movement of a target may cause range migration and Doppler frequency migration of the radar echo, which seriously degrade the detection performance of the radar. To resolve the problem of detecting a high-speed target in linear frequency modulation radar, this study analyses the effect of range migration and Doppler frequency migration on the integration gain, and proposes corresponding compensation methods according to the different input signal-to-noise ratios (SNRs) of the echo signal. To compensate for range migration at high SNRs, two-dimensional median filtering and constant false alarm rate technology are combined to estimate the speed. For low SNRs, based on these coarse estimates, the authors use the discrete Fourier transform (DFT) to realise a fractional delay cell that improves the speed accuracy. Furthermore, to compensate for Doppler frequency migration, an instantaneous cross-correlation method is proposed for high SNRs, which is combined with the fractional Fourier transform method to estimate the acceleration at low SNRs. The input SNR threshold for the different algorithms is then analysed using simulation data, and a theoretical reference value is provided. Finally, the study verifies the effectiveness of the proposed methods through simulation and measured data.
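The range-migration problem and the DFT-based fractional delay cell can be illustrated with a toy model: a point target whose range profile walks a fraction of a cell per pulse smears under naive pulse summation, while delaying each pulse by the estimated walk before summation restores the integration gain. The Gaussian profile and all parameters below are made up for illustration; the paper's full SNR-dependent pipeline is not reproduced.

```python
import numpy as np

n_range, n_pulse, v = 128, 32, 0.6   # range cells, pulses, cells of range walk per pulse

# Point target whose range profile (a narrow Gaussian) walks v cells per pulse.
r = np.arange(n_range, dtype=float)
profiles = np.stack([np.exp(-0.5 * ((r - 40 - v * m) / 1.5) ** 2)
                     for m in range(n_pulse)])

def frac_delay(sig, d):
    # Fractional delay of d samples implemented with the DFT shift theorem.
    f = np.fft.rfftfreq(len(sig))
    return np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * f * d), n=len(sig))

naive = profiles.sum(axis=0)                        # range walk smears the integrated peak
aligned = np.stack([frac_delay(p, -v * m)           # undo the estimated walk, pulse by pulse
                    for m, p in enumerate(profiles)]).sum(axis=0)

print(naive.max(), aligned.max())                   # alignment concentrates the energy
```

The fractional (sub-cell) part of the delay is exactly what an integer-shift compensation cannot provide, which is why the DFT realization matters at low SNR where every fraction of the integration gain counts.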

Journal ArticleDOI
TL;DR: A symbol scaling technique is proposed for spatial modulation in the multiple-input-single-output (MISO) channel that enhances this minimum Euclidean distance by aligning the phase of the relevant channels so that the received symbol phases are distributed in uniformly spaced angles in the received SM constellation.
Abstract: The performance of spatial modulation (SM) is known to depend on the minimum Euclidean distance in the received SM constellation. In this letter, a symbol scaling technique is proposed for spatial modulation in the multiple-input-single-output (MISO) channel that enhances this minimum distance. It achieves this by aligning the phase of the relevant channels so that the received symbol phases are distributed in uniformly spaced angles in the received SM constellation. In contrast to existing amplitude-phase scaling schemes that are data-dependent and involve an increase in the transmitted signal power for ill-conditioned channels, here a phase-only shift is applied. This allows for data-independent, fixed per-antenna scaling and leaves the symbol power unchanged. The results show an improved SM performance and diversity for the proposed scheme compared to existing amplitude-phase scaling techniques.
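The phase-only scaling step can be sketched in a few lines: each transmit antenna's symbol is rotated by a unit-modulus factor chosen so the effective channel gains land at uniformly spaced angles. The random channel draw and target-angle assignment below are illustrative assumptions, not the letter's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                           # TX antennas (MISO: one RX antenna)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # flat-fading channel gains

# Phase-only per-antenna scaling: rotate each effective channel so the
# received spatial-constellation points sit at uniformly spaced angles.
targets = 2 * np.pi * np.arange(N) / N
scal = np.exp(1j * (targets - np.angle(h)))     # data-independent, unit modulus
g = h * scal                                    # effective channel after scaling

assert np.allclose(np.abs(scal), 1.0)           # per-antenna transmit power unchanged
assert np.allclose(np.exp(1j * np.angle(g)), np.exp(1j * targets))
print("phases aligned, power preserved")
```

Because `scal` depends only on the channel, not on the data, it can be computed once per channel realization and applied as a fixed per-antenna rotation.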

Proceedings ArticleDOI
01 Jan 2014
TL;DR: This work proposes an alternative frequency-weighted energy measure that uses the envelope of the derivative of the signal, which has the advantage of being nonnegative, which when applied to a detection application in newborn EEG improves performance over the Teager-Kaiser operator.
Abstract: Signal processing measures of instantaneous energy typically include only amplitude information. But measures that include both amplitude and frequency do better at assessing the energy required by the system to generate the signal, making them more sensitive measures to include in electroencephalogram (EEG) analysis. The Teager-Kaiser operator is a frequency-weighted measure that is frequently used in EEG analysis, although the operator is poorly defined in terms of common signal processing concepts. We propose an alternative frequency-weighted energy measure that uses the envelope of the derivative of the signal. This simple envelope-derivative operator has the advantage of being nonnegative, which when applied to a detection application in newborn EEG improves performance over the Teager-Kaiser operator: without post-processing filters, the area under the receiver operating characteristic curve (AUC) is 0.57 for the Teager-Kaiser operator and 0.80 for the envelope-derivative operator. The envelope-derivative operator also satisfies important properties, similar to the Teager-Kaiser operator, such as tracking instantaneous amplitude and frequency.
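The two operators are compact enough to sketch directly. The Teager-Kaiser operator is the classic three-point form x[n]² − x[n−1]x[n+1]; the envelope-derivative operator below is a plausible reading of the abstract (squared analytic envelope of the signal's derivative, hence nonnegative) and may differ in detail from the authors' definition. For a unit tone A·cos(ωn), both track approximately A²·sin²(ω).

```python
import numpy as np
from scipy.signal import hilbert

def teager_kaiser(x):
    # Discrete Teager-Kaiser energy operator: x[n]^2 - x[n-1]*x[n+1].
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def envelope_derivative(x):
    # Frequency-weighted energy as the squared analytic envelope of the
    # derivative: |y + j*H{y}|^2 with y = dx/dn; nonnegative by construction.
    y = np.gradient(x)
    return np.abs(hilbert(y)) ** 2

n = np.arange(1000)
x = np.cos(2 * np.pi * 0.05 * n)    # unit-amplitude tone, 0.05 cycles/sample

tk = teager_kaiser(x)               # constant sin^2(w) for a pure tone, can go negative on real signals
ed = envelope_derivative(x)         # nonnegative everywhere

print(ed.min() >= 0)
```

The nonnegativity is what matters for the detection application: a negative "energy" sample has no physical reading and complicates thresholding.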

Journal ArticleDOI
TL;DR: This study derives the optimum method for combining multi-coil data, namely weighting each coil with the ratio of its signal to the square of its noise, and shows that, provided the noise is uncorrelated, this is the theoretically optimal combination.
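The S/σ² weighting can be checked with a small simulation: combining repeated measurements from coils of differing sensitivity and noise with weights signal/noise² attains, for uncorrelated noise, the matched-filter SNR bound √Σ(Sᵢ/σᵢ)². All sensitivities and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_val = 1.0
sens = np.array([1.0, 0.8, 0.5, 0.3])        # per-coil signal sensitivities (assumed known)
sigma = np.array([0.05, 0.08, 0.10, 0.20])   # per-coil noise std, uncorrelated between coils

# Many repeated measurements from each coil.
n_rep = 20000
data = sens[:, None] * true_val + sigma[:, None] * rng.standard_normal((4, n_rep))

# Optimal combination: weight each coil by signal / noise^2, then normalize
# so the estimate stays unbiased.
w = sens / sigma ** 2
combined = (w[:, None] * data).sum(axis=0) / (w * sens).sum()

# For uncorrelated noise this attains the bound sqrt(sum_i (S_i / sigma_i)^2).
snr_measured = true_val / combined.std()
snr_bound = np.sqrt(((sens / sigma) ** 2).sum())
print(round(snr_measured, 1), "vs bound", round(snr_bound, 1))
```

Any other weighting (e.g., equal weights, or S/σ) yields a strictly lower combined SNR when the coils differ, which is the sense in which S/σ² is optimal.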

Journal ArticleDOI
TL;DR: The developed asymptotic results demonstrate that, by adding more artificial noise and performing joint antenna selection, better secrecy performance can be realized at the price of imposing more complexity on the system.
Abstract: In this paper, we consider a two-way relaying scenario with one pair of source nodes, one relay and one eavesdropper. All nodes are equipped with multiple antennas, and we study the impact of antenna selection on such a secure communication scenario. Three transmission schemes with different tradeoffs between secrecy performance and complexity are investigated. Particularly, when antenna selection is implemented at the relay and no artificial noise is introduced, the condition to realize secure transmissions is established. Then by allowing the sources to inject artificial noise into the system, the secrecy performance is evaluated by focusing on different eavesdropping strategies. When both the relay and the sources send artificial noise, a low complexity strategy of antenna selection is proposed to efficiently utilize the antennas at the sources and the relay. The developed asymptotic results demonstrate that, by adding more artificial noise and performing joint antenna selection, better secrecy performance, such as a larger secrecy rate and a lower outage probability, can be realized at the price of imposing more complexity on the system. Simulation results are also provided to demonstrate the accuracy of the developed analytical results.
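The secrecy-rate metric behind these results has a compact standard form: the positive part of the capacity difference between the legitimate link and the eavesdropper's link. The sketch below uses this textbook definition with illustrative SNR values; it does not reproduce the paper's two-way relaying analysis.

```python
import numpy as np

def secrecy_rate(snr_dest, snr_eave):
    # Achievable secrecy rate (bits/s/Hz): positive part of the capacity
    # difference between the legitimate and eavesdropper links.
    return max(0.0, np.log2(1 + snr_dest) - np.log2(1 + snr_eave))

# Artificial noise is designed to degrade mainly the eavesdropper's SNR,
# widening the gap between the two links.
print(secrecy_rate(100, 10))    # ~3.2 bits/s/Hz
print(secrecy_rate(100, 100))   # 0: no secrecy without an SNR advantage
```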

Patent
12 Mar 2014
TL;DR: In this paper, the authors present methods and apparatus relating to FET arrays for monitoring chemical and/or biological reactions such as nucleic acid sequencing-by-synthesis reactions.
Abstract: Methods and apparatus relating to FET arrays for monitoring chemical and/or biological reactions such as nucleic acid sequencing-by-synthesis reactions. Some methods provided herein relate to improving signal (and also signal to noise ratio) from released hydrogen ions during nucleic acid sequencing reactions.

Journal ArticleDOI
TL;DR: This work proposes a new autocorrelation function that is immune to the main effect of background noise and permits quantitative measurements at high and moderate signal-to-noise ratios, and is able to provide motion contrast information that accurately identifies areas with movement, similar to speckle variance techniques.
Abstract: Intensity-based techniques in optical coherence tomography (OCT), such as those based on speckle decorrelation, have attracted great interest for biomedical and industrial applications requiring speed or flow information. In this work we present a rigorous analysis of the effects of noise on speckle decorrelation, demonstrate that these effects frustrate accurate speed quantitation, and propose new techniques that achieve quantitative and repeatable measurements. First, we derive the effect of background noise on the speckle autocorrelation function, finding two detrimental effects of noise. We propose a new autocorrelation function that is immune to the main effect of background noise and permits quantitative measurements at high and moderate signal-to-noise ratios. At the same time, this autocorrelation function is able to provide motion contrast information that accurately identifies areas with movement, similar to speckle variance techniques. In order to extend the SNR range, we quantify and model the second effect of background noise on the autocorrelation function through a calibration. By obtaining an explicit expression for the decorrelation time as a function of speed and diffusion, we show how to use our autocorrelation function and noise calibration to measure a flowing liquid. We obtain accurate results, which are validated by Doppler OCT, and demonstrate a very high dynamic range (> 600 mm/s) compared to that of Doppler OCT (±25 mm/s). We also derive the behavior for low flows, and show that there is an inherent non-linearity in speed measurements in the presence of diffusion due to statistical fluctuations of speckle. Our technique allows quantitative and robust measurements of speeds using OCT, and this work delimits precisely the conditions in which it is accurate.
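The first noise effect described above can be demonstrated with a minimal simulation (not the paper's derivation): additive white background noise inflates only the zero-lag value of the autocorrelation, leaving nonzero lags essentially untouched, which is why an autocorrelation estimate that avoids normalizing by g(0) is largely immune to it. The AR(1) speckle model and all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Complex AR(1) "speckle" time series with correlation time tau_c samples,
# plus additive white background noise.
n, tau_c = 100000, 20.0
rho = np.exp(-1.0 / tau_c)
drive = np.sqrt(1 - rho ** 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
field = np.empty(n, dtype=complex)
field[0] = rng.standard_normal() + 1j * rng.standard_normal()
for i in range(1, n):
    field[i] = rho * field[i - 1] + drive[i]
noisy = field + 0.7 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def autocorr(x, max_lag):
    # Unnormalized autocovariance estimate g(k) = <x*(t) x(t+k)>.
    x = x - x.mean()
    return np.array([np.vdot(x[: len(x) - k], x[k:]).real
                     for k in range(max_lag)]) / len(x)

g_clean, g_noisy = autocorr(field, 60), autocorr(noisy, 60)

# White noise inflates only the zero-lag value; every nonzero lag is unchanged
# up to statistical fluctuation.
print(g_noisy[0] - g_clean[0])            # close to the added noise variance
print(np.abs(g_noisy[1:] - g_clean[1:]).max())
```

Fitting the decorrelation time from lags τ ≥ 1 (or normalizing by an extrapolated g(0)) therefore removes the dominant noise bias, in the spirit of the noise-immune autocorrelation proposed above.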

Journal ArticleDOI
TL;DR: An analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE), provided a useful basis for understanding complex dependencies in PCD Imaging performance and the potential advantages in comparison to EIDs.
Abstract: Purpose: Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). Methods: A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1–7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. Results: The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. 
The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold, roughly equivalent to 0.5 and 6 keV energy thresholds, respectively) increased the f50 (the spatial frequency at which the MTF falls to 0.50) by ∼30%, with a corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low dose, high additive noise, and high-fidelity rejection of coincident photons. Conclusions: The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs, as well as an important guide to task-based optimization in developing new PCD imaging systems.
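The f50 figure of merit quoted above is simply the frequency at which a measured MTF curve crosses 0.50; it can be read off a sampled curve by linear interpolation. The sinc-shaped MTF below is a hypothetical idealized pixel-aperture response, not the paper's measured data.

```python
import numpy as np

def f50(freqs, mtf):
    # Spatial frequency at which the MTF first falls to 0.50,
    # found by linear interpolation between the bracketing samples.
    below = np.argmax(mtf < 0.5)                 # first sample under 0.5
    f0, f1 = freqs[below - 1], freqs[below]
    m0, m1 = mtf[below - 1], mtf[below]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)

# Hypothetical MTF shaped like a sinc (idealized pixel-aperture response).
f = np.linspace(0, 2.0, 401)
mtf = np.abs(np.sinc(f * 0.8))
print(round(f50(f, mtf), 3))
```

A ∼30% increase in f50, as reported when raising the threshold, would correspond to the 0.5 crossing moving proportionally to higher spatial frequency.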