
Showing papers on "Noise (signal processing) published in 2013"


Journal ArticleDOI
TL;DR: Applying this procedure to cryoEM images of beta-galactosidase shows how overfitting varies greatly depending on the procedure, but in the best case shows no overfitting and a resolution of ~6 Å.

794 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an enhanced Kurtogram based on the power spectrum of the envelope of the signals extracted from wavelet packet nodes at different depths, which measures the protrusion of the sparse representation.
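
A minimal sketch of the idea behind such an enhanced kurtogram, assuming PyWavelets (pywt), numpy, and scipy are available: each wavelet-packet node is scored by the kurtosis of its envelope power spectrum, and the node with the most protrusive (sparsest) envelope spectrum is selected. The wavelet and depth are illustrative choices, not the paper's exact configuration.

```python
# Score each wavelet-packet node by the kurtosis of its envelope power
# spectrum; a bearing-fault band produces sparse peaks in that spectrum.
import numpy as np
import pywt
from scipy.signal import hilbert
from scipy.stats import kurtosis

def enhanced_kurtogram(x, wavelet="db8", maxlevel=4):
    wp = pywt.WaveletPacket(x, wavelet=wavelet, maxlevel=maxlevel)
    best = (None, -np.inf)
    for level in range(1, maxlevel + 1):
        for node in wp.get_level(level, order="freq"):
            env = np.abs(hilbert(node.data))                    # band envelope
            spec = np.abs(np.fft.rfft(env - env.mean())) ** 2   # envelope power spectrum
            k = kurtosis(spec)                                  # "protrusion" score
            if k > best[1]:
                best = ((level, node.path), k)
    return best  # (node identifier, score) of the most protrusive band
```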

323 citations


Journal ArticleDOI
TL;DR: This article provides an overview of the signal processing techniques used to enhance secrecy in the physical layer of multiantenna wireless communication systems, including how training procedures are developed to enable better channel estimation performance at the destination than at the eavesdropper.
Abstract: This article provides an overview of the signal processing techniques used to enhance secrecy in the physical layer of multiantenna wireless communication systems. Motivated by results in information theory, signal processing techniques in both the data transmission and the channel estimation phases have been explored in the literature to enlarge the signal quality difference at the destination and the eavesdropper. In the data transmission phase, secrecy beamforming and precoding schemes are used to enhance signal quality at the destination while limiting the signal strength at the eavesdropper. Artificial noise (AN) is also used on top of beamformed or precoded signals to further reduce the reception quality at the eavesdropper. In the channel estimation phase, training procedures are developed to enable better channel estimation performance at the destination than at the eavesdropper. As a result, the effective signal-to-noise ratios (SNRs) at the two terminals will be different and a more favorable secrecy channel will be made available for use in the data transmission phase. Finally, future research directions are discussed.
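
A minimal numpy sketch of the AN-aided transmission described above: data is beamformed along the destination channel while artificial noise is spread over that channel's null space, so the AN cancels at the destination but degrades the eavesdropper. The antenna count and power split are illustrative assumptions.

```python
# Artificial-noise-aided secrecy transmission: MRT beamforming toward the
# destination plus AN confined to the null space of the destination channel.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
Nt, phi, P = 4, 0.5, 1.0            # antennas, data power fraction, total power
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)  # destination
g = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)  # eavesdropper

w = h.conj() / np.linalg.norm(h)    # maximum-ratio beamformer for y = h @ x
Z = null_space(h[None, :])          # Nt x (Nt-1) basis with h @ Z = 0

s = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)       # data symbol
v = (rng.standard_normal(Nt - 1) + 1j * rng.standard_normal(Nt - 1)) / np.sqrt(2)
x = np.sqrt(phi * P) * w * s + np.sqrt((1 - phi) * P / (Nt - 1)) * Z @ v    # transmit

print("AN power at destination:", abs(h @ (Z @ v)) ** 2)   # ~0 by construction
print("AN power at eavesdropper:", abs(g @ (Z @ v)) ** 2)  # generally nonzero
```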

244 citations


Journal ArticleDOI
TL;DR: Three iterative algorithms with different complexity vs. performance trade-offs are proposed to mitigate asynchronous impulsive noise, exploit its sparsity in the time domain, and apply sparse Bayesian learning methods to estimate and subtract the noise impulses.
Abstract: Asynchronous impulsive noise and periodic impulsive noise limit communication performance in OFDM powerline communication systems. Conventional OFDM receivers that assume additive white Gaussian noise experience degradation in communication performance in impulsive noise. Alternate designs assume a statistical noise model and use the model parameters in mitigating impulsive noise. These receivers require training overhead for parameter estimation, and degrade due to model and parameter mismatch. To mitigate asynchronous impulsive noise, we exploit its sparsity in the time domain, and apply sparse Bayesian learning methods to estimate and subtract the noise impulses. We propose three iterative algorithms with different complexity vs. performance trade-offs: (1) we utilize the noise projection onto null and pilot tones; (2) we add the information in the data tones to perform joint noise estimation and symbol detection; (3) we use decision feedback from the decoder to further enhance the accuracy of noise estimation. These algorithms are also embedded in a time-domain block interleaving OFDM system to mitigate periodic impulsive noise. Compared to conventional OFDM receivers, the proposed methods achieve SNR gains of up to 9 dB in coded and 10 dB in uncoded systems in asynchronous impulsive noise, and up to 6 dB in coded systems in periodic impulsive noise.
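
A sketch of the core of the first (lowest-complexity) algorithm, under stated assumptions: on null tones the received frequency-domain samples depend only on the impulsive noise, so the sparse time-domain impulses can be recovered from them and subtracted. A greedy OMP-style loop stands in here for the paper's sparse Bayesian learning; the FFT size, null-tone set, and iteration count are illustrative.

```python
# Estimate sparse time-domain impulses from null-tone observations:
# on null tones, Y_null = A @ e + noise, where A is a partial DFT matrix.
import numpy as np

N, null_tones = 256, np.arange(0, 256, 4)       # FFT size, null-tone indices (assumed)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary DFT matrix
A = F[null_tones, :]                             # maps time impulses -> null tones

def estimate_impulses(y_null, n_iter=8):
    support, r = [], y_null.copy()
    e = np.zeros(N, complex)
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.conj().T @ r)))   # most likely impulse location
        if j not in support:
            support.append(j)
        e_s, *_ = np.linalg.lstsq(A[:, support], y_null, rcond=None)
        r = y_null - A[:, support] @ e_s             # update residual
    e[support] = e_s
    return e   # subtract this estimate from the time-domain received signal
```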

244 citations


Journal ArticleDOI
TL;DR: This paper derives accurate approximations for the maximal throughput in both scenarios in the high signal-to-noise ratio region, and gives new insights into the additional power cost for achieving a higher security level while maintaining a specified target throughput.
Abstract: In this paper, we investigate the design of artificial-noise-aided secure multi-antenna transmission in slow fading channels. The primary design concerns include the transmit power allocation and the rate parameters of the wiretap code. We consider two scenarios with different complexity levels: 1) the design parameters are chosen to be fixed for all transmissions; and 2) they are adaptively adjusted based on the instantaneous channel feedback from the intended receiver. In both scenarios, we provide explicit design solutions for achieving the maximal throughput subject to a secrecy constraint, given by a maximum allowable secrecy outage probability. We then derive accurate approximations for the maximal throughput in both scenarios in the high signal-to-noise ratio region, and give new insights into the additional power cost for achieving a higher security level while maintaining a specified target throughput. In the end, the throughput gain of adaptive transmission over non-adaptive transmission is also quantified and analyzed.
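
A Monte Carlo sketch of the secrecy outage probability that constrains these designs, under Rayleigh fading: an outage occurs when the eavesdropper's instantaneous capacity exceeds the rate redundancy of the wiretap code. The SNRs and rate parameters are illustrative assumptions, not the paper's optimized solutions.

```python
# Secrecy outage under Rayleigh fading for a fixed-rate wiretap code with
# codeword rate Rb and secrecy rate Rs; outage when Eve's capacity > Rb - Rs.
import numpy as np

rng = np.random.default_rng(1)
snr_b, snr_e = 100.0, 10.0        # average SNRs at the receiver (20 dB) and Eve (10 dB)
Rb, Rs = 6.0, 3.0                 # codeword and secrecy rates (bits/channel use)
n = 1_000_000

gb = rng.exponential(1.0, n)      # Rayleigh fading power gains
ge = rng.exponential(1.0, n)
Cb = np.log2(1 + snr_b * gb)      # instantaneous capacities
Ce = np.log2(1 + snr_e * ge)

transmit = Cb > Rb                       # transmit only when the receiver can decode
outage = transmit & (Ce > Rb - Rs)       # secrecy outage events
p_so = outage.sum() / max(transmit.sum(), 1)
print(f"secrecy outage probability ~ {p_so:.3f}")
```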

232 citations


Journal ArticleDOI
TL;DR: An adaptive filtering approach based on discrete wavelet transform and artificial neural network is proposed for ECG signal noise reduction that can successfully remove a wide range of noise with significant improvement on SNR (signal-to-noise ratio).
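
The paper couples a discrete wavelet transform with an artificial neural network; the sketch below shows only the classical DWT soft-thresholding baseline (the ANN stage is omitted), assuming PyWavelets is available and an evenly sampled ECG. The wavelet and decomposition level are illustrative choices.

```python
# DWT denoising baseline: universal threshold on detail coefficients, with the
# noise scale estimated from the finest-level details (median absolute deviation).
import numpy as np
import pywt

def dwt_denoise(ecg, wavelet="db4", level=5):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]
```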

219 citations


Journal ArticleDOI
TL;DR: This study describes different CFC measures and tests their applicability in simulated and real electroencephalographic (EEG) data obtained during resting state, finding that specific CFC measures correctly detect the nature of CFC under noise conditions in most cases.
Abstract: Information processing in the brain is thought to rely on the convergence and divergence of oscillatory behaviors of widely distributed brain areas. This information flow is captured in its simplest form via the concepts of synchronization and desynchronization and related metrics. More complex forms of information flow are transient synchronizations and multi-frequency behaviors with metrics related to cross-frequency coupling (CFC). It is supposed that CFC plays a crucial role in the organization of large-scale networks and functional integration across large distances. In this study we describe different CFC measures and test their applicability in simulated and real electroencephalographic (EEG) data obtained during resting state. For these purposes, we derive generic oscillator equations from full brain network models. We systematically model and simulate the various scenarios of cross-frequency coupling under the influence of noise to obtain biologically realistic oscillator dynamics. We find that (i) specific CFC measures correctly detect the nature of CFC under noise conditions in most cases, (ii) bispectrum and bicoherence correctly detect the CFCs in simulated data, and (iii) empirical resting-state EEG shows a prominent delta-alpha CFC, as identified by specific CFC measures and the more classic bispectrum and bicoherence. This coupling was mostly asymmetric (directed) and generally higher in the eyes-closed than in the eyes-open condition. In conjunction, these two sets of measures provide a powerful toolbox to reveal the nature of couplings from experimental data and as such allow inference on brain-state-dependent information processing. Methodological advantages of using CFC measures and theoretical significance of delta and alpha interactions during resting and other brain states are discussed.
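
A sketch of one measure from the CFC family evaluated here: delta-alpha phase-amplitude coupling via a Tort-style modulation index (the KL divergence of the phase-binned alpha amplitude from a uniform distribution). Band edges, filter order, and bin count are illustrative assumptions, and a recording long enough to populate every phase bin is assumed.

```python
# Delta-alpha phase-amplitude coupling: bin the alpha envelope by delta phase
# and measure the deviation of the binned amplitude distribution from uniform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(1, 4), amp_band=(8, 12), nbins=18):
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))   # delta phase
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))         # alpha amplitude
    bins = np.digitize(phase, np.linspace(-np.pi, np.pi, nbins + 1)) - 1
    bins = np.clip(bins, 0, nbins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(nbins)])
    p = np.clip(mean_amp / mean_amp.sum(), 1e-12, None)       # binned distribution
    return (np.log(nbins) + (p * np.log(p)).sum()) / np.log(nbins)  # normalized KL
```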

202 citations


Journal ArticleDOI
TL;DR: A sparse representation based noise reduction method for hyperspectral imagery is developed, which is dependent on the assumption that the non-noise component in an observed signal can be sparsely decomposed over a redundant dictionary while the noise component does not have this property.
Abstract: Noise reduction is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, we develop a sparse representation based noise reduction method for hyperspectral imagery, which relies on the assumption that the non-noise component in an observed signal can be sparsely decomposed over a redundant dictionary while the noise component does not have this property. The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of hyperspectral imagery into sparse representation. Non-locality refers to the self-similarity of an image, by which a whole image can be partitioned into groups of similar patches. The similar patches in each group are sparsely represented with a shared subset of atoms in a dictionary, making the true signal and the noise more easily separated. Sparse representation with spectral-spatial structure can exploit the joint spectral and spatial correlations of hyperspectral imagery by using 3-D blocks instead of 2-D patches for sparse coding, which also makes the true signal and the noise more distinguishable. Moreover, hyperspectral imagery has both signal-independent and signal-dependent noise, so a mixed Poisson and Gaussian noise model is used. To make the sparse representation insensitive to the varying noise distributions in different blocks, a variance-stabilizing transformation (VST) is used to make their variances comparable. The advantages of the proposed methods are validated on both synthetic and real hyperspectral remote sensing data sets.
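
A sketch of the variance-stabilizing step, assuming the detector gain alpha and the Gaussian noise level sigma are known or pre-estimated: the generalized Anscombe transform maps mixed Poisson-Gaussian noise to approximately unit, signal-independent variance before sparse coding.

```python
# Generalized Anscombe transform for mixed Poisson-Gaussian noise:
# var(GAT(z)) ~ 1 wherever the argument of the square root is positive.
import numpy as np

def generalized_anscombe(z, alpha=1.0, sigma=0.0):
    # alpha: Poisson gain; sigma: std of the additive zero-mean Gaussian noise
    arg = alpha * z + (3.0 / 8.0) * alpha**2 + sigma**2
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```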

195 citations


Journal ArticleDOI
TL;DR: Both numerical simulations with Poisson noise and experimental data from a biological cell indicate that OSS consistently outperforms the HIO, ER-HIO and noise robust (NR)-HIO algorithms at all noise levels in terms of accuracy and consistency of the reconstructions.
Abstract: Coherent diffraction imaging (CDI) is high-resolution lensless microscopy that has been applied to image a wide range of specimens using synchrotron radiation, X-ray free-electron lasers, high harmonic generation, soft X-ray lasers and electrons. Despite recent rapid advances, it remains a challenge to reconstruct fine features in weakly scattering objects such as biological specimens from noisy data. Here an effective iterative algorithm, termed oversampling smoothness (OSS), for phase retrieval of noisy diffraction intensities is presented. OSS exploits the correlation information among the pixels or voxels in the region outside of a support in real space. By properly applying spatial frequency filters to the pixels or voxels outside the support at different stages of the iterative process (i.e. a smoothness constraint), OSS finds a balance between the hybrid input–output (HIO) and error reduction (ER) algorithms to search for a global minimum in solution space, while reducing the oscillations in the reconstruction. Both numerical simulations with Poisson noise and experimental data from a biological cell indicate that OSS consistently outperforms the HIO, ER–HIO and noise robust (NR)–HIO algorithms at all noise levels in terms of accuracy and consistency of the reconstructions. It is expected that OSS will find application in the rapidly growing CDI field, as well as other disciplines where phase retrieval from noisy Fourier magnitudes is needed. The MATLAB (The MathWorks Inc., Natick, MA, USA) source code of the OSS algorithm is freely available from http://www.physics.ucla.edu/research/imaging
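
A numpy sketch of the HIO update that OSS builds on; OSS itself additionally applies stage-dependent low-pass filters to the pixels outside the support, which is only indicated by a comment here. The authors' actual MATLAB implementation is available at the URL above.

```python
# One HIO iteration: impose the measured Fourier moduli, then apply feedback
# outside the known support. `magnitudes` are measured moduli, `support` a mask.
import numpy as np

def hio_step(g, magnitudes, support, beta=0.9):
    G = np.fft.fft2(g)
    G = magnitudes * np.exp(1j * np.angle(G))       # impose measured Fourier moduli
    g_prime = np.real(np.fft.ifft2(G))
    g_new = np.where(support, g_prime, g - beta * g_prime)  # HIO feedback off-support
    # OSS (sketch): the off-support region would additionally be replaced by a
    # low-pass filtered version, with the filter tightened over the iterations.
    return g_new
```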

173 citations


Journal ArticleDOI
TL;DR: In this article, a low-rank signal matrix with additive Gaussian noise is reconstructed using orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors.
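
A minimal sketch of an orthogonally equivariant reconstruction in the sense of the TL;DR: the singular vectors of the observation are kept and only the singular values are modified. Hard thresholding at the operator-norm scale of the noise is one simple such shrinker; the threshold is an illustrative choice, not the paper's optimal rule.

```python
# Act only on singular values: zero out those at the pure-noise scale, keep
# the observed singular vectors untouched (orthogonal equivariance).
import numpy as np

def sv_shrink(Y, sigma):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    m, n = Y.shape
    tau = sigma * (np.sqrt(m) + np.sqrt(n))   # ~ top singular value of pure noise
    s_shrunk = np.where(s > tau, s, 0.0)      # hard-threshold the singular values
    return U @ np.diag(s_shrunk) @ Vt
```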

159 citations


Journal ArticleDOI
TL;DR: In this article, low-rank matrix completion (MC) based on nuclear-norm minimization, combined with a designed texture-patch transformation, is applied to 3D seismic data reconstruction.
Abstract: We have developed a new algorithm for the reconstruction of seismic traces randomly missing from a uniform grid of a 3D seismic volume. Several algorithms have been developed for such reconstructions, based on properties of the seismic wavefields and on signal processing concepts, such as sparse signal representation in a transform domain. We have investigated a novel approach, originally introduced for noise removal, which is based on the premise that for suitable representation of the seismic data as matrices or tensors, the rank of the seismic data (computed by singular value decomposition) increases with noise or missing traces. Thus, we apply low-rank matrix completion (MC) with a designed texture-patch transformation to 3D seismic data reconstruction. Low-rank components capture geometrically meaningful structures in seismic data that encompass conventional local features such as events and dips. The low-rank MC is based on nuclear-norm minimization. An efficient L1-norm minimizing algorithm...
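
A sketch of nuclear-norm-driven completion in the spirit of the abstract: iterative singular value soft-thresholding with the observed entries re-imposed at each step. The threshold, iteration count, and the absence of the texture-patch transformation are simplifications of the paper's actual solver.

```python
# Singular value thresholding (SVT) style matrix completion: soft-threshold the
# singular values, then restore the observed (non-missing) traces each iteration.
import numpy as np

def svt_complete(M, mask, tau=None, n_iter=200):
    X = np.where(mask, M, 0.0)
    tau = tau if tau is not None else 0.1 * np.linalg.norm(X, 2)  # spectral-norm scale
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # nuclear-norm proximal step
        X = np.where(mask, M, X)                          # keep observed entries fixed
    return X
```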

Journal ArticleDOI
TL;DR: Improved strategies to perform photonic information processing using an optoelectronic oscillator with delayed feedback are presented and it is illustrated that the performance degradation induced by noise can be compensated for via multi-level pre-processing masks.
Abstract: We present improved strategies to perform photonic information processing using an optoelectronic oscillator with delayed feedback. In particular, we study, via numerical simulations and experiments, the influence of a finite signal-to-noise ratio on the computing performance. We illustrate that the performance degradation induced by noise can be compensated for via multi-level pre-processing masks.

Journal ArticleDOI
TL;DR: In this article, the authors applied the approximate entropy (ApEn) method and empirical mode decomposition (EMD) to clearly separate the entry-exit events, allowing the size of the spall-like fault to be estimated.

Journal ArticleDOI
TL;DR: The study has demonstrated the importance of the optimization of the SG parameters during the conversion of spectra into derivative form, specifically window size and polynomial order of the fitting curve.
Abstract: Calculating derivatives of spectral data by the Savitzky-Golay (SG) numerical algorithm is often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to nonideal instrument and sample properties. Addressing these issues, a study of simulated and measured infrared data by partial least-squares regression has been conducted. The simulated data sets were modeled by considering a range of undesired chemical and physical spectral anomalies and variations that can occur in a measured spectrum, such as baseline variations, noise, and scattering effects. The study has demonstrated the importance of optimizing the SG parameters, specifically the window size and the polynomial order of the fitting curve, when converting spectra into derivative form. A specific optimal window size is associated with an exact component of the system being estimated, and this window size does not necessarily apply to other components present in the system. Since the optimization procedure can be time-consuming, the spectral noise level can be used as a rough guideline for assessing the window size. Moreover, it has been demonstrated that, when extended multiplicative signal correction (EMSC) is used alongside the SG procedure, the derivative treatment of data by the SG algorithm must precede the EMSC normalization.
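
A worked example of the preprocessing being optimized, using scipy's savgol_filter: a second derivative with an explicit window size and polynomial order. The values below are illustrative; the paper's point is precisely that they should be tuned per component, with the spectral noise level as a rough guide, and that this step should precede EMSC normalization.

```python
# Savitzky-Golay second derivative with explicit window/order choices.
import numpy as np
from scipy.signal import savgol_filter

wavenumber_step = 2.0                    # spacing of the spectral axis (assumed)
spectrum = np.random.default_rng(2).normal(size=800).cumsum()  # stand-in spectrum

d2 = savgol_filter(spectrum, window_length=15, polyorder=3,
                   deriv=2, delta=wavenumber_step)  # 2nd derivative, 15-point window
```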

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed optimization can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect characteristics in the transmitted signals to ensure accurate bearing fault diagnosis.

Journal ArticleDOI
TL;DR: The analysis shows that when phase noise dominates mixer and quantization noise, full-duplex systems can use either active analog cancellation or baseband digital cancellation to achieve near-identical rate gain regions.
Abstract: In this paper, we analytically study the regime in which practical full-duplex systems can achieve larger rates than equivalent half-duplex systems. The key challenge in practical full-duplex systems is the uncancelled self-interference signal, which is caused by a combination of hardware and implementation imperfections. Thus, we first present a signal model which captures the effect of significant impairments such as oscillator phase noise, low-noise amplifier noise figure, mixer noise, and analog-to-digital converter quantization noise. Using the detailed signal model, we study the rate gain region, which is defined as the region of received signal-of-interest strength where full-duplex systems outperform half-duplex systems in terms of achievable rate. The rate gain region is derived as a piecewise linear approximation in the log domain, and numerical results show that the approximation closely matches the exact region. Our analysis shows that when phase noise dominates mixer and quantization noise, full-duplex systems can use either active analog cancellation or baseband digital cancellation to achieve near-identical rate gain regions. Finally, as a design example, we numerically investigate the full-duplex system performance and rate gain region in typical indoor environments for practical wireless applications.

Journal ArticleDOI
TL;DR: A new effective noise level estimation method is proposed on the basis of the study of singular values of noise-corrupted images, which can reliably infer noise levels and show robust behavior over a wide range of visual content and noise conditions.
Abstract: Accurate estimation of the Gaussian noise level is of fundamental interest in a wide variety of vision and image processing applications, as it is critical to the processing techniques that follow. In this paper, a new effective noise level estimation method is proposed on the basis of the study of singular values of noise-corrupted images. Two novel aspects of this paper address the major challenges in noise estimation: 1) the use of the tail of singular values for noise estimation to alleviate the influence of the signal on the data basis for the noise estimation process and 2) the addition of known noise to estimate the content-dependent parameter, so that the proposed scheme is adaptive to visual signals, thereby enabling a wider application scope. The analysis and experimental results demonstrate that the proposed algorithm can reliably infer noise levels, shows robust behavior over a wide range of visual content and noise conditions, and outperforms relevant existing methods.
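
A simplified sketch of the abstract's two ideas, explicitly not the paper's exact estimator: (1) average the tail of the singular value spectrum, where image content contributes mostly a constant offset, and (2) add noise of known strength to calibrate out that content-dependent offset, which yields a closed form for the unknown noise level under a linear-tail assumption.

```python
# Model assumption: tail_mean(image) ~ offset + alpha * sigma, with alpha the
# tail slope for unit-variance noise. Adding known noise sigma_add gives a
# second equation, so sigma has the closed form below.
import numpy as np

def tail_mean(A, frac=0.75):
    s = np.linalg.svd(A, compute_uv=False)
    return s[int(len(s) * frac):].mean()      # average of the tail singular values

def estimate_noise(img, sigma_add=10.0, seed=3):
    rng = np.random.default_rng(seed)
    alpha = tail_mean(rng.standard_normal(img.shape))   # slope for unit-std noise
    t1 = tail_mean(img)
    t2 = tail_mean(img + sigma_add * rng.standard_normal(img.shape))
    d = (t2 - t1) / alpha                  # = sqrt(sigma^2 + sigma_add^2) - sigma
    return (sigma_add**2 - d**2) / (2 * d) # solve for sigma
```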

Journal ArticleDOI
TL;DR: High-frequency oscillations sampled with a rather T1-weighted contrast still contain enough specific information on resting-state networks to identify them consistently, which is inconsistent with the commonly held view that these networks operate on low-frequency fluctuations alone.
Abstract: Analysis of resting-state networks using fMRI usually ignores high-frequency fluctuations in the BOLD signal – be it because of a low TR prohibiting the analysis of fluctuations with frequencies higher than 0.25 Hz (for a typical TR of 2 s), or because of the application of a bandpass filter (commonly restricting the signal to frequencies lower than 0.1 Hz). While the standard model of convolving neuronal activity with a hemodynamic response function suggests that the signal of interest in fMRI is characterized by slow fluctuations, it is in fact unclear whether the high-frequency dynamics of the signal consist of noise only. In this study, 10 subjects were scanned at 3 T during 6 minutes of rest using a multiband EPI sequence with a TR of 354 ms to critically sample fluctuations of up to 1.4 Hz. Preprocessed data were high-pass filtered to include only frequencies above 0.25 Hz, and voxelwise whole-brain temporal ICA (tICA) was used to identify consistent high-frequency signals. The resulting components include physiological background signal sources, most notably pulsation and heartbeat components, that can be specifically identified and localized with the method presented here. Perhaps more surprisingly, common resting-state networks like the default-mode network also emerge as separate tICA components. This means that high-frequency oscillations sampled with a rather T1-weighted contrast still contain enough specific information on these resting-state networks to identify them consistently, which is inconsistent with the commonly held view that these networks operate on low-frequency fluctuations alone. Consequently, the use of bandpass filters in resting-state data analysis should be reconsidered, since this step eliminates potentially relevant information. Instead, more specific methods for the elimination of physiological background signals, for example by regression of physiological noise components, might prove to be viable alternatives.

Journal ArticleDOI
TL;DR: This tutorial presents GPs for regression as a natural nonlinear extension to optimal Wiener filtering and discusses several important aspects and extensions, including recursive and adaptive algorithms for dealing with nonstationarity, low-complexity solutions, non-Gaussian noise models, and classification scenarios.
Abstract: Gaussian processes (GPs) are versatile tools that have been successfully employed to solve nonlinear estimation problems in machine learning, but that are rarely used in signal processing. In this tutorial, we present GPs for regression as a natural nonlinear extension to optimal Wiener filtering. After establishing their basic formulation, we discuss several important aspects and extensions, including recursive and adaptive algorithms for dealing with non-stationarity, low-complexity solutions, non-Gaussian noise models and classification scenarios. Furthermore, we provide a selection of relevant applications to wireless digital communications.
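
A minimal numpy sketch of the tutorial's starting point: with a Gaussian likelihood and an RBF kernel (fixed, illustrative hyperparameters), the GP posterior mean is a linear smoother of the observations, i.e. a nonlinear-input generalization of Wiener filtering.

```python
# GP regression: posterior mean and variance under an RBF kernel with
# fixed hyperparameters and Gaussian observation noise.
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(x, y, x_star, noise_var=0.1):
    K = rbf(x, x) + noise_var * np.eye(len(x))
    Ks = rbf(x_star, x)
    alpha = np.linalg.solve(K, y)              # (K + sigma^2 I)^{-1} y
    mean = Ks @ alpha                          # posterior mean (a linear smoother)
    cov = rbf(x_star, x_star) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```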

Journal ArticleDOI
TL;DR: In this article, the authors present an analysis of the publicly available HARPS radial velocity (RV) measurements for α Cen B, a star hosting an Earth-mass planet candidate in a 3.24 day orbit.
Abstract: We present an analysis of the publicly available HARPS radial velocity (RV) measurements for α Cen B, a star hosting an Earth-mass planet candidate in a 3.24 day orbit. The goal is to devise robust ways of extracting low-amplitude RV signals of low-mass planets in the presence of activity noise. Two approaches were used to remove the stellar activity signal which dominates the RV variations: (1) Fourier component analysis (pre-whitening), and (2) local trend filtering (LTF) of the activity using short time windows of the data. The Fourier procedure results in a signal at P = 3.236 days and K = 0.42 m s–1, which is consistent with the presence of an Earth-mass planet, but the false alarm probability for this signal is rather high at a few percent. The LTF results in no significant detection of the planet signal, although it is possible to detect a marginal planet signal with this method using a different choice of time windows and fitting functions. However, even in this case the significance of the 3.24 day signal depends on the details of how a time window containing only 10% of the data is filtered. Both methods should have detected the presence of α Cen Bb at a higher significance than is actually seen. We also investigated the influence of random noise with a standard deviation comparable to the HARPS data and sampled in the same way. The distribution of the noise peaks in the period range 2.8-3.3 days has a maximum of ≈3.2 days and amplitudes approximately one-half of the K-amplitude for the planet. The presence of the activity signal may boost the velocity amplitude of these signals to values comparable to the planet. It may be premature to attribute the 3.24 day RV variations to an Earth-mass planet. A better understanding of the noise characteristics in the RV data as well as more measurements with better sampling will be needed to confirm this exoplanet.
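
A sketch of the Fourier pre-whitening loop described above, assuming astropy's LombScargle is available: locate the dominant periodogram peak, subtract the best-fit sinusoid at that frequency, and repeat. The fixed component count is a simplification of an amplitude- or false-alarm-probability-based stopping criterion.

```python
# Iterative pre-whitening of an unevenly sampled RV series.
import numpy as np
from astropy.timeseries import LombScargle

def prewhiten(t, rv, n_components=5):
    resid = rv.copy()
    periods = []
    for _ in range(n_components):
        ls = LombScargle(t, resid)
        freq, power = ls.autopower()
        f0 = freq[np.argmax(power)]            # dominant periodogram peak
        periods.append(1.0 / f0)
        resid = resid - ls.model(t, f0)        # subtract best-fit sinusoid at f0
    return periods, resid
```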

Journal ArticleDOI
TL;DR: To develop R2* mapping techniques corrected for confounding factors and optimized for noise performance, a multiecho chemical shift based R2* estimation method that simultaneously estimates and corrects for the presence of fat is described, and its noise performance and bias are analyzed in comparison with magnitude-based relaxometry methods, including a Cramer-Rao Bound framework for acquisition parameter optimization.
Abstract: R2* relaxometry has a number of important applications in MRI, including iron measurements in the liver (1–3), heart (3), pancreas (2), brain (4); BOLD imaging in functional MRI of the brain (5) and other organs such as kidney (6); tracking and detection of cells labeled with super-paramagnetic iron oxides (7). The utility of a quantitative imaging biomarker such as R2* mapping depends on its ability to measure a fundamental physical parameter (i.e., R2* at a given field strength) that correlates well with a meaningful physiological parameter such as local tissue oxygenation, iron concentration, etc. In addition, the method used to measure R2* should be accurate, precise (repeatable), reproducible across sites and robust to differences in imaging parameters, protocols, and scanner platforms. For these reasons, a thorough understanding of the factors that influence and potentially confound a biomarker such as R2* mapping is of considerable importance. Most R2* mapping methods use multiple magnitude images acquired at different echo times (TE) and model R2* as the monoexponential decay rate obtained by fitting the acquired signal at each voxel. The fitting can be performed from the magnitude of the signal measured in individual pixels or from larger regions of interest averaged over larger regions of tissue. Unfortunately, this method suffers from bias related to noise. In regions of high signal-to-noise ratio (SNR), the noise statistics of magnitude MR images are Gaussian with zero mean (8). However, as the SNR decreases with R2*-related signal decay, the noise statistics are altered and have a Rician distribution, which has a complicated dependence on the SNR with a nonzero mean (9,10). This leads to a TE-dependent and SNR-dependent bias in the signal and, if not accounted for, leads to protocol-dependent bias in the estimation of R2*. In general, two approaches, truncation (11) and baseline fitting (12), have been used to address this limitation. However, these approaches discard potentially useful information and may retain some residual bias and reduced noise performance (13). Complex fitting, however, where the magnitude operation is not performed on the acquired images, has not been widely used. Using complex data, the noise distribution remains Gaussian and is constant with zero mean for all TE. For this reason, complex R2* fitting does not suffer bias caused by nonzero-mean noise in regions of low SNR. Further, the presence of fat in tissue can lead to dramatic alterations in the signal behavior of images acquired at increasing TE (14). This is especially important in organs such as the liver, where intracellular accumulation of liver triglycerides (hepatic steatosis) may occur in up to 30% of the US population (15), particularly in individuals suffering from obesity and/or type II diabetes. The pancreas is also well known to contain fat (16), and new reports are demonstrating an increasing role of intracellular accumulation of fat in muscle (17) and the heart (18). Past work has attempted to mitigate the effects of fat in R2* mapping by acquiring images at TEs where the water and main methylene resonance of fat (~−217 Hz from the water peak at 1.5 T) are acquired “in-phase” (e.g., 4.6 ms, 9.2 ms, etc., at 1.5 T) (19,20). Increasing recognition that fat has multiple spectral peaks (21) has led to the realization that it is not possible to acquire images with water and all peaks of fat in-phase, except at a spin-echo or at TE = 0 for a free induction decay.
The interference pattern of the fat peaks with themselves and the water peak leads to increased apparent signal decay, i.e., increased apparent R2*, even when echoes are acquired “in-phase”. Further, the use of relatively long TEs such as 4.6 ms, 9.2 ms, etc., greatly diminishes the noise performance and the upper limits of the dynamic range of R2* estimation methods needed to quantify signal decay in tissues with severe iron overload, where R2* values may be on the order of 1000 s−1 (T2* on the order of 1 ms) (1). Alternative techniques for fat-corrected R2* mapping are based on suppressing the fat signal. This can mainly be achieved by T1-based fat nulling using inversion-recovery sequences (22), or by frequency-selective fat saturation (23,24). T1-based fat nulling can achieve nearly uniform fat suppression, but results in lengthened scan time and severely reduced signal-to-noise ratio. Frequency-selective fat saturation also lengthens the acquisition, and is problematic in the presence of: (a) B0 field inhomogeneities (because the peaks shift in frequency), or (b) high R2* values (because the peaks can broaden to the point that they overlap). Additionally, fat peaks near the water resonance will not be suppressed using conventional fat saturation. Water-selective R2* mapping is also expected to suffer from the same limitations. To address these challenges, in this work, we describe the use of a multiecho chemical shift based R2* estimation method that simultaneously estimates, and therefore corrects for, the presence of fat. Signal modeling-based techniques for measuring R2* in the presence of fat were initially developed by Wehrli et al. (14). The method employed in this article is based on an extension of previously reported complex-based methods for R2*-corrected and spectrally modeled fat quantification (25,26). Here, we apply these complex signal estimation approaches to avoid the pitfalls associated with magnitude-based relaxometry methods, while also correcting for the presence of tissue fat. We provide a detailed analysis of the noise performance and bias of these methods in comparison to magnitude-based methods. In addition, we demonstrate that inclusion of fat in the signal model has minimal impact on the noise performance of complex R2* relaxometry. Further, joint estimation of R2* is used to maximize the noise performance of R2* fitting when signal decay is very rapid (i.e., T2* is very short). Finally, we formulate a Cramer-Rao Bound (CRB) analysis that can be used to optimize acquisition parameters to maximize the noise performance of fat-corrected R2* relaxometry for specific ranges of expected R2* values. Simulations and clinically relevant examples are used to demonstrate pitfalls in R2* relaxometry associated with the presence of fat and very high iron concentrations. Monte Carlo simulations and theoretical analysis based on CRB are also shown to provide a framework for acquisition parameter optimization.
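
A sketch of the magnitude-versus-complex fitting point: fitting the complex multi-echo signal keeps the noise zero-mean Gaussian at every TE, avoiding the Rician bias of magnitude fitting at low SNR. A single water species with an off-resonance term is assumed here, which omits the paper's multi-peak fat model.

```python
# Complex monoexponential fit: S(TE) = rho * exp((-R2* + i*2*pi*df) * TE),
# fitted on stacked real/imaginary residuals so the noise stays Gaussian.
import numpy as np
from scipy.optimize import least_squares

def fit_r2star_complex(te, s):
    """te: echo times (s); s: complex signals. Returns (|rho|, R2*, df)."""
    def resid(p):
        rho_re, rho_im, r2s, df = p
        model = (rho_re + 1j * rho_im) * np.exp((-r2s + 2j * np.pi * df) * te)
        return np.concatenate([(model - s).real, (model - s).imag])
    p0 = [s[0].real, s[0].imag, 1.0 / (te[-1] - te[0]), 0.0]
    p = least_squares(resid, p0).x
    return np.hypot(p[0], p[1]), p[2], p[3]
```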

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a multi-stable stochastic resonance (SR) method for detecting rotating machine faults by analyzing the influence relationship between the resonance model and the resonance effect.

Journal ArticleDOI
Eric R. Fossum1
TL;DR: In this article, the performance metrics of single-bit and multi-bit photo-electron counting quanta image sensors (QIS) were analyzed using Poisson arrival statistics and signal and noise as a function of exposure were determined.
Abstract: Imaging performance metrics of single-bit and multi-bit photo-electron-counting quanta image sensors (QIS) are analyzed using Poisson arrival statistics. Signal and noise as a function of exposure are determined. The D-log H characteristic of single-bit sensors including overexposure latitude is quantified. Linearity and dynamic range are also investigated. Read-noise-induced bit-error rate is analyzed and a read-noise target of less than 0.15 e-rms is suggested.
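
A worked example of the single-bit statistics analyzed in the paper: under Poisson arrivals with quanta exposure H (mean photoelectrons per jot), the probability of reading a 1 is 1 − exp(−H), which produces the D-log H overexposure behavior; the exposure values below are illustrative.

```python
# Single-bit QIS response and binary shot noise versus quanta exposure H.
import numpy as np

H = np.logspace(-2, 1.5, 8)      # quanta exposure (mean e- per jot)
p1 = 1 - np.exp(-H)              # P(jot reads 1); saturates logarithmically
var = p1 * (1 - p1)              # per-jot binary shot-noise variance
snr = p1 / np.sqrt(var)
for h, p, s in zip(H, p1, snr):
    print(f"H={h:7.3f}  P(1)={p:.4f}  bit-SNR={s:.3f}")
```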

Journal ArticleDOI
TL;DR: It is demonstrated that the average degree of neurons within the hybrid scale-free network significantly influences the optimal amount of noise for the occurrence of stochastic resonance, indicating that there also exists an optimal topology for the amplification of the response to the weak input signal.
Abstract: We study the phenomenon of stochastic resonance in a system of coupled neurons that are globally excited by a weak periodic input signal. We make the realistic assumption that the chemical and electrical synapses interact in the same neuronal network, hence constituting a hybrid network. By considering a hybrid coupling scheme embedded in the scale-free topology, we show that the electrical synapses are more efficient than chemical synapses in promoting the best correlation between the weak input signal and the response of the system. We also demonstrate that the average degree of neurons within the hybrid scale-free network significantly influences the optimal amount of noise for the occurrence of stochastic resonance, indicating that there also exists an optimal topology for the amplification of the response to the weak input signal. Lastly, we verify that the presented results are robust to variations of the system size.

Journal ArticleDOI
TL;DR: In this article, the TFM was modified to include the directional dependence of ultrasonic velocity in an anisotropic composite laminate, and practical procedures for measuring the direction-dependent velocity profile were described.
Abstract: As carbon fibre composite becomes more widely used for primary structural components in aerospace and other applications, the reliable detection of small defects in thick-sections is increasingly important. This article describes an experimental procedure for improving the detectability of such defects based on modifications to the Total Focusing Method (TFM) of processing ultrasonic array data to form an image. First the TFM is modified to include the directional dependence of ultrasonic velocity in an anisotropic composite laminate, and practical procedures for measuring the direction-dependent velocity profile are described. The performance of the TFM is then optimised in terms of the signal to noise ratio for Side-Drilled Holes (SDHs) by tuning both the frequency-domain filtering of data and the maximum aperture angle used in processing. Finally an attenuation correction is applied to the image so that the background structural noise level is uniform at all depths. The result is an image where the sensitivity (i.e. the signal to noise ratio) to a particular feature is independent of depth. Signals from 1.5 mm diameter SDHs in the final image at depths of 4, 10 and 16 mm are around 15 dB above the root-mean-square level of the surrounding structural noise. In a standard TFM image, the signals from the same SDHs are not visible above the structural noise.
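
A sketch of the baseline delay-and-sum TFM that the paper modifies, with an isotropic constant velocity c; that constant is exactly what the paper replaces by a direction-dependent profile for anisotropic composites. The full-matrix-capture array fmc[tx, rx, t], element positions, and sampling rate are assumed inputs, and no envelope detection or apodization is applied.

```python
# Total Focusing Method: sum every transmit-receive A-scan at the round-trip
# delay to each image point, for a linear array along x at depth z = 0.
import numpy as np

def tfm(fmc, elem_x, xs, zs, c, fs):
    n_el = len(elem_x)
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            d = np.hypot(elem_x - x, z)             # element-to-point distances
            for tx in range(n_el):
                t = (d[tx] + d) / c                  # round-trip delays to all receivers
                idx = np.clip((t * fs).astype(int), 0, fmc.shape[2] - 1)
                img[iz, ix] += fmc[tx, np.arange(n_el), idx].sum()
    return np.abs(img)                               # crude magnitude image
```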

Journal ArticleDOI
Jimeng Li1, Xuefeng Chen1, Zhaohui Du1, Zuowei Fang1, Zhengjia He1 
TL;DR: In this paper, a noise-controlled second-order enhanced stochastic resonance (SR) method based on the Morlet wavelet transform is proposed to extract fault feature for wind turbine vibration signals in the present study.

Proceedings ArticleDOI
06 Apr 2013
TL;DR: A survey of the various types of noise corrupting the ECG signal and of various approaches based on the Wavelet Transform, Fuzzy logic, FIR filtering, and Empirical Mode Decomposition used to denoise the signal effectively is presented.
Abstract: Noise always degrades the quality of the ECG signal. ECG noise removal is complicated by the time-varying nature of the ECG signal. As the ECG signal is used for the primary diagnosis and analysis of heart diseases, a good-quality ECG signal is necessary. This paper presents a survey of the various types of noise corrupting the ECG signal and of various approaches based on the Wavelet Transform, Fuzzy logic, FIR filtering, and Empirical Mode Decomposition used to denoise the signal effectively. Tables comparing the performance of the various denoising techniques on related parameters are included.

Journal ArticleDOI
TL;DR: It is shown that under some conditions on RIP and the minimum magnitude of the nonzero elements of the sparse signal, OMP with proper stopping rules can recover the support of the signal exactly from the noisy observation.
Abstract: The orthogonal matching pursuit (OMP) algorithm is a classical greedy algorithm in compressed sensing. In this letter, we study the performance of OMP in recovering the support of a sparse signal from a few noisy linear measurements. We consider two types of bounded noise, and our analysis is in the framework of the restricted isometry property (RIP). It is shown that, under some conditions on the RIP and the minimum magnitude of the nonzero elements of the sparse signal, OMP with proper stopping rules can recover the support of the signal exactly from the noisy observation. We also discuss the case of Gaussian noise. Our conditions on the RIP improve some existing results.
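
A standard textbook OMP sketch illustrating the two kinds of stopping rules discussed: a known-sparsity iteration cap and a residual-norm threshold tied to the noise bound. No claim is made to the letter's exact RIP conditions or constants.

```python
# OMP: greedily pick the column most correlated with the residual, re-solve a
# least-squares fit on the selected support, and stop by sparsity or residual.
import numpy as np

def omp(A, y, k_max, eps):
    """A: measurement matrix (unit-norm columns); y: noisy observation."""
    support, r = [], y.copy()
    x_s = np.zeros(0)
    for _ in range(k_max):                     # sparsity-based stopping rule
        if np.linalg.norm(r) <= eps:           # residual-based stopping rule
            break
        j = int(np.argmax(np.abs(A.T @ r)))    # column most correlated with residual
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s            # update residual
    return support, x_s
```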