
Showing papers on "Noise published in 2013"


Journal ArticleDOI
TL;DR: A quantitative statistical method is developed to distinguish true biological variability from the high levels of technical noise in single-cell experiments and quantifies the statistical significance of observed cell-to-cell variability in expression strength on a gene-by-gene basis.
Abstract: A statistical method that uses spike-ins to model the dependence of technical noise on transcript abundance in single-cell RNA-seq experiments allows identification of genes wherein observed variability in read counts can be reliably interpreted as a signal of biological variability as opposed to the effect of technical noise. Single-cell RNA-seq can yield valuable insights about the variability within a population of seemingly homogeneous cells. We developed a quantitative statistical method to distinguish true biological variability from the high levels of technical noise in single-cell experiments. Our approach quantifies the statistical significance of observed cell-to-cell variability in expression strength on a gene-by-gene basis. We validate our approach using two independent data sets from Arabidopsis thaliana and Mus musculus.
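The spike-in calibration at the heart of such a method can be sketched in a few lines. The noise law CV² = a1/mean + a0, the coefficients, and the two-fold exceedance criterion below are illustrative assumptions, not the paper's exact fitting procedure.

```python
import numpy as np

# Hypothetical spike-in summary: mean count vs. squared coefficient of
# variation (CV^2), following a technical-noise law CV^2 = a1/mean + a0
means = np.logspace(0, 4, 50)
a1_true, a0_true = 1.5, 0.01
cv2_spikein = a1_true / means + a0_true

# Fit the technical-noise curve by least squares in the basis (1/mean, 1)
X = np.column_stack([1.0 / means, np.ones_like(means)])
(a1_hat, a0_hat), *_ = np.linalg.lstsq(X, cv2_spikein, rcond=None)

def exceeds_technical_noise(mean, cv2, factor=2.0):
    # Flag a gene whose observed CV^2 sits well above the technical curve
    return cv2 > factor * (a1_hat / mean + a0_hat)
```

In practice the fit is done on real spike-in counts, and the exceedance test is replaced by a calibrated significance test per gene.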

949 citations


Journal ArticleDOI
05 Apr 2013-Science
TL;DR: This work developed decision-making tasks in which sensory evidence is delivered in randomly timed pulses, and analyzed the resulting data with models that use the richly detailed information of each trial’s pulse timing to distinguish between different decision-making mechanisms.
Abstract: The gradual and noisy accumulation of evidence is a fundamental component of decision-making, with noise playing a key role as the source of variability and errors. However, the origins of this noise have never been determined. We developed decision-making tasks in which sensory evidence is delivered in randomly timed pulses, and analyzed the resulting data with models that use the richly detailed information of each trial’s pulse timing to distinguish between different decision-making mechanisms. This analysis allowed measurement of the magnitude of noise in the accumulator’s memory, separately from noise associated with incoming sensory evidence. In our tasks, the accumulator’s memory was noiseless, for both rats and humans. In contrast, the addition of new sensory evidence was the primary source of variability. We suggest our task and modeling approach as a powerful method for revealing internal properties of decision-making processes.
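A toy simulation makes the logic of the pulse-based analysis concrete: with a noiseless accumulator memory, trial-to-trial variance grows with the number of pulses rather than with elapsed time. All constants here are illustrative, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20000

def accumulate(n_pulses, duration, sigma_sensory, sigma_memory):
    # Each pulse contributes unit evidence plus sensory noise; the memory
    # diffuses over the trial with per-unit-time noise sigma_memory.
    a = np.zeros(n_trials)
    for _ in range(n_pulses):
        a += 1.0 + sigma_sensory * rng.standard_normal(n_trials)
    a += sigma_memory * np.sqrt(duration) * rng.standard_normal(n_trials)
    return a

# Noiseless memory (the paper's finding for rats and humans): variance
# scales with pulse count, independent of trial duration.
v4 = accumulate(4, duration=10.0, sigma_sensory=0.5, sigma_memory=0.0).var()
v16 = accumulate(16, duration=10.0, sigma_sensory=0.5, sigma_memory=0.0).var()
```

Randomizing pulse timing across trials is what lets the fitted model attribute observed variance to one term or the other.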

561 citations


Posted Content
TL;DR: A different attack on the problem is proposed, which deals with arbitrary (but noisy enough) corruption, arbitrary reconstruction loss, handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise.
Abstract: Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
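The sampling idea behind this line of work can be illustrated with a toy one-dimensional "denoiser" standing in for a trained autoencoder; the shrinkage function and all constants below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma_c = 3.0, 0.5   # data mode and corruption noise level (toy values)

def corrupt(x):
    # Gaussian corruption process C(x_tilde | x)
    return x + sigma_c * rng.standard_normal()

def reconstruct(x_tilde):
    # Stand-in for a trained reconstruction function: shrink toward mu
    return mu + 0.5 * (x_tilde - mu)

# Alternate corruption and reconstruction; the chain's stationary
# distribution approximates the implicitly learned data density.
x, samples = 0.0, []
for _ in range(5000):
    x = reconstruct(corrupt(x))
    samples.append(x)
chain_mean = float(np.mean(samples[500:]))
```

The chain concentrates around the data mode, mirroring how MCMC on a trained denoising autoencoder samples from the learned density.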

439 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed approach maintains defect-recognition performance under intra-class feature variations and illumination and grayscale changes, and that even in the toughest situation, with additive Gaussian noise, AECLBP can still achieve moderate recognition accuracy.

433 citations


Journal ArticleDOI
TL;DR: DEMAND (Diverse Environments Multi-channel Acoustic Noise Database) provides a set of 16-channel noise files recorded in a variety of indoor and outdoor settings, to encourage research into algorithms beyond the stereo setup.
Abstract: Multi-microphone arrays allow for the use of spatial filtering techniques that can greatly improve noise reduction and source separation. However, for speech and audio data, work on noise reduction or separation has focused primarily on one- or two-channel systems. Because of this, databases of multichannel environmental noise are not widely available. DEMAND (Diverse Environments Multi-channel Acoustic Noise Database) addresses this problem by providing a set of 16-channel noise files recorded in a variety of indoor and outdoor settings. The data was recorded using a planar microphone array consisting of four staggered rows, with the smallest distance between microphones being 5 cm and the largest being 21.8 cm. DEMAND is freely available under a Creative Commons license to encourage research into algorithms beyond the stereo setup.

413 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel speech enhancement method that is based on a Bayesian formulation of NMF (BNMF), and compares the performance of the developed algorithms with state-of-the-art speech enhancement schemes using various objective measures.
Abstract: Reducing the interference noise in a monaural noisy speech signal has been a challenging task for many years. Compared to traditional unsupervised speech enhancement methods, e.g., Wiener filtering, supervised approaches, such as algorithms based on hidden Markov models (HMM), lead to higher-quality enhanced speech signals. However, the main practical difficulty of these approaches is that for each noise type a model is required to be trained a priori. In this paper, we investigate a new class of supervised speech denoising algorithms using nonnegative matrix factorization (NMF). We propose a novel speech enhancement method that is based on a Bayesian formulation of NMF (BNMF). To circumvent the mismatch problem between the training and testing stages, we propose two solutions. First, we use an HMM in combination with BNMF (BNMF-HMM) to derive a minimum mean square error (MMSE) estimator for the speech signal with no information about the underlying noise type. Second, we suggest a scheme to learn the required noise BNMF model online, which is then used to develop an unsupervised speech enhancement system. Extensive experiments are carried out to investigate the performance of the proposed methods under different conditions. Moreover, we compare the performance of the developed algorithms with state-of-the-art speech enhancement schemes using various objective measures. Our simulations show that the proposed BNMF-based methods outperform the competing algorithms substantially.
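The NMF building block behind such systems can be sketched with plain multiplicative updates and a Wiener-style mask. This is a generic supervised-NMF sketch, not the paper's Bayesian formulation, and the toy spectrogram below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def nmf(V, k, iters=200):
    # Multiplicative updates for V ~ W @ H under squared error
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def fit_activations(V, W, iters=200):
    # Fit activations for a mixture against fixed, pre-trained bases
    H = rng.random((W.shape[1], V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    return H

# Toy magnitude spectrograms: "speech" in low bands, "noise" in high bands
lo = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)[:, None]
speech = 2.0 * np.abs(rng.standard_normal((8, 40))) * lo
noise = np.abs(rng.standard_normal((8, 40))) * (1 - lo)

# Train separate basis sets, then explain the mixture with both
Ws, _ = nmf(speech, 2)
Wn, _ = nmf(noise, 2)
W = np.hstack([Ws, Wn])
mix = speech + noise
H = fit_activations(mix, W)

# Wiener-style mask: fraction of each T-F bin credited to the speech bases
mask = (Ws @ H[:2]) / (W @ H + 1e-9)
enhanced = mask * mix
```

The paper's contribution replaces the point estimates here with Bayesian posteriors over the NMF factors and an HMM over noise types.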

399 citations


Journal ArticleDOI
TL;DR: A patch-based noise level estimation algorithm that selects low-rank patches without high frequency components from a single noisy image and estimates the noise level based on the gradients of the patches and their statistics is proposed.
Abstract: Noise level is an important parameter to many image processing applications. For example, the performance of an image denoising algorithm can be greatly degraded by poor noise level estimation. Most existing denoising algorithms simply assume the noise level is known, which largely prevents them from practical use. Moreover, even with the given true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to the scene complexity. Our approach includes the process of selecting low-rank patches without high frequency components from a single noisy image. The selection is based on the gradients of the patches and their statistics. Then, the noise level is estimated from the selected patches using principal component analysis. Because the true noise level does not always provide the best performance for nonblind denoising algorithms, we further tune the noise level parameter for nonblind denoising. Experiments demonstrate that both the accuracy and stability of our method are superior to the state-of-the-art noise level estimation algorithm for various scenes and noise levels.
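A simplified variant of the flat-patch-plus-PCA idea is easy to sketch: select low-gradient patches and read the noise variance off the trailing eigenvalues of their covariance. The selection rule, the number of discarded components, and the use of a trailing-eigenvalue average (rather than the paper's iterative criterion) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic image: smooth ramp plus Gaussian noise of known sigma
sigma = 5.0
yy, xx = np.mgrid[0:128, 0:128]
img = 0.5 * xx + 0.3 * yy + sigma * rng.standard_normal((128, 128))

# Gather non-overlapping 8x8 patches and keep the flattest half
# (smallest gradient energy), approximating low-rank patch selection
p = 8
patches, energy = [], []
for i in range(0, 128, p):
    for j in range(0, 128, p):
        blk = img[i:i + p, j:j + p]
        gy, gx = np.gradient(blk)
        patches.append(blk.ravel())
        energy.append(float((gx**2 + gy**2).sum()))
patches = np.array(patches)
flat = patches[np.argsort(energy)[: len(patches) // 2]]

# PCA on the selected patches: trailing eigenvalues reflect noise variance
eigs = np.linalg.eigvalsh(np.cov(flat.T))      # ascending order
sigma_hat = float(np.sqrt(eigs[:-4].mean()))   # drop a few signal components
```

On real images the gradient-based selection matters far more, since textured patches would otherwise inflate the estimate.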

381 citations


Journal ArticleDOI
03 Sep 2013-PLOS ONE
TL;DR: This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach and is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters.
Abstract: Diffusion Weighted Images (DWI) normally shows a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters.
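The principal-component shrinkage idea can be sketched on synthetic data. A hard threshold at an assumed-known noise variance stands in for the paper's local overcomplete shrinkage, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy multicomponent dataset: N voxels x K diffusion directions,
# with a low-rank signal buried in Gaussian noise
N, K, rank, sn = 200, 16, 2, 0.3
signal = rng.standard_normal((N, rank)) @ rng.standard_normal((rank, K))
noisy = signal + sn * rng.standard_normal((N, K))

# PCA shrinkage: discard components whose variance is near the noise floor
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
keep = (s**2 / N) > 2 * sn**2        # noise variance assumed known here
denoised = (U[:, keep] * s[keep]) @ Vt[keep] + mean

mse_noisy = float(np.mean((noisy - signal) ** 2))
mse_denoised = float(np.mean((denoised - signal) ** 2))
```

The full method applies this locally in overlapping neighborhoods and aggregates the overcomplete estimates.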

334 citations


Journal ArticleDOI
TL;DR: The results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
Abstract: Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by recording, using MEG, the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, it is here demonstrated that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.

322 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine whether this view is consistent with the data and reach three main conclusions; in particular, structural VARs typically cannot recover news and noise shocks: if agents face a signal extraction problem and are unable to separate news from noise, then the econometrician, faced with either the same data as the agents or a subset of these data, cannot do it either.
Abstract: A common view of the business cycle gives a central role to anticipations. Consumers and firms continuously receive information about the future, which is sometimes news and sometimes just noise. Based on this information, consumers and firms choose spending and, because of nominal rigidities, spending affects output in the short run. If ex post the information turns out to be news, the economy adjusts gradually to a new level of activity. If it turns out to be just noise, the economy returns to its initial state. Therefore, the dynamics of news and noise generate both short-run and long-run changes in aggregate activity. This view appears to capture many of the aspects often ascribed to fluctuations: the role of animal spirits in affecting demand—spirits coming here from a rational reaction to information about the future—the role of demand in affecting output in the short run, together with the notion that in the long run output follows a natural path determined by fundamentals. In this paper, we examine whether this view is consistent with the data. We reach three main conclusions, the first two methodological, the third substantive. Structural VARs typically cannot recover news and noise shocks. The reason is straightforward: if agents face a signal extraction problem, and are unable to separate news from noise, then the econometrician, faced with either the same data as the agents or a subset of these data, cannot do it either. While structural estimation methods cannot recover the actual time series for news and noise shocks either, they can recover underlying structural parameters, and thus the relative role and dynamic effects of news and noise shocks. Estimation of both a simple model, and then of a more elaborate DSGE model suggest that agents indeed solve such a signal extraction problem, and that noise shocks play an important role in determining short-run dynamics. 
Recent efforts to estimate business cycle models in which expectations about the future play an important role include Christiano et al. (2010) and Schmitt-Grohe and Uribe (2012). Those papers follow the approach of Jaimovich and Rebelo (2009) and model news as perfectly anticipated productivity changes that will occur at

285 citations


Journal ArticleDOI
TL;DR: An overview of research concerning both acute and chronic effects of noise exposure on children's cognitive performance shows negative effects on speech perception and listening comprehension, which are more pronounced in children than in adults.
Abstract: The present paper provides an overview of research concerning both acute and chronic effects of exposure to noise on children's cognitive performance. Experimental studies addressing the impact of acute exposure showed negative effects on speech perception and listening comprehension. These effects are more pronounced in children as compared to adults. Children with language or attention disorders and second-language learners are still more impaired than age-matched controls. Noise-induced disruption was also found for non-auditory tasks, i.e., serial recall of visually presented lists and reading. The impact of chronic exposure to noise was examined in quasi-experimental studies. Indoor noise and reverberation in classroom settings were found to be associated with poorer performance of the children in verbal tasks. Regarding chronic exposure to aircraft noise, studies consistently found that high exposure is associated with lower reading performance. Even though the reported effects are usually small in magnitude, and confounding variables were not always sufficiently controlled, policy makers responsible for noise abatement should be aware of the potential impact of environmental noise on children's development.

Journal ArticleDOI
TL;DR: Three iterative algorithms with different complexity vs. performance trade-offs are proposed to mitigate asynchronous impulsive noise, exploit its sparsity in the time domain, and apply sparse Bayesian learning methods to estimate and subtract the noise impulses.
Abstract: Asynchronous impulsive noise and periodic impulsive noise limit communication performance in OFDM powerline communication systems. Conventional OFDM receivers that assume additive white Gaussian noise experience degradation in communication performance in impulsive noise. Alternate designs assume a statistical noise model and use the model parameters in mitigating impulsive noise. These receivers require training overhead for parameter estimation, and degrade due to model and parameter mismatch. To mitigate asynchronous impulsive noise, we exploit its sparsity in the time domain, and apply sparse Bayesian learning methods to estimate and subtract the noise impulses. We propose three iterative algorithms with different complexity vs. performance trade-offs: (1) we utilize the noise projection onto null and pilot tones; (2) we add the information in the data tones to perform joint noise estimation and symbol detection; (3) we use decision feedback from the decoder to further enhance the accuracy of noise estimation. These algorithms are also embedded in a time-domain block interleaving OFDM system to mitigate periodic impulsive noise. Compared to conventional OFDM receivers, the proposed methods achieve SNR gains of up to 9 dB in coded and 10 dB in uncoded systems in asynchronous impulsive noise, and up to 6 dB in coded systems in periodic impulsive noise.
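The null-tone idea of algorithm (1) can be sketched with a generic greedy sparse solver: impulses are sparse in time, and the null tones observe them without data interference. The tone layout, impulse values, and the use of orthogonal matching pursuit (rather than the paper's sparse Bayesian learning) are simplifying assumptions.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix

# Sparse time-domain impulsive noise
e = np.zeros(N, dtype=complex)
e[[7, 30, 51]] = [10.0, -8.0, 12.0]

# The receiver sees the noise alone on null tones (tones carrying no data)
null_tones = np.arange(32)               # hypothetical contiguous layout
A = F[null_tones, :]
y = A @ e

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick impulse positions,
    # then least-squares fit the amplitudes on the chosen support
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    est = np.zeros(A.shape[1], dtype=complex)
    est[support] = coef
    return est

e_hat = omp(A, y, 3)   # estimated impulses, to be subtracted at the receiver
```

A real receiver would estimate the sparsity level as well and combine this with the pilot tones and decoder feedback described in the abstract.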

Journal ArticleDOI
TL;DR: This paper provides concrete proof that participatory techniques, when implemented properly, can achieve the same accuracy as standard noise mapping techniques through a citizen science experiment for noise mapping a 1 km2 area in the city of Antwerp using NoiseTube.

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A low-rank matrix factorization problem with a Mixture of Gaussians (MoG) noise, which is a universal approximator for any continuous distribution, and hence is able to model a wider range of real noise distributions.
Abstract: Many problems in computer vision can be posed as recovering a low-dimensional subspace from high-dimensional visual data. Factorization approaches to low-rank subspace estimation minimize a loss function between the observed measurement matrix and a bilinear factorization. Most popular loss functions include the L1 and L2 losses. While L1 is optimal for Laplacian distributed noise, L2 is optimal for Gaussian noise. However, real data is often corrupted by an unknown noise distribution, which is unlikely to be purely Gaussian or Laplacian. To address this problem, this paper proposes a low-rank matrix factorization problem with a Mixture of Gaussians (MoG) noise. The MoG model is a universal approximator for any continuous distribution, and hence is able to model a wider range of real noise distributions. The parameters of the MoG model can be estimated with a maximum likelihood method, while the subspace is computed with standard approaches. We illustrate the benefits of our approach in extensive synthetic, structure from motion, face modeling and background subtraction experiments.
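The noise-modeling half of the approach reduces to EM for a mixture of Gaussians. The sketch below fits a two-component zero-mean MoG to synthetic residuals; the subspace-fitting step is omitted, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic residuals: mostly small Gaussian noise with occasional outliers
n = 5000
is_outlier = rng.random(n) < 0.1
resid = np.where(is_outlier,
                 3.0 * rng.standard_normal(n),
                 0.2 * rng.standard_normal(n))

# EM for a two-component zero-mean Gaussian mixture fit to the residuals
pi, s1, s2 = 0.5, 1.0, 0.1        # mixing weight and component std devs
for _ in range(100):
    # E-step: responsibilities of the broad (outlier) component
    p1 = pi * np.exp(-resid**2 / (2 * s1**2)) / s1
    p2 = (1 - pi) * np.exp(-resid**2 / (2 * s2**2)) / s2
    r = p1 / (p1 + p2)
    # M-step: update mixing weight and standard deviations
    pi = r.mean()
    s1 = np.sqrt((r * resid**2).sum() / r.sum())
    s2 = np.sqrt(((1 - r) * resid**2).sum() / (1 - r).sum())
```

In the full algorithm these responsibilities weight the entries of the measurement matrix during the low-rank factorization step, which is what lets the subspace fit tolerate non-Gaussian corruption.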

Journal ArticleDOI
TL;DR: Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions, and increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs.
Abstract: Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%.
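For reference, the ideal binary mask that the trained masks approximate is simple to state: keep the time-frequency units where the local SNR exceeds a criterion. The synthetic magnitudes and the 0 dB local criterion below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy time-frequency magnitudes for target speech and background noise
speech = np.abs(rng.standard_normal((32, 100)))
noise = 0.8 * np.abs(rng.standard_normal((32, 100)))
mix = speech + noise    # magnitudes treated as additive for illustration

# Ideal binary mask: retain T-F units whose local SNR exceeds the criterion
lc_db = 0.0
local_snr_db = 20 * np.log10(speech / (noise + 1e-12))
ibm = local_snr_db > lc_db
separated = ibm * mix

# Masking should bring the mixture closer to the clean speech
err_mix = float(np.sum((mix - speech) ** 2))
err_sep = float(np.sum((separated - speech) ** 2))
```

The paper's point is precisely that this mask requires the premixed signals, so a classifier is trained to estimate it from the noisy mixture alone.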

Journal ArticleDOI
TL;DR: A noise-resistant LBP (NRLBP) is proposed to preserve the image local structures in presence of noise and an error-correction mechanism to recover the distorted image patterns is developed.
Abstract: Local binary pattern (LBP) is sensitive to noise. Local ternary pattern (LTP) partially solves this problem. Both LBP and LTP, however, treat the corrupted image patterns as they are. In view of this, we propose a noise-resistant LBP (NRLBP) to preserve the image local structures in the presence of noise. The small pixel difference is vulnerable to noise. Thus, we encode it as an uncertain state first, and then determine its value based on the other bits of the LBP code. It is widely accepted that most of the image local structures are represented by uniform codes and noise patterns most likely fall into the non-uniform codes. Therefore, we assign the values of the uncertain bits so as to form possible uniform codes. Thus, we develop an error-correction mechanism to recover the distorted image patterns. In addition, we find that some image patterns such as lines are not captured in uniform codes. Those line patterns may appear less frequently than uniform codes, but they represent a set of important local primitives for pattern recognition. Thus, we propose an extended noise-resistant LBP (ENRLBP) to capture line patterns. The proposed NRLBP and ENRLBP are more resistant to noise compared with LBP, LTP, and many other variants. On various applications, the proposed NRLBP and ENRLBP demonstrate superior performance to LBP/LTP variants.
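The uncertain-bit idea can be sketched directly: encode small pixel differences as unknowns, then search for an assignment that yields a uniform code (at most two 0/1 transitions around the circle). The threshold and the brute-force assignment below are an illustrative reading of the scheme, not the paper's exact implementation.

```python
def transitions(bits):
    # Number of 0/1 transitions in the circular code; uniform codes have <= 2
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))

def nrlbp(center, neighbors, t=2):
    # Small differences are noise-vulnerable: mark them uncertain ('X')
    states = []
    for p in neighbors:
        d = p - center
        states.append('X' if abs(d) < t else int(d >= 0))
    # Try all assignments of the uncertain bits; prefer a uniform code
    unknowns = [i for i, s in enumerate(states) if s == 'X']
    fallback = None
    for m in range(2 ** len(unknowns)):
        bits = list(states)
        for j, i in enumerate(unknowns):
            bits[i] = (m >> j) & 1
        if transitions(bits) <= 2:
            return bits
        if fallback is None:
            fallback = bits
    return fallback  # no uniform completion exists

code = nrlbp(100, [104, 103, 101, 99, 97, 96, 99, 101])
```

Here the two certain high bits and two certain low bits force a uniform completion, correcting whatever the noisy near-threshold pixels would have encoded.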

Journal ArticleDOI
TL;DR: In this paper, a simple time series method for bearing fault feature extraction using singular spectrum analysis (SSA) of the vibration signal is proposed; the method is easy to implement and the extracted fault feature is immune to noise.

Journal ArticleDOI
TL;DR: A quality analysis is presented using 1-week COMPASS measurements collected in Wuhan; the accuracy of GPS/COMPASS combination solutions is at least 20% better than that of GPS alone.
Abstract: China completed a basic COMPASS navigation network with three Geostationary and three Inclined Geosynchronous satellites in orbit in April 2011. The network has been able to provide preliminary positioning and navigation functions. We first present a quality analysis using 1-week COMPASS measurements collected in Wuhan. Satellite visibility and validity of measurements, carrier-to-noise density ratio and code noise are analyzed. The analysis of multipath combinations shows that the noise level of COMPASS code measurements is higher than that of GPS collected using the same receiver. Second, the results of positioning are presented and analyzed. For the standalone COMPASS solutions, an accuracy of 20 m can be achieved. An accuracy of 3.0 m for the vertical, 1.5 m for the North and about 0.6–0.8 m for the East component is obtained using dual-frequency code only measurements for a short baseline. More importantly, code and phase measurements of the short baseline are processed together to obtain precise relative positioning. Kinematic solutions are then compared with the ground truth. The precision of COMPASS only solutions is better than 2 cm for the North component and 4 cm for the vertical. The standard deviation of the East component is smaller than 1 cm, which is even better than that of the East component of GPS solutions. The accuracy of GPS/COMPASS combination solutions is at least 20% better than that of GPS alone. Furthermore, the geometry-based residuals of double differenced phase and code measurements are analyzed. The analysis shows that the noise level of un-differenced phase measurements is about 2–4 mm on both B1 and B2 frequencies. For the code measurements, the noise level is less than 0.45 m for B1 CA and about 0.35 m for B2 P code. Many of the COMPASS results presented are very promising and have been obtained for the first time.

Journal ArticleDOI
TL;DR: In this paper, the diffusion-based smoothing algorithm of Lind et al. is applied to body-water slam and wave-body impact problems, and it is found that temporal pressure noise can occur in these applications (while spatial noise is effectively eliminated).

Journal ArticleDOI
TL;DR: In this paper, a robust penalty function involving the sum of unsquared deviations and a relaxation that leads to a convex optimization problem is introduced. And the alternating direction method is applied to minimize the penalty function.
Abstract: The problem has found applications in computer vision, computer graphics, and sensor network localization, among others. Its least squares solution can be approximated by either spectral relaxation or semidefinite programming followed by a rounding procedure, analogous to the approximation algorithms of MAX-CUT. The contribution of this paper is three-fold: First, we introduce a robust penalty function involving the sum of unsquared deviations and derive a relaxation that leads to a convex optimization problem; Second, we apply the alternating direction method to minimize the penalty function; Finally, under a specific model of the measurement noise and for both complete and random measurement graphs, we prove that the rotations are exactly and stably recovered, exhibiting a phase transition behavior in terms of the proportion of noisy measurements. Numerical simulations confirm the phase transition behavior for our method as well as its improved accuracy compared to existing methods.

Journal ArticleDOI
TL;DR: The multiresolution structure and sparsity of wavelets are exploited by nonlocal dictionary learning in each wavelet decomposition level, in a manner that outperforms two state-of-the-art image denoising algorithms at higher noise levels.
Abstract: Exploiting the sparsity within representation models for images is critical for image denoising. The best currently available denoising methods take advantage of the sparsity from image self-similarity, pre-learned, and fixed representations. Most of these methods, however, still have difficulties in tackling high noise levels or noise models other than Gaussian. In this paper, the multiresolution structure and sparsity of wavelets are employed by nonlocal dictionary learning in each decomposition level of the wavelets. Experimental results show that our proposed method outperforms two state-of-the-art image denoising algorithms on higher noise levels. Furthermore, our approach is more adaptive to the less extensively researched uniform noise.

Journal ArticleDOI
21 Jul 2013
TL;DR: This work builds a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations, and compares the method with other anisotropic denoising algorithms to demonstrate that it is more robust and produces good results even in the presence of high noise.
Abstract: We present an algorithm for denoising triangulated models based on L0 minimization. Our method maximizes the flat regions of the model and gradually removes noise while preserving sharp features. As part of this process, we build a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations. We compare our method versus other anisotropic denoising algorithms and demonstrate that our method is more robust and produces good results even in the presence of high noise.

Journal ArticleDOI
TL;DR: This study presents GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data, and presents the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods.
Abstract: In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested.
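The core mechanism is easy to demonstrate on synthetic data: derive regressors from a "noise pool" of task-unrelated voxels via PCA, then add them to the GLM. The toy design, the shared drift, and the use of a single component (rather than cross-validated selection) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 120

# Toy task design (boxcar) and a structured fluctuation shared across voxels
design = (np.arange(T) % 20 < 10).astype(float)
drift = np.sin(2 * np.pi * np.arange(T) / 20 + 1.0)

# Task voxels carry the effect (beta = 2) plus the shared fluctuation;
# "noise pool" voxels carry only the fluctuation and white noise
task_vox = (2.0 * design[:, None] + drift[:, None]
            + 0.5 * rng.standard_normal((T, 10)))
noise_pool = drift[:, None] + 0.5 * rng.standard_normal((T, 30))

# Noise regressor: top principal component of the noise pool time series
centered = noise_pool - noise_pool.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
noise_reg = U[:, 0] * s[0]

def fit_task_beta(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

X_plain = np.column_stack([design, np.ones(T)])
X_denoise = np.column_stack([design, np.ones(T), noise_reg])
err_plain = np.mean([abs(fit_task_beta(task_vox[:, v], X_plain) - 2.0)
                     for v in range(10)])
err_denoise = np.mean([abs(fit_task_beta(task_vox[:, v], X_denoise) - 2.0)
                       for v in range(10)])
```

Because the structured fluctuation here overlaps the task frequency, omitting the noise regressor biases the betas; GLMdenoise additionally cross-validates how many components to include.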

Journal ArticleDOI
TL;DR: The findings suggest that anthropogenic noise has the potential to increase the risks of starvation and predation, and showcases that the behaviour of invertebrates, and not just vertebrates, is susceptible to the impact of this pervasive global pollutant.

Journal ArticleDOI
TL;DR: This Methods Article will provide a practical introduction to the techniques used to correct for the presence of physiological noise in time series fMRI data, and advice on modeling noise sources is given.
Abstract: The brainstem is directly involved in controlling blood pressure, respiration, sleep/wake cycles, pain modulation, motor and cardiac output. As such it is of significant basic science and clinical interest. However, the brainstem’s location close to major arteries and adjacent pulsatile cerebrospinal fluid filled spaces, means that it is difficult to reliably record functional magnetic resonance imaging (fMRI) data from. These physiological sources of noise generate time varying signals in fMRI data, which if left uncorrected can obscure signals of interest. In this Methods Article we will provide a practical introduction to the techniques used to correct for the presence of physiological noise in time series fMRI data. Techniques based on independent measurement of the cardiac and respiratory cycles, such as retrospective image correction (RETROICOR, Glover et al., 2000), will be described and their application and limitations discussed. The impact of a physiological noise model, implemented in the framework of the general linear model, on resting fMRI data acquired at 3T and 7T is presented. Data-driven approaches such as independent component analysis (ICA) are described. MR acquisition strategies that attempt to either minimise the influence of physiological fluctuations on recorded fMRI data, or provide additional information to correct for their presence, will be mentioned. General advice is given on modelling noise sources and on their effects on statistical inference via loss of degrees of freedom and non-orthogonality of regressors. Lastly, different strategies for assessing the benefit of different approaches to physiological noise modelling are presented.
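The RETROICOR-style regressors mentioned above are just a low-order Fourier expansion of the measured cardiac (or respiratory) phase at each acquisition time. The constant heart rate and phase construction below are a simplifying assumption; in practice the phase comes from a pulse oximeter or respiratory belt trace.

```python
import numpy as np

def retroicor_regressors(phase, order=2):
    # Low-order Fourier expansion of physiological phase: one sin/cos
    # pair per harmonic, as in RETROICOR (Glover et al., 2000)
    regs = []
    for m in range(1, order + 1):
        regs.append(np.sin(m * phase))
        regs.append(np.cos(m * phase))
    return np.column_stack(regs)

# Hypothetical cardiac phase at each fMRI volume (toy constant heart rate)
TR, n_vols, heart_rate = 2.0, 100, 1.1       # seconds, volumes, Hz
t = np.arange(n_vols) * TR
cardiac_phase = 2 * np.pi * ((heart_rate * t) % 1.0)
X = retroicor_regressors(cardiac_phase, order=2)
```

The resulting columns of `X` are entered as nuisance regressors in the GLM, absorbing signal locked to the cardiac cycle.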

Journal ArticleDOI
TL;DR: Noise reduction can reduce the adverse effect of noise on memory for speech for persons with good working memory capacity and it is argued that the mechanism behind this is faster word identification that enhances encoding into working memory.
Abstract: Objectives: It has been shown that noise reduction algorithms can reduce the negative effects of noise on memory processing in persons with normal hearing. The objective of the present study was to ...

Journal ArticleDOI
TL;DR: The nonlinear-gain observer presented in this paper is shown to surpass the system performance achieved when using comparable linear-gain observers, and the proof argues boundedness and ultimate boundedness of the closed-loop system under the proposed output feedback.
Abstract: We address the problem of state estimation for a class of nonlinear systems with measurement noise in the context of feedback control. It is well-known that high-gain observers are robust against model uncertainty and disturbances, but sensitive to measurement noise when implemented in a feedback loop. This work presents the benefits of a nonlinear-gain structure in the innovation process of the high-gain observer, in order to overcome the tradeoff between fast state reconstruction and measurement noise attenuation. The goal is to generate a larger observer gain during the transient response than in the steady-state response. Thus, by reducing the observer gain after achieving satisfactory state estimates, the effect of noise on the steady-state performance is reduced. Moreover, the nonlinear-gain observer presented in this paper is shown to surpass the system performance achieved when using comparable linear-gain observers. The proof argues boundedness and ultimate boundedness of the closed-loop system under the proposed output feedback.
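The transient/steady-state trade-off described in the abstract can be illustrated on a double integrator: run a high-gain observer with a small ε while the innovation is large, then enlarge ε once the estimates have settled, so measurement noise is no longer amplified. A hand-rolled Euler sketch (the plant, gain values, and switching threshold are illustrative assumptions, not the paper's observer design):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 5000                  # 5 s of simulation

x = np.array([1.0, 0.0])            # plant: double integrator, x1' = x2, x2' = u
xhat = np.zeros(2)                  # observer state
eps_fast, eps_slow = 0.01, 0.1      # smaller eps = higher observer gain
err = []

for _ in range(T):
    u = -2.0 * xhat[0] - 2.0 * xhat[1]          # output feedback from estimates
    y = x[0] + 0.01 * rng.standard_normal()     # noisy position measurement
    innov = y - xhat[0]
    # nonlinear gain: high gain while the innovation is large (transient),
    # reduced gain once estimates settle, to attenuate measurement noise
    eps = eps_fast if abs(innov) > 0.1 else eps_slow
    xhat = xhat + dt * np.array([xhat[1] + (2 / eps) * innov,
                                 u + (1 / eps ** 2) * innov])
    x = x + dt * np.array([x[1], u])
    err.append(np.abs(x - xhat).max())

err = np.array(err)
```

The early samples of `err` show the familiar peaking of high-gain observers during the fast transient, while the reduced steady-state gain keeps the noise-driven estimation error small afterwards.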

Journal ArticleDOI
TL;DR: It turns out that the local delay behaves rather differently in the two cases of high mobility and no mobility, and the low- and high-rate asymptotic behavior of the minimum achievable delay in each case is provided.
Abstract: Communication between two neighboring nodes is a very basic operation in wireless networks. Yet very little research has focused on the local delay in networks with randomly placed nodes, defined as the mean time it takes a node to connect to its nearest neighbor. We study this problem for Poisson networks, first considering interference only, then noise only, and finally and briefly, interference plus noise. In the noiseless case, we analyze four different types of nearest-neighbor communication and compare the extreme cases of high mobility, where a new Poisson process is drawn in each time slot, and no mobility, where only a single realization exists and nodes stay put forever. It turns out that the local delay behaves rather differently in the two cases. We also provide the low- and high-rate asymptotic behavior of the minimum achievable delay in each case. In the cases with noise, power control is essential to keep the delay finite, and randomized power control can drastically reduce the required (mean) power for finite local delay.
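The "typical node to nearest neighbor" geometry underlying the local-delay analysis is easy to probe numerically: for a Poisson point process of intensity λ in the plane, the distance from a typical node to its nearest neighbor is Rayleigh distributed with mean 1/(2√λ). A Monte Carlo check (the window size and trial count are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, side, trials = 1.0, 10.0, 2000   # intensity, window side, realizations

dists = []
for _ in range(trials):
    n = rng.poisson(lam * side ** 2)             # Poisson number of nodes
    pts = rng.uniform(-side / 2, side / 2, size=(n, 2))
    # distance from a "typical" node at the origin to its nearest neighbour
    dists.append(np.sqrt((pts ** 2).sum(axis=1)).min())

mean_nn = np.mean(dists)
theory = 1.0 / (2.0 * np.sqrt(lam))              # Rayleigh mean NN distance
```

In the no-mobility case studied in the paper this distance is drawn once and fixed forever, which is what makes its behavior differ so sharply from the high-mobility case, where a fresh realization is drawn each slot.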

Proceedings Article
05 Dec 2013
TL;DR: In this paper, the authors propose a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise.
Abstract: Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
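The corrupt-then-reconstruct training loop that the paper generalizes can be sketched for the classical special case it starts from: Gaussian corruption, squared reconstruction error, continuous data, here with a tiny tied-weight autoencoder (the architecture, hyperparameters, and circle-shaped toy data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: points on the unit circle (a 1-D manifold embedded in 2-D)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1000)
X = np.column_stack([np.cos(theta), np.sin(theta)])
N, n_in, n_hid = len(X), 2, 16

W = 0.1 * rng.standard_normal((n_in, n_hid))     # tied encoder/decoder weights
b_h, b_o = np.zeros(n_hid), np.zeros(n_in)
sigma, lr = 0.3, 0.1                             # corruption level, step size

def loss(W, b_h, b_o):
    H = np.tanh(X @ W + b_h)
    return np.mean((H @ W.T + b_o - X) ** 2)

loss_before = loss(W, b_h, b_o)
for _ in range(2000):
    Xc = X + sigma * rng.standard_normal(X.shape)   # corrupt the input ...
    H = np.tanh(Xc @ W + b_h)
    R = H @ W.T + b_o
    err = (R - X) / N                               # ... reconstruct the CLEAN input
    dH = (err @ W) * (1.0 - H ** 2)                 # backprop through tanh
    W -= lr * (Xc.T @ dH + err.T @ H)               # encoder and decoder paths (tied)
    b_h -= lr * dH.sum(axis=0)
    b_o -= lr * err.sum(axis=0)
loss_after = loss(W, b_h, b_o)
```

Training on (corrupted, clean) pairs makes the reconstruction point from corrupted inputs back toward regions of higher data density, which is the property that the sampling procedures mentioned in the abstract exploit.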