
Showing papers on "Noise measurement published in 2016"


Journal ArticleDOI
TL;DR: It is proved that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample.
Abstract: In this paper, we study a classification problem in which sample labels are randomly corrupted. In this scenario, there is an unobservable sample with noise-free labels. However, before being observed, the true labels are independently flipped with a probability $\rho \in [0,0.5)$ , and the random label noise can be class-conditional. Here, we address two fundamental problems raised by this scenario. The first is how to best use the abundant surrogate loss functions designed for the traditional classification problem when there is label noise. We prove that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample. The other is the open problem of how to obtain the noise rate $\rho$ . We show that the rate is upper bounded by the conditional probability $P(\hat{Y}|X)$ of the noisy sample. Consequently, the rate can be estimated, because the upper bound can be easily reached in classification problems. Experimental results on synthetic and real datasets confirm the efficiency of our methods.

744 citations
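The reweighting idea above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes binary labels in {-1, +1}, known class-conditional flip rates, and an estimate of P(noisy label | x); the function and variable names are hypothetical.

```python
import numpy as np

def importance_weights(p_noisy, y_noisy, rho_pos, rho_neg):
    """Importance weights for class-conditional label noise.

    p_noisy : estimated P(noisy label = y_i | x_i) for each sample.
    y_noisy : observed labels in {-1, +1}.
    rho_pos : flip probability for true label +1; rho_neg : for true label -1.
    """
    # The flip rate of the *opposite* class corrects the numerator.
    rho_opp = np.where(y_noisy == 1, rho_neg, rho_pos)
    w = (p_noisy - rho_opp) / ((1.0 - rho_pos - rho_neg) * p_noisy)
    return np.clip(w, 0.0, None)  # negative weights signal estimation error

# Sanity check: with zero noise rates every weight is exactly 1.
w = importance_weights(np.array([0.9, 0.7]), np.array([1, -1]), 0.0, 0.0)
```

With both flip rates at zero the weighted loss reduces to the ordinary empirical risk, which is the consistency property the abstract describes.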


Proceedings ArticleDOI
27 Jun 2016
TL;DR: A data-driven approach for determining the parameters of the new noise model is introduced, along with its application to image denoising. The experiments show that the noise model represents the noise in regular JPEG images more accurately than previous models and is advantageous in image denoising.
Abstract: Modelling and analyzing noise in images is a fundamental task in many computer vision systems. Traditionally, noise has been modelled per color channel assuming that the color channels are independent. Although the color channels can be considered as mutually independent in camera RAW images, signals from different color channels get mixed during the imaging process inside the camera due to gamut mapping, tone-mapping, and compression. We show the influence of the in-camera imaging pipeline on noise and propose a new noise model in the 3D RGB space that accounts for the color channel mix-ups. A data-driven approach for determining the parameters of the new noise model is introduced as well as its application to image denoising. The experiments show that our noise model represents the noise in regular JPEG images more accurately compared to the previous models and is advantageous in image denoising.

197 citations


Proceedings ArticleDOI
17 Jul 2016
TL;DR: In this article, the probability distribution of measurement noise and its typical power are identified for voltage, current and frequency data recorded at three different voltage levels, and the PMU noise quantification can help in the generation of experimental PMU data in close conformity with field PMU data.
Abstract: Data recorded by Phasor Measurement Units (PMUs) contains noise. This paper characterizes and quantifies this noise for voltage, current and frequency data recorded at three different voltage levels. The probability distribution of the measurement noise and its typical power are identified. The PMU noise quantification can help in the generation of experimental PMU data in close conformity with field PMU data, bad data removal, missing data prediction, and effective design of statistical filters for noise rejection.

193 citations
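A first-cut version of such noise characterization can be done by detrending a measurement stream and examining the residual. The sketch below is a generic illustration, not the paper's procedure; the polynomial-trend assumption and the function name are mine.

```python
import numpy as np

def characterize_noise(samples, poly_deg=3):
    """Split a measurement stream into a slow trend and residual noise.

    The low-order polynomial stands in for the 'true' signal; the residual
    is treated as measurement noise and summarized by its mean, standard
    deviation, and the implied SNR in dB.
    """
    t = np.linspace(-1.0, 1.0, len(samples))   # scaled for conditioning
    trend = np.polyval(np.polyfit(t, samples, poly_deg), t)
    noise = samples - trend
    snr_db = 10.0 * np.log10(np.mean(trend**2) / np.mean(noise**2))
    return noise.mean(), noise.std(ddof=1), snr_db
```

The residual's histogram can then be compared against candidate distributions (e.g. Gaussian) as the paper does for field PMU records.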


Journal ArticleDOI
TL;DR: The new method is applied to continuous wave electron spin resonance spectra and it is found that it increases the signal-to-noise ratio (SNR) by more than 32 dB without distorting the signal, whereas standard denoising methods improve the SNR by less than 10 dB and with some distortion.
Abstract: A new method is presented to denoise 1-D experimental signals using wavelet transforms. Although the state-of-the-art wavelet denoising methods perform better than other denoising methods, they are not very effective for experimental signals. Unlike images and other signals, experimental signals in chemical and biophysical applications, for example, are less tolerant to signal distortion and under-denoising caused by the standard wavelet denoising methods. The new method: 1) provides a procedure to select the number of decomposition levels to denoise; 2) uses a new formula to calculate noise thresholds that does not require noise estimation; 3) uses separate noise thresholds for positive and negative wavelet coefficients; 4) applies denoising to the approximation component; and 5) allows the flexibility to adjust the noise thresholds. The new method is applied to continuous wave electron spin resonance spectra and it is found that it increases the signal-to-noise ratio (SNR) by more than 32 dB without distorting the signal, whereas standard denoising methods improve the SNR by less than 10 dB and with some distortion. In addition, it is more than six times faster than the standard methods.

178 citations
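Item 3 of the list above, separate thresholds for positive and negative wavelet coefficients, is easy to illustrate with a one-level Haar transform. This is a minimal sketch, not the published method, which also selects decomposition levels and computes thresholds without noise estimation.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform; len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, t_pos, t_neg):
    """Soft-threshold the detail coefficients, with separate thresholds
    for positive and negative coefficients (illustrative only)."""
    a, d = haar_dwt(x)
    d = np.where(d > 0, np.maximum(d - t_pos, 0.0),
                 np.minimum(d + t_neg, 0.0))
    return haar_idwt(a, d)
```

Asymmetric thresholds matter when the noise itself is asymmetric around the baseline, as can happen in experimental spectra.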


Journal ArticleDOI
TL;DR: This paper proposes a modified Zhang neural network (MZNN) model for the solution of TVQP and shows that, without measurement noise, the proposed MZNN model globally converges to the exact real-time solution of the TVQP problem in an exponential manner and that, in the presence of measurement noises, the proposed model performs satisfactorily.
Abstract: For quadratic programming (QP), it is usually assumed that the solving process is free of measurement noises or that the denoising has been conducted before the computation. However, time is precious for time-varying QP (TVQP) in practice. Preprocessing for denoising may consume extra time, and consequently violates real-time requirements. Therefore, a model with inherent noise tolerance is urgently needed to solve TVQP problems in real time. In this paper, we make progress along this direction by proposing a modified Zhang neural network (MZNN) model for the solution of TVQP. The original Zhang neural network model and the gradient neural network model are employed for comparisons with the MZNN model. In addition, theoretical analyses show that, without measurement noise, the proposed MZNN model globally converges to the exact real-time solution of the TVQP problem in an exponential manner and that, in the presence of measurement noises, the proposed MZNN model has a satisfactory performance. Finally, two illustrative simulation examples as well as a physical experiment are provided and analyzed to substantiate the efficacy and superiority of the proposed MZNN model for TVQP problem solving.

177 citations


Journal ArticleDOI
TL;DR: This paper derives the closed form of the Fisher information matrix with respect to sensor selection variables that is valid for an arbitrary noise correlation regime and develops both a convex relaxation approach and a greedy algorithm to find near-optimal solutions.
Abstract: In this paper, we consider the problem of sensor selection for parameter estimation with correlated measurement noise. We seek optimal sensor activations by formulating an optimization problem, in which the estimation error, given by the trace of the inverse of the Bayesian Fisher information matrix, is minimized subject to energy constraints. Fisher information has been widely used as an effective sensor selection criterion. However, existing information-based sensor selection methods are limited to the case of uncorrelated noise or weakly correlated noise due to the use of approximate metrics. By contrast, here we derive the closed form of the Fisher information matrix with respect to sensor selection variables that is valid for an arbitrary noise correlation regime and develop both a convex relaxation approach and a greedy algorithm to find near-optimal solutions. We further extend our framework of sensor selection to solve the problem of sensor scheduling, where a greedy algorithm is proposed to determine non-myopic (multi-time step ahead) sensor schedules. Lastly, numerical results are provided to illustrate the effectiveness of our approach, and to reveal the effect of noise correlation on estimation performance.

172 citations
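A greedy selection loop of the kind mentioned above can be sketched as follows. Here the objective is the log-determinant of a Bayesian Fisher information with an identity prior, a common surrogate; the abstract's exact criterion is the trace of the inverse FIM, so treat this as illustrative.

```python
import numpy as np

def greedy_select(H, R, k):
    """Greedily activate k sensors to maximize log det of the information.

    H : (m, p) observation matrix, one row per candidate sensor.
    R : (m, m) noise covariance; off-diagonal terms model correlation.
    The information of a subset S is I + H_S^T R_S^{-1} H_S (identity prior).
    """
    m, p = H.shape
    chosen = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(m):
            if j in chosen:
                continue
            S = chosen + [j]
            Hs, Rs = H[S], R[np.ix_(S, S)]
            info = np.eye(p) + Hs.T @ np.linalg.solve(Rs, Hs)
            val = np.linalg.slogdet(info)[1]
            if val > best_val:
                best_j, best_val = j, val
        chosen.append(best_j)
    return chosen
```

Because the noise covariance of the selected subset is inverted jointly, the sketch handles correlated noise rather than assuming a diagonal R.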


Proceedings ArticleDOI
01 Dec 2016
TL;DR: Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.
Abstract: Large datasets often have unreliable labels—such as those obtained from Amazon's Mechanical Turk or social media platforms—and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. Then, we train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled) dataset. The augmented model is underdetermined, so in order to encourage the learning of a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.

163 citations
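The core of the noise-model layer is a matrix that maps clean class probabilities to noisy-label probabilities. A forward-pass sketch in plain NumPy (the noise matrix T is fixed here for illustration; in the paper it is learned jointly with the network under dropout regularization):

```python
import numpy as np

def noisy_forward(logits, T):
    """Map clean class probabilities through a label-noise matrix.

    T[i, j] = P(observed label j | true label i). In the paper T is a
    learnable layer regularized with dropout; here it is fixed.
    """
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_clean = z / z.sum(axis=1, keepdims=True)
    return p_clean @ T   # probabilities over the *noisy* labels

T = np.array([[0.8, 0.2],
              [0.3, 0.7]])
p_noisy = noisy_forward(np.array([[5.0, -5.0]]), T)
```

Training minimizes cross-entropy between p_noisy and the observed (possibly corrupted) labels, so the base network is free to learn clean predictions.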


Proceedings ArticleDOI
01 Jun 2016
TL;DR: This work introduces triplets of patches with geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for classification in a patch-based framework that only requires object bounding boxes.
Abstract: Fine-grained classification involves distinguishing between similar sub-categories based on subtle differences in highly localized regions; therefore, accurate localization of discriminative regions remains a major challenge. We describe a patch-based framework to address this problem. We introduce triplets of patches with geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for classification. The resulting approach only requires object bounding boxes. Its effectiveness is demonstrated using four publicly available fine-grained datasets, on which it outperforms or achieves comparable performance to the state-of-the-art in classification.

131 citations


Journal ArticleDOI
TL;DR: Simulation results show that compared with conventional methods, the proposed robust scheme achieves much better bit error rate performance along desired directions for a given signal-to-noise ratio (SNR).
Abstract: Recently, directional modulation has become an active research area in wireless communications due to its security. Unlike existing research work, we consider a multi-beam directional modulation (MBDM) scenario with imperfect desired direction knowledge. In such a setting, a robust synthesis scheme is proposed for MBDM in broadcasting systems. In order to implement the secure transmission of a confidential message, the beamforming vector of the confidential message is designed to preserve as much of its power as possible in the desired directions by minimizing its leakage to the eavesdropper directions, while the projection matrix of artificial noise (AN) is designed to minimize the effect on the desired directions and force AN toward the eavesdropper directions by maximizing the average receive signal-to-artificial-noise ratio at the desired receivers. Simulation results show that compared with conventional methods, the proposed robust scheme achieves much better bit error rate performance along desired directions for a given signal-to-noise ratio (SNR). From the secrecy-rate aspect, the proposed scheme performs better than conventional methods for almost all SNR regions. In particular, in the medium and high SNR regions, the rate improvement of the proposed scheme over conventional methods is significant.

116 citations



Journal ArticleDOI
TL;DR: A novel method to suppress low-frequency noise in microseismic data based on mathematical morphology theory, which aims to distinguish useful signals from noise according to subtle differences in their waveforms.
Abstract: The frequency of microseismic data is higher than that of conventional seismic data. The range of effective frequency is usually from 100 to 500 Hz, and low-frequency noise is a common disturbance in downhole monitoring. Conventional signal analysis techniques, such as band-pass filters, have their limitations in microseismic data processing when the useful signals and noise share the same frequency band. We have developed a novel method to suppress low-frequency noise in microseismic data based on mathematical morphology theory that aims at distinguishing useful signals from noise according to subtle differences in their waveforms. By choosing suitable structure elements, we have extracted low-frequency noise from an original data set. We first developed the fundamental principle of mathematical morphology and the formulation of our approach. Then, we used a synthetic data example that was composed of a Ricker wavelet and low-frequency noise to test the feasibility and performance of the proposed approach...
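Flat-structuring-element morphology on a 1-D trace reduces to running minima and maxima. The sketch below shows how an opening/closing average yields a low-frequency baseline estimate; it illustrates the general tool, not the paper's specific structure-element choices.

```python
import numpy as np

def erode(x, size):
    """Erosion with a flat structuring element: running minimum."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + size].min() for i in range(len(x))])

def dilate(x, size):
    """Dilation with a flat structuring element: running maximum."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + size].max() for i in range(len(x))])

def morph_baseline(x, size):
    """Average of opening and closing: a low-frequency baseline estimate.
    Subtracting it keeps features narrower than the structuring element."""
    opening = dilate(erode(x, size), size)   # removes narrow positive features
    closing = erode(dilate(x, size), size)   # removes narrow negative features
    return (opening + closing) / 2.0
```

Use an odd `size`; the width of the structuring element is what distinguishes the broad low-frequency noise from the narrower wavelets.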

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This work presents a simple and effective method for removing noise and outliers from point sets generated by image-based 3D reconstruction techniques, which allows standard surface reconstruction methods to perform less smoothing and thus achieve higher quality surfaces with more features.
Abstract: Point sets generated by image-based 3D reconstruction techniques are often much noisier than those obtained using active techniques like laser scanning. Therefore, they pose greater challenges to the subsequent surface reconstruction (meshing) stage. We present a simple and effective method for removing noise and outliers from such point sets. Our algorithm uses the input images and corresponding depth maps to remove pixels which are geometrically or photometrically inconsistent with the colored surface implied by the input. This allows standard surface reconstruction methods (such as Poisson surface reconstruction) to perform less smoothing and thus achieve higher quality surfaces with more features. Our algorithm is efficient, easy to implement, and robust to varying amounts of noise. We demonstrate the benefits of our algorithm in combination with a variety of state-of-the-art depth and surface reconstruction methods.

Journal ArticleDOI
TL;DR: A noise reduction DCSK system as a solution to reduce the noise variance present in the received signal in order to improve performance, and computer simulation results are compared to relevant theoretical findings to validate the accuracy of the proposed system and demonstrate the performance improvement.
Abstract: One of the major drawbacks of the conventional differential chaos shift keying (DCSK) system is the addition of channel noise to both the reference signal and the data-bearing signal, which deteriorates its performance. In this brief, we propose a noise reduction DCSK system as a solution to reduce the noise variance present in the received signal in order to improve performance. For each transmitted bit, instead of generating $\beta$ different chaotic samples to be used as a reference sequence, $\beta/P$ chaotic samples are generated and then duplicated $P$ times in the signal. At the receiver, $P$ identical samples are averaged, and the resultant filtered signal is correlated to its time-delayed replica to recover the transmitted bit. This averaging operation of size $P$ reduces the noise variance and enhances the performance of the system. Theoretical bit error rate expressions for additive white Gaussian noise and multipath fading channels are analytically studied and derived. Computer simulation results are compared to relevant theoretical findings to validate the accuracy of the proposed system and to demonstrate the performance improvement compared to the conventional DCSK, the improved DCSK, and the differential-phase-shift-keying systems.
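The variance-reduction mechanism above is easy to verify numerically: duplicating each of the β/P samples P times and averaging the P noisy copies at the receiver divides the noise variance by P. A sketch, with Gaussian samples standing in for the chaotic sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, P, sigma = 10_000, 4, 1.0

chaos = rng.standard_normal(beta // P)       # beta/P samples instead of beta
frame = np.repeat(chaos, P)                  # each sample duplicated P times
rx = frame + sigma * rng.standard_normal(beta)

# Receiver: average the P noisy copies of each sample.
filtered = rx.reshape(-1, P).mean(axis=1)
residual_var = np.var(filtered - chaos)      # approximately sigma^2 / P
```

The residual variance comes out close to σ²/P, which is the gain that motivates the averaging operation in the system.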

Journal ArticleDOI
TL;DR: A robust Gaussian approximate fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises is proposed and results show the efficiency and superiority of the proposed smoother as compared with existing smoothers.
Abstract: In this letter, a robust Gaussian approximate (GA) fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises is proposed. The process and measurement noises are modeled as stationary Student’s t distributions, and the state trajectory and noise parameters are inferred approximately based on the variational Bayesian (VB) approach. Simulation results show the efficiency and superiority of the proposed smoother as compared with existing smoothers.

Journal ArticleDOI
TL;DR: A theoretical analysis of the BLCMV beamformer is presented and several decompositions are introduced that reveal its capabilities in terms of interference and noise reduction, while controlling the binaural cues of the desired and the interfering sources.
Abstract: The recently proposed binaural linearly constrained minimum variance (BLCMV) beamformer is an extension of the well-known binaural minimum variance distortionless response (MVDR) beamformer, imposing constraints for both the desired and the interfering sources. Besides its capabilities to reduce interference and noise, it also makes it possible to preserve the binaural cues of both the desired and interfering sources, hence making it particularly suitable for binaural hearing aid applications. In this paper, a theoretical analysis of the BLCMV beamformer is presented. In order to gain insights into the performance of the BLCMV beamformer, several decompositions are introduced that reveal its capabilities in terms of interference and noise reduction, while controlling the binaural cues of the desired and the interfering sources. When setting the parameters of the BLCMV beamformer, various considerations need to be taken into account, e.g. based on the amount of interference and noise reduction and the presence of estimation errors of the required relative transfer functions (RTFs). Analytical expressions for the performance of the BLCMV beamformer in terms of noise reduction, interference reduction, and cue preservation are derived. Comprehensive simulation experiments, using measured acoustic transfer functions as well as real recordings on binaural hearing aids, demonstrate the capabilities of the BLCMV beamformer in various noise environments.

Proceedings ArticleDOI
06 Jul 2016
TL;DR: In this article, the authors consider a linear time-variant system that is corrupted with process and measurement noise, and study how the selection of its sensors affects the estimation error of the corresponding Kalman filter over a finite observation interval.
Abstract: In this paper, we focus on sensor placement in linear dynamic estimation, where the objective is to place a small number of sensors in a system of interdependent states so as to design an estimator with a desired estimation performance. In particular, we consider a linear time-variant system that is corrupted with process and measurement noise, and study how the selection of its sensors affects the estimation error of the corresponding Kalman filter over a finite observation interval. Our contributions are threefold: First, we prove that the minimum mean square error of the Kalman filter decreases only linearly as the number of sensors increases. That is, adding extra sensors so as to reduce this estimation error is ineffective, a fundamental design limit. Similarly, we prove that the number of sensors grows linearly with the system's size for fixed minimum mean square error and number of output measurements over an observation interval; this is another fundamental limit, especially for systems where the system's size is large. Second, we prove that the log det of the error covariance of the Kalman filter, which captures the volume of the corresponding confidence ellipsoid, with respect to the system's initial condition and process noise is a supermodular and non-increasing set function in the choice of the sensor set. Therefore, it exhibits the diminishing returns property. Third, we provide an efficient approximation algorithm that selects a small number of sensors so as to optimize the Kalman filter with respect to this estimation error; worst-case performance guarantees for this algorithm are provided as well.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: A novel blind image denoising algorithm that can cope with real-world noisy images even when the noise model is not provided, realized by modeling image noise with a mixture of Gaussians (MoG), which can approximate a large variety of continuous distributions.
Abstract: Traditional image denoising algorithms typically assume homogeneous white Gaussian noise. However, the noise in real images can be much more complex empirically. This paper addresses this problem and proposes a novel blind image denoising algorithm which can cope with real-world noisy images even when the noise model is not provided. It is realized by modeling image noise with a mixture of Gaussians (MoG), which can approximate a large variety of continuous distributions. As the number of components of the MoG is unknown in practice, this work adopts a Bayesian nonparametric technique and proposes a novel low-rank MoG filter (LR-MoG) to recover clean signals (patches) from noisy ones contaminated by MoG noise. Based on LR-MoG, a novel blind image denoising approach is developed. To test the proposed method, this study conducts extensive experiments on synthetic and real images. Our method achieves state-of-the-art performance consistently.
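The MoG idea can be illustrated in one dimension with a two-component, zero-mean mixture fitted by EM. This scalar toy is far from the paper's low-rank MoG on image patches, but it shows why a mixture can capture noise that a single Gaussian cannot, e.g. mostly small noise with occasional large deviations.

```python
import numpy as np

def em_mog2(x, iters=200):
    """EM for a zero-mean, two-component 1-D Gaussian mixture.

    Returns (pi, s1, s2): weight of the first component and the two
    standard deviations. Initialization keeps s1 as the narrow component.
    """
    pi, s1, s2 = 0.5, 0.5 * np.std(x), 2.0 * np.std(x)
    for _ in range(iters):
        n1 = pi * np.exp(-x**2 / (2.0 * s1**2)) / s1
        n2 = (1.0 - pi) * np.exp(-x**2 / (2.0 * s2**2)) / s2
        r = n1 / (n1 + n2)                       # E-step: responsibilities
        pi = r.mean()                            # M-step: weight and scales
        s1 = np.sqrt(np.sum(r * x**2) / np.sum(r))
        s2 = np.sqrt(np.sum((1.0 - r) * x**2) / np.sum(1.0 - r))
    return pi, s1, s2
```

In the paper the number of components is not fixed at two; a Bayesian nonparametric prior lets the data decide it.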

Proceedings ArticleDOI
W. Williem, In Kyu Park
01 Jun 2016
TL;DR: The proposed method is more robust to occlusion and less sensitive to noise, and outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.
Abstract: Light field depth estimation is an essential part of many light field applications. Numerous algorithms have been developed using various light field characteristics. However, conventional methods fail when handling noisy scene with occlusion. To remedy this problem, we present a light field depth estimation method which is more robust to occlusion and less sensitive to noise. Novel data costs using angular entropy metric and adaptive defocus response are introduced. Integration of both data costs improves the occlusion and noise invariant capability significantly. Cost volume filtering and graph cut optimization are utilized to improve the accuracy of the depth map. Experimental results confirm that the proposed method is robust and achieves high quality depth maps in various scenes. The proposed method outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.

Journal ArticleDOI
TL;DR: In this article, a noise map of the Isparta city center and its periphery was produced using inverse distance weighted (IDW), Kriging, and multiquadric interpolation methods with different parameters and four grid resolutions.

Journal ArticleDOI
TL;DR: Performance results based on extensive simulations and collected data sets demonstrate that the proposed receivers effectively mitigate impulsive noise for UWA OFDM systems.
Abstract: Mitigation of impulsive noise has been extensively studied in wireline, wireless radio, and powerline communication systems. However, its study in underwater acoustic (UWA) systems is quite limited. This paper considers impulsive noise mitigation for underwater orthogonal frequency-division multiplexing (OFDM) systems, where the system performance is severely impacted by the channel Doppler effect. We propose a practical approach based on a least squares formulation: First, the positions of impulsive noise are determined in the time domain based on the signal amplitude, and second, impulsive noise samples are jointly estimated with the Doppler shift based on the measurements of the OFDM null subcarriers. Based on the available channel estimate and tentative data symbol decisions, an iterative receiver is further developed. Data sets have been acquired in a recent sea experiment near Kaohsiung city, Taiwan, in May 2013. Performance results based on extensive simulations and collected data sets demonstrate that the proposed receivers effectively mitigate impulsive noise for UWA OFDM systems.
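The first stage described above, locating impulses in the time domain from the signal amplitude, can be sketched with a robust threshold. The MAD-based scale estimate is my choice for illustration, not necessarily the paper's detector:

```python
import numpy as np

def detect_impulses(x, k=3.0):
    """Flag samples whose magnitude exceeds k robust standard deviations.

    The median absolute deviation (MAD) gives a scale estimate that the
    impulses themselves cannot inflate, unlike the plain sample std.
    """
    mad = np.median(np.abs(x - np.median(x)))
    sigma = 1.4826 * mad                     # MAD -> std for Gaussian data
    return np.abs(x) > k * sigma
```

The flagged positions then feed the second stage, where the impulse amplitudes are estimated jointly with the Doppler shift from the null subcarriers.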

Journal ArticleDOI
TL;DR: The Allan deviation plot is used to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor and to predict its ultimate detection limit.
Abstract: We report here on the use of the Allan deviation plot to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor. The Allan plot provides information about the optimum averaging time for the QEPAS signal and allows the prediction of its ultimate detection limit. The Allan deviation can also be used to determine the main sources of noise coming from the individual components of the sensor. Quartz tuning fork thermal noise dominates for integration times up to 275 s, whereas at longer averaging times, the main contribution to the sensor noise originates from laser power instabilities.
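The non-overlapping Allan deviation is straightforward to compute from a sampled signal. A minimal sketch: for white noise the curve falls as 1/√τ, and the minimum of the curve marks the optimum averaging time mentioned above.

```python
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of a uniformly sampled signal.

    sigma^2(tau) = 0.5 * mean((ybar_{k+1} - ybar_k)^2), where the ybar_k
    are consecutive averages over windows of length tau = m * dt.
    """
    out = []
    for tau in taus:
        m = int(round(tau / dt))
        n = len(y) // m
        means = y[:n * m].reshape(n, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)
```

Departures from the 1/√τ slope at long averaging times reveal drift sources, such as the laser power instabilities identified in the paper.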

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method to dynamically determine coherent generators and electrical areas of an interconnected power system based on dynamic frequency deviations of both generator and non-generator buses, with respect to system nominal frequency.
Abstract: This paper presents a new method to dynamically determine coherent generators and electrical areas of an interconnected power system. The proposed method is based on dynamic frequency deviations of both generator and non-generator buses, with respect to the system nominal frequency. The proposed method 1) largely overcomes the limitations of the existing model-based and measurement-based coherency identification methods, 2) enables dynamic tracking of the coherency time-evolution, and 3) provides noise immunity which is imperative in practical implementation. The method also promises the potential for real-time coherency calculation. The proposed method is applied to the 16-machine/68-bus NPCC system based on time-domain simulation studies in the PSS/E platform and the results are compared with those of the classical slow-coherency (model-based) method and a measurement-based method.

Journal ArticleDOI
TL;DR: This work introduces a general framework for estimation of a circular state based on different circular distributions, specifically the wrapped normal (WN) distribution and the von Mises distribution, and proposes an estimation method for circular systems with nonlinear system and measurement functions.
Abstract: To facilitate recursive state estimation in the circular domain based on circular statistics, we introduce a general framework for estimation of a circular state based on different circular distributions. Specifically, we consider the wrapped normal (WN) distribution and the von Mises distribution. We propose an estimation method for circular systems with nonlinear system and measurement functions. This is achieved by relying on efficient deterministic sampling techniques. Furthermore, we show how the calculations can be simplified in a variety of important special cases, such as systems with additive noise, as well as identity system or measurement functions, which are illustrated using an example from aeronautics. We introduce several novel key components, particularly a distribution-free prediction algorithm, a new and superior formula for the multiplication of WN densities, and the ability to deal with nonadditive system noise. All proposed methods are thoroughly evaluated and compared with several state-of-the-art approaches.
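Two of the ingredients, the circular mean and the fusion of two wrapped-normal estimates, can be sketched as follows. The fusion rule below is a small-sigma Gaussian approximation I use for illustration; the paper derives a more accurate multiplication formula for WN densities.

```python
import numpy as np

def circular_mean(angles):
    """Mean direction: the argument of the complex resultant vector."""
    return np.angle(np.mean(np.exp(1j * np.asarray(angles))))

def wn_multiply(mu1, s1, mu2, s2):
    """Fuse two wrapped-normal estimates (mu, sigma).

    Small-sigma approximation: treat each WN like a normal and combine
    with information weights 1/sigma^2, keeping the fused mean on the circle.
    """
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mu = np.angle(w1 * np.exp(1j * mu1) + w2 * np.exp(1j * mu2))
    s = np.sqrt(1.0 / (w1 + w2))
    return mu, s
```

Working with unit complex numbers instead of raw angles is what makes both operations wrap-aware near ±π.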

Journal ArticleDOI
TL;DR: The main aim of the method is to transform the original localization problem into problems with as few unknown parameters as possible.
Abstract: In this paper, we derive closed-form and near closed-form solutions for joint source and sensor localization from time-difference-of-arrival (TDOA) measurements. In our previous works, we derived closed-form and near closed-form solutions for joint source and sensor localization from time-of-arrival (TOA) measurements. On the basis of these results, the main idea in this paper is to recover the TOA information only from the given TDOA measurements. We show that the TOA information can be recovered by using the low-rank property of the difference of square TOA-distance matrix in a closed-form or a near closed-form based on the linear method of solving polynomial equations. Since the low-rank property is reliable even in noisy cases, the TOA recovery works well under both small and large amounts of noise. The root-mean-squared errors achieved by our proposed algorithms are compared with the Cramer–Rao lower bound in synthetic experiments. The results show that the proposed methods work well for both small and large amounts of noise and for small and large numbers of sources and sensors.

Journal ArticleDOI
TL;DR: The proposed technique, called the separable deep auto encoder (SDAE), confines the clean speech reconstruction to the convex hull spanned by a pre-trained speech dictionary, given the under-determined nature of the underlying optimization problem.
Abstract: Unseen noise estimation is a key yet challenging step to make a speech enhancement algorithm work in adverse environments. At worst, the only prior knowledge we have about the encountered noise is that it is different from the involved speech. Therefore, by subtracting the components which cannot be adequately represented by a well-defined speech model, the noises can be estimated and removed. Given the good performance of deep learning in signal representation, a deep auto encoder (DAE) is employed in this work for accurately modeling the clean speech spectrum. In the subsequent stage of speech enhancement, an extra DAE is introduced to represent the residual part obtained by subtracting the estimated clean speech spectrum (by using the pre-trained DAE) from the noisy speech spectrum. By adjusting the estimated clean speech spectrum and the unknown parameters of the noise DAE, one can reach a stationary point to minimize the total reconstruction error of the noisy speech spectrum. The enhanced speech signal is thus obtained by transforming the estimated clean speech spectrum back into the time domain. The proposed technique is called the separable deep auto encoder (SDAE). Given the under-determined nature of the above optimization problem, the clean speech reconstruction is confined to the convex hull spanned by a pre-trained speech dictionary. New learning algorithms are investigated to respect the non-negativity of the parameters in the SDAE. Experimental results on TIMIT with 20 noise types at various noise levels demonstrate the superiority of the proposed method over the conventional baselines.

Journal ArticleDOI
TL;DR: A new noise filtering method that combines several filtering strategies in order to increase the accuracy of the classification algorithms used after the filtering process, and introduces a noise score to control the filtering sensitivity.

Journal ArticleDOI
TL;DR: Performance of regularized least-squares estimation in noisy compressed sensing is studied in the limit when the problem dimensions grow large and it is shown that the standard IID ensemble is a suboptimal choice for the measurement matrix.
Abstract: The performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that encloses as special cases standard Gaussian, row-orthogonal, geometric, and so-called $T$ -orthogonal constructions. Source vectors that have non-uniform sparsity are included in the system model. Regularization based on $\ell _{1}$ -norm and leading to LASSO estimation, or basis pursuit denoising, is given the main emphasis in the analysis. Extensions to $\ell _{2}$ -norm and zero-norm regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide a superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns, the $T$ -orthogonal matrices can further improve the mean square error behavior of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent in the system, the simple row-orthogonal measurement matrix appears to be the best choice out of the considered ensembles.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the joint use of source- and filter-based features; two strategies are proposed to merge source and filter information: feature fusion and decision fusion.
Abstract: Voice Activity Detection (VAD) refers to the problem of distinguishing speech segments from background noise. Numerous approaches have been proposed for this purpose. Some are based on features derived from the power spectral density, others exploit the periodicity of the signal. The goal of this letter is to investigate the joint use of source and filter-based features. Interestingly, a mutual information-based assessment shows superior discrimination power for the source-related features, especially the proposed ones. The features then serve as the input to an artificial neural network-based classifier trained on a multi-condition database. Two strategies are proposed to merge source and filter information: feature fusion and decision fusion. Our experiments indicate an absolute reduction of 3% of the equal error rate when using decision fusion. The final proposed system is compared to four state-of-the-art methods on 150 minutes of data recorded in real environments. Thanks to the robustness of its source-related features, its multi-condition training, and its efficient information fusion, the proposed system yields a substantial increase in accuracy over the best state-of-the-art VAD across all conditions (24% absolute on average).

Journal ArticleDOI
TL;DR: In this paper, two laboratory experiments were undertaken to evaluate the annoyance of urban road vehicle pass-by noises in the presence of industrial noise, focusing on the influence of spectral and temporal features.

Journal ArticleDOI
TL;DR: In this article, a model to dynamically update a noise map based on measurements is proposed, which relies on reasonably good source and propagation models and a measurement network that need not be very dense.