
Showing papers on "Noise measurement published in 2017"


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this article, a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise is presented, and two procedures for loss correction that are agnostic to both application domain and network architecture are proposed.
Abstract: We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures — stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers — demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.

1,171 citations
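The loss-correction procedures above reduce to simple linear algebra. As an illustration, here is a minimal numpy sketch of the "backward" correction, assuming the noise transition matrix T is known; the function name and array shapes are our own, not the authors':

```python
import numpy as np

# Backward loss correction: multiply the per-class loss vector by T^{-1},
# where T[i, j] = P(observed label j | true label i) is assumed known.
def backward_corrected_loss(probs, noisy_labels, T):
    ell = -np.log(probs)                       # per-class cross-entropy, shape (n, c)
    corrected = ell @ np.linalg.inv(T).T       # row n holds T^{-1} @ ell_n
    return corrected[np.arange(len(noisy_labels)), noisy_labels].mean()
```

With an identity T (no label noise) this reduces to the ordinary cross-entropy loss, which is a useful sanity check.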


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, the authors proposed a methodology for benchmarking denoising techniques on real photographs by capturing pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference.
Abstract: Lacking realistic ground truth data, image denoising techniques are traditionally evaluated on images corrupted by synthesized i.i.d. Gaussian noise. We aim to obviate this unrealistic setting by developing a methodology for benchmarking denoising techniques on real photographs. We capture pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference. To derive the ground truth, careful post-processing is needed. We correct spatial misalignment, cope with inaccuracies in the exposure parameters through a linear intensity transform based on a novel heteroscedastic Tobit regression model, and remove residual low-frequency bias that stems, e.g., from minor illumination changes. We then capture a novel benchmark dataset, the Darmstadt Noise Dataset (DND), with consumer cameras of differing sensor sizes. One interesting finding is that various recent techniques that perform well on synthetic noise are clearly outperformed by BM3D on photographs with real noise. Our benchmark delineates realistic evaluation scenarios that deviate strongly from those commonly used in the scientific literature.

540 citations
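The ground-truth post-processing above includes a linear intensity transform between the ISO pairs. A toy numpy sketch of that alignment step using plain least squares (the paper's actual method is a heteroscedastic Tobit regression that also handles clipped pixels; all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0.1, 0.9, size=10_000)   # nearly noise-free low-ISO pixels
# High-ISO shot: slight gain/offset mismatch plus sensor noise (synthetic values).
noisy = 1.05 * reference + 0.02 + rng.normal(0.0, 0.01, reference.shape)

a, b = np.polyfit(reference, noisy, deg=1)       # ordinary least-squares line fit
aligned = (noisy - b) / a                        # map the noisy shot onto the reference scale
```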


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework, concatenating the RGB patches to make use of the channel redundancy and introducing a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics.
Abstract: Most of the existing denoising algorithms are developed for grayscale images. It is not trivial to extend them for color image denoising since the noise statistics in R, G, and B channels can be very different for real noisy images. In this paper, we propose a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework. We concatenate the RGB patches to make use of the channel redundancy, and introduce a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics. The proposed MC-WNNM model does not have an analytical solution. We reformulate it into a linear equality-constrained problem and solve it via alternating direction method of multipliers. Each alternative updating step has a closed-form solution and the convergence can be guaranteed. Experiments on both synthetic and real noisy image datasets demonstrate the superiority of the proposed MC-WNNM over state-of-the-art denoising methods.

226 citations
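The workhorse inside WNNM-style models is weighted singular-value thresholding. A small numpy sketch of that one step (uniform weights here purely for illustration; WNNM assigns smaller weights to larger singular values so that dominant structure is preserved):

```python
import numpy as np

# Shrink each singular value of Y by its own weight, then reconstruct.
def weighted_svt(Y, weights):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # per-singular-value soft threshold
    return U @ np.diag(s_shrunk) @ Vt
```

Small singular values, which mostly carry noise, are driven to zero, producing a low-rank estimate of the patch matrix.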


Journal ArticleDOI
TL;DR: It is shown that DNN-based SE systems, when trained specifically to handle certain speakers, noise types and SNRs, are capable of achieving large improvements in estimated speech quality (SQ) and speech intelligibility (SI), when tested in matched conditions.
Abstract: In this paper, we study aspects of single microphone speech enhancement (SE) based on deep neural networks (DNNs). Specifically, we explore the generalizability capabilities of state-of-the-art DNN-based SE systems with respect to the background noise type, the gender of the target speaker, and the signal-to-noise ratio (SNR). Furthermore, we investigate how specialized DNN-based SE systems, which have been trained to be either noise type specific, speaker specific or SNR specific, perform relative to DNN-based SE systems that have been trained to be noise type general, speaker general, and SNR general. Finally, we compare how a DNN-based SE system trained to be noise type general, speaker general, and SNR general performs relative to a state-of-the-art short-time spectral amplitude minimum mean square error (STSA-MMSE) based SE algorithm. We show that DNN-based SE systems, when trained specifically to handle certain speakers, noise types and SNRs, are capable of achieving large improvements in estimated speech quality (SQ) and speech intelligibility (SI), when tested in matched conditions. Furthermore, we show that improvements in estimated SQ and SI can be achieved by a DNN-based SE system when exposed to unseen speakers, genders and noise types, given that a large number of speakers and noise types have been used in the training of the system. In addition, we show that a DNN-based SE system that has been trained using a large number of speakers and a wide range of noise types outperforms a state-of-the-art STSA-MMSE based SE method, when tested using a range of unseen speakers and noise types. Finally, a listening test using several DNN-based SE systems tested in unseen speaker conditions shows that these systems can improve SI for some SNR and noise type configurations but degrade SI for others.

191 citations


Journal ArticleDOI
TL;DR: This paper focuses on the most general model for sensor attacks, where any signal can be injected via compromised sensors, and presents an $l_0$-based attack-resilient state estimator that can be formulated as a mixed-integer linear program, along with its convex relaxation based on the $l_1$ norm.
Abstract: Several recent incidents have clearly illustrated the susceptibility of cyberphysical systems (CPS) to attacks, raising attention to security challenges in these systems. The tight interaction between information technology and the physical world has introduced new vulnerabilities that cannot be addressed with the use of standard cryptographic security techniques. Accordingly, the problem of state estimation in the presence of sensor and actuator attacks has attracted significant attention in the past. Unlike the existing work, in this paper, we consider the problem of attack-resilient state estimation in the presence of bounded-size noise. We focus on the most general model for sensor attacks where any signal can be injected via compromised sensors. Specifically, we present an $l_0$-based state estimator that can be formulated as a mixed-integer linear program and its convex relaxation based on the $l_1$ norm. For both attack-resilient state estimators, we derive rigorous analytic bounds on the state-estimation errors caused by the presence of noise. Our analysis shows that the worst-case error is linear with the size of the noise and, thus, the attacker cannot exploit the noise to introduce unbounded state-estimation errors. Finally, we show how the $l_0$- and $l_1$-based attack-resilient state estimators can be used for sound attack detection and identification; we provide conditions on the size of attack vectors that ensure correct identification of compromised sensors.

187 citations
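The $l_1$ relaxation above is an ordinary linear program. A toy numpy/scipy sketch of the idea (our own construction, not the authors' code): recover a 2-dimensional state from 20 noiseless sensors, three of which are arbitrarily corrupted:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 2, 20                          # state dimension, number of sensors
C = rng.normal(size=(m, n))           # observation matrix
x_true = np.array([1.0, -2.0])
y = C @ x_true
y[:3] += 50.0                         # attacker injects arbitrary signals on 3 sensors

# LP for min ||y - C x||_1: variables z = [x, t], minimize sum(t)
# subject to -t <= y - C x <= t  (so t_i >= |y_i - (C x)_i|).
A_ub = np.block([[C, -np.eye(m)], [-C, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
c = np.concatenate([np.zeros(n), np.ones(m)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_hat = res.x[:n]                     # attacked sensors are absorbed into large t_i
```

Because the clean sensors outnumber the attacked ones, the $l_1$ objective effectively ignores the three corrupted measurements and recovers the true state.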


Journal ArticleDOI
TL;DR: In this paper, a random frequency diverse array-based directional modulation with artificial noise (RFDA-DM-AN) scheme was proposed to enhance physical layer security of wireless communications.
Abstract: In this paper, a random frequency diverse array-based directional modulation with artificial noise (RFDA-DM-AN) scheme is proposed to enhance physical layer security of wireless communications. Specifically, we first design the RFDA-DM-AN scheme by randomly allocating frequencies to transmit antennas, thereby achieving 2-D (i.e., angle and range) secure transmissions, and outperforming the state-of-the-art 1-D (i.e., angle) phase array (PA)-based DM scheme. Then we derive the closed-form expression of a lower bound on the ergodic secrecy capacity (ESC) of our RFDA-DM-AN scheme. Based on the theoretical lower bound derived, we further optimize the transmission power allocation between the useful signal and artificial noise (AN) in order to improve the ESC. Simulation results show that: 1) our RFDA-DM-AN scheme achieves a higher secrecy capacity than that of the PA-based DM scheme; 2) the lower bound derived is shown to approach the ESC as the number of transmit antennas $N$ increases and precisely matches the ESC when $N$ is sufficiently large; and 3) the proposed optimum power allocation achieves the highest ESC of all power allocation schemes in the RFDA-DM-AN.

186 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed scheme significantly improves the original undersampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve, $F$-measure, and $G$-mean.
Abstract: Under-sampling is a popular data preprocessing method in dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding the bias toward majority class examples. It always uses full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed by incorporating a noise filter before executing resampling. In order to verify the efficiency, this scheme is implemented based on four popular under-sampling methods, i.e., Undersampling + Adaboost, RUSBoost, UnderBagging, and EasyEnsemble through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalanced ratio. Experimental results indicate that the proposed scheme can improve the original undersampling-based methods with significance in terms of three popular metrics for imbalanced classification, i.e., the area under the curve, $F$-measure, and $G$-mean.

172 citations
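The scheme's first stage can be sketched as a neighbourhood-based noise filter followed by random under-sampling. A hypothetical numpy illustration (the paper plugs the filter into boosting/bagging ensembles; the k-NN rule and names here are our own):

```python
import numpy as np

# Drop minority examples that have no minority point among their k nearest
# neighbours (likely label noise), then randomly under-sample the majority
# class to match. Assumes enough majority examples remain after filtering.
def filter_then_undersample(X, y, k=5, minority=1, seed=0):
    rng = np.random.default_rng(seed)
    is_min = (y == minority)
    keep = []
    for i in np.where(is_min)[0]:
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]              # k nearest neighbours, self excluded
        if (y[nn] == minority).any():            # keep unless fully surrounded by majority
            keep.append(i)
    maj_idx = np.where(~is_min)[0]
    sel = rng.choice(maj_idx, size=len(keep), replace=False)
    idx = np.concatenate([np.array(keep), sel])
    return X[idx], y[idx]
```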


Journal ArticleDOI
TL;DR: In this paper, a Kalman filter (KF) with the same order as the system provides optimal state estimates in a way that is simple and fast and uses little memory; however, the KF is an infinite impulse response (IIR) filter and performance may be poor if operational conditions are far from ideal.
Abstract: If a system and its observation are both represented in state space with linear equations, the system noise and the measurement noise are white, Gaussian, and mutually uncorrelated, and the system and measurement noise statistics are known exactly; then, a Kalman filter (KF) [1] with the same order as the system provides optimal state estimates in a way that is simple and fast and uses little memory. Because such estimators are of interest for designers, numerous linear and nonlinear problems have been solved using the KF, and many articles about KF applications appear every year. However, the KF is an infinite impulse response (IIR) filter [2]. Therefore, the KF performance may be poor if operational conditions are far from ideal [3]. Researchers working in the field of statistical signal processing and control are aware of the numerous issues facing the use of the KF in practice: insufficient robustness against mismodeling [4] and temporary uncertainties [2], the strong effect of the initial values [1], and high vulnerability to errors in the noise statistics [5]-[7].

147 citations
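For reference, the ideal linear-Gaussian setting the article describes admits a few-line filter. A minimal 1-D sketch (scalar random-walk state; all parameter values are illustrative):

```python
import numpy as np

# Model: x_k = x_{k-1} + w_k (process variance q), y_k = x_k + v_k (variance r).
def kalman_1d(ys, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for y in ys:
        p = p + q                  # predict: uncertainty grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # correct with the innovation y - x
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```

The recursion makes the IIR character visible: every past measurement influences the current estimate, which is exactly why mismodeled noise statistics can degrade it, as the article discusses.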


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper addresses the problem of line pattern noise removal from a single image, such as rain streaks and hyperspectral stripes, and proposes a compositional directional total variation and low-rank prior for the image layer to simultaneously accommodate both line pattern and random noise.
Abstract: This paper addresses the problem of line pattern noise removal from a single image, such as rain streak, hyperspectral stripe and so on. Most of the previous methods model the line pattern noise in the original image domain, which fails to explicitly exploit the directional characteristic, thus resulting in a redundant subspace with poor representation ability for the line pattern noise. To achieve a compact subspace for the line pattern structure, in this work, we incorporate a transformation into the image decomposition model that maps the input image to a domain where the line pattern appearance has an extremely distinct low-rank structure, which naturally allows us to enforce a low-rank prior to extract the line pattern streak/stripe from the noisy image. Moreover, the random noise is usually mixed up with the line pattern noise, which makes the challenging problem much more difficult. While previous methods resort to the spectral or temporal correlation of multiple images, we give a detailed analysis between the noisy and clean image in both the local gradient and nonlocal domain, and propose a compositional directional total variation and low-rank prior for the image layer to simultaneously accommodate both types of noise. The proposed method has been evaluated on two different tasks, including remote sensing image mixed random-stripe noise removal and rain streak removal, both of which obtain very impressive performance.

140 citations


Journal ArticleDOI
TL;DR: This paper describes a new noise suppression algorithm for seismic data denoising that is visually and quantitatively superior to other well-established noise reduction methods.
Abstract: Random noise elimination plays an important role in seismic signal processing. Generally, noise in seismic data can be divided into two categories: coherent noise and incoherent (random) noise. Suppression of wide-band noise, which is characterized by random oscillation in seismic data over time, is one of the challenging issues in seismic data processing. This paper describes a new noise suppression algorithm for seismic data denoising. The seismic data are transformed trace by trace into a sparse subspace using the synchrosqueezed wavelet transform; the obtained sparse time-frequency representation is then decomposed into semilow-rank and sparse components using the OptShrink algorithm. Finally, the denoised seismic trace can be recovered by back-transforming the semilow-rank component to the time domain using the inverse synchrosqueezed wavelet transform. The proposed method is assessed using a single synthetic seismic trace and a synthetic seismic section with two crossover linear and curve events with two discontinuities that are buried in the random noise. We have also evaluated the method using a prestack real seismic data set from an oil field in the southwest of Iran. A comparison is performed between the proposed method and the semisoft GoDec algorithm, classical f-x singular spectrum analysis, and the prediction Wiener filter. The results visually and quantitatively confirmed the superiority of the proposed method in contrast to the other well-established noise reduction methods.

132 citations
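The sparse-transform-then-shrink pattern above can be sketched in a few lines. As a loose stand-in we use a plain FFT instead of the synchrosqueezed wavelet transform and simple magnitude thresholding instead of OptShrink; the threshold rule is illustrative only:

```python
import numpy as np

# Transform to a sparser domain, keep only the largest coefficients, invert.
def threshold_denoise(trace, keep=0.05):
    coeffs = np.fft.rfft(trace)
    cutoff = np.quantile(np.abs(coeffs), 1 - keep)   # keep the largest 5%
    coeffs[np.abs(coeffs) < cutoff] = 0
    return np.fft.irfft(coeffs, n=len(trace))
```

A narrowband seismic-like signal concentrates in a few coefficients while random noise spreads over all of them, which is why the thresholding removes mostly noise.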


Journal ArticleDOI
TL;DR: An effective mixture noise removal method based on Laplacian scale mixture (LSM) modeling and nonlocal low-rank regularization is proposed; experimental results on synthetic noisy images show that it outperforms existing mixture noise removal methods.
Abstract: Recovering the image corrupted by additive white Gaussian noise (AWGN) and impulse noise is a challenging problem due to its difficulties in an accurate modeling of the distributions of the mixture noise. Many efforts have been made to first detect the locations of the impulse noise and then recover the clean image with image inpainting techniques from an incomplete image corrupted by AWGN. However, it is quite challenging to accurately detect the locations of the impulse noise when the mixture noise is strong. In this paper, we propose an effective mixture noise removal method based on Laplacian scale mixture (LSM) modeling and nonlocal low-rank regularization. The impulse noise is modeled with LSM distributions, and both the hidden scale parameters and the impulse noise are jointly estimated to adaptively characterize the real noise. To exploit the nonlocal self-similarity and low-rank nature of natural images, a nonlocal low-rank regularization is adopted to regularize the denoising process. Experimental results on synthetic noisy images show that the proposed method outperforms existing mixture noise removal methods.

Journal ArticleDOI
TL;DR: The A-optimality criterion is adopted to determine optimal sensor placements under the Gaussian measurement noise and a comprehensive analysis of optimal sensor-target geometries is provided with no restriction on the number of AOA sensors, sensor- target range, and noise variances.
Abstract: This paper investigates optimal sensor placement strategies for angle-of-arrival (AOA) localization in three-dimensional space. We adopt the A-optimality criterion to determine optimal sensor placements under the Gaussian measurement noise. A comprehensive analysis of optimal sensor-target geometries is provided with no restriction on the number of AOA sensors, sensor-target range, and noise variances. A resistor network analogy is also presented to enable quick determination of optimal sensor-target geometries. The analytical results are verified by extensive simulation studies.

Journal ArticleDOI
Abstract: Recorded seismic signals are often corrupted by noise. We have developed an automatic noise-attenuation method for single-channel seismic data, based upon high-resolution time-frequency analysis. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening a time-frequency picture. Noise can be distinguished from the signal and attenuated more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of the thresholding has been improved by adding a preprocessing step based on kurtosis measurement and a postprocessing step based on adaptive hard thresholding. The proposed algorithm can attenuate the noise (white or colored) and keep the signal, or remove the signal and keep the noise. Hence, it can be used either in normal denoising applications or as preprocessing in ambient noise studies. We tested the performance of the proposed method on s...

Journal ArticleDOI
TL;DR: The concept of graceful scaling is introduced in which the run time of an algorithm scales polynomially with noise intensity, and it is shown that a simple EDA called the compact genetic algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise.
Abstract: Practical optimization problems frequently include uncertainty about the quality measure, for example, due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs) are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the compact genetic algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.
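The compact genetic algorithm studied above is short enough to sketch directly. A hypothetical implementation on noisy OneMax (fitness = number of ones plus Gaussian noise); the parameter values are illustrative, not taken from the paper's analysis:

```python
import random

# cGA on noisy OneMax: maintain one marginal probability per bit, sample two
# individuals per step, and shift the marginals toward the (noisy) winner.
def cga_onemax(n=20, pop=200, sigma=1.0, steps=20_000, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n                            # marginal probability of a 1 per bit
    fit = lambda x: sum(x) + rng.gauss(0.0, sigma)
    for _ in range(steps):
        a = [rng.random() < pi for pi in p]  # sample two individuals
        b = [rng.random() < pi for pi in p]
        if fit(b) > fit(a):
            a, b = b, a                      # a is the noisy-comparison winner
        for i in range(n):
            if a[i] != b[i]:                 # update only where the two disagree
                p[i] += (1.0 if a[i] else -1.0) / pop
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
    return p
```

The small per-step update of size 1/pop is what averages out the noise; this is the mechanism behind the graceful scaling the paper proves.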

Journal ArticleDOI
TL;DR: The positioning accuracy of the proposed algorithm is shown to outperform the existing closed-form methods under this distance-dependent noise model and the Cramer–Rao lower bound under the small Gaussian noise assumption.
Abstract: In this letter, the problem of target localization from bistatic range (BR) measurements in distributed multiple-input multiple-output radar systems is investigated. By introducing nuisance parameters, a pseudolinear set of BR equations is established. Then, a closed-form localization algorithm is developed, based on this pseudolinear set of equations and multistage weighted least squares estimation. This solution is shown analytically to attain the Cramer–Rao lower bound under the small Gaussian noise assumption. Simulations are included to support the theoretical studies. Unlike the existing studies where the variance of BR measurements is assumed to be independent of the corresponding transmitter-to-target and target-to-receiver distances, we consider the more realistic distance-dependent noise model. The positioning accuracy of the proposed algorithm is shown to outperform the existing closed-form methods under this distance-dependent noise model.

Journal ArticleDOI
TL;DR: An effective variation model for multimodality medical image fusion and denoising is proposed, which performs well with both noisy and normal medical images, outperforming conventional methods in terms of fusion quality and noise reduction.
Abstract: Medical image fusion aims at integrating information from multimodality medical images to obtain a more complete and accurate description of the same object, which provides easy access for image-guided medical diagnosis and treatment. Unfortunately, medical images are often corrupted by noise in acquisition or transmission, and the noise signal is easily mistaken for a useful characterization of the image, making the fusion effect drop significantly. Thus, the existence of noise presents a great challenge for most traditional image fusion methods. To address this problem, an effective variation model for multimodality medical image fusion and denoising is proposed. First, a multiscale alternating sequential filter is exploited to extract the useful characterizations (e.g., details and edges) from noisy input medical images. Then, a recursive filtering-based weight map is constructed to guide the fusion of main features of the input images. Additionally, a total variation (TV) constraint is developed by constructing an adaptive fractional order $p$ based on the local contrast of the fused image, further effectively suppressing noise while avoiding the staircase effect of the TV. The experimental results indicate that the proposed method performs well with both noisy and normal medical images, outperforming conventional methods in terms of fusion quality and noise reduction.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: By adding sufficient noise to the image, the Google Cloud Vision API generates completely different outputs for the noisy image, while a human observer would perceive its original content, suggesting that cloud vision API can readily benefit from noise filtering, without the need for updating image analysis algorithms.
Abstract: Google has recently introduced the Cloud Vision API for image analysis. According to the demonstration website, the API "quickly classifies images into thousands of categories, detects individual objects and faces within images, and finds and reads printed words contained within images." It can be also used to "detect different types of inappropriate content from adult to violent content." In this paper, we evaluate the robustness of Google Cloud Vision API to input perturbation. In particular, we show that by adding sufficient noise to the image, the API generates completely different outputs for the noisy image, while a human observer would perceive its original content. We show that the attack is consistently successful, by performing extensive experiments on different image types, including natural images, images containing faces and images with texts. For instance, using images from ImageNet dataset, we found that adding an average of 14.25% impulse noise is enough to deceive the API. Our findings indicate the vulnerability of the API in adversarial environments. For example, an adversary can bypass an image filtering system by adding noise to inappropriate images. We then show that when a noise filter is applied on input images, the API generates mostly the same outputs for restored images as for original images. This observation suggests that the Cloud Vision API can readily benefit from noise filtering, without the need for updating image analysis algorithms.
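The perturbation itself is easy to reproduce. A sketch of salt-and-pepper (impulse) noise at roughly the 14.25% rate the paper reports for ImageNet images (our own code, applied to a synthetic image):

```python
import numpy as np

# Corrupt a fraction rho of pixels, setting each corrupted pixel to 0 or 255.
def add_impulse_noise(img, rho=0.1425, seed=0):
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < rho
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy
```

The noise-filter defense the paper then evaluates amounts to restoring such pixels (e.g., with a median filter) before querying the API.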

Journal ArticleDOI
TL;DR: In this article, a desired compensation adaptive controller is proposed for precision motion control of electro-hydraulic servo systems, with consideration of the nonlinearity, modeling uncertainty, and especially the severe measurement noise that arises from actual state feedback and significantly degrades the control performance.
Abstract: A desired compensation adaptive controller is proposed in this paper for precision motion control of electro-hydraulic servo systems, with consideration of the nonlinearity, modeling uncertainty, and especially the severe measurement noise arising from actual state feedback, which degrades the control performance significantly. To alleviate the noise, actual states in the model-based compensation design are replaced with corresponding desired values. Considering that the general hydraulic system model contains unmatched modeling uncertainties (e.g., unmodeled nonlinear friction), an innovative approach to construct the desired values of the intermediate state variables is proposed in the backstepping design procedure. It is then applied especially to the load pressure state, which appears in the system in a nonlinear way, and the discontinuous sign function is approximated by a continuous function to facilitate the controller design. As a result, the adaptive compensation and the regressor in the proposed controller depend on the desired trajectory and online parameter estimates only. Hence, the effect of measurement noise can be reduced and then high control performance is expected. Theoretical analysis reveals that the proposed controller can guarantee a prescribed transient performance and final tracking accuracy in the presence of both parametric uncertainties and uncertain nonlinearities. Moreover, it can guarantee the asymptotic tracking performance when subjected to parametric uncertainties only. Extensive comparative experimental results are obtained to verify the effectiveness of the proposed control strategy.

Journal ArticleDOI
TL;DR: A noise-adaptive variational Bayesian cubature information filter based on the Wishart distribution is derived, propagating the information matrix and information state, with the integrals of recursive Bayesian estimation approximated by the cubature integration rule.
Abstract: This paper presents a noise adaptive variational Bayesian cubature information filter based on the Wishart distribution. In the framework of recursive Bayesian estimation, the noise adaptive information filter propagating the information matrix and information state is derived, and the integration of recursive Bayesian estimation is approximated by the cubature integration rule. Then, the inverse of the measurement noise matrix is modeled as a Wishart distribution, so the joint distribution of the posterior state and measurement noise can be approximated by the product of independent Gaussian and Wishart distributions. Furthermore, the corresponding square-root version is also derived to improve the numerical characteristics. Simulation results with unknown and correlated measurement noise demonstrate the effectiveness of the proposed algorithms.

Journal ArticleDOI
TL;DR: This paper presents a novel map-matching solution that combines the widely used approach based on a hidden Markov model (HMM) with the concept of drivers’ route choice, which uses an HMM tailored for noisy and sparse data to generate partial map-matched paths in an online manner.
Abstract: With the growing use of crowdsourced location data from smartphones for transportation applications, the task of map-matching raw location sequence data to travel paths in the road network becomes more important. High-frequency sampling of smartphone locations using accurate but power-hungry positioning technologies is not practically feasible as it consumes an undue amount of the smartphone’s bandwidth and battery power. Hence, there exists a need to develop robust algorithms for map-matching inaccurate and sparse location data in an accurate and timely manner. This paper addresses the above-mentioned need by presenting a novel map-matching solution that combines the widely used approach based on a hidden Markov model (HMM) with the concept of drivers’ route choice. Our algorithm uses an HMM tailored for noisy and sparse data to generate partial map-matched paths in an online manner. We use a route choice model, estimated from real drive data, to reassess each HMM-generated partial path along with a set of feasible alternative paths. We evaluated the proposed algorithm with real world as well as synthetic location data under varying levels of measurement noise and temporal sparsity. The results show that the map-matching accuracy of our algorithm is significantly higher than that of the state of the art, especially at high levels of noise.
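At the core of HMM map matching is Viterbi decoding over candidate road segments, with emissions scoring GPS-to-segment distance and transitions scoring route plausibility. A generic sketch (the paper's actual model adds route-choice reassessment on top; all numbers in the usage test are illustrative):

```python
import numpy as np

# Standard Viterbi decoder in log space.
def viterbi(log_init, log_trans, log_emit):
    # log_emit: (T, S) scores of each observation under each candidate state.
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # cand[i, j]: best path ending in i, then i -> j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]               # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Sticky self-transitions are what let the decoder ride out individual noisy fixes instead of jumping between roads at every observation.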

Journal ArticleDOI
TL;DR: A deep stage convolutional neural network with trainable nonlinearity functions is applied for the first time to remove noise in HSIs and the experimental results confirm that the proposed method can obtain a more effective and efficient performance.
Abstract: Hyperspectral images (HSIs) can describe subtle differences in the spectral signatures of objects, and thus they are effective in a wide array of applications. However, an HSI is inevitably contaminated with some unwanted components like noise resulting in spectral distortion, which significantly decreases the performance of postprocessing. In this letter, a deep stage convolutional neural network (CNN) with trainable nonlinearity functions is applied for the first time to remove noise in HSIs. Besides the fact that the weight and bias matrices are learned from cubic training clean-noisy HSI patches, the nonlinearity functions in each stage are also trainable, which differ from the conventional CNN with a fixed nonlinearity function. Compared with the state-of-the-art HSI denoising methods, the experimental results on both synthetic and real HSIs confirm that the proposed method can obtain a more effective and efficient performance.

Journal ArticleDOI
TL;DR: To solve the correntropy-based joint sparsity model, a half-quadratic optimization technique is developed to convert the original nonconvex and nonlinear optimization problem into an iteratively reweighted JSR problem.
Abstract: Joint sparse representation (JSR) has been a popular technique for hyperspectral image classification, where a testing pixel and its spatial neighbors are simultaneously approximated by a sparse linear combination of all training samples, and the testing pixel is classified based on the joint reconstruction residual of each class. Due to the least-squares representation of the approximation error, the JSR model is usually sensitive to outliers, such as background, noisy pixels, and outlying bands. In order to eliminate such effects, we propose three correntropy-based robust JSR (RJSR) models, i.e., RJSR for handling pixel noise, RJSR for handling band noise, and RJSR for handling both pixel and band noise. The proposed RJSR models replace the traditional square of the Euclidean distance with the correntropy-based metric in measuring the joint approximation error. To solve the correntropy-based joint sparsity model, a half-quadratic optimization technique is developed to convert the original nonconvex and nonlinear optimization problem into an iteratively reweighted JSR problem. As a result, the optimization of our models can handle the noise in neighboring pixels and the noise in spectral bands. It can adaptively assign small weights to noisy pixels or bands and put more emphasis on noise-free pixels or bands. The experimental results using real and simulated data demonstrate the effectiveness of our models in comparison with the related state-of-the-art JSR models.
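The half-quadratic trick can be illustrated on a plain regression problem: minimizing the correntropy (Welsch) loss reduces to iteratively reweighted least squares, where large residuals receive exponentially small weights. The sketch below omits the joint-sparsity structure of the RJSR models; names are hypothetical.

```python
import numpy as np

def correntropy_irls(D, y, sigma=1.0, iters=30):
    """Half-quadratic solver for a correntropy (Welsch) loss: each iteration
    reweights residuals and solves a weighted least-squares problem."""
    x = np.linalg.lstsq(D, y, rcond=None)[0]
    for _ in range(iters):
        r = y - D @ x
        w = np.exp(-r ** 2 / (2 * sigma ** 2))  # outlying residuals -> near-zero weight
        x = np.linalg.solve(D.T @ (w[:, None] * D) + 1e-9 * np.eye(D.shape[1]),
                            D.T @ (w * y))
    return x
```

In the RJSR setting the same reweighting is applied per pixel or per band, so noisy neighbors or outlying bands are automatically de-emphasized.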

Journal ArticleDOI
TL;DR: This consensus paper was prepared by the Impacts of Science Group of the Committee for Aviation Environmental Protection of the International Civil Aviation Organization and summarizes the state of the science of noise effects research in the areas of noise measurement and prediction, community annoyance, children’s learning, sleep disturbance, and health.
Abstract: Noise is defined as 'unwanted sound.' Aircraft noise is one of, if not the, most detrimental environmental effects of aviation. It can cause community annoyance, disrupt sleep, adversely affect the academic performance of children, and could increase the risk of cardiovascular disease for people living in the vicinity of airports. In some airports, noise constrains air traffic growth. This consensus paper was prepared by the Impacts of Science Group of the Committee for Aviation Environmental Protection of the International Civil Aviation Organization and summarizes the state of the science of noise effects research in the areas of noise measurement and prediction, community annoyance, children's learning, sleep disturbance, and health. It also briefly discusses civilian supersonic aircraft as a future source of aviation noise. © 2017 Noise & Health | Published by Wolters Kluwer - Medknow.

Journal ArticleDOI
TL;DR: An algebraic closed-form method for locating a single target from BR measurements using a distributed multiple-input multiple-output (MIMO) radar system is proposed, which is able to reach the Cramer–Rao lower bound accuracy under mild noise conditions.
Abstract: Elliptic localization is an active range-based positioning technique that employs multiple transmitter–receiver pairs, each of which is able to provide a separate bistatic range (BR) measurement. In this letter, an algebraic closed-form method for locating a single target from BR measurements using a distributed multiple-input multiple-output (MIMO) radar system is proposed. First, a set of linear equations is established by eliminating the nuisance parameters, and then a weighted least squares estimator is employed to obtain the target position estimate. To refine the localization performance, the error in the initial solution is subsequently estimated. The proposed method is shown analytically, and corroborated by simulations, to be approximately unbiased and able to reach the Cramer–Rao lower bound accuracy under mild noise conditions. Numerical simulations demonstrate the superiority of this algorithm over the existing methods.
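As a hedged illustration of least-squares localization from bistatic ranges, the sketch below uses an iterative Gauss-Newton refinement rather than the paper's non-iterative algebraic closed-form WLS solution; the function names and toy geometry are hypothetical.

```python
import numpy as np

def bistatic_range(p, tx, rx):
    # range along the path transmitter -> target -> receiver
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx)

def locate_target(txs, rxs, ranges, p0, iters=25):
    """Gauss-Newton least-squares target localization from bistatic ranges."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        res = np.array([bistatic_range(p, t, r) - m
                        for t, r, m in zip(txs, rxs, ranges)])
        # Jacobian row: sum of unit vectors from transmitter and receiver to target
        J = np.array([(p - t) / np.linalg.norm(p - t) + (p - r) / np.linalg.norm(p - r)
                      for t, r in zip(txs, rxs)])
        p -= np.linalg.lstsq(J, res, rcond=None)[0]
    return p
```

The closed-form method in the paper instead linearizes by eliminating the nuisance parameters, avoiding both the initial guess and the iteration.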

Journal ArticleDOI
TL;DR: In this paper, a randomization operator is proposed to disperse the energy of the coherent noise in the time-space domain, which can be used for simultaneous random and coherent noise attenuation.
Abstract: Multichannel singular spectrum analysis (MSSA) is an effective algorithm for random noise attenuation; however, it cannot be used to suppress coherent noise. This limitation results from the fact that the conventional MSSA method cannot distinguish between useful signals and coherent noise in the singular spectrum. We have developed a randomization operator to disperse the energy of the coherent noise in the time-space domain. Furthermore, we have developed a novel algorithm for the extraction of useful signals, i.e., for simultaneous random and coherent noise attenuation, by introducing a randomization operator into the conventional MSSA algorithm. In this method, which we call randomized-order MSSA, the traces along the trajectory of each signal component are randomly rearranged. Two ways to extract the trajectories of different signal components are investigated. The first is based on picking the extrema of the upper envelopes, a method that is also constrained by local and global gradients. Th...
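The rank-reduction core that randomized-order MSSA builds on can be sketched for a single channel: embed the trace in a Hankel (trajectory) matrix, truncate its SVD, and average the antidiagonals back into a signal. This handles only random noise; the paper's randomization operator along picked trajectories, which disperses coherent noise, is omitted here.

```python
import numpy as np

def ssa_denoise(x, L, rank):
    """Single-channel SSA: Hankel embedding, truncated SVD, and
    antidiagonal averaging back to a signal."""
    N = len(x)
    H = np.array([x[i:i + N - L + 1] for i in range(L)])  # L x (N-L+1), H[i, j] = x[i+j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # rank-reduced Hankel matrix
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(Hr.shape[0]):                          # average each antidiagonal
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt
```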

Journal ArticleDOI
TL;DR: Results show that two statistical clusters differentiated by rush hour traffic flow are sufficient and better for categorization than the road types provided by Italian road regulation.

Journal ArticleDOI
TL;DR: A double least-squares projection (DLSP) method to estimate a signal from noisy data, which applies the projection operation twice to obtain an almost perfect estimate of the signal.
Abstract: A real-world signal is always corrupted with noise. The separation between a signal and noise is an indispensable step in a variety of signal-analysis applications across different scientific domains. In this paper, we propose a double least-squares projection (DLSP) method to estimate a signal from noisy data. The first least-squares projection finds a signal-dimensional optimal approximation of the noisy data in the least-squares sense; in this step, a rough estimate of the signal is obtained. The second least-squares projection finds an approximation of the signal in another crossed signal-dimensional space in the least-squares sense; in this step, a much improved signal estimate that is close to orthogonal to the separated noise subspace can be obtained. The DLSP thus implements the projection operation twice to obtain an almost perfect estimate of the signal. The application of the DLSP method to seismic random noise attenuation and signal reconstruction demonstrates its successful performance in seismic data processing.
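The first of the two projections can be sketched as a truncated-SVD least-squares projection onto the leading "signal-dimensional" subspace; the second projection in a crossed signal-dimensional space is omitted from this illustration, and the function name is hypothetical.

```python
import numpy as np

def lsq_project(X, rank):
    """Least-squares projection of noisy multichannel data onto its leading
    singular subspace (the Eckart-Young optimal low-rank approximation)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```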

Journal ArticleDOI
TL;DR: This work votes on surface normal tensors from robust statistics to guide the creation of consistent subneighborhoods subsequently used by moving least squares (MLS) to give a unified mesh-denoising framework for not only handling noise but also enabling the recovering of surfaces with both sharp and small-scale features.
Abstract: Mesh denoising is imperative for improving imperfect surfaces acquired by scanning devices. The main challenge is to faithfully retain geometric features and avoid introducing additional artifacts when removing noise. Unlike the existing mesh denoising techniques that focus only on either first-order features or high-order differential properties, our approach exploits the synergy when facet normals and quadric surfaces are integrated to recover a piecewise smooth surface. Specifically, we vote on surface normal tensors from robust statistics to guide the creation of consistent subneighborhoods subsequently used by moving least squares (MLS). This voting naturally leads to a conceptually simple way that gives a unified mesh-denoising framework for not only handling noise but also enabling the recovery of surfaces with both sharp and small-scale features. The effectiveness of our framework stems from: 1) the multiscale tensor voting that avoids the influence from noise; 2) the effective energy minimization strategy for searching the consistent subneighborhoods; and 3) the piecewise MLS that fully prevents the side effects from different subneighborhoods during surface fitting. Our framework is direct, practical, and easy to understand. Comparisons with the state-of-the-art methods demonstrate its outstanding performance on feature preservation and artifact suppression.

Journal ArticleDOI
TL;DR: Experimental results illustrate that the proposed process uncertainty robust Student’s t-based Kalman filter has significantly better robustness for the suppression of the process uncertainty but slightly higher computational complexity than the existing state-of-the-art methods.
Abstract: Motivated by the problem that the Gaussian assumption of process noise may be violated and the statistics of process noise may be inaccurate when the carrier maneuvers severely, a new process uncertainty robust Student’s t-based Kalman filter is proposed to integrate the strap-down inertial navigation system (SINS) and global positioning system (GPS). To better address the heavy-tailed process noise induced by severe maneuvering, the one-step predicted probability density function is modeled as a Student’s t distribution, and the conjugate prior distributions of inaccurate mean vector, scale matrix, and degrees of freedom (dofs) parameter are, respectively, selected as Gaussian, inverse Wishart, and Gamma distributions, based on which a new Student’s t-based hierarchical Gaussian state-space model for SINS/GPS integration is constructed. The state vector, auxiliary random variable, mean vector, scale matrix, and dof parameter are jointly estimated based on the constructed hierarchical Gaussian state-space model using the variational Bayesian approach. Experimental results illustrate that the proposed method has significantly better robustness for the suppression of the process uncertainty but slightly higher computational complexity than the existing state-of-the-art methods.
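For context, the Gaussian Kalman recursion that the variational Student's-t filter robustifies is sketched below; this is the textbook baseline, not the paper's method, and the interface is hypothetical.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the standard Gaussian Kalman filter."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # update state with measurement z
    P = (np.eye(len(x)) - K @ H) @ P # update covariance
    return x, P
```

The paper replaces the Gaussian one-step predicted density with a Student's-t distribution and jointly infers its mean, scale matrix, and degrees of freedom variationally, which keeps the filter stable when the process noise Q is misspecified during severe maneuvers.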

Journal ArticleDOI
TL;DR: In this paper, a physics-based dipole moment source reconstruction is proposed to estimate the near-field coupling between a liquid crystal display panel and a cellphone's cellular antenna, based on the understanding of the current distribution on the source.
Abstract: A physics-based dipole moment source reconstruction is proposed to estimate the near-field coupling between a liquid crystal display panel and a cellphone's cellular antenna. Based on the understanding of the current distribution on the source, a magnetic dipole moment source is reconstructed to replace the real radiation source, which is located at the edge of the flexible printed circuit board. To characterize the coupling from the equivalent dipole moment source to the victim antenna, the noise transfer coefficient is proposed. The noise transfer coefficient can be calculated from near-field scanning and direct coupling measurements using a wideband source. The proposed physics-based dipole moment source reconstruction and noise transfer coefficient are successfully validated through measured near-field coupling in a practical cellphone.