
Answers from top 9 papers

The results show that it also has good noise robustness.
As a result, the proposed method achieves good performance under most types of noise.
It is also relatively robust to noise.
The technique also shows good noise tolerance characteristics.
Simulation results show that this technology can reach good test accuracy against both white noise and colored noise (Feiyan Mu, Jiafen Zhang, and Jing Du, proceedings article, 27 Jun 2011, 6 citations).
Good performance and valuable noise characteristics are achieved.
It is also robust to additive noise.
Benefits of the proposed work are a simple noise-prediction mechanism, good visual quality, and shorter execution time.
Given that blue noise is a limiting case of green noise, this new technique can even create blue-noise masks.

See what other people are reading

What is the mathematical formula for amplitude demodulation in Matlab?
4 answers
The amplitude demodulation approach described for Matlab involves constructing an M × N Hankel matrix from the discrete samples, performing Singular Value Decomposition (SVD) on that matrix, and then using the first two singular values to estimate the amplitude of the acquired signal. This method eliminates the need for reference signals, so no synchronization is required, and it maintains a high Signal-to-Noise Ratio (SNR) even with non-integer-period sampling. Additionally, the SVD itself acts as a filter, eliminating the need for extra low-pass filters in the signal conditioning module. Numerical simulations and experiments demonstrate excellent demodulation accuracy and robust performance, showing that the approach provides reliable amplitude demodulation in Matlab.
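As a rough illustration (not the cited papers' exact algorithm), a minimal NumPy/SciPy sketch of the Hankel-plus-SVD amplitude estimate might look like the following; the sampling rate, the test signal, and the (σ1 + σ2)/√(MN) estimator are assumptions made for this toy example.

```python
# Minimal NumPy sketch (not the papers' exact algorithm): estimate the amplitude
# of a noisy sinusoid from the two leading singular values of a Hankel matrix.
import numpy as np
from scipy.linalg import hankel, svdvals

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(480) / fs           # a non-integer number of periods is fine
true_amp, f0 = 2.5, 50.0
x = true_amp * np.cos(2 * np.pi * f0 * t + 0.7) + 0.1 * np.random.randn(t.size)

M = x.size // 2                   # rows of the Hankel matrix
H = hankel(x[:M], x[M - 1:])      # M x (len(x) - M + 1) Hankel matrix
s = svdvals(H)                    # singular values, descending

# For a single real sinusoid the Hankel matrix is approximately rank 2 and each
# of the two leading singular values is close to A * sqrt(M * N) / 2, so their
# sum gives an amplitude estimate; the remaining singular values carry mostly
# noise (this is the sense in which the SVD itself acts as a filter).
N = H.shape[1]
amp_est = (s[0] + s[1]) / np.sqrt(M * N)
print(f"estimated amplitude: {amp_est:.3f} (true {true_amp})")
```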
Does constant motion uncertainty in the Kalman Filter affect proximity estimation?
4 answers
Constant motion uncertainty in Kalman Filter (KF) indeed affects proximity estimation, as evidenced by various studies focusing on the precision and reliability of estimation and tracking systems. The traditional KF assumes constant measurement noise, which is often unrealistic in dynamic environments, leading to incorrect estimations and increased system integrity risk. This is particularly problematic in applications requiring high precision, such as dynamic positioning and navigation systems, where the quality of GNSS signal and measurement noise levels can be unpredictable.

Adaptive Kalman filtering techniques have been developed to address the limitations posed by constant motion uncertainty. For instance, the measurement sequence adaptive KF (MSAKF) estimates unknown parameters of noise statistics adaptively, enhancing computational efficiency and ensuring stability and high precision in complex environments. Similarly, an innovative adaptive KF based on integer ambiguity validation improves the efficiency of ambiguity resolution and positioning accuracy by dynamically adjusting the measurement noise matrix and variance-covariance matrix.

The impact of constant motion uncertainty extends to the estimation of proximity relations among mobile objects. The inaccuracy of dynamically obtained position data and the simplification of unknown positions to a probability distribution around the last known position pose significant challenges. Moreover, the use of uncertain Gauss-Markov noise models and their adaptation for continuous-time and discrete-time systems highlight the importance of overbounding estimate error covariance to ensure reliability in safety-critical applications. In the context of real-time motion management, such as in radiotherapy, adaptive uncertainty estimates derived from surrogate behavior significantly improve the accuracy of predicted confidence regions, enabling the detection of large prediction errors. This adaptability is crucial for managing the uncertainties inherent in dynamic systems and ensuring the reliability of proximity estimations.

In summary, constant motion uncertainty in Kalman Filters significantly affects proximity estimation, necessitating adaptive and robust filtering approaches to improve accuracy and reliability across various applications.
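As a hedged, minimal sketch of the general idea (not the MSAKF or any other specific cited filter), the following one-dimensional Kalman filter re-estimates its measurement-noise variance from a window of recent innovations instead of keeping it constant; the random-walk motion model, window length, and tuning values are assumptions.

```python
# Minimal 1-D sketch: a Kalman filter whose measurement-noise variance R is
# re-estimated from a window of recent innovations rather than held constant.
import numpy as np
from collections import deque

def adaptive_kf(measurements, q=1e-3, r0=1.0, window=30):
    x, p = measurements[0], 1.0          # state estimate and its variance
    r = r0                               # measurement-noise variance (adapted)
    innovations = deque(maxlen=window)
    estimates = []
    for z in measurements:
        p = p + q                        # predict (random-walk motion model)
        nu = z - x                       # innovation
        innovations.append(nu)
        if len(innovations) == window:
            c_nu = np.mean(np.square(innovations))   # sample innovation variance
            r = max(c_nu - p, 1e-6)                  # R ~ C_nu - H P H^T, kept positive
        k = p / (p + r)                  # update
        x = x + k * nu
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: position measurements whose noise level doubles halfway through.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 10.0, 400)
noise = np.concatenate([rng.normal(0, 0.2, 200), rng.normal(0, 0.4, 200)])
est = adaptive_kf(truth + noise)
```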
What are the most commonly used metrics for evaluating search task performance in academic papers?
5 answers
The most commonly used metrics for evaluating search task performance in academic papers include hypervolume, inverted generational distance, generational distance, and hypercube-based diversity metrics. Additionally, relevance, publication age, and impact are crucial factors considered in evaluating the usefulness of academic papers. Furthermore, the influence of search intent on user satisfaction and evaluation metrics in web image search has been studied, highlighting the importance of understanding user intent for improving search processes. Moreover, the performance of similarity metrics is essential in tasks like image processing, where robustness of metrics significantly impacts search performance, especially in the presence of noise. These diverse metrics and factors play key roles in assessing the effectiveness and efficiency of search tasks in academic papers and other domains.
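As a small illustrative sketch (normalisation conventions differ across papers), generational distance and inverted generational distance can be computed as follows; the toy fronts are made-up data.

```python
# Hedged sketch of two of the listed metrics: generational distance (GD) and
# inverted generational distance (IGD) between an obtained set A and a
# reference set Z of objective vectors.
import numpy as np

def _mean_min_dist(src, dst):
    # For each point in src, distance to its nearest neighbour in dst, averaged.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def generational_distance(A, Z):
    return _mean_min_dist(np.asarray(A, float), np.asarray(Z, float))

def inverted_generational_distance(A, Z):
    return _mean_min_dist(np.asarray(Z, float), np.asarray(A, float))

A = [[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]]     # obtained front (toy data)
Z = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]     # reference front (toy data)
print(generational_distance(A, Z), inverted_generational_distance(A, Z))
```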
What are noise levels ?
5 answers
Noise levels refer to the intensity of unwanted sound present in various environments. They can be measured and quantified using different parameters such as Sound Pressure Level (SPL) in dB(A), Time Weighted Average (TWA), Equivalent Continuous Noise Level (ECNL), and octave band analysis. In image processing, noise levels can be determined in subbands by analyzing wavelet coefficients and computing correlations to assess noise components. Methods for estimating noise levels involve segmenting waveform data to identify positive and negative values, determining segment widths, and calculating noise levels based on segment heights or areas. In digital TV signal processing, noise levels are crucial for controlling noise suppression filters and enhancing image quality in modern TV equipment. Noise assessment is also vital in image quality evaluation, where noise level estimation aids in iterative noise reduction processes by separating featureless regions and calculating pseudo-standard deviations for accurate noise measures.
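As a rough sketch of the wavelet-subband idea (not any specific cited method), the noise level of a signal can be estimated robustly from its finest-scale Haar detail coefficients with a median-absolute-deviation estimator; the Haar transform and the 0.6745 Gaussian constant are standard assumptions used here for illustration.

```python
# Estimate the noise level of a signal from its finest-scale Haar detail
# coefficients, using a robust MAD estimator so that edges and features bias
# the result less than a plain standard deviation would.
import numpy as np

def estimate_noise_sigma(x):
    x = np.asarray(x, float)
    n = x.size - (x.size % 2)
    detail = (x[1:n:2] - x[0:n:2]) / np.sqrt(2.0)    # one level of Haar detail coefficients
    return np.median(np.abs(detail)) / 0.6745        # MAD -> sigma for Gaussian noise

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 5.0, 2.0], 300)              # piecewise-constant signal with edges
noisy = clean + rng.normal(0, 0.3, clean.size)
print(estimate_noise_sigma(noisy))                   # close to 0.3
```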
What are the important variables to be considered when obtaining satellite images?
5 answers
When obtaining satellite images, important variables to consider include biophysical variables for various applications like agriculture, environment management, and climate studies. Image quality is also affected by noise, which can be caused by environmental conditions, instrument accuracy, and data quantization, and which can be mitigated through image restoration techniques such as median, arithmetic mean, and geometric mean filters. Additionally, the image quality of very high-resolution satellite telescopes is influenced by optics, detector quality, pointing accuracy errors, attitude stability errors, altitude variations, and the use of charge-coupled devices with time delay and integration features. Furthermore, factors like illumination angle, Sun-Earth distance, environmental conditions, and weather effects must be considered during image rectification and registration processes for accurate interpretation and information extraction from satellite images.
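As a hedged illustration of the restoration filters mentioned above, the following SciPy sketch applies median, arithmetic-mean, and geometric-mean filters to a toy single-band image; the synthetic band values and the additive Gaussian noise are assumptions.

```python
# Apply the three restoration filters mentioned above to a noisy toy band.
import numpy as np
from scipy import ndimage

def arithmetic_mean_filter(img, size=3):
    return ndimage.uniform_filter(img.astype(float), size=size)

def geometric_mean_filter(img, size=3, eps=1e-6):
    # Geometric mean = exp of the arithmetic mean of log intensities.
    log_img = np.log(np.clip(img.astype(float), eps, None))
    return np.exp(ndimage.uniform_filter(log_img, size=size))

def median_filter(img, size=3):
    return ndimage.median_filter(img, size=size)

rng = np.random.default_rng(2)
band = rng.uniform(50, 200, size=(64, 64))        # toy "satellite band"
noisy = band + rng.normal(0, 10, band.shape)      # additive sensor noise (assumed Gaussian)
restored = (median_filter(noisy),
            arithmetic_mean_filter(noisy),
            geometric_mean_filter(noisy))
```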
How environmental sound classification helps in hearing aid?
5 answers
Environmental sound classification plays a crucial role in enhancing hearing aid functionality. By utilizing advanced algorithms like spectral entropy-based features with random forest classifiers, convolutional neural networks for noise classification, and entropy metrics to quantify diversity in auditory environments, hearing aids can accurately identify and adapt to various acoustic scenes. This classification information aids in customizing signal processing for different users based on the environments they encounter daily. Moreover, environmental classification data is valuable for clinicians in fitting, counseling, and troubleshooting hearing aids. Overall, the classification of environmental sounds enables hearing aids to optimize performance in different listening scenarios, ultimately improving the user experience and quality of sound processing.
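As a simplified sketch of such a pipeline (not any specific paper's classifier), frame-wise spectral entropy can be used as a feature for a scikit-learn random forest; the synthetic "tone-like" and "noise-like" scenes and the two-feature set are assumptions for illustration.

```python
# Frame-wise spectral entropy as a feature, fed to a random forest to label
# acoustic scenes. Real hearing-aid classifiers use richer feature sets, but
# the overall shape of the pipeline is the same.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_entropy(frame):
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / (psd.sum() + 1e-12)              # normalise to a probability distribution
    return -np.sum(p * np.log2(p + 1e-12))     # Shannon entropy of the spectrum

def features(signal, frame_len=512):
    frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
    return np.array([[spectral_entropy(f), f.std()] for f in frames])

rng = np.random.default_rng(3)
tone_like = np.sin(2 * np.pi * 0.05 * np.arange(8192))   # stand-in for a tonal/speech-like scene
noise_like = rng.normal(0, 1, 8192)                       # stand-in for a babble/noise scene
X = np.vstack([features(tone_like), features(noise_like)])
y = np.array([0] * 16 + [1] * 16)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```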
What are the examples of GANs applications for the deconvolution (deblurring) of fluorescence microscopy images?
4 answers
Generative Adversarial Networks (GANs) have been applied in various ways for deconvolution (deblurring) of fluorescence microscopy images. Examples include FluoGAN, which combines physical modeling with GANs to deconvolve microscopy images effectively. Another instance is the Richardson-Lucy Network (RLN), a lightweight deep learning method that integrates traditional deconvolution techniques with a convolutional network structure for improved interpretability and robustness. Additionally, a blind global noise modeling denoiser (GNMD) utilizes GANs to simulate image noise globally, enhancing denoising performance for fluorescence microscopy images. These applications showcase the versatility and effectiveness of GANs in addressing challenges related to deblurring and denoising in fluorescence microscopy.
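For context, a minimal sketch of the classical Richardson-Lucy iteration that RLN builds on is shown below; the GAN components of FluoGAN and GNMD are far more involved and are not reproduced here, and the Gaussian PSF and synthetic bead image are assumptions.

```python
# Classical Richardson-Lucy deconvolution; the PSF is assumed known.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a synthetic "bead" image with a Gaussian PSF, then deconvolve.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
img = np.zeros((64, 64))
img[20, 20] = img[40, 45] = 1.0
blurred = fftconvolve(img, psf, mode="same")
deconvolved = richardson_lucy(blurred, psf)
```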
What causes noise in seismic data?
5 answers
Noise in seismic data can stem from various sources. Weather-related mechanisms, like hydrostatic-pressure fluctuations caused by wave motion and dynamic fluctuations from turbulent boundary layers, contribute to noise during marine seismic acquisition. Additionally, seismic data can be affected by random noise generated by natural sources such as wind friction, tree vibrations, and cultural sources like traffic and human activities. During seismic data acquisition, unwanted waves, termed noise, are inevitable, impacting the quality of seismic records. The noise can manifest as ground rolls, multiples, and random oscillations, affecting the signal-to-noise ratio across time and frequency domains. Understanding and mitigating these diverse noise sources are crucial to enhancing the quality and interpretability of seismic data.
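As a simple, hedged illustration of one mitigation step, a Butterworth band-pass filter can attenuate low-frequency ground roll on a single trace; the corner frequencies and the synthetic trace below are assumptions, not field-tested values.

```python
# Attenuate low-frequency ground roll on a single trace with a band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, fs, low_hz=10.0, high_hz=60.0, order=4):
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

fs = 500.0                                            # sampling rate (assumed)
t = np.arange(0, 2, 1 / fs)
reflection = np.sin(2 * np.pi * 30 * t) * np.exp(-t)  # stand-in for reflected signal
ground_roll = 2.0 * np.sin(2 * np.pi * 5 * t)         # strong low-frequency noise
filtered = bandpass(reflection + ground_roll, fs)
```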
How do high-pass filters contribute to reducing noise in Doppler ultrasound images?
4 answers
High-pass filters play a crucial role in reducing noise in Doppler ultrasound images by suppressing thermal noise and tissue clutter. However, their effectiveness is limited when blood signals and tissue clutter overlap in spectra. To address this limitation, alternative approaches like singular value decomposition (SVD) based spatio-temporal filters have been proposed, but they still struggle with tissue motion and thermal noise. A novel technique called Principal Component Pursuit (PCP) has been introduced to remove thermal noise, tissue clutter, and tissue motion artifact in Doppler ultrasound images. PCP involves low-rank approximations of blood signals and projecting signals onto the l1-norm ball, resulting in significant improvements in noise and clutter suppression. Power Doppler images filtered with PCP show up to an 11 dB improvement in signal-to-noise ratio compared to conventional filters.
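As a minimal sketch of the SVD-based spatio-temporal filtering mentioned above (not the PCP algorithm itself), the ensemble frames can be stacked into a Casorati matrix and the largest and smallest singular components suppressed; the rank thresholds and the random toy ensemble are assumptions.

```python
# SVD-based spatio-temporal clutter filter: drop the largest singular components
# (tissue clutter) and the smallest (thermal noise), keep the middle (blood).
import numpy as np

def svd_clutter_filter(frames, clutter_rank=2, noise_rank=2):
    # frames: array of shape (n_frames, height, width) of Doppler ensemble data
    n_frames, h, w = frames.shape
    casorati = frames.reshape(n_frames, h * w).T       # (pixels, time)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:clutter_rank] = 0.0                    # remove tissue-clutter subspace
    if noise_rank > 0:
        s_filtered[-noise_rank:] = 0.0                 # remove thermal-noise subspace
    filtered = (u * s_filtered) @ vt
    return filtered.T.reshape(n_frames, h, w)

rng = np.random.default_rng(4)
ensemble = rng.normal(size=(32, 16, 16))               # toy ensemble, not real IQ data
blood = svd_clutter_filter(ensemble)
```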
What is joint normal distribution?
5 answers
A joint normal distribution refers to a multivariate distribution where multiple random variables follow a normal distribution and are correlated. In the context of normal variance mixtures, this concept is extended by randomizing the covariance matrix through a non-negative random variable, resulting in a more generalized form of the multivariate normal distribution. Additionally, copulas are utilized to model dependencies between random variables with specified marginal distributions, allowing for joint distributions like a normal and a half-normal to be created. The joint normal distribution plays a crucial role in various fields, such as secure communication systems where Gaussian sequences are modulated to achieve secure transmissions. Furthermore, in modeling wideband radio channels, a multivariate log-normal distribution is proposed to jointly capture received power, mean delay, and delay spread, showcasing the importance of considering correlated variables together.
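As a small numerical illustration, two jointly normal variables can be specified by their means, standard deviations, and a correlation coefficient; sampling then recovers that correlation, which is what distinguishes a joint normal distribution from two merely marginally normal variables. The particular means, standard deviations, and ρ = 0.8 below are arbitrary.

```python
# Build a 2x2 covariance matrix from standard deviations and a correlation,
# sample the bivariate normal, and check the empirical correlation.
import numpy as np

mu = np.array([0.0, 5.0])
sigmas = np.array([1.0, 2.0])
rho = 0.8
cov = np.array([[sigmas[0] ** 2,               rho * sigmas[0] * sigmas[1]],
                [rho * sigmas[0] * sigmas[1],  sigmas[1] ** 2]])

rng = np.random.default_rng(5)
samples = rng.multivariate_normal(mu, cov, size=100_000)
print(np.corrcoef(samples.T)[0, 1])   # close to 0.8
```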
How to improve modeling resilience with artificial noise in neural networks?
5 answers
To enhance modeling resilience in neural networks using artificial noise, a novel noise injection-based training scheme has been proposed. This method involves estimating gradients for both synaptic weights and noise levels during stochastic gradient descent training, optimizing noise levels alongside synaptic weights. By incorporating noise into the network, the model's robustness against adversarial attacks can be significantly improved, as demonstrated in experiments on MNIST and Fashion-MNIST datasets. Additionally, a method has been introduced to reduce label error rates and improve dataset quality by addressing noise condensity issues through a statistical probability-based label flipping process, enhancing the overall performance of neural network models trained on corrected datasets. These approaches collectively contribute to fortifying neural network models against various forms of noise and improving their overall resilience.
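As a toy, generic sketch of noise-injection training (not the cited scheme, which also learns the noise level through its own gradient), Gaussian noise can be added to the weights on every forward pass of a small logistic-regression model during SGD; the data, noise standard deviation, and learning rate are assumptions.

```python
# Noise-injection training: perturb the weights with Gaussian noise on every
# forward pass, compute the gradient at the noisy weights, update the clean
# weights. The injected noise acts as a regulariser / robustness aid.
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_weight_noise(X, y, noise_std=0.1, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w_noisy = w + rng.normal(0.0, noise_std, w.shape)   # inject weight noise
        p = sigmoid(X @ w_noisy)
        grad = X.T @ (p - y) / len(y)                       # gradient at the noisy weights
        w -= lr * grad                                      # update the clean weights
    return w

# Two-class toy data.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
w = train_with_weight_noise(X, y)
```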