
Showing papers on "Noise (signal processing) published in 2010"


Journal ArticleDOI
TL;DR: This paper models the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients, and the model parameters for the signal and noise, using the Bayesian framework, and develops a constructive (greedy) algorithm designed for fast reconstruction that is useful in practical settings.
Abstract: In this paper, we model the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients and the model parameters for the signal and noise, using the Bayesian framework. We utilize a hierarchical form of the Laplace prior to model the sparsity of the unknown signal. We describe the relationship among a number of sparsity priors proposed in the literature, and show the advantages of the proposed model including its high degree of sparsity. Moreover, we show that some of the existing models are special cases of the proposed model. Using our model, we develop a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings. Unlike most existing CS reconstruction methods, the proposed algorithm is fully automated, i.e., the unknown signal coefficients and all necessary parameters are estimated solely from the observation, and, therefore, no user intervention is needed. Additionally, the proposed algorithm provides estimates of the uncertainty of the reconstructions. We provide experimental results with synthetic 1-D signals and images, and compare with the state-of-the-art CS reconstruction algorithms, demonstrating the superior performance of the proposed approach.

718 citations
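
The paper's greedy algorithm is Bayesian and fully automated; as a rough illustration of the greedy family it belongs to, here is a minimal orthogonal matching pursuit (OMP) sketch in NumPy. This is generic OMP, not the authors' method, and it assumes the sparsity level k is known, whereas the paper's algorithm estimates all parameters from the observation:

```python
import numpy as np

def omp(Phi, y, k):
    """Generic orthogonal matching pursuit: greedily build the support
    of a k-sparse x from noisy measurements y = Phi @ x + noise."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Add the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit by least squares on the active support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```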


Journal ArticleDOI
TL;DR: A method to reduce noise based on the principle that the concentration changes of oxygenated and deoxygenated hemoglobin should be negatively correlated is developed, and it is shown that despite its simplicity, this method is effective in reducing noise and improving signal quality, for both online and offline noise reduction.

582 citations


Posted Content
TL;DR: It is proved that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements.
Abstract: This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models - e.g. Gaussian, frequency measurements - discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) - they make use of a much weaker notion - or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.

483 citations
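
To make the recovery statement concrete, here is a small, self-contained l1-minimization sketch. It uses a Gaussian sensing matrix as a stand-in for rows drawn from a distribution F, and plain iterative soft thresholding (ISTA) as the solver; the regularization weight and iteration count are arbitrary choices, not values from the paper:

```python
import numpy as np

def ista(Phi, y, lam=0.1, iters=500):
    """Iterative soft thresholding for basis pursuit denoising:
    min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = x - Phi.T @ (Phi @ x - y) / L     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, s = 256, 80, 5                          # dimension, measurements, sparsity
x0 = np.zeros(n); x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. random sensing rows
y = Phi @ x0 + 0.01 * rng.standard_normal(m)      # noisy measurements
print(np.linalg.norm(ista(Phi, y) - x0))          # small recovery error
```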


Journal ArticleDOI
TL;DR: High electrode impedance may increase noise and decrease statistical power under some conditions, but these effects can be reduced by using a cool and dry recording environment and appropriate signal processing methods.
Abstract: To determine whether data quality is meaningfully reduced by high electrode impedance, EEG was recorded simultaneously from low- and high-impedance electrode sites during an oddball task. Low-frequency noise was found to be increased at high-impedance sites relative to low-impedance sites, especially when the recording environment was warm and humid. The increased noise at the high-impedance sites caused an increase in the number of trials needed to obtain statistical significance in analyses of P3 amplitude, but this could be partially mitigated by high-pass filtering and artifact rejection. High electrode impedance did not reduce statistical power for the N1 wave unless the recording environment was warm and humid. Thus, high electrode impedance may increase noise and decrease statistical power under some conditions, but these effects can be reduced by using a cool and dry recording environment and appropriate signal processing methods.

344 citations
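
The mitigation the authors mention, high-pass filtering, is easy to sketch with SciPy; the 0.1 Hz cutoff and filter order below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_eeg(eeg, fs, cutoff=0.1, order=2):
    """Zero-phase high-pass filter to suppress the slow drift that
    dominates recordings from high-impedance electrodes. Cutoff and
    order are illustrative choices."""
    b, a = butter(order, cutoff / (fs / 2), btype='highpass')
    return filtfilt(b, a, eeg, axis=-1)
```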


Journal ArticleDOI
TL;DR: In this paper, a parametric amplifier based on Josephson junctions is proposed that can reach the quantum limit at microwave frequencies, where the minimum noise energy added by a phase-preserving amplifier to the signal it processes amounts to at least half a photon at the signal frequency.
Abstract: Amplifiers are crucial in every experiment carrying out a very sensitive measurement. However, they always degrade the information by adding noise. Quantum mechanics puts a limit on how small this degradation can be. Theoretically, the minimum noise energy added by a phase-preserving amplifier to the signal it processes amounts at least to half a photon at the signal frequency. Here we propose a practical microwave device that can fulfil the minimal requirements to reach the quantum limit. The availability of such a device is of importance for the readout of solid-state qubits, and more generally for the measurement of very weak signals in various areas of science. We discuss how this device can be the basic building block for a variety of practical applications, such as amplification, noiseless frequency conversion, dynamic cooling and production of entangled signal pairs. The minimum noise energy that a phase-preserving amplifier adds to the signal is fundamentally limited to half a photon. A proposed parametric amplifier based on Josephson junctions should be able to reach this limit at microwave frequencies.

247 citations


Journal ArticleDOI
TL;DR: A variant of Synchrosqueezing is considered, based on the short-time Fourier transform, to precisely define the instantaneous frequencies of a multicomponent AM-FM signal and an algorithm to recover these instantaneous frequencies from the uniform or nonuniform samples of the signal is described.
Abstract: We propose a new approach for studying the notion of the instantaneous frequency of a signal. We build on ideas from the Synchrosqueezing theory of Daubechies, Lu and Wu and consider a variant of Synchrosqueezing, based on the short-time Fourier transform, to precisely define the instantaneous frequencies of a multi-component AM-FM signal. We describe an algorithm to recover these instantaneous frequencies from the uniform or nonuniform samples of the signal and show that our method is robust to noise. We also consider an alternative approach based on the conventional, Hilbert transform-based notion of instantaneous frequency to compare to our new method. We use these methods on several test cases and apply our results to a signal analysis problem in electrocardiography.

240 citations
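
A crude way to see the idea is to estimate instantaneous frequency as the magnitude ridge of the short-time Fourier transform; synchrosqueezing goes further by reassigning energy according to the local phase derivative, but the ridge already tracks a chirp, as this hedged SciPy sketch shows:

```python
import numpy as np
from scipy.signal import stft

def ridge_if(x, fs, nperseg=256):
    """Crude instantaneous-frequency estimate: the STFT magnitude ridge.
    Synchrosqueezing sharpens this by reassigning energy to the phase
    derivative; this sketch keeps only the ridge."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    return t, f[np.argmax(np.abs(Z), axis=0)]

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 20 * t ** 2))   # chirp with IF = 50 + 40 t Hz
times, inst_freq = ridge_if(x, fs)               # tracks the linear chirp
```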


Journal ArticleDOI
TL;DR: In this paper, a technique based on the dual-tree complex wavelet transform (DTCWT) is proposed to enhance the desired features related to a specific type of machine fault.

216 citations


Journal ArticleDOI
TL;DR: Using chaotic Lorenz data and calculating root-mean-square error, the Lyapunov exponent, and the correlation dimension, it is shown that the adaptive algorithm more effectively reduces noise in the chaotic Lorenz system than wavelet denoising with three different thresholding choices.
Abstract: Time series measured in the real world are often nonlinear, even chaotic. To effectively extract desired information from measured time series, it is important to preprocess the data to reduce noise. In this Letter, we propose an adaptive denoising algorithm. Using chaotic Lorenz data and calculating root-mean-square error, the Lyapunov exponent, and the correlation dimension, we show that our adaptive algorithm more effectively reduces noise in the chaotic Lorenz system than wavelet denoising with three different thresholding choices. We further analyze an electroencephalogram (EEG) signal in sleep apnea and show that the adaptive algorithm again more effectively reduces the electrocardiogram (ECG) and other types of noise contaminating the EEG than the wavelet approaches.

214 citations
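
The wavelet-denoising baseline the Letter compares against is easy to reproduce in outline. The sketch below generates noisy Lorenz data and applies soft universal thresholding with PyWavelets; the wavelet, decomposition level, and noise amplitude are arbitrary assumptions, and the authors' adaptive algorithm itself is not reproduced here:

```python
import numpy as np
import pywt                                   # PyWavelets
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0, 25, 5000))
clean = sol.y[0]
noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(clean.size)

# Soft universal thresholding in the wavelet domain (the baseline family
# the Letter compares its adaptive algorithm against).
coeffs = pywt.wavedec(noisy, 'db4', level=6)
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745      # noise-level estimate
thr = sigma_hat * np.sqrt(2 * np.log(noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, 'soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:noisy.size]
print('RMSE:', np.sqrt(np.mean((denoised - clean) ** 2)))
```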


Journal ArticleDOI
TL;DR: In this paper, it is shown that the performance of higher criticism can be improved by exploiting the nature of the correlation between noise variables. Indeed, the case of independent noise turns out to be the most difficult of all from a statistical viewpoint, and more accurate signal detection can be obtained when correlation is present.
Abstract: Higher criticism is a method for detecting signals that are both sparse and weak. Although first proposed in cases where the noise variables are independent, higher criticism also has reasonable performance in settings where those variables are correlated. In this paper we show that, by exploiting the nature of the correlation, performance can be improved by using a modified approach which exploits the potential advantages that correlation has to offer. Indeed, it turns out that the case of independent noise is the most difficult of all, from a statistical viewpoint, and that more accurate signal detection (for a given level of signal sparsity and strength) can be obtained when correlation is present. We characterize the advantages of correlation by showing how to incorporate them into the definition of an optimal detection boundary. The boundary has particularly attractive properties when correlation decays at a polynomial rate or the correlation matrix is Toeplitz.

213 citations
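
For reference, the classical higher-criticism statistic (for the independent-noise case; the paper's modified approach additionally exploits the correlation structure) can be computed from sorted p-values:

```python
import numpy as np

def higher_criticism(pvals):
    """Classical HC statistic: the maximal standardized discrepancy
    between the sorted p-values and their uniform-null expectations.
    Common variants restrict the maximum to 1/n <= p_(i) <= 1/2."""
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    return np.max(np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12))
```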


Journal ArticleDOI
TL;DR: This paper derives information-theoretic performance bounds for sensing and reconstruction of sparse phenomena from noisy projections by developing novel extensions to Fano's inequality that handle continuous domains and arbitrary distortions, and shows that, with constant SNR, the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.
Abstract: In this paper, we derive information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models where the noise enters after the projection and input noise models where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and SNR, to signal sparsity, k, distortion level, d, and signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions, we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood. These results provide order-wise tight bounds. For output noise models, we show that asymptotically an SNR of O(log n) together with O(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient in the linear sparsity regime. In contrast, for input noise models, we show that support recovery fails if the number of measurements scales as o(n log(n)/SNR), implying poor compression performance for such cases. Motivated by the fact that the worst-case setup requires significantly high SNR and a substantial number of measurements for input and output noise models, we consider a Bayesian setup. To derive necessary conditions, we develop novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions. We then develop a new max-likelihood analysis over the set of rate distortion quantization points to characterize tradeoffs between mean-squared distortion and the number of measurements using rate-distortion theory. We show that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.

210 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work shows that optimal capture can be formulated as a mixed integer programming problem, which lets us achieve higher worst-case SNR in the same capture time, or much faster capture for the same minimum acceptable level of SNR.
Abstract: Taking multiple exposures is a well-established approach both for capturing high dynamic range (HDR) scenes and for noise reduction. But what is the optimal set of photos to capture? The typical approach to HDR capture uses a set of photos with geometrically-spaced exposure times, at a fixed ISO setting (typically ISO 100 or 200). By contrast, we show that the capture sequence with optimal worst-case performance, in general, uses much higher and variable ISO settings, and spends longer capturing the dark parts of the scene. Based on a detailed model of noise, we show that optimal capture can be formulated as a mixed integer programming problem. Compared to typical HDR capture, our method lets us achieve higher worst-case SNR in the same capture time (for some cameras, up to 19 dB improvement in the darkest regions), or much faster capture for the same minimum acceptable level of SNR. Our experiments demonstrate this advantage for both real and synthetic scenes.
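
The paper solves an exact mixed integer program over a detailed camera noise model; as a toy stand-in, the brute-force sketch below scores candidate exposure sets by worst-case SNR under a simple shot-plus-read-noise model. All numbers (read noise, full well, exposure times, luminance range) are made up for illustration:

```python
import itertools
import numpy as np

def worst_case_snr(exposures, scene_lum, read_noise=4.0, full_well=4e4):
    """Worst-case SNR over scene luminances for an exposure set: each
    luminance is scored with its best unsaturated exposure under a
    shot-noise + read-noise model (all numbers hypothetical)."""
    worst = np.inf
    for L in scene_lum:
        best = 0.0
        for t in exposures:
            e = L * t                         # collected photo-electrons
            if e < full_well:                 # skip saturated exposures
                best = max(best, e / np.sqrt(e + read_noise ** 2))
        worst = min(worst, best)
    return worst

scene = np.logspace(1, 6, 20)                 # dark-to-bright luminances (e-/s)
cands = [0.001, 0.004, 0.016, 0.064, 0.256, 1.0]   # exposure times (s)
best_set = max(itertools.combinations(cands, 3),
               key=lambda E: worst_case_snr(E, scene))
print(best_set)
```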

Journal ArticleDOI
TL;DR: Four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for each noise type by the mean filter, median filter and Wiener filter.
Abstract: Image processing is basically the use of computer algorithms to process digital images. Digital image processing is a part of digital signal processing, and it has many significant advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Wavelet transforms have become a very powerful tool for de-noising an image; one of the most popular methods is the Wiener filter. In this work, four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for each noise type by the mean filter, the median filter and the Wiener filter. The results are then compared for all noise types.
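
The three filters compared in this work are one-liners in SciPy. The sketch below applies them to a synthetic image with additive Gaussian noise; the kernel size and noise level are arbitrary choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from scipy.signal import wiener

rng = np.random.default_rng(0)
image = np.zeros((128, 128)); image[32:96, 32:96] = 1.0   # synthetic test image
noisy = image + 0.2 * rng.standard_normal(image.shape)    # Gaussian noise

mean_out   = uniform_filter(noisy, size=3)   # mean filter
median_out = median_filter(noisy, size=3)    # median filter
wiener_out = wiener(noisy, mysize=3)         # adaptive Wiener filter

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB against the clean reference."""
    return 10 * np.log10(ref.max() ** 2 / np.mean((ref - est) ** 2))

for name, out in [('mean', mean_out), ('median', median_out),
                  ('wiener', wiener_out)]:
    print(name, round(psnr(image, out), 2), 'dB')
```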

Journal ArticleDOI
TL;DR: In this paper, a new hybrid method based on optimal Morlet wavelet filter and autocorrelation enhancement is presented to diagnose rolling element bearing faults in the early stage of bearing failures.

Journal ArticleDOI
TL;DR: The effect of acquisition parameters on lesion detectability depends on signal size; increasing the angular scan range increased detectability for all signal sizes.
Abstract: Purpose: Tomosynthesis is a promising modality for breast imaging. The appearance of the tomosynthesis reconstructed image is greatly affected by the choice of acquisition and reconstruction parameters. The purpose of this study was to investigate the limitations of tomosynthesis breast imaging due to scan parameters and quantum noise. Tomosynthesis image quality was assessed based on performance of a mathematical observer model in a signal-known-exactly (SKE) detection task. Methods: SKE detectability (d′) was estimated using a prewhitening observer model. Structured breast background was simulated using filtered noise. Detectability was estimated for designer nodules ranging from 0.05 to 0.8 cm in diameter. Tomosynthesis slices were reconstructed using iterative maximum-likelihood expectation-maximization. The tomosynthesis scan angle was varied between 15° and 60°, the number of views between 11 and 41, and the total number of x-ray quanta was ∞, 6×10^5, and 6×10^4. Detectability in tomosynthesis was compared to that in a single projection. Results: For constant angular sampling distance, increasing the angular scan range increased detectability for all signal sizes. Large-scale signals were little affected by quantum noise or angular sampling. For small-scale signals, quantum noise and insufficient angular sampling degraded detectability. At high quantum noise levels, an angular step size of 3° or below was sufficient to avoid image degradation. At lower quantum noise levels, increased angular sampling always resulted in increased detectability. The ratio of detectability in the tomosynthesis slice to that in a single projection exhibited a peak that shifted to larger signal sizes when the angular range increased. For a given angular range, the peak shifted toward smaller signals when the number of views was increased. The ratio was greater than unity for all conditions evaluated. Conclusion: The effect of acquisition parameters on lesion detectability depends on signal size. Tomosynthesis scan angle had an effect on detectability for all signal sizes, while quantum noise and angular sampling only affected the detectability of small-scale signals.
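
The prewhitening observer's detectability index reduces to a simple frequency-domain sum when the noise is stationary; a minimal sketch of that computation (not the full simulation pipeline of the study):

```python
import numpy as np

def prewhitening_dprime(signal_dft, noise_power_spectrum):
    """SKE detectability for an ideal prewhitening observer:
    d'^2 = sum_f |S(f)|^2 / NPS(f), with S the expected signal's DFT
    and NPS the stationary noise power spectrum on the same grid."""
    return np.sqrt(np.sum(np.abs(signal_dft) ** 2 / noise_power_spectrum))
```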

Journal ArticleDOI
TL;DR: Compared with the traditional cumulant-based classifiers, the proposed K-S classifiers offer superior classification performance, require fewer signal samples (and thus are faster), and are more robust to various channel impairments.
Abstract: A new approach to modulation classification based on the Kolmogorov-Smirnov (K-S) test is proposed. The K-S test is a non-parametric method to measure the goodness of fit. The basic procedure involves computing the empirical cumulative distribution function (ECDF) of some decision statistic derived from the received signal, and comparing it with the CDFs or the ECDFs of the signal under each candidate modulation format. The K-S-based modulation classifiers are developed for various channels, including the AWGN channel, the flat-fading channel, the OFDM channel, and the channel with unknown phase and frequency offsets, as well as the non-Gaussian noise channel, for both QAM and PSK modulations. Extensive simulation results demonstrate that compared with the traditional cumulant-based classifiers, the proposed K-S classifiers offer superior classification performance, require fewer signal samples (and thus are faster), and are more robust to various channel impairments.
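
The decision rule is straightforward to sketch with SciPy's two-sample K-S test: compare the ECDF of a decision statistic of the received signal against reference ECDFs simulated under each candidate format. The statistic (signal magnitude), constellations, and noise level below are illustrative choices, not the paper's exact setup:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_classify(rx, refs):
    """Pick the format whose reference statistic distribution has the
    smallest K-S distance to the received-signal statistic."""
    stat = np.abs(rx)
    return min(refs, key=lambda name: ks_2samp(stat, refs[name]).statistic)

rng = np.random.default_rng(0)
noise = lambda n: 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
qpsk = np.exp(2j * np.pi * rng.integers(0, 4, 4000) / 4)
qam16 = ((rng.integers(0, 4, 4000) * 2 - 3)
         + 1j * (rng.integers(0, 4, 4000) * 2 - 3)) / np.sqrt(10)

refs = {'QPSK': np.abs(qpsk + noise(4000)), '16QAM': np.abs(qam16 + noise(4000))}
rx = qam16[:1000] + noise(1000)
print(ks_classify(rx, refs))                  # expected: 16QAM
```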

Journal ArticleDOI
TL;DR: The developed MSRF-PLL has a fast transient response compared to the standard PLL technique, and its performance is robust against disturbances on the grid, voltage waveforms with harmonic distortion, and noise.
Abstract: This paper proposes a novel phase-locked loop (PLL) control strategy to synthesize a unit vector using the modified synchronous reference frame (MSRF) instead of the traditional synchronous reference frame. The unit vector is used for vector rotation or inverse rotation in vector-controlled three-phase grid-connected converting equipment. The developed MSRF-PLL has a fast transient response compared to the standard PLL technique. Its performance is robust against disturbances on the grid, voltage waveforms with harmonic distortion, and noise. The proposed algorithm has been analyzed in detail and was fully implemented digitally using the digital signal processor TMS320F2812. The experimental evaluation of the MSRF-PLL in a shunt active power filter confirms its fast dynamic response, noise immunity, and applicability.
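
The standard SRF-PLL structure that the MSRF variant builds on can be sketched in a few lines: Clarke/Park-transform the three-phase voltage and drive the q-axis component to zero with a PI controller. The gains and nominal frequency below are illustrative, and the paper's modifications to the reference frame are not reproduced:

```python
import numpy as np

def srf_pll(v_abc, fs, kp=100.0, ki=5000.0, f0=50.0):
    """Standard SRF-PLL sketch: the estimated angle theta locks to the
    grid voltage vector when the q-axis voltage is regulated to zero.
    v_abc has shape (3, N); gains and f0 are illustrative."""
    theta, integ = 0.0, 0.0
    omega0 = 2 * np.pi * f0
    thetas = np.empty(v_abc.shape[1])
    for n in range(v_abc.shape[1]):
        a, b, c = v_abc[:, n]
        valpha = (2 * a - b - c) / 3                  # Clarke transform
        vbeta = (b - c) / np.sqrt(3)
        vq = -valpha * np.sin(theta) + vbeta * np.cos(theta)   # Park, q-axis
        integ += ki * vq / fs                         # PI controller on vq
        theta += (omega0 + kp * vq + integ) / fs      # integrate frequency
        thetas[n] = theta
    return thetas
```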

Proceedings ArticleDOI
03 May 2010
TL;DR: This highly scalable design provides excellent noise immunity and low hysteresis, and has the potential to be made flexible and formable, with application in the field of human-friendly robotics.
Abstract: As robots and humans move towards sharing the same environment, the need for safety in robotic systems is of growing importance. Towards this goal of human-friendly robotics, a robust, low-cost, low-noise capacitive force sensing array is presented with application as a whole-body artificial skin covering. This highly scalable design provides excellent noise immunity and low hysteresis, and has the potential to be made flexible and formable. Noise immunity is accomplished through the use of shielding and local sensor processing. A small and low-cost multivibrator circuit is replicated locally at each taxel, minimizing stray capacitance and noise coupling. Each circuit has a digital pulse train output, which allows robust signal transmission in noisy electrical environments. Wire count is minimized through serial or row-column addressing schemes, and the use of an open-drain output on each taxel allows hundreds of sensors to require only a single output wire. With a small set of interface wires, large arrays can be scanned hundreds of times per second, and dynamic response remains flat over a broad frequency range. Sensor performance is evaluated on a bench-top version of a 4×4 taxel array in quasi-static and dynamic cases.

Journal ArticleDOI
01 Jan 2010
TL;DR: An ECG signal processing method with a quad level vector (QLV) is proposed for the ECG Holter system to achieve better performance with low computational complexity.
Abstract: An ECG signal processing method with a quad level vector (QLV) is proposed for the ECG Holter system. The ECG processing consists of a compression flow and a classification flow, and the QLV is proposed for both flows to achieve better performance with low computational complexity. The compression algorithm is performed using an ECG skeleton and Huffman coding. Unit-block-size optimization, adaptive threshold adjustment, and 4-bit-wise Huffman coding methods are applied to reduce the processing cost while maintaining the signal quality. Heartbeat segmentation and R-peak detection methods are employed for the classification algorithm. The performance is evaluated using the Massachusetts Institute of Technology-Boston's Beth Israel Hospital (MIT-BIH) Arrhythmia Database, and a noise robustness test is also performed to establish the reliability of the algorithm. The average compression ratio is 16.9:1 with a percentage root-mean-square difference of 0.641%, and the encoding rate is 6.4 kbps. The accuracy of the R-peak detection is 100% without noise and 95.63% in the worst case with -10-dB SNR noise. The overall processing cost is reduced by 45.3% with the proposed compression techniques.
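
Huffman coding, one ingredient of the compression flow, is compact to sketch; this generic builder (not the paper's 4-bit-wise variant) returns a prefix-code table for a symbol stream:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman prefix-code table {symbol: bitstring} from a stream."""
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        # Merge: prefix '0' onto the lighter subtree's codes, '1' onto the other.
        table = {s: '0' + c for s, c in lo[2].items()}
        table.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], i, table])
        i += 1
    return heap[0][2]

codes = huffman_code([0, 0, 0, 1, 1, 2])   # frequent symbols get short codes
```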

Journal ArticleDOI
TL;DR: In this article, issues relating to inaccuracy of ECG preprocessing filters are investigated in the context of facilitating efficient ECG interpretation and diagnosis, and several suggestions are made to improve and update existing ECG data preprocessing standards and guidelines.

Journal ArticleDOI
TL;DR: A cooperative sequential detection scheme is presented to reduce the average sensing time required to reach a detection decision, and it is studied how to implement the scheme in a robust manner when the assumed signal models have unknown parameters, such as signal strength and noise variance.
Abstract: Efficient and reliable spectrum sensing plays a critical role in cognitive radio networks. This paper presents a cooperative sequential detection scheme to reduce the average sensing time that is required to reach a detection decision. In the scheme, each cognitive radio computes the log-likelihood ratio for each of its measurements, and the base station sequentially accumulates these log-likelihood statistics and determines whether to stop making measurements. The paper studies how to implement the scheme in a robust manner when the assumed signal models have unknown parameters, such as signal strength and noise variance. These ideas are illustrated through two examples in spectrum sensing. One assumes both the signal and noise are Gaussian distributed, while the other assumes the target signal is deterministic.
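
The core of the scheme, accumulating per-sample log-likelihood ratios until a threshold is crossed, is Wald's sequential probability ratio test. A minimal single-sensor sketch for a Gaussian example (means, error rates, and thresholds are illustrative):

```python
import numpy as np

def sprt(samples, llr, a, b):
    """Wald's sequential probability ratio test: accumulate per-sample
    log-likelihood ratios; decide H1 at the upper threshold b, H0 at the
    lower threshold a. Returns (decision, samples used)."""
    s = 0.0
    for n, x in enumerate(samples, 1):
        s += llr(x)
        if s >= b:
            return 'signal present', n
        if s <= a:
            return 'signal absent', n
    return 'undecided', len(samples)

# Illustrative Gaussian example: H0: x ~ N(0, 1) vs H1: x ~ N(mu, 1).
mu = 0.5
llr = lambda x: mu * x - mu ** 2 / 2              # log[p1(x) / p0(x)]
a, b = np.log(0.01 / 0.99), np.log(0.99 / 0.01)   # Wald thresholds, 1% errors
rng = np.random.default_rng(0)
print(sprt(mu + rng.standard_normal(500), llr, a, b))
```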

Journal ArticleDOI
TL;DR: In this article, a fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications with focus on the correct quantitative estimation of acoustic source spectra.

Journal ArticleDOI
TL;DR: The tradeoff that has to be made between noise reduction and interference rejection is theoretically demonstrated, and a new relationship between both filters is derived in which the MVDR is decomposed into the LCMV and a matched filter (the MVDR solution in the absence of interference).
Abstract: In real-world environments, the signals captured by a set of microphones in a speech communication system are mixtures of the desired signal, interference, and ambient noise. A promising solution for proper speech acquisition (with reduced noise and interference) in this context consists in using the linearly constrained minimum variance (LCMV) beamformer to reject the interference, reduce the overall mixture energy, and preserve the target signal. The minimum variance distortionless response beamformer (MVDR) is also commonly known to reduce the interference-plus-noise energy without distorting the desired signal. In either case, it is of paramount importance to accurately quantify the achieved noise and interference reduction. Indeed, it is quite reasonable to ask, for instance, about the price that has to be paid in order to achieve total removal of the interference without distorting the target signal when using the LCMV. Besides, it is fundamental to understand the effect of the MVDR on both noise and interference. In this correspondence, we investigate the performance of the MVDR and LCMV beamformers when the interference and ambient noise coexist with the target source. We demonstrate a new relationship between both filters in which the MVDR is decomposed into the LCMV and a matched filter (MVDR solution in the absence of interference). Both components are properly weighted to achieve maximum interference-plus-noise reduction. We investigate the performance of the MVDR, LCMV, and matched filters and elaborate new closed-form expressions for their output signal-to-interference ratio (SIR) and output signal-to-noise ratio (SNR). We theoretically demonstrate the tradeoff that has to be made between noise reduction and interference rejection. In fact, the total removal of the interference may severely amplify the residual ambient noise. Conversely, totally focussing on noise reduction leads to increased level of residual interference. The proposed study is finally supported by several numerical examples.
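
The two beamformers under study have classical closed forms, shown below for an array covariance R, steering vector d, and LCMV constraint matrix C with response vector g. This is the textbook algebra only, not the paper's decomposition analysis:

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR: minimize output power subject to a distortionless response
    toward steering vector d:  w = R^{-1} d / (d^H R^{-1} d)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

def lcmv_weights(R, C, g):
    """LCMV: multiple linear constraints C^H w = g (e.g. unit gain toward
    the target and a null toward the interferer):
       w = R^{-1} C (C^H R^{-1} C)^{-1} g."""
    Rinv_C = np.linalg.solve(R, C)
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, g)
```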

Journal ArticleDOI
TL;DR: In this article, a simple learning rule that can reproduce the effect of motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters was proposed. But it does not require extrinsic information to separate noise from signal.
Abstract: It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.
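
A generic reward-gated, noise-driven Hebbian update in the spirit of the proposed rule can be sketched for a single linear readout. This is a node-perturbation-style sketch, not the authors' exact rule, and the reward baseline used here is an illustrative assumption:

```python
import numpy as np

def reward_hebb_step(w, x, target, noise_std=0.1, lr=0.01):
    """One step of a reward-gated, noise-driven Hebbian update for a
    linear readout y = w.x + noise. The output perturbation is
    reinforced in proportion to the reward improvement it causes, so no
    extrinsic noise/signal separation is needed."""
    noise = noise_std * np.random.standard_normal()
    y = w @ x + noise
    reward = -(y - target) ** 2            # reward with the perturbation
    baseline = -(w @ x - target) ** 2      # reward without it (assumed baseline)
    return w + lr * (reward - baseline) * noise * x
```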

Journal Article
TL;DR: In this paper, an index for predicting the effects of noise, nonlinear distortion, and linear filtering on speech quality is developed for both normal-hearing and hearing-impaired listeners.
Abstract: Signal modifications in audio devices such as hearing aids include both nonlinear and linear processing. An index is developed for predicting the effects of noise, nonlinear distortion, and linear filtering on speech quality. The index is designed for both normal-hearing and hearing-impaired listeners. It starts with a representation of the auditory periphery that incorporates aspects of impaired hearing. The cochlear model is followed by the extraction of signal features related to the quality judgments. One set of features measures the effects of noise and nonlinear distortion on speech quality, whereas a second set of features measures the effects of linear filtering. The hearing-aid speech quality index (HASQI) is the product of the subindices computed for each of the two sets of features. The models are evaluated by comparing the model predictions with quality judgments made by normal-hearing and hearing-impaired listeners for speech stimuli containing noise, nonlinear distortion, linear processing, and combinations of these signal degradations.

Journal ArticleDOI
TL;DR: In this paper, low-pass filtering of the control signal is proposed to reduce chattering in sliding mode control; the new design matches the boundary layer design in noise-free environments and outperforms it in noisy environments.
Abstract: The conventional approach to reducing control signal chattering in sliding mode control is to use the boundary layer design. However, when there is high-level measurement noise, the boundary layer design becomes ineffective in chattering reduction. This paper, therefore, proposes a new design for chattering reduction by low-pass filtering the control signal. The new design is non-trivial since it requires estimation of the sliding variable via a disturbance estimator. The new sliding mode control has the same performance as the boundary layer design in noise-free environments, and outperforms the boundary layer design in noisy environments.
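
The proposed idea, low-pass filtering the discontinuous switching term instead of smoothing it with a boundary layer, can be sketched as a one-step update. The gain, cutoff, and sample time are illustrative, and the paper's disturbance estimator for the sliding variable is omitted:

```python
import numpy as np

def filtered_switching_control(s, u_prev, k=5.0, fc=20.0, dt=1e-3):
    """Chattering reduction by low-pass filtering the ideal switching
    control -k*sign(s) with a discrete first-order filter (a minimal
    sketch; parameters are illustrative assumptions)."""
    u_raw = -k * np.sign(s)                       # ideal switching control
    alpha = dt / (dt + 1.0 / (2 * np.pi * fc))    # discrete first-order LPF
    return u_prev + alpha * (u_raw - u_prev)
```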

Journal ArticleDOI
TL;DR: This paper investigates methods related to both Singular Spectrum Analysis (SSA) and subspace-based methods in signal processing, describes common and specific features of these methods, and considers different kinds of problems solved by them, such as signal reconstruction, forecasting and parameter estimation.
Abstract: In the present paper we investigate methods related to both Singular Spectrum Analysis (SSA) and subspace-based methods in signal processing. We describe common and specific features of these methods and consider different kinds of problems solved by them, such as signal reconstruction, forecasting and parameter estimation. General recommendations on the choice of parameters to obtain minimal errors are provided. We demonstrate that the optimal choice depends on the particular problem. For the basic model 'signal + residual' we show that the error behavior depends on the type of residuals, deterministic or stochastic, and whether the noise is white or red. The structure of errors and the convergence rate are also discussed. The analysis is based on known theoretical results and extensive computer simulations. AMS 2000 subject classifications: Primary 62M20, 62F10, 62F12; secondary 60G35, 65C20, 62G05.
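
Basic SSA, the common core of the methods surveyed, fits in a few lines of NumPy: embed, truncate the SVD, and diagonally average. The window length L and rank r are exactly the parameters whose choice the paper analyzes:

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Basic SSA: embed the series into an L x K trajectory matrix, keep
    the r leading singular components, and average anti-diagonals
    (Hankelization) to recover the 'signal' part of the series."""
    N = x.size
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    rec = np.zeros(N)                                     # diagonal averaging
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return rec / counts
```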

Journal ArticleDOI
TL;DR: In this paper, a modified Kalman-Filter (KF) framework was used for data fusion to estimate respiratory rate from multiple physiological sources, including ECG, respiration, and peripheral tonometry waveforms.
Abstract: We present an application of a modified Kalman-Filter (KF) framework for data fusion to the estimation of respiratory rate from multiple physiological sources which is robust to background noise. A novel index of the underlying signal quality of respiratory signals is presented and then used to modify the noise covariance matrix of the KF, which discounts the effect of noisy data. The signal quality index, together with the KF innovation sequence, is also used to weight multiple independent estimates of the respiratory rate from independent KFs. The approach is evaluated both on a realistic artificial ECG model (with real additive noise) and on real data taken from 30 subjects with overnight polysomnograms, containing ECG, respiration, and peripheral tonometry waveforms from which respiration rates were estimated. Results indicate that our automated voting system can outperform any individual respiration rate estimation technique at all levels of noise and respiration rates exhibited in our data. We also demonstrate that even the addition of a noisier extra signal leads to an improved estimate using our framework. Moreover, our simulations demonstrate that different ECG respiration extraction techniques have different error profiles with respect to the respiration rate, and therefore a respiration rate-related modification of any fusion algorithm may be appropriate.
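
The key modification, inflating the measurement noise covariance when the signal quality index is low, can be shown with a scalar Kalman step. This is a minimal sketch under a random-walk model for the rate; the exact covariance-scaling rule in the paper may differ:

```python
def kf_update(x, P, z, sqi, R=1.0, Q=1e-4):
    """One scalar Kalman step for a respiratory-rate estimate. The
    measurement noise covariance R is inflated when the signal quality
    index sqi (in (0, 1]) is low, so poor-quality measurements are
    automatically discounted."""
    P = P + Q                           # predict (random-walk model)
    R_eff = R / max(sqi, 1e-3) ** 2     # quality-weighted measurement noise
    K = P / (P + R_eff)                 # Kalman gain
    x = x + K * (z - x)                 # innovation update
    P = (1.0 - K) * P
    return x, P
```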

Journal ArticleDOI
TL;DR: The finite-sample optimality of the generalized likelihood ratio test (GLRT) is discussed, and the corresponding GLRT spectrum sensing algorithms are derived by exploiting the statistics of the received signal and prior information on the channel, the noise, and the data signal.
Abstract: We consider the spectrum sensing problem in cognitive radio networks. We offer a framework for optimal joint detection and parameter estimation when the secondary users have only a small number of signal samples. We discuss the finite-sample optimality of the generalized likelihood ratio test (GLRT) and derive the corresponding GLRT spectrum sensing algorithms by exploiting the statistics of the received signal and the prior information on the channel, the noise, and the data signal. An iterative GLRT sensing algorithm and a simple non-iterative GLRT sensing algorithm are developed for slow and fast-fading channels, respectively, with the latter also serving as an approximate sensing method for slow-fading channels. The proposed techniques are also extended for spectrum sensing in orthogonal frequency-division multiple-access (OFDMA) systems and in multiple-input multiple-output (MIMO) systems. It is seen that the proposed simple non-iterative fast-fading GLRT sensing algorithm offers the best performance in all systems under consideration, including slow-fading channels, fast-fading channels, OFDMA systems, and MIMO systems, and it significantly outperforms several state-of-the-art spectrum sensing methods in these systems when there is noise uncertainty.
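
One common GLRT-style statistic for detection with unknown noise variance compares the largest eigenvalue of the sample covariance to the average eigenvalue; the sketch below is a generic illustration, not the paper's specific slow/fast-fading tests:

```python
import numpy as np

def glrt_statistic(Y):
    """Eigenvalue-based GLRT-style statistic for 'is a signal present?'
    with unknown noise variance: the largest eigenvalue of the sample
    covariance divided by the average eigenvalue. Rows of Y are
    antennas/sensors, columns are time samples."""
    R = Y @ Y.conj().T / Y.shape[1]          # sample covariance matrix
    eig = np.linalg.eigvalsh(R)              # ascending eigenvalues
    return eig[-1] / eig.mean()              # >> 1 suggests a signal subspace
```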

Journal ArticleDOI
TL;DR: An algorithm is developed for recognizing OFDM versus SCLD signals that obviates the need for commonly required signal preprocessing tasks, such as signal and noise power estimation and the recovery of symbol timing and carrier information.
Abstract: Previous studies on the cyclostationarity aspect of orthogonal frequency division multiplexing (OFDM) and single carrier linearly digitally modulated (SCLD) signals assumed simplified signal and channel models or considered only second-order cyclostationarity. This paper presents new results concerning the cyclostationarity of these signals under more general conditions, including time dispersive channels, additive Gaussian noise, and carrier phase, frequency, and timing offsets. Analytical closed-form expressions are derived for time- and frequency-domain parameters of the cyclostationarity of OFDM and SCLD signals. In addition, a condition to eliminate aliasing in the cycle and spectral frequency domains is derived. Based on these results, an algorithm is developed for recognizing OFDM versus SCLD signals. This algorithm obviates the need for commonly required signal preprocessing tasks, such as signal and noise power estimation and the recovery of symbol timing and carrier information.
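
Cyclostationarity-based recognition rests on estimating the cyclic autocorrelation, which for OFDM peaks at cycle frequencies tied to the symbol rate and at lags near the useful-symbol duration. A minimal estimator sketch (the parameter conventions here are illustrative):

```python
import numpy as np

def cyclic_autocorr(x, alpha, tau, fs):
    """Estimate the cyclic autocorrelation
        R_x^alpha(tau) = <x[n + tau] x*[n] exp(-j 2 pi alpha n / fs)>
    for a complex baseband signal x, cycle frequency alpha (Hz), integer
    lag tau (samples), and sample rate fs."""
    n = np.arange(x.size - tau)
    lagged = x[tau:] * np.conj(x[:x.size - tau])
    return np.mean(lagged * np.exp(-2j * np.pi * alpha * n / fs))
```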

Patent
Sang-hyun Koh, Jeong-a Cho
19 May 2010
TL;DR: A display apparatus and method for controlling interference include a signal receiving unit which receives a signal in an effective frequency band from an input device, a signal processing unit which processes the signal on the effective frequency band to output a user input signal, a display unit which displays an image based on the user input signal, and a diminishing signal generating unit which generates a diminishing signal having a waveform that diminishes noise outside the effective frequency band.
Abstract: A display apparatus and method for controlling interference includes a signal receiving unit which receives a signal in an effective frequency band from an input device; a signal processing unit which processes a signal on the effective frequency band to output a user input signal; a display unit which displays an image based on the user input signal; and a diminishing signal generating unit which generates a diminishing signal having a waveform that diminishes noise outside the effective frequency band.