
Showing papers on "Signal-to-noise ratio published in 2013"


Journal ArticleDOI
06 Nov 2013-PLOS ONE
TL;DR: An overview of existing definitions of signal-to-noise ratio for fMRI data is provided, the relationship with activation detection power is investigated, and reference tables and conversion formulae are provided to facilitate comparability between fMRI studies.
Abstract: Signal-to-noise ratio, the ratio between signal and noise, is a quantity that has been well established for MRI data but is still the subject of ongoing debate and confusion when it comes to fMRI data. fMRI data are characterised by small activation fluctuations in a background of noise. Depending on how the signal of interest and the noise are identified, signal-to-noise ratio for fMRI data is reported using many different definitions. Since each definition comes with a different scale, interpreting and comparing signal-to-noise ratio values for fMRI data can be very challenging. In this paper, we provide an overview of existing definitions. Further, the relationship with activation detection power is investigated. Reference tables and conversion formulae are provided to facilitate comparability between fMRI studies.
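One definition that frequently appears in this debate is temporal SNR (tSNR): the voxelwise mean of the time series divided by its temporal standard deviation. The snippet below is a minimal sketch of that single definition only (not the paper's full set of definitions or conversion formulae), assuming the fMRI data are held in a 4D NumPy array with time as the last axis.

```python
import numpy as np

def temporal_snr(bold_4d):
    """Temporal SNR (tSNR): voxelwise mean of the time series divided by
    its temporal standard deviation. This is one of several SNR
    definitions used for fMRI data, not the image SNR of a single volume."""
    mean_img = bold_4d.mean(axis=-1)
    std_img = bold_4d.std(axis=-1, ddof=1)
    return np.divide(mean_img, std_img,
                     out=np.zeros_like(mean_img), where=std_img > 0)

# Synthetic example: 10x10x10 volume, 200 time points, mean 100, noise SD 2
rng = np.random.default_rng(0)
data = 100 + rng.normal(0, 2, size=(10, 10, 10, 200))
print(temporal_snr(data).mean())   # roughly 50
```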

389 citations


Journal ArticleDOI
03 Sep 2013-PLOS ONE
TL;DR: This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach and is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters.
Abstract: Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters.
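The core idea, locally shrinking the less significant principal components of the multicomponent data, can be illustrated with a simplified sliding-window PCA filter. The sketch below is only an illustration of that principle, assuming a 2D slice with N diffusion-weighted volumes; it does not reproduce the paper's overcomplete formulation or its noise-model handling, and the noise-level estimate here is a crude placeholder.

```python
import numpy as np

def local_pca_denoise(dwi, patch=5, noise_sigma=None):
    """Toy sliding-window PCA shrinkage for multi-directional DWI.
    dwi: array (X, Y, N) holding N diffusion-weighted 2D slices.
    For each patch, principal components whose variance falls below an
    estimated noise level are zeroed; overlapping estimates are averaged.
    Border voxels narrower than half a patch are left untouched."""
    X, Y, N = dwi.shape
    out = np.zeros((X, Y, N))
    weight = np.zeros((X, Y, 1))
    half = patch // 2
    for i in range(half, X - half):
        for j in range(half, Y - half):
            block = dwi[i - half:i + half + 1, j - half:j + half + 1, :].reshape(-1, N)
            mean = block.mean(axis=0)
            u, s, vt = np.linalg.svd(block - mean, full_matrices=False)
            var = s ** 2 / (block.shape[0] - 1)
            sigma2 = noise_sigma ** 2 if noise_sigma else np.median(var)
            s_kept = np.where(var > sigma2, s, 0.0)          # hard shrinkage
            denoised = (u * s_kept) @ vt + mean
            out[i - half:i + half + 1, j - half:j + half + 1, :] += \
                denoised.reshape(patch, patch, N)
            weight[i - half:i + half + 1, j - half:j + half + 1, :] += 1.0
    filled = weight[..., 0] > 0
    out[filled] /= weight[filled]
    out[~filled] = dwi[~filled]
    return out
```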

334 citations


Journal ArticleDOI
TL;DR: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data.
Abstract: Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement in low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two-week pilot study of lower dose CT urography and CT enterography exams, both GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their GI and GU CT practices. Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose.
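To make the adaptive part concrete, here is a toy 2D non-local means filter whose smoothing parameter is tied to a per-pixel noise map (h = k * sigma_local). This is only a sketch of the general idea under the assumption of a small image and a precomputed noise map; the paper's method is 3D, GPU-accelerated, and derives the noise map analytically from the projection model.

```python
import numpy as np

def adaptive_nlm_2d(img, noise_map, search=5, patch=3, k=1.5):
    """Toy 2D non-local means whose smoothing strength adapts to a local
    noise map: h(i, j) = k * sigma_local(i, j). Brute-force loops, so use
    only on small images; it illustrates the noise-map-driven idea, not
    the paper's 3D GPU implementation."""
    H, W = img.shape
    ph, sh = patch // 2, search // 2
    pad = np.pad(img.astype(float), ph + sh, mode='reflect')
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + ph + sh, j + ph + sh
            ref = pad[ci - ph:ci + ph + 1, cj - ph:cj + ph + 1]
            h2 = (k * noise_map[i, j]) ** 2 + 1e-12
            weights, values = [], []
            for di in range(-sh, sh + 1):
                for dj in range(-sh, sh + 1):
                    cand = pad[ci + di - ph:ci + di + ph + 1,
                               cj + dj - ph:cj + dj + ph + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h2))   # patch-similarity weight
                    values.append(pad[ci + di, cj + dj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```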

235 citations


Patent
31 Dec 2013
TL;DR: In this article, a continuous analyte measurement system is configured to be wholly, transcutaneously, intravascularly, or extracorporeally implanted in a host.
Abstract: Systems and methods of use involving sensors having a signal-to-noise ratio that is substantially unaffected by non-constant noise are provided for continuous analyte measurement in a host. In some embodiments, a continuous analyte measurement system is configured to be wholly, transcutaneously, intravascularly or extracorporeally implanted.

220 citations


Journal ArticleDOI
TL;DR: Broadband and narrowband chirp excitations are utilized to address the need to both test at multiple frequencies and achieve a high signal-to-noise ratio to minimize acquisition time.

204 citations


Journal ArticleDOI
TL;DR: The effects of the algorithm used to reconstruct magnitude images from multi-channel diffusion MRI on fibre orientation estimation are examined, and a SENSE-based reconstruction that preserves the Rician nature of the signal is proposed as an alternative to root-sum-of-squares combination.
Abstract: Purpose: To examine the effects of the reconstruction algorithm of magnitude images from multi-channel diffusion MRI on fibre orientation estimation. Theory and Methods: It is well established that the method used to combine signals from different coil elements in multi-channel MRI can have an impact on the properties of the reconstructed magnitude image. Utilising a root-sum-of-squares (RSoS) approach results in a magnitude signal that follows an effective non-central χ distribution. As a result, the noise floor, the minimum measurable signal in the absence of any true signal, is elevated. This is particularly relevant for diffusion-weighted MRI, where the signal attenuation is of interest. Results: In this study, we illustrate problems that such image reconstruction characteristics may cause in the estimation of fibre orientations, both for model-based and model-free approaches, when modern 32-channel coils are employed. We further propose an alternative image reconstruction method that is based on sensitivity encoding (SENSE) and preserves the Rician nature of the single-channel, magnitude MR signal. We show that for the same k-space data, RSoS can cause excessive overfitting and reduced precision in orientation estimation compared to the SENSE-based approach. Conclusion: These results highlight the importance of choosing the appropriate image reconstruction method for tractography studies that use multi-channel receiver coils for diffusion MRI acquisition.

195 citations


Journal ArticleDOI
TL;DR: This study presents GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data, and presents the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods.
Abstract: In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested.
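The pipeline described above (initial GLM fit, selection of a noise pool of task-unrelated voxels, PCA on their time series, refit with the principal components as nuisance regressors) can be sketched in a few lines. The code below is a simplified single-run illustration with assumed inputs Y (time x voxels) and X (time x conditions); the actual GLMdenoise implementation selects the number of components by cross-validation across runs, which is omitted here.

```python
import numpy as np

def glm_with_noise_pcs(Y, X, n_pcs=5, r2_thresh=None):
    """Simplified single-run sketch of the GLMdenoise idea.
    Y: time x voxels data matrix; X: time x conditions design matrix.
    Steps: (1) ordinary GLM fit, (2) voxels with low task-related R^2 form
    a 'noise pool' (bottom quartile if no threshold is given), (3) PCA of
    the noise-pool time series, (4) refit with the top PCs as noise
    regressors. The real method picks n_pcs by cross-validation across runs."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0) + 1e-12
    r2 = 1.0 - ss_res / ss_tot
    cutoff = np.percentile(r2, 25) if r2_thresh is None else r2_thresh
    noise_pool = Y[:, r2 <= cutoff]
    # PCA of the (standardized) noise-pool time series
    Z = (noise_pool - noise_pool.mean(0)) / (noise_pool.std(0) + 1e-12)
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    noise_regs = U[:, :n_pcs]                       # time x n_pcs
    beta_dn, *_ = np.linalg.lstsq(np.hstack([X, noise_regs]), Y, rcond=None)
    return beta_dn[:X.shape[1]], noise_regs
```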

193 citations


Journal ArticleDOI
TL;DR: This article proposes a novel channel estimation scheme, which utilizes the data symbols to construct pilots as well as the correlation characteristics between channels within two adjacent symbols, and demonstrates that the proposed scheme outperforms currently widely used schemes, especially in the high signal-to-noise ratio regime.
Abstract: In vehicle-to-vehicle (V2V) communications, reliable channel estimation is critical to system performance due to the extremely time-varying characteristic of V2V channels. In this article, we present a survey of current channel estimation techniques for the IEEE 802.11p standard. Motivated by the deficiencies of the current schemes and the characteristics of V2V channels, we propose a novel channel estimation scheme, which utilizes the data symbols to construct pilots as well as the correlation characteristics between channels within two adjacent symbols. Analysis and simulation results demonstrate that the proposed scheme outperforms currently widely used schemes, especially in the high signal-to-noise ratio regime. Finally, some open issues for future work conclude this article.

133 citations


Journal ArticleDOI
TL;DR: Of the two novel architectures, it is demonstrated that the best performing one consists of a reconstruction stage based on CAMP followed by a detector, which can be made fully adaptive by combining it with a conventional Constant False Alarm Rate (CFAR) processor.
Abstract: We consider the problem of target detection from a set of Compressed Sensing (CS) radar measurements corrupted by additive white Gaussian noise. We propose two novel architectures and compare their performance by means of Receiver Operating Characteristic (ROC) curves. Using asymptotic arguments and the Complex Approximate Message Passing (CAMP) algorithm, we characterize the statistics of the l1-norm reconstruction error and derive closed form expressions for both the detection and false alarm probabilities of both schemes. Of the two architectures, we demonstrate that the best performing one consists of a reconstruction stage based on CAMP followed by a detector. This architecture, which outperforms the l1-based detector in the ideal case of known background noise, can also be made fully adaptive by combining it with a conventional Constant False Alarm Rate (CFAR) processor. Using the state evolution framework of CAMP, we also derive Signal to Noise Ratio (SNR) maps that, together with the ROC curves, can be used to design a CS-based CFAR radar detector. Our theoretical findings are confirmed by means of both Monte Carlo simulations and experimental results.

126 citations


Journal ArticleDOI
TL;DR: To develop R2* mapping techniques corrected for confounding factors, such as noise-related bias and the presence of fat, and optimized for noise performance, a complex-fitting, chemical shift-based estimation method is described and analyzed using Monte Carlo simulations and a Cramer-Rao bound framework.
Abstract: R2* relaxometry has a number of important applications in MRI, including iron measurements in the liver (1–3), heart (3), pancreas (2), brain (4); BOLD imaging in functional MRI of the brain (5) and other organs such as kidney (6); tracking and detection of cells labeled with super-paramagnetic iron oxides (7). The utility of a quantitative imaging biomarker such as R2* mapping depends on its ability to measure a fundamental physical parameter (i.e., R2* at a given field strength) that correlates well with a meaningful physiological parameter such as local tissue oxygenation, iron concentration, etc. In addition, the method used to measure R2* should be accurate, precise (repeatable), reproducible across sites, and robust to differences in imaging parameters, protocols, and scanner platforms. For these reasons, a thorough understanding of the factors that influence and potentially confound a biomarker such as R2* mapping is of considerable importance. Most R2* mapping methods use multiple magnitude images acquired at different echo times (TE) and model R2* as the monoexponential decay rate obtained by fitting the acquired signal at each voxel. The fitting can be performed from the magnitude of the signal measured in individual pixels or from regions of interest averaged over larger areas of tissue. Unfortunately, this method suffers from bias related to noise. In regions of high signal-to-noise ratio (SNR), the noise statistics of magnitude MR images are Gaussian with zero mean (8). However, as the SNR decreases with R2*-related signal decay, the noise statistics are altered and have a Rician distribution, which has a complicated dependence on the SNR with a nonzero mean (9,10). This leads to a TE-dependent and SNR-dependent bias in the signal and, if not accounted for, to a protocol-dependent bias in the estimation of R2*. In general, two approaches, truncation (11) and baseline fitting (12), have been used to address this limitation. However, these approaches discard potentially useful information and may retain some residual bias and reduced noise performance (13). Complex fitting, however, where the magnitude operation is not performed on the acquired images, has not been widely used. Using complex data, the noise distribution remains Gaussian and is constant with zero mean for all TE. For this reason, complex R2* fitting does not suffer bias caused by nonzero mean noise in regions of low SNR. Further, the presence of fat in tissue can lead to dramatic alterations in the signal behavior of images acquired at increasing TE (14). This is especially important in organs such as the liver, where intracellular accumulation of liver triglycerides (hepatic steatosis) may occur in up to 30% of the US population (15), particularly in individuals suffering from obesity and/or type II diabetes. The pancreas is also well known to contain fat (16) and new reports are demonstrating an increasing role of intracellular accumulation of fat in muscle (17) and the heart (18). Past work has attempted to mitigate the effects of fat in R2* mapping by acquiring images at TEs where the water and main methylene resonance of fat (~−217 Hz from water peak at 1.5 T) are acquired “in-phase” (e.g., 4.6 ms, 9.2 ms, etc., at 1.5 T) (19,20). Increasing recognition that fat has multiple spectral peaks (21) has led to the realization that it is not possible to acquire images with water and all peaks of fat in-phase, except at a spin-echo or at TE = 0 for a free induction decay.
The interference pattern of the fat peaks with themselves and the water peak leads to increased apparent signal decay, i.e., increased apparent R2*, even when echoes are acquired “in-phase”. Further, the use of relatively long TE such as 4.6, 9.2 ms, etc., greatly diminishes the noise performance and the upper limits of the dynamic range of R2* estimation methods needed to quantify signal decay in tissues with severe iron overload, where R2* values may be on the order of 1000 s−1 (T2* on the order of 1 ms) (1). Alternative techniques for fat-corrected R2* mapping are based on suppressing the fat signal. This can mainly be achieved by T1-based fat nulling using inversion-recovery sequences (22), or by frequency selective fat saturation (23,24). T1-based fat nulling can achieve nearly uniform fat suppression, but results in lengthened scan time and severely reduced signal-to-noise ratio (SNR). Frequency selective fat saturation also lengthens the acquisition, and is problematic in the presence of: (a) B0 field inhomogeneities (because the peaks shift in frequency), or (b) high R2* values (because the peaks can broaden to the point that they overlap). Additionally, fat peaks near the water resonance will not be suppressed using conventional fat saturation. Water-selective R2* mapping is also expected to suffer from the same limitations. To address these challenges, in this work, we will describe the use of a multiecho chemical shift-based R2* estimation method that simultaneously estimates, and therefore corrects for, the presence of fat. Signal modeling-based techniques for measuring R2* in the presence of fat were initially developed by Wehrli et al. (14). The method employed in this article is based on an extension of previously reported complex-based methods for R2*-corrected and spectrally modeled fat quantification (25,26). Here, we apply these complex signal estimation approaches to avoid the pitfalls associated with magnitude based relaxometry methods, while also correcting for the presence of tissue fat. We provide a detailed analysis of the noise performance and bias of these methods in comparison to magnitude-based methods. In addition, we will demonstrate that inclusion of fat in the signal model has minimal impact on the noise performance of complex R2* relaxometry. Further, the use of joint estimation of R2* will be used to maximize the noise performance of R2* fitting when signal decay is very rapid (i.e., T2* is very short). Finally, we formulate a Cramer-Rao Bound (CRB) analysis that can be used to optimize acquisition parameters to maximize the noise performance of fat-corrected R2* relaxometry for specific ranges of expected R2* values. Simulations and clinically relevant examples are used to demonstrate pitfalls in R2* relaxometry associated with the presence of fat and very high iron concentrations. Monte Carlo simulations and theoretical analysis based on CRB are also shown to provide a framework for acquisition parameter optimization.
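As a concrete illustration of the noise-floor problem discussed above, the following sketch fits a monoexponential decay to simulated multi-echo data, once from the magnitude signal (where the Rician noise floor biases R2* at low SNR) and once from the noise-corrupted real channel of zero-phase complex data. This is a minimal toy example with assumed echo times and noise level; the paper's actual method additionally models fat and performs joint complex estimation, neither of which is shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

TE = np.arange(1.0, 12.0, 1.2) * 1e-3            # echo times (s), assumed values
true_S0, true_R2s, sigma = 100.0, 400.0, 5.0      # R2* in 1/s, noise SD

rng = np.random.default_rng(1)
clean = true_S0 * np.exp(-true_R2s * TE)
# Complex data: independent Gaussian noise in the real and imaginary channels
cplx = clean + rng.normal(0, sigma, TE.size) + 1j * rng.normal(0, sigma, TE.size)
mag = np.abs(cplx)                                # magnitude -> Rician noise floor

def mono_exp(te, s0, r2s):
    return s0 * np.exp(-r2s * te)

# Magnitude fit: biased toward low R2* because the noise floor props up late echoes
p_mag, _ = curve_fit(mono_exp, TE, mag, p0=(90.0, 300.0))
# "Complex" fit stand-in: fit the real channel (phase is zero in this simulation)
p_cplx, _ = curve_fit(mono_exp, TE, cplx.real, p0=(90.0, 300.0))
print("magnitude fit R2* = %.0f 1/s, real-channel fit R2* = %.0f 1/s"
      % (p_mag[1], p_cplx[1]))
```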

123 citations


Journal ArticleDOI
TL;DR: This paper elaborates on the sum rate of D-MIMO systems employing linear zero-forcing receivers, accounting for both large- and small-scale fading effects, as well as spatial correlation at the transmit side.
Abstract: The performance of single-cell distributed multiple-input multiple-output (D-MIMO) systems is affected not only by small-scale Rayleigh fading but also by large-scale fading and path loss. In this paper, we elaborate on the sum rate of D-MIMO systems employing linear zero-forcing receivers, accounting for both large- and small-scale fading effects, as well as spatial correlation at the transmit side. In particular, we consider the classical lognormal model and propose closed-form upper and lower bounds on the achievable sum rate. Using these bounds as a starting point, we pursue a "large-system" analysis and provide asymptotic expressions when the number of antennas at the base station (BS) grows large, and when the number of antennas at both ends grow large with a fixed and finite ratio. A detailed characterization in the asymptotically high and low signal to noise ratio regimes is also provided. An interesting observation from our results is that in order to maximize the sum rate, the RPs should be placed at unequal distances from the BS when they experience the same level of shadowing. The resulting closed-form expressions are compared with the corresponding results on MIMO optimal receivers.

Journal ArticleDOI
TL;DR: This study focused on the reduction of broadband myopotentials (EMG) in ECG signals using the wavelet Wiener filtering with noise-free signal estimation and used the dyadic stationary wavelet transform (SWT) in the Wiener filter as well as in estimating the noise- free signal.
Abstract: In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using the wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) in the Wiener filter as well as in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. When creating an artificial interference, we started from the generated white Gaussian noise, whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we used adaptive setting parameters of filtering according to the level of interference in the input signal. We were able to increase the average SNR of the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.
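A single-pass, simplified version of wavelet-domain Wiener shrinkage using the stationary wavelet transform can convey the flavor of the approach. The sketch below applies an empirical Wiener gain to each detail band of the SWT; it is not the paper's two-stage filter with a separate noise-free signal estimate or its adaptive parameter setting, and the wavelet, level, and noise estimator are assumptions made for the example.

```python
import numpy as np
import pywt

def swt_wiener_denoise(x, wavelet='db4', level=4):
    """Simplified stationary-wavelet-transform Wiener-style shrinkage.
    A single-pass empirical Wiener gain is applied to each detail band;
    the noise SD is estimated from the finest band with the MAD estimator.
    len(x) must be divisible by 2**level for pywt.swt."""
    coeffs = pywt.swt(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # finest detail band
    new_coeffs = []
    for cA, cD in coeffs:
        signal_var = np.maximum(cD ** 2 - sigma ** 2, 0.0)
        gain = signal_var / (signal_var + sigma ** 2 + 1e-12)   # Wiener gain
        new_coeffs.append((cA, cD * gain))
    return pywt.iswt(new_coeffs, wavelet)

# Usage on a synthetic signal with broadband noise (stand-in for noisy ECG)
t = np.linspace(0, 2, 1024)
clean = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
noisy = clean + 0.3 * np.random.default_rng(2).normal(size=t.size)
denoised = swt_wiener_denoise(noisy)
```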

Journal ArticleDOI
TL;DR: In this article, the TFM was modified to include the directional dependence of ultrasonic velocity in an anisotropic composite laminate, and practical procedures for measuring the direction-dependent velocity profile were described.
Abstract: As carbon fibre composite becomes more widely used for primary structural components in aerospace and other applications, the reliable detection of small defects in thick-sections is increasingly important. This article describes an experimental procedure for improving the detectability of such defects based on modifications to the Total Focusing Method (TFM) of processing ultrasonic array data to form an image. First the TFM is modified to include the directional dependence of ultrasonic velocity in an anisotropic composite laminate, and practical procedures for measuring the direction-dependent velocity profile are described. The performance of the TFM is then optimised in terms of the signal to noise ratio for Side-Drilled Holes (SDHs) by tuning both the frequency-domain filtering of data and the maximum aperture angle used in processing. Finally an attenuation correction is applied to the image so that the background structural noise level is uniform at all depths. The result is an image where the sensitivity (i.e. the signal to noise ratio) to a particular feature is independent of depth. Signals from 1.5 mm diameter SDHs in the final image at depths of 4, 10 and 16 mm are around 15 dB above the root-mean-square level of the surrounding structural noise. In a standard TFM image, the signals from the same SDHs are not visible above the structural noise.
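The underlying Total Focusing Method is a delay-and-sum over full matrix capture data, and the anisotropy correction amounts to making the assumed velocity a function of propagation angle. The sketch below is a basic 2D contact-array TFM with a user-supplied velocity_fn(theta) hook standing in for the measured direction-dependent velocity profile; the paper's filtering, aperture-angle limiting, and attenuation correction are not included, and nearest-sample lookup is used instead of interpolation.

```python
import numpy as np

def tfm_image(fmc, t, elem_x, grid_x, grid_z, velocity_fn):
    """Basic Total Focusing Method (delay-and-sum) on full matrix capture
    data with a direction-dependent velocity hook.
    fmc:    array (n_tx, n_rx, n_samples) of A-scans
    t:      sample time vector (s)
    elem_x: element x-positions (m), contact array at z = 0
    velocity_fn(theta): phase velocity (m/s) at propagation angle theta."""
    n_tx, n_rx, _ = fmc.shape
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            dx = x - elem_x                          # per-element offsets
            theta = np.arctan2(np.abs(dx), z)        # angle from the normal
            tof = np.hypot(dx, z) / velocity_fn(theta)   # one-way times of flight
            val = 0.0
            for itx in range(n_tx):
                # transmit + receive delay for every receiver -> sample index
                idx = np.searchsorted(t, tof[itx] + tof)
                idx = np.clip(idx, 0, fmc.shape[2] - 1)
                val += fmc[itx, np.arange(n_rx), idx].sum()
            img[iz, ix] = np.abs(val)
    return img
```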

Journal ArticleDOI
TL;DR: Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
Abstract: Purpose: This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. Methods: A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data to calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. Results: The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size as expected based on the manufacturer specification: From the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency with better performance for higher contrast objects. At low noise levels, TTFs of iterative reconstruction were better than those of FBP, but at higher noise, that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ∼30% decrease in magnitude and a 0.1 mm−1 shift in the peak frequency. Conclusions: Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
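For reference, the noise-power spectrum measured from the uniform phantom section is usually estimated from repeated ROIs as the averaged squared FFT magnitude of mean-subtracted ROIs, scaled by pixel area over ROI size. The sketch below is a generic 2D NPS estimator under that standard definition; it is not the specific analysis code developed in the paper, and it assumes square pixels and simple mean detrending.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_mm):
    """Generic 2D NPS estimate from a stack of uniform-region ROIs.
    rois: array (n_rois, Ny, Nx) in HU; pixel_mm: pixel size in mm.
    Returns the fftshifted NPS (HU^2 * mm^2) and frequency axes (cycles/mm)."""
    n, Ny, Nx = rois.shape
    nps = np.zeros((Ny, Nx))
    for roi in rois:
        detrended = roi - roi.mean()                  # remove the ROI mean
        nps += np.abs(np.fft.fft2(detrended)) ** 2
    nps *= (pixel_mm ** 2) / (n * Nx * Ny)            # scale by pixel area / size
    fx = np.fft.fftshift(np.fft.fftfreq(Nx, d=pixel_mm))
    fy = np.fft.fftshift(np.fft.fftfreq(Ny, d=pixel_mm))
    return np.fft.fftshift(nps), fx, fy
```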

Journal ArticleDOI
TL;DR: In this article, a two-dimensional edge detection method is proposed to extract the location of an intruder in a distributed vibration sensing system based on phase-sensitive optical time domain reflectometry: the spatial gradient of the image composed of Rayleigh traces is computed by convolving with the Sobel operator, so that amplitude fluctuations of the Rayleigh backscattering traces induced by external vibration can be located.
Abstract: A two-dimensional edge detection method has been proposed to extract the location of an intruder in a distributed vibration sensing system based on phase-sensitive optical time domain reflectometry. The edge detection method calculates the spatial gradient of the image composed of Rayleigh traces at each point by convolving with the Sobel operator, so the amplitude fluctuation of Rayleigh backscattering traces induced by external vibration can be located. The signal to noise ratio of the location information obtained with this method increases to as high as 8.4 dB compared to the conventional method, since the effects of noise are reduced by local averaging within the neighborhood of the mask. The spatial resolution could also be optimized from 5 m to ~3 m when a 50 ns pulse is launched into a single mode fiber of 1 km length. The sensing system has the potential to extract usable signals from hostile environments with strong background noise.
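The processing step described here is essentially a 2D gradient operation on the waterfall of Rayleigh traces. Below is a minimal sketch using SciPy's Sobel filter on an assumed time-by-distance array of traces; the actual system's acquisition, mask-size choices, and thresholding are not reproduced.

```python
import numpy as np
from scipy import ndimage

def locate_vibration(traces):
    """Locate vibration in a phi-OTDR waterfall (traces: time x distance)
    with 2D Sobel gradients: the gradient magnitude highlights amplitude
    fluctuations, and the convolution's local averaging suppresses
    uncorrelated noise. Returns a per-position profile and its peak index."""
    img = np.asarray(traces, dtype=float)
    gx = ndimage.sobel(img, axis=0)   # gradient along the time axis
    gy = ndimage.sobel(img, axis=1)   # gradient along the distance axis
    grad = np.hypot(gx, gy)
    profile = grad.sum(axis=0)        # collapse over time
    return profile, int(np.argmax(profile))
```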

Journal ArticleDOI
TL;DR: The proposed reconstruction algorithm seeks to minimize a penalized likelihood-based cost functional, where the parameters of the likelihood function are estimated by computing the Fisher information matrix associated with the material decomposition step.
Abstract: Photon-counting detector technology has enabled the first experimental investigations of energy-resolved computed tomography (CT) imaging and the potential use for K-edge imaging. However, limitations of current detector technology impose a limit on effective count rates, which results in high noise levels in the obtained images given the scan time limitations of CT imaging applications. It has been well recognized in the area of low-dose imaging with conventional CT that iterative image reconstruction provides a superior signal to noise ratio compared to traditional filtered backprojection techniques. Furthermore, iterative reconstruction methods also allow for incorporation of a roughness penalty function in order to make a trade-off between noise and spatial resolution in the reconstructed images. In this work, we investigate statistically principled iterative image reconstruction from material-decomposed sinograms in spectral CT. The proposed reconstruction algorithm seeks to minimize a penalized likelihood-based cost functional, where the parameters of the likelihood function are estimated by computing the Fisher information matrix associated with the material decomposition step. The performance of the proposed reconstruction method is quantitatively investigated by use of computer-simulated and experimental phantom data. The potential for improved K-edge imaging is also demonstrated in an animal experiment.

Journal ArticleDOI
TL;DR: A novel algorithm is proposed for automatic modulation classification in multiple-input multiple-output spatial multiplexing systems, which employs fourth-order cumulants of the estimated transmit signal streams as discriminating features and a likelihood ratio test (LRT) for decision making.
Abstract: A novel algorithm is proposed for automatic modulation classification in multiple-input multiple-output spatial multiplexing systems, which employs fourth-order cumulants of the estimated transmit signal streams as discriminating features and a likelihood ratio test (LRT) for decision making. The asymptotic likelihood function of the estimated feature vector is analytically derived and used with the LRT. Hence, the algorithm can be considered as asymptotically optimal for the employed feature vector when the channel matrix and noise variance are known. Both the case with perfect channel knowledge and the practically more relevant case with blind channel estimation are considered. The results show that the proposed algorithm provides a good classification performance while exhibiting a significantly lower computational complexity when compared with conventional algorithms.
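For background, fourth-order cumulant features of this kind are typically estimated from sample moments of the (zero-mean) recovered symbol streams. The sketch below shows the standard moment-based estimators of C40, C41, and C42 with a power normalization; it illustrates only the feature computation, not the paper's asymptotic likelihood ratio test or the MIMO stream estimation.

```python
import numpy as np

def fourth_order_cumulants(x):
    """Sample fourth-order cumulants C40, C41, C42 of a zero-mean complex
    baseband stream, normalized by the squared signal power so they are
    scale-invariant. Standard moment-based estimators."""
    x = np.asarray(x, dtype=complex)
    x = x - x.mean()
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)
    m41 = np.mean(x ** 3 * np.conj(x))
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c41 = m41 - 3 * m20 * m21
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    norm = m21 ** 2
    return c40 / norm, c41 / norm, c42 / norm

# QPSK gives |C40/C21^2| = 1 (noise-free), while unit-power 16-QAM gives ~0.68
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 *
               np.random.default_rng(3).integers(0, 4, 10000)))
print(np.abs(fourth_order_cumulants(qpsk)[0]))
```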

Journal ArticleDOI
TL;DR: Inspired by an algorithmic approach for interference alignment, three cooperative algorithms are proposed to find suboptimal solutions for end-to-end sum-rate maximization problem in a multiple-antenna amplify-and-forward (AF) relay interference channel.
Abstract: Interference is a common impairment in wireless communication systems. Multi-hop relay networks use a set of intermediate nodes called relays to facilitate communication between multiple transmitters and multiple receivers through multiple hops. Relay based communication is especially sensitive to interference because the interference impacts both the received signal at the relay and the received signal at the destination. Interference alignment is a signaling technique that provides high multiplexing gain in the interference channel. In this paper, inspired by an algorithmic approach for interference alignment, three cooperative algorithms are proposed to find suboptimal solutions for the end-to-end sum-rate maximization problem in a multiple-antenna amplify-and-forward (AF) relay interference channel. The first algorithm aims at minimizing the sum power of enhanced noise from the relays and interference at the receivers. The second and third algorithms aim at minimizing matrix-weighted sum mean square errors with either equality or inequality power constraints to utilize a connection between mean square error and mutual information. The resulting iterative algorithms are convergent to points that we conjecture to be stationary points of the corresponding problems. Simulations show that the proposed algorithms achieve higher end-to-end sum-rates and multiplexing gains than existing strategies for AF relays, decode-and-forward relays, and direct transmission. The first algorithm outperforms the other algorithms at high signal-to-noise ratio (SNR) but performs worse than them at low SNR. Thanks to power control, the third algorithm outperforms the second algorithm at the cost of additional overhead.

Journal ArticleDOI
TL;DR: The power and bandwidth efficiencies and the bit error rate versus signal-to-noise ratio (SNR) of L-PPM and M-PAM schemes for free space optics are compared analytically, showing that for similar SNR the L-PPM scheme offers improved performance.
Abstract: As wireless communication systems become ever more important and pervasive parts of our everyday life, system capacity and quality of service issues are becoming more critical. In order to increase system capacity and improve quality of service, closer attention must be paid to bandwidth and power efficiency. In this paper, the bandwidth and power efficiency issues in Free Space Optics (FSO) transmission are addressed under Pulse Position Modulation (L-PPM) and Pulse Amplitude Modulation (M-PAM) schemes, and their performance in terms of power and bandwidth efficiencies and the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) is compared analytically. The comparative study of the L-PPM and M-PAM schemes shows that for similar SNR, the L-PPM scheme offers improved performance. Although its power efficiency is inferior to L-PPM, On-Off Keying (OOK) is more commonly used in FSO communication systems because of its efficient bandwidth usage. In this study, M-PAM is the bandwidth-efficient modulation scheme when more than 2 bits of information are sent per symbol, while L-PPM is the power-efficient modulation scheme as the number of bits increases, and performance may be improved further by increasing the number of bits in the L-PPM scheme.

01 Jan 2013
TL;DR: This paper describes a comparison between adaptive filtering algorithms, namely least mean square (LMS), normalized least mean square (NLMS), time-varying least mean square (TVLMS), recursive least square (RLS), and fast transversal recursive least square (FTRLS); implementation aspects of these algorithms, their computational complexity, and signal-to-noise ratio are examined.
Abstract: This paper describes a comparison between adaptive filtering algorithms, namely least mean square (LMS), normalized least mean square (NLMS), time-varying least mean square (TVLMS), recursive least square (RLS), and fast transversal recursive least square (FTRLS). Implementation aspects of these algorithms, their computational complexity, and signal-to-noise ratio are examined. These algorithms use small input and output delay. Here, the adaptive behaviour of the algorithms is analyzed. Adaptive filtering algorithms offer a trade-off between complexity and convergence speed. Three performance criteria are used in the study of these algorithms: the minimum mean square error, the algorithm execution time, and the required filter order.
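As a point of reference for the simplest algorithms being compared, here is a minimal LMS/NLMS implementation in a system-identification setting (estimating a hypothetical unknown FIR response from input and desired signals). It is a sketch for illustration only; the RLS, TVLMS, and FTRLS update rules and the paper's exact test setup are not reproduced.

```python
import numpy as np

def lms_filter(x, d, order=8, mu=0.01, normalized=False, eps=1e-6):
    """Minimal (N)LMS adaptive filter. The weights adapt so that the
    filtered input x tracks the desired signal d; with normalized=True the
    step size is divided by the instantaneous input power (NLMS)."""
    n = len(x)
    w = np.zeros(order)
    y = np.zeros(n)
    e = np.zeros(n)
    for k in range(order, n):
        u = x[k - order + 1:k + 1][::-1]       # most recent samples first
        y[k] = w @ u
        e[k] = d[k] - y[k]
        step = mu / (eps + u @ u) if normalized else mu
        w = w + step * e[k] * u                # LMS / NLMS weight update
    return y, e, w

# System-identification example: hypothetical unknown FIR system plus noise
rng = np.random.default_rng(4)
x = rng.normal(size=5000)
h_true = np.array([0.6, -0.3, 0.2, 0.1])
d = np.convolve(x, h_true, mode='full')[:len(x)] + 0.05 * rng.normal(size=len(x))
_, e_lms, _ = lms_filter(x, d, mu=0.02)
_, e_nlms, _ = lms_filter(x, d, mu=0.5, normalized=True)
print(np.mean(e_lms[-500:] ** 2), np.mean(e_nlms[-500:] ** 2))  # steady-state MSE
```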

Journal ArticleDOI
TL;DR: It is shown that the loss of spatial information, which can make meaningful image reconstruction impossible in the affected areas, is reduced when using a stack-of-spirals trajectory compared to concentric shells.

Journal ArticleDOI
01 Jan 2013
TL;DR: In this article, a new set of spike sorting features, explicitly framed to be computationally efficient and shown to outperform principal component analysis (PCA)-based spike sorting, is presented.
Abstract: Modern microelectrode arrays acquire neural signals from hundreds of neurons in parallel that are subsequently processed for spike sorting. It is important to identify, extract, and transmit appropriate features that allow accurate spike sorting while using minimum computational resources. This paper describes a new set of spike sorting features, explicitly framed to be computationally efficient and shown to outperform principal component analysis (PCA)-based spike sorting. A hardware friendly architecture, feasible for implantation, is also presented for detecting neural spikes and extracting features to be transmitted for off chip spike classification. The proposed feature set does not require any off-chip training, and requires about 5% of computations as compared to the PCA-based features for the same classification accuracy, tested for spike trains with a broad range of signal-to-noise ratio. Our simulations show a reduction of required bandwidth to about 2% of original data rate, with an average classification accuracy of greater than 94% at a typical signal to noise ratio of 5 dB.

Journal ArticleDOI
TL;DR: The experimental results demonstrated that the EMD based methods achieved better performance than the conventional digital filters, especially when the signal to noise ratio of the processed signal was low.

Journal ArticleDOI
TL;DR: Results show evidence of the de-noising effect and demonstrate that this method can effectively de-noise noisy Lidar signals in strong background light and improve the signal-to-noise ratio of the system.

Journal ArticleDOI
TL;DR: The error probability of AFC is analyzed and the weight set is designed to minimize it; simulation results show that AFC achieves the capacity of the Gaussian channel over a wide range of signal-to-noise ratios (SNR).
Abstract: In this paper, we propose a capacity-approaching analog fountain code (AFC) for wireless channels. In AFC, the number of generated coded symbols is potentially limitless. In contrast to the conventional binary rateless codes, each coded symbol in AFC is a real-valued symbol, generated as a weighted sum of d randomly selected information bits, where d and the weight coefficients are randomly selected from predefined probability mass functions. The coded symbols are then directly transmitted through wireless channels. We analyze the error probability of AFC and design the weight set to minimize the error probability. Simulation results show that AFC achieves the capacity of the Gaussian channel in a wide range of signal to noise ratio (SNR).
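The encoder described above lends itself to a very short sketch: each real-valued coded symbol is a weighted sum of d randomly selected information bits, with d and the weights drawn from predefined distributions. In the example below the degree distribution and weight set are arbitrary placeholders, and the decoder (which would use the stored generator records) is not shown.

```python
import numpy as np

def afc_encode(bits, n_coded, degree_pmf, weight_set, rng=None):
    """Analog fountain code encoding sketch: each real-valued coded symbol
    is a weighted sum of d randomly selected information bits, with d drawn
    from degree_pmf and the weights drawn from weight_set.
    degree_pmf: dict {degree: probability}; weight_set: 1D array of weights."""
    rng = rng or np.random.default_rng()
    bits = np.asarray(bits)
    degrees = list(degree_pmf.keys())
    probs = list(degree_pmf.values())
    symbols, generator = [], []
    for _ in range(n_coded):
        d = rng.choice(degrees, p=probs)
        idx = rng.choice(len(bits), size=d, replace=False)
        w = rng.choice(weight_set, size=d, replace=True)
        symbols.append(float(w @ bits[idx]))
        generator.append((idx, w))          # records needed at the decoder
    return np.array(symbols), generator

# Example: 100 information bits, 150 coded symbols, degree fixed at 8
bits = np.random.default_rng(5).integers(0, 2, 100)
coded, gen = afc_encode(bits, 150, {8: 1.0}, np.array([1.0, 2.0, 3.0, 4.0]))
```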

Journal ArticleDOI
TL;DR: An automatic ECG signal enhancement technique is proposed to remove noise components from a noisy ECG signal represented in the time-frequency domain; it shows a better signal-to-noise ratio (SNR) and lower root mean square error (RMSE) compared to earlier reported techniques based on the wavelet transform with soft thresholding (WT-Soft) and the wavelet transform with subband-dependent threshold (WT-Subband).

Journal ArticleDOI
TL;DR: This work constructs polar codes for the block Rayleigh fading channel with known channel side information (CSI) and for the Rayleigh channel with known channel distribution information (CDI), and shows that long polar codes are close to the theoretical limit.
Abstract: The application of polar codes to the Rayleigh fading channel is considered. We construct polar codes for the block Rayleigh fading channel with known channel side information (CSI) and for the Rayleigh channel with known channel distribution information (CDI). The construction of polar codes for Rayleigh fading with known CSI allows them to work at any signal-to-noise ratio (SNR); the rate of the codeword is adapted correspondingly. Polar codes for Rayleigh fading with known CDI suffer a penalty for not having complete information about the channel. The penalty, however, is small, about 1.3 dB. We perform simulations and show that the obtained results are close to the theoretical limits. We also compare polar codes with other good codes, and the results show that long polar codes are closer to the limit.

Journal ArticleDOI
TL;DR: DR-PICCS enables reconstruction of CT images with lower noise than FBP while the loss of spatial resolution is mitigated to a large extent; a denoising method such as directional diffusion filtering is demonstrated to effectively reduce anisotropy in spatial resolution when combined with DR-PICCS with statistical modeling.
Abstract: Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness; images also had a qualitatively more uniform noise spatial distribution when including a statistical model. Conclusions: DR-PICCS enables reconstruction of CT images with lower noise than FBP, and the loss of spatial resolution can be mitigated to a large extent. The introduction of statistical modeling in DR-PICCS may improve some noise characteristics, but it also leads to anisotropic spatial resolution properties. A denoising method, such as the directional diffusion filtering, has been demonstrated to reduce anisotropy in spatial resolution effectively when combined with DR-PICCS with statistical modeling.

Patent
09 Aug 2013
TL;DR: In this paper, the successive pulsing of different color illumination appears white to the user, yet facilitates signal detection, even for lower cost monochrome sensors, as in barcode scanning and other automatic identification equipment.
Abstract: Signal detection and recognition employs coordinated illumination and capture of images to facilitate extraction of a signal of interest. Pulsed illumination of different colors facilitates extraction of signals from color channels, as well as an improved signal-to-noise ratio obtained by combining signals of different color channels. The successive pulsing of different color illumination appears white to the user, yet facilitates signal detection, even for lower cost monochrome sensors, as in barcode scanning and other automatic identification equipment.

Journal ArticleDOI
TL;DR: It is shown that constructing elemental maps of PCA noise-filtered data using the background subtraction method does not guarantee an increase in the signal-to-noise ratio, due to correlation of the spectral data introduced by the filtering process.