
Showing papers on "Signal-to-noise ratio published in 2009"


Journal ArticleDOI
TL;DR: New sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users can be used for various signal detection applications without requiring the knowledge of signal, channel and noise power.
Abstract: Spectrum sensing is a fundamental component in a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other on the ratio of the average eigenvalue to the minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring knowledge of the signal, channel, and noise power. Simulations based on randomly generated signals, wireless microphone signals and captured ATSC DTV signals are presented to verify the effectiveness of the proposed methods.
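A rough sketch of the first algorithm (the maximum-to-minimum eigenvalue ratio) is below. Stacking L consecutive samples of a single received stream into a virtual vector (the smoothing factor L) is an assumption of this sketch, not a detail taken from the paper; under noise only, random matrix theory predicts the ratio stays near 1, while a correlated signal inflates it.

```python
import numpy as np

def mme_statistic(samples, smoothing=8):
    """Maximum-to-minimum eigenvalue (MME) test statistic from a sample
    covariance matrix of L-sample windows of the received stream."""
    L = smoothing
    N = len(samples) - L + 1
    X = np.stack([samples[i:i + N] for i in range(L)])  # L x N data matrix
    R = X @ X.conj().T / N                              # sample covariance
    eig = np.linalg.eigvalsh(R)                         # ascending order
    return eig[-1] / eig[0]                             # lambda_max / lambda_min
```

Comparing the statistic on white noise against a highly correlated waveform (e.g., a sinusoid in light noise) shows the gap the detection threshold exploits.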

1,074 citations


Journal ArticleDOI
TL;DR: The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades: it performs a weighted average of the values of similar pixels, with a similarity criterion that depends on the noise distribution model.
Abstract: Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades, which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on the two pixels being compared). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.
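For the additive-Gaussian special case the baseline NL-means step (before the paper's generalized, noise-model-dependent criterion) can be sketched for a single pixel; the filtering parameter h and the window half-widths here are illustrative choices, not values from the paper.

```python
import numpy as np

def nlmeans_pixel(img, y, x, patch=3, search=7, h=0.15):
    """Minimal NL-means estimate of one pixel for additive Gaussian noise:
    a weighted average over the search window, with weights driven by the
    Euclidean distance between patches."""
    H, W = img.shape
    p = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    num = den = 0.0
    for j in range(max(patch, y - search), min(H - patch, y + search + 1)):
        for i in range(max(patch, x - search), min(W - patch, x + search + 1)):
            q = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
            d2 = np.mean((p - q) ** 2)        # Euclidean patch distance
            w = np.exp(-d2 / (h * h))         # similarity weight
            num += w * img[j, i]
            den += w
    return num / den
```

On a flat noisy region the weighted average pulls the estimate back toward the true intensity.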

664 citations


Journal ArticleDOI
TL;DR: This paper analyzes the capacity region of the ANC-based TWRC with linear processing (beamforming) at R and presents the optimal relay beamforming structure as well as an efficient algorithm to compute the optimal beamforming matrix based on convex optimization techniques.
Abstract: This paper studies the wireless two-way relay channel (TWRC), where two source nodes, S1 and S2, exchange information through an assisting relay node, R. It is assumed that R receives the sum signal from S1 and S2 in one timeslot, and then amplifies and forwards the received signal to both S1 and S2 in the next time-slot. By applying the principle of analogue network coding (ANC), each of S1 and S2 cancels the so-called "self-interference" in the received signal from R and then decodes the desired message. Assuming that S1 and S2 are each equipped with a single antenna and R with multi-antennas, this paper analyzes the capacity region of the ANC-based TWRC with linear processing (beamforming) at R. The capacity region contains all the achievable bidirectional rate-pairs of S1 and S2 under the given transmit power constraints at S1, S2, and R. We present the optimal relay beamforming structure as well as an efficient algorithm to compute the optimal beamforming matrix based on convex optimization techniques. Low-complexity suboptimal relay beamforming schemes are also presented, and their achievable rates are compared against the capacity with the optimal scheme.
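The self-interference cancellation step at S1 can be sketched for a single-antenna relay with flat, noise-free channels; the symbols g, h1, h2 and this scalar signal model are illustrative assumptions, not the paper's multi-antenna beamforming setup.

```python
import numpy as np

def anc_decode_at_s1(y1, g, h1, h2, s1):
    """S1 receives y1 = g*h1*(h1*s1 + h2*s2) (noise omitted), subtracts its
    known self-interference g*h1^2*s1, and scales to recover s2."""
    return (y1 - g * h1 ** 2 * s1) / (g * h1 * h2)

# Noise-free round trip: the relay sums, amplifies by g, and broadcasts.
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
h1, h2, g = 0.8 - 0.3j, 0.5 + 0.6j, 1.7
y1 = g * h1 * (h1 * s1 + h2 * s2)
s2_hat = anc_decode_at_s1(y1, g, h1, h2, s1)
```

In the noise-free case the cancellation is exact, which is the algebra behind the "self-interference" removal described above.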

610 citations


Journal ArticleDOI
TL;DR: In this paper, the error performance of an FSO system using subcarrier intensity modulation (SIM) based on a binary phase shift keying (BPSK) scheme in a clear but turbulent atmosphere is presented.
Abstract: Free-space optical (FSO) communication over a clear atmosphere suffers from irradiance fluctuation caused by small but random atmospheric temperature fluctuations. This results in a decreased signal-to-noise ratio (SNR) and consequently impaired performance. In this paper, the error performance of an FSO system using subcarrier intensity modulation (SIM) based on a binary phase shift keying (BPSK) scheme in a clear but turbulent atmosphere is presented. To evaluate the system error performance in turbulence regimes from weak to strong, the probability density function (pdf) of the received irradiance after traversing the atmosphere is modelled using the gamma-gamma distribution, while the negative exponential distribution is used to model turbulence in the saturation region and beyond. The effect of turbulence-induced irradiance fluctuation is mitigated using spatial diversity at the receiver. With reference to the single-photodetector case, up to 12 dB gain in the electrical SNR is predicted with two direct-detection PIN photodetectors in strong atmospheric turbulence.
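The gamma-gamma irradiance pdf used in this line of work has a standard closed form in terms of the modified Bessel function of the second kind; the sketch below evaluates it for unit-mean irradiance (the parameter values in the test are illustrative, not taken from the paper).

```python
import numpy as np
from scipy.special import gamma as gamma_fn, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma pdf of the normalized (unit-mean) irradiance I:
    f(I) = 2 (ab)^((a+b)/2) / (G(a) G(b)) * I^((a+b)/2 - 1) * K_{a-b}(2 sqrt(ab I)),
    with large- and small-scale scintillation parameters alpha and beta."""
    I = np.asarray(I, dtype=float)
    c = 2.0 * (alpha * beta) ** ((alpha + beta) / 2) / (gamma_fn(alpha) * gamma_fn(beta))
    return c * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))
```

A quick sanity check is that the density integrates to one and has unit mean.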

510 citations


Journal ArticleDOI
TL;DR: This paper investigates the error rate performance of FSO systems for K-distributed atmospheric turbulence channels and discusses potential advantages of spatial diversity deployments at the transmitter and/or receiver, and presents efficient approximated closed-form expressions for the average bit-error rate (BER) of single-input multiple-output (SIMO) FSO Systems.
Abstract: Optical wireless, also known as free-space optics, has received much attention in recent years as a cost-effective, license-free and wide-bandwidth access technique for high-data-rate applications. The performance of free-space optical (FSO) communication, however, severely suffers from turbulence-induced fading caused by atmospheric conditions. Multiple laser transmitters and/or receivers can be placed at both ends to mitigate the turbulence fading and exploit the advantages of spatial diversity. Spatial diversity is particularly crucial for strong turbulence channels, in which a single-input single-output (SISO) link performs extremely poorly. Strong atmospheric-turbulence-induced fading in outdoor FSO systems can be modeled as a multiplicative random process that follows the K distribution. In this paper, we investigate the error rate performance of FSO systems for K-distributed atmospheric turbulence channels and discuss potential advantages of spatial diversity deployments at the transmitter and/or receiver. We further present efficient approximated closed-form expressions for the average bit-error rate (BER) of single-input multiple-output (SIMO) FSO systems. These analytical tools are reliable alternatives to time-consuming Monte Carlo simulation of FSO systems, where BER targets as low as 10^-9 are typical.

458 citations


Journal ArticleDOI
TL;DR: In this paper, the error performance of a heterodyne differential phase-shift keying (DPSK) optical wireless (OW) communication system operating under various intensity fluctuation conditions is investigated.
Abstract: We study the error performance of a heterodyne differential phase-shift keying (DPSK) optical wireless (OW) communication system operating under various intensity fluctuation conditions. Specifically, it is assumed that the propagating signal suffers from the combined effects of atmospheric turbulence-induced fading, misalignment fading (i.e., pointing errors) and path-loss. Novel closed-form expressions for the statistics of the random attenuation of the propagation channel are derived and the bit-error rate (BER) performance is investigated for all the above fading effects. Numerical results are provided to evaluate the error performance of OW systems with the presence of atmospheric turbulence and/or misalignment. Moreover, nonlinear optimization is also considered to find the optimum beamwidth that achieves the minimum BER for a given signal-to-noise ratio value.

386 citations


Journal Article
TL;DR: In this paper, the authors studied the problem of feedback stabilization over a signal-to-noise ratio (SNR) constrained channel and showed that for either state feedback, or for output feedback with delay-free, minimum-phase plants, there are limitations on the ability to stabilize an unstable plant over an SNR-constrained channel.
Abstract: There has recently been significant interest in feedback stabilization problems over communication channels, including several with bit rate limited feedback. Motivated by considering one source of such bit rate limits, we study the problem of stabilization over a signal-to-noise ratio (SNR) constrained channel. We discuss both continuous- and discrete-time cases, and show that for either state feedback, or for output feedback with delay-free, minimum-phase plants, there are limitations on the ability to stabilize an unstable plant over an SNR-constrained channel. These limitations in fact match precisely those that might have been inferred by considering the associated ideal Shannon capacity bit rate over the same channel.
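The closing claim can be checked numerically in the discrete-time, state-feedback case, where the quoted form of the bound is SNR > prod|p_i|^2 - 1 over the unstable plant poles p_i; the pole values in the test below are hypothetical, and the equivalence with the data-rate condition C > sum log2|p_i| is what the abstract asserts.

```python
import math

def min_snr_for_stabilization(unstable_poles):
    """Minimal channel SNR for stabilization over an AWGN channel
    (discrete-time, state-feedback form of the bound quoted above)."""
    prod_sq = 1.0
    for p in unstable_poles:
        prod_sq *= abs(p) ** 2
    return prod_sq - 1.0

def shannon_capacity(snr):
    # Bits per channel use of the AWGN channel at the given SNR.
    return 0.5 * math.log2(1.0 + snr)

def min_bit_rate(unstable_poles):
    # Data-rate-theorem lower bound: sum of log2 |p_i| over unstable poles.
    return sum(math.log2(abs(p)) for p in unstable_poles)
```

At the minimal SNR, the Shannon capacity of the channel exactly equals the minimal stabilizing bit rate, which is the "match precisely" statement in the abstract.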

379 citations


Journal ArticleDOI
TL;DR: Findings show that the HLRT suffers from very high complexity, whereas the QHLRT provides a reasonable solution, and an upper bound on the performance of QHLRT-based algorithms, which employ unbiased and normally distributed non-data aided estimates of the unknown parameters, is proposed.
Abstract: In this paper, likelihood-based algorithms are explored for linear digital modulation classification. Hybrid likelihood ratio test (HLRT)- and quasi HLRT (QHLRT)- based algorithms are examined, with signal amplitude, phase, and noise power as unknown parameters. The algorithm complexity is first investigated, and findings show that the HLRT suffers from very high complexity, whereas the QHLRT provides a reasonable solution. An upper bound on the performance of QHLRT-based algorithms, which employ unbiased and normally distributed non-data aided estimates of the unknown parameters, is proposed. This is referred to as the QHLRT-Upper Bound (QHLRT-UB). Classification of binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals is presented as a case study. The Cramer-Rao Lower Bounds (CRBs) of non-data aided joint estimates of signal amplitude and phase, and noise power are derived for BPSK and QPSK signals, and further employed to obtain the QHLRT-UB. An upper bound on classification performance of any likelihood-based algorithms is also introduced. Method-of-moments (MoM) estimates of the unknown parameters are investigated and used to develop the QHLRT-based algorithm. Classification performance of this algorithm is compared with the upper bounds, as well as with the quasi Log-Likelihood Ratio (qLLR) and fourth-order cumulant based algorithms.
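The fourth-order-cumulant baseline mentioned at the end can be sketched for the BPSK-vs-QPSK case study: the normalized cumulant C40/C21^2 is -2 for BPSK and -1 for QPSK, and Gaussian noise contributes zero cumulant (though it inflates the power normalizer, so this sketch assumes moderate-to-high SNR; the threshold 1.5 is an illustrative midpoint, not from the paper).

```python
import numpy as np

def c40_classifier(x):
    """Classify BPSK vs QPSK from the normalized fourth-order cumulant.

    C40 = E[x^4] - 3 E[x^2]^2 for a zero-mean sequence; normalizing by
    C21^2 = (E|x|^2)^2 gives -2 (BPSK) or -1 (QPSK) in the noise-free case."""
    c21 = np.mean(np.abs(x) ** 2)
    c40 = np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2
    stat = c40 / c21 ** 2
    return "BPSK" if abs(stat) > 1.5 else "QPSK"
```

At 26 dB SNR the sample cumulants separate the two constellations cleanly.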

351 citations


Journal ArticleDOI
TL;DR: A new type of estimator is introduced that aims at maximizing the effective receive signal-to-noise ratio (SNR) after taking into consideration the channel estimation errors, thus referred to as the linear maximum SNR (LMSNR) estimator.
Abstract: In this work, we consider the two-way relay network (TWRN) where two terminals exchange their information through a relay node in a bi-directional manner and study the training-based channel estimation under the amplify-and-forward (AF) relay scheme. We propose a two-phase training protocol for channel estimation: in the first phase, the two terminals send their training signals concurrently to the relay; and in the second phase, the relay amplifies the received signal and broadcasts it to both terminals. Each terminal then estimates the channel parameters required for data detection. First, we assume the channel parameters to be deterministic and derive the maximum-likelihood (ML) -based estimator. It is seen that the newly derived ML estimator is nonlinear and differs from the conventional least-square (LS) estimator. Due to the difficulty in obtaining a closed-form expression of the mean square error (MSE) for the ML estimator, we resort to the Cramér-Rao lower bound (CRLB) on the estimation MSE for the design of the optimal training sequence. Second, we consider stochastic channels and focus on the class of linear estimators. In contrast to the conventional linear minimum-mean-square-error (LMMSE) -based estimator, we introduce a new type of estimator that aims at maximizing the effective receive signal-to-noise ratio (SNR) after taking into consideration the channel estimation errors, thus referred to as the linear maximum SNR (LMSNR) estimator. Furthermore, we prove that orthogonal training design is optimal for both the CRLB- and the LMSNR-based design criteria. Finally, simulations are conducted to corroborate the proposed studies.

338 citations


Journal ArticleDOI
TL;DR: Boundedness and ultimate boundedness of the closed-loop system under switched-gain output feedback are established, and a high-gain observer that switches between two gain values is proposed.

315 citations


Journal ArticleDOI
TL;DR: This paper introduces a simple and computationally efficient spectrum sensing scheme for Orthogonal Frequency Division Multiplexing (OFDM) based primary user signal using its autocorrelation coefficient and shows that the log likelihood ratio test (LLRT) statistic is the maximum likelihood estimate of the autocorrelation coefficient in the low signal-to-noise ratio (SNR) regime.
Abstract: This paper introduces a simple and computationally efficient spectrum sensing scheme for Orthogonal Frequency Division Multiplexing (OFDM) based primary user signal using its autocorrelation coefficient. Further, it is shown that the log likelihood ratio test (LLRT) statistic is the maximum likelihood estimate of the autocorrelation coefficient in the low signal-to-noise ratio (SNR) regime. Performance of the local detector is studied for the additive white Gaussian noise (AWGN) and multipath channels using theoretical analysis. Obtained results are verified in simulation. The performance of the local detector in the face of shadowing is studied by simulations. A sequential detection (SD) scheme where many secondary users cooperate to detect the same primary user is proposed. User cooperation provides diversity gains as well as facilitates using simpler local detectors. The sequential detection reduces the delay and the amount of data needed in identification of the underutilized spectrum. The decision statistics from individual detectors are combined at the fusion center (FC). The statistical properties of the decision statistics are established. The performance of the scheme is studied through theory and validated by simulations. A comparison of the SD scheme with the Neyman-Pearson fixed sample size (FSS) test for the same false alarm and missed detection probabilities is also carried out.
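The quantity the local detector builds on is the lag-N autocorrelation that the cyclic prefix induces in an OFDM signal (N is the useful symbol length). The sketch below estimates it; the OFDM generator, its FFT size and CP length are illustrative assumptions used only to exercise the statistic, not the paper's system parameters.

```python
import numpy as np

def cp_autocorrelation(x, n_fft):
    """Empirical autocorrelation coefficient at lag n_fft: near zero for
    white noise, clearly positive when cyclic-prefixed OFDM is present."""
    r = np.sum(x[:-n_fft] * np.conj(x[n_fft:]))
    return np.real(r) / np.sum(np.abs(x) ** 2)

def make_ofdm(n_sym, n_fft=64, n_cp=16, rng=None):
    """Hypothetical unit-power baseband OFDM burst with a cyclic prefix."""
    if rng is None:
        rng = np.random.default_rng(0)
    syms = []
    for _ in range(n_sym):
        X = (rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft)) / np.sqrt(2)
        s = np.fft.ifft(X) * np.sqrt(n_fft)
        syms.append(np.concatenate([s[-n_cp:], s]))  # prepend cyclic prefix
    return np.concatenate(syms)
```

With a CP of 16 out of 80 samples the noise-free coefficient is about 0.2, and it shrinks with the noise power but stays well above the noise-only fluctuation level.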

Proceedings ArticleDOI
01 Nov 2009
TL;DR: Numerical results are presented, which show that this method achieves interference alignment at high SNRs, and can achieve different points on the boundary of the achievable rate region by adjusting the MSE weights.
Abstract: To achieve the full multiplexing gain of MIMO interference networks at high SNRs, the interference from different transmitters must be aligned in lower-dimensional subspaces at the receivers. Recently a distributed “max-SINR” algorithm for precoder optimization has been proposed that achieves interference alignment for sufficiently high SNRs. We show that this algorithm can be interpreted as a variation of an algorithm that minimizes the sum Mean Squared Error (MSE). To maximize sum utility, where the utility depends on rate or SINR, a weighted sum MSE objective is used to compute the beams, where the weights are updated according to the sum utility objective. We specify a class of utility functions for which convergence of the sum utility to a local optimum is guaranteed with asynchronous updates of beams, receiver filters, and utility weights. Numerical results are presented, which show that this method achieves interference alignment at high SNRs, and can achieve different points on the boundary of the achievable rate region by adjusting the MSE weights.

Journal ArticleDOI
TL;DR: The presented theoretical analysis and simulations demonstrate that due to the SINR enhancement, significant performance and throughput gains are offered by the proposed MIMO precoding technique compared to its conventional counterparts.
Abstract: This paper introduces a novel channel inversion (CI) precoding scheme for the downlink of phase shift keying (PSK)-based multiple input multiple output (MIMO) systems. In contrast to common practice where knowledge of the interference is used to eliminate it, the main idea proposed here is to use this knowledge to glean benefit from the interference. It will be shown that the system performance can be enhanced by exploiting some of the existent inter-channel interference (ICI). This is achieved by applying partial channel inversion such that the constructive part of ICI is preserved and exploited while the destructive part is eliminated by means of CI precoding. By doing so, the effective signal to interference-plus-noise ratio (SINR) delivered to the mobile unit (MU) receivers is enhanced without the need to invest additional transmitted signal power at the MIMO base station (BS). It is shown that the trade-off to this benefit is a minor increase in the complexity of the BS processing. The presented theoretical analysis and simulations demonstrate that due to the SINR enhancement, significant performance and throughput gains are offered by the proposed MIMO precoding technique compared to its conventional counterparts.

Journal ArticleDOI
TL;DR: This paper quantifies the performance improvement in terms of average bit error rate (BER) and outage capacity, which are among important parameters in practice, and compares single- and multiple-aperture systems from the point of view of fading reduction.
Abstract: Atmospheric turbulence can cause a significant performance degradation in free-space optical communication systems. It is well known that the effect of turbulence can be reduced by performing aperture averaging and/or employing spatial diversity at the receiver. In this paper, we provide a synthesis on the effectiveness of these techniques under different atmospheric turbulence conditions from a telecommunication point of view. In particular, we quantify the performance improvement in terms of average bit error rate (BER) and outage capacity, which are among important parameters in practice. The efficiency of channel coding and the feasibility of exploiting time diversity in aperture averaging receivers are discussed as well. We also compare single- and multiple-aperture systems from the point of view of fading reduction by considering uncorrelated fading on adjacent apertures for the latter case. We show that when the receiver is background noise limited, the use of multiple apertures is largely preferred to a single large aperture under strong turbulence conditions. A single aperture is likely to be preferred under moderate turbulence conditions, however. When the receiver is thermal noise limited, even under strong turbulence conditions, the use of multiple apertures is interesting only when working at a very low BER. We also provide discussions on several practical issues related to system implementation.

Journal ArticleDOI
TL;DR: It is shown that median filtering and linear filtering have similar asymptotic worst-case mean-squared error when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal.
Abstract: Image processing researchers commonly assert that "median filtering is better than linear filtering for removing noise in the presence of edges." Using a straightforward large-n decision-theory framework, this folk-theorem is seen to be false in general. We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal. To see dramatic benefits of median smoothing in an asymptotic setting, the per-pixel noise level should tend to zero (i.e., SNR should grow very large). We show that a two-stage median filtering using two very different window widths can dramatically outperform traditional linear and median filtering in settings where the underlying object has edges. In this two-stage procedure, the first pass, at a fine scale, aims at increasing the SNR. The second pass, at a coarser scale, correctly exploits the nonlinearity of the median. Image processing methods based on nonlinear partial differential equations (PDEs) are often said to improve on linear filtering in the presence of edges. Such methods seem difficult to analyze rigorously in a decision-theoretic framework. A popular example is mean curvature motion (MCM), which is formally a kind of iterated median filtering. Our results on iterated median filtering suggest that some PDE-based methods are candidates to rigorously outperform linear filtering in an asymptotic framework.
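The two-stage procedure described above is easy to exercise on a noisy step edge; the window sizes and noise level below are illustrative choices, and the moving-average filter stands in for "linear filtering."

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def two_stage_median(img, fine=3, coarse=9):
    """Two-pass median smoothing: the fine-scale pass raises the per-pixel
    SNR; the coarse-scale pass exploits the median's nonlinearity at edges."""
    return median_filter(median_filter(img, size=fine), size=coarse)

# Noisy vertical step edge.
rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[:, 32:] = 1.0
noisy = truth + 0.3 * rng.standard_normal((64, 64))
med = two_stage_median(noisy)
lin = uniform_filter(noisy, size=9)   # linear (moving-average) comparison
```

The median cascade suppresses noise while keeping the step sharp, whereas the linear filter of the same coarse width turns the step into a ramp.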

Journal ArticleDOI
TL;DR: This letter derives new gain control schemes for an amplify-and-forward single-frequency relay link in which loop interference from the relay transmit antenna to the relay receive antenna has to be tolerated.
Abstract: This letter derives new gain control schemes for an amplify-and-forward single-frequency relay link in which loop interference from the relay transmit antenna to the relay receive antenna has to be tolerated. The proposed gain control schemes take into account the effect of residual loop interference that remains after imperfect loop interference cancellation. As a result of our gain control strategy, the signal-to-interference and noise ratio can be maximized while, at the same time, transmit power is decreased. Finally, we evaluate system performance by deriving closed-form outage probability expressions for the gain control schemes.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper studies the use of artificial interference in reducing the likelihood that a message transmitted between two multi-antenna nodes is intercepted by an undetected eavesdropper, and uses the relative signal-to-interference-plus-noise-ratio (SINR) of a single transmitted data stream as a performance metric.
Abstract: This paper studies the use of artificial interference in reducing the likelihood that a message transmitted between two multi-antenna nodes is intercepted by an undetected eavesdropper. Unlike previous work that assumes some prior knowledge of the eavesdropper's channel and focuses on the information theoretic concept of secrecy capacity, we also consider the case where no information regarding the eavesdropper is present, and we use the relative signal-to-interference-plus-noise-ratio (SINR) of a single transmitted data stream as our performance metric. A portion of the transmit power is used to broadcast the information signal with just enough power to guarantee a certain SINR at the desired receiver, and the remainder of the power is used to broadcast artificial noise in order to mask the desired signal from a potential eavesdropper. The interference is designed to be orthogonal to the information signal when it reaches the desired receiver, and we study the resulting relative SINR of the desired receiver and the eavesdropper assuming both employ optimal beamformers.
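The power-splitting idea can be sketched directly for a multi-antenna transmitter and a single-antenna intended receiver with channel h: spend just enough power on a matched-filter beam to hit the SINR target, and put the rest into noise confined to the null space of h. The function name, the equal-power noise allocation across null-space dimensions, and the parameter values are illustrative assumptions.

```python
import numpy as np

def an_beamformer(h, gamma, sigma2, p_total, rng):
    """Information beam along h meeting SINR target gamma at noise power
    sigma2, plus artificial noise in the null space of h (invisible to the
    intended receiver but masking the signal elsewhere)."""
    nt = h.size
    w = h / np.linalg.norm(h)                        # matched-filter beam
    p_info = gamma * sigma2 / np.linalg.norm(h) ** 2
    # Orthonormal null-space basis of h^H via SVD of the 1 x nt row h^H.
    _, _, vh = np.linalg.svd(h.conj().reshape(1, -1))
    Z = vh[1:].conj().T                              # nt x (nt-1) basis
    p_an = max(p_total - p_info, 0.0) / (nt - 1)
    noise = np.sqrt(p_an / 2) * (rng.standard_normal(nt - 1)
                                 + 1j * rng.standard_normal(nt - 1))
    return np.sqrt(p_info) * w, Z @ noise
```

The intended receiver sees exactly the target SINR and none of the artificial noise, while any other channel direction picks up the masking interference.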

Journal ArticleDOI
TL;DR: In this paper, the spectral kurtosis (SK) filter was applied to the gear residual signal to detect small tooth surface pitting in a two-stage helical reduction gearbox.

Proceedings ArticleDOI
30 Nov 2009
TL;DR: A novel multiple target localization approach is proposed by exploiting the compressive sensing theory, which indicates that sparse or compressible signals can be recovered from far fewer samples than that needed by the Nyquist sampling theorem.
Abstract: In this paper, a novel multiple target localization approach is proposed by exploiting the compressive sensing theory, which indicates that sparse or compressible signals can be recovered from far fewer samples than needed by the Nyquist sampling theorem. We formulate the multiple target locations as a sparse matrix in the discrete spatial domain. The proposed algorithm uses the received signal strengths (RSSs) to find the location of targets. Instead of recording all RSSs over the spatial grid to construct a radio map from targets, far fewer RSS measurements are collected, and a data pre-processing procedure is introduced. Then, the target locations can be recovered from these noisy measurements through an l1-minimization program alone. The proposed approach reduces the number of measurements in a logarithmic sense while achieving a high level of localization accuracy. Analytical studies and simulations are provided to show the performance of the proposed approach on localization accuracy.
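The generic l1-minimization step (basis pursuit) at the heart of such recovery can be written as a linear program with the split x = u - v, u, v >= 0; the RSS model and pre-processing of the paper are not reproduced here, and the sparse vector and measurement matrix in the test are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis pursuit: min ||x||_1 subject to A x = y, solved as an LP."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]
```

With a Gaussian measurement matrix and a vector far sparser than the measurement count, the LP recovers the exact support and values.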

Proceedings ArticleDOI
30 Sep 2009
TL;DR: Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, and the single-letter characterization is rigorously justified in the special case of sparse measurement matrices where belief propagation becomes asymPTotically optimal.
Abstract: Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring about individual elements of the sparse signal given the measurements, where the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean squared error and the error probability for reconstructing the support set of the sparse signal. Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices where belief propagation becomes asymptotically optimal.

Proceedings ArticleDOI
14 Jun 2009
TL;DR: It is shown that FDM signals can be detected optimally and efficiently, with a 25% bandwidth gain over analogous OFDM signals, which indicates that the transmission of spectrally efficient non-orthogonal FDM signals is feasible.
Abstract: This paper investigates the transmission of Frequency Division Multiplexed (FDM) signals, where carrier orthogonality is intentionally violated in order to increase bandwidth efficiency. In analogy to conventional OFDM, signal generation relies on an Inverse Fractional Fourier Transform (IFRFT) that can be implemented with O(N log2 N) algorithmic complexity. Optimal Maximum Likelihood (ML) detection is overly complex due to the presence of substantial Intercarrier Interference (ICI). Consequently, we investigate an alternative detection mechanism based on the Generalized Sphere Decoding (GSD) algorithm. We examine the bandwidth efficiency and the error performance in Additive White Gaussian Noise (AWGN), for various FDM signal parameters. In particular, we show that it is possible to detect FDM signals optimally and efficiently, with a 25% bandwidth gain with respect to analogous OFDM signals. This indicates that the transmission of spectrally efficient non-orthogonal FDM signals is feasible.

Journal ArticleDOI
TL;DR: It is demonstrated that NLML performs better than the conventional local maximum likelihood (LML) estimation method in preserving and defining sharp tissue boundaries in terms of a well-defined sharpness metric while also having superior performance in method error.
Abstract: Postacquisition denoising of magnetic resonance (MR) images is of importance for clinical diagnosis and computerized analysis, such as tissue classification and segmentation. It has been shown that the noise in MR magnitude images follows a Rician distribution, which is signal-dependent when signal-to-noise ratio (SNR) is low. It is particularly difficult to remove the random fluctuations and bias introduced by Rician noise. The objective of this paper is to estimate the noise free signal from MR magnitude images. We model images as random fields and assume that pixels which have similar neighborhoods come from the same distribution. We propose a nonlocal maximum likelihood (NLML) estimation method for Rician noise reduction. Our method yields an optimal estimation result that is more accurate in recovering the true signal from Rician noise than NL means algorithm in the sense of SNR, contrast, and method error. We demonstrate that NLML performs better than the conventional local maximum likelihood (LML) estimation method in preserving and defining sharp tissue boundaries in terms of a well-defined sharpness metric while also having superior performance in method error.

Journal ArticleDOI
TL;DR: Am amplitude increased and latency decreased with increasing SNR; in addition, there was no main effect of tone level across the two signal levels tested (60 and 75 dB SPL).

Journal ArticleDOI
TL;DR: A generic spectrum-surveying framework is proposed that introduces both standardization and automation to this process, as well as enables a distributed approach to spectrum surveying.
Abstract: Dynamic spectrum access networks and wireless spectrum policy reforms heavily rely on accurate spectrum utilization statistics, which are obtained via spectrum surveys. In this paper, we propose a generic spectrum-surveying framework that introduces both standardization and automation to this process, as well as enables a distributed approach to spectrum surveying. The proposed framework outlines procedures for the collection, analysis, and modeling of spectrum measurements. Furthermore, we propose two techniques for processing spectrum data without the need for a priori knowledge. In addition, these techniques overcome the challenges associated with spectrum data processing, such as a large dynamic range of signals and the variation of the signal-to-noise ratio across the spectrum. Finally, we present mathematical tools for the analysis and extraction of important spectrum occupancy parameters. The proposed processing techniques have been validated using empirical spectrum measurements collected from the FM, television (TV), cellular, and paging bands. Results show that the primary signals in the FM band can be classified with a miss-detection rate of about 2% at the cost of 50% false-alarm rate, while nearly 100% reliability in classification can be achieved with the other bands. However, the classification accuracy depends on the duration and the range of frequencies over which data are collected, as well as the RF characteristics of the spectrum measurement receiver.

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A MIMO extension of SISO time-domain cancellation techniques based on subtraction of an estimated loop signal is presented and it is shown how the loop interference can be suppressed in the spatial domain by applying multi-antenna techniques.
Abstract: The main technical problem in full-duplex relaying is to suppress the looping interference from relay transmission to relay reception. The earlier literature on the topic is mostly restricted to SISO channels. We take a step further and consider a system where the coverage of a MIMO transmission link is boosted with a two-hop full-duplex MIMO relay. First, we present a MIMO extension of SISO time-domain cancellation techniques based on subtraction of an estimated loop signal. We then show how the loop interference can be suppressed in the spatial domain by applying multi-antenna techniques. The solution involves the design of linear receive and transmit filters for the relay to improve the quality of the useful signal and to minimize the effect of the loop interference. We propose null-space projection and minimum mean square error filters for spatial loop-interference suppression and briefly discuss how to combine them with time-domain cancellation.
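Null-space projection can be sketched in a few lines. The sketch assumes the relay has more transmit antennas than the rank of the loop channel, so a nontrivial transmit null space exists; the antenna counts and i.i.d. Rayleigh loop channel are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx = 2, 4                       # relay receive / transmit antennas
H_loop = (rng.normal(size=(n_rx, n_tx))
          + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# Right singular vectors with zero singular values span the transmit
# null space of the loop channel.
_, s, Vh = np.linalg.svd(H_loop)
null_dim = n_tx - int(np.sum(s > 1e-10))
P = Vh.conj().T[:, n_tx - null_dim:]    # n_tx x null_dim projection basis

# Any transmit signal shaped by P causes (numerically) zero loop
# interference at the relay's own receive antennas.
x = rng.normal(size=null_dim) + 1j * rng.normal(size=null_dim)
leak = np.linalg.norm(H_loop @ (P @ x))
print(leak)
```

The price of this projection is that the relay can only transmit in `null_dim` spatial dimensions, which motivates the MMSE alternative the abstract mentions as a softer trade-off.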

Journal ArticleDOI
TL;DR: This paper considers a wideband cognitive radio network (CRN) which can simultaneously sense multiple narrowband channels and thus aggregate the perceived available channels for transmission and shows that the optimal sensing time is around 6 ms and it is almost insensitive to the total transmit power.
Abstract: In this paper, we consider a wideband cognitive radio network (CRN) which can simultaneously sense multiple narrowband channels and thus aggregate the perceived available channels for transmission. We study the problem of designing the optimal spectrum sensing time and power allocation schemes so as to maximize the average achievable throughput of the CRN subject to constraints on the probability of detection and the total transmit power. The optimal sensing time and power allocation strategies are developed under two different total power constraints, namely, an instantaneous power constraint and an average power constraint. Finally, numerical results show that, in both cases, for a CRN with three 6 MHz channels, a frame duration of 100 ms, and a target probability of detection of 90% at worst-case primary-user signal-to-noise ratios of -12 dB, -15 dB, and -20 dB on the respective channels, the optimal sensing time is around 6 ms and is almost insensitive to the total transmit power.
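The sensing-throughput trade-off behind this result can be sketched for a single channel with an energy detector: longer sensing lowers the false-alarm probability at the target detection probability, but eats into the transmission fraction of the frame. This is a simplified single-channel illustration, not the paper's multichannel optimization; the sampling rate, secondary-link capacity, and -15 dB SNR are assumed values.

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(p):
    """Inverse of Q by bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fs = 6e6                    # sampling rate for one 6 MHz channel (assumed)
T = 0.1                     # 100 ms frame, as in the paper
snr = 10 ** (-15 / 10)      # worst-case primary-user SNR of -15 dB
Pd = 0.9                    # target probability of detection
C0 = math.log2(1 + 20.0)    # nominal secondary-link efficiency (assumed)
q_pd = Q_inv(Pd)

def throughput(tau):
    """Average throughput vs. sensing time for an energy detector."""
    n = tau * fs            # number of sensing samples
    pfa = Q(math.sqrt(2 * snr + 1) * q_pd + math.sqrt(n) * snr)
    return (T - tau) / T * C0 * (1 - pfa)

taus = [i * 1e-4 for i in range(1, 1000)]   # 0.1 ms .. 99.9 ms
best = max(taus, key=throughput)
print("optimal sensing time: %.1f ms" % (best * 1e3))
```

The optimum lands at a few milliseconds, small relative to the 100 ms frame, which is consistent with the paper's observation that sensing time, once sufficient for the detection target, barely interacts with the power budget.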

Journal ArticleDOI
TL;DR: The paper derives closed form expressions for the signal-to-noise ratio gain provided by this detector over the corresponding conventional clutter subtraction energy detector in the two extreme conditions of weak and strong noise and shows that time reversal provides, under weak noise, the optimal waveform shape to probe the environment.
Abstract: The paper studies detection of a target buried in a rich scattering medium by time reversal. We use a multi-static configuration with receive and transmit arrays of antennas. In time reversal, the backscattered field is recorded, time reversed, and retransmitted (mathematically or physically) into the same scattering medium. We derive two array detectors: the time-reversal channel matched filter when the target channel response is known, and the time-reversal generalized likelihood ratio test (TR-GLRT) when the target channel response is unknown. The noise added in the initial probing step to the time-reversal signal makes the analysis of the TR-GLRT detector nontrivial. The paper derives closed-form expressions for the signal-to-noise ratio gain provided by this detector over the corresponding conventional clutter-subtraction energy detector in the two extreme conditions of weak and strong (electronic additive) noise, and shows that time reversal provides, under weak noise, the optimal waveform shape to probe the environment. We analyze the impact of the array configuration on the detection performance. Finally, experiments with electromagnetic data collected in a multipath scattering laboratory environment confirm our analytical results. Under the realistic conditions tested, time reversal provides detection gains over conventional detection that range from 2 to 4.7 dB.
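Why time reversal acts as a channel matched filter can be seen in a one-dimensional toy simulation: retransmitting the time-reversed recording through the same multipath channel correlates the channel with itself, refocusing the scattered energy into a sharp peak. The exponentially decaying random channel below is an assumed stand-in for the rich scattering medium.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 64
# Rich multipath channel: random taps with exponential power decay
h = rng.normal(size=L) * np.exp(-np.arange(L) / 10.0)
h /= np.linalg.norm(h)

probe = np.zeros(32)
probe[0] = 1.0                      # initial probing pulse
recorded = np.convolve(probe, h)    # backscattered field at the array
tr_wave = recorded[::-1]            # time-reverse the recording
focused = np.convolve(tr_wave, h)   # retransmit through the same medium

# The output is the channel autocorrelation: its peak equals the full
# channel energy ||h||^2, the matched-filter bound.
peak = np.max(np.abs(focused))
print(peak, np.linalg.norm(h) ** 2)
```

This is the noise-free intuition; the paper's contribution is quantifying what survives of this gain once the probing step itself is noisy, which is what makes the TR-GLRT analysis nontrivial.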

Journal ArticleDOI
TL;DR: Different weighting factors used to create fused images in DECT cause statistically significant differences in CT value, signal-to-noise ratio (SNR), and image quality.
Abstract: Objective:The aim of this study was to evaluate the influence of different weighting factors on contrast enhancement, signal-to-noise ratio (SNR), and image quality in image fusion in dual energy computed tomography (DECT) angiography.Material and Methods:Fifteen patients underwent a CT angiography

Journal ArticleDOI
TL;DR: It is shown by both analysis and numerical results that, under the same sensing conditions and channel environments, Anderson-Darling sensing is much more sensitive in detecting an existing signal than energy detector-based sensing, especially when the received signal has a low signal-to-noise ratio (SNR) and no prior knowledge of primary user signals is available.
Abstract: One of the most important challenges in cognitive radio is how to measure or sense the existence of a signal transmission in a specific channel, that is, how to conduct spectrum sensing. In this letter, we first formulate spectrum sensing as a goodness-of-fit testing problem, and then apply the Anderson-Darling test, one of the goodness-of-fit tests, to derive a sensing method called Anderson-Darling sensing. It is shown by both analysis and numerical results that, under the same sensing conditions and channel environments, Anderson-Darling sensing is much more sensitive in detecting an existing signal than energy detector-based sensing, especially when the received signal has a low signal-to-noise ratio (SNR) and no prior knowledge of primary user signals is available.
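The goodness-of-fit formulation can be sketched directly: compute the Anderson-Darling statistic of the received samples against the fully specified noise-only model (standard normal here); a signal component distorts the empirical distribution and inflates the statistic. The BPSK-like signal model and sample sizes are assumptions for the demo, not the letter's exact setup.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling(samples):
    """A^2 statistic against a fully specified N(0,1) noise model."""
    n = len(samples)
    z = sorted(phi(x) for x in samples)
    # Clamp to avoid log(0) from extreme order statistics
    z = [min(max(u, 1e-12), 1 - 1e-12) for u in z]
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

random.seed(0)
noise_only = [random.gauss(0, 1) for _ in range(1000)]
# BPSK-like antipodal signal buried in unit-variance noise
with_signal = [random.gauss(0, 1) + random.choice((-1.0, 1.0))
               for _ in range(1000)]
print(anderson_darling(noise_only), anderson_darling(with_signal))
```

Because the statistic weights discrepancies in the distribution tails, it reacts to the shape change a hidden signal induces, not just to a rise in average energy, which is the intuition behind its sensitivity advantage at low SNR.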

Journal ArticleDOI
06 May 2009-Langmuir
TL;DR: A computationally rapid image analysis method, weighted overdetermined regression, is presented for two-dimensional Gaussian fitting of particle location with subpixel resolution from a pixelized image of light intensity, showing that precision and speed are both improved.
Abstract: A computationally rapid image analysis method, weighted overdetermined regression, is presented for two-dimensional (2D) Gaussian fitting of particle location with subpixel resolution from a pixelized image of light intensity. Compared to least-squares Gaussian iterative fitting, which is most exact but prohibitively slow for large data sets, the precision of this new method is equivalent when the signal-to-noise ratio is high and approaches it when the signal-to-noise ratio is low, while enjoying a more than 100-fold improvement in computational time. Compared to another widely used approximation method, nine-point regression, we show that precision and speed are both improved. Additionally, weighted regression runs nearly as fast and with greatly improved precision compared to the simplest method, the moment method, which, despite its limited precision, is frequently employed because of its speed. Quantitative comparisons are presented for both circular and elliptical Gaussian intensity distributions.
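The core trick of weighted regression on a Gaussian spot can be sketched as follows: the log of a Gaussian is a quadratic in x and y, so a single weighted least-squares solve recovers the center, with intensity weights suppressing the noise amplification of the log transform at dim pixels. This is a sketch in the spirit of the method; the exact weighting and model terms in the paper may differ, and the spot parameters below are synthetic.

```python
import numpy as np

def weighted_gaussian_center(img):
    """Subpixel 2D Gaussian center via weighted regression on log-intensity.

    Fits ln I = a + b*x + c*y + d*x^2 + e*y^2 by least squares, weighting
    each pixel by its intensity so bright pixels (smallest log-noise)
    dominate.  Center is (-b/2d, -c/2e)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x, y, I = x.ravel(), y.ravel(), img.ravel().astype(float)
    keep = I > 0                       # log requires positive intensities
    x, y, I = x[keep], y[keep], I[keep]
    w = I
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2])
    coef, *_ = np.linalg.lstsq(A * w[:, None], np.log(I) * w, rcond=None)
    a, b, c, d, e = coef
    return -b / (2 * d), -c / (2 * e)

# Synthetic spot with a known subpixel center, plus readout noise
yy, xx = np.mgrid[0:15, 0:15]
x0, y0, s = 7.3, 6.8, 2.0
img = 1000.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2))
rng = np.random.default_rng(2)
img = np.clip(img + rng.normal(0, 5.0, img.shape), 0, None)

cx, cy = weighted_gaussian_center(img)
print(cx, cy)
```

Separate x^2 and y^2 terms keep the fit valid for elliptical spots as well; the whole estimate is one linear solve, which is where the large speedup over iterative Gaussian fitting comes from.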