
Showing papers on "Signal-to-noise ratio published in 2010"


Journal ArticleDOI
TL;DR: In this paper, an analytical closed-form expression for an achievable secrecy rate is derived and used to optimize the transmit power allocation; equal power allocation is shown to be near-optimal for noncolluding eavesdroppers, and an upper bound on the SNR above which the secrecy rate is positive is provided.
Abstract: We consider the problem of secure communication with multiantenna transmission in fading channels. The transmitter simultaneously transmits an information-bearing signal to the intended receiver and artificial noise to the eavesdroppers. We obtain an analytical closed-form expression of an achievable secrecy rate and use it as the objective function to optimize the transmit power allocation between the information signal and the artificial noise. Our analytical and numerical results show that equal power allocation is a simple yet near-optimal strategy for the case of noncolluding eavesdroppers. When the number of colluding eavesdroppers increases, more power should be used to generate the artificial noise. We also provide an upper bound on the SNR above which the achievable secrecy rate is positive, and show that the bound is tight at low SNR. Furthermore, we consider the impact of imperfect channel state information (CSI) at both the transmitter and the receiver and find that it is wise to create more artificial noise to confuse the eavesdroppers than to increase the signal strength for the intended receiver if the CSI is not accurately obtained.
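The power split between the information signal and the artificial noise can be illustrated with a small Monte Carlo sketch (not the paper's closed-form analysis): the transmitter beamforms to the intended receiver and spreads the remaining power isotropically in the null space of the main channel. The antenna count, powers, and power fractions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, P, sigma2, trials = 4, 10.0, 1.0, 20000   # antennas, total power, noise power

def secrecy_rate(phi):
    """Average positive-part secrecy rate when a fraction phi of P carries data."""
    rates = []
    for _ in range(trials):
        h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
        g = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
        w = h / np.linalg.norm(h)                      # beam toward the receiver
        snr_rx = phi * P * np.abs(h.conj() @ w) ** 2 / sigma2
        # artificial noise spread over the Nt-1 directions orthogonal to w
        null_proj = np.eye(Nt) - np.outer(w, w.conj())
        an_pow = (1 - phi) * P / (Nt - 1) * np.real(g.conj() @ null_proj @ g)
        sinr_ev = phi * P * np.abs(g.conj() @ w) ** 2 / (sigma2 + an_pow)
        rates.append(max(0.0, np.log2(1 + snr_rx) - np.log2(1 + sinr_ev)))
    return np.mean(rates)

for phi in (0.3, 0.5, 0.7, 0.9):
    print(f"phi = {phi:.1f}: ~{secrecy_rate(phi):.3f} bits/s/Hz")
```

Under these assumptions the rate is fairly flat in phi, consistent with the near-optimality of equal power allocation reported above for a single eavesdropper.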

515 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work presents a low complexity method for noise PSD estimation based on a minimum mean-squared error estimator of the noise magnitude-squared DFT coefficients, which improves segmental SNR and PESQ for non-stationary noise sources.
Abstract: Most speech enhancement algorithms heavily depend on the noise power spectral density (PSD). Because this quantity is unknown in practice, estimation from the noisy data is necessary. We present a low complexity method for noise PSD estimation. The algorithm is based on a minimum mean-squared error estimator of the noise magnitude-squared DFT coefficients. Compared to minimum statistics based noise tracking, segmental SNR and PESQ are improved for non-stationary noise sources by 1 dB and 0.25 MOS points, respectively. Compared to recently published algorithms, similar good noise tracking performance is obtained, but at a computational complexity that is on the order of a factor of 40 lower.
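As a rough illustration of DFT-domain noise PSD tracking, the sketch below recursively smooths the periodogram in bins judged noise-dominated. This is a simplified stand-in, not the paper's MMSE estimator; the smoothing constant and speech-presence threshold are assumptions.

```python
import numpy as np

def track_noise_psd(periodograms, alpha=0.9, presence_factor=3.0):
    """Recursive per-bin noise PSD tracker.

    periodograms: (num_frames, num_bins) array of |Y(k,l)|^2 values.
    Bins whose periodogram exceeds presence_factor times the current noise
    estimate are treated as speech-dominated and left out of the update.
    """
    psd = periodograms[:5].mean(axis=0)           # bootstrap from initial frames
    track = np.empty_like(periodograms)
    for l, y2 in enumerate(periodograms):
        noise_like = y2 < presence_factor * psd   # crude speech-absence test
        psd = np.where(noise_like, alpha * psd + (1 - alpha) * y2, psd)
        track[l] = psd
    return track

rng = np.random.default_rng(0)
frames = rng.exponential(1.0, size=(200, 129))    # white noise, true PSD = 1
print(track_noise_psd(frames)[-1].mean())         # converges near 1
```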

269 citations


Journal ArticleDOI
TL;DR: An improved version of CS-based high-resolution imaging is presented that overcomes strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation.
Abstract: The theory of compressed sampling (CS) indicates that exact recovery of an unknown sparse signal can be achieved from very limited samples. For inverse synthetic aperture radar (ISAR), the image of a target is usually constructed by strong scattering centers whose number is much smaller than that of the pixels of an image plane. This sparsity of the ISAR signal intrinsically paves the way to apply CS to the reconstruction of high-resolution ISAR imagery. CS-based high-resolution ISAR imaging with limited pulses is developed, and it performs well in the case of high signal-to-noise ratios. However, strong noise and clutter are usually inevitable in radar imaging, which challenges current high-resolution imaging approaches based on parametric modeling, including the CS-based approach. In this paper, we present an improved version of CS-based high-resolution imaging to overcome strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation. Real data are used to test the robustness of the improved CS imaging compared with other current techniques. Experimental results show that the approach is capable of precise estimation of scattering centers and effective suppression of noise.
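The CS recovery step can be illustrated with plain l1 minimization via iterative shrinkage-thresholding (ISTA). This is a generic sketch, not the paper's weighted and coherently projected variant; all problem sizes are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 128)) / np.sqrt(40)  # 40 "pulses", 128 range cells
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0 + rng.random(5)  # 5 scatterers
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y)
print("true support:", sorted(np.flatnonzero(x_true)))
print("largest recovered:", sorted(np.argsort(-np.abs(x_hat))[:5]))
```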

268 citations


Journal ArticleDOI
TL;DR: In this paper, two coherent frequency combs are used to measure the full complex response of a sample in a configuration analogous to a dispersive Fourier transform spectrometer, an infrared time-domain spectrometer, or a multiheterodyne laser spectrometer.
Abstract: Two coherent frequency combs are used to measure the full complex response of a sample in a configuration analogous to a dispersive Fourier transform spectrometer, infrared time domain spectrometer, or a multiheterodyne laser spectrometer. This dual-comb spectrometer retains the frequency accuracy and resolution of the reference underlying the stabilized combs. We discuss the specific design of our coherent dual-comb spectrometer and demonstrate the potential of this technique by measuring the overtone vibration of hydrogen cyanide, centered at 194 THz (1545 nm). We measure the fully normalized, complex response of the gas over a 9 THz bandwidth at 220 MHz frequency resolution yielding 41,000 resolution elements. The average spectral signal-to-noise ratio (SNR) over the 9 THz bandwidth is 2500 for both the magnitude and phase of the measured spectral response and the peak SNR is 4000. This peak SNR corresponds to a fractional absorption sensitivity of 0.05% and a phase sensitivity of 250 microradians. As the spectral coverage of combs expands, coherent dual-comb spectroscopy could provide high-frequency accuracy and resolution measurements of a complex sample response across a range of spectral regions. Work of U. S. government, not subject to copyright.
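Two of the reported figures follow directly from simple ratios, which a couple of lines can verify (the 0.05% absorption sensitivity additionally folds in a factor relating magnitude to absorption, so it is not checked here):

```python
span_hz, resolution_hz, peak_snr = 9e12, 220e6, 4000.0
print(span_hz / resolution_hz)           # ~40,909, i.e., the quoted 41,000 resolution elements
print(1e6 / peak_snr, "microradians")    # 1/4000 rad = 250 microradians of phase sensitivity
```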

266 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore whether it is better to do ICIC or simply single-cell beamforming and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low (< 0 dB).
Abstract: Downlink spatial intercell interference cancellation (ICIC) is considered for mitigating other-cell interference using multiple transmit antennas. A principal question we explore is whether it is better to do ICIC or simply standard single-cell beamforming. We explore this question analytically and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low (< 0 dB), while ICIC is preferred when the edge SNR is high (> 10 dB), for example in an urban setting. At medium SNR, a proposed adaptive strategy, where multiple base stations jointly select transmission strategies based on the user location, outperforms both while requiring a lower feedback rate than the pure ICIC approach. The employed metric is sum rate, which is normally a dubious metric for cellular systems, but surprisingly we show that even with this reward function the adaptive strategy also improves fairness. When the channel information is provided by limited feedback, the impact of the induced quantization error is also investigated. The analysis provides insights on the feedback design, and it is shown that ICIC with well-designed feedback strategies still provides significant throughput gain.
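A two-cell toy model gives a feel for the beamforming-versus-ICIC crossover: each base station either beamforms egoistically (maximum ratio, MRT) or zero-forces toward the neighboring cell's edge user. This is a sketch under i.i.d. Rayleigh assumptions with symmetric cells, not the paper's analysis; the antenna count and SNR points are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, trials = 4, 20000

def rates(edge_snr_db):
    snr = 10 ** (edge_snr_db / 10)
    r_mrt = r_icic = 0.0
    for _ in range(trials):
        h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)  # serving link
        g = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)  # cross link
        w_mrt = h / np.linalg.norm(h)                        # egoistic beamforming
        proj = np.eye(Nt) - np.outer(g, g.conj()) / np.real(g.conj() @ g)
        w_icic = proj @ h
        w_icic = w_icic / np.linalg.norm(w_icic)             # null toward the other cell
        # by cell symmetry, the interference a user sees has the same statistics
        interf = snr * np.abs(g.conj() @ w_mrt) ** 2
        r_mrt += np.log2(1 + snr * np.abs(h.conj() @ w_mrt) ** 2 / (1 + interf))
        r_icic += np.log2(1 + snr * np.abs(h.conj() @ w_icic) ** 2)  # interference nulled
    return r_mrt / trials, r_icic / trials

for db in (-5, 5, 15):
    mrt, icic = rates(db)
    print(f"edge SNR {db:+d} dB: MRT {mrt:.2f} vs ICIC {icic:.2f} bits/s/Hz per user")
```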

236 citations


Journal ArticleDOI
TL;DR: The method is able to handle unconstrained blurs, but also allows the use of constraints or of prior information on the blurring filter, as well as the use of filters defined in a parametric manner, and is shown to be applicable to a much wider range of blurs.
Abstract: A method for blind image deblurring is presented. The method only makes weak assumptions about the blurring filter and is able to undo a wide variety of blurring degradations. To overcome the ill-posedness of the blind image deblurring problem, the method includes a learning technique which initially focuses on the main edges of the image and gradually takes details into account. A new image prior, which includes a new edge detector, is used. The method is able to handle unconstrained blurs, but also allows the use of constraints or of prior information on the blurring filter, as well as the use of filters defined in a parametric manner. Furthermore, it works in both single-frame and multiframe scenarios. The use of constrained blur models appropriate to the problem at hand, and/or of multiframe scenarios, generally improves the deblurring results. Tests performed on monochrome and color images, with various synthetic and real-life degradations, without and with noise, in single-frame and multiframe scenarios, showed good results, both in subjective terms and in terms of the increase in signal-to-noise ratio (ISNR) measure. In comparisons with other state-of-the-art methods, our method yields better results, and is shown to be applicable to a much wider range of blurs.
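The ISNR figure of merit used above is straightforward to compute; a minimal sketch (the array names are hypothetical):

```python
import numpy as np

def isnr_db(original, degraded, restored):
    """Increase in signal-to-noise ratio (dB) achieved by a restoration:
    10*log10(||original - degraded||^2 / ||original - restored||^2)."""
    num = np.sum((original - degraded) ** 2)
    den = np.sum((original - restored) ** 2)
    return 10 * np.log10(num / den)
```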

229 citations


Journal ArticleDOI
TL;DR: This paper derives information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections by developing novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions and shows that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.
Abstract: In this paper, we derive information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models where the noise enters after the projection and input noise models where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and SNR, to signal sparsity, k, distortion level, d, and signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions, we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood. These results provide order-wise tight bounds. For output noise models, we show that asymptotically an SNR of Θ(log n) together with Θ(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient in the linear sparsity regime. In contrast, for input noise models, we show that support recovery fails if the number of measurements scales as o(n log n / SNR), implying poor compression performance for such cases. Motivated by the fact that the worst-case setup requires significantly high SNR and a substantial number of measurements for input and output noise models, we consider a Bayesian setup. To derive necessary conditions, we develop novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions. We then develop a new max-likelihood analysis over the set of rate distortion quantization points to characterize tradeoffs between mean-squared distortion and the number of measurements using rate-distortion theory. We show that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.

210 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work shows that optimal capture can be formulated as a mixed integer programming problem, which achieves higher worst-case SNR in the same capture time, or much faster capture for the same minimum acceptable level of SNR.
Abstract: Taking multiple exposures is a well-established approach both for capturing high dynamic range (HDR) scenes and for noise reduction. But what is the optimal set of photos to capture? The typical approach to HDR capture uses a set of photos with geometrically-spaced exposure times, at a fixed ISO setting (typically ISO 100 or 200). By contrast, we show that the capture sequence with optimal worst-case performance, in general, uses much higher and variable ISO settings, and spends longer capturing the dark parts of the scene. Based on a detailed model of noise, we show that optimal capture can be formulated as a mixed integer programming problem. Compared to typical HDR capture, our method lets us achieve higher worst-case SNR in the same capture time (for some cameras, up to 19 dB improvement in the darkest regions), or much faster capture for the same minimum acceptable level of SNR. Our experiments demonstrate this advantage for both real and synthetic scenes.
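The capture-time/ISO tradeoff rests on a per-exposure noise model like the following sketch, which combines Poisson shot noise with Gaussian read noise; the flux and read-noise constants are illustrative assumptions, not calibrated camera values.

```python
import numpy as np

def snr_db(photon_flux, exp_time, read_noise_e=4.0):
    """SNR of a single exposure: Poisson shot noise plus Gaussian read noise.

    photon_flux: photoelectrons per second reaching the pixel.
    """
    signal = photon_flux * exp_time                   # mean collected electrons
    noise = np.sqrt(signal + read_noise_e ** 2)       # shot + read noise std
    return 20 * np.log10(signal / noise)

# dark scene regions gain far more SNR from extra capture time than bright ones
for flux in (50.0, 50000.0):
    print(flux, [round(snr_db(flux, t), 1) for t in (0.01, 0.1, 1.0)])
```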

209 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work proposes a weighting function that produces statistically optimal estimates under the assumption of compound-Gaussian noise, based on a calibrated camera model that accounts for all noise sources and allows us to simultaneously estimate the irradiance and its uncertainty.
Abstract: Given a multi-exposure sequence of a scene, our aim is to recover the absolute irradiance falling onto a linear camera sensor. The established approach is to perform a weighted average of the scaled input exposures. However, there is no clear consensus on the appropriate weighting to use. We propose a weighting function that produces statistically optimal estimates under the assumption of compound-Gaussian noise. Our weighting is based on a calibrated camera model that accounts for all noise sources. This model also allows us to simultaneously estimate the irradiance and its uncertainty. We evaluate our method on simulated and real world photographs, and show that we consistently improve the signal-to-noise ratio over previous approaches. Finally, we show the effectiveness of our model for optimal exposure sequence selection and HDR image denoising.
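Inverse-variance weighting of the scaled exposures, under a shot-plus-read noise model, captures the gist of statistically optimal fusion. This is a sketch with assumed per-pixel noise parameters, not the paper's calibrated camera model:

```python
import numpy as np

def fuse_exposures(raw, times, read_var, gain=1.0):
    """Inverse-variance fusion of scaled exposures into one irradiance estimate.

    raw: (num_exposures, ...) linear sensor values; times: exposure times.
    Per-exposure variance model: shot noise (prop. to signal) plus read noise.
    """
    raw = np.asarray(raw, dtype=float)
    t = np.asarray(times, dtype=float).reshape(-1, *([1] * (raw.ndim - 1)))
    est = raw / t                                         # per-exposure irradiance estimate
    var = (gain * np.maximum(raw, 0) + read_var) / t**2   # variance of that estimate
    w = 1.0 / var
    fused = np.sum(w * est, axis=0) / np.sum(w, axis=0)
    fused_var = 1.0 / np.sum(w, axis=0)                   # the uncertainty comes out for free
    return fused, fused_var

truth, times = 100.0, np.array([0.01, 0.1, 1.0])
raw = truth * times + np.random.default_rng(3).normal(0, 2.0, size=3)
est, var = fuse_exposures(raw, times, read_var=4.0)
print(est, var)
```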

197 citations


Journal ArticleDOI
TL;DR: The power-versus-distortion tradeoff for the distributed transmission of a memoryless bivariate Gaussian source over a two-to-one average-power limited Gaussian multiple-access channel is studied; an uncoded transmission scheme is shown to be optimal below an SNR threshold, and a source-channel vector-quantizer scheme is introduced which is asymptotically optimal as the SNR tends to infinity.
Abstract: We study the power-versus-distortion tradeoff for the distributed transmission of a memoryless bivariate Gaussian source over a two-to-one average-power limited Gaussian multiple-access channel. In this problem, each of two separate transmitters observes a different component of a memoryless bivariate Gaussian source. The two transmitters then describe their source component to a common receiver via an average-power constrained Gaussian multiple-access channel. From the output of the multiple-access channel, the receiver wishes to reconstruct each source component with the least possible expected squared-error distortion. Our interest is in characterizing the distortion pairs that are simultaneously achievable on the two source components. We focus on the "equal bandwidth" case, where the source rate in source-symbols per second is equal to the channel rate in channel-uses per second. We present sufficient conditions and necessary conditions for the achievability of a distortion pair. These conditions are expressed as a function of the channel signal-to-noise ratio (SNR) and of the source correlation. In several cases, the necessary conditions and sufficient conditions are shown to agree. In particular, we show that if the channel SNR is below a certain threshold, then an uncoded transmission scheme is optimal. Moreover, we introduce a "source-channel vector-quantizer" scheme which is asymptotically optimal as the SNR tends to infinity.
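The uncoded scheme is simple enough to simulate directly: each transmitter sends a scaled source sample and the receiver applies a scalar LMMSE estimator. A Monte Carlo sketch with assumed correlation and power values:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, P, N0, n = 0.8, 1.0, 1.0, 200000     # source correlation, per-user power, noise

# correlated unit-variance source pair, scaled to meet the power constraint
S = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
X = np.sqrt(P) * S                         # uncoded: transmit the source sample itself
Y = X[:, 0] + X[:, 1] + np.sqrt(N0) * rng.standard_normal(n)

# scalar LMMSE estimate of S1 from Y (S2 is symmetric)
c = np.mean(S[:, 0] * Y) / np.mean(Y ** 2)
D1 = np.mean((S[:, 0] - c * Y) ** 2)
print(f"empirical distortion per component: {D1:.4f}")
```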

189 citations


Proceedings ArticleDOI
20 Jun 2010
TL;DR: In this paper, a comprehensive performance comparison of energy detection, matched-filter detection, and cyclostationarity-based detection, the three popular choices for spectrum sensing by cognitive radios, is presented, and analytical expressions for the false-alarm and detection probabilities are derived.
Abstract: This paper presents a comprehensive performance comparison of energy detection, matched-filter detection, and cyclostationarity-based detection, the three popular choices for spectrum sensing by cognitive radios. Analytical expressions for the false alarm and detection probability achieved by all the detectors are derived. For cyclostationarity-based detection, two architectures that exploit cyclostationarity are proposed: the Spectral Correlation Density (SCD) detector, and the Magnitude Squared Coherence (MSC) detector. The MSC detector offers improved performance compared to existing detectors, and this is demonstrated using the 802.22 RF capture database. It is also shown that the cyclostationarity-based detectors are naturally insensitive to uncertainty in the noise variance, as the decision statistic is based on the noise rejection property of the cyclostationary spectrum. Simulation results plotting the receiver operating characteristics corroborate the theoretical results, and enable visual comparison of the performance.
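For the energy-detection baseline, the false-alarm/detection tradeoff has the classical chi-square form. The sketch below uses the textbook deterministic-signal simplification in unit-variance Gaussian noise, not the paper's SCD/MSC detectors:

```python
import numpy as np
from scipy import stats

def energy_detector_pd(snr_db, n_samples, pfa):
    """Detection probability of the energy detector at a target false-alarm rate.

    Test statistic: sum of n squared noise-normalized samples.
    H0 ~ central chi-square(n); H1 ~ noncentral chi-square(n, nc = n*SNR).
    """
    snr = 10 ** (snr_db / 10)
    thr = stats.chi2.ppf(1 - pfa, df=n_samples)            # threshold set by Pfa
    return stats.ncx2.sf(thr, df=n_samples, nc=n_samples * snr)

for snr_db in (-10, -5, 0):
    print(snr_db, "dB ->", round(energy_detector_pd(snr_db, 1000, 0.01), 3))
```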

Journal ArticleDOI
TL;DR: An electrodynamic model of GPS direct and reflected signal interference that has a bare-soil model as the input and the total GPS received power as the output is built and it is demonstrated how this model can reproduce and explain the main features of experimental multipath modulation patterns such as changes in phase and amplitude.
Abstract: Reflected Global Positioning System (GPS) signals can be used to infer information about soil moisture in the vicinity of the GPS antenna. Interference of direct and reflected signals causes the composite signal, observed using signal-to-noise ratio (SNR) data, to undulate with time while the GPS satellite ascends or descends at relatively low elevation angles. The soil moisture change affects both the phase of the SNR modulation pattern and its magnitude. In order to more thoroughly understand the mechanism of how the soil moisture change leads to a change in the SNR modulation, we built an electrodynamic model of GPS direct and reflected signal interference, i.e., multipath, that has a bare-soil model as the input and the total GPS received power as the output. This model treats soil as a continuously stratified medium with a specific composition of material ingredients having complex dielectric permittivity according to well-known mixing models. The critical part of this electrodynamic model is a numerical algorithm that allows us to calculate polarization-dependent reflection coefficients of such media with various profiles of dielectric permittivity dictated by the soil type and moisture. In this paper, we demonstrate how this model can reproduce and explain the main features of experimental multipath modulation patterns such as changes in phase and amplitude. We also discuss the interplay between true penetration depth and effective reflector depth. Based on these modeling comparisons, we formulate recommendations to improve the performance of bare soil moisture retrievals from the data obtained using GPS multipath modulation.
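The underlying two-ray picture is compact: the ground-reflected path lags the direct one by twice the antenna height times the sine of the elevation angle, so the composite SNR oscillates as the satellite rises. A sketch with illustrative antenna height and reflection coefficient (GPS L1 wavelength assumed):

```python
import numpy as np

def snr_multipath(elev_deg, antenna_height_m, wavelength_m=0.1903,
                  direct_amp=1.0, refl_coeff=0.3):
    """Composite power (dB) of direct plus ground-reflected GPS signal.

    The extra path length 2*h*sin(elev) sets the interference phase; soil
    moisture shifts the phase and amplitude of the oscillation through the
    reflection coefficient.
    """
    e = np.radians(elev_deg)
    phase = 4 * np.pi * antenna_height_m * np.sin(e) / wavelength_m
    a2 = direct_amp**2 + refl_coeff**2 + 2 * direct_amp * refl_coeff * np.cos(phase)
    return 10 * np.log10(a2)

elev = np.linspace(5, 30, 300)      # low elevation angles, where the effect is strongest
pattern = snr_multipath(elev, antenna_height_m=2.0)
print(f"oscillation depth: {pattern.max() - pattern.min():.1f} dB")
```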

Journal ArticleDOI
TL;DR: The results indicate that modulation frame durations provide a good compromise between different types of spectral distortion, namely musical noise and temporal slurring, and that, given a proper selection of modulation frame duration, the proposed modulation spectral subtraction does not suffer from the musical noise artifacts typically associated with acoustic spectral subtraction.

Journal ArticleDOI
TL;DR: A new and improved energy detector for random signals in Gaussian noise is proposed by replacing the squaring operation of the signal amplitude in the conventional energy detector with an arbitrary positive power operation; the results confirm that the conventional energy detector, based on the generalized likelihood ratio test, is not optimum in terms of detection performance.
Abstract: A new and improved energy detector for random signals in Gaussian noise is proposed by replacing the squaring operation of the signal amplitude in the conventional energy detector with an arbitrary positive power operation. Numerical results show that the best power operation depends on the probability of false alarm, the probability of detection, the average signal-to-noise ratio, and the sample size. By choosing the optimum power operation according to different system settings, new energy detectors with better detection performance can be derived. These results give useful guidance on how to improve the performance of current wireless systems using the energy detector. They also confirm that the conventional energy detector, based on the generalized likelihood ratio test using the generalized likelihood function, is not optimum in terms of detection performance.
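Replacing the square with an arbitrary power p is easy to explore by Monte Carlo. The sketch below estimates detection probability at a fixed false-alarm rate for a random Gaussian signal in Gaussian noise; the sample size, SNR, and grid of p values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def detection_prob(p, snr_db, n=50, pfa=0.05, trials=20000):
    """Pd of the generalized energy detector T = mean(|x|^p) at fixed Pfa."""
    snr = 10 ** (snr_db / 10)
    t0 = np.mean(np.abs(rng.standard_normal((trials, n))) ** p, axis=1)
    thr = np.quantile(t0, 1 - pfa)                 # empirical threshold under H0
    x1 = np.sqrt(snr) * rng.standard_normal((trials, n)) + rng.standard_normal((trials, n))
    t1 = np.mean(np.abs(x1) ** p, axis=1)          # random Gaussian signal + noise
    return np.mean(t1 > thr)

for p in (0.5, 1.0, 2.0, 3.0):
    print(f"p = {p}: Pd = {detection_prob(p, snr_db=-3):.3f}")
```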

Journal ArticleDOI
17 Jun 2010-Sensors
TL;DR: Results showed that high noise reduction is the major advantage of the EEMD based filter, especially on arrhythmia ECGs.
Abstract: A novel noise filtering algorithm based on ensemble empirical mode decomposition (EEMD) is proposed to remove artifacts in electrocardiogram (ECG) traces. Three noise patterns with different power (50 Hz, EMG, and baseline wander) were embedded into simulated and real ECG signals. A traditional IIR filter, a Wiener filter, empirical mode decomposition (EMD), and EEMD were used to compare filtering performance. The mean square error between clean and filtered ECGs was used as the filtering performance index. Results showed that high noise reduction is the major advantage of the EEMD-based filter, especially on arrhythmia ECGs.
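A minimal sketch of the EEMD filtering idea, assuming the third-party PyEMD package (pip install EMD-signal) with its EEMD class; dropping only the first, highest-frequency IMF is a crude stand-in for the paper's filtering rule:

```python
import numpy as np
from PyEMD import EEMD          # assumed dependency: pip install EMD-signal

def eemd_denoise(sig, drop=1, trials=50):
    """Decompose with EEMD, drop the first (highest-frequency) IMFs, rebuild."""
    imfs = EEMD(trials=trials).eemd(sig)
    return imfs[drop:].sum(axis=0)

t = np.linspace(0, 2, 1000)
clean = np.sin(2 * np.pi * 1.2 * t)                 # crude stand-in for an ECG trace
noisy = clean + 0.2 * np.sin(2 * np.pi * 50 * t)    # 50 Hz power-line interference
mse = np.mean((eemd_denoise(noisy) - clean) ** 2)   # the paper's performance index
print(mse)
```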

Journal ArticleDOI
TL;DR: This work develops a method to enhance the SNR of the N1 wave and measure its peak latency and amplitude in both average and single-trial waveforms, and provides quantitative evidence that a multiple linear regression approach can be applied to these filtered waveforms to obtain an automatic, reliable and unbiased estimate.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper proposes a novel technique for self-interference suppression in full-duplex Multiple-Input Multiple-Output (MIMO) relays that can suppress interference substantially with less impact on the useful signal.
Abstract: Full-duplex relays can provide cost-effective coverage extension and throughput enhancement. However, the main limiting factor is the resulting self-interference signal, which deteriorates the relay performance. In this paper, we propose a novel technique for self-interference suppression in full-duplex Multiple-Input Multiple-Output (MIMO) relays. The relay employs transmit and receive weight filters for suppressing the self-interference signal. Unlike existing techniques that are based on zero forcing of self-interference, we aim at maximizing the ratio between the power of the useful signal and the self-interference power at the relay reception and transmission. Our simulation results show that the proposed algorithm outperforms the existing schemes since it can suppress interference substantially with less impact on the useful signal.
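Maximizing a useful-to-self-interference power ratio over the filter weights is a generalized Rayleigh quotient, solvable with one generalized eigendecomposition. A sketch with random channels standing in for the relay's MIMO links (dimensions and regularization are assumptions):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n_rx = 4
H_u = rng.standard_normal((n_rx, 2))       # useful (source -> relay) channel
H_si = rng.standard_normal((n_rx, 2))      # self-interference (relay tx -> relay rx)

A = H_u @ H_u.T                            # useful-signal covariance
B = H_si @ H_si.T + 1e-6 * np.eye(n_rx)    # self-interference covariance (regularized)

# receive filter maximizing w^T A w / w^T B w: top generalized eigenvector
vals, vecs = eigh(A, B)
w = vecs[:, -1]
print("power ratio achieved:", (w @ A @ w) / (w @ B @ w))
```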

Journal ArticleDOI
TL;DR: The proposed Kalman-filtering-based algorithm provides a suitable solution to the motion artifact removal problem in NIR studies by combining the advantages of the existing adaptive and Wiener filtering methods in one algorithm, which allows efficient real-time application with no requirement for additional sensor measurements.
Abstract: Background: As a continuation of our earlier work, we present in this study a Kalman filtering based algorithm for the elimination of motion artifacts present in Near Infrared spectroscopy (NIR) measurements. Functional NIR measurements suffer from head motion, especially in real world applications where movement cannot be restricted, such as studies involving pilots, children, etc. Since head movement can cause fluctuations unrelated to metabolic changes in the blood due to cognitive activity, removal of these artifacts from the NIR signal is necessary for reliable assessment of cognitive activity in the brain in real life applications. Methods: Previously, we had worked on adaptive and Wiener filtering for the cancellation of motion artifacts in NIR studies. Using the same NIR data set we collected in our previous work, where motion artifacts of different speeds were induced on the NIR measurements, we compared the results of the newly proposed Kalman filtering approach with the results of the previously studied adaptive and Wiener filtering methods in terms of gains in signal-to-noise ratio. Here, comparisons are based on paired t-tests where data from eleven subjects are used. Results: The preliminary results in this current study revealed that the proposed Kalman filtering method provides better estimates in terms of the gain in signal-to-noise ratio than the classical adaptive filtering approach, without the need for additional sensor measurements, and results comparable to Wiener filtering but better suited for real-time applications. Conclusions: This paper presented a novel approach based on Kalman filtering for motion artifact removal in NIR recordings. The proposed approach provides a suitable solution to the motion artifact removal problem in NIR studies by combining the advantages of the existing adaptive and Wiener filtering methods in one algorithm, which allows efficient real-time application with no requirement for additional sensor measurements.
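A scalar random-walk Kalman filter already conveys the mechanism: the slow hemodynamic level is tracked while fast motion spikes are discounted through the gain. The process and measurement variances below are assumed, not the paper's tuned values:

```python
import numpy as np

def kalman_smooth(y, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter with assumed process/measurement variances."""
    x, p = float(y[0]), 1.0
    out = np.empty(len(y))
    for k, yk in enumerate(y):
        p = p + q                   # predict: the level drifts slowly
        gain = p / (p + r)          # Kalman gain
        x = x + gain * (yk - x)     # correct with the new NIR sample
        p = (1 - gain) * p
        out[k] = x
    return out

t = np.arange(2000) / 100.0
nir = 0.5 * np.sin(2 * np.pi * 0.05 * t)     # slow hemodynamic component
nir[700:720] += 2.0                          # a head-motion spike
filtered = kalman_smooth(nir)
```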

Journal ArticleDOI
TL;DR: The constrained Cramer-Rao bound (CRB) is obtained, and this bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values.
Abstract: The goal of this contribution is to characterize the best achievable mean-squared error (MSE) in estimating a sparse deterministic parameter from measurements corrupted by Gaussian noise. To this end, an appropriate definition of bias in the sparse setting is developed, and the constrained Cramer-Rao bound (CRB) is obtained. This bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values. Consequently, in the unbiased case, our bound is identical to the MSE of the oracle estimator. Combined with the fact that the CRB is achieved at high signal-to-noise ratios (SNRs) by the maximum likelihood technique, our result provides a new interpretation for the common practice of using the oracle estimator as a gold standard against which practical approaches are compared.
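The oracle benchmark is easy to reproduce numerically: with known support S, least squares on the columns A_S is unbiased and its MSE matches sigma^2 * trace[(A_S^T A_S)^(-1)]. A sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k, sigma = 128, 48, 4, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, k, replace=False)

A_S = A[:, support]
crb = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))   # CRB with known support

x = np.zeros(n)
x[support] = rng.standard_normal(k)
mse, trials = 0.0, 5000
for _ in range(trials):
    y = A @ x + sigma * rng.standard_normal(m)
    x_hat, *_ = np.linalg.lstsq(A_S, y, rcond=None)     # oracle least squares
    mse += np.sum((x_hat - x[support]) ** 2)
print(f"CRB {crb:.5f} vs oracle MSE {mse / trials:.5f}")
```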

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed novel level set-based active contour model for breast ultrasound (BUS) image segmentation can model the BUS images well, be robust to noise, and segment theBUS images accurately and reliably.

Journal ArticleDOI
TL;DR: A novel and unified communication-theoretic framework for the analysis of channel capacity over fading channels is proposed and it is shown that the framework can handle various fading channel models, communication types, and adaptation transmission policies.
Abstract: Since the trail-blazing paper of C. Shannon in 1948, channel capacity has been regarded as the fundamental information-theoretic performance measure to predict the maximum information rate of a communication system. However, in contrast with the analysis of other important performance measures of wireless communication systems, a unified and general approach for computing the channel capacity over fading channels has yet to be proposed. Motivated by this consideration, we propose a novel and unified communication-theoretic framework for the analysis of channel capacity over fading channels. It is shown that the framework can handle various fading channel models, communication types, and adaptation transmission policies. In particular, the specific contributions of this paper are as follows: (1) We introduce a transform operator, called the Ei-transform, which is shown to provide a unified tool to compute the channel capacity with either side information at the receiver or side information at the transmitter and the receiver, directly from the moment-generating function (MGF) or the MGF and the truncated MGF of the Signal-to-Noise-Ratio (SNR) at the receiver, respectively; (2) we show that when either a channel inversion or a truncated channel inversion adaptation policy is considered, the channel capacity can readily be computed from the Mellin or the Hankel transform of the MGF of the received SNR, respectively; (3) a simple yet effective numerical method for the analysis of higher order statistics (HOS) of the channel capacity with side information at the receiver is introduced; and (4) some efficient and ad hoc numerical methods are explicitly introduced to allow the efficient computation of the proposed frameworks. Numerical and simulation results are also shown and compared to substantiate the analytical derivation.
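For the special case of Rayleigh fading with receiver-side information, the ergodic capacity has a classical exponential-integral closed form that any MGF-based framework must reproduce; a quick numerical cross-check (this illustrates the MGF connection, not the paper's Ei-transform itself):

```python
import numpy as np
from scipy.special import exp1

rng = np.random.default_rng(7)
gbar = 10.0                                  # average SNR (linear scale)

# closed form for Rayleigh fading: E[log2(1+g)] = log2(e) * e^(1/gbar) * E1(1/gbar)
c_exact = np.log2(np.e) * np.exp(1 / gbar) * exp1(1 / gbar)

g = rng.exponential(gbar, size=1_000_000)    # exponential SNR under Rayleigh fading
c_mc = np.mean(np.log2(1 + g))
print(f"closed form {c_exact:.4f} vs Monte Carlo {c_mc:.4f} bits/s/Hz")
```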

Journal ArticleDOI
TL;DR: It is shown that the performance of Space Shift Keying (SSK) modulation can be improved via opportunistic power allocation methods; for analytical tractability, a 2 × 1 Multiple-Input-Multiple-Output (MIMO) system setup over correlated Rayleigh fading channels is considered.
Abstract: In this Letter, we show that the performance of Space Shift Keying (SSK) modulation can be improved via opportunistic power allocation methods. For analytical tractability, we focus on a 2 × 1 Multiple-Input-Multiple-Output (MIMO) system setup over correlated Rayleigh fading channels. A closed-form solution of the optimal power allocation problem is derived, and it is shown that the transmit-power of each transmit-antenna should be chosen as a function of the power imbalance ratio and correlation coefficient of the transmit-receive wireless links. Numerical results are shown to substantiate the analytical derivation and the claimed performance improvement.

Journal ArticleDOI
TL;DR: In this paper, the authors used a set of 18 perforation shots, with a variety of positions and propellant amounts, to test the ability of surface arrays to detect and locate seismic sources in the subsurface.
Abstract: Recently there has been much interest in the use of data from surface arrays in conjunction with migration-based processing methods for passive seismic monitoring. In this study we use an example of this kind of data recorded whilst 18 perforation shots, with a variety of positions and propellant amounts, were detonated in the subsurface. As the perforation shots provide signals with known source positions and origin times, the analysis of these data is an invaluable opportunity to test the accuracy and ability of surface arrays to detect and locate seismic sources in the subsurface. In all but one case the signals from the perforation shots are not visible in the raw or preprocessed data. However, clear source images are produced for 12 of the perforation shots, showing that arrays of surface sensors are capable of imaging microseismic events even when the signals are not visible in individual traces. We find that point source locations are typically within 45 m (laterally) of the true shot location; however, the depths are less well constrained (∼150 m). We test the sensitivity of our imaging method to the signal-to-noise ratio in the data using signals embedded in realistic noise. We find that the position of the imaged shot location is quite insensitive to the level of added noise, the primary effect of increased noise being to defocus the source image. Given the migration approach, the array geometry and the nature of coherent noise during the experiment, signals embedded in noise with signal-to-noise ratios ≥ 0.1 can be used to successfully image events. Furthermore, comparison of results from data and synthetic signals embedded in noise shows that, in this case, prestack corrections of traveltimes to account for near-surface structure will not enhance event detectability. Although the perforation shots have a largely isotropic radiation pattern, the results presented here show the potential for the use of surface sensors in microseismic monitoring as a viable alternative to classical downhole methods.

Journal ArticleDOI
TL;DR: The tradeoff that has to be made between noise reduction and interference rejection is theoretically demonstrated and a new relationship between both filters in which the MVDR is decomposed into the LCMV and a matched filter (MVDR solution in the absence of interference).
Abstract: In real-world environments, the signals captured by a set of microphones in a speech communication system are mixtures of the desired signal, interference, and ambient noise. A promising solution for proper speech acquisition (with reduced noise and interference) in this context consists in using the linearly constrained minimum variance (LCMV) beamformer to reject the interference, reduce the overall mixture energy, and preserve the target signal. The minimum variance distortionless response (MVDR) beamformer is also commonly known to reduce the interference-plus-noise energy without distorting the desired signal. In either case, it is of paramount importance to accurately quantify the achieved noise and interference reduction. Indeed, it is quite reasonable to ask, for instance, about the price that has to be paid in order to achieve total removal of the interference without distorting the target signal when using the LCMV. Besides, it is fundamental to understand the effect of the MVDR on both noise and interference. In this correspondence, we investigate the performance of the MVDR and LCMV beamformers when the interference and ambient noise coexist with the target source. We demonstrate a new relationship between both filters in which the MVDR is decomposed into the LCMV and a matched filter (the MVDR solution in the absence of interference). Both components are properly weighted to achieve maximum interference-plus-noise reduction. We investigate the performance of the MVDR, LCMV, and matched filters and elaborate new closed-form expressions for their output signal-to-interference ratio (SIR) and output signal-to-noise ratio (SNR). We theoretically demonstrate the tradeoff that has to be made between noise reduction and interference rejection. In fact, the total removal of the interference may severely amplify the residual ambient noise. Conversely, totally focusing on noise reduction leads to an increased level of residual interference. The proposed study is finally supported by several numerical examples.
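The MVDR filter referenced above is one line of linear algebra once the covariance and steering vector are in hand; a sketch on a toy 4-microphone array with one interferer (all geometry is illustrative):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR beamformer: minimize output power subject to d^H w = 1.

    R: (M, M) noise-plus-interference covariance across M microphones.
    d: (M,) steering vector of the desired source.
    """
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

M = 4
d = np.exp(-1j * np.pi * np.arange(M) * 0.0)          # desired source at broadside
i = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(40)))
R = 10 * np.outer(i, i.conj()) + np.eye(M)            # interference + white noise
w = mvdr_weights(R, d)
print("gain on target:", np.abs(w.conj() @ d))        # = 1 (distortionless)
print("gain on interferer:", np.abs(w.conj() @ i))    # strongly attenuated
```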

Journal ArticleDOI
TL;DR: This paper derives and compares eight stochastic distances and assesses the performance of hypothesis tests that employ them and maximum likelihood estimation, concluding that tests based on the triangular distance have the closest empirical size to the theoretical one, while those based on the arithmetic-geometric distances have the best power.
Abstract: Images obtained with coherent illumination, as is the case of sonar, ultrasound-B, laser, and synthetic aperture radar, are affected by speckle noise which reduces the ability to extract information from the data. Specialized techniques are required to deal with such imagery, which has been modeled by the G0 distribution, under which regions with different degrees of roughness and mean brightness can be characterized by two parameters; a third parameter, the number of looks, is related to the overall signal-to-noise ratio. Assessing distances between samples is an important step in image analysis; they provide grounds for assessing the separability and, therefore, the performance of classification procedures. This paper derives and compares eight stochastic distances and assesses the performance of hypothesis tests that employ them and maximum likelihood estimation. We conclude that tests based on the triangular distance have the closest empirical size to the theoretical one, while those based on the arithmetic-geometric distances have the best power. Since the power of tests based on the triangular distance is close to optimum, we conclude that the safest choice is using this distance for hypothesis testing, even when compared with classical distances such as the Kullback-Leibler and Bhattacharyya distances.
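As an illustration, the triangular distance between two densities on a common grid can be computed directly (definitions and normalizations vary slightly across the literature, so treat this as one common convention):

```python
import numpy as np

def triangular_distance(f, g, eps=1e-12):
    """Triangular distance between two discretized densities:
    sum of (f - g)^2 / (f + g) over the grid."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    return np.sum((f - g) ** 2 / (f + g + eps))

grid = np.linspace(-5, 5, 1001)
dx = grid[1] - grid[0]
f = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)          # two Gaussians as stand-ins
g = np.exp(-(grid - 1)**2 / 2) / np.sqrt(2 * np.pi)
print(triangular_distance(f * dx, g * dx))             # larger = more separable
```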

Journal ArticleDOI
TL;DR: This proposed scheme mainly focuses on compressing the large signals while maintaining the average power constant by properly choosing transform parameters, and outperforms other companding schemes in terms of spectrum side-lobes, PAPR reduction, and BER performance.
Abstract: The companding transform is a simple and efficient method for reducing the Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) systems. In this paper, a novel nonlinear companding scheme is proposed to reduce the PAPR and improve the Bit Error Rate (BER) for OFDM systems. This proposed scheme mainly focuses on compressing the large signals, while maintaining the average power constant by properly choosing transform parameters. Moreover, analysis shows that the proposed scheme without de-companding at the receiver can also offer a good BER performance. Finally, simulation results show that the proposed scheme outperforms other companding schemes in terms of spectrum side-lobes, PAPR reduction, and BER performance.
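The PAPR metric and the effect of companding are easy to demonstrate; the sketch below uses classical mu-law companding as a stand-in, since the paper's specific transform is not reproduced here, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(8)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# baseband OFDM symbol: random QPSK across 256 subcarriers
N = 256
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)

# mu-law companding compresses the large peaks (a classical scheme,
# not the paper's transform); average power is renormalized afterwards
mu, v = 8.0, np.max(np.abs(x))
y = v * np.log1p(mu * np.abs(x) / v) / np.log1p(mu) * np.exp(1j * np.angle(x))
y *= np.sqrt(np.mean(np.abs(x) ** 2) / np.mean(np.abs(y) ** 2))

print(f"PAPR before {papr_db(x):.2f} dB, after {papr_db(y):.2f} dB")
```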

Journal ArticleDOI
TL;DR: It is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
Abstract: The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is found that crosstalk noise significantly limits the scalability of ONoCs, and a novel compact high-SNR optical router is proposed to improve the maximum ONoC size to 8×8.
Abstract: Crosstalk noise is an intrinsic characteristic of photonic devices used by optical networks-on-chip (ONoCs), as well as a potential issue. For the first time, this paper analyzes and models the crosstalk noise, signal-to-noise ratio (SNR), and bit error rate (BER) of optical routers and ONoCs. The analytical models for crosstalk noise, minimum SNR, and maximum BER in mesh-based ONoCs are presented. An automated crosstalk analyzer for optical routers is developed. We find that crosstalk noise significantly limits the scalability of ONoCs. For example, due to crosstalk noise, the maximum BER is 10⁻³ on the 8×8 mesh-based ONoC using an optimized crossbar-based optical router. To achieve a BER of 10⁻⁹ for reliable transmissions, the maximum ONoC size is 6×6. A novel compact high-SNR optical router is proposed to improve the maximum ONoC size to 8×8.

Journal ArticleDOI
TL;DR: In this article, the focus of an experimental study is to optimize the cutting parameters using two performance measures, workpiece surface temperature and surface roughness, and the optimal cutting parameters for each performance measure were obtained employing Taguchi techniques.
Abstract: Problem statement: In machining operations, the quality of surface finish is an important requirement for many turned workpieces. Thus, the choice of optimized cutting parameters is very important for controlling the required surface quality. Approach: The focus of the present experimental study is to optimize the cutting parameters using two performance measures, workpiece surface temperature and surface roughness. Optimal cutting parameters for each performance measure were obtained employing Taguchi techniques. An orthogonal array, the signal-to-noise ratio, and analysis of variance were employed to study the performance characteristics in the turning operation. Results: The experimental results showed that the workpiece surface temperature can be sensed and used effectively as an indicator to control the cutting performance and improve the optimization process. Conclusion: Thus, it is possible to increase machine utilization and decrease production cost in an automated manufacturing environment.
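The Taguchi signal-to-noise ratio used in such studies has a simple closed form; for a smaller-the-better response such as surface roughness (the readings below are made-up illustrative values):

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2)).
    Used when the response (e.g., surface roughness) should be minimized."""
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(y ** 2))

# replicated roughness readings (um) for two cutting-parameter settings
print(sn_smaller_better([1.8, 2.1, 1.9]))   # higher S/N = better setting
print(sn_smaller_better([2.6, 2.9, 2.7]))
```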

Journal ArticleDOI
TL;DR: A new channel estimation prototype for the amplify-and-forward (AF) two-way relay network (TWRN) is proposed by allowing the relay to first estimate the channel parameters and then allocate the powers for these parameters, so that the final data detection at the source terminals could be optimized.
Abstract: In this paper, we propose a new channel estimation prototype for the amplify-and-forward (AF) two-way relay network (TWRN). By allowing the relay to first estimate the channel parameters and then allocate the powers for these parameters, the final data detection at the source terminals could be optimized. Specifically, we consider the classical three-node TWRN where two source terminals exchange their information via a single relay node in between and adopt the maximum likelihood (ML) channel estimation at the relay node. Two different power allocation schemes to the training signals are then proposed to maximize the average effective signal-to-noise ratio (AESNR) of the data detection and minimize the mean-square-error (MSE) of the channel estimation, respectively. The optimal/sub-optimal training designs for both schemes are found as well. Simulation results corroborate the advantages of the proposed technique over the existing ones.