Author

Roberto Lopez-Valcarce

Other affiliations: University of Iowa
Bio: Roberto Lopez-Valcarce is an academic researcher at the University of Vigo. He has contributed to research on topics including estimation and wireless sensor networks. He has an h-index of 25 and has co-authored 174 publications receiving 1,901 citations. His previous affiliations include the University of Iowa.


Papers
Journal ArticleDOI
TL;DR: This work studies the problem of detecting a Gaussian signal with rank-P unknown spatial covariance matrix in spatially uncorrelated Gaussian noise with unknown covariance using multiple antennas, and derives the generalized likelihood ratio test (GLRT).
Abstract: Spectrum sensing is a key component of the cognitive radio paradigm. Primary signals are typically detected with uncalibrated receivers at signal-to-noise ratios (SNRs) well below decodability levels. Multiantenna detectors exploit spatial independence of receiver thermal noise to boost detection performance and robustness. We study the problem of detecting a Gaussian signal with rank-P unknown spatial covariance matrix in spatially uncorrelated Gaussian noise with unknown covariance using multiple antennas. The generalized likelihood ratio test (GLRT) is derived for two scenarios. In the first one, the noises at all antennas are assumed to have the same (unknown) variance, whereas in the second, a generic diagonal noise covariance matrix is allowed in order to accommodate calibration uncertainties in the different antenna frontends. In the latter case, the GLRT statistic must be obtained numerically, for which an efficient method is presented. Furthermore, for asymptotically low SNR, it is shown that the GLRT does admit a closed form, and the resulting detector performs well in practice. Extensions are presented in order to account for unknown temporal correlation in both signal and noise, as well as frequency-selective channels.
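As a rough, self-contained illustration of the equal-noise-variance setting (not the paper's exact rank-P GLRT), eigenvalue-based multiantenna detectors of this kind are related to the classical sphericity statistic: the arithmetic-to-geometric-mean ratio of the sample spatial covariance eigenvalues, which equals 1 only when the covariance is proportional to the identity. The function name below is illustrative:

```python
import numpy as np

def sphericity_statistic(X):
    """Arithmetic-to-geometric mean ratio of sample covariance eigenvalues.

    X: (M, N) array of M antennas by N snapshots. Returns a value >= 1;
    values near 1 suggest spatially white noise only, while larger values
    suggest a spatially correlated (signal-present) component.
    """
    M, N = X.shape
    R = (X @ X.conj().T) / N           # sample spatial covariance
    lam = np.linalg.eigvalsh(R)        # real eigenvalues, ascending
    am = lam.mean()                    # arithmetic mean
    gm = np.exp(np.log(lam).mean())    # geometric mean
    return am / gm
```

With noise-only data the statistic hovers near 1; adding a common (rank-1) signal across antennas pushes it well above 1, which is the spatial-independence property these detectors exploit.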

161 citations

Proceedings ArticleDOI
14 Jun 2010
TL;DR: A novel detector is proposed, based on an approximation to the Generalized Likelihood Ratio, outperforming previous schemes for uncalibrated multiantenna receivers, and trading off performance and complexity.
Abstract: Spectrum sensing is a key ingredient of the dynamic spectrum access paradigm, but it needs powerful detectors operating at SNRs well below the decodability levels of primary signals. Noise uncertainty poses a significant challenge to the development of such schemes, requiring some degree of diversity (spatial, temporal, or in distribution) for identifiability of the noise level. Multiantenna detectors exploit spatial independence of receiver thermal noise. We review this class of schemes and propose a novel detector trading off performance and complexity. However, most of these methods assume that the noise power, though unknown, is the same at all antennas. As it turns out, calibration errors have a substantial impact on these detectors. Another novel detector is proposed, based on an approximation to the Generalized Likelihood Ratio, outperforming previous schemes for uncalibrated multiantenna receivers.
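For the uncalibrated case with a diagonal noise covariance, a well-known statistic in the same spirit is the Hadamard ratio, det(R)/prod(diag(R)); the sketch below illustrates that idea only, not the paper's approximate-GLR detector:

```python
import numpy as np

def hadamard_ratio(X):
    """det(R) / prod(diag(R)) for the sample covariance of X (M x N).

    The ratio is close to 1 when the antennas are uncorrelated (noise only,
    even with unequal per-antenna noise powers) and drops toward 0 when a
    common signal correlates them; a detector declares signal present when
    the ratio falls below a threshold.
    """
    M, N = X.shape
    R = (X @ X.conj().T) / N
    sign, logdet = np.linalg.slogdet(R)        # stable log-determinant
    return np.exp(logdet - np.sum(np.log(np.real(np.diag(R)))))
```

Because the diagonal of R normalizes each antenna's power, this statistic is insensitive to per-antenna calibration errors, which is exactly the robustness the uncalibrated scenario requires.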

84 citations

Journal ArticleDOI
TL;DR: A new non-data-aided estimate is proposed, which makes use of the sixth-order moment of the received data, and which can be tuned for a particular constellation in order to extend the usable range of SNR values.
Abstract: Signal-to-noise ratio (SNR) estimation is an important task in many digital communication systems. With nonconstant-modulus constellations, the performance of the classical second- and fourth-order moments estimate is known to degrade with increasing SNR. A new non-data-aided estimate is proposed, which makes use of the sixth-order moment of the received data, and which can be tuned for a particular constellation in order to extend the usable range of SNR values. The advantage of the new method is especially significant for constellations with two different amplitude levels, e.g., 16-amplitude-and-phase-shift keying (16-APSK).
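The classical second- and fourth-order moments (M2M4) baseline mentioned above can be sketched as follows for a constant-modulus complex signal in circular Gaussian noise; this is the estimator whose high-SNR degradation the sixth-order method addresses, not the proposed estimator itself:

```python
import numpy as np

def m2m4_snr(y):
    """Classical M2M4 non-data-aided SNR estimate (linear scale).

    Assumes a constant-modulus complex signal (e.g. PSK) in circular
    complex Gaussian noise, for which M2 = S + N and M4 = S^2 + 4SN + 2N^2,
    so S = sqrt(2*M2^2 - M4). For non-constant-modulus constellations
    such as 16-APSK this estimator degrades at high SNR.
    """
    m2 = np.mean(np.abs(y) ** 2)               # second-order moment
    m4 = np.mean(np.abs(y) ** 4)               # fourth-order moment
    S = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))    # signal power estimate
    N = max(m2 - S, 1e-12)                     # noise power estimate
    return S / N
```

For a QPSK signal at 10 dB SNR the estimate lands close to the true linear SNR of 10 given enough samples.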

75 citations

Journal ArticleDOI
TL;DR: Strengths of data- and model-driven approaches are combined to develop estimators capable of incorporating multiple forms of spectral and propagation prior information while fitting the rapid variations of shadow fading across space.
Abstract: Power spectral density (PSD) maps providing the distribution of RF power across space and frequency are constructed using power measurements collected by a network of low-cost sensors. By introducing linear compression and quantization to a small number of bits, sensor measurements can be communicated to the fusion center with minimal bandwidth requirements. Strengths of data- and model-driven approaches are combined to develop estimators capable of incorporating multiple forms of spectral and propagation prior information while fitting the rapid variations of shadow fading across space. To this end, novel nonparametric and semiparametric formulations are investigated. It is shown that PSD maps can be obtained using support vector machine-type solvers. In addition to batch approaches, an online algorithm attuned to real-time operation is developed. Numerical tests assess the performance of the novel algorithms.
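As a simplified stand-in for the SVM-type solvers described above, a Gaussian-kernel ridge regression over sensor positions conveys the idea of fitting a smooth power map from scattered measurements; all names and parameter values here are illustrative, not the paper's formulation:

```python
import numpy as np

def fit_power_map(locs, powers, lam=1e-3, width=1.0):
    """Kernel ridge regression over sensor locations (illustrative sketch).

    locs: (n, 2) sensor positions; powers: (n,) measured powers (e.g. dB).
    Returns a predictor f(x) mapping query positions x of shape (m, 2)
    to interpolated power values.
    """
    d2 = ((locs[:, None, :] - locs[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * width ** 2))                  # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(locs)), powers)

    def predict(x):
        d2x = ((x[:, None, :] - locs[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2x / (2 * width ** 2)) @ alpha
    return predict
```

The kernel width controls how quickly the map can vary across space, which is the knob that trades smoothness against fitting the rapid variations of shadow fading.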

71 citations

Journal ArticleDOI
01 Jul 2012
TL;DR: A two-objective evolutionary algorithm is proposed which concurrently takes into account, during the evolutionary process, both the localization accuracy and certain topological constraints induced by connectivity considerations, demonstrating its effectiveness and stability.
Abstract: Knowing the locations of nodes plays an important role in many current and envisioned wireless sensor network applications. In this framework, we consider the problem of estimating the locations of all the nodes of a network, based on noisy distance measurements for those pairs of nodes in range of each other, and on a small fraction of anchor nodes whose actual positions are known a priori. The methods proposed so far in the literature for tackling this non-convex problem do not generally provide accurate estimates. The difficulty of the localization task is exacerbated by the fact that the network is not generally uniquely localizable when its connectivity is not sufficiently high. In order to alleviate this drawback, we propose a two-objective evolutionary algorithm which concurrently takes into account, during the evolutionary process, both the localization accuracy and certain topological constraints induced by connectivity considerations. The proposed method is tested with different network configurations and sensor setups, and compared in terms of normalized localization error with another metaheuristic approach, namely SAL, based on simulated annealing. The results show that, in all the experiments, our approach achieves considerable accuracy and significantly outperforms SAL, demonstrating its effectiveness and stability.
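The localization-accuracy objective evaluated for each candidate solution in such an evolutionary search can be sketched as the squared mismatch between candidate inter-node distances and the noisy measurements, with anchor positions clamped. This is a simplified sketch with illustrative names, and it omits the paper's second (connectivity-constraint) objective:

```python
import numpy as np

def localization_cost(pos, edges, dmeas, anchors, anchor_pos):
    """Per-candidate fitness for an evolutionary localization search (sketch).

    pos: (n, 2) candidate node positions; edges: list of (i, j) pairs of
    nodes in range of each other; dmeas: measured distances for those pairs;
    anchors: indices of nodes whose true positions anchor_pos are known.
    """
    p = pos.copy()
    p[anchors] = anchor_pos                    # anchors are fixed, not evolved
    cost = 0.0
    for (i, j), d in zip(edges, dmeas):
        cost += (np.linalg.norm(p[i] - p[j]) - d) ** 2
    return cost
```

With exact distance measurements the cost vanishes at the true configuration and grows as candidate positions drift, which is what the evolutionary search minimizes.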

65 citations


Cited by
01 Nov 1981
TL;DR: In this paper, the authors studied the effect of local derivatives on the detection of intensity edges in images, where the local difference of intensities is computed for each pixel in the image.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from calculus, namely taking derivatives, and from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: suppose we have a 1D image and we take the local difference of intensities, DI(x) = (1/2)(I(x + 1) − I(x − 1)), which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, a material (pigment) change, or a change in depth as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: suppose the image consists of a single (slightly sloped) edge:
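The local difference described in these notes is a one-liner with NumPy; a minimal sketch (endpoint handling is a choice, here zeros, since the endpoints lack a neighbor on one side):

```python
import numpy as np

def local_difference(I):
    """Central difference DI(x) = (1/2) * (I(x+1) - I(x-1)) from the notes.

    Returns an array matching I in length, with the two endpoints set to 0.
    Large positive values mark low-to-high edges, large negative values
    mark high-to-low edges.
    """
    DI = np.zeros_like(I, dtype=float)
    DI[1:-1] = 0.5 * (I[2:] - I[:-2])
    return DI
```

Applied to a step edge such as [0, 0, 0, 10, 10, 10], the response peaks at the two samples straddling the step, which is exactly the edge-marking behavior described above.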

1,829 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a survey of self-interference mitigation techniques for in-band full-duplex (IBFD) wireless systems and discuss the challenges and opportunities in the design and analysis of IBFD wireless systems.
Abstract: In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. One of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference that the modem's transmitter causes to its own receiver. This tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.
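Of the surveyed techniques, digital-domain self-interference cancellation can be sketched as a least-squares fit of a short linear self-interference channel from the known transmit samples, whose contribution is then subtracted from the received signal. This is a hypothetical minimal sketch, not a complete IBFD design; real systems combine it with antenna-domain and analog-domain suppression:

```python
import numpy as np

def cancel_self_interference(x, y, L=4):
    """Digital SI cancellation sketch with an L-tap linear channel model.

    x: known transmit samples; y: received samples containing the SI plus
    the desired signal. Fits the SI channel by least squares and returns
    the residual after subtracting the estimated SI.
    """
    n = len(y)
    X = np.column_stack([np.concatenate([np.zeros(k), x[:n - k]])
                         for k in range(L)])       # delayed copies of x
    h, *_ = np.linalg.lstsq(X, y, rcond=None)      # LS SI-channel estimate
    return y - X @ h                               # residual ~ desired signal
```

Because the transmit signal is known exactly at the modem, the residual after subtraction is close to the (much weaker) desired signal, up to a small leakage from projecting it onto the channel-fitting subspace.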

1,752 citations

Posted Content
TL;DR: This tutorial surveys a wide range of IBFD self-interference mitigation techniques and discusses numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.
Abstract: In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. Because one of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference caused by an IBFD node's own transmissions to its desired receptions, this tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.

1,549 citations

Journal ArticleDOI
TL;DR: This is a very useful handbook for engineers, especially those working in signal processing, and it provides real-data bootstrap applications to illustrate the theory covered in the earlier chapters.
Abstract: The bootstrap has found many applications in engineering, including artificial neural networks, biomedical engineering, environmental engineering, image processing, and radar and sonar signal processing. Basic concepts of the bootstrap are summarized in each section as a step-by-step algorithm for ease of implementation. Most of the applications are taken from the signal processing literature. The principles of the bootstrap are introduced in Chapter 2. Both the nonparametric and parametric bootstrap procedures are explained. Babu and Singh (1984) have demonstrated that in general, these two procedures behave similarly for pivotal (Studentized) statistics. The fact that the bootstrap is not the solution for all problems has been known to the statistics community for a long time; however, this fact is rarely touched on in the manuscripts meant for practitioners. It was first observed by Babu (1984) that the bootstrap does not work in the infinite variance case. Bootstrap Techniques for Signal Processing explains the limitations of the bootstrap method with an example. I especially liked the presentation style. The basic results are stated without proofs; however, the application of each result is presented as a simple step-by-step process, easy for nonstatisticians to follow. The bootstrap procedures, such as the moving block bootstrap for dependent data, along with applications to autoregressive models and to estimation of power spectral density, are also presented in Chapter 2. Signal detection in the presence of noise is generally formulated as a hypothesis-testing problem. Chapter 3 introduces principles of bootstrap hypothesis testing. The topics are introduced with interesting real-life examples. Flow charts, typical in engineering literature, are used to aid explanations of the bootstrap hypothesis testing procedures.
The bootstrap leads to second-order correction due to pivoting; this improvement in the results due to pivoting is also explained. In the second part of Chapter 3, signal processing is treated as a regression problem. The performance of the bootstrap for matched filters as well as constant false-alarm rate matched filters is also illustrated. Chapters 2 and 3 focus on estimation problems. Chapter 4 introduces bootstrap methods used in model selection. Due to the inherent structure of the subject matter, this chapter may be difficult for nonstatisticians to follow. Chapter 5 is the most impressive chapter in the book, especially from the standpoint of statisticians. It provides real data bootstrap applications to illustrate the theory covered in the earlier chapters. These include applications to optimal sensor placement for knock detection and land-mine detection. The authors also provide a MATLAB toolbox comprising frequently used routines. Overall, this is a very useful handbook for engineers, especially those working in signal processing.
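The basic nonparametric bootstrap of Chapter 2 can be sketched as a percentile confidence interval; the pivotal (Studentized) refinements the review highlights build on this same resampling loop:

```python
import numpy as np

def bootstrap_ci(data, stat, B=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile confidence interval (sketch).

    Resamples the data with replacement B times, recomputes the statistic
    on each resample, and returns the (alpha/2, 1 - alpha/2) quantiles of
    the bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(B)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

For the sample mean of 500 draws from a unit-variance distribution centered at 5, the resulting 95% interval is narrow and brackets the sample mean.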

1,292 citations