Author

Mats Viberg

Bio: Mats Viberg is an academic researcher from Chalmers University of Technology. The author has contributed to research in topics: Sensor array & Estimation theory. The author has an h-index of 41 and has co-authored 231 publications receiving 11749 citations. Previous affiliations of Mats Viberg include Linköping University & Blekinge Institute of Technology.


Papers
Proceedings ArticleDOI
05 Jun 2019
TL;DR: Simulation results show that the proposed method outperforms the existing Partial Relaxed Covariance Fitting method, especially in difficult conditions with small sample size and low Signal-to-Noise Ratio, while its threshold performance approaches that of Deterministic Maximum Likelihood at significantly lower cost.
Abstract: The so-called Partial Relaxation approach has recently been proposed to solve the Direction-of-Arrival estimation problem. In this paper, we extend the previous work by applying Covariance Fitting with a data model that includes the noise covariance. Instead of applying a single-source approximation to multi-source estimation criteria, as is the case for MUSIC, the conventional beamformer, or the Capon beamformer, the Partial Relaxation approach accounts for the existence of multiple sources using a non-parametric modification of the signal model. In the Partial Relaxation framework, the structure of the desired direction is kept, whereas the sensor array manifold corresponding to the remaining signals is relaxed [1], [2]. This procedure makes it possible to compute a closed-form solution for the relaxed signal part and leads to a simple spectral search with significantly reduced computational complexity. Unlike the existing Partial Relaxed Covariance Fitting approach, in this paper we exploit more prior knowledge of the structure of the covariance matrix by also considering the noise covariance. Simulation results show that the proposed method outperforms the existing Partial Relaxed Covariance Fitting method, especially in difficult conditions with small sample size and low Signal-to-Noise Ratio. Its threshold performance is close to that of Deterministic Maximum Likelihood, but at significantly lower cost.
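A minimal sketch of the spectral-search idea, using MUSIC, one of the single-source-approximation baselines the abstract contrasts against (the Partial Relaxation estimator itself involves a closed-form relaxed-signal term that is not reproduced here). The array geometry, source angles, SNR, and snapshot count are illustrative assumptions:

```python
import numpy as np

# MUSIC spectral search on a uniform linear array (ULA); all scenario
# parameters below are illustrative assumptions, not the paper's setup.
M, d = 8, 0.5                               # sensors, spacing in wavelengths
true_doas = np.deg2rad([-10.0, 15.0])
K, N, snr_db = len(true_doas), 200, 10.0

def steering(theta):
    # M x len(theta) ULA array manifold
    theta = np.atleast_1d(theta)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta)[None, :])

rng = np.random.default_rng(0)
A = steering(true_doas)
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
E = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = 10 ** (snr_db / 20) * (A @ S) + E

R = X @ X.conj().T / N                      # sample covariance
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, : M - K]                          # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
peaks = np.where((spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:]))[0] + 1
top = peaks[np.argsort(spectrum[peaks])[-K:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
```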

2 citations

Posted Content
TL;DR: In this paper, a wideband spectrum sensing method is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate. The correlation matrix of a finite number of noisy samples is computed and used by a non-linear least square estimator to detect the occupied and vacant channels of the spectrum.
Abstract: For systems and devices, such as cognitive radios and networks, that need to be aware of available frequency bands, spectrum sensing plays an important role. A major challenge in this area is the requirement of a high sampling rate when sensing a wideband signal. In this paper, a wideband spectrum sensing method is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate. The correlation matrix of a finite number of noisy samples is computed and used by a non-linear least square (NLLS) estimator to detect the occupied and vacant channels of the spectrum. We provide an expression for the detection threshold as a function of the sampling parameters and the noise power. Also, a sequential forward selection algorithm is presented to find the occupied channels with low complexity. The method can be applied to both correlated and uncorrelated wideband multichannel signals. A comparison with conventional energy detection using Nyquist-rate sampling shows that the proposed scheme can yield similar performance for SNR above 4 dB at a sampling rate a factor of 3 smaller.
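As a hedged illustration, the sketch below implements the comparison baseline named in the abstract — conventional Nyquist-rate energy detection per channel; the paper's sub-Nyquist sampling scheme and NLLS estimator are not reproduced. The channel plan, tone model, SNR, and median-based threshold are illustrative assumptions:

```python
import numpy as np

# Conventional Nyquist-rate energy detection per channel (the baseline
# scheme in the abstract's comparison). Channel plan, tone model, SNR
# and the median-based threshold are illustrative assumptions.
rng = np.random.default_rng(1)
n_ch, bins_per_ch = 16, 64                 # channels, FFT bins per channel
N = n_ch * bins_per_ch
occupied = {2, 7, 11}                      # hypothetical active channels
snr_db = 4.0

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # noise
t = np.arange(N)
for ch in occupied:                        # one tone at each occupied channel's center
    f = (ch + 0.5) / n_ch                  # normalized frequency, cycles/sample
    x += 10 ** (snr_db / 20) * np.exp(2j * np.pi * f * t)

X = np.fft.fft(x)
energy = (np.abs(X) ** 2).reshape(n_ch, bins_per_ch).sum(axis=1) / N
threshold = 2.0 * np.median(energy)        # crude noise-floor based threshold
print("detected channels:", sorted(int(c) for c in np.where(energy > threshold)[0]))
```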

2 citations

Proceedings ArticleDOI
31 Oct 1994
TL;DR: In this article, the problem of using a partly calibrated array for maximum likelihood (ML) bearing estimation of possibly coherent signals buried in unknown correlated noise fields is shown to admit a neat solution under fairly general conditions.
Abstract: The problem of using a partly calibrated array for maximum likelihood (ML) bearing estimation of possibly coherent signals buried in unknown correlated noise fields is shown to admit a neat solution under fairly general conditions. The ML estimator introduced in this paper (and referred to as MLE) is shown to be asymptotically equivalent to a subspace-based bearing estimator proposed by Wu and Wong (see IEEE Trans. Signal Processing, vol. 42, Sept. 1994) (called UNCLE and re-derived herein by a simpler approach than in the original work). A statistical analysis is performed, proving that the MLE and UNCLE methods are asymptotically equivalent and statistically efficient. In a simulation study, the methods are also found to possess very similar finite-sample properties.
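For context, a minimal sketch of the concentrated deterministic ML criterion, maximizing tr{P_A(theta) R} over angle pairs by grid search — the kind of criterion this family of estimators builds on. The paper's partly calibrated array and unknown correlated noise field are not modeled; ULA geometry, angles, and SNR are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Concentrated deterministic ML (DML) grid search: maximize, over angle
# pairs, the signal energy captured by the projection onto span(A(theta)).
M, d, N, snr_db = 8, 0.5, 100, 10.0
true_doas = np.deg2rad([-5.0, 20.0])

def steering(theta):
    theta = np.atleast_1d(theta)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta)[None, :])

rng = np.random.default_rng(2)
A = steering(true_doas)
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
E = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = 10 ** (snr_db / 20) * (A @ S) + E
R = X @ X.conj().T / N                               # sample covariance

grid = np.deg2rad(np.linspace(-60.0, 60.0, 121))     # 1 degree spacing
best, best_pair = -np.inf, None
for i, j in combinations(range(len(grid)), 2):
    Ag = steering(grid[[i, j]])
    # Projection onto span(Ag); DML maximizes tr(P @ R).
    P = Ag @ np.linalg.solve(Ag.conj().T @ Ag, Ag.conj().T)
    crit = np.real(np.trace(P @ R))
    if crit > best:
        best, best_pair = crit, (i, j)
print("DML estimates (deg):", np.rad2deg(grid[list(best_pair)]))
```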

2 citations

Proceedings ArticleDOI
01 Nov 2012
TL;DR: Methods are presented to apply LASSO without the grid-size limitation and with lower complexity; simulations show that, compared to practical implementations of ML, the proposed techniques are less sensitive to differences in source power.
Abstract: The SPS-LASSO has recently been introduced as a solution to the problem of regularization parameter selection in the complex-valued LASSO problem. Still, the dependence on the grid size and the polynomial time of performing a convex optimization technique in each iteration, in addition to deficiencies in the low-noise regime, confine its performance for Direction of Arrival (DOA) estimation. This work presents methods to apply LASSO without the grid-size limitation and with less complexity. As we show by simulations, the proposed methods lose negligible performance compared to the Maximum Likelihood (ML) estimator, which needs a combinatorial search. We also show by simulations that, compared to practical implementations of ML, the proposed techniques are less sensitive to the source power difference.
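A minimal sketch of the gridded complex-valued LASSO baseline the paper improves on, solved by ISTA with complex soft-thresholding; the paper's grid-free and reduced-complexity methods are not reproduced. Array, grid, regularization parameter, and iteration count are illustrative assumptions:

```python
import numpy as np

# Gridded complex-valued LASSO for DOA, solved with ISTA (proximal
# gradient plus complex soft-thresholding on the magnitudes).
M, d, G = 8, 0.5, 181
grid = np.deg2rad(np.linspace(-90, 90, G))           # 1 degree grid
A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(grid)[None, :])

rng = np.random.default_rng(3)
s = np.zeros(G, complex)
s[[80, 110]] = [1.0, 0.8]                            # sources at -10 and +20 deg
y = A @ s + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

lam = 1.0                                            # illustrative regularization
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
x = np.zeros(G, complex)
for _ in range(500):
    z = x - A.conj().T @ (A @ x - y) / L             # gradient step on 0.5*||Ax-y||^2
    x = np.maximum(np.abs(z) - lam / L, 0) * np.exp(1j * np.angle(z))  # shrink
support = np.where(np.abs(x) > 0.2)[0]
print("recovered angles (deg):", np.rad2deg(grid[support]))
```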

2 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors examined the use of multi-element array (MEA) technology to improve the bit-rates of digital wireless communications, and showed that with high probability extraordinary capacity is available.
Abstract: This paper is motivated by the need for a fundamental understanding of the ultimate limits of bandwidth-efficient delivery of higher bit-rates in digital wireless communications, and also begins to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh-faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, with MEAs the scaling is, remarkably, almost n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bits/cycle at the 99% level. For, say, a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension, while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.
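The abstract's outage figures can be checked with a short Monte Carlo over the capacity formula C = log2 det(I_n + (rho/n) H H^H) for i.i.d. Rayleigh H known only at the receiver. The trial count and seed below are illustrative assumptions; the printed values should land near 1.2, 7, 19, and 88 bits/cycle:

```python
import numpy as np

# Monte Carlo check of the 99%-outage capacities quoted in the abstract:
# C = log2 det(I_n + (rho/n) H H^H), H i.i.d. complex Gaussian (Rayleigh),
# channel known at the receiver only, n antennas at each end.
rng = np.random.default_rng(4)
rho = 10 ** (21 / 10)                      # average received SNR, 21 dB
trials = 20000
for n in (1, 2, 4, 16):
    H = (rng.standard_normal((trials, n, n)) +
         1j * rng.standard_normal((trials, n, n))) / np.sqrt(2)
    gram = np.eye(n) + (rho / n) * H @ H.conj().transpose(0, 2, 1)
    cap = np.linalg.slogdet(gram)[1] / np.log(2)     # capacity in bits/cycle
    # 99% of channels exceed the 1st percentile of the capacity distribution.
    print(f"n = {n:2d}: 99%-outage capacity ~ {np.quantile(cap, 0.01):.1f} bits/cycle")
```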

10,526 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, sequential data, and the combining of models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.
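As a hedged companion to the survey's discussion of spectral-based methods, the sketch below computes the conventional (Bartlett) beamformer spectrum a^H R a and Capon's MVDR spectrum 1 / (a^H R^{-1} a) for two closely spaced sources; the scenario parameters are illustrative assumptions. Capon typically resolves closely spaced sources that the conventional beamformer merges, which is one of the resolution trade-offs the survey discusses:

```python
import numpy as np

# Conventional (Bartlett) and Capon (MVDR) spectra for a ULA;
# scenario parameters are illustrative assumptions.
M, d, N, snr_db = 10, 0.5, 500, 10.0
true_doas = np.deg2rad([0.0, 8.0])          # closely spaced sources

def steering(theta):
    theta = np.atleast_1d(theta)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta)[None, :])

rng = np.random.default_rng(5)
A = steering(true_doas)
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
E = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = 10 ** (snr_db / 20) * (A @ S) + E
R = X @ X.conj().T / N
Rinv = np.linalg.inv(R)

grid = np.deg2rad(np.linspace(-30, 30, 601))
Ag = steering(grid)
p_bartlett = np.real(np.sum(Ag.conj() * (R @ Ag), axis=0)) / M ** 2
p_capon = 1.0 / np.real(np.sum(Ag.conj() * (Rinv @ Ag), axis=0))

for name, p in [("bartlett", p_bartlett), ("capon", p_capon)]:
    peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
    top = peaks[np.argsort(p[peaks])[-2:]]
    print(name, "peaks (deg):", np.sort(np.rad2deg(grid[top])))
```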

4,410 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
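As a hedged illustration of the triangulation (lateration) scheme the survey classifies, the sketch below linearizes noisy range equations against a reference anchor and solves for position by least squares; the anchor layout and noise level are illustrative assumptions:

```python
import numpy as np

# Range-based lateration: ||p - a_i||^2 = d_i^2 becomes linear in p
# after subtracting the equation of a reference anchor a_0.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 6.0])               # hypothetical true position

rng = np.random.default_rng(6)
d = np.linalg.norm(anchors - target, axis=1) + 0.1 * rng.standard_normal(4)

# 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2, for i = 1..3
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est, "true:", target)
```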

4,123 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this article, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an
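For reference, Donsker's invariance principle, the result the reviewed book develops, stated in the notation of the spaces C[0, 1] and D[0, 1] above (a standard formulation, not quoted from the book):

```latex
% Donsker's invariance principle (standard statement). Let X_1, X_2, \dots
% be i.i.d. with mean 0 and variance \sigma^2, and S_k = X_1 + \cdots + X_k.
% Define the rescaled partial-sum process
\[
  W_n(t) \;=\; \frac{S_{\lfloor nt \rfloor}}{\sigma\sqrt{n}}, \qquad t \in [0,1].
\]
% Then, as n \to \infty, W_n converges weakly in D[0,1] to standard
% Brownian motion W:
\[
  W_n \;\Rightarrow\; W.
\]
```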

3,554 citations