Author

Mats Viberg

Bio: Mats Viberg is an academic researcher from Chalmers University of Technology. The author has contributed to research in the topics Sensor array and Estimation theory, has an h-index of 41, and has co-authored 231 publications receiving 11,749 citations. Previous affiliations of Mats Viberg include Linköping University and Blekinge Institute of Technology.


Papers
Proceedings ArticleDOI
15 Apr 2007
TL;DR: It is shown how interpolation using local models can make the calibration grid denser without increasing the number of measurements, and how a second step with weighted calibration can improve ESPRIT DOA estimation for arrays with large position errors.
Abstract: In arrays with scan dependent errors, such as large position errors, a dense calibration grid can become necessary. Calibration time is, however, very expensive and keeping the measured calibration grid as sparse as possible is important. In this paper it is shown how interpolation using local models can be used to make the calibration grid more dense without increasing the number of measurements. Furthermore, it is shown how the performance of the DOA estimation with ESPRIT using arrays with large position errors can be improved by a second step including weighted calibration.
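The second calibration step builds on standard ESPRIT DOA estimation. As a reference point, here is a minimal ESPRIT sketch in Python/NumPy for an ideal half-wavelength uniform linear array; the array size, source angles, and noise level are arbitrary illustrative choices, and the paper's calibration-grid interpolation and weighting are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 8-element half-wavelength ULA, two sources at -10 and 25 degrees.
M, d, N = 8, 0.5, 500
true_doas = np.deg2rad([-10.0, 25.0])
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(true_doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Signal subspace: the two dominant eigenvectors of the sample covariance
# (eigh returns eigenvalues in ascending order).
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)
Es = V[:, -2:]

# ESPRIT: the two maximally overlapping subarrays are related by a rotation
# whose eigenvalues encode the DOAs.
Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
est = np.sort(np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * d))))
print(est)  # close to [-10, 25]
```

With calibrated element positions, only the construction of the subarray pair changes; the shift-invariance step itself is unchanged.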

5 citations

Posted Content
TL;DR: In this article, the authors considered the remote estimation of a time-correlated signal using an energy harvesting (EH) sensor, designed power allocation strategies that minimize the mean-square error at the fusion center, and provided the optimal strategies for a number of illustrative scenarios.
Abstract: We consider the remote estimation of a time-correlated signal using an energy harvesting (EH) sensor. The sensor observes the unknown signal and communicates its observations to a remote fusion center using an amplify-and-forward strategy. We consider the design of optimal power allocation strategies in order to minimize the mean-square error at the fusion center. Contrary to traditional approaches, the degree of correlation between the signal values constitutes an important aspect of our formulation. We provide the optimal power allocation strategies for a number of illustrative scenarios. We show that the most majorized power allocation strategy, i.e., the most balanced feasible power allocation, is optimal for circularly wide-sense stationary (c.w.s.s.) signals with a static correlation coefficient and for sampled low-pass c.w.s.s. signals over a static channel. We show that for sampled low-pass c.w.s.s. signals over a fading channel, the optimal strategy can be characterized as a water-filling type solution. Motivated by the high complexity of the numerical solution of the optimization problem, we propose low-complexity policies for the general scenario. Numerical evaluations show that these low-complexity policies perform close to the optimal policies, and demonstrate the effect of the EH constraints and the degree of freedom of the signal.
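The water-filling characterization mentioned above follows the classic structure: power is poured over channels until a common water level is reached, with weaker channels possibly receiving nothing. A generic sketch with assumed scalar channel gains is below; the paper's actual objective is the fusion-center MSE under EH constraints, which this does not model:

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Find the water level mu (by bisection) such that the allocation
    p_i = max(0, mu - 1/g_i) spends exactly total_power."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / gains).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > total_power:
            hi = mu  # spent too much: lower the water level
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

# Strong channels get more power; the weakest (gain 0.25) gets none here.
p = water_filling([2.0, 1.0, 0.25], total_power=3.0)
print(p.round(3))  # 1.75, 1.25, 0.0
```

The "most majorized" (balanced) allocation noted in the abstract is the opposite regime: when channels are identical and static, the water-filling solution degenerates to an equal split.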

5 citations

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A paradigm for generating an array model from noise-corrupted calibration vectors is developed; the key idea is to use a local parametric model of the sensor responses.
Abstract: Many practical applications of signal processing require accurate determination of signal parameters from sensor array measurements. Most estimation techniques are sensitive to errors in the array response model. Thus, reliable array calibration schemes are of great importance. A paradigm for generating an array model from noise-corrupted calibration vectors is developed. The key idea is to use a local parametric model of the sensor responses. The potential improvement using the suggested scheme is demonstrated on real data collected from a full-scale hydroacoustic array.

5 citations

Proceedings ArticleDOI
07 May 1996
TL;DR: The ML signal parameter estimator derived for the non-coherent case (or its large-sample realizations such as MODE or WSF) asymptotically achieves the lowest possible estimation error variance (corresponding to the coherent Cramér-Rao bound).
Abstract: The problem of estimating the parameters of several wavefronts from the measurements of multiple sensors is often referred to as array signal processing. The maximum likelihood (ML) estimator in array signal processing for the case of non-coherent signals has been studied extensively. The focus here is on the ML estimator for the case of stochastic coherent signals, which arise due to, for example, specular multipath propagation. We show the very surprising fact that the ML estimates of the signal parameters obtained by ignoring the information that the sources are coherent coincide in large samples with the ML estimates obtained by exploiting the coherent source information. Thus, the ML signal parameter estimator derived for the non-coherent case (or its large-sample realizations such as MODE or WSF) asymptotically achieves the lowest possible estimation error variance (corresponding to the coherent Cramér-Rao bound).

5 citations

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A novel region-based scheme for dynamically modeling time-evolving statistics of video background, leading to an effective segmentation of foreground moving objects for a video surveillance system through introducing dynamic background region merging and splitting.
Abstract: This paper proposes a novel region-based scheme for dynamically modeling the time-evolving statistics of video background, leading to effective segmentation of foreground moving objects in a video surveillance system. The statistics-based video surveillance system of (L. Li et al., 2004) employs a Bayes decision rule for classifying foreground and background changes in individual pixels. Although principal feature representations significantly reduce the size of the tables of statistics, pixel-wise maintenance remains a challenge due to its computation and memory requirements. The proposed region-based scheme, an extension of that method, replaces pixel-based statistics with region-based statistics by introducing dynamic merging and splitting of background regions (or pixels). Simulations have been performed on several outdoor and indoor image sequences, and the results show a significant reduction in the memory required for the tables of statistics while maintaining relatively good quality in the segmented foreground video objects.
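To illustrate the pixel-versus-region trade-off, here is a hypothetical sketch that keeps one running background mean per fixed block instead of per pixel; the paper's dynamic region merging/splitting and Bayes classification are not reproduced, and the block size, learning rate, and threshold are arbitrary:

```python
import numpy as np

def block_background_update(bg, frame, block=8, alpha=0.05, thresh=20.0):
    """Maintain one background mean per block; flag deviating blocks as
    foreground and update the model only for background blocks."""
    nh, nw = frame.shape[0] // block, frame.shape[1] // block
    fg = np.zeros((nh, nw), dtype=bool)
    for i in range(nh):
        for j in range(nw):
            patch = frame[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            if abs(patch - bg[i, j]) > thresh:
                fg[i, j] = True  # foreground: leave the model untouched
            else:
                bg[i, j] = (1 - alpha) * bg[i, j] + alpha * patch
    return bg, fg

# Static scene at gray level 50 with a bright object in the top-left block.
bg = np.full((4, 4), 50.0)
frame = np.full((32, 32), 50.0)
frame[:8, :8] = 200.0
bg, fg = block_background_update(bg, frame)
print(fg.astype(int))
```

A 32x32 frame here needs only a 4x4 table of statistics, which is the memory saving the abstract points to; the region-based method adapts the partition instead of fixing it.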

5 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors examined the use of multi-element array (MEA) technology to improve the bit-rates of digital wireless communications and showed that, with high probability, extraordinary capacity is available.
Abstract: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable.
The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.
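The quoted capacities come from the log-det formula for an n x n i.i.d. Rayleigh channel known only at the receiver, with transmit power split equally across antennas. A Monte-Carlo sketch that approximately reproduces the 99%-outage numbers at 21 dB (trial count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def mea_outage_capacity(n, snr_db, q=1.0, trials=4000):
    """Monte-Carlo q-th percentile of C = log2 det(I + (SNR/n) H H^H)
    for i.i.d. Rayleigh H (n x n): channel known at the receiver only,
    total power split equally over the n transmit antennas."""
    snr = 10.0 ** (snr_db / 10.0)
    caps = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(n) + (snr / n) * (H @ H.conj().T))
        caps[t] = logdet / np.log(2)  # nats -> bits
    return np.percentile(caps, q)

# 99%-outage capacity at 21 dB average SNR, as in the abstract's examples.
caps = {n: mea_outage_capacity(n, 21.0) for n in (1, 2, 4)}
print({n: round(c, 1) for n, c in caps.items()})  # roughly 1.2, 7, 19 bits/cycle
```

The 1st percentile of the capacity distribution is the rate supported by 99% of channel realizations, matching the abstract's "99% level".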

10,526 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, latent variable models, sequential data, and the combining of models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.
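As a concrete instance of the subspace-based methods the review emphasizes, here is a minimal MUSIC sketch for a uniform linear array; the geometry, source angles, and noise level are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 8-element half-wavelength ULA, two sources at -5 and 20 degrees.
M, d, N, K = 8, 0.5, 400, 2
doas = np.deg2rad([-5.0, 20.0])
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Noise subspace: the M-K smallest eigenvectors of the sample covariance.
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)
En = V[:, :M - K]

# MUSIC pseudo-spectrum: large where the steering vector is (nearly)
# orthogonal to the noise subspace.
grid = np.deg2rad(np.arange(-90.0, 90.25, 0.25))
A_grid = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))
P = 1.0 / np.linalg.norm(En.conj().T @ A_grid, axis=0) ** 2

# Pick the two largest local maxima as the DOA estimates.
loc = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
top = loc[np.argsort(P[loc])[-2:]]
est = np.sort(np.rad2deg(grid[top]))
print(est)  # close to [-5, 20]
```

This is a spectral-based method in the review's terminology: the parameters come from peak-picking a one-dimensional spectrum rather than from a joint parametric fit such as ML or WSF.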

4,410 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
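Of the three schemes surveyed, triangulation from range measurements admits a compact least-squares form: subtracting one range equation from the others cancels the quadratic term and leaves a linear system. A minimal sketch with hypothetical anchor positions and noise-free ranges:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from >= 3 anchor/range pairs: subtracting the
    first range equation from the others cancels the quadratic |x|^2 term."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors and a target at (3, 4); ranges are noise-free here.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = np.array([3.0, 4.0])
ranges = [float(np.linalg.norm(target - np.asarray(a))) for a in anchors]
pos = trilaterate(anchors, ranges)
print(pos.round(3))  # recovers [3, 4]
```

With noisy ranges the same least-squares fit still applies, and extra anchors simply overdetermine the system; fingerprinting methods avoid the ranging step entirely, which is why the survey treats them separately.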

4,123 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations