SciSpace
Author

Mats Viberg

Bio: Mats Viberg is an academic researcher from Chalmers University of Technology. The author has contributed to research in topics: Sensor array & Estimation theory. The author has an h-index of 41 and has co-authored 231 publications receiving 11,749 citations. Previous affiliations of Mats Viberg include Linköping University & Blekinge Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: An overview of existing subspace-based techniques for system identification is given, grouped into the classes of realization-based and direct techniques.

94 citations

Journal ArticleDOI
TL;DR: An analysis for the class of so-called subspace fitting algorithms shows that an overall optimal weighting exists for a particular array and noise covariance error model and concludes that no other method can yield more accurate estimates for large samples and small model errors.
Abstract: The principal sources of estimation error in sensor array signal processing applications are the finite sample effects of additive noise and imprecise models for the antenna array and spatial noise statistics. While the effects of these errors have been studied individually, their combined effect has not yet been rigorously analyzed. The authors undertake such an analysis for the class of so-called subspace fitting algorithms. In addition to deriving first-order asymptotic expressions for the estimation error, they show that an overall optimal weighting exists for a particular array and noise covariance error model. In a companion paper, the optimally weighted subspace fitting method is shown to be asymptotically equivalent to the more complicated maximum a posteriori estimator. Thus, for the model in question, no other method can yield more accurate estimates for large samples and small model errors. Numerical examples and computer simulations are included to illustrate the obtained results and to verify the asymptotic analysis for realistic scenarios.

89 citations

Journal ArticleDOI
TL;DR: This paper provides a new analytic expression for the bias and root-mean-square (RMS) error of the estimated direction of arrival (DOA) in the presence of modeling errors, and shows that the DOA estimation error can be expressed as a ratio of Hermitian forms with a stochastic vector containing the modeling error.
Abstract: This paper provides a new analytic expression for the bias and root-mean-square (RMS) error of the estimated direction of arrival (DOA) in the presence of modeling errors. In previous work, first-order approximations of the RMS error were derived, which are accurate for small enough perturbations. However, those expressions are not able to capture the behavior of the estimation algorithm in the threshold region. In order to fill this gap, we provide a second-order performance analysis, which is valid over a larger interval of modeling errors. To this end, it is shown that the DOA estimation error for each signal source can be expressed as a ratio of Hermitian forms with a stochastic vector containing the modeling error. Then, an analytic expression for the moments of such a ratio of Hermitian forms is provided. Finally, a closed-form expression for the performance (bias and RMS error) is derived. Simulation results indicate that the new result remains accurate into the region where the algorithm breaks down.

84 citations

Journal ArticleDOI
TL;DR: Analysis of methods for estimating the parameters of narrow-band signals arriving at an array of sensors using so-called deterministic and stochastic maximum likelihood methods indicates that both ML methods provide efficient estimates for very moderate array sizes, whereas the beamforming method requires a somewhat larger array aperture to overcome the inherent bias and resolution problem.
Abstract: This paper considers analysis of methods for estimating the parameters of narrow-band signals arriving at an array of sensors. This problem has important applications in, for instance, radar direction finding and underwater source localization. The so-called deterministic and stochastic maximum likelihood (ML) methods are the main focus of this paper. A performance analysis is carried out assuming a finite number of samples and that the array is composed of a sufficiently large number of sensors. Arrays of several thousand antennas are not uncommon in, e.g., radar applications. Strong consistency of the parameter estimates is proved, and the asymptotic covariance matrix of the estimation error is derived. Unlike the previously studied large sample case, the present analysis shows that the accuracy is the same for the two ML methods. Furthermore, the asymptotic covariance matrix of the estimation error coincides with the deterministic Cramer-Rao bound. Under a certain assumption, the ML methods can be implemented by means of conventional beamforming for a large enough number of sensors. We also include a simple simulation study, which indicates that both ML methods provide efficient estimates for very moderate array sizes, whereas the beamforming method requires a somewhat larger array aperture to overcome the inherent bias and resolution problem.

81 citations
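The conventional beamforming that the abstract above relates to the ML methods can be sketched as a delay-and-sum (Bartlett) spatial spectrum. The scenario below (a half-wavelength uniform linear array with 16 sensors and a single source at 10 degrees) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def bartlett_spectrum(X, angles_deg):
    """Conventional (delay-and-sum) beamforming spectrum for a uniform
    linear array with half-wavelength element spacing.
    X: (m sensors) x (N snapshots) complex data matrix."""
    m, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance
    spec = np.empty(len(angles_deg))
    for i, th in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))   # steering vector
        spec[i] = np.real(a.conj() @ R @ a) / m
    return spec

# Illustrative scenario: one narrow-band source at 10 degrees, 16 sensors.
rng = np.random.default_rng(0)
m, N, theta = 16, 200, np.deg2rad(10.0)
a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(bartlett_spectrum(X, grid))]
print(est)   # peak of the spectrum lands near the true 10 degrees
```

The bias and resolution limits mentioned in the abstract show up here as the beamwidth of the spectrum, which shrinks only as the aperture m grows.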

Proceedings ArticleDOI
24 Sep 1997
TL;DR: In this paper, it is shown that a better conditioned minimization problem can be obtained if the problem is separated with respect to the linear parameters, which will increase the convergence speed of the minimization.
Abstract: Neural network minimization problems are often ill-conditioned, and two ways to handle this are discussed in this contribution. It is shown that a better-conditioned minimization problem can be obtained if the problem is separated with respect to the linear parameters, which increases the convergence speed of the minimization. The Levenberg-Marquardt minimization method is often found to perform better than the Gauss-Newton and steepest-descent methods on neural network minimization problems. The reason for this is investigated, and it is shown that the Levenberg-Marquardt method divides the parameters into two subsets: for one subset the convergence is almost quadratic, like that of the Gauss-Newton method, while the parameters in the other subset hardly converge at all. In this way, fast convergence among the important parameters is obtained.

79 citations
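The separation with respect to the linear parameters described above (often called variable projection) can be sketched for a one-hidden-layer tanh network. The network shape, data, and parameter values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def separated_residual(nonlin, x, y):
    """Variable-projection residual for y ≈ Phi(x; w, b) @ c, where the
    output-layer weights c are linear in the model and are eliminated by
    linear least squares, leaving a better-conditioned problem in the
    nonlinear parameters (w, b)."""
    w, b = np.split(nonlin, 2)
    Phi = np.tanh(np.outer(x, w) + b)            # N x hidden basis matrix
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # optimal linear weights
    return y - Phi @ c, c

# Illustrative check: with the true nonlinear parameters, the residual
# vanishes and the linear weights are recovered exactly.
x = np.linspace(-2.0, 2.0, 50)
w_true = np.array([1.0, -2.0])
b_true = np.array([0.5, -0.5])
c_true = np.array([2.0, 3.0])
y = np.tanh(np.outer(x, w_true) + b_true) @ c_true
r, c = separated_residual(np.concatenate([w_true, b_true]), x, y)
```

In a Gauss-Newton or Levenberg-Marquardt loop, only the nonlinear parameters w and b would be iterated; the linear weights c are re-solved in closed form at every step, which is the source of the improved conditioning.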


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors examined the performance of using multi-element array (MEA) technology to improve the bit-rate of digital wireless communications and showed that with high probability extraordinary capacity is available.
Abstract: This paper is motivated by the need for a fundamental understanding of the ultimate limits of bandwidth-efficient delivery of higher bit-rates in digital wireless communications, and also begins to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh-faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably, with MEAs the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle, respectively, while if n = 1 there is only about 1.2 bits/cycle at the 99% level. For, say, a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension, while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.

10,526 citations
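The capacity scaling described in the abstract can be checked numerically with the log-det capacity formula. The Monte Carlo setup below is a sketch under the paper's stated assumptions (i.i.d. Rayleigh-faded paths, channel known only at the receiver, total transmit power split equally across the n transmit antennas).

```python
import numpy as np

def mea_capacity_samples(n, snr_db, trials=2000, seed=0):
    """Monte Carlo samples of the capacity (bits/cycle) of an n x n
    i.i.d. Rayleigh MEA channel, known at the receiver only, with equal
    power allocated to each of the n transmit antennas."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    caps = np.empty(trials)
    for t in range(trials):
        # i.i.d. complex Gaussian channel entries, unit average power per path
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(n) + (snr / n) * (H @ H.conj().T))
        caps[t] = logdet / np.log(2)             # log-det capacity in bits
    return caps

caps = mea_capacity_samples(4, 21.0)
print(np.percentile(caps, 1))   # 99%-outage capacity, roughly the 19 bits/cycle quoted
```

The 1st percentile of the samples corresponds to the "99% of the channels" figure in the abstract; rerunning with n = 1 reproduces the roughly 1.2 bits/cycle baseline.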

Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook treatment of machine learning spanning probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.

4,410 citations
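As a concrete instance of the subspace-based methods the review emphasizes, a minimal MUSIC pseudospectrum for a half-wavelength uniform linear array might look as follows. The scenario (8 sensors, two uncorrelated sources at -10 and 20 degrees) is an illustrative assumption, not taken from the article.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """MUSIC pseudospectrum for a uniform linear array with
    half-wavelength element spacing. X: (m sensors) x (N snapshots)."""
    m, N = X.shape
    R = X @ X.conj().T / N                  # sample covariance
    _, vecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]           # noise-subspace eigenvectors
    spec = np.empty(len(angles_deg))
    for i, th in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))   # steering vector
        spec[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return spec

# Illustrative scenario: 8 sensors, two uncorrelated sources at -10 and 20 deg.
rng = np.random.default_rng(1)
m, N = 8, 200
doas = np.array([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(np.deg2rad(doas))))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(music_spectrum(X, 2, grid))]
print(est)   # highest pseudospectrum peak near one of the true DOAs
```

The sharp peaks of the pseudospectrum, compared with the broad lobes of beamforming, are the practical payoff of the subspace decomposition.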

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.

4,123 citations
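Of the three location estimation schemes surveyed, triangulation from range measurements (trilateration) admits a compact least-squares sketch. The anchor coordinates and ranges below are made-up examples, not from the survey.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position estimate from range measurements to known
    anchor positions, linearized by subtracting the first anchor's
    range equation (a standard trilateration trick)."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Made-up example: three anchors, true position (3, 4).
anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]]
pos = trilaterate(anchors, [5.0, np.sqrt(65.0), np.sqrt(45.0)])
print(pos)   # -> approximately [3. 4.]
```

With noisy ranges the same least-squares solve returns the best linear fit, which is where the survey's accuracy and precision comparisons come into play.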

Book ChapterDOI
01 Jan 2011
TL;DR: Weak-convergence methods in metric spaces were studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak-convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations