Author

Mats Viberg

Bio: Mats Viberg is an academic researcher from Chalmers University of Technology. The author has contributed to research in the topics of sensor arrays and estimation theory. The author has an h-index of 41 and has co-authored 231 publications receiving 11,749 citations. Previous affiliations of Mats Viberg include Linköping University & Blekinge Institute of Technology.


Papers
Proceedings ArticleDOI
22 Jun 2014
TL;DR: An optimization problem is formulated in terms of the mainlobe and sidelobe properties of the ambiguity function, and an approximate solution based on convex relaxation is provided; numerical results show the advantage of a smartly selected signal over a conventional waveform design.
Abstract: Modern signal generators offer the capability to synthesize arbitrary wideband waveforms for radar and sonar applications. This makes it possible to optimize signals for specific purposes or scenarios. Herein, we discuss how to design a wideband waveform for clutter suppression. We formulate an optimization in terms of the mainlobe and sidelobe properties of the ambiguity function, and then provide an approximate solution based on convex relaxation. Numerical evaluation shows the advantage of selecting a smart signal compared to a conventional waveform design. The advantages appear as a lower probability of false alarm and a higher probability of correct target detection.
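The abstract does not spell out the ambiguity function computation itself, so the following is only a minimal numpy sketch of evaluating a discrete narrowband ambiguity surface for a candidate waveform, whose mainlobe and sidelobe levels such an optimization would target. The LFM chirp, the grid sizes, and the crude mainlobe/sidelobe split are illustrative assumptions, not the paper's formulation, and the convex relaxation itself is not reproduced.

```python
# Illustrative sketch only: a discrete narrowband ambiguity surface for a
# candidate waveform. The chirp and grid parameters are assumptions.
import numpy as np

N = 128                                   # samples per waveform
t = np.arange(N)
s = np.exp(1j * np.pi * 0.3 * t**2 / N)   # example LFM chirp
s /= np.linalg.norm(s)

delays = np.arange(-N + 1, N)             # discrete delay bins
dopplers = np.linspace(-0.5, 0.5, 101)    # normalized Doppler frequencies

A = np.zeros((len(dopplers), len(delays)))
for i, nu in enumerate(dopplers):
    s_dopp = s * np.exp(2j * np.pi * nu * t)           # Doppler-shifted copy
    # cross-correlation over delay = ambiguity cut at this Doppler
    A[i, :] = np.abs(np.correlate(s_dopp, s, mode="full"))

mainlobe = A.max()
sidelobe_mask = A < 0.9 * mainlobe        # crude mainlobe/sidelobe split
print("peak sidelobe level (dB):",
      20 * np.log10(A[sidelobe_mask].max() / mainlobe))
```

A design along the paper's lines would then tune the samples of s, for example their phases, to lower such sidelobe levels while constraining the mainlobe.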

2 citations

Proceedings ArticleDOI
24 Oct 1999
TL;DR: For robust detection based on the criterion function of a certain class of estimators, a two-step procedure is proposed: an alternative representation of the residuals is found using a predictor, and a parameter that transforms the criterion function to pivotal form is estimated from the new residuals via bootstrap resampling.
Abstract: A critical problem in many signal processing applications is the determination of the correct model order, for example the number of multipath components in a received communication signal. One approach to detect the model order is to use the distribution of the criterion function of the estimator applied to find the parameters of interest. Unfortunately, the nominal distribution of such a criterion function relies heavily on a correct model of the observed signal. In practice, with modeling errors present, the distribution is unknown. For robust detection based on the criterion function of a certain class of estimators, a two-step procedure is proposed. First, an alternative representation of the residuals is found using a predictor. Second, using bootstrap resampling, a parameter is estimated from the new residuals. This parameter transforms the criterion function to pivotal form. Numerical experiments show robustness to a range of possible modeling errors. An example from real measured array data is included.
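As a rough illustration of the bootstrap step described above, the sketch below resamples model residuals to studentize a fit criterion so that it becomes approximately pivotal. The single-sinusoid toy model, the grid search, and all parameter values are assumptions made for illustration; the paper's predictor-based residual representation and exact pivotal transformation are not reproduced.

```python
# Minimal sketch of residual bootstrapping to studentize a criterion statistic.
# The toy model and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 64
n = np.arange(N)
x = np.cos(0.4 * n) + 0.3 * rng.standard_normal(N)       # toy observation

def fit_and_criterion(data, freq_grid=np.linspace(0.1, 1.0, 200)):
    """Best single-sinusoid LS fit over a grid; returns (residual power, fit)."""
    best_J, best_fit = np.inf, None
    for w in freq_grid:
        H = np.column_stack([np.cos(w * n), np.sin(w * n)])
        fit = H @ np.linalg.lstsq(H, data, rcond=None)[0]
        J = np.mean((data - fit) ** 2)
        if J < best_J:
            best_J, best_fit = J, fit
    return best_J, best_fit

J_obs, fit = fit_and_criterion(x)
resid = x - fit
resid -= resid.mean()                                     # recentre residuals

B = 200
J_boot = np.empty(B)
for b in range(B):
    # regenerate data from the fitted model plus resampled residuals
    x_star = fit + rng.choice(resid, size=N, replace=True)
    J_boot[b] = fit_and_criterion(x_star)[0]

# studentized (approximately pivotal) statistic
J_pivot = (J_obs - J_boot.mean()) / J_boot.std()
print("studentized criterion:", J_pivot)
```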

1 citation

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This work adopts an experimentally validated additive noise model in which the noise level at an antenna is proportional to the signal power at that antenna; the resulting precoder design problem is non-convex in general, and a tight convex relaxation is provided for the single-antenna information user case.
Abstract: We investigate the performance of a communication system with simultaneous wireless information and power transfer capabilities under non-ideal transmitter hardware. We adopt an experimentally validated additive noise model in which the level of the noise at an antenna is proportional to the signal power at that antenna. We consider the linear precoder design problem and focus on the problem of minimizing the mean-square error under energy harvesting constraints. This set-up, in general, constitutes a non-convex formulation. For the single antenna information user case, we provide a tight convex relaxation, i.e. a convex formulation from which an optimal solution for the original problem can be constructed. For the general case, we propose a block coordinate descent technique to solve the resulting non-convex problem. Our numerical results illustrate the effect of hardware impairments on the system.
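The precoder and energy-harvesting formulation above is not reproduced here; the sketch below only illustrates the block coordinate descent idea on a toy bilinear least-squares problem, alternating closed-form updates between two variable blocks of a non-convex objective. The problem and its dimensions are arbitrary assumptions.

```python
# Generic block coordinate descent sketch on the toy bilinear problem
#   min_{u, v} ||Y - u v^T||_F^2
# This only illustrates the alternating-update idea, not the paper's
# precoder/energy-harvesting formulation.
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((8, 5))

u = rng.standard_normal(8)        # block 1
v = rng.standard_normal(5)        # block 2

for it in range(50):
    # Each block update is the closed-form least-squares solution with the
    # other block held fixed; the joint problem is non-convex.
    u = Y @ v / (v @ v)
    v = Y.T @ u / (u @ u)

print("final objective:", np.linalg.norm(Y - np.outer(u, v)) ** 2)
```

Each update can only decrease the objective with the other block fixed, which is the monotonicity property such alternating schemes rely on.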

1 citation

Proceedings ArticleDOI
26 Dec 2006
TL;DR: A predistortion method based on a coherence function criterion is proposed; it carries out linearization without knowledge of the linear block in the Hammerstein system, which is particularly desirable for nonlinear acoustic echo cancellation applications.
Abstract: This paper addresses compensation for nonlinearity in Hammerstein nonlinear systems. We propose a predistortion method that is based on a coherence function criterion. The proposed method carries out linearization without knowing the linear block in the Hammerstein system. This is particularly desirable for nonlinear acoustic echo cancellation applications, where dealing with the linear block can be computationally cumbersome due to the long room acoustic impulse response. The effectiveness of the algorithm is demonstrated through computer simulations.
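To make the coherence criterion concrete, here is a small scipy-based sketch that simulates a Hammerstein system (a static cubic nonlinearity followed by an FIR "room" filter) and measures the input-output coherence, which drops below one when the nonlinearity is active. The nonlinearity, filter, and signal parameters are assumptions; the paper's actual predistorter design is not reproduced.

```python
# Sketch: using the coherence function as a linearity measure for a
# Hammerstein system (static cubic nonlinearity followed by an FIR filter).
# The nonlinearity, filter, and signal parameters are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter, coherence

rng = np.random.default_rng(2)
fs = 8000
x = rng.standard_normal(4 * fs)                    # white excitation, 4 s

h = rng.standard_normal(256) * np.exp(-np.arange(256) / 60.0)  # toy "room" FIR

def hammerstein(u, alpha):
    """Static cubic nonlinearity followed by the linear block h."""
    return lfilter(h, [1.0], u + alpha * u**3)

# Magnitude-squared coherence stays near 1 at all frequencies for a noise-free
# linear system and drops below 1 when the nonlinearity distorts the signal.
f, C_lin = coherence(x, lfilter(h, [1.0], x), fs=fs, nperseg=1024)
f, C_nl = coherence(x, hammerstein(x, alpha=0.5), fs=fs, nperseg=1024)

print("mean coherence, linear path:     ", C_lin.mean())
print("mean coherence, Hammerstein path:", C_nl.mean())
```

A predistorter of the kind proposed in the paper would be adapted to push this coherence back toward one without identifying the linear block explicitly.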

1 citation


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors examine the use of multi-element array (MEA) technology to improve the bit rates of digital wireless communications and show that, with high probability, extraordinary capacity is available.
Abstract: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably, with MEAs the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bits/cycle at the 99% level. For, say, a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised.
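The capacity figures quoted above follow from the equal-power capacity expression C = log2 det(I_n + (SNR/n) H H^H) for an i.i.d. Rayleigh channel known only at the receiver. The following Monte Carlo sketch evaluates its 1% outage quantile at 21 dB; the trial count and seed are arbitrary choices, and the simulated values should only land near the 1.2, 7, 19 and 88 bits/cycle figures cited in the abstract.

```python
# Monte Carlo sketch of the equal-power MIMO capacity
#   C = log2 det(I_n + (SNR/n) H H^H),  H with i.i.d. CN(0, 1) entries,
# evaluated at its 1% quantile (capacity exceeded for 99% of channels).
# Trial count and seed are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
snr = 10 ** (21 / 10)          # 21 dB average received SNR
trials = 20000

for n in (1, 2, 4, 16):
    caps = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((n, n))
             + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        M = np.eye(n) + (snr / n) * (H @ H.conj().T)
        caps[t] = np.linalg.slogdet(M)[1] / np.log(2)   # bits/cycle
    print(f"n = {n:2d}: 99%-outage capacity = "
          f"{np.quantile(caps, 0.01):.1f} bits/cycle")
```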

10,526 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with neural networks, kernel methods, graphical models, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.
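As one concrete instance of the subspace-based methods emphasized above, here is a compact numpy sketch of MUSIC for a half-wavelength uniform linear array. The array size, snapshot count, source angles, and noise level are illustrative assumptions, and MUSIC is only one of the many estimators the survey covers.

```python
# Minimal MUSIC sketch for a half-wavelength uniform linear array.
# Array size, snapshots, source angles, and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(4)
m, snapshots = 8, 200                       # sensors, time samples
angles_true = np.deg2rad([-10.0, 15.0])     # source directions
d = len(angles_true)

def steering(theta):
    # Half-wavelength element spacing: phase = pi * k * sin(theta)
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

A = steering(angles_true)                                        # m x d
S = (rng.standard_normal((d, snapshots))
     + 1j * rng.standard_normal((d, snapshots))) / np.sqrt(2)    # source signals
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise                                                # array snapshots

R = X @ X.conj().T / snapshots                                   # sample covariance
_, eigvec = np.linalg.eigh(R)                                    # ascending order
En = eigvec[:, : m - d]                                          # noise subspace

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
proj = np.abs(En.conj().T @ steering(grid)) ** 2
p = 1.0 / proj.sum(axis=0)                                       # MUSIC pseudospectrum

# report the d largest local maxima of the pseudospectrum as DOA estimates
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top = peaks[np.argsort(p[peaks])[-d:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
```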

4,410 citations

Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current systems and solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
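To make the fingerprinting scheme discussed above concrete, the sketch below builds an offline RSSI fingerprint database on a grid and estimates a position online by k-nearest neighbours in signal space. The log-distance path-loss model, access point layout, and noise level are assumptions chosen for illustration, not taken from the survey.

```python
# Toy sketch of RSSI location fingerprinting with k-nearest neighbours.
# The path-loss model, access-point layout, and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(5)
aps = np.array([[0.0, 0.0], [10.0, 0.0],
                [0.0, 10.0], [10.0, 10.0]])     # access-point positions (m)

def rssi(pos, noise_std=0.0):
    """Log-distance path-loss RSSI vector seen from one position."""
    d = np.linalg.norm(aps - pos, axis=1) + 0.1
    return -40.0 - 30.0 * np.log10(d) + noise_std * rng.standard_normal(len(aps))

# Offline phase: fingerprint database on a 1 m grid
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
fingerprints = np.array([rssi(p) for p in grid])

# Online phase: noisy measurement at an unknown position, k-NN estimate
true_pos = np.array([3.4, 7.2])
z = rssi(true_pos, noise_std=2.0)
k = 4
nearest = np.argsort(np.linalg.norm(fingerprints - z, axis=1))[:k]
estimate = grid[nearest].mean(axis=0)
print("true:", true_pos, "estimate:", estimate)
```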

4,123 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4 which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an
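As a small numerical illustration of the weak convergence the book is about, the sketch below simulates Donsker's invariance principle: the running maximum of a scaled ±1 random walk converges in law to the maximum of Brownian motion on [0, 1], whose tail probability is 2(1 − Φ(a)) by the reflection principle. The path length and number of sample paths are arbitrary illustrative choices.

```python
# Simulation sketch of Donsker's invariance principle: the running maximum of
# a rescaled random walk converges in law to the maximum of Brownian motion on
# [0, 1], whose tail is 2*(1 - Phi(a)) by the reflection principle.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
n, paths, a = 1000, 10000, 1.0

steps = rng.choice([-1.0, 1.0], size=(paths, n))      # +/-1 increments
walks = np.cumsum(steps, axis=1) / np.sqrt(n)         # rescaled partial sums
empirical = np.mean(walks.max(axis=1) > a)            # functional of each path

Phi_a = 0.5 * (1.0 + erf(a / sqrt(2.0)))              # standard normal CDF at a
print("P(max > 1): simulated", empirical, " limiting value", 2.0 * (1.0 - Phi_a))
```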

3,554 citations