Author

Preben Mogensen

Other affiliations: Nokia, Bell Labs, Aalto University
Bio: Preben Mogensen is an academic researcher from Aalborg University. The author has contributed to research topics including Telecommunications link and Scheduling (computing). The author has an h-index of 64, has co-authored 512 publications, and has received 16,042 citations. Previous affiliations of Preben Mogensen include Nokia and Bell Labs.


Papers
Journal ArticleDOI
TL;DR: A simple stochastic MIMO channel model has been developed that uses the correlation matrices at the mobile station (MS) and base station (BS), so that results of the numerous single-input/multiple-output studies published in the literature can be used as input parameters.
Abstract: Theoretical and experimental studies of multiple-input/multiple-output (MIMO) radio channels are presented. A simple stochastic MIMO channel model has been developed. This model uses the correlation matrices at the mobile station (MS) and base station (BS) so that results of the numerous single-input/multiple-output studies published in the literature can be used as input parameters. The model is simplified to narrowband channels. The validation of the model is based upon data collected in both picocell and microcell environments. The stochastic model has also been used to investigate the capacity of MIMO radio channels, considering two different power allocation strategies, water filling and uniform, and two different antenna topologies, 4×4 and 2×4. Space diversity used at both ends of the MIMO radio link is shown to be an efficient technique in picocell environments, achieving capacities between 14 b/s/Hz and 16 b/s/Hz in 80% of the cases for a 4×4 antenna configuration implementing water filling at an SNR of 20 dB.
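The correlation-based structure of the model lends itself to a compact simulation. The sketch below is a minimal illustration rather than the authors' implementation: it assumes exponential correlation matrices with hypothetical coefficients at the two ends, unit noise power and 2000 narrowband 4×4 realizations, and compares water-filling against uniform power allocation at 20 dB SNR.

```python
# Minimal sketch (not the paper's code): Kronecker-type stochastic MIMO channel
# built from assumed correlation matrices, with capacity under uniform and
# water-filling power allocation at 20 dB SNR.
import numpy as np

rng = np.random.default_rng(0)

def exp_corr(n, rho):
    """Exponential correlation matrix, a stand-in for measured correlation matrices."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def kronecker_channel(R_rx, R_tx):
    """One narrowband realization H = R_rx^{1/2} G R_tx^{1/2} (Kronecker structure)."""
    nr, nt = R_rx.shape[0], R_tx.shape[0]
    G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    return np.linalg.cholesky(R_rx) @ G @ np.linalg.cholesky(R_tx).conj().T

def capacity_uniform(H, snr):
    """Capacity with equal power per transmit antenna (no channel knowledge at TX)."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr / nt) * H @ H.conj().T
    return float(np.log2(np.linalg.det(M).real))

def capacity_waterfilling(H, snr):
    """Water-filling over the channel eigenmodes (full channel knowledge at TX)."""
    g = np.sort(np.linalg.svd(H, compute_uv=False) ** 2)[::-1]   # eigenmode gains, descending
    for k in range(len(g), 0, -1):                               # try k active modes
        mu = (snr + np.sum(1.0 / g[:k])) / k                     # water level
        p = mu - 1.0 / g[:k]
        if p[-1] > 0:                                            # weakest active mode gets positive power
            return float(np.sum(np.log2(1.0 + p * g[:k])))
    return 0.0

snr = 10 ** (20 / 10)                                 # 20 dB, unit noise power
R_rx, R_tx = exp_corr(4, 0.3), exp_corr(4, 0.6)       # hypothetical correlation coefficients
caps_wf = [capacity_waterfilling(kronecker_channel(R_rx, R_tx), snr) for _ in range(2000)]
caps_un = [capacity_uniform(kronecker_channel(R_rx, R_tx), snr) for _ in range(2000)]
print("Capacity exceeded in 80% of cases (water filling):", round(float(np.percentile(caps_wf, 20)), 1), "b/s/Hz")
print("Capacity exceeded in 80% of cases (uniform)      :", round(float(np.percentile(caps_un, 20)), 1), "b/s/Hz")
```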

1,493 citations

Journal ArticleDOI
TL;DR: It is found that in typical urban environments the power azimuth spectrum (PAS) is accurately described by a Laplacian function, while a Gaussian PDF matches the azimuth PDF.
Abstract: A simple statistical model of azimuthal and temporal dispersion in mobile radio channels is proposed. The model includes the probability density function (PDF) of the delay and azimuth of the impinging waves as well as their expected power conditioned on the delay and azimuth. The statistical properties are extracted from macrocellular measurements conducted in a variety of urban environments. It is found that in typical urban environments the power azimuth spectrum (PAS) is accurately described by a Laplacian function, while a Gaussian PDF matches the azimuth PDF. Moreover, the power delay spectrum (PDS) and the delay PDF are accurately modeled by an exponentially decaying function. In bad urban environments, channel dispersion is better characterized by a multicluster model, where the PAS and PDS are modeled as sums of Laplacian functions and exponentially decaying functions, respectively.
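A minimal sketch of how the single-cluster case of such a model can be sampled is shown below. The delay spread, azimuth spread and number of paths are placeholder values, not the measured statistics reported in the paper.

```python
# Minimal sketch (placeholder parameters): draw multipath components for the
# single-cluster model -- exponential delay PDF, Gaussian azimuth PDF, and
# expected power decaying exponentially in delay and Laplacian in azimuth.
import numpy as np

rng = np.random.default_rng(1)

def draw_cluster(n_paths=100, delay_spread=0.5e-6, azimuth_spread_deg=10.0):
    """Return delays [s], azimuths [deg], and normalized mean path powers."""
    delays = rng.exponential(scale=delay_spread, size=n_paths)              # exponential delay PDF
    azimuths = rng.normal(loc=0.0, scale=azimuth_spread_deg, size=n_paths)  # Gaussian azimuth PDF
    # Expected power conditioned on delay and azimuth: exponential decay in
    # delay (-> exponential PDS) and Laplacian decay in azimuth (-> Laplacian PAS).
    power = np.exp(-delays / delay_spread) * np.exp(-np.sqrt(2) * np.abs(azimuths) / azimuth_spread_deg)
    return delays, azimuths, power / power.sum()

delays, azimuths, power = draw_cluster()
mean_delay = np.sum(power * delays)
rms_delay_spread = np.sqrt(np.sum(power * delays**2) - mean_delay**2)
print(f"RMS delay spread of this realization: {rms_delay_spread * 1e6:.2f} us")
```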

647 citations

Proceedings ArticleDOI
22 Apr 2007
TL;DR: An adjusted Shannon capacity formula is introduced, where it is shown that the bandwidth efficiency can be calculated based on system parameters, while the SNR efficiency is extracted from detailed link level studies.
Abstract: In this paper we propose a modification to the Shannon capacity bound in order to facilitate accurate benchmarking of UTRAN Long Term Evolution (LTE). The method is generally applicable to wireless communication systems, while we use LTE air-interface technology as a case study. We introduce an adjusted Shannon capacity formula that takes into account the system bandwidth efficiency and the SNR efficiency of LTE. Separating these issues allows for simplified parameter extraction. We show that the bandwidth efficiency can be calculated based on system parameters, while the SNR efficiency is extracted from detailed link level studies including advanced features such as MIMO and frequency domain packet scheduling (FDPS). We then use the adjusted Shannon capacity formula combined with G-factor distributions for macro and micro cell scenarios to predict LTE cell spectral efficiency (SE). These LTE SE predictions are compared to LTE cell SE results generated by system level simulations. The results show an excellent match, with less than 5-10% deviation.
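The adjusted bound itself is a one-line formula, SE = η_bw · log2(1 + SNR/η_snr), and the sketch below evaluates it over an assumed G-factor distribution. The efficiency values, the SE cap and the Gaussian G-factor parameters are illustrative placeholders, not the fitted LTE parameters reported in the paper.

```python
# Minimal sketch of the adjusted Shannon capacity idea. All numeric values
# (bandwidth efficiency, SNR efficiency, SE cap, G-factor distribution) are
# placeholders, not the fitted LTE parameters from the paper.
import numpy as np

def adjusted_shannon_se(snr_db, bw_eff=0.6, snr_eff_db=1.5, se_max=4.4):
    """Spectral efficiency [b/s/Hz]: SE = bw_eff * log2(1 + SNR / snr_eff),
    capped at the SE of the highest modulation and coding scheme (se_max)."""
    snr = 10 ** (np.asarray(snr_db, dtype=float) / 10.0)
    snr_eff = 10 ** (snr_eff_db / 10.0)
    return np.minimum(bw_eff * np.log2(1.0 + snr / snr_eff), se_max)

# Predict mean cell SE by averaging over an assumed Gaussian G-factor distribution.
rng = np.random.default_rng(2)
g_factor_db = rng.normal(loc=5.0, scale=8.0, size=100_000)   # placeholder macro-cell geometry
print(f"Predicted mean cell SE: {adjusted_shannon_se(g_factor_db).mean():.2f} b/s/Hz")
```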

580 citations

Journal ArticleDOI
TL;DR: The other major technology transformations that are likely to define 6G are discussed: cognitive spectrum sharing methods and new spectrum bands; the integration of localization and sensing capabilities into the system definition; the achievement of extreme performance requirements on latency and reliability; new network architecture paradigms involving sub-networks and RAN-Core convergence; and new security and privacy schemes.
Abstract: The focus of wireless research is increasingly shifting toward 6G as 5G deployments get underway. At this juncture, it is essential to establish a vision of future communications to provide guidance for that research. In this paper, we attempt to paint a broad picture of communication needs and technologies in the timeframe of 6G. The future of connectivity lies in the creation of digital twin worlds that are a true representation of the physical and biological worlds at every spatial and time instant, unifying our experience across these physical, biological and digital worlds. New themes are likely to emerge that will shape 6G system requirements and technologies, such as: (i) new man-machine interfaces created by a collection of multiple local devices acting in unison; (ii) ubiquitous universal computing distributed among multiple local devices and the cloud; (iii) multi-sensory data fusion to create multi-verse maps and new mixed-reality experiences; and (iv) precision sensing and actuation to control the physical world. With its rapid advances, artificial intelligence has the potential to become the foundation for the 6G air interface and network, making data, compute and energy the new resources to be exploited for achieving superior performance. In addition, we discuss the other major technology transformations that are likely to define 6G: (i) cognitive spectrum sharing methods and new spectrum bands; (ii) the integration of localization and sensing capabilities into the system definition; (iii) the achievement of extreme performance requirements on latency and reliability; (iv) new network architecture paradigms involving sub-networks and RAN-Core convergence; and (v) new security and privacy schemes.

420 citations

Proceedings ArticleDOI
24 Sep 2000
TL;DR: A simple framework for Monte Carlo simulations of a multiple-input-multiple-output radio channel is proposed and it is demonstrated that the Shannon capacity of the channel is highly dependent on the considered environment.
Abstract: A simple framework for Monte Carlo simulations of a multiple-input/multiple-output radio channel is proposed. The derived model includes the partial correlation between the paths in the channel, as well as fast fading and time dispersion. The only input parameters required for the model are the shape of the power delay spectrum and the spatial correlation functions at the transmit and receive ends. Thus, the required parameters are available in the open literature for a large variety of environments. It is furthermore demonstrated that the Shannon capacity of the channel is highly dependent on the considered environment.
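A minimal sketch of such a Monte Carlo run is given below. It assumes a six-tap exponential power delay spectrum, the same exponential spatial correlation at both ends, and 500 realizations of a 4×4 link at 20 dB SNR; all numbers are illustrative placeholders rather than parameters from the paper.

```python
# Minimal Monte Carlo sketch (illustrative parameters only): a tapped delay
# line with an exponential power delay spectrum, per-tap spatial correlation
# at both ends, and the resulting capacity statistics of a 4x4 link.
import numpy as np

rng = np.random.default_rng(3)

def spatial_corr(n, rho):
    """Exponential spatial correlation function as a stand-in for measured data."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def wideband_channel(n_taps, R_rx, R_tx, decay=1.0):
    """One realization: per-tap correlated MIMO matrices scaled by an exponential PDS."""
    nr, nt = R_rx.shape[0], R_tx.shape[0]
    pds = np.exp(-decay * np.arange(n_taps))
    pds /= pds.sum()
    L_rx, L_tx = np.linalg.cholesky(R_rx), np.linalg.cholesky(R_tx)
    taps = []
    for p in pds:
        G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        taps.append(np.sqrt(p) * (L_rx @ G @ L_tx.conj().T))
    return taps

def mean_capacity(taps, snr, n_freq=64):
    """Shannon capacity averaged over the frequency response of the tap line."""
    nr, nt = taps[0].shape
    caps = []
    for f in np.arange(n_freq) / n_freq:
        Hf = sum(h * np.exp(-2j * np.pi * f * k) for k, h in enumerate(taps))
        M = np.eye(nr) + (snr / nt) * Hf @ Hf.conj().T
        caps.append(np.log2(np.linalg.det(M).real))
    return float(np.mean(caps))

snr = 10 ** (20 / 10)                                    # 20 dB
R_rx, R_tx = spatial_corr(4, 0.4), spatial_corr(4, 0.4)  # placeholder correlation
caps = [mean_capacity(wideband_channel(6, R_rx, R_tx), snr) for _ in range(500)]
print("Median capacity:", round(float(np.median(caps)), 1), "b/s/Hz")
print("10% outage capacity:", round(float(np.percentile(caps, 10)), 1), "b/s/Hz")
```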

302 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Book
01 Jan 2005

9,038 citations

Journal ArticleDOI
TL;DR: The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time.
Abstract: Multiple-input multiple-output (MIMO) technology is maturing and is being incorporated into emerging wireless broadband standards like long-term evolution (LTE) [1]. For example, the LTE standard allows for up to eight antenna ports at the base station. Basically, the more antennas the transmitter/receiver is equipped with, and the more degrees of freedom that the propagation channel can provide, the better the performance in terms of data rate or link reliability. More precisely, on a quasi-static channel where a code word spans across only one time and frequency coherence interval, the reliability of a point-to-point MIMO link scales according to Prob(link outage) ∝ SNR^(−nt·nr), where nt and nr are the numbers of transmit and receive antennas, respectively, and the signal-to-noise ratio is denoted by SNR. On a channel that varies rapidly as a function of time and frequency, and where circumstances permit coding across many channel coherence intervals, the achievable rate scales as min(nt, nr) log(1 + SNR). The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time [2].
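The two scaling laws quoted above are easy to check numerically. The sketch below assumes i.i.d. Rayleigh fading and uses a rank-one scheme that coherently combines all nt·nr branches to illustrate the SNR^(−nt·nr) outage slope, alongside an ergodic-capacity estimate next to the min(nt, nr)·log2(1 + SNR) benchmark; it illustrates the scaling behavior only and does not reproduce the article's analysis.

```python
# Numeric illustration (assuming i.i.d. Rayleigh fading) of the two scaling
# laws: outage probability with full transmit/receive diversity falls off like
# SNR^(-nt*nr), while ergodic capacity grows roughly like min(nt, nr)*log2(1+SNR).
import numpy as np

rng = np.random.default_rng(4)

def rayleigh(n, nr, nt):
    return (rng.standard_normal((n, nr, nt)) + 1j * rng.standard_normal((n, nr, nt))) / np.sqrt(2)

def outage_prob(nr, nt, snr_db, rate=2.0, n=200_000):
    """Outage of a rank-one link that coherently combines all nt*nr branches."""
    snr = 10 ** (snr_db / 10)
    gain = np.sum(np.abs(rayleigh(n, nr, nt)) ** 2, axis=(1, 2))
    return float(np.mean(np.log2(1 + snr * gain) < rate))

def ergodic_capacity(nr, nt, snr_db, n=2000):
    """Ergodic capacity with uniform power allocation across nt antennas."""
    snr = 10 ** (snr_db / 10)
    H = rayleigh(n, nr, nt)
    M = np.eye(nr) + (snr / nt) * H @ np.conj(np.swapaxes(H, 1, 2))
    return float(np.mean(np.log2(np.linalg.det(M).real)))

for snr_db in (0, 5, 10):
    print(f"{snr_db:2d} dB: "
          f"P_out 1x1 = {outage_prob(1, 1, snr_db):.1e}, "
          f"P_out 2x2 = {outage_prob(2, 2, snr_db):.1e}, "
          f"C 4x4 = {ergodic_capacity(4, 4, snr_db):4.1f} b/s/Hz, "
          f"4*log2(1+SNR) = {4 * np.log2(1 + 10 ** (snr_db / 10)):4.1f}")
```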

5,158 citations

01 Jan 2000
TL;DR: This article briefly reviews the basic concepts of cognitive radio (CR); the need for software-defined radios is underlined, and the most important notions used for such radios are introduced.
Abstract: The cited work is Joseph Mitola III's Ph.D. dissertation, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio" (Stockholm, 2000). The modern software-defined radio (SDR) has been called the heart of a cognitive radio: the cognitive radio, built on a software-defined radio, integrates an agent architecture that reasons over a set-theoretic ontology of radio knowledge. A rapid-prototype cognitive radio, CR1, was developed to apply these ideas.

3,814 citations