Author

Preben Mogensen

Other affiliations: Nokia, Bell Labs, Aalto University
Bio: Preben Mogensen is an academic researcher from Aalborg University. The author has contributed to research in topics: Telecommunications link & Scheduling (computing). The author has an h-index of 64 and has co-authored 512 publications receiving 16,042 citations. Previous affiliations of Preben Mogensen include Nokia & Bell Labs.


Papers
Proceedings ArticleDOI
11 Dec 2006
TL;DR: The FDPS performance is shown to depend significantly on the frequency-domain scheduling resolution as well as the accuracy of the channel state reports, and the scheduling resolution should preferably be as low as 375 kHz to yield significant FDPS gain.
Abstract: In this paper we investigate the potential of downlink frequency-domain packet scheduling (FDPS) for the 3GPP UTRAN long-term evolution. Utilizing frequency-domain channel quality reports, the scheduler flexibly multiplexes users on different portions of the system bandwidth. Compared to frequency-blind, but time-opportunistic, scheduling, FDPS shows gains in both average system capacity and cell-edge data rates on the order of 40%. However, the FDPS performance is shown to depend significantly on the frequency-domain scheduling resolution as well as the accuracy of the channel state reports. Assuming a Typical Urban channel profile, studies show that the scheduling resolution should preferably be as low as 375 kHz to yield a significant FDPS gain with two-branch receive diversity in a 20 MHz bandwidth. Further, to obtain a convincing FDPS gain, the standard deviation of the channel state report error needs to be kept within 1.5-2 dB.
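To make the scheduling idea concrete, the following is a minimal sketch of a frequency-domain proportional-fair allocation driven by per-resource-block channel quality reports. It is only an illustration of the principle, not the simulator used in the paper; the user count, rate values, and the mapping of 20 MHz into roughly 375 kHz scheduling units are assumptions.

```python
import numpy as np

def fd_proportional_fair(cqi_rates, avg_rates):
    """Assign each resource block (RB) to the user with the highest
    proportional-fair metric: instantaneous rate / average delivered rate.

    cqi_rates : (n_users, n_rbs) achievable rates derived from CQI reports
    avg_rates : (n_users,) exponentially averaged past throughput
    Returns an array of length n_rbs with the selected user per RB.
    """
    metric = cqi_rates / avg_rates[:, None]   # PF metric per user and RB
    return np.argmax(metric, axis=0)          # best user on each RB

# Toy example: 4 users, 20 MHz split into ~375 kHz scheduling units (~53 RBs)
rng = np.random.default_rng(0)
rates = rng.rayleigh(scale=2.0, size=(4, 53))  # illustrative per-RB rates
avg = np.array([1.0, 1.5, 0.8, 1.2])           # illustrative past throughputs
print(fd_proportional_fair(rates, avg)[:10])
```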

171 citations

Proceedings ArticleDOI
22 Apr 2007
TL;DR: It is shown that frequency-domain packet scheduling can provide a gain of around 35% in both throughput and coverage over opportunistic time-domain-only scheduling, and that by using an equal-throughput scheduler, coverage can be improved by 100% at the expense of a 5% loss in average cell throughput in comparison with the proportional fair scheduler.
Abstract: In this paper we evaluate the performance of downlink channel dependent scheduling in time and frequency domains. The investigation is based on the 3GPP UTRAN long term evolution (LTE) parameters. A scheduler framework is developed encompassing frequency domain packet scheduling, HARQ management and inter-user fairness control. It is shown that by dividing the packet scheduler into a time-domain and a frequency-domain part the fairness between users can be effectively controlled. Different algorithms are applied in each scheduler part, and the combined performance is evaluated in terms of cell throughput, coverage, and capacity. We show that frequency-domain packet scheduling can provide a gain of around 35% in both throughput and coverage over opportunistic time-domain only scheduling. Furthermore, it is shown that by using an equal throughput scheduler, coverage can be improved by 100% at the expense of a 5% loss in average cell throughput in comparison with the proportional fair scheduler.
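As a rough illustration of the time-domain/frequency-domain split described above (not the exact algorithms or fairness controls evaluated in the paper), one can sketch a two-stage scheduler in which a time-domain stage first shortlists users and a frequency-domain stage then allocates resource blocks. The candidate-set size and the proportional-fair metrics below are assumptions.

```python
import numpy as np

def td_fd_schedule(cqi_rates, avg_rates, n_candidates=8):
    """Two-stage scheduler sketch: a time-domain (TD) stage picks the
    n_candidates users with the highest proportional-fair priority, then a
    frequency-domain (FD) stage assigns each RB to the best of those users."""
    # TD stage: rank users by wideband PF priority (mean rate / average throughput)
    priority = cqi_rates.mean(axis=1) / avg_rates
    candidates = np.argsort(priority)[::-1][:n_candidates]

    # FD stage: per-RB PF metric restricted to the TD candidate set
    metric = cqi_rates[candidates] / avg_rates[candidates][:, None]
    return candidates[np.argmax(metric, axis=0)]

rng = np.random.default_rng(0)
rates = rng.rayleigh(2.0, size=(20, 50))   # 20 active users, 50 RBs (illustrative)
avg = rng.uniform(0.5, 2.0, size=20)
print(td_fd_schedule(rates, avg)[:10])
```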

165 citations

Journal ArticleDOI
TL;DR: The suitability of using OFDMA or SC-FDMA in the uplink for local area high-data-rate scenarios by considering as target performance metrics the PAPR and multiuser diversity gain is discussed.
Abstract: The system requirements for IMT-A are currently being specified by the ITU. Target peak data rates of 1 Gb/s in local areas and 100 Mb/s in wide areas are expected to be provided by means of advanced MIMO antenna configurations and very high spectrum allocations (on the order of 100 MHz). For the downlink, OFDMA is unanimously considered the most appropriate technique for achieving high spectral efficiency. For the uplink, the LTE of the 3GPP, for example, employs SC-FDMA due to its low PAPR properties compared to OFDMA. For future IMT-A systems, the decision on the most appropriate uplink access scheme is still an open issue, as many benefits can be obtained by exploiting the flexible frequency granularity of OFDMA. In this article we discuss the suitability of using OFDMA or SC-FDMA in the uplink for local area high-data-rate scenarios, considering the PAPR and the multiuser diversity gain as target performance metrics. Also, new bandwidth configurations have been proposed to cope with the 100 MHz spectrum allocation. In particular, the PAPR analysis shows that a localized (not distributed) allocation of the resource blocks (RBs) in the frequency domain shall be employed for SC-FDMA in order to keep its advantages over OFDMA in terms of PAPR reduction. Furthermore, the multiuser diversity gain evaluation shows that the impact of different RB sizes and bandwidth configurations is low, given the propagation characteristics of the assumed local area environment. For full bandwidth usage, OFDMA only outperforms SC-FDMA when the number of frequency-multiplexed users is low. As the spectrum load decreases, however, OFDMA outperforms SC-FDMA also for a high number of frequency-multiplexed users, due to its more flexible resource allocation. In this context, different channel-aware scheduling algorithms have been proposed to account for the resource allocation differences between the two access schemes.
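The PAPR comparison at the heart of the abstract can be illustrated with a small numerical sketch: OFDMA maps modulation symbols directly onto subcarriers, while SC-FDMA DFT-precodes them first, which lowers the peak-to-average power ratio. The FFT size, number of occupied subcarriers, and QPSK modulation below are illustrative assumptions, not the configurations simulated in the article.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n_sc, n_fft = 300, 2048                       # occupied subcarriers, FFT size (illustrative)
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)

# OFDMA: map QPSK symbols directly onto a localized block of subcarriers
grid = np.zeros(n_fft, complex)
grid[:n_sc] = qpsk
ofdma = np.fft.ifft(grid)

# SC-FDMA: DFT-precode the symbols first (localized mapping), then IFFT
grid_sc = np.zeros(n_fft, complex)
grid_sc[:n_sc] = np.fft.fft(qpsk) / np.sqrt(n_sc)
scfdma = np.fft.ifft(grid_sc)

print(f"OFDMA PAPR  : {papr_db(ofdma):.1f} dB")
print(f"SC-FDMA PAPR: {papr_db(scfdma):.1f} dB")  # typically a few dB lower
```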

163 citations

Journal ArticleDOI
TL;DR: This paper investigates the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations, and introduces and evaluates a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic.
Abstract: Widely deployed cellular networks are an attractive solution to provide large scale radio connectivity to unmanned aerial vehicles. One main prerequisite is that co-existence and optimal performance for both aerial and terrestrial users can be provided. Today’s cellular networks are, however, not designed for aerial coverage, and deployments are primarily optimized to provide good service for terrestrial users. These considerations, in combination with the strict regulatory requirements, lead to extensive research and standardization efforts to ensure that the current cellular networks can enable reliable operation of aerial vehicles in various deployment scenarios. In this paper, we investigate the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations. First, we highlight that downlink and uplink radio interference play a key role, and yield relatively poor performance for the aerial traffic, when load is high in the network. Second, we analyze two potential terminal side interference mitigation solutions: interference cancellation and antenna beam selection. We show that each of these can improve the overall, aerial and terrestrial, system performance to a certain degree, with up to 30% throughput gain, and an increase in the reliability of the aerial radio connectivity to over 99%. Further, we introduce and evaluate a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic. Our proposed coordination mechanism is shown to provide the required aerial downlink performance at the cost of 10% capacity degradation in the serving and interfering cells.
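The role of downlink interference for aerial users can be illustrated with a toy SINR calculation in which one dominant interfering cell is muted, loosely mimicking inter-cell interference coordination. This is not the coordination mechanism proposed in the paper, and all power values are illustrative assumptions.

```python
import numpy as np

def sinr_db(serving_dbm, interferers_dbm, noise_dbm=-100.0, muted=()):
    """Linear-domain SINR for one user given received powers in dBm.
    Cells listed in `muted` are excluded, mimicking interference coordination."""
    lin = lambda dbm: 10 ** (dbm / 10)
    interference = sum(lin(p) for i, p in enumerate(interferers_dbm) if i not in muted)
    return 10 * np.log10(lin(serving_dbm) / (interference + lin(noise_dbm)))

# Aerial users often see several line-of-sight interferers at similar power
serving = -85.0
interferers = [-88.0, -90.0, -92.0]
print(sinr_db(serving, interferers))              # baseline SINR
print(sinr_db(serving, interferers, muted=(0,)))  # dominant interferer muted
```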

162 citations

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Both LTE-M and NB-IoT provide extended support for the cellular Internet of Things, but with different trade-offs.
Abstract: The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE Release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site-specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9% of outdoor and indoor devices, if the latter experience 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95% of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB-IoT provide extended support for the cellular Internet of Things, but with different trade-offs.
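The notion of coverage probability used here can be sketched as the fraction of devices whose coupling loss (plus any extra penetration loss) stays within a technology's maximum coupling loss (MCL). The coupling-loss distribution below is synthetic and the MCL thresholds are approximate 3GPP design targets used only as illustrative parameters, not the site-specific results of the paper.

```python
import numpy as np

def coverage_probability(coupling_loss_db, mcl_db, extra_loss_db=0.0):
    """Fraction of devices whose total coupling loss (plus additional
    indoor penetration loss) stays within the technology's MCL."""
    return np.mean(coupling_loss_db + extra_loss_db <= mcl_db)

rng = np.random.default_rng(2)
loss = rng.normal(135, 12, 10_000)   # illustrative coupling-loss distribution, dB

print(coverage_probability(loss, mcl_db=155.7))                     # LTE-M, outdoor
print(coverage_probability(loss, mcl_db=155.7, extra_loss_db=10))   # LTE-M, +10 dB indoor
print(coverage_probability(loss, mcl_db=164.0, extra_loss_db=20))   # NB-IoT, deep indoor
```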

155 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
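The personalized mail-filter example in the abstract can be made concrete with a minimal sketch in which a model is trained on messages a user has kept or rejected. The tiny training set, the bag-of-words features, and the naive Bayes classifier are illustrative assumptions, not a system described in the article.

```python
# Minimal sketch of learning a per-user mail filter from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win money now", "meeting at noon", "cheap meds offer", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]                      # 1 = user rejected the message

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
filter_model = MultinomialNB().fit(features, rejected)

print(filter_model.predict(vectorizer.transform(["free money offer"])))  # likely [1]
```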

13,246 citations

Book
01 Jan 2005

9,038 citations

Journal ArticleDOI
TL;DR: The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time.
Abstract: Multiple-input multiple-output (MIMO) technology is maturing and is being incorporated into emerging wireless broadband standards like long-term evolution (LTE) [1]. For example, the LTE standard allows for up to eight antenna ports at the base station. Basically, the more antennas the transmitter/receiver is equipped with, and the more degrees of freedom that the propagation channel can provide, the better the performance in terms of data rate or link reliability. More precisely, on a quasi-static channel where a code word spans only one time and frequency coherence interval, the reliability of a point-to-point MIMO link scales according to Prob(link outage) ∝ SNR^(-nt·nr), where nt and nr are the numbers of transmit and receive antennas, respectively, and the signal-to-noise ratio is denoted by SNR. On a channel that varies rapidly as a function of time and frequency, and where circumstances permit coding across many channel coherence intervals, the achievable rate scales as min(nt, nr) log(1 + SNR). The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time [2].
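The two scaling laws quoted in the abstract can be evaluated numerically as follows. These are the asymptotic diversity and multiplexing approximations stated above, not exact capacity formulas; the antenna configurations and the 20 dB SNR point are illustrative.

```python
import numpy as np

def high_snr_rate(n_t, n_r, snr_db):
    """Fast-fading multiplexing approximation: rate ~ min(n_t, n_r) * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return min(n_t, n_r) * np.log2(1 + snr)

def outage_scaling(n_t, n_r, snr_db):
    """Quasi-static diversity approximation: Prob(outage) ~ SNR^(-n_t * n_r)."""
    snr = 10 ** (snr_db / 10)
    return snr ** (-n_t * n_r)

for nt, nr in [(1, 1), (2, 2), (4, 4)]:
    print(nt, nr, round(high_snr_rate(nt, nr, 20), 1), f"{outage_scaling(nt, nr, 20):.1e}")
```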

5,158 citations

01 Jan 2000
TL;DR: This article briefly reviews the basic concepts of cognitive radio (CR); the need for software-defined radios is underlined, and the most important notions used for such systems are presented.
Abstract: This entry refers to Joseph Mitola III's Ph.D. dissertation, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio" (Stockholm, 2000). The modern software-defined radio has been called the heart of a cognitive radio; the cognitive radio, built on a software-defined radio, adds an integrated agent architecture whose result is a set-theoretic ontology of radio knowledge. A rapid-prototype cognitive radio, CR1, was developed to apply these concepts. The work underlines the need for software-defined radios and introduces the most important notions used for such systems.

3,814 citations