scispace - formally typeset
Author

Preben Mogensen

Other affiliations: Nokia, Bell Labs, Aalto University
Bio: Preben Mogensen is an academic researcher from Aalborg University. The author has contributed to research in the topics of telecommunications links and scheduling (computing). The author has an h-index of 64 and has co-authored 512 publications receiving 16,042 citations. Previous affiliations of Preben Mogensen include Nokia and Bell Labs.


Papers
Patent
01 Feb 2007
TL;DR: A method is presented for persistent uplink and downlink resource allocation, assigning resources for a data flow over a plurality of time intervals with a single signaling event at the wireless radio network layer.
Abstract: Apparatus, methods, computer program products, systems and circuits are provided that allow for persistent uplink and downlink resource allocations. A method includes: allocating resources to a user equipment for a data flow over a plurality of time intervals with a single signaling event at a wireless network radio network layer; and informing the user equipment of the allocated resources.
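The scheme described — a single signaling event fixing resources over many time intervals — resembles semi-persistent scheduling in LTE/5G. A minimal sketch of such a grant is below; the field names and structure are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PersistentGrant:
    """Illustrative model of a persistent allocation: one signaling
    event fixes resources for a recurring series of intervals.
    Field names are assumptions, not taken from the patent."""
    ue_id: int
    first_interval: int   # index of the first scheduled interval
    period: int           # grant repeats every `period` intervals
    resource_blocks: tuple  # frequency resources reused each occasion

    def is_scheduled(self, interval: int) -> bool:
        # The UE transmits/receives without further signaling whenever
        # the interval falls on the grant's periodic pattern.
        return (interval >= self.first_interval
                and (interval - self.first_interval) % self.period == 0)
```

A single `PersistentGrant` object then answers scheduling queries for every subsequent interval, which is the point of the single-signaling-event design.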

18 citations

Proceedings ArticleDOI
27 Jun 2019
TL;DR: The analysis with 5G new radio assumptions shows that overlaying is mostly beneficial when SIC is employed in medium to high SNR scenarios or, in some cases, with low URLLC load, and the use of separate bands supports higher loads for both services simultaneously.
Abstract: 5G networks should support heterogeneous services with an efficient usage of the radio resources, while meeting the distinct requirements of each service class. We consider the problem of multiplexing enhanced mobile broadband (eMBB) traffic and grant-free ultra-reliable low-latency communications (URLLC) in the uplink. Two multiplexing options are considered: either eMBB and grant-free URLLC are transmitted in separate frequency bands to avoid their mutual interference, or both traffic types share the available bandwidth, leading to overlaying transmissions. This work presents an approach to evaluate the supported loads for URLLC and eMBB in different operation regimes. Minimum mean square error receivers with and without successive interference cancellation (SIC) are considered in Rayleigh fading channels. The outage probability is derived, and the achievable transmission rates are obtained from it. The analysis with 5G New Radio assumptions shows that overlaying is mostly beneficial when SIC is employed in medium- to high-SNR scenarios or, in some cases, with low URLLC load. Otherwise, the use of separate bands supports higher loads for both services simultaneously. Practical insights based on the approach are discussed.
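The abstract does not reproduce the outage derivation, but for a single-user Rayleigh fading link (no interference, no SIC) the outage probability has a well-known closed form, which the sketch below evaluates. The SNR and rate values are illustrative, not taken from the paper.

```python
import math

def rayleigh_outage(snr_linear: float, rate_bps_hz: float) -> float:
    """Outage probability of a point-to-point Rayleigh fading link:
    P_out = P[log2(1 + snr*|h|^2) < R], with |h|^2 ~ Exp(1),
    giving P_out = 1 - exp(-(2^R - 1)/snr)."""
    threshold = (2.0 ** rate_bps_hz - 1.0) / snr_linear
    return 1.0 - math.exp(-threshold)

# Illustrative operating point: 10 dB SNR, target rate 1 bit/s/Hz.
snr = 10.0 ** (10.0 / 10.0)      # convert 10 dB to linear scale
p_out = rayleigh_outage(snr, 1.0)  # ~0.095
```

From such an outage expression one can work backwards, as the paper does in its more general multi-user setting, to the load each service can carry at a given reliability target.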

18 citations

Proceedings ArticleDOI
31 Dec 2012
TL;DR: This study shows that a simple geometrical-based extension to standard empirical path loss prediction models can give quite reasonable accuracy in predicting the signal strength from tilted base station antennas in small urban macro-cells.
Abstract: Base station antenna downtilt is one of the most important parameters for optimizing a cellular network with tight frequency reuse. By downtilting, inter-site interference is reduced, which leads to an improved performance of the network. In this study we show that a simple geometrical-based extension to standard empirical path loss prediction models can give quite reasonable accuracy in predicting the signal strength from tilted base station antennas in small urban macro-cells. Our evaluation is based on measurements on several sectors in a 2.6 GHz Long Term Evolution (LTE) cellular network, with electrical antenna downtilt in the range from 0 to 10 degrees, as well as predictions based on ray-tracing and 3D building databases covering the measurement area. Although the calibrated ray-tracing predictions are highly accurate compared with the measured data, the combined LOS/NLOS COST-WI model with downtilt correction performs similarly for distances above a few hundred meters. Generally, predicting the effect of base station antenna tilt close to the base station is difficult due to multiple vertical sidelobes.
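A geometrical downtilt extension of the kind described can be sketched by combining an empirical distance-based path loss with a vertical antenna pattern evaluated at the elevation angle toward the user. The 3GPP-style parabolic pattern and the COST-231-like slope below are illustrative assumptions, not the paper's exact correction.

```python
import math

def vertical_antenna_gain_db(theta_deg, tilt_deg, theta_3db=10.0, sla_db=20.0):
    """3GPP-style vertical antenna pattern (an assumption; it ignores the
    vertical sidelobes that make near-cell prediction hard)."""
    return -min(12.0 * ((theta_deg - tilt_deg) / theta_3db) ** 2, sla_db)

def received_power_dbm(tx_dbm, dist_m, h_bs=30.0, h_ue=1.5, tilt_deg=6.0):
    # Elevation angle from the BS antenna down to the UE, in degrees.
    theta = math.degrees(math.atan2(h_bs - h_ue, dist_m))
    # Illustrative empirical path loss (COST-231-like urban slope).
    pl = 128.1 + 37.6 * math.log10(dist_m / 1000.0)
    return tx_dbm - pl + vertical_antenna_gain_db(theta, tilt_deg)
```

As in the paper's findings, such a correction is most meaningful at distances of a few hundred meters and beyond, where a single main-lobe term dominates the antenna response.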

18 citations

Proceedings ArticleDOI
Liang Hu, Claudio Coletti, Nguyen Huan, Preben Mogensen, Jan Elling
06 May 2012
TL;DR: This paper is envisaged to provide a first quantitative study on how much indoor deployed Wi-Fi can offload the operator's 3G HSPA macro cellular networks in a real large-scale dense-urban scenario.
Abstract: This paper provides a first quantitative study of how much indoor-deployed Wi-Fi can offload an operator's 3G HSPA macro cellular network in a real large-scale dense-urban scenario. Wi-Fi has been perceived as a cost-effective means of adding wireless capacity by leveraging low-cost access points and unlicensed spectrum; however, the quantitative offloading gain that Wi-Fi can achieve has remained unknown. We study the Wi-Fi offloading gain as a function of access point density and show that 10 access points/km2 can already boost average user throughput by 300%, with the gain increasing linearly with access point density. Indoor Wi-Fi deployment also significantly reduces the number of users in outage, especially indoors. A user is considered to be in outage if their throughput is below 512 kbps. We also propose three Wi-Fi deployment algorithms: traffic-centric, outage-centric, and uniform random. Simulation results show that the traffic-centric algorithm performs best in boosting average user throughput, while the outage-centric algorithm performs best in reducing user outage. Finally, the Wi-Fi offloading solution is compared with an alternative offloading solution, HSPA femtocells. We show that Wi-Fi provides both much higher average user throughput and greater network outage reduction than HSPA femtocells by exploiting the 20 MHz unlicensed ISM band.
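The paper's outage definition (throughput below 512 kbps) and its offloading-gain metric can both be computed directly from per-user throughput samples. A minimal sketch, with made-up sample values:

```python
def outage_fraction(throughputs_kbps, threshold_kbps=512.0):
    """Fraction of users below the outage threshold (512 kbps per the paper)."""
    below = sum(1 for t in throughputs_kbps if t < threshold_kbps)
    return below / len(throughputs_kbps)

def offload_gain(macro_only_kbps, with_wifi_kbps):
    """Relative gain in mean user throughput; 3.0 means a 300% boost."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(with_wifi_kbps) / mean(macro_only_kbps) - 1.0
```

For example, if mean throughput quadruples after deploying Wi-Fi, `offload_gain` returns 3.0, i.e. the 300% boost the paper reports at 10 access points/km2.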

18 citations

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This paper compares three different classification algorithms, which use standard LTE measurements from the UE as input, for detecting the presence of airborne users in the network, and shows how waiting for the final decision can improve detection accuracy to values close to 100%.
Abstract: The overall cellular network performance can be optimized for both ground and aerial users if the two user classes are treated differently. Airborne UAVs experience different radio conditions than terrestrial users due to clearance in the radio path, which leads to strong desired-signal reception but at the same time increases interference. Based on this, one can, for instance, apply different interference coordination techniques for aerial users than for terrestrial users and/or use class-specific mobility settings. This paper compares three different classification algorithms, which use standard LTE measurements from the UE as input, for detecting the presence of airborne users in the network. The algorithms are evaluated on measurements collected with mobile phones attached under a flying drone and mounted on a car. Results are discussed showing the advantages and drawbacks of each option for different use cases, and the trade-off between specificity and sensitivity. For the collected data, the results show reliability close to 99% in most cases, and waiting for the final decision can improve this accuracy to values close to 100%.
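The abstract does not name the three algorithms. As an illustration of the general idea only, a hypothetical single-feature rule could exploit the fact that an airborne UE, with line-of-sight clearance, tends to receive many neighbor cells at a strength comparable to its serving cell; the feature and thresholds below are assumptions, not the paper's.

```python
def classify_airborne(neighbor_rsrp_dbm, serving_rsrp_dbm,
                      spread_db=6.0, min_strong=4):
    """Hypothetical rule: flag the UE as airborne if many neighbor cells
    are received within `spread_db` of the serving cell.  Terrestrial
    users typically see one dominant server instead."""
    strong = sum(1 for r in neighbor_rsrp_dbm
                 if r >= serving_rsrp_dbm - spread_db)
    return strong >= min_strong
```

Accumulating such per-measurement decisions over time before committing to a final classification is what allows the accuracy improvement the paper describes.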

18 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
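The mail-filtering example can be made concrete with a toy learner that counts word occurrences in rejected versus kept messages. This is a minimal sketch of the learn-from-user-feedback idea, not any production filtering algorithm.

```python
from collections import Counter

class SimpleMailFilter:
    """Toy illustration of learning filtering rules from user feedback:
    words seen more often in rejected mail push a message toward
    rejection.  Purely illustrative, not a real spam-filter design."""
    def __init__(self):
        self.rejected = Counter()  # word counts from rejected messages
        self.kept = Counter()      # word counts from kept messages

    def observe(self, words, rejected: bool):
        # Update the filter from one user decision.
        (self.rejected if rejected else self.kept).update(words)

    def should_reject(self, words) -> bool:
        # Reject when rejected-mail evidence outweighs kept-mail evidence.
        score = sum(self.rejected[w] - self.kept[w] for w in words)
        return score > 0
```

Because the rules are re-derived from the counters on every decision, the filter adapts automatically as the user's behavior changes, which is exactly the maintenance burden the passage says learning removes from the programmer.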

13,246 citations

Book
01 Jan 2005

9,038 citations

Journal ArticleDOI
TL;DR: The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time.
Abstract: Multiple-input multiple-output (MIMO) technology is maturing and is being incorporated into emerging wireless broadband standards like long-term evolution (LTE) [1]. For example, the LTE standard allows for up to eight antenna ports at the base station. Basically, the more antennas the transmitter/receiver is equipped with, and the more degrees of freedom that the propagation channel can provide, the better the performance in terms of data rate or link reliability. More precisely, on a quasi-static channel where a code word spans across only one time and frequency coherence interval, the reliability of a point-to-point MIMO link scales according to Prob(link outage) ∝ SNR^(−nt·nr), where nt and nr are the numbers of transmit and receive antennas, respectively, and the signal-to-noise ratio is denoted by SNR. On a channel that varies rapidly as a function of time and frequency, and where circumstances permit coding across many channel coherence intervals, the achievable rate scales as min(nt, nr) log(1 + SNR). The gains in multiuser systems are even more impressive, because such systems offer the possibility to transmit simultaneously to several users and the flexibility to select what users to schedule for reception at any given point in time [2].
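The two scaling laws quoted — diversity order nt·nr on a quasi-static channel and multiplexing gain min(nt, nr) on a fast-varying one — can be encoded directly; the helpers below simply evaluate those formulas for given antenna counts and SNR.

```python
import math

def quasi_static_outage_exponent(nt: int, nr: int) -> int:
    """Diversity order: Prob(outage) ~ SNR^(-nt*nr) at high SNR."""
    return nt * nr

def fast_fading_rate(nt: int, nr: int, snr_linear: float) -> float:
    """High-SNR rate scaling min(nt, nr) * log2(1 + SNR), in bits/s/Hz."""
    return min(nt, nr) * math.log2(1.0 + snr_linear)
```

For a 2x2 link the outage probability falls off as SNR to the fourth power, while the rate of a 4x2 link grows with only min(4, 2) = 2 spatial streams — the asymmetry that motivates adding antennas at both ends.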

5,158 citations

01 Jan 2000
TL;DR: This article briefly reviews the basic concepts of cognitive radio (CR), underlines the need for software-defined radios, and summarizes the most important notions used in this context.
Abstract: The modern software-defined radio has been called the heart of a cognitive radio. This article briefly reviews the basic concepts of cognitive radio (CR), underlines the need for software-defined radios, and presents the most important notions used in this context. The discussion draws on Joseph Mitola III's Ph.D. thesis, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio" (Stockholm).

3,814 citations