Author

Giuseppe Durisi

Bio: Giuseppe Durisi is an academic researcher from Chalmers University of Technology. The author has contributed to research on topics including MIMO and fading, has an h-index of 41, and has co-authored 247 publications receiving 7,119 citations. Previous affiliations of Giuseppe Durisi include the Polytechnic University of Turin and ETH Zurich.


Papers
Journal Article
02 Aug 2016
TL;DR: In this article, the authors review recent advances in information theory, which provide the theoretical principles that govern the transmission of short packets, and then apply these principles to three exemplary scenarios (the two-way channel, the downlink broadcast channel, and the uplink random access channel), thereby illustrating how the transmission of control information can be optimized when the packets are short.
Abstract: Most of the recent advances in the design of high-speed wireless systems are based on information-theoretic principles that demonstrate how to efficiently transmit long data packets. However, the upcoming wireless systems, notably the fifth-generation (5G) system, will need to support novel traffic types that use short packets. For example, short packets represent the most common form of traffic generated by sensors and other devices involved in machine-to-machine (M2M) communications. Furthermore, there are emerging applications in which small packets are expected to carry critical information that should be received with low latency and ultrahigh reliability. Current wireless systems are not designed to support short-packet transmissions. For example, the design of current systems relies on the assumption that the metadata (control information) is of negligible size compared to the actual information payload. Hence, transmitting metadata using heuristic methods does not affect the overall system performance. However, when the packets are short, metadata may be of the same size as the payload, and the conventional methods to transmit it may be highly suboptimal. In this paper, we review recent advances in information theory, which provide the theoretical principles that govern the transmission of short packets. We then apply these principles to three exemplary scenarios (the two-way channel, the downlink broadcast channel, and the uplink random access channel), thereby illustrating how the transmission of control information can be optimized when the packets are short. The insights brought by these examples suggest that new principles are needed for the design of wireless protocols supporting short packets. These principles will have a direct impact on the system design.

984 citations
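
A back-of-the-envelope illustration of the theory reviewed in the paper above: at blocklength n and error probability eps, the maximal coding rate over an AWGN channel is well approximated by C - sqrt(V/n)·Q⁻¹(eps), where C is the capacity and V the channel dispersion. The NumPy/SciPy sketch below evaluates this normal approximation for a real-valued AWGN channel; the SNR, blocklength, and error-probability values are assumed for illustration, and the code is not taken from the paper.

# Normal approximation to the maximal coding rate over a real AWGN channel:
# R(n, eps) ≈ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n)  [bits per channel use].
import numpy as np
from scipy.stats import norm

def normal_approximation(snr, n, eps):
    C = 0.5 * np.log2(1 + snr)                                          # capacity
    V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * np.log2(np.e) ** 2     # channel dispersion
    return C - np.sqrt(V / n) * norm.isf(eps) + np.log2(n) / (2 * n)

# Example: a 100-symbol packet at 0 dB SNR and target error probability 1e-3
# achieves only roughly half of the channel capacity of 0.5 bit per channel use.
print(normal_approximation(snr=1.0, n=100, eps=1e-3))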

Journal Article
TL;DR: In this paper, the authors study the potential advantages of allowing for non-orthogonal sharing of RAN resources in uplink communications from a set of eMBB, mMTC, and URLLC devices to a common base station.
Abstract: The grand objective of 5G wireless technology is to support three generic services with vastly heterogeneous requirements: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). Service heterogeneity can be accommodated by network slicing, through which each service is allocated resources to provide performance guarantees and isolation from the other services. Slicing of the radio access network (RAN) is typically done by means of orthogonal resource allocation among the services. This paper studies the potential advantages of allowing for non-orthogonal sharing of RAN resources in uplink communications from a set of eMBB, mMTC, and URLLC devices to a common base station. The approach is referred to as heterogeneous non-orthogonal multiple access (H-NOMA), in contrast to the conventional NOMA techniques that involve users with homogeneous requirements and hence can be investigated through a standard multiple access channel. The study devises a communication-theoretic model that accounts for the heterogeneous requirements and characteristics of the three services. The concept of reliability diversity is introduced as a design principle that leverages the different reliability requirements across the services in order to ensure performance guarantees with non-orthogonal RAN slicing. This paper reveals that H-NOMA can lead, in some regimes, to significant gains in terms of performance tradeoffs among the three generic services as compared to orthogonal slicing.

654 citations
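
To make the orthogonal-versus-non-orthogonal trade-off concrete, the toy calculation below compares time-shared (orthogonal) slicing with superposition coding plus successive interference cancellation for two uplink users, using the standard Gaussian multiple-access rate formulas. The channel gains, powers, and time-sharing fraction are assumed example values; this is not the communication-theoretic model or the reliability-diversity analysis developed in the paper above.

# Toy comparison of orthogonal vs. non-orthogonal uplink sharing for two users.
import numpy as np

g1, g2 = 4.0, 1.0   # assumed channel gains (e.g., an eMBB and a URLLC device)
p1, p2 = 1.0, 1.0   # transmit powers
alpha = 0.5         # fraction of the resources given to user 1 under orthogonal slicing

# Orthogonal slicing: each user gets a dedicated fraction of the resources.
r1_orth = alpha * np.log2(1 + p1 * g1)
r2_orth = (1 - alpha) * np.log2(1 + p2 * g2)

# Non-orthogonal sharing: both users transmit together; the base station decodes
# user 1 treating user 2 as noise, cancels it, then decodes user 2 interference-free.
r1_noma = np.log2(1 + p1 * g1 / (1 + p2 * g2))
r2_noma = np.log2(1 + p2 * g2)

print("orthogonal sum rate:", r1_orth + r2_orth, "non-orthogonal sum rate:", r1_noma + r2_noma)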

Journal Article
TL;DR: The principal finding is that outage capacity, despite being an asymptotic quantity, is a sharp proxy for the finite-blocklength fundamental limits of slow-fading channels.
Abstract: This paper investigates the maximal achievable rate for a given blocklength and error probability over quasi-static multiple-input multiple-output fading channels, with and without channel state information at the transmitter and/or the receiver. The principal finding is that outage capacity, despite being an asymptotic quantity, is a sharp proxy for the finite-blocklength fundamental limits of slow-fading channels. Specifically, the channel dispersion is shown to be zero regardless of whether the fading realizations are available at both transmitter and receiver, at only one of them, or at neither of them. These results follow from analytically tractable converse and achievability bounds. Numerical evaluation of these bounds verifies that zero dispersion may indeed imply fast convergence to the outage capacity as the blocklength increases. In the example of a particular 1 × 2 single-input multiple-output Rician fading channel, the blocklength required to achieve 90% of capacity is about an order of magnitude smaller compared with the blocklength required for an AWGN channel with the same capacity. For this specific scenario, the coding/decoding schemes adopted in the LTE-Advanced standard are benchmarked against the finite-blocklength achievability and converse bounds.

400 citations
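
The outage quantities that the paper above identifies as the right asymptotic proxy are straightforward to estimate numerically. The Monte Carlo sketch below computes the outage probability of a quasi-static 1 × 2 SIMO channel, here with i.i.d. Rayleigh fading and assumed SNR and rate values rather than the paper's Rician example; the eps-outage capacity is then the largest rate whose outage probability stays below eps.

# Monte Carlo estimate of the outage probability of a quasi-static 1x2 SIMO channel:
# P_out(R) = Pr[ log2(1 + snr * ||h||^2) < R ].
import numpy as np

rng = np.random.default_rng(0)
snr, n_rx, trials = 10.0, 2, 100_000

h = (rng.standard_normal((trials, n_rx)) + 1j * rng.standard_normal((trials, n_rx))) / np.sqrt(2)
rates = np.log2(1 + snr * np.sum(np.abs(h) ** 2, axis=1))    # rate supported by each realization

R = 3.0                                                      # assumed target rate, bits per channel use
print("outage probability at R =", R, ":", np.mean(rates < R))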

Journal Article
TL;DR: It is illustrated that, for the 1-bit quantized case, pilot-based channel estimation together with maximal-ratio combining or zero-forcing detection enables reliable multi-user communication with high-order constellations, in spite of the severe nonlinearity introduced by the ADCs.
Abstract: We investigate the uplink throughput achievable by a multiple-user (MU) massive multiple-input multiple-output (MIMO) system, in which the base station is equipped with a large number of low-resolution analog-to-digital converters (ADCs). Our focus is on the case where neither the transmitter nor the receiver has any a priori channel state information. This implies that the fading realizations have to be learned through pilot transmission followed by channel estimation at the receiver, based on coarsely quantized observations. We propose a novel channel estimator, based on Bussgang’s decomposition, and a novel approximation to the rate achievable with finite-resolution ADCs, both for the case of finite-cardinality constellations and of Gaussian inputs, that is accurate for a broad range of system parameters. Through numerical results, we illustrate that, for the 1-bit quantized case, pilot-based channel estimation together with maximal-ratio combining or zero-forcing detection enables reliable multi-user communication with high-order constellations, in spite of the severe nonlinearity introduced by the ADCs. Furthermore, we show that the rate achievable in the infinite-resolution (no quantization) case can be approached using ADCs with only a few bits of resolution. We finally investigate the robustness of the low-ADC-resolution MU-MIMO uplink against receive power imbalances between the different users, caused for example by imperfect power control.

372 citations
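
A quick sanity check of the Bussgang decomposition on which the channel estimator above is based: for a zero-mean Gaussian input x with standard deviation sigma, the 1-bit quantizer output sign(x) equals A·x plus a distortion term that is uncorrelated with x, where the Bussgang gain is A = sqrt(2/pi)/sigma. The real-valued simulation below is an illustrative sketch, not the estimator proposed in the paper.

# Bussgang decomposition of a 1-bit quantizer driven by a Gaussian input:
# q = sign(x) = A*x + d, with A = E[q x] / E[x^2] = sqrt(2/pi)/sigma and E[d x] = 0.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = sigma * rng.standard_normal(1_000_000)
q = np.sign(x)

A_empirical = np.mean(q * x) / np.mean(x ** 2)
A_theory = np.sqrt(2 / np.pi) / sigma
d = q - A_theory * x

print(A_empirical, A_theory)     # the two gains should match closely
print(np.mean(d * x))            # close to 0: the distortion is uncorrelated with the input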

Journal Article
TL;DR: In this paper, the authors investigate the performance of linear precoders, such as maximal-ratio transmission and zero-forcing, subject to coarse quantization, and derive a closed-form approximation of the rate achievable under such quantization.
Abstract: Massive multiuser (MU) multiple-input multiple-output (MIMO) is foreseen to be one of the key technologies in fifth-generation wireless communication systems. In this paper, we investigate the problem of downlink precoding for a narrowband massive MU-MIMO system with low-resolution digital-to-analog converters (DACs) at the base station (BS). We analyze the performance of linear precoders, such as maximal-ratio transmission and zero-forcing, subject to coarse quantization. Using Bussgang’s theorem, we derive a closed-form approximation of the rate achievable under such coarse quantization. Our results reveal that the performance attainable with infinite-resolution DACs can be approached using DACs having only 3–4 bits of resolution, depending on the number of BS antennas and the number of user equipments (UEs). For the case of 1-bit DACs, we also propose novel nonlinear precoding algorithms that significantly outperform linear precoders at the cost of increased computational complexity. Specifically, we show that nonlinear precoding incurs only a 3 dB penalty compared with the infinite-resolution case for an uncoded bit-error rate of 10⁻³, in a system with 128 BS antennas that uses 1-bit DACs and serves 16 single-antenna UEs. In contrast, the penalty for linear precoders is about 8 dB.

307 citations
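
A minimal sketch of the linear-precoding-plus-coarse-quantization setting analyzed above: zero-forcing precoding followed by 1-bit DACs that quantize the in-phase and quadrature components of every antenna signal. The dimensions match the 128-antenna, 16-user example, but the QPSK signaling, power normalization, and noiseless channel are simplifying assumptions, and the paper's nonlinear precoders are not shown.

# Zero-forcing precoding followed by 1-bit DACs per antenna (real and imaginary
# parts quantized separately); illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_ue = 128, 16

H = (rng.standard_normal((n_ue, n_ant)) + 1j * rng.standard_normal((n_ue, n_ant))) / np.sqrt(2)
s = (rng.choice([-1, 1], n_ue) + 1j * rng.choice([-1, 1], n_ue)) / np.sqrt(2)   # QPSK symbols

P = H.conj().T @ np.linalg.inv(H @ H.conj().T)              # zero-forcing precoder
x = P @ s                                                   # infinite-resolution precoded signal
x_q = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2 * n_ant)   # 1-bit DACs, unit total power

y = H @ x_q                                                 # noiseless received signal
print(np.round(y[:4], 2))                                   # scaled, distorted copies of s[:4]
print(s[:4])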


Cited by
Journal Article
TL;DR: This article identifies the primary drivers of 6G systems in terms of applications and accompanying technological trends, identifies the enabling technologies for the introduced 6G services, and outlines a comprehensive research agenda that leverages those technologies.
Abstract: The ongoing deployment of 5G cellular systems is continuously exposing the inherent limitations of this system, compared to its original premise as an enabler for Internet of Everything applications. These 5G drawbacks are spurring worldwide activities focused on defining the next-generation 6G wireless system that can truly integrate far-reaching applications ranging from autonomous systems to extended reality. Despite recent 6G initiatives (one example is the 6Genesis project in Finland), the fundamental architectural and performance components of 6G remain largely undefined. In this article, we present a holistic, forward-looking vision that defines the tenets of a 6G system. We opine that 6G will not be a mere exploration of more spectrum at high-frequency bands, but it will rather be a convergence of upcoming technological trends driven by exciting, underlying services. In this regard, we first identify the primary drivers of 6G systems, in terms of applications and accompanying technological trends. Then, we propose a new set of service classes and expose their target 6G performance requirements. We then identify the enabling technologies for the introduced 6G services and outline a comprehensive research agenda that leverages those technologies. We conclude by providing concrete recommendations for the roadmap toward 6G. Ultimately, the intent of this article is to serve as a basis for stimulating more out-of-the-box research around 6G.

2,416 citations

Journal Article

2,415 citations

Book Chapter
15 Feb 2011

1,876 citations

Journal Article
TL;DR: This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need for new communication-theoretic models for their analysis and design, and the long-term and open research issues to be solved towards their massive deployment.
Abstract: Future wireless networks are expected to constitute a distributed intelligent wireless communications, sensing, and computing platform, which will have the challenging requirement of interconnecting the physical and digital worlds in a seamless and sustainable manner. Currently, two main factors prevent wireless network operators from building such networks: (1) the lack of control of the wireless environment, whose impact on the radio waves cannot be customized, and (2) the current operation of wireless radios, which consume a lot of power because new signals are generated whenever data has to be transmitted. In this paper, we challenge the usual “more data needs more power and emission of radio waves” status quo, and motivate that future wireless networks necessitate a smart radio environment: a transformative wireless concept, where the environmental objects are coated with artificial thin films of electromagnetic and reconfigurable material (that are referred to as reconfigurable intelligent meta-surfaces), which are capable of sensing the environment and of applying customized transformations to the radio waves. Smart radio environments have the potential to provide future wireless networks with uninterrupted wireless connectivity, and with the capability of transmitting data without generating new signals but recycling existing radio waves. We will discuss, in particular, two major types of reconfigurable intelligent meta-surfaces applied to wireless networks. The first type of meta-surfaces will be embedded into, e.g., walls, and will be directly controlled by the wireless network operators via a software controller in order to shape the radio waves for, e.g., improving the network coverage. The second type of meta-surfaces will be embedded into objects, e.g., smart t-shirts with sensors for health monitoring, and will backscatter the radio waves generated by cellular base stations in order to report their sensed data to mobile phones. These functionalities will enable wireless network operators to offer new services without the emission of additional radio waves, but by recycling those already existing for other purposes. This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need for new communication-theoretic models for their analysis and design, and the long-term and open research issues to be solved towards their massive deployment. In a nutshell, this paper is focused on discussing how the availability of reconfigurable intelligent meta-surfaces will allow wireless network operators to redesign common and well-known network communication paradigms.

1,504 citations
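
The basic signal model behind such reconfigurable meta-surfaces is compact enough to write down directly: for a single-antenna link assisted by N passive elements, the received amplitude is |h_d + Σ_k g_k e^{jθ_k} h_k|, which is maximized by co-phasing every cascaded path with the direct one, i.e., θ_k = arg(h_d) − arg(g_k h_k). The sketch below uses i.i.d. Rayleigh channels and an assumed element count for illustration and is not taken from the paper above.

# Single-user link assisted by a reconfigurable intelligent surface with N passive elements.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                               # assumed number of surface elements

h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)     # direct TX -> RX path
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)     # TX -> surface
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)     # surface -> RX

theta = np.angle(h_d) - np.angle(g * h)              # co-phasing: optimal per-element phase shifts
gain_with_surface = np.abs(h_d + np.sum(g * np.exp(1j * theta) * h)) ** 2
gain_direct_only = np.abs(h_d) ** 2

print(gain_direct_only, gain_with_surface)           # the surface-aided link is far stronger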

Book
03 Jan 2018
TL;DR: This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area.
Abstract: Massive multiple-input multiple-output (MIMO) is one of the most promising technologies for the next generation of wireless communication networks because it has the potential to provide game-changing improvements in spectral efficiency (SE) and energy efficiency (EE). This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area. Starting from a rigorous definition of Massive MIMO, the monograph covers the important aspects of channel estimation, SE, EE, hardware efficiency (HE), and various practical deployment considerations. From the beginning, a very general, yet tractable, canonical system model with spatial channel correlation is introduced. This model is used to realistically assess the SE and EE, and is later extended to also include the impact of hardware impairments. Owing to this rigorous modeling approach, a lot of classic "wisdom" about Massive MIMO, based on too simplistic system models, is shown to be questionable.

1,352 citations
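
As a small illustration of the correlated-fading machinery the monograph builds on, the sketch below estimates a spatially correlated Rayleigh channel h ~ CN(0, R) from a single pilot observation using the standard LMMSE formula. The exponential correlation model, pilot SNR, and array size are assumed illustrative values, not parameters taken from the monograph.

# LMMSE estimation of a spatially correlated Rayleigh channel from one pilot:
# y = sqrt(p) * h + n with n ~ CN(0, I)  =>  h_hat = sqrt(p) * R (p R + I)^{-1} y.
import numpy as np

rng = np.random.default_rng(0)
M, p, rho = 64, 10.0, 0.7                            # antennas, pilot SNR, antenna correlation

R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))    # exponential correlation model
L = np.linalg.cholesky(R)
h = L @ ((rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2))

n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = np.sqrt(p) * h + n
h_hat = np.sqrt(p) * R @ np.linalg.solve(p * R + np.eye(M), y)      # LMMSE channel estimate

print(np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2)      # normalized estimation error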