Posted Content

A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning

TL;DR: This tutorial illustrates how to integrate theoretical knowledge of wireless communications into different kinds of deep learning algorithms for URLLC and provides simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.
Abstract: As one of the key communication scenarios in the 5th and also the 6th generation (6G) of mobile communication networks, ultra-reliable and low-latency communications (URLLC) will be central for the development of various emerging mission-critical applications. State-of-the-art mobile communication systems do not fulfill the end-to-end delay and overall reliability requirements of URLLC. In particular, a holistic framework that takes into account latency, reliability, availability, scalability, and decision making under uncertainty is lacking. Driven by recent breakthroughs in deep neural networks, deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLC in future 6G networks. This tutorial illustrates how domain knowledge (models, analytical tools, and optimization frameworks) of communications and networking can be integrated into different kinds of deep learning algorithms for URLLC. We first provide some background of URLLC and review promising network architectures and deep learning frameworks for 6G. To better illustrate how to improve learning algorithms with domain knowledge, we revisit model-based analytical tools and cross-layer optimization frameworks for URLLC. Following that, we examine the potential of applying supervised/unsupervised deep learning and deep reinforcement learning in URLLC and summarize related open problems. Finally, we provide simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.
Citations
Journal ArticleDOI
01 May 1975
TL;DR: The Fundamentals of Queueing Theory, Fourth Edition, as discussed by the authors, provides a comprehensive overview of simple and more advanced queueing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering) Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include: retrial queues; approximations for queueing networks; numerical inversion of transforms; and determining the appropriate number of servers to balance quality and cost of service. Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.
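As a side illustration of the kind of closed-form results that the simple queueing models covered in the book yield, the following is a minimal sketch of the classic M/M/1 steady-state formulas (the function and parameter names are illustrative, not taken from the book):

```python
# Minimal sketch: steady-state metrics of an M/M/1 queue
# (Poisson arrivals at rate lam, exponential service at rate mu, one server).
# Formulas are the standard textbook results.

def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number of customers in the system
    W = 1 / (mu - lam)              # mean time in the system (Little's law: L = lam * W)
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in the queue
    Wq = rho / (mu - lam)           # mean waiting time in the queue
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

print(mm1_metrics(lam=8.0, mu=10.0))
```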

2,562 citations

Journal ArticleDOI
TL;DR: In this paper, the authors explore the emerging opportunities brought by 6G technologies in IoT networks and applications, by conducting a holistic survey on the convergence of 6G and IoT, and highlight interesting research challenges and point out potential directions to spur further research in this promising area.
Abstract: The sixth generation (6G) wireless communication networks are envisioned to revolutionize customer services and applications via the Internet of Things (IoT) towards a future of fully intelligent and autonomous systems. In this article, we explore the emerging opportunities brought by 6G technologies in IoT networks and applications, by conducting a holistic survey on the convergence of 6G and IoT. We first shed light on some of the most fundamental 6G technologies that are expected to empower future IoT networks, including edge intelligence, reconfigurable intelligent surfaces, space-air-ground-underwater communications, Terahertz communications, massive ultra-reliable and low-latency communications, and blockchain. Particularly, compared to the other related survey papers, we provide an in-depth discussion of the roles of 6G in a wide range of prospective IoT applications via five key domains, namely Healthcare Internet of Things, Vehicular Internet of Things and Autonomous Driving, Unmanned Aerial Vehicles, Satellite Internet of Things, and Industrial Internet of Things. Finally, we highlight interesting research challenges and point out potential directions to spur further research in this promising area.

305 citations

Journal ArticleDOI
TL;DR: This tutorial focuses on the role of DRL with an emphasis on deep Multi-Agent Reinforcement Learning (MARL) for AI-enabled wireless networks, and provides a selective description of RL algorithms such as Model-Based RL (MBRL) and cooperative MARL and highlights their potential applications in future wireless networks.
Abstract: Deep Reinforcement Learning (DRL) has recently witnessed significant advances that have led to multiple successes in solving sequential decision-making problems in various domains, particularly in wireless communications. The next generation of wireless networks is expected to provide scalable, low-latency, ultra-reliable services empowered by the application of data-driven Artificial Intelligence (AI). The key enabling technologies of future wireless networks, such as intelligent meta-surfaces, aerial networks, and AI at the edge, involve more than one agent which motivates the importance of multi-agent learning techniques. Furthermore, cooperation is central to establishing self-organizing, self-sustaining, and decentralized networks. In this context, this tutorial focuses on the role of DRL with an emphasis on deep Multi-Agent Reinforcement Learning (MARL) for AI-enabled wireless networks. The first part of this paper will present a clear overview of the mathematical frameworks for single-agent RL and MARL. The main idea of this work is to motivate the application of RL beyond the model-free perspective which was extensively adopted in recent years. Thus, we provide a selective description of RL algorithms such as Model-Based RL (MBRL) and cooperative MARL and we highlight their potential applications in future wireless networks. Finally, we overview the state-of-the-art of MARL in fields such as Mobile Edge Computing (MEC), Unmanned Aerial Vehicles (UAV) networks, and cell-free massive MIMO, and identify promising future research directions. We expect this tutorial to stimulate more research endeavors to build scalable and decentralized systems based on MARL.
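For reference, the mathematical framework for MARL surveyed in the first part of the paper is usually the Markov (stochastic) game; the generic definition below uses standard notation and is not quoted from the paper:

\[
\mathcal{G} = \bigl( \mathcal{N}, \mathcal{S}, \{\mathcal{A}_i\}_{i \in \mathcal{N}}, P, \{r_i\}_{i \in \mathcal{N}}, \gamma \bigr),
\]

where $\mathcal{N} = \{1,\dots,N\}$ is the set of agents, $\mathcal{S}$ the state space, $\mathcal{A}_i$ the action space of agent $i$, $P(s' \mid s, a_1, \dots, a_N)$ the transition kernel, $r_i(s, a_1, \dots, a_N)$ the reward of agent $i$, and $\gamma \in [0,1)$ the discount factor; single-agent RL is recovered for $N = 1$.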

135 citations

Journal ArticleDOI
TL;DR: In this article, the authors study the latest perspectives and future megatrends that are most likely to drive 6G and highlight the key requirements of 6G based on contemporary research such as UN sustainability goals, business model, edge intelligence, digital divide, and the trends in machine learning for 6G.
Abstract: The next generation of the cellular network will attempt to overcome the limitations of the current Fifth Generation (5G) networks and equip itself to address the challenges that will become apparent in the future. Currently, academia and industry have focused their attention on the Sixth Generation (6G) network, which is anticipated to be the next big game-changer in the telecom industry. The outbreak of COVID-19 has made the whole world opt for virtual meetings and live video interactions in areas ranging from healthcare and business to education. However, we miss an immersive experience due to the lack of supporting technology. Experts anticipate that, starting from the post-pandemic age, the performance requirements of technologies for virtual and real-time communication, together with the rise of several verticals such as industrial automation, robotics, and autonomous driving, will increase tremendously and skyrocket during the next decade. In this manuscript, we study the latest perspectives and future megatrends that are most likely to drive 6G. Initially, we describe the instances that lead us to the vision of 6G. Later, we narrate some of the use cases and the KPIs essential to meet their performance requirements. Further, we highlight the key requirements of 6G based on contemporary research, such as UN sustainability goals, business models, edge intelligence, the digital divide, and the trends in machine learning for 6G.

111 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
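A standard way to make the limiting process described above concrete (in modern notation rather than Shannon's own) is the relation between the entropy of a quantized source and the differential entropy of the continuous one:

\[
H\!\bigl(X^{\Delta}\bigr) + \log \Delta \;\xrightarrow[\Delta \to 0]{}\; h(X), \qquad h(X) = -\int p(x)\,\log p(x)\,\mathrm{d}x,
\]

where $X^{\Delta}$ denotes the source quantized into cells of width $\Delta$. The discrete entropy itself diverges as the cells shrink, while quantities such as mutual information, in which the $\log \Delta$ terms cancel, converge to well-defined continuous-case limits, which is one concrete instance of the limiting behaviour described in the abstract.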

65,425 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
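As a toy illustration of "multiple processing layers" trained with backpropagation (a generic sketch; the layer sizes, target function, and learning rate are arbitrary choices, not code from the paper):

```python
# Two-layer (one hidden layer) network trained by backpropagation with
# manually derived gradients, fitting a simple 1-D target function.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
Y = np.sin(3 * X)                              # target to be represented

W1, b1 = rng.normal(0, 0.5, (1, 32)), np.zeros(32)   # first-layer parameters
W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)    # second-layer parameters
lr = 0.05

for epoch in range(2000):
    # Forward pass: each layer computes its representation from the previous one.
    H = np.tanh(X @ W1 + b1)                   # hidden representation
    Y_hat = H @ W2 + b2                        # network output
    err = Y_hat - Y                            # difference from the desired output

    # Backward pass: propagate the error gradient layer by layer.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)           # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Gradient-descent update of the internal parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err ** 2).mean()))
```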

46,982 citations

Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
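A compact, generic sketch of one of the temporal-difference methods treated in Part II of the book (tabular Q-learning on an invented five-state chain; the environment and all parameter values are illustrative):

```python
# Tabular Q-learning on a deterministic 5-state chain: action 1 moves right,
# action 0 moves left; reward 1 is received on reaching the rightmost state.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1              # step size, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # temporal-difference update toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # the learned action values favor moving right in every state
```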

37,989 citations


"A Tutorial on Ultra-Reliable and Lo..." refers background or methods in this paper

  • ...FNNs work well in most of the small-scale problems....


  • ...Then, problem (22) is converted to an unconstrained problem as follows, $\min_{\lambda^{\mathrm{CMDP}}_m > 0,\, m=1,\dots,M_c}\ \max_{\pi}\ L^{\mathrm{CMDP}}(\pi, \lambda^{\mathrm{CMDP}})$.... (a generic form of this Lagrangian is sketched after these excerpts)


  • ...By applying $G^{(t)} = r(s^{(t)}, a^{(t)}) + G^{(t+1)}$, we can express the value function in recursive form, known as the Bellman equation [68].... (written out after these excerpts)


  • ...Illustration of actor-critic algorithm with FNNs [64, 68].... (a code sketch of a generic actor-critic loop follows these excerpts)


  • ...Depending on whether the transition probabilities are available or not, reinforcement learning algorithms are classified into two categories: model-based and model-free [23, 68]....

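The primal-dual conversion quoted above relies on a Lagrangian $L^{\mathrm{CMDP}}$. Since problem (22) itself is not reproduced on this page, the following is only a generic sketch of how such a Lagrangian is typically formed for a constrained MDP with $M_c$ constraints (all symbols here are assumptions, not quotations from the tutorial):

\[
L^{\mathrm{CMDP}}(\pi, \lambda^{\mathrm{CMDP}}) \;=\; J_0(\pi) \;-\; \sum_{m=1}^{M_c} \lambda^{\mathrm{CMDP}}_m \bigl( J_m(\pi) - c_m \bigr),
\]

where $J_0(\pi)$ is the objective to be maximized, $J_m(\pi) \le c_m$ are the constraints, and $\lambda^{\mathrm{CMDP}}_m > 0$ are dual variables. The min-max problem $\min_{\lambda^{\mathrm{CMDP}}} \max_{\pi} L^{\mathrm{CMDP}}(\pi, \lambda^{\mathrm{CMDP}})$ is then solved by alternating updates of the policy and the dual variables.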
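Writing out the recursion in the Bellman-equation excerpt, the state-value function of a policy $\pi$ satisfies the standard relation

\[
V^{\pi}(s) \;=\; \mathbb{E}_{a \sim \pi(\cdot \mid s),\; s' \sim P(\cdot \mid s, a)} \bigl[\, r(s, a) + \gamma\, V^{\pi}(s') \,\bigr],
\]

where $\gamma \in (0, 1]$ is a discount factor; $\gamma = 1$ matches the undiscounted return $G^{(t)} = r(s^{(t)}, a^{(t)}) + G^{(t+1)}$ used in the excerpt.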
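The actor-critic illustration referenced above is not reproduced here, so the sketch below shows only a generic one-step actor-critic loop with small feedforward networks on an invented two-state toy environment (library choice, environment, and hyperparameters are all illustrative assumptions):

```python
# Generic one-step actor-critic with small feedforward networks (PyTorch).
# The toy environment is invented for illustration; it is not the URLLC
# setting considered in the tutorial.
import torch
import torch.nn as nn

class ToyEnv:
    """Two-state toy MDP: action 1 flips the state, action 0 keeps it; reward 1 in state 1."""
    def __init__(self):
        self.state = 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        if action == 1:
            self.state = 1 - self.state
        reward = 1.0 if self.state == 1 else 0.0
        return self.state, reward

def one_hot(s, n=2):
    x = torch.zeros(n)
    x[s] = 1.0
    return x

actor = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))   # action logits
critic = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # state value V(s)
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-2)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-2)

env, gamma = ToyEnv(), 0.95
state = env.reset()
for step in range(2000):
    s = one_hot(state)
    dist = torch.distributions.Categorical(logits=actor(s))
    action = dist.sample()
    next_state, reward = env.step(action.item())

    # One-step TD error, used both as the critic's regression error
    # and as the advantage estimate for the policy gradient.
    with torch.no_grad():
        td_target = reward + gamma * critic(one_hot(next_state))
    td_error = td_target - critic(s)

    critic_loss = td_error.pow(2).mean()
    actor_loss = -(dist.log_prob(action) * td_error.detach()).mean()

    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    state = next_state
```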

Book
01 Mar 2004
TL;DR: This book gives a comprehensive introduction to convex optimization, with the focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.
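For a flavor of how a recognized convex problem is handed to a numerical solver in practice, here is a small sketch using the CVXPY modeling library (the library choice and the example problem are illustrative, not taken from the book):

```python
# Nonnegative least squares: minimize ||A x - b||_2^2 subject to x >= 0.
# Once the problem is recognized as convex, a modeling layer such as CVXPY
# passes it to an appropriate numerical solver.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = cp.Variable(5)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
problem.solve()

print("optimal value:", problem.value)
print("optimal x:", x.value)
```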

33,341 citations

Journal ArticleDOI
01 Jan 1988-Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
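The "measure of the difference between the actual output vector and the desired output vector" and the repeated weight adjustment described above are conventionally written as follows (standard notation, not a quotation from the paper):

\[
E \;=\; \tfrac{1}{2} \sum_{j} \bigl( y_j - d_j \bigr)^2, \qquad \Delta w_{ji} \;=\; -\,\eta\, \frac{\partial E}{\partial w_{ji}},
\]

where $y_j$ is the actual output of unit $j$, $d_j$ the desired output, $\eta$ the learning rate, and $\partial E / \partial w_{ji}$ is obtained by propagating error derivatives backwards through the hidden layers via the chain rule.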

23,814 citations


"A Tutorial on Ultra-Reliable and Lo..." refers background in this paper

  • ...Although the universal approximation theorem of DNNs indicates that a DNN can be arbitrarily accurate when approximating a continuous function [65, 66], how to design structures and hyperparameters of DNNs to achieve a satisfactory accuracy remains unclear....
