Open Access · Posted Content

Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

TLDR
This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking, covering dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation.
Abstract
This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of the network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, i.e., the decisions or actions to take given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not find the optimal policy in reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning with deep learning, has been developed to overcome this shortcoming. In this survey, we first give a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking, including dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning.
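The abstract notes that reinforcement learning can find the optimal policy when the state and action spaces are small. A minimal tabular Q-learning sketch on a toy chain MDP illustrates that setting; the environment, rewards, and hyperparameters below are illustrative assumptions, not taken from the survey.

```python
import random

# Toy chain MDP (illustrative, not from the survey): states 0..3,
# actions 0 = left, 1 = right. Reaching state 3 pays reward 1 and ends
# the episode.
N_STATES, N_ACTIONS = 4, 2
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1

def step(s, a):
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward, s_next == N_STATES - 1

def q_learning(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < EPSILON:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s_next, r, done = step(s, a)
            # Q-learning update: move Q(s,a) toward the bootstrapped target.
            target = r + (0.0 if done else GAMMA * max(Q[s_next]))
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# Greedy policy: the action with the highest learned Q-value in each state.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

With this tiny state space the table converges quickly; it is exactly this table that becomes infeasible in large-scale networks, motivating the deep variants the survey reviews.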


Citations
Posted Content

Federated Learning in Mobile Edge Networks: A Comprehensive Survey

TL;DR: In a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved, which raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale.
Journal ArticleDOI

Toward Smart Wireless Communications via Intelligent Reflecting Surfaces: A Contemporary Survey

TL;DR: A literature review on recent applications and design aspects of the intelligent reflecting surface (IRS) in future wireless networks, covering the joint optimization of the IRS's phase control and the transceivers' transmission control in different network design problems, e.g., rate maximization and power minimization.
Journal ArticleDOI

Satellite Communications in the New Space Era: A Survey and Future Challenges

TL;DR: In this article, the authors present a survey of the state of the art in satellite communications, while highlighting the most promising open research topics, such as new constellation types, on-board processing capabilities, non-terrestrial networks and space-based data collection/processing.
Journal ArticleDOI

Quantum Machine Learning for 6G Communication Networks: State-of-the-Art and Vision for the Future

TL;DR: A novel QC-assisted and QML-based framework for 6G communication networks is proposed while articulating its challenges and potential enabling technologies at the network infrastructure, network edge, air interface, and user end.
Posted Content

Reconfigurable Intelligent Surfaces: Principles and Opportunities

TL;DR: A comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies is provided in this article.
References
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
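The deep Q-network (DQN) method summarized here trains a neural network toward bootstrapped targets y = r + γ·max_a′ Q_target(s′, a′), with a periodically frozen target network. A sketch of that target computation, with the target network stood in by a fixed Q-table and all numbers illustrative:

```python
import numpy as np

GAMMA = 0.99  # discount factor (illustrative choice)

def dqn_targets(rewards, next_states, dones, q_target):
    """y_i = r_i + gamma * max_a' Q_target(s'_i, a'), zero bootstrap at terminals."""
    max_next_q = q_target[next_states].max(axis=1)  # max over actions
    return rewards + GAMMA * max_next_q * (1.0 - dones)

# Tiny example: 3 states, 2 actions; values are made up for illustration.
q_table = np.array([[0.0, 1.0],
                    [2.0, 0.5],
                    [0.0, 0.0]])
rewards = np.array([1.0, 0.0])
next_states = np.array([1, 2])
dones = np.array([0.0, 1.0])  # second transition is terminal
targets = dqn_targets(rewards, next_states, dones, q_table)
```

In the full algorithm these targets are regressed against the online network's predictions over minibatches sampled from a replay buffer; only the target construction is shown here.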
Book

Dynamic Programming

TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
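Infinite-horizon discounted MDPs of the kind Puterman treats are classically solved by value iteration. The following sketch uses an illustrative 2-state MDP; the transition probabilities and rewards are assumptions for the example, not from the book.

```python
GAMMA, THETA = 0.9, 1e-8  # discount factor and convergence threshold

# P[s][a] = list of (probability, next_state, reward) transitions
# (made-up 2-state, 2-action MDP for illustration).
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}

def value_iteration(P, gamma=GAMMA, theta=THETA):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup: best expected one-step return.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # Greedy policy with respect to the converged values.
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    return V, policy

V, policy = value_iteration(P)
```

Each sweep contracts the value function by the factor γ, so the loop terminates for any γ < 1; this contraction argument is the standard convergence proof for discounted models.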