
Thirty Years of Machine Learning:
The Road to Pareto-Optimal Wireless Networks
Jingjing Wang, Member, IEEE, Chunxiao Jiang, Senior Member, IEEE,
Haijun Zhang, Senior Member, IEEE, Yong Ren, Senior Member, IEEE,
Kwang-Cheng Chen, Fellow, IEEE, and Lajos Hanzo, Fellow, IEEE
Abstract—Future wireless networks have a substantial
potential in terms of supporting a broad range of complex
compelling applications both in military and civilian fields,
where the users are able to enjoy high-rate, low-latency,
low-cost and reliable information services. Achieving this
ambitious goal requires new radio techniques for adaptive
learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and
wireless services. Machine learning (ML) algorithms have
great success in supporting big data analytics, efficient pa-
rameter estimation and interactive decision making. Hence,
in this article, we review the thirty-year history of ML by
elaborating on supervised learning, unsupervised learning,
reinforcement learning and deep learning. Furthermore, we
investigate their employment in the compelling applications
of wireless networks, including heterogeneous networks (Het-
Nets), cognitive radios (CR), Internet of Things (IoT), machine
to machine networks (M2M), and so on. This article aims
for assisting the readers in clarifying the motivation and
methodology of the various ML algorithms, so as to invoke
them for hitherto unexplored services as well as scenarios of
future wireless networks.
Index Terms—Machine learning (ML), future wireless
network, deep learning, regression, classification, clustering,
network association, resource allocation.
NOMENCLATURE
5G The 5th Generation Mobile Network
AI Artificial Intelligence
AMC Automatic Modulation Classification
ANN Artificial Neural Network
This work is partly supported by the National Natural Science Foun-
dation of China (61922050), the Pre-research Fund of Equipments of
Ministry of Education of China (6141A02022615), the Research Fund
of China Academy of Space Technology (Co/Co-20180605-47), and
also partly supported by the National Natural Science Foundation of
China (61822104, 61771044), the Fundamental Research Funds for the
Central Universities (RC1631, FRF-TP-19-002C1). Dr. Wang would like
to acknowledge the financial support of the Shuimu Tsinghua Scholar
Program. (Corresponding author: Chunxiao Jiang)
J. Wang and Y. Ren are with the Department of Electronic Engi-
neering, Tsinghua University, Beijing, 100084, China. E-mail: chinaeep-
hd@gmail.com, reny@tsinghua.edu.cn.
C. Jiang is with Tsinghua Space Center, Tsinghua University, Beijing,
100084, China. E-mail: jchx@tsinghua.edu.cn.
H. Zhang is with Institute of Artificial Intelligence, Beijing Advanced
Innovation Center for Materials Genome Engineering, Beijing Engineer-
ing and Technology Research Center for Convergence Networks and
Ubiquitous Services, University of Science and Technology Beijing,
Beijing 100083, China. E-mail: haijunzhang@ieee.org.
K.-C. Chen is with the Electrical Engineering Department, University
of South Florida, Tampa, FL 33620, USA. Email: kwangcheng@usf.edu.
L. Hanzo is with the School of Electronics and Computer Science,
University of Southampton, Southampton, SO17 1BJ, UK. Email: l-
h@ecs.soton.ac.uk.
AP Access Point
AWGN Additive White Gaussian Noise
BBU BaseBand processing Unit
BS Base Station
CDF Cumulative Distribution Function
CNN Convolutional Neural Network
CogNet Cognitive Network
CoMP Coordinated Multiple Points
CR Cognitive Radio
C-RAN Cloud Radio Access Network
CRN Cognitive Radio Network
CSI Channel State Information
CSMA/CA Carrier-Sense Multiple Access with Collision
Avoidance
CSMA/CD Carrier-Sense Multiple Access with Collision
Detection
C-S Mode Client-Server Mode
D2D Device to Device
DBN Deep Belief Network
DNN Deep Neural Network
DQN Deep Q-Network
EA Energy Awareness
EE Energy Efficiency
EH Energy Harvesting
ELP Exponentially-weighted algorithm with Linear
Programming
EM Expectation Maximization
eMBB enhanced Mobile Broad Band
ERM Empirical Risk Minimization
EXP3 EXPonential weights for EXPloration and
EXPloitation
FANET Flying Ad Hoc Network
FDA Fisher Discriminant Analysis
FDI False Data Injection
FSMC Finite State Markov Channel
GMM Gaussian Mixture Model
HetNet Heterogeneous Network
HMM Hidden Markov Model
ICA Independent Component Analysis
IEEE Institute of Electrical and Electronics Engineers
IoT Internet of Things
ITS Intelligent Transportation System
KNN K-Nearest Neighbors
LED Light Emitting Diode
LOS Line of Sight
LS Least Square
LSTM Long Short Term Memory

LTE Long Term Evolution
M2M Machine to Machine
MANET Mobile Ad Hoc Network
MAP Maximum a Posteriori
MDP Markov Decision Process
MIMO Multiple-Input and Multiple-Output
ML Machine Learning
MLE Maximum Likelihood Estimation
mMTC massive Machine Type of Communication
NB-IoT NarrowBand Internet of Things
NB-M2M NarrowBand Machine to Machine
NFV Network Function Virtualization
NLOS Non-Line of Sight
NOMA Non-Orthogonal Multiple Access
OFDM Orthogonal Frequency Division Multiplexing
OSPF Open Shortest Path First
P2P Peer to Peer
PCA Principal Component Analysis
POMDP Partially Observable Markov Decision Process
PU Primary User
QoE Quality of Experience
QoS Quality of Service
RAT Radio Access Technology
RBM Restricted Boltzmann Machine
RBF Radial Basis Function
RFID Radio Frequency IDentification
RNN Recurrent Neural Network
RRU Remote Radio Unit
SDA Stacked Denoising Auto-encoder
SDN Software Defined Network
SDR Software Defined Radio
SE Spectrum Efficiency
SG Stochastic Geometry
SRM Structural Risk Minimization
STBC Space Time Block Code
SU Secondary User
SVM Support Vector Machine
TAS Transmit Antenna Selection
TCP Transmission Control Protocol
TD Temporal Difference
TOA Time of Arrival
UAV Unmanned Aerial Vehicle
UDN Ultra Dense Network
uRLLC ultra-Reliable Low-Latency Communication
V2I Vehicle to Infrastructure
V2V Vehicle to Vehicle
V2X Vehicle to Everything
VANET Vehicular Ad Hoc Network
VLC Visible Light Communication
VR Virtual Reality
WANET Wireless Ad Hoc Network
WBAN Wireless Body Area Network
WLAN Wireless Local Area Network
WiMAX Worldwide Interoperability for Microwave
Access
Wi-Fi Wireless Fidelity
WMAN Wireless Metropolitan Area Network
WPAN Wireless Personal Area Network
WSN Wireless Sensor Network
WWAN Wireless Wide Area Network
I. INTRODUCTION
Wireless networks have supported a variety of
military services, intelligent transportation, health-
care, etc. To elaborate briefly, next-generation mobile
networks are expected to support high data rate commu-
nication [1]. As a complement, wireless sensor networks
(WSN) support sustained monitoring in unmanned or
hostile environments relying on widely dispersed operat-
ing sensors [2]. Furthermore, the popular Wi-Fi network
provides convenient Internet access for various devices
in indoor scenarios [3]. With the rapid proliferation of
portable mobile devices and the demand for a high quality
of service (QoS) and quality of experience (QoE), future
wireless networks will continue to support a broad range
of compelling applications, where the users benefit from
high-rate, low-latency, low-cost and reliable information
services.
A. Motivation
In contrast to the conventional wireless networks, future
wireless networks have the following evolutionary tenden-
cy [4], [5]:
Network Scale: The network is associated with a
tremendous network size including all kinds of en-
tities, each of which has different service capabilities
as well as requirements. Furthermore, interactions
among these entities result in a diverse variety of
traffic, such as text, voice, audio, images, video, etc.
Network Structure: On one hand, the future wireless
network tends to have a self-configuring element,
where each entity cooperatively completes tasks. This
characteristic is termed “being ad hoc”. On the
other hand, the future wireless network is hetero-
geneous and hierarchical, having different network
slices¹. Furthermore, the mobility of entities results
in a complex time-variant network structure, which
requires dynamic time-space association.
Network Control: Future wireless networks facilitate
convenient reconfiguration by software-based net-
work management, hence improving network flexi-
bility and efficiency.
Machine learning (ML) was first introduced as a popular
technique of realizing artificial intelligence in the late
1950’s [6]. ML algorithms can learn from training data
without being explicitly programmed. They are beneficial for
classification/regression, prediction, clustering and deci-
sion making [7]–[9], whilst relying on the following three
basic elements [10]:
Model: Mathematical or signal models are construct-
ed from training data and expert knowledge, in order
to statistically describe the characteristics of the given
data set. Then again, relying on these trained models,
ML can be used for classification, prediction and
¹ In our paper, network slices are multiple logical networks running
on top of a shared physical network infrastructure and operated by
a control center.

decision making. In case the appropriate models are
not available, techniques on the feature extraction or
knowledge discovery can be developed to achieve the
same goal.
Strategy: The criteria used for training mathematical
models are called strategies. How to select an appro-
priate strategy is closely associated with training data.
Empirical risk minimization [11] and structural risk
minimization [12] constitute a pair of fundamental
strategies, where the latter can beneficially avoid the
notorious “over-fitting” phenomenon.
Algorithm: Algorithms are constructed to find so-
lutions based on predetermined model and strategy
selected, which can be viewed as an optimization
process. A powerful algorithm can find a globally
optimal solution with high probability at a low com-
putational complexity and storage.
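The contrast between the two fundamental strategies can be sketched in a few lines of code. The example below is ours, not the paper's: ridge regression adds a penalty λ‖w‖² to the empirical squared loss, so the structural-risk solution avoids the wild coefficients that plain empirical risk minimization produces on an over-parameterized polynomial fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

# A high-degree polynomial feature matrix invites over-fitting.
X = np.vander(x, 12)

def fit(X, y, lam):
    """Minimize ||Xw - y||^2 + lam * ||w||^2 (lam = 0 is plain ERM)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_erm = fit(X, y, lam=0.0)    # empirical risk minimization
w_srm = fit(X, y, lam=1e-3)   # structural risk minimization (ridge penalty)

# The penalized coefficients stay much smaller, which curbs over-fitting.
print(np.linalg.norm(w_erm) > np.linalg.norm(w_srm))  # True
```

Here the ridge penalty plays the role of the structural term; any complexity measure (model order, margin width, description length) can take its place.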
In the last thirty years, ML has been successfully
applied to the field of computer vision [13], automatic
control [14], bioinformatics [15], etc. Considering the
aforementioned characteristics of future wireless networks,
data-driven ML is also likely to become a powerful tech-
nique of network association for substantially improving
the network performance. This is achieved by accurately
learning the near-real-time physical operating scenario,
which allows it to outperform the traditional model-
driven optimization algorithms based on more-restrictive
assumptions detailed in [16]. More specifically,
The wireless data torrent may be conveniently man-
aged by the big data processing capability of M-
L [17]. For example, the tele-traffic volume generated
by on-demand information and entertainment is pre-
dicted to substantially increase over the next decade,
and an average smart phone may generate as much
as 4.4 GB data per month by the year 2020 [18]–
[20]. The massive amount of data constitutes a large
training set, which can be statistically exploited for
data-mining as well as for classification and for
prediction with the aid of ML algorithms.
Future wireless networks require both individual node
intelligence and swarm intelligence [21]. Moreover,
as for resource allocation and management, we tend
to strike a trade-off among numerous factors, such
as the capacity, power consumption, latency, com-
plexity, etc. rather than only considering a single-
component objective function. Thanks to learning
from trial and error experiments, ML is conducive to
supporting intelligent multi-objective decision making
in the context of multi-agent collaborative network
management. Future wireless networks may hence
be expected to benefit from intelligent multi-agent
systems.
Modeling and parameter estimation play an im-
portant role in wireless networks. For instance, in
massive multiple-input and multiple-output (MIMO)
systems, an accurate estimate of the channel state
information (CSI) potentially allows us to approach
the system’s capacity. Given that traditional mathe-
matical models often fail to accurately describe the
system in typical time-varying scenarios, ML pro-
vides an alternative technique of adaptive modeling
and parameter estimation relying on learning from the
recorded history.
Future wireless networks are also expected to take
into account the human behavior, for example by
taking into account the geographic position of access
points (AP) in an ultra dense network (UDN), where
user-centric designs have been conceived for reducing
the cluster-edge effects. By mimicking human intelli-
gence, ML may be deemed to be the most appropriate
tool for adapting the network’s structure to the human
behavior observed [22], [23].
Next-generation wireless network optimization relying on
ML has emerged as an important research topic, so much
so that the standard body ITU-T has formed a dedicated
focus group to study this subject from 2018 to 2020. When
incorporating ML functionalities into the network architec-
ture, there are two fundamental mechanisms of exploiting
ML algorithms, namely online and offline ML. The online
ML family represents ML functionalities embedded into
networking algorithms or protocols. By contrast, offline
ML may be executed by a co-located computing facility
connected to the corresponding network entities. However,
offline ML can also be supported by remote computing
facilities.
In recent years, a range of surveys have been conceived
on ML paradigms. Some of them focused their scope on a
specific wireless scenario, such as WSNs [24], [25], cog-
nitive radio networks (CRN) [26]–[28], Internet of Things
(IoT) [29], wireless ad hoc networks (WANET) [30],
self-organizing cellular networks [31], etc. Specifically,
Alsheikh et al. [24] provided an extensive overview of ML
methods applied to WSNs which improved the resource
exploitation and prolonged the lifespan of the network.
Kulkarni et al. [25] surveyed some common issues of
WSNs solved by computational intelligence algorithms,
such as data fusion, routing, task scheduling, localization,
etc. Moreover, Bkassiny et al. [26] investigated decision-
making and feature classification problems solved by both
centralized and decentralized learning algorithms in CRN
in a non-Markovian environment. Gavrilovska et al. [27]
studied the nature of the CRN’s capability of reasoning
and learning. Park et al. [29] reviewed a range of learning
aided frameworks designed for adapting to the heteroge-
neous resource-constrained IoT environment. Forster [30]
portrayed the advantages of using ML for the data routing
problem of WANETs. Furthermore, a detailed literature
review of the past fifteen years of ML techniques applied
to self-configuration, self-optimization and self-healing,
was provided by Klaine et al. [31].
Some of the literature was restricted to a specific
application [32]–[38], whilst others considered a single
learning technique [39]–[44]. To elaborate, Al-Rawi et
al. [32] presented an overview of the features, meth-
ods and performance enhancement of learning-assisted
routing schemes in the context of distributed wireless
networks. Additionally, Fadlullah et al. [33] provided an
overview of the state-of-the-art in learning aided network

TABLE I: The Topics of Survey Papers on Different ML
Paradigms in Wireless Networks

Application-oriented                   | ML-oriented
[24],[29] resource allocation          | [27] capability of learning
[25],[30],[32] routing schemes         | [45] supervised learning
[33] traffic control                   | [39] unsupervised learning
[26],[34],[37] traffic classification  | [44] artificial neural networks
[35] intrusion detection               | [40] reinforcement learning
[36] software defined networking       | [41],[42],[43] deep learning
[31],[38] networking techniques        |
traffic control schemes as well as in deep learning aided
intelligent routing strategies, while Nguyen et al. [34]
focused their attention on the ML techniques conceived
for Internet traffic classification. ML and data mining
assisted cyber intrusion detection were surveyed in [35],
including the complexity comparison of each algorithm
and a set of recommendations concerning the best methods
applied to different cyber intrusion detection problems.
Moreover, ML techniques applied to software defined
networking (SDN) were investigated in [36], from the
perspective of traffic classification, routing optimization,
resource management, etc. Pacheco et al. [37] surveyed
the ML techniques based on several steps to achieve traffic
classification. Sun et al. [38] focused on the recent
advances of ML techniques in the MAC layer, network
layer, and application layer.
As for exploring learning techniques, Usama et al. [39]
provided an overview of the recent advances of unsu-
pervised learning in the context of networking, such as
traffic classification, anomaly detection, network optimiza-
tion, etc. Yau et al. [40] investigated the employment
of reinforcement learning invoked for achieving context
awareness and intelligence in a variety of wireless network
applications such as data routing, resource allocation and
dynamic channel selection. The authors of [41] and [42]
focused their attention on the benefit of deep learning
in wireless multimedia network applications, including
ambient sensing, cyber-security, resource optimization,
etc. Mao et al. [43] provided a comprehensive survey
of the applications of deep learning algorithms in terms
of different network layers, including physical layer, da-
ta link layer and routing layer. Additionally, Chen et
al. [44] overviewed the artificial neural networks based
ML algorithms conceived for various wireless networking
problems. The main contributions of the existing ML aided
wireless network survey and tutorial papers are contrasted
with this survey in Fig. 1 and Table I.
B. Contributions
Hence, our focus is on a comprehensive survey of
ML aided wireless networks. Inspired by the above-mentioned
challenges, in this article we review the development of
ML aided wireless networks. We commence by investi-
gating a series of popular learning algorithms and their
compelling applications in wireless networks and then
provide some specific examples based on some recent
research results, followed by a range of promising open
issues in the design of future networks. Our original
contributions are summarized as follows:
We critically review the thirty-year history of ML.
Depending on how we use training data, we classify
ML algorithms into three categories, i.e. supervised
learning [45], unsupervised learning [46] and rein-
forcement learning [47]. In addition, we highlight
the family of deep learning algorithms, given their
success in the field of signal processing.
The development of wireless networks is reviewed
from their birth to the future wireless networks.
Moreover, we summarize the evolution of wireless
networking techniques, and characterize a variety of
representative scenarios for future wireless networks.
We appraise a range of typical supervised, unsuper-
vised, reinforcement learning as well as deep learning
algorithms. Moreover, their compelling applications
in wireless networks are surveyed for assisting the
readers in refining the motivation of ML in wireless
networks, all the way from the physical layer to the
application layer.
Relying on recent research results, we highlight a pair
of examples conceived for wireless networks, which
can help the readers to gain insight into hitherto
unexplored scenarios and into their applications in
wireless networks.
C. Organization
The remainder of this article is outlined as follows. In
Section II, we provide a brief overview of the history
of ML and of the development of wireless networks. In
Section III, we introduce a range of typical supervised
learning algorithms and highlight their compelling appli-
cations in wireless networks. In Section IV, we investigate
the family of unsupervised learning algorithms and their
related applications. Some popular reinforcement learning
algorithms are elaborated on in Section V. Moreover,
we present two examples of how these reinforcement
learning algorithms can improve the performance of wire-
less networks. In Section VI, we introduce some typical
deep learning algorithms and their applications in wireless
networks. Some future research ideas and our conclusions
are provided in Section VII. The structure of this treatise
is summarized at a glance in Fig. 2.
II. A BRIEF OVERVIEW OF MACHINE LEARNING AND
WIRELESS NETWORKS
A. The Thirty-Year Development of Machine Learning
The term “machine learning” was first proposed by
Arthur Samuel in 1959 [6], referring to computer
systems that are capable of learning from large amounts
of previous tasks and data, and of self-optimizing their
algorithms. Hard-programmed algo-
rithms are difficult to adapt to dynamically fluctuating de-
mands and constantly renewed system states. By contrast,
relying on learning from previous experiences, ML aided
algorithms are beneficial for scientific decision making
and task prediction, which is achieved by constructing a
self-adapting model from sample inputs. To elaborate a
little further, as for the concept of “learning”, Tom M.

Ota et al. [42] surveyed deep learning algorithms for some key techniques in mobile multimedia networks.
Klaine et al. [31] concentrated on machine learning techniques applied to self-organizing cellular networks.
Fadlullah et al. [33] studied machine learning methods used in the application of network traffic control and intelligent routing.
Usama et al. [39] focused on the unsupervised learning schemes applied to networking techniques.
Alsheikh et al. [41] explored deep learning paradigms in the application of mobile big data analysis.
Park et al. [29] surveyed learning assisted frameworks in the IoT, characterized by resource constraints and heterogeneity.
Al-Rawi et al. [32] focused on the features, methods and performance enhancement of learning enabled routing schemes.
Alsheikh et al. [24] studied machine learning methods for improving resource utilization and prolonging network’s lifespan in WSN.
Gavrilovska et al. [27] explored the characteristics and the capability of reasoning and learning in cognitive radio networks.
Buczak et al. [35] surveyed machine learning algorithms in cyber intrusion detection problems.
Yau et al. [40] surveyed reinforcement learning algorithms used in data routing, resource allocation and channel selection.
Kulkarni et al. [25] focused on key techniques, such as routing, task scheduling, data fusion and localization in WSN.
Nguyen and Guven [34] explored machine learning algorithms for traffic classification in the Internet.
Forster [30] concentrated on machine learning enhanced data routing problems and strategies in WANET.
Bkassiny et al. [26] investigated the decision-making and feature classification problems in cognitive radio networks.
This paper
Our paper surveys the applications of machine learning algorithms in wireless networks from the physical layer to the application
layer, illustrated by examples.
Mao et al. [43] surveyed the applications of deep learning algorithms for different network layers.

Xie et al. [36] focused on the machine learning techniques applied to software defined networking (SDN).
Pacheco et al. [37] surveyed the machine learning techniques for network traffic analysis.
Sun et al. [38] surveyed the recent advances of machine learning techniques in different network layers.
Chen et al. [44] overviewed the artificial neural networks based machine learning algorithms for wireless networking.

Fig. 1: The timeline of survey papers on the application of different ML paradigms in wireless networks.
Mitchell [48] provided the widely quoted description: A
computer program is said to learn from experience E with
respect to some class of tasks T and performance measure
P, if its performance at tasks in T, as measured by P,
improves with experience E.
ML began to flourish in the 1990s [8]. Before this
era, logic- and knowledge-based schemes, such as in-
ductive logic programming, expert systems, etc. domi-
nated the artificial intelligence scene relying on high-
level human-readable symbolic representations of tasks
and logic. Thanks to the development of statistics theo-
ry and stochastic approximation, ML schemes regained
researchers’ attention leading to a range of beneficial
probabilistic models. Researchers embarked on creating
data-driven programs for analyzing large amounts of data
and tried to draw conclusions or to learn from the data.
During this era, ML algorithms such as neural networks
as well as kernel methods became mature. During the
2000s, researchers gradually renewed their interest in deep
learning with the aid of the advances in hardware-based
computational capability, which made ML indispensable
for supporting a wide range of services and applications.
Given the development of progressive learning tech-
niques [49], at present, the research focus of ML has
shifted from “learning being the purpose” to “learning
being the method”. Specifically, ML algorithms no longer
blindly pursue imitating the learning capability of hu-
man beings, instead they focus more on the task-oriented
intelligent data-driven analysis. Nowadays, thanks to the
abundance of raw data and to the frequent interaction
between exploration and exploitation, ML algorithms have
prospered in the fields of computer vision, data mining,
intelligent control, etc. Future wireless networks aim for
providing ubiquitous information services for users in a
variety of scenarios. However, the rapid growth in the
number of users and the resultant explosive growth of tele-
traffic data pushes the limits of network-capacity. As a
remedy, ML aided network management and control can
be viewed as a cornerstone of future wireless networks
in view of their limited power, spectrum and cost.
B. Classifying Machine Learning Techniques
1) A General Taxonomy: Again, depending on how
training data is used, ML algorithms can be grouped
into three categories, i.e. supervised learning, unsupervised
learning and reinforcement learning [50], [51]. In the
following, we will provide a brief description of the three
types of algorithms.
Supervised Learning: The algorithms are trained on
a certain amount of labeled data [45]. Both the input
data and its desired label are known to the computer,
resulting in a data-label pair. Their goal is to infer a
function that maps the input data to the output label
relying on the training of sample data-label pairs.
Specifically, consider a set of N sample data-label
pairs of the form {(x_1, y_1), (x_2, y_2), . . . , (x_N, y_N)},
where x_n is the n-th sample input datum and y_n
represents its label. Let X = {x_1, x_2, . . . , x_N} denote
the input data set and Y = {y_1, y_2, . . . , y_N} represent

Frequently Asked Questions (15)
Q1. What are the contributions in "Thirty years of machine learning: the road to pareto-optimal wireless networks" ?

Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Hence, in this article, the authors review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, the authors investigate their employment in the compelling applications of wireless networks, including heterogeneous networks ( HetNets ), cognitive radios ( CR ), Internet of things ( IoT ), machine to machine networks ( M2M ), and so on. This article aims for assisting the readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. 

Furthermore, the authors have highlighted the development tendency of wireless network techniques and a variety of representative scenarios for future wireless networks as seen in Fig. 5 and Fig. 8. they also have provided a caseby-case description of numerous compelling applications relying on ML algorithms in wireless networks as shown in Table VII, followed by a pair of detailed application examples relying on their recent research results. In comparison with state-of-the-art survey papers seen in Fig. 1, their paper overviews all the four popular kinds of learning schemes and their applications in future wireless networks, which has a full scope of how ML algorithms bear fruits in the past decades in wireless networks. 

the network access cost function and the QoE reward were defined as the metrics of evaluating the proposed network selection schemes. 

the authors use the Euclidean distance or the Manhattan distance [202] for calculating the similarity between the object x and the training samples. 

Given a set of training samples {yn, xn1, xn2, . . . , xnM}, n = 1, 2, . . . , N , the authors are capable of estimating the regression coefficient vector w = [w0, w1, . . . , wM ] with the aid of the maximum likelihood estimation (MLE) method. 

By mimicking human intelligence, ML may be deemed to be the most appropriate tool for adapting the network’s structure to the human behavior observed [22], [23]. 

Deep reinforcement learning is eminently suitable for supporting the interaction in autonomous systems in terms of a higher level understanding of the visual world, which can be readily applied to a diverse analytically intractable problems in future wireless networks. 

By carefully considering the realistic capability of wireless sensors, the model relied on the time- and frequencylimited sensing snapshots having the duration of 12.8 µs as well as the bandwidth of 10MHz. 

given K initial cluster centroid µk, k = 1, . . . ,K, Lloyd’s algorithm arrives at the final cluster segmentation result by alternating between the following two steps,• Step 1: In the iterative round r, assign each sample to a cluster. 

the interference encountered in UDNs tends to be more severe and of higher volatility than that in traditional cellular networks because of the dense deployment of BSs and APs. 

A semi-blind received signal detection method based on ICA was proposed by Lei et al. [262], which additionally estimated the channel information of a multicell multiuser massive MIMO system. 

Given the intrinsic advantages of the reinforcement learning in environment in interactive decision making, it may play a significant role in the field of control decision [344], [345]. 

in [318], Zhang et al. constructed a four-layer DNN for extracting reliable high level features from massive WiFi data, which was pre-trained by the stacked denoising auto-encoder. 

Zhao et al. [249] conceived an efficient K-means clustering algorithm for optical signal detection in the context of burst-mode data transmission. 

The proposed CNN based wireless interference identifier was shown to have a higher identification accuracy than the state-of-the-art schemes in the context of low SNRs, such as −5dB, for example.