
Liao, Haijun, Zhou, Zhenyu, Zhao, Xiongwen, Zhang, Lei, Mumtaz, Shahid, Jolfaei, Alireza, Ahmed, Syed Hassan and Bashir, Ali Kashif (ORCID: https://orcid.org/0000-0001-7595-2522) (2019) Learning-Based Context-Aware Resource Allocation for Edge Computing-Empowered Industrial IoT. IEEE Internet of Things Journal. p. 1.

Downloaded from: https://e-space.mmu.ac.uk/625219/
Version: Accepted Version
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
DOI: https://doi.org/10.1109/jiot.2019.2963371

Please cite the published version

https://e-space.mmu.ac.uk

Learning-Based Context-Aware Resource Allocation
for Edge Computing-Empowered Industrial IoT
Haijun Liao, Student Member, IEEE, Zhenyu Zhou, Senior Member, IEEE, Xiongwen Zhao, Senior
Member, IEEE, Lei Zhang, Shahid Mumtaz, Senior Member, IEEE, Alireza Jolfaei,
Syed Hassan Ahmed, Member, IEEE, and Ali Kashif Bashir, Senior Member, IEEE
Abstract—Edge computing provides a promising paradigm
to support the implementation of industrial Internet of Things
(IIoT) by offloading computational-intensive tasks from resource-
limited machine-type devices (MTDs) to powerful edge servers.
However, the performance gain of edge computing may be
severely compromised due to limited spectrum resources,
capacity-constrained batteries, and context unawareness. In
this paper, we consider the optimization of channel selection
which is critical for efficient and reliable task delivery. We
aim at maximizing the long-term throughput subject to long-
term constraints of energy budget and service reliability. We
propose a learning-based channel selection framework with ser-
vice reliability awareness, energy awareness, backlog awareness,
and conflict awareness, by leveraging the combined power of
machine learning, Lyapunov optimization, and matching theory.
We provide rigorous theoretical analysis, and prove that the
proposed framework can achieve guaranteed performance with
a bounded deviation from the optimal performance with global
state information (GSI) based on only local and causal infor-
mation. Finally, simulations are conducted under both single-
MTD and multi-MTD scenarios to verify the effectiveness and
reliability of the proposed framework.
Index Terms—Industrial Internet of Things (IIoT), resource
allocation, context awareness, edge computing, machine learning,
Lyapunov optimization, matching theory.
Manuscript received August 1, 2019; revised September 10, 2019 and October 31, 2019; accepted December 15, 2019; current version December 30, 2019. This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grant Number 61971189; the Science and Technology Project of State Grid Corporation of China under Grant Number SGSDDK00KJJS1900405; the Exploration Project of State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (North China Electric Power University) under Grant Number LAPS2019-12; and the European Regional Development Fund (FEDER), through the Competitiveness and Internationalization Operational Program (COMPETE 2020), the Regional Operational Program of the Algarve (2020), and the Fundação para a Ciência e a Tecnologia, under project i-Five: Extensão do acesso de espectro dinâmico para rádio 5G, POCI-01-0145-FEDER-030500. (Corresponding author: Zhenyu Zhou.)
H. Liao, Z. Zhou, and X. Zhao are with the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (North China Electric Power University) and the School of Electrical and Electronic Engineering, North China Electric Power University, Beijing, China (e-mail: haijun_liao@ncepu.edu.cn; zhenyu_zhou@ncepu.edu.cn; zhaoxw@ncepu.edu.cn).
L. Zhang is with the Shandong Electric Power Research Institute of State Grid Corporation of China, Jinan, China (e-mail: 18660130685@163.com).
S. Mumtaz is with the Instituto de Telecomunicações, 1049-001 Aveiro, Portugal (e-mail: smumtaz@av.it.pt).
A. Jolfaei is with the Department of Computing, Macquarie University, Sydney, NSW 2113, Australia (e-mail: alireza.jolfaei@mq.edu.au).
S. H. Ahmed is with the Department of Electrical and Computer Science, Georgia Southern University, Statesboro, GA 30460, USA (e-mail: sh.ahmed@ieee.org).
A. K. Bashir is with the Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, U.K. (e-mail: dr.alikashif.b@ieee.org).
I. INTRODUCTION
THE fourth industrial revolution aims to realize interconnected, responsive, intelligent, and self-optimizing manufacturing processes and systems through seamless integra-
tion of advanced manufacturing techniques with industrial
Internet of Things (IIoT) [1]. In this new paradigm, bil-
lions of machine-type devices (MTDs) will be deployed in
the field for continuously performing various tasks such as
monitoring, billing, and protection [2], [3]. Nevertheless, the
tension between resource-limited MTDs and computational-
intensive tasks has become the bottleneck for reliable service
provisioning [4].
Offloading computational-intensive tasks from resource-
limited MTDs to powerful servers provides a promising
solution for accommodating the fast-growing computational
demands. In conventional cloud computing, the remote cloud
servers are generally located far away from MTDs, and the
long-distance data transmission raises numerous issues includ-
ing unstable connection, network congestion, and unbearable
latency [5]. In comparison, edge computing [6], which shifts
the computational capabilities from remote clouds to network
edges within radio access network (RAN) [7], is a promising
paradigm to reduce latency, relieve congestion, and prolong
battery lifetime. It has attracted intensive research efforts from
both industry and academia. In [8], Fan et al. considered the
workload balancing problem in fog computing, and proposed
a distributed device association algorithm to minimize the
communication latency and the computational latency. They
also extended their work to drone-assisted communication
networks for IoT [9]. Markakis et al. developed a multi-
access edge computing based IoT framework for supporting
next-generation emergency services, and provided several use
cases of remote healthcare monitoring and management [10].
Omoniwa et al. proposed an edge-computing-based IoT framework to enhance the smart grid with improved scalability, security, and responsiveness at lower system cost [11].
Unfortunately, although edge computing provides a promis-
ing way to exploit the abundant computational resources of
edge servers, its performance gain may be severely compro-
mised due to limited spectrum resources, capacity-constrained
batteries, and context unawareness. First, to deliver a large
volume of tasks from MTDs to the edge server on a real-
time basis, channel selection has to be dynamically optimized
in accordance with time-varying context parameters such as
channel state information (CSI), energy state information

(ESI), server load, and service reliability requirement. Con-
ventional centralized optimization approaches [12], [13] rely
on a common presumption that there exists a central node,
e.g., the base station (BS), which has the perfect knowledge of
all the context parameters. This presumption is too optimistic
in real-world implementation considering the prohibitive cost
of signaling overhead to collect information of the entire
network. Therefore, a distributed optimization approach where
each MTD individually optimizes its channel selection strategy
based on only local information is more desirable. However,
when the number of MTDs far exceeds that of available channels, selection conflicts will occur frequently as multiple MTDs compete for the same channel, thus coupling the channel selection strategies across different MTDs.
Second, given the limited battery capacity, an MTD will be out of service once its battery energy is exhausted. As a result, the short-term channel selection strategy is also coupled with the long-term energy budget. Last but not least, industrial
applications often require that certain service reliability should
be guaranteed [14]. How to meet the stringent reliability
requirement with limited resources and information brings
another dimension of difficulty.
Matching theory provides a flexible, low-complexity, and efficient tool to solve combinatorial problems such as channel selection [15], task selection [16], and server selection [17]. However, it requires perfect knowledge of global state information (GSI) to construct the preference list, which specifies the fundamental matching criteria [18]. There exist some
research attempts which study the optimization of channel
selection based on matching and game theory [19], [20]. How-
ever, they rely on the assumption that the uncertain context
parameters follow some well-known probability distribution,
and may suffer from severe performance loss if the practical probability distributions of the uncertain factors deviate from the presumed statistical models.
In this paper, we propose a learning-based context-aware
channel selection framework by combining machine learning,
Lyapunov optimization, and matching theory. Specifically, we
adopt the upper confidence bound (UCB) algorithm [21] to enable an MTD to learn the matching preferences and maximize
the long-term optimality performance while maintaining a
well-balanced tradeoff between exploitation and exploration.
UCB was originally developed to solve the multi-armed ban-
dit (MAB) problem [22], which involves sequential decision
making based on only local information. It was designed for the single-player scenario, and thereby inevitably leads to selection conflicts in the multi-player scenario where multiple MTDs are prone to select the same channel [23].
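For reference, the classical UCB1 index from the MAB literature [22] scores each arm by its empirical mean reward plus a confidence bonus that shrinks with repeated sampling. The following Python sketch is our own illustration of that index, not the SEB-UCB algorithm proposed in this paper; all names are ours.

```python
import math

def ucb_index(mean_reward, n_plays, t):
    """UCB1 index: empirical mean plus an exploration bonus that
    shrinks as an arm (here, a channel) is sampled more often."""
    if n_plays == 0:
        return float("inf")  # force each arm to be tried once
    return mean_reward + math.sqrt(2.0 * math.log(t) / n_plays)

def select_arm(mean_rewards, plays, t):
    """Pick the arm with the largest UCB1 index in slot t."""
    scores = [ucb_index(mean_rewards[j], plays[j], t) for j in range(len(plays))]
    return max(range(len(scores)), key=scores.__getitem__)
```

The bonus term is what balances exploitation (the mean) against exploration (the square-root term), which is the tradeoff referred to above.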
We aim at maximizing the long-term network through-
put subject to long-term constraints of energy budget and
service reliability. The stochastic optimization problem is
converted to a series of short-term deterministic problems
by leveraging Lyapunov optimization [14]. We start from
the simplified single-MTD scenario with perfect GSI, and
propose a Service-reliability-aware, Energy-aware, and data-
Backlog-aware GSI (SEB-GSI) algorithm for channel selec-
tion. Then, we extend SEB-GSI to the nonideal case with
only local information, and develop a UCB-based channel
selection algorithm named SEB-UCB. It enables the MTD
to dynamically balance throughput, energy consumption, and
service reliability via online learning. Next, for the multi-MTD
scenario with GSI, we formulate the optimization problem of
channel selection as a one-to-one matching between MTDs
and channels, and propose a matching-based solution named
SEB-Matching GSI (SEB-MGSI). Afterwards, we focus on
the multi-MTD scenario with only local information, and
develop a matching-learning based context-aware channel se-
lection algorithm named SEB Conflict-aware MUCB (SEBC-
MUCB), in which each MTD makes decisions and learns the
selection conflicts by continuously observing the relationship
between matching preferences and matching results.
The main contributions are summarized as follows:
• Learning-based channel selection: We propose a learning-based channel selection framework by leveraging the combined power of UCB, Lyapunov optimization, and matching theory. It can learn the long-term optimal strategy and achieve guaranteed performance with a bounded deviation, while the long-term constraints of energy budget and service reliability are satisfied in a best-effort way based on only local and causal information.
• Context awareness: The proposed framework achieves service reliability awareness, energy awareness, and backlog awareness by dynamically adjusting the exploitation weights in accordance with the performance of throughput, energy consumption, and service reliability. It also achieves conflict awareness by continuously learning the difference between matching preferences and actual matching results.
• Multiple deployment scenarios and information availability cases: The simplified single-MTD scenario is studied first to provide insight. Then, the more complicated multi-MTD scenario, in which selection conflicts exist, is investigated. For both the single-MTD and multi-MTD scenarios, the ideal case with perfect GSI is studied first as a performance benchmark, and the analysis is then extended to the nonideal case with only local information, where learning is considered.
• Rigorous theoretical analysis and extensive performance evaluation: We analyze the optimality of the proposed framework from the perspectives of network throughput and learning regret, and provide a comprehensive analysis of computational complexity. Extensive simulations are carried out to validate its effectiveness and reliability under various scenarios and parameter settings.
The remaining parts of this paper are organized as follows.
The system model and the problem formulation are intro-
duced in Section II. Section III and Section IV describe the
learning-based context-aware channel selection for the single-
MTD scenario and the multi-MTD scenario, respectively.
A performance analysis from the perspectives of optimality and complexity is given in Section V. Practical implementation considerations and simulation results are provided in Sections VI and VII, respectively. Section VIII concludes this paper.

Fig. 1. System model.
II. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, the system model and problem formulation
are introduced.
A. System Model
As shown in Fig. 1, we consider a single-cell scenario where an edge server is collocated with a BS. The BS provides connection service and the edge server provides computing service for $K$ MTDs within the cell, the set of which is denoted by $\mathcal{M} = \{m_1, \cdots, m_k, \cdots, m_K\}$. There exist $J$ orthogonal subchannels, the set of which is defined as $\mathcal{C} = \{c_1, \cdots, c_j, \cdots, c_J\}$. The bandwidth of subchannel $c_j$ is denoted by $B_j$. A channel selection conflict occurs when more than one MTD selects the same subchannel at the same time, and only one MTD can succeed in accessing the subchannel under the coordination of the BS.
A time-slotted model is adopted where the total optimization period is divided into $T$ slots of equal length $\tau$, the set of which is denoted by $\mathcal{T} = \{1, \cdots, t, \cdots, T\}$. In this model, CSI remains unchanged within a slot and varies across different slots. In each slot, each MTD determines its channel selection strategy individually. Particularly, an MTD faces $J + 1$ options, i.e., either selecting one of the $J$ subchannels or remaining idle. Fig. 1 shows an example of channel selection with 4 MTDs and 2 subchannels: $m_1$ selects subchannel $c_1$ for data transmission while $m_2$ remains idle, and a channel selection conflict occurs between $m_3$ and $m_4$ due to their simultaneous selection of subchannel $c_2$.
In the following, the models of task transmission, energy
consumption, delay, and service reliability are introduced.
1) Task Transmission Model: In the $t$-th slot, $A_k(t)$ new tasks with equal size $\gamma_k$ arrive at $m_k \in \mathcal{M}$; they are first stored in the local buffer and then transmitted to the edge server. Hence, the total arriving task size is $\gamma_k A_k(t)$. Meanwhile, $m_k$ has to retransmit $Y_k(t)$ amount of data that has not been correctly delivered due to bit errors. The task data stored in the local buffer of $m_k$ can be modeled as a queue, i.e., queue $k$. $\gamma_k A_k(t)$ as well as $Y_k(t)$ can be seen as the amount of task data entering the queue, and $U_k(t)$ represents the amount of task data leaving the queue. Define $Q_k(1)$ as the initial amount of data backlog. $Q_k(t)$ is the backlog of data queue $k$ in the $t$-th slot, i.e., an accumulation of data that is yet to be processed. $Q_k(t)$ evolves dynamically as
$$Q_k(t+1) = \max\{Q_k(t) - U_k(t), 0\} + \gamma_k A_k(t) + Y_k(t+1). \quad (1)$$
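As a concrete illustration, here is a minimal Python sketch of the queue update in (1); the function and argument names are ours:

```python
def update_backlog(Q_t, U_t, arrivals_t, Y_next):
    """One step of the data-queue evolution in (1).

    Q_t        -- backlog Q_k(t) at the start of slot t (bits)
    U_t        -- data U_k(t) leaving the queue in slot t (bits)
    arrivals_t -- new arriving task data gamma_k * A_k(t) (bits)
    Y_next     -- data Y_k(t+1) to be retransmitted next slot (bits)
    """
    return max(Q_t - U_t, 0) + arrivals_t + Y_next
```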
The set of channel selection indicators consists of $J + 1$ binary elements, denoted by $\{x_{k,j,t}\}$, where $x_{k,j,t} \in \{0, 1\}$. For $j = 1, 2, \cdots, J$, $x_{k,j,t} = 1$ represents that $m_k$ selects subchannel $c_j$ for data transmission in the $t$-th slot; for $j = J + 1$, $x_{k,j,t} = 1$ represents that $m_k$ remains idle.
Considering the powerful computational capability of the edge server, the objective of each MTD is to offload as many tasks as possible, which is equivalent to maximizing the total amount of task data that can be transmitted, i.e., the throughput. Uplink transmission is considered here. Denote $H_{k,j,t}$ as the uplink channel gain of subchannel $c_j$ between $m_k$ and the BS. Given $x_{k,j,t}$, the achievable uplink transmission rate is given by
$$R_{k,j,t} = \begin{cases} B_j \log_2\left(1 + \dfrac{P^{TX} H_{k,j,t}}{\delta^2}\right), & j = 1, 2, \cdots, J, \\ 0, & j = J+1, \end{cases} \quad (2)$$
where $\delta^2$ is the noise power and $P^{TX}$ is the transmission power. The throughput of $m_k$ in the $t$-th slot is given by
$$z_{k,j,t} = \min\{Q_k(t), \tau R_{k,j,t}\}. \quad (3)$$
The amount of data transmitted to the edge server is
$$U_k(t) = \sum_{j=1}^{J+1} x_{k,j,t} z_{k,j,t}. \quad (4)$$
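A minimal Python sketch of (2) and (3) for a real subchannel ($j = 1, \cdots, J$), assuming linear (non-dB) units for power and gain; the names are ours:

```python
import math

def uplink_rate(B_j, P_tx, H, noise_power):
    """Achievable uplink rate in (2) for a selected real subchannel (bit/s)."""
    return B_j * math.log2(1.0 + P_tx * H / noise_power)

def slot_throughput(Q_t, rate, tau):
    """Throughput in (3): limited by the backlog and by what fits in one slot."""
    return min(Q_t, tau * rate)
```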
Denote the bit error rate (BER) for $m_k$ transmitting data through subchannel $c_j$ in the $t$-th slot as $P^{e}_{k,j,t}$. We consider noncoherent binary phase shift keying (BPSK) modulation, and the corresponding BER [24] can be derived as
$$P^{e}_{k,j,t} = \frac{1}{2}\,\mathrm{erfc}\left(\sqrt{\frac{P^{TX} H_{k,j,t}}{\delta^2}}\right). \quad (5)$$
Here, BPSK is just used as an example to derive the queue evolution model, which can be naturally extended to other modulation schemes such as quadrature amplitude modulation (QAM) and orthogonal frequency division multiplexing (OFDM).
Therefore, $Y_k(t+1)$, the amount of data that has to be retransmitted in the next slot, can be calculated as
$$Y_k(t+1) = U_k(t) P^{e}_{k,j,t}. \quad (6)$$
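Equation (5) maps directly onto the complementary error function in Python's standard library; a short sketch (names ours):

```python
import math

def bpsk_ber(P_tx, H, noise_power):
    """BER in (5) for BPSK on the selected subchannel."""
    return 0.5 * math.erfc(math.sqrt(P_tx * H / noise_power))

def retransmit_amount(U_t, ber):
    """Data that must be retransmitted next slot, per (6)."""
    return U_t * ber
```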
2) Energy Consumption Model: In the $t$-th slot, the energy consumption of $m_k$ for data transmission is the transmission power multiplied by the transmission delay, i.e.,
$$E_{k,j,t} = \begin{cases} P^{TX} \min\left\{\dfrac{Q_k(t)}{R_{k,j,t}}, \tau\right\}, & j = 1, 2, \cdots, J, \\ 0, & j = J+1. \end{cases} \quad (7)$$
The limited battery capacity exerts a direct impact on the total energy budget of $m_k$ over the $T$ slots, which is denoted by $E_{k,\max}$. Therefore, the long-term energy consumption of $m_k$ must satisfy
$$E_k = \sum_{t=1}^{T} \sum_{j=1}^{J+1} x_{k,j,t} E_{k,j,t} \leq E_{k,\max}. \quad (8)$$
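For completeness, the per-slot transmission energy in (7) as a short Python sketch (names ours):

```python
def tx_energy(P_tx, Q_t, rate, tau):
    """Per-slot transmission energy in (7): transmission power times
    transmission time, which is capped at the slot length tau."""
    return P_tx * min(Q_t / rate, tau)
```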
3) Delay Model: In IIoT, the data size of the computational results is generally smaller than that of the computational tasks. Therefore, for the sake of simplicity, we can neglect the downlink transmission delay. Some previous works, e.g., [25]–[27], also ignore the downlink transmission time. On the other hand, our work can be easily extended to the scenario where the downlink transmission time is considered. Therefore, the total offloading delay is the sum of the transmission delay and the computational delay, which is given by
$$d^{total}_{k,j,t} = d^{tra}_{k,j,t} + d^{com}_{k,j,t}. \quad (9)$$
Given $x_{k,j,t}$ and $z_{k,j,t}$, the transmission delay is calculated by dividing the throughput $z_{k,j,t}$ by the transmission rate $R_{k,j,t}$, i.e.,
$$d^{tra}_{k,j,t} = \begin{cases} \dfrac{z_{k,j,t}}{R_{k,j,t}} = \min\left\{\dfrac{Q_k(t)}{R_{k,j,t}}, \tau\right\}, & j = 1, 2, \cdots, J, \\ +\infty, & j = J+1. \end{cases} \quad (10)$$
Based on the computational intensity model in [28], assuming that the computational intensity of the task data transmitted by $m_k$ in the $t$-th slot is $\lambda_{k,t}$ (CPU cycles/bit), it requires $z_{k,j,t} \lambda_{k,t}$ CPU cycles to process the task data. It is noted that although a linear relationship between workload and data size is employed, our work is compatible with other nonlinear models and can be used for different kinds of IIoT applications with different computing intensities. Denoting the available computational resources for $m_k$ in the $t$-th slot as $\xi_{k,t}$, the computational delay is calculated as
$$d^{com}_{k,j,t} = \begin{cases} \dfrac{z_{k,j,t} \lambda_{k,t}}{\xi_{k,t}}, & j = 1, 2, \cdots, J, \\ +\infty, & j = J+1. \end{cases} \quad (11)$$
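A small Python sketch combining (9)-(11) for a selected subchannel; the names are ours, and we assume $\xi_{k,t}$ is expressed in CPU cycles per second so that the quotient is a time:

```python
def offloading_delay(z, rate, lam, xi):
    """Total offloading delay in (9): transmission delay (10) plus
    computational delay (11), for a selected subchannel j <= J.

    z    -- throughput z_{k,j,t} (bits)
    rate -- transmission rate R_{k,j,t} (bit/s)
    lam  -- computational intensity lambda_{k,t} (CPU cycles/bit)
    xi   -- available computational resource xi_{k,t} (CPU cycles/s, assumed)
    """
    d_tra = z / rate
    d_com = z * lam / xi
    return d_tra + d_com
```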
4) Service Reliability Requirement Model: We model the service reliability requirement in terms of delay. Denoting the task delay requirement as $d_{k,t}$, the task offloading is unsuccessful if the offloaded task cannot be processed within the specified delay requirement, i.e., $d^{total}_{k,j,t} > d_{k,t}$. Denote $X_{k,T}$ as the number of successful task offloadings for $m_k$ over the $T$ slots, which is given by
$$X_{k,T} = \sum_{t=1}^{T} \sum_{j=1}^{J+1} \mathbb{I}\{d^{total}_{k,j,t} \leq d_{k,t}\}\, x_{k,j,t}. \quad (12)$$
$\mathbb{I}\{x\}$ is an indicator function with $\mathbb{I}\{x\} = 1$ if event $x$ is true and $\mathbb{I}\{x\} = 0$ otherwise. The edge server performs computational resource optimization at the end of each slot and feeds back whether the delay requirement of $m_k$ can be satisfied or not.
The service reliability requirement is defined as
$$\frac{X_{k,T}}{T} \geq \eta_k, \quad (13)$$
where $\eta_k \in (0, 1]$ represents the minimum success probability of task offloading.
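A brief Python sketch of the bookkeeping behind (12) and (13) (names ours):

```python
def count_successes(total_delays, deadlines):
    """Count successful offloadings per (12) over the slots actually used."""
    return sum(1 for d, req in zip(total_delays, deadlines) if d <= req)

def reliability_satisfied(success_count, T, eta):
    """Check the long-term service reliability constraint (13)."""
    return success_count / T >= eta
```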
B. Problem Formulation
The objective is to maximize the long-term network throughput under the long-term constraints of energy budget and service reliability. Therefore, the network throughput maximization problem is formulated as
$$\begin{aligned}
\mathbf{P1}: \max_{\{x_{k,j,t}\}} \ & \sum_{t=1}^{T} \sum_{k=1}^{K} \sum_{j=1}^{J+1} x_{k,j,t} z_{k,j,t}, \\
\text{s.t.} \ C_1: \ & \sum_{k=1}^{K} x_{k,j,t} \leq 1, \ j = 1, 2, \cdots, J, \ \forall t \in \mathcal{T}, \\
C_2: \ & \sum_{j=1}^{J+1} x_{k,j,t} = 1, \ \forall m_k \in \mathcal{M}, \ \forall t \in \mathcal{T}, \\
C_3: \ & \sum_{t=1}^{T} \sum_{j=1}^{J+1} x_{k,j,t} E_{k,j,t} \leq E_{k,\max}, \ \forall m_k \in \mathcal{M}, \\
C_4: \ & \frac{X_{k,T}}{T} \geq \eta_k, \ \forall m_k \in \mathcal{M}, \quad (14)
\end{aligned}$$
where $C_1$ and $C_2$ are the channel selection constraints, i.e., in each slot, each subchannel can be selected by at most one MTD, and each MTD either selects exactly one subchannel or remains idle. $C_3$ and $C_4$ correspond to the constraints of energy consumption and service reliability, respectively. Here, we focus on optimizing the channel selection strategy, while the optimization of computational resource allocation is left for future work. The reason is that the proposed algorithm is naturally compatible with any computational resource allocation scheme. Similarly, some previous works also consider only the channel selection problem [28]–[30]. On the other hand, the joint optimization of channel selection and computational resource allocation is a completely different problem, which requires different system modeling, problem formulation, and optimization design. Utilizing learning algorithms to solve the joint optimization of integer channel selection and continuous computational resource allocation is also a worthwhile research direction, which will be investigated in future work.
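To make the combinatorial structure of P1 concrete, the following Python sketch exhaustively searches one slot's conflict-free assignment under $C_1$ and $C_2$, ignoring the long-term constraints $C_3$ and $C_4$. It is an illustration of the per-slot search space, not the proposed algorithm, and it scales as $(J+1)^K$:

```python
from itertools import product

def best_slot_assignment(z):
    """Exhaustive one-slot channel assignment under C1-C2 (illustration only).

    z[k][j] -- throughput z_{k,j} of MTD k under option j, where the last
               option (j = J) stands for 'idle' with zero throughput.
    Returns the conflict-free assignment maximizing the slot throughput.
    """
    K, n_opts = len(z), len(z[0])
    idle = n_opts - 1
    best, best_val = None, -1.0
    for assign in product(range(n_opts), repeat=K):
        used = [j for j in assign if j != idle]
        if len(used) != len(set(used)):
            continue  # C1 violated: a subchannel selected by two MTDs
        val = sum(z[k][assign[k]] for k in range(K))
        if val > best_val:
            best, best_val = assign, val
    return best, best_val
```

The exponential cost of this brute-force search is precisely why the paper resorts to matching theory and learning instead.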
III. LEARNING-BASED CONTEXT-AWARE CHANNEL
SELECTION FOR THE SINGLE-MTD SCENARIO
In this section, we consider the single-MTD scenario with only one MTD, e.g., $m_k$, and propose a learning-based context-aware channel selection algorithm.
A. Problem Transformation
Problem P1 cannot be directly solved due to the long-term optimization objective and constraints. To provide a tractable solution, we leverage Lyapunov optimization to transform the coupled long-term stochastic optimization problem into a series of short-term deterministic problems [31], [32], which can be solved with low complexity while the data backlog, energy consumption, and service reliability are balanced over time.
Based on the concept of virtual queues [33], the long-term energy budget and service reliability constraints, i.e., $C_3$ and $C_4$, can be transformed into queue stability constraints. We
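Although the virtual-queue definitions themselves are truncated in this excerpt, the standard Lyapunov construction tracks the running deficit of each long-term constraint. The following Python sketch is our own generic formulation under that assumption, not necessarily the paper's exact definition:

```python
def update_virtual_queues(E_q, S_q, energy_used, energy_budget_per_slot,
                          success, eta):
    """Generic virtual-queue updates for constraints C3 and C4.

    E_q grows when per-slot energy use exceeds the average budget
    (e.g., E_{k,max} / T); S_q grows when the offloading success
    indicator falls short of eta. Stabilizing both queues drives the
    long-term constraints toward satisfaction. (Our own sketch; the
    paper's exact definitions are not shown in this excerpt.)
    """
    E_next = max(E_q + energy_used - energy_budget_per_slot, 0.0)
    S_next = max(S_q + eta - success, 0.0)
    return E_next, S_next
```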
