Journal Article - DOI: 10.1089/BIG.2020.0284

Advanced Deep Learning for Resource Allocation and Security Aware Data Offloading in Industrial Mobile Edge Computing.

02 Mar 2021 - Vol. 9, Iss. 4, pp. 265-278
Abstract: The internet of things (IoT) is permeating our daily lives through continuous environmental monitoring and data collection. The promise of low latency communication, enhanced security, and efficien...


Topics: Computation offloading (62%), Resource allocation (57%), Mobile edge computing (57%)

9 results found

Open access - Journal Article - DOI: 10.3390/FI13050118
30 Apr 2021 - Future Internet
Abstract: Intelligence Edge Computing (IEC) is a key enabler of emerging 5G networks and beyond, and is considered a promising backbone of future services and wireless communication systems in 5G integration. IEC enables various use cases and applications, including autonomous vehicles, augmented and virtual reality, big data analytics, and other customer-oriented services. Moreover, it is among the 5G technologies that have most strengthened market drivers in fields such as customer service, healthcare, education, IoT in agriculture, and energy sustainability. However, 5G faces many challenges, including traffic volume, privacy, security, digitization capabilities, and required latency; 6G is therefore considered a promising technology for the future. To this end, and in contrast to other surveys, this paper provides a comprehensive and inclusive overview of IEC technologies in 6G, focusing on up-to-date characteristics, challenges, potential use cases, and market drivers. Furthermore, we summarize research efforts on IEC in 5G from 2014 to 2021, in which the integration of IEC and 5G technologies is highlighted. Finally, open research challenges and new future directions for IEC in 6G networks are discussed.


Topics: Edge computing (60%), Big data (52%), Mobile cloud computing (51%)

1 Citation

Open access - Journal Article - DOI: 10.3390/EN14196384
06 Oct 2021 - Energies
Abstract: The role of Internet of Things (IoT) networks and systems in our daily life cannot be overstated. IoT is among the fastest-evolving innovative technologies digitizing and interconnecting many domains, and most life-critical and finance-critical systems are now IoT-based. It is therefore paramount that the Quality of Service (QoS) of IoT systems is guaranteed. Traditionally, IoT systems use heuristic, game-theoretic, and optimization techniques for QoS guarantees. However, these approaches struggle as the number of users and devices increases or when multicellular scenarios are considered. Moreover, IoT systems receive and generate huge amounts of data that traditional QoS methods cannot handle effectively, especially when it comes to extracting useful features from the data. Deep Learning (DL) approaches have been suggested as a potential candidate for handling these challenges in order to enhance and guarantee QoS in IoT. In this paper, we provide an extensive review of how DL techniques have been applied to enhance QoS in IoT. From the papers reviewed, we note that QoS in IoT-based systems is breached when the security and privacy of the systems are compromised or when IoT resources are not properly managed. This paper therefore aims to establish how DL has been applied to enhance QoS in IoT by preventing security and privacy breaches and by ensuring the proper and efficient allocation and management of IoT resources. We identify the DL models and technologies described in state-of-the-art research and review papers, and single out those most used in handling IoT QoS issues. We provide a detailed explanation of QoS in IoT, an overview of commonly used DL-based algorithms for enhancing QoS, and a comprehensive discussion of how various DL techniques have been applied to QoS enhancement. We conclude by highlighting emerging areas of research around DL and its applicability to IoT QoS enhancement, future trends, and the associated challenges.


Topics: Quality of service (51%)

Open access - Journal Article - DOI: 10.1007/S10586-021-03401-5
02 Nov 2021 - Cluster Computing
Abstract: Owing to advances in high-speed networks, mobile edge computing (MEC) has received significant attention for bringing processing and storage resources into clients' proximity. MEC is a form of edge or in-network computing in which resources are moved closer to the user end (edge) of the network, increasing QoE. On the other hand, the growing use of Internet of Things (IoT) devices gives rise to cybersecurity issues. In recent times, machine learning (ML) and deep learning (DL) techniques have paved the way for detecting traffic conditions, data offloading opportunities, and cyberattacks in MEC. With this motivation, this study designs an effective deep learning based data offloading and cyberattack detection (DL-DOCAD) technique for MEC. The goal of DL-DOCAD is to enhance QoE in MEC systems. The proposed technique comprises traffic prediction, data offloading, and attack detection. DL-DOCAD applies a gated recurrent unit (GRU) based predictive model for traffic detection. In addition, an adaptive sampling cross entropy (ASCE) approach is employed to maximize throughput and make offloading decisions for users. Moreover, a bird swarm algorithm based feed-forward neural network (BSA-FFNN) model is used as the cyberattack detector in MEC; using BSA to tune the FFNN parameters boosts classification performance. A comprehensive set of simulations is performed, and the resulting experimental values highlight the improved performance of DL-DOCAD, with a maximum detection accuracy of 0.992.
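The GRU forward pass at the core of DL-DOCAD's traffic predictor can be sketched in plain Python. The cell equations are the standard GRU update; the weights, dimensions, and traffic samples below are illustrative toys, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def gru_step(x, h, p):
    """Standard GRU cell: update gate z, reset gate r, candidate state."""
    xh = x + h                                   # concatenated [x; h]
    z = [sigmoid(a) for a in matvec(p["Wz"], xh)]
    r = [sigmoid(a) for a in matvec(p["Wr"], xh)]
    xrh = x + [ri * hi for ri, hi in zip(r, h)]  # concatenated [x; r * h]
    h_new = [math.tanh(a) for a in matvec(p["Wh"], xrh)]
    return [(1 - zi) * hi + zi * hni for zi, hi, hni in zip(z, h, h_new)]

# Toy parameters: 1-dim input (normalized traffic load), 2-dim hidden state.
params = {
    "Wz": [[0.5, 0.1, -0.2], [0.3, -0.1, 0.4]],
    "Wr": [[0.2, 0.4, 0.1], [-0.3, 0.2, 0.5]],
    "Wh": [[0.6, -0.2, 0.3], [0.1, 0.5, -0.4]],
}

h = [0.0, 0.0]
for load in [0.2, 0.5, 0.9, 0.7]:  # normalized traffic samples
    h = gru_step([load], h, params)
# h now summarizes recent traffic; a linear readout would turn it into a forecast.
```

In the paper's pipeline, the predicted traffic would then feed the ASCE offloading decision and the BSA-FFNN detector; those stages are not sketched here.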


Open access - Journal Article - DOI: 10.1155/2021/9911332
Runfu Liang, Gaocai Wang, Jintian Hu
Abstract: As computation-intensive mobile applications become increasingly diverse, mobile devices' computing power struggles to keep up with demand. Mobile devices can migrate tasks to a Mobile Edge Computing (MEC) platform and improve task-processing performance through reasonable allocation and caching of resources on the platform. Small cellular networks (SCN) have excellent short-distance communication capabilities, and the combination of MEC and SCN is a promising research direction. This paper focuses on minimizing the energy consumption of task migration in small cellular networks and proposes a task migration energy optimization strategy with resource caching, combining optimal stopping theory with migration decision-making. First, the process of a device finding an MEC platform with the required task-processing resources is formulated as an optimal stopping problem. Second, we prove the existence of an optimal stopping rule, obtain the optimal processing energy consumption threshold, and compare it with the device's energy consumption. Finally, the platform with the best energy consumption is selected to process the task. In simulation experiments, the optimization strategy yields lower average migration energy consumption and higher average data-execution and distance-execution energy efficiency, improving task migration performance by 10-60%.


Topics: Energy consumption (63%), Mobile edge computing (60%), Efficient energy use (55%)
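A threshold-based stopping rule of the kind described above can be sketched in a few lines. The paper derives the optimal energy threshold analytically; here it is just a free parameter, and the probed platform energies are made-up numbers.

```python
def migrate_with_threshold(platform_energies, threshold, local_energy):
    """Probe MEC platforms one by one and stop at the first whose estimated
    processing energy falls below the threshold (the stopping rule); fall
    back to local execution if no platform qualifies or local is cheaper."""
    for i, e in enumerate(platform_energies):
        if e <= threshold:
            # Acceptable platform found; offload only if it beats local cost.
            if e < local_energy:
                return ("offload", i, e)
            return ("local", None, local_energy)
    return ("local", None, local_energy)

# Probing three platforms (energies in joules) with a 2.0 J threshold
# and a 4.0 J local execution cost:
decision = migrate_with_threshold([5.0, 3.0, 1.0], threshold=2.0, local_energy=4.0)
```

The optimal-stopping machinery lies entirely in choosing `threshold`; once it is fixed, the online decision is this simple scan.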


35 results found

Open access - Book
Richard S. Sutton, Andrew G. Barto
01 Jan 1988
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.


Topics: Learning classifier system (69%), Reinforcement learning (69%), Apprenticeship learning (65%)

32,257 Citations
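The temporal-difference methods of Part II can be illustrated with a minimal tabular Q-learning loop on a toy chain MDP. Everything here (the environment, constants, and hyperparameters) is a generic illustration, not an example from the book.

```python
import random

random.seed(0)

# Toy chain MDP: states 0..4, actions 0 = left, 1 = right; reaching state 4
# ends the episode with reward +1; all other transitions pay 0.
GOAL = 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

Q = [[0.0, 0.0] for _ in range(GOAL + 1)]
alpha, gamma = 0.5, 0.9
for _ in range(300):
    s, done = 0, False
    while not done:
        a = random.randrange(2)          # random behavior: Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])   # temporal-difference update
        s = s2
# The greedy policy read off Q now moves right in every non-goal state.
```

Because Q-learning bootstraps toward `max(Q[s2])` regardless of the action the behavior policy took, a purely random exploration policy still converges to the optimal action values.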

Open access - Posted Content
Abstract: TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November 2015 and are available at


Topics: Interface (computing) (57%), Deep learning (55%), Information extraction (54%)

9,253 Citations
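The core idea the abstract describes, expressing a computation as a graph first and executing it later, can be shown with a toy dataflow evaluator. This is deliberately not the TensorFlow API; the `Node`/`run` names are invented for illustration.

```python
class Node:
    """One vertex of a dataflow graph: an op plus its input nodes."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v): return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def run(node):
    """Evaluate the graph by post-order traversal: a stand-in for handing
    the graph to an execution engine, as TensorFlow's runtime does."""
    if node.op == "const":
        return node.value
    vals = [run(n) for n in node.inputs]
    return vals[0] + vals[1] if node.op == "add" else vals[0] * vals[1]

# y = (2 + 3) * 4, built first as a graph, then executed on demand.
y = mul(add(const(2), const(3)), const(4))
```

Separating graph construction from execution is what lets a real system place different subgraphs on different devices or machines before running anything.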

Open access - Book
14 Feb 2002
Abstract: 1. The Advanced Encryption Standard Process.- 2. Preliminaries.- 3. Specification of Rijndael.- 4. Implementation Aspects.- 5. Design Philosophy.- 6. The Data Encryption Standard.- 7. Correlation Matrices.- 8. Difference Propagation.- 9. The Wide Trail Strategy.- 10. Cryptanalysis.- 11. Related Block Ciphers.- Appendices.- A. Propagation Analysis in Galois Fields.- A.1.1 Difference Propagation.- A.1.2 Correlation.- A.1.4 Functions that are Linear over GF(2).- A.2.1 Difference Propagation.- A.2.2 Correlation.- A.2.4 Functions that are Linear over GF(2).- A.3.3 Dual Bases.- A.4.2 Relationship Between Trace Patterns and Selection Patterns.- A.4.4 Illustration.- A.5 Rijndael-GF.- B. Trail Clustering.- B.1 Transformations with Maximum Branch Number.- B.2 Bounds for Two Rounds.- B.2.1 Difference Propagation.- B.2.2 Correlation.- B.3 Bounds for Four Rounds.- B.4 Two Case Studies.- B.4.1 Differential Trails.- B.4.2 Linear Trails.- C. Substitution Tables.- C.1 SRD.- C.2 Other Tables.- C.2.1 xtime.- C.2.2 Round Constants.- D. Test Vectors.- D.1 KeyExpansion.- D.2 Rijndael(128,128).- D.3 Other Block Lengths and Key Lengths.- E. Reference Code.


3,288 Citations
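The `xtime` routine listed in appendix C.2.1 is small enough to reproduce from its public definition: multiplication by x in GF(2^8) modulo the Rijndael polynomial, plus a general multiply built on top of it. The test values are the standard FIPS-197 worked examples.

```python
def xtime(b):
    """Multiply a GF(2^8) element by x (i.e., by 0x02) modulo the Rijndael
    polynomial x^8 + x^4 + x^3 + x + 1: shift left, then reduce if the
    result overflowed byte width."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def gmul(a, b):
    """General GF(2^8) multiplication by repeated xtime (Russian-peasant
    style): add a shifted copy of a for each set bit of b."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = xtime(a)
        b >>= 1
    return r & 0xFF
```

This is the field arithmetic underlying MixColumns; the book's reference code in appendix E implements the same operations in C.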

Open access - Journal Article - DOI: 10.1109/COMST.2017.2682318
Pavel Mach, Zdenek Becvar
Abstract: The technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand in hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by their limited battery capacity and energy consumption. A suitable solution for extending UE battery lifetime is to offload applications demanding heavy processing to a conventional centralized cloud. Nevertheless, this option introduces significant execution delay, consisting of the delivery of the offloaded applications to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe the major use cases and reference scenarios where MEC is applicable. We then survey existing concepts for integrating MEC functionality into mobile networks and discuss the current state of MEC standardization. The core of this survey focuses on the user-oriented use case in MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: 1) the decision on computation offloading; 2) the allocation of computing resources within the MEC; and 3) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully realize the potential offered by MEC.


Topics: Mobile edge computing (68%), Computation offloading (66%), Edge computing (58%)

1,417 Citations
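The offloading decision the survey centers on can be sketched with a generic textbook energy/delay model: dynamic CPU energy kappa * C * f^2 for local execution versus radio energy for uploading the task. The model form, parameter names, and numbers are illustrative assumptions, not Mach and Becvar's specific formulation.

```python
def offload_decision(cycles, data_bits, f_local, f_edge, rate, p_tx,
                     deadline, kappa=1e-27):
    """Pick the lower-energy option that still meets the deadline.
    cycles: CPU cycles of the task; data_bits: input size to upload;
    f_local/f_edge: CPU frequencies (Hz); rate: uplink rate (bit/s);
    p_tx: transmit power (W); kappa: effective switched capacitance."""
    t_local = cycles / f_local
    e_local = kappa * cycles * f_local ** 2        # dynamic CPU energy
    t_off = data_bits / rate + cycles / f_edge     # upload + edge compute time
    e_off = p_tx * data_bits / rate                # device pays only for the radio
    options = [(e, name) for e, t, name in
               [(e_local, t_local, "local"), (e_off, t_off, "offload")]
               if t <= deadline]
    return min(options)[1] if options else "infeasible"

# A 10^9-cycle task with 1 Mb of input, 1 GHz device CPU, 10 GHz edge CPU,
# 10 Mb/s uplink, 0.5 W transmit power, and a 1 s deadline:
choice = offload_decision(1e9, 1e6, 1e9, 1e10, 1e7, 0.5, deadline=1.0)
```

With these numbers offloading costs 0.05 J against 1 J locally, so the radio is the cheaper path; shrinking the uplink rate flips the decision back to local, which is exactly the trade-off the survey's first research area studies.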

Open access - Journal Article - DOI: 10.1109/TNET.2015.2487344
Xu Chen, Lei Jiao, Wenzhong Li, Xiaoming Fu
Abstract: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.


Topics: Computation offloading (81%), Mobile edge computing (59%), Cloud computing (56%)

1,370 Citations
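The finite improvement property the paper proves means that iterated one-user best responses reach a Nash equilibrium. A minimal sketch, using a simple congestion cost as a stand-in for the paper's wireless-interference model (all cost values below are invented):

```python
def best_response_dynamics(local_cost, offload_base, congestion):
    """Each user i chooses local execution (fixed cost local_cost[i]) or
    offloading (cost offload_base[i] + congestion * number_of_offloaders,
    a toy proxy for channel interference). One-user best responses are
    iterated until no user can improve, i.e., a Nash equilibrium; this
    terminates because the game admits an exact potential."""
    n = len(local_cost)
    offload = [False] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            k = sum(offload) - offload[i]      # offloaders other than i
            best = offload_base[i] + congestion * (k + 1) < local_cost[i]
            if best != offload[i]:             # i switches to its best response
                offload[i] = best
                changed = True
    return offload

# Three users: the first two offload cheaply, the third is priced out
# once the channel is congested by the other two.
eq = best_response_dynamics([5.0, 5.0, 5.0], [1.0, 1.0, 4.0], congestion=1.0)
```

The third user staying local at equilibrium mirrors the paper's point that centralized optimality is hard, yet the decentralized dynamics still settle into a mutually stable offloading pattern.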