Journal ArticleDOI

Resource Caching and Task Migration Strategy of Small Cellular Networks under Mobile Edge Computing

21 Jun 2021 - Mobile Information Systems (Hindawi Limited) - Vol. 2021, pp. 1-16
TL;DR: In this paper, the authors proposed a task migration energy optimization strategy with resource caching by combining optimal stopping theory with migration decision-making, and proved an optimal stopping rule's existence, obtained the optimal processing energy consumption threshold, and compared it with device energy consumption.
Abstract: As computing-intensive mobile applications become increasingly diversified, the computing power of mobile devices struggles to keep up with demand. Mobile devices migrate tasks to the Mobile Edge Computing (MEC) platform and improve task processing performance through reasonable allocation and caching of resources on the platform. Small cellular networks (SCN) have excellent short-distance communication capabilities, and the combination of MEC and SCN is a promising research direction. This paper focuses on minimizing the energy consumption of task migration in small cellular networks and proposes a task migration energy optimization strategy with resource caching by combining optimal stopping theory with migration decision-making. Firstly, the process of a device finding an MEC platform with the required task processing resources is formulated as an optimal stopping problem. Secondly, we prove the existence of an optimal stopping rule, obtain the optimal processing energy consumption threshold, and compare it with the device energy consumption. Finally, the platform with the best energy consumption is selected to process the task. In the simulation experiments, the optimization strategy achieves lower average migration energy consumption as well as higher average data-execution and distance-execution energy efficiency, improving task migration performance by 10%∼60%.
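A minimal Python sketch of the threshold-based stopping decision described above. The linear energy model, the fixed stopping threshold, and the extra upload cost for uncached resources are illustrative assumptions; the paper derives the optimal threshold from optimal stopping theory rather than taking it as a given parameter.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Platform:
    name: str
    tx_power_w: float          # device uplink transmit power towards this cell (W)
    uplink_rate_bps: float     # achievable uplink rate to this small cell (bit/s)
    has_cached_resource: bool  # is the task's service program already cached?

def migration_energy(p: Platform, task_bits: float) -> float:
    """Device-side energy (J) to migrate the task to platform p.
    If the required resource is not cached, assume the program (extra_bits,
    hypothetical size) must be uploaded as well."""
    extra_bits = 0.0 if p.has_cached_resource else 2e6
    return p.tx_power_w * (task_bits + extra_bits) / p.uplink_rate_bps

def choose_platform(platforms: Iterable[Platform], task_bits: float,
                    local_energy: float, threshold: float) -> Optional[Platform]:
    """Probe candidate platforms one by one and stop at the first whose
    migration energy falls below the stopping threshold and beats local
    execution; otherwise process the task locally (return None)."""
    for p in platforms:
        e = migration_energy(p, task_bits)
        if e <= threshold and e < local_energy:
            return p        # stop here and migrate
    return None             # no acceptable platform found: execute locally
```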


Citations
Journal ArticleDOI
TL;DR: In this article, the authors proposed an energy-efficient dynamic task migration algorithm (EDTM) that minimizes the total energy consumption of the system while keeping the UAV system load balanced.
Abstract: With the rapid development of unmanned aerial vehicles (UAVs) technology and the advent of the 5G era, the role of UAV-enabled mobile edge computing (MEC) system has attracted much attention, especially in the event of some emergencies. However, considering the limited battery life and computing capabilities of UAVs, it is challenging to provide energy-efficient services for mobile devices. To solve this challenge, we propose an energy-efficient dynamic task migration algorithm (EDTM) that minimizes the total energy consumption of the system while ensuring UAVs system load balance. Based on the improved ant colony algorithm and path elimination strategy, the proposed algorithm comprehensively considers task migration distance between UAVs, the load situation of UAVs, and environmental factors (e.g., wind speed and air density) and finally plans a reasonable task migration path. The simulation results show that the performance of the proposed EDTM is superior to the benchmark schemes.
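A compact Python sketch of the ant-colony-style next-hop selection such a migration-path planner relies on. The cost terms (inter-UAV distance, UAV load, a wind/air-density penalty) and the alpha/beta weights are illustrative assumptions, not the exact EDTM formulation.

```python
import random

def transition_probabilities(current, candidates, pheromone, dist, load,
                             wind_penalty, alpha=1.0, beta=2.0):
    """Score each candidate UAV by pheromone^alpha * heuristic^beta, where the
    heuristic favours short migration distance, light load, and a small
    wind/air-density energy penalty (all illustrative)."""
    scores = []
    for j in candidates:
        heuristic = 1.0 / (dist[current][j] * (1.0 + load[j]) * wind_penalty[j])
        scores.append((pheromone[current][j] ** alpha) * (heuristic ** beta))
    total = sum(scores)
    return [s / total for s in scores]

def pick_next_uav(current, candidates, pheromone, dist, load, wind_penalty):
    """Randomly pick the next hop of the migration path, weighted by the
    transition probabilities (one step of an ant's tour)."""
    probs = transition_probabilities(current, candidates, pheromone,
                                     dist, load, wind_penalty)
    return random.choices(candidates, weights=probs, k=1)[0]
```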

5 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed providing ubiquitous connectivity to users at the cell edge and offloading the macro cells, so as to support features the macro cell itself cannot cope with, such as extreme changes in the required user data rate and energy efficiency.
Abstract: Nowadays, data caching is being used as a high-speed data storage layer in mobile edge computing networks employing flow control methodologies at an exponential rate. This study shows how to discover the best architecture for backhaul networks with caching capability using a distributed offloading technique. This article used a continuous power flow analysis to achieve the optimum load constraints, wherein the power of macro base stations with various caching capacities is supplied by either an intelligent grid network or renewable energy systems. This work proposes ubiquitous connectivity between users at the cell edge and offloading the macro cells, so as to provide features the macro cell itself cannot cope with, such as extreme changes in the required user data rate and energy efficiency. The offloading framework is then reformed into a neural weighted framework that considers the convergence and Lyapunov stability requirements of mobile edge computing under Karush-Kuhn-Tucker (KKT) optimization constraints in order to obtain accurate solutions. The cell-layer performance is analyzed at the boundary and at the center point of the cells. The analytical and simulation results show that the suggested method outperforms other energy-saving techniques. Also, compared to other solutions studied in the literature, the proposed approach shows a two- to three-fold increase in both the throughput of the cell-edge users and the aggregate throughput per cluster.
References
Journal ArticleDOI
TL;DR: This paper proposes a novel two-tier computation offloading framework in heterogeneous networks, and formulates a joint computation offloading and user association problem for a multi-task mobile edge computing system to minimize overall energy consumption.
Abstract: Computation-intensive and delay-sensitive applications impose severe requirements on mobile devices in terms of providing the required computation capacity and ensuring latency. Mobile edge computing (MEC) is a promising technology that can alleviate the computation limitations of mobile users and prolong their lifetime through computation offloading. However, computation offloading in an MEC environment faces severe issues due to the dense deployment of MEC servers. Moreover, a mobile user has multiple mutually dependent tasks, which makes offloading policy design even more challenging. To address the above-mentioned problems, in this paper we first propose a novel two-tier computation offloading framework in heterogeneous networks. Then, we formulate the joint computation offloading and user association problem for a multi-task mobile edge computing system to minimize overall energy consumption. To solve the optimization problem, we develop an efficient computation offloading algorithm by jointly optimizing user association and computation offloading, where computation resource allocation and transmission power allocation are also considered. Numerical results illustrate the fast convergence of the proposed algorithm and demonstrate its superior performance compared to state-of-the-art solutions.
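A greedy Python sketch of the joint user-association/offloading idea: each user is assigned to local execution or to the base station that minimizes its device-side energy. The dynamic-power local-energy model and the transmission-energy model are common simplifying assumptions, not the paper's exact two-tier formulation, which optimizes all users jointly.

```python
def local_energy(task_cycles, kappa=1e-27, cpu_freq=1e9):
    """Common dynamic-power assumption: energy = kappa * f^2 per CPU cycle."""
    return kappa * cpu_freq ** 2 * task_cycles

def offload_energy(task_bits, tx_power, rate):
    """Uplink transmission energy (J) to a given base station."""
    return tx_power * task_bits / rate

def associate(users, stations):
    """users: list of dicts with 'bits', 'cycles', 'tx_power'.
    stations: list of dicts with 'name' and 'rates' (per-user uplink rates).
    Returns, per user, the lowest-energy choice among local execution and
    offloading to any station (a greedy relaxation of the joint problem)."""
    plan = {}
    for i, u in enumerate(users):
        best_choice, best_energy = 'local', local_energy(u['cycles'])
        for s in stations:
            e = offload_energy(u['bits'], u['tx_power'], s['rates'][i])
            if e < best_energy:
                best_choice, best_energy = s['name'], e
        plan[i] = (best_choice, best_energy)
    return plan
```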

260 citations

Journal ArticleDOI
TL;DR: Simulation results show that the distributed JCORAO scheme can effectively decrease the energy consumption and task completion time with lower complexity.
Abstract: In this paper, we propose a distributed joint computation offloading and resource allocation optimization (JCORAO) scheme in heterogeneous networks with mobile edge computing. An optimization problem is formulated to provide the optimal computation offloading strategy, uplink subchannel allocation, uplink transmission power allocation, and computation resource scheduling. The optimization problem is decomposed into two sub-problems due to its NP-hardness. To analyze the offloading strategy, a sub-algorithm based on a distributed potential game is built, and the existence of a Nash equilibrium is proved. To jointly allocate uplink subchannels, uplink transmission power, and computation resources for the offloading mobile terminals, a sub-algorithm named cloud and wireless resource allocation is designed. The subchannel allocation solutions consist of a uniform zero frequency reuse method without interference and a fractional frequency reuse method, based on the Hungarian algorithm and graph coloring, with interference. The distributed JCORAO scheme solves the optimization problem through the mutual iteration of the two sub-algorithms. Simulation results show that the distributed JCORAO scheme can effectively decrease the energy consumption and task completion time with lower complexity.
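A minimal Python sketch of best-response dynamics for an offloading potential game of this kind: each terminal repeatedly switches to its lowest-cost decision given the others' choices until no terminal can improve, which is a Nash equilibrium. The congestion-style cost function and its weight are illustrative assumptions, not the JCORAO cost model.

```python
def cost(i, decision, decisions, local_cost, offload_cost, congestion_weight=0.1):
    """Cost of terminal i: local execution cost, or an offloading cost that
    grows with the number of other offloading terminals (congestion-style
    interference, an illustrative assumption)."""
    if decision == 0:                         # local execution
        return local_cost[i]
    others = sum(1 for j, d in enumerate(decisions) if d == 1 and j != i)
    return offload_cost[i] * (1.0 + congestion_weight * others)

def best_response_dynamics(local_cost, offload_cost, max_rounds=100):
    """Iterate best responses until no terminal changes its decision."""
    n = len(local_cost)
    decisions = [0] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            best = min((0, 1),
                       key=lambda d: cost(i, d, decisions, local_cost, offload_cost))
            if best != decisions[i]:
                decisions[i], changed = best, True
        if not changed:                       # no terminal can improve: equilibrium
            break
    return decisions
```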

189 citations

Journal ArticleDOI
TL;DR: A single edge server that assists a mobile user in executing a sequence of computation tasks is considered, and a mixed integer non-linear programming (MINLP) is formulated that jointly optimizes the service caching placement, computation offloading decisions, and system resource allocation.
Abstract: In mobile edge computing (MEC) systems, edge service caching refers to pre-storing the necessary programs for executing computation tasks at MEC servers. Service caching effectively reduces the real-time delay/bandwidth cost on acquiring and initializing service applications when computation tasks are offloaded to the MEC servers. The limited caching space at resource-constrained edge servers calls for careful design of caching placement to determine which programs to cache over time. This is in general a complicated problem that highly correlates to the computation offloading decisions of computation tasks, i.e., whether or not to offload a task for edge execution. In this paper, we consider a single edge server that assists a mobile user (MU) in executing a sequence of computation tasks. In particular, the MU can upload and run its customized programs at the edge server, while the server can selectively cache the previously generated programs for future reuse. To minimize the computation delay and energy consumption of the MU, we formulate a mixed integer non-linear programming (MINLP) that jointly optimizes the service caching placement, computation offloading decisions, and system resource allocation (e.g., CPU processing frequency and transmit power of MU). To tackle the problem, we first derive the closed-form expressions of the optimal resource allocation solutions, and subsequently transform the MINLP into an equivalent pure 0-1 integer linear programming (ILP) that is much simpler to solve. To further reduce the complexity in solving the ILP, we exploit the underlying structures of caching causality and task dependency models, and accordingly devise a reduced-complexity alternating minimization technique to update the caching placement and offloading decision alternately. Extensive simulations show that the proposed joint optimization techniques achieve substantial resource savings of the MU compared to other representative benchmark methods considered.
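A skeleton Python sketch of the alternating-minimization idea described above: optimize the offloading decisions with the caching placement fixed, then update the caching placement with the offloading decisions fixed, and repeat until the objective stops improving. The cost function and the two inner optimizers are passed in as placeholders (assumptions); the paper's closed-form resource allocation and 0-1 ILP reformulation are not reproduced here.

```python
def alternating_minimization(num_tasks, cache_slots, cost,
                             optimize_offloading, optimize_caching,
                             max_iters=20, tol=1e-6):
    """Alternate between the two sub-problems until the objective stops
    improving. cost(caching, offloading) returns the weighted delay+energy
    objective; optimize_offloading(caching) and
    optimize_caching(offloading, cache_slots) are placeholder inner solvers."""
    caching = [0] * num_tasks      # 1 if a task's service program is cached
    offloading = [0] * num_tasks   # 1 if a task is offloaded for edge execution
    best = cost(caching, offloading)
    for _ in range(max_iters):
        offloading = optimize_offloading(caching)              # caching fixed
        caching = optimize_caching(offloading, cache_slots)    # offloading fixed
        new_cost = cost(caching, offloading)
        if best - new_cost < tol:  # no meaningful improvement: stop
            break
        best = new_cost
    return caching, offloading, best
```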

182 citations

Journal ArticleDOI
TL;DR: This study proposes an offloading model for a multi-user MEC system with multiple tasks, and an equivalent reinforcement learning formulation is created where the state space is defined based on all possible solutions and the actions are defined as movements between the different states.
Abstract: Computation offloading at mobile edge computing (MEC) servers can mitigate the resource limitation and reduce the communication latency for mobile devices. Thereby, in this study, we propose an offloading model for a multi-user MEC system with multiple tasks. In addition, a new caching concept is introduced for the computation tasks, where the application program and related code for the completed tasks are cached at the edge server. Furthermore, an efficient model of task offloading and caching integration is formulated as a nonlinear problem whose goal is to reduce the total overhead of time and energy. However, solving these types of problems is computationally prohibitive, especially for large numbers of mobile users. Thus, an equivalent reinforcement learning formulation is created, where the state space is defined based on all possible solutions and the actions are defined as movements between the different states. Afterwards, two effective Q-learning and Deep-Q-Network-based algorithms are proposed to derive the near-optimal solution for this problem. Finally, experimental evaluations verify that our proposed model can substantially minimize the mobile devices’ overhead by deploying the computation offloading and task caching strategy reasonably.
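A minimal tabular Q-learning sketch in Python, following the standard update rule used by such offloading/caching agents. The environment interface (reset/step/actions) and the reward (e.g. the negative time-plus-energy overhead) are illustrative assumptions, not the paper's exact MDP or its Deep-Q-Network variant.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning. env is assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and env.actions, a list of
    joint offload/cache decisions. The reward would typically be the negative
    time-plus-energy overhead of the chosen decision."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:                      # explore
                action = random.choice(env.actions)
            else:                                              # exploit
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            future = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
            state = next_state
    return Q
```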

115 citations

Journal ArticleDOI
TL;DR: A multiuser resource allocation and computation offloading model with data security is proposed to address the limitations of mobile users and IoT devices; it can significantly improve the performance of the entire system compared with local execution and full offloading schemes.

99 citations