Journal ArticleDOI

Towards Revenue-Driven Multi-User Online Task Offloading in Edge Computing

01 May 2022-IEEE Transactions on Parallel and Distributed Systems (Institute of Electrical and Electronics Engineers (IEEE))-Vol. 33, Iss: 5, pp 1185-1198
TL;DR: In this paper, the authors formulated the revenue-driven online task offloading problem as a linear fractional programming problem and proposed a Level Balanced Allocation (LBA) algorithm to solve it.
Abstract: Mobile Edge Computing (MEC) has become an attractive solution to enhance the computing and storage capacity of mobile devices by leveraging available resources on edge nodes. In MEC, task arrivals are highly dynamic and hard to predict precisely. It is therefore important yet challenging to assign tasks to edge nodes with guaranteed system performance. In this article, we aim to maximize the revenue earned by each edge node by optimally offloading tasks to the edge nodes. We formulate the revenue-driven online task offloading (ROTO) problem, which is proved to be NP-hard. We first relax ROTO to a linear fractional programming problem, for which we propose the Level Balanced Allocation (LBA) algorithm. We then show the performance guarantee of LBA through rigorous theoretical analysis, and present the LB-Rounding algorithm for ROTO using the primal-dual technique. The algorithm achieves an approximation ratio of $2(1+\xi)\ln(d+1)$ with a considerable probability, where $d$ is the maximum number of process slots of an edge node and $\xi$ is a small constant. The performance of the proposed algorithm is validated through both trace-driven simulations and testbed experiments. Results show that our proposed scheme is more efficient than baseline algorithms.
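The paper's LBA and LB-Rounding algorithms are not reproduced in this summary. As a rough illustration of the underlying idea of revenue-driven assignment under per-node slot limits, here is a hypothetical greedy sketch; the function names and the scoring rule are assumptions for illustration, not the authors' method:

```python
def assign_tasks(tasks, nodes, revenue):
    """Greedily assign each task to a node with a free process slot,
    preferring high revenue and, among ties, the least-loaded node
    (a crude stand-in for "level balancing").

    tasks   -- list of task ids
    nodes   -- dict: node id -> number of process slots (capacity d)
    revenue -- dict: (task, node) -> revenue earned if assigned
    Returns dict: task -> node, or None if no slot is free.
    """
    load = {n: 0 for n in nodes}
    assignment = {}
    for t in tasks:
        candidates = [n for n in nodes if load[n] < nodes[n]]
        if not candidates:
            assignment[t] = None
            continue
        # Prefer high revenue; break ties toward balanced slot "levels".
        best = max(candidates, key=lambda n: (revenue[(t, n)], -load[n]))
        load[best] += 1
        assignment[t] = best
    return assignment
```

A greedy pass like this has no approximation guarantee; the paper's LP relaxation plus primal-dual rounding is what yields the $2(1+\xi)\ln(d+1)$ ratio.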
Citations
Journal ArticleDOI
TL;DR: A taxonomy of recent literature on scheduling IoT applications in Fog computing is presented; based on new classification schemes, current works in the literature are analyzed, research gaps of each category are identified, and respective future directions are described.
Abstract: Fog computing, as a distributed paradigm, offers cloud-like services at the edge of the network with low latency and high-access bandwidth to support a diverse range of IoT application scenarios. To fully utilize the potential of this computing paradigm, scalable, adaptive, and accurate scheduling mechanisms and algorithms are required to efficiently capture the dynamics and requirements of users, IoT applications, environmental properties, and optimization targets. This article presents a taxonomy of recent literature on scheduling IoT applications in Fog computing. Based on our new classification schemes, current works in the literature are analyzed, research gaps of each category are identified, and respective future directions are described.

16 citations

Journal ArticleDOI
15 Feb 2022-PeerJ
TL;DR: This paper first formulates the task scheduling problem as a binary nonlinear program and proposes an integer particle swarm optimization method (IPSO) to solve it in a reasonable time, achieving better performance than several classical and state-of-the-art task scheduling methods in SLA satisfaction and resource efficiency, respectively.
Abstract: Task scheduling helps to improve resource efficiency and user satisfaction for Device-Edge-Cloud Cooperative Computing (DE3C) by properly mapping requested tasks to hybrid device-edge-cloud resources. In this paper, we focus on the task scheduling problem of optimizing Service-Level Agreement (SLA) satisfaction and resource efficiency in DE3C environments. Existing works address only one or two of the three sub-problems (offloading decision, task assignment, and task ordering), leading to sub-optimal solutions. To address this issue, we first formulate the problem as a binary nonlinear program, and propose an integer particle swarm optimization method (IPSO) to solve it in a reasonable time. With integer coding of task assignment to computing cores, our method exploits IPSO to jointly solve the offloading-decision and task-assignment problems, and integrates an earliest-deadline-first scheme into the IPSO to solve the task-ordering problem for each core. Extensive experimental results show that our method achieves up to 953% and 964% better performance than several classical and state-of-the-art task scheduling methods in SLA satisfaction and resource efficiency, respectively.
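As a toy illustration of the integer-PSO idea sketched above (discrete positions pulled toward personal and global bests, with earliest-deadline-first ordering on each core), here is a hypothetical sketch; the update probabilities and fitness function are assumptions, not the authors' exact formulation:

```python
import random

def edf_met(tasks, assign, n_cores):
    """Count tasks meeting their deadlines when each core runs its
    assigned tasks in earliest-deadline-first order.
    tasks: list of (exec_time, deadline); assign: core index per task."""
    met = 0
    for c in range(n_cores):
        queue = sorted((tasks[i] for i, a in enumerate(assign) if a == c),
                       key=lambda t: t[1])
        clock = 0
        for exec_time, deadline in queue:
            clock += exec_time
            if clock <= deadline:
                met += 1
    return met

def integer_pso(tasks, n_cores, n_particles=20, iters=100, seed=0):
    """Discrete PSO over integer task-to-core assignments: each component
    is probabilistically reset toward the personal best, the global best,
    or a random core (a common integer-PSO variant)."""
    rng = random.Random(seed)
    n = len(tasks)
    swarm = [[rng.randrange(n_cores) for _ in range(n)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pfit = [edf_met(tasks, p, n_cores) for p in swarm]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for j in range(n):
                r = rng.random()
                if r < 0.4:
                    p[j] = pbest[i][j]            # pull toward personal best
                elif r < 0.8:
                    p[j] = gbest[j]               # pull toward global best
                elif r < 0.9:
                    p[j] = rng.randrange(n_cores) # random exploration
            f = edf_met(tasks, p, n_cores)
            if f > pfit[i]:
                pbest[i], pfit[i] = p[:], f
                if f > gfit:
                    gbest, gfit = p[:], f
    return gbest, gfit
```

For two identical tasks of length 2 with deadline 2 on two cores, the swarm quickly discovers that separating them meets both deadlines, while co-locating them meets only one.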

10 citations

Journal ArticleDOI
18 Jan 2022-PeerJ
TL;DR: This article formulates the task scheduling problem as a binary nonlinear program and proposes a three-stage heuristic scheduling method to solve it in polynomial time, achieving up to 59% better performance in service level agreement satisfaction without decreasing resource efficiency.
Abstract: Device-edge-cloud cooperative computing is increasingly popular, as it can effectively address the resource scarcity of user devices. Improving resource efficiency through task scheduling is one of the most challenging issues in such computing environments. Existing works use the limited resources of devices and edge servers in preference, which can leave the abundant cloud resources underused. This article studies the task scheduling problem of optimizing service level agreement satisfaction, measured as the number of tasks whose hard deadlines are met, for device-edge-cloud cooperative computing. The article first formulates the problem as a binary nonlinear program, and then proposes a heuristic scheduling method with three stages to solve it in polynomial time. The first stage tries to fully exploit the abundant cloud resources by pre-scheduling user tasks in the resource priority order of clouds, edge servers, and local devices. In the second stage, the heuristic reschedules some tasks from edges to devices, to free shared edge resources for tasks that cannot be completed locally, and schedules those tasks to edge servers. In the last stage, the method reschedules as many tasks as possible from clouds to edges or devices, to reduce the resource cost. Experiment results show that the method achieves up to 59% better service level agreement satisfaction without decreasing resource efficiency, compared with eight classical and state-of-the-art methods.
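The three-stage flow described above can be sketched as follows. This is a hypothetical simplification that treats each tier's finish time as fixed and models only a shared edge-slot budget; the paper's actual method handles richer resource constraints:

```python
def three_stage_schedule(tasks, edge_slots):
    """Toy three-stage heuristic for device-edge-cloud scheduling.
    tasks: list of dicts with finish times per tier and a 'deadline',
    e.g. {'device': 9, 'edge': 4, 'cloud': 6, 'deadline': 8}.
    Returns the chosen tier per task (None if no tier meets the deadline)."""
    place = [None] * len(tasks)
    used = 0  # occupied shared edge slots
    # Stage 1: pre-schedule in the resource priority order cloud > edge > device.
    for i, t in enumerate(tasks):
        for tier in ('cloud', 'edge', 'device'):
            if t[tier] <= t['deadline'] and (tier != 'edge' or used < edge_slots):
                place[i] = tier
                if tier == 'edge':
                    used += 1
                break
    # Stage 2: push edge tasks back to their devices when feasible, then use
    # the freed slots for tasks that could not be placed at all.
    for i, t in enumerate(tasks):
        if place[i] == 'edge' and t['device'] <= t['deadline']:
            place[i] = 'device'
            used -= 1
    for i, t in enumerate(tasks):
        if place[i] is None and used < edge_slots and t['edge'] <= t['deadline']:
            place[i] = 'edge'
            used += 1
    # Stage 3: pull cloud tasks down to a device or edge slot to cut cloud cost.
    for i, t in enumerate(tasks):
        if place[i] == 'cloud':
            if t['device'] <= t['deadline']:
                place[i] = 'device'
            elif used < edge_slots and t['edge'] <= t['deadline']:
                place[i] = 'edge'
                used += 1
    return place
```

With one edge slot and two tasks, a task that only the edge can serve in time ends up on the edge, while a task the device can finish itself is pulled back locally by stage 2, illustrating how the stages cooperate.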

8 citations

Journal ArticleDOI
TL;DR: In this paper, a greedy algorithm was proposed to minimize the response latency in the proposed SDN-assisted MEC architecture, where the rule-based forwarding policy in SDN can help determine the most suitable offloading path and CAP for undertaking the computation.
Abstract: Mobile edge computing (MEC) can provision augmented computational capacity in proximity so as to better support the Industrial Internet of Things (IIoT). Tasks from IIoT devices can be outsourced and executed at an accessible computational access point (CAP). This computing paradigm brings computing resources much closer to the IIoT devices and thus satisfies the stringent latency requirements of IIoT tasks. However, existing works in MEC that focus on task offloading and resource allocation seldom consider the load balancing issue. Therefore, load-balance-aware task offloading strategies for IIoT devices in MEC are urgently needed. In this article, software-defined network (SDN) technology is adopted to address this issue, since the rule-based forwarding policy in SDN can help determine the most suitable offloading path and CAP for undertaking the computation. To this end, we formulate an optimization problem to minimize the response latency in the proposed SDN-assisted MEC architecture. A greedy algorithm is put forward to obtain an approximate optimal solution in polynomial time. Simulations have been carried out to evaluate the performance of the proposed approach. The simulation results reveal that our approach outperforms other approaches in terms of response latency.
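The greedy step described above, choosing the CAP with the lowest response latency, can be sketched as below. The latency model (transmission plus queueing plus processing) and all parameter names are illustrative assumptions, not the paper's formulation:

```python
def pick_cap(task_size_mb, caps):
    """Greedy CAP selection: choose the CAP minimizing estimated response
    latency = transmission + queueing + processing time.
    caps: list of (name, bandwidth_mbps, proc_rate_mb_per_s, queue_delay_s)."""
    def latency(cap):
        _, bandwidth, proc_rate, queue_delay = cap
        transmit = task_size_mb * 8 / bandwidth   # Mb over Mbps -> seconds
        process = task_size_mb / proc_rate
        return transmit + queue_delay + process
    return min(caps, key=latency)[0]
```

In an SDN-assisted setting, the controller's global view would supply the bandwidth and queue estimates that this function takes as inputs.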

1 citation

Journal ArticleDOI
TL;DR: In this article, the authors investigated a DNN model placement problem for AIoT applications, where trained DNN models are selected and placed on UAVs to execute inference tasks locally.

1 citation

References
Journal ArticleDOI
TL;DR: The results from a proof-of-concept prototype suggest that VM technology can indeed help meet the need for rapid customization of infrastructure for diverse applications, and this article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them.
Abstract: Mobile computing continuously evolves through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition and natural language processing. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that is well connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.

3,599 citations

Journal ArticleDOI
TL;DR: In this article, the backhaul network capacity and energy efficiency of ultra-dense cellular networks are investigated to answer how much densification can be deployed for 5G ultra-dense cellular networks.
Abstract: Traditional ultra-dense wireless networks are recommended as a complement for cellular networks and are deployed in partial areas, such as hotspot and indoor scenarios. Based on the massive multiple-input multi-output antennas and the millimeter wave communication technologies, the 5G ultra-dense cellular network is proposed to deploy in overall cellular scenarios. Moreover, a distribution network architecture is presented for 5G ultra-dense cellular networks. Furthermore, the backhaul network capacity and the backhaul energy efficiency of ultra-dense cellular networks are investigated to answer an important question, that is, how much densification can be deployed for 5G ultra-dense cellular networks. Simulation results reveal that there exist densification limits for 5G ultra-dense cellular networks with backhaul network capacity and backhaul energy efficiency constraints.

845 citations

Journal ArticleDOI
TL;DR: This paper investigates the task offloading problem in ultra-dense networks, aiming to minimize the delay while saving the battery life of the user's equipment, and proposes an efficient offloading scheme which can reduce task duration by 20% with 30% energy saving.
Abstract: With the development of recent innovative applications (e.g., augmented reality, self-driving, and various cognitive applications), more and more computation-intensive and data-intensive tasks are delay-sensitive. Mobile edge computing in ultra-dense networks is expected to be an effective solution for meeting the low-latency demand. However, the distributed computing resources in the edge cloud and the energy dynamics in the battery of the mobile device make it challenging to offload tasks for users. In this paper, leveraging the idea of software defined networking, we investigate the task offloading problem in ultra-dense networks, aiming to minimize the delay while saving the battery life of the user's equipment. Specifically, we formulate the task offloading problem as a mixed integer nonlinear program, which is NP-hard. To solve it, we transform the optimization problem into two sub-problems, i.e., a task placement sub-problem and a resource allocation sub-problem. Based on the solutions of the two sub-problems, we propose an efficient offloading scheme. Simulation results show that the proposed scheme can reduce task duration by 20% with 30% energy saving, compared with random and uniform task offloading schemes.
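The decomposition into a placement sub-problem and a resource allocation sub-problem can be illustrated with a toy model. Here the resource allocation sub-problem has a simple closed form (splitting a server's capacity in proportion to load equalizes delays), and the placement sub-problem is solved by brute force, which is only viable at toy sizes; the paper's actual sub-problem solvers are not reproduced:

```python
from itertools import product

def best_delay(loads, capacity):
    """Resource allocation sub-problem on one server: allocating capacity
    proportional to each load equalizes delays, so the optimal worst-case
    delay is sum(loads) / capacity."""
    return sum(loads) / capacity if loads else 0.0

def place_tasks(loads, capacities):
    """Task placement sub-problem: exhaustively try all placements and keep
    the one minimizing the maximum per-server delay (exponential; for
    illustration only). Returns (best_delay, best_placement)."""
    n_servers = len(capacities)
    best = None
    for placement in product(range(n_servers), repeat=len(loads)):
        worst = max(
            best_delay([l for l, p in zip(loads, placement) if p == s],
                       capacities[s])
            for s in range(n_servers))
        if best is None or worst < best[0]:
            best = (worst, placement)
    return best
```

With two equal loads and two equal servers, the search correctly splits the loads, halving the worst-case delay relative to co-location.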

821 citations

Proceedings Article
22 Jun 2010
TL;DR: An analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing and measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing are presented.
Abstract: Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions.
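The core balance described above, offloading pays off only when local computation energy exceeds communication energy, can be written as a one-line rule. The parameter names and the linear energy models are simplifying assumptions; the paper shows the real trade-offs are far more sensitive to workload and radio technology:

```python
def should_offload(cycles, data_bits, energy_per_cycle, energy_per_bit):
    """Offload only if the energy to compute locally (cycles * J/cycle)
    exceeds the energy to ship the task's data (bits * J/bit).
    A toy version of the local-vs-remote energy balance."""
    return cycles * energy_per_cycle > data_bits * energy_per_bit
```

A compute-heavy task (many cycles, little data) favors offloading, while a data-heavy task with modest computation favors local execution.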

738 citations

Journal ArticleDOI
TL;DR: This paper develops an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint.
Abstract: Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $[O(1/V), O(V)]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.
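The $[O(1/V), O(V)]$ power-delay tradeoff comes from a Lyapunov drift-plus-penalty controller: each slot, the frequency is chosen to minimize $V \cdot \text{power} - \text{queue} \cdot \text{service}$, so a larger $V$ weights power savings over queue drain. Here is a hypothetical one-slot sketch with a cubic CPU power model; it is a grid-search stand-in for the paper's closed-form solutions, and all parameter values are illustrative:

```python
def drift_plus_penalty_freq(queue, V, freqs, kappa=1e-27, slot=1e-3):
    """Pick the CPU frequency minimizing the drift-plus-penalty objective
    V * power(f) - queue * service(f) for one time slot.
    Larger V favors low power at the cost of longer queues.
    queue: backlog in CPU cycles; freqs: candidate frequencies in Hz."""
    def cost(f):
        power = kappa * f ** 3   # dynamic CPU power model, P = kappa * f^3
        service = f * slot       # cycles served this slot
        return V * power - queue * service
    return min(freqs, key=cost)
```

With an empty queue the controller idles at the lowest frequency (pure power penalty), while a large backlog drives it to the highest frequency, which is exactly the behavior the $[O(1/V), O(V)]$ result formalizes.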

576 citations