
Showing papers on "Scheduling (computing)" published in 2017


Journal ArticleDOI
TL;DR: This paper develops an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint.
Abstract: Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $[O(1/V), O(V)]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.
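The $[O(1/V), O(V)]$ tradeoff is the signature of Lyapunov drift-plus-penalty control: each slot the scheduler minimizes $V$ times the power penalty plus the queue-weighted drift, so a larger control parameter $V$ buys lower power at the price of a longer task buffer. The sketch below is a minimal single-queue illustration of that mechanism with a hypothetical cubic CPU power model and toy arrivals; it is not the paper's multi-user algorithm.

```python
import random

def drift_plus_penalty(V, slots=20000, seed=0):
    """Single-queue drift-plus-penalty sketch: each slot pick the CPU speed f that
    minimises V * power(f) - Q * f, with power(f) = kappa * f**3 (toy model)."""
    rng = random.Random(seed)
    kappa = 0.01                      # hypothetical dynamic-power coefficient
    speeds = [0, 1, 2, 3, 4]          # hypothetical per-slot service amounts (tasks per slot)
    Q = avg_power = avg_backlog = 0.0
    for _ in range(slots):
        arrivals = rng.choice([0, 1, 2])                       # toy arrivals, mean 1 per slot
        f = min(speeds, key=lambda s: V * kappa * s ** 3 - Q * s)
        Q = max(Q + arrivals - f, 0.0)
        avg_power += kappa * f ** 3 / slots
        avg_backlog += Q / slots
    return avg_power, avg_backlog

for V in (1, 10, 100):
    p, q = drift_plus_penalty(V)
    print(f"V={V:3d}  average power={p:.3f}  average backlog={q:.2f}")
```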

576 citations


Journal ArticleDOI
TL;DR: In this paper, the scheduling problem of DERs is studied from various aspects such as modeling techniques, solving methods, reliability, emission, uncertainty, stability, demand response (DR), and multi-objective standpoint in the microgrid and VPP frameworks.
Abstract: Due to different viewpoints, procedures, limitations, and objectives, the scheduling problem of distributed energy resources (DERs) is a very important issue in power systems. This problem can be solved by considering different frameworks. Microgrids and Virtual Power Plants (VPPs) are two famous and suitable concepts by which this problem is solved within their frameworks. Each of these two solutions has its own special significance and may be employed for different purposes. Therefore, it is necessary to assess and review papers and literature in this field. In this paper, the scheduling problem of DERs is studied from various aspects such as modeling techniques, solving methods, reliability, emission, uncertainty, stability, demand response (DR), and multi-objective standpoint in the microgrid and VPP frameworks. This review enables researchers with different points of view to look for possible applications in the area of microgrid and VPP scheduling.

385 citations


Journal ArticleDOI
TL;DR: The scheduling problem in Fog computing is analyzed, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
Abstract: Fog computing provides a distributed infrastructure at the edges of the network, resulting in low-latency access and faster response to application requests when compared to centralized clouds. With this new level of computing capacity introduced between users and the data center-based clouds, new forms of resource allocation and management can be developed to take advantage of the Fog infrastructure. A wide range of applications with different requirements run on end-user devices, and with the popularity of cloud computing many of them rely on remote processing or storage. As clouds are primarily delivered through centralized data centers, such remote processing/storage usually takes place at a single location that hosts user applications and data. The distributed capacity provided by Fog computing allows execution and storage to be performed at different locations. The combination of distributed capacity, the range and types of user applications, and the mobility of smart devices requires resource management and scheduling strategies that take all of these factors into account. We analyze the scheduling problem in Fog computing, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
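As a rough illustration of how the FCFS and delay-priority policies mentioned here could order queued tasks, the sketch below uses hypothetical task attributes and a one-server fog-node model; it is not the paper's simulation setup (the concurrent policy would simply admit all tasks immediately).

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    arrival: float      # arrival time at the fog node (s)
    service: float      # processing time on the fog node (s)
    tolerance: float    # hypothetical delay tolerance (s); smaller = more latency-sensitive

def run(order_key, tasks):
    """Serve tasks one at a time on a single fog node in the order given by order_key."""
    lateness, clock = {}, 0.0
    for task in sorted(tasks, key=order_key):
        clock = max(clock, task.arrival) + task.service
        lateness[task.name] = round(clock - task.arrival - task.tolerance, 2)
    return lateness

tasks = [Task("video", 0.0, 3.0, 1.0), Task("sensor", 0.5, 0.5, 0.2), Task("backup", 1.0, 2.0, 10.0)]
print("FCFS          :", run(lambda t: t.arrival, tasks))     # serve in arrival order
print("delay-priority:", run(lambda t: t.tolerance, tasks))   # most delay-sensitive first
```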

337 citations


Journal ArticleDOI
TL;DR: Efficient job caching is proposed to better schedule jobs based on the information collected on neighboring vehicles, including GPS information, and a scheduling algorithm based on ant colony optimization is designed to solve this job assignment problem.
Abstract: With the emergence of in-vehicle applications, providing the required computational capabilities is becoming a crucial problem. This paper proposes a framework named autonomous vehicular edge (AVE) for edge computing on the road, with the aim of increasing the computational capabilities of vehicles in a decentralized manner. By managing the idle computational resources on vehicles and using them efficiently, the proposed AVE framework can provide computation services in dynamic vehicular environments without requiring particular infrastructures to be deployed. Specifically, this paper introduces a workflow to support the autonomous organization of vehicular edges. Efficient job caching is proposed to better schedule jobs based on the information collected on neighboring vehicles, including GPS information. A scheduling algorithm based on ant colony optimization is designed to solve this job assignment problem. Extensive simulations are conducted, and the simulation results demonstrate the superiority of this approach over competing schemes in typical urban and highway scenarios.
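A generic ant-colony-optimization sketch for a job-to-vehicle assignment is shown below: pheromone on (job, vehicle) pairs biases later ants toward good assignments. The cost matrix, parameters, and total-cost objective are toy assumptions, not the AVE framework's exact formulation.

```python
import random

def aco_assign(cost, ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.1, seed=1):
    """Generic ACO sketch: assign each job j to a vehicle v, minimising total cost[j][v]."""
    rng = random.Random(seed)
    J, V = len(cost), len(cost[0])
    tau = [[1.0] * V for _ in range(J)]            # pheromone on (job, vehicle) pairs
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign = []
            for j in range(J):
                # desirability combines pheromone and the heuristic 1/cost
                w = [tau[j][v] ** alpha * (1.0 / cost[j][v]) ** beta for v in range(V)]
                assign.append(rng.choices(range(V), weights=w)[0])
            c = sum(cost[j][assign[j]] for j in range(J))
            if c < best_cost:
                best, best_cost = assign, c
        # evaporate, then reinforce the best-so-far assignment
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for j, v in enumerate(best):
            tau[j][v] += 1.0 / best_cost
    return best, best_cost

# hypothetical processing costs of 4 jobs on 3 neighbouring vehicles
cost = [[4, 2, 6], [3, 5, 1], [2, 4, 4], [5, 1, 3]]
print(aco_assign(cost))
```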

266 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: The first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, is derived, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1).
Abstract: In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∈ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.

258 citations


Journal ArticleDOI
TL;DR: This paper gives an extensive literature review on energy-efficient train control (EETC), from the first simple models from the 1960s of a train running on a level track to the advanced models and algorithms of the last decade dealing with varying gradients and speed limits, and including regenerative braking.

255 citations


Journal ArticleDOI
TL;DR: This paper investigates energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI, and proposes an iterative algorithm for user scheduling and power allocation to maximize the system energy efficiency.
Abstract: Non-orthogonal multiple access (NOMA) exploits successive interference cancellation technique at the receivers to improve the spectral efficiency. By using this technique, multiple users can be multiplexed on the same subchannel to achieve high sum rate. Most previous research works on NOMA systems assume perfect channel state information (CSI). However, in this paper, we investigate energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI. The energy efficient resource scheduling problem is formulated as a non-convex optimization problem with the constraints of outage probability limit, the maximum power of the system, the minimum user data rate, and the maximum number of multiplexed users sharing the same subchannel. Different from previous works, the maximum number of multiplexed users can be greater than two, and the imperfect CSI is first studied for resource allocation in NOMA. To efficiently solve this problem, the probabilistic mixed problem is first transformed into a non-probabilistic problem. An iterative algorithm for user scheduling and power allocation is proposed to maximize the system energy efficiency. The optimal user scheduling based on exhaustive search serves as a system performance benchmark, but it has high computational complexity. To balance the system performance and the computational complexity, a new suboptimal user scheduling scheme is proposed to schedule users on different subchannels. Based on the user scheduling scheme, the optimal power allocation expression is derived by the Lagrange approach. By transforming the fractional-form problem into an equivalent subtractive-form optimization problem, an iterative power allocation algorithm is proposed to maximize the system energy efficiency. Simulation results demonstrate that the proposed user scheduling algorithm closely attains the optimal performance.
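The transformation of the fractional-form problem into an equivalent subtractive-form problem mentioned here is the classic Dinkelbach method for fractional programming. The sketch below applies it to a single-link energy-efficiency toy (rate divided by consumed power, with a simple log-rate model); it is not the paper's multi-user NOMA algorithm.

```python
import math

def dinkelbach_ee(p_max=1.0, p_circuit=0.1, gain=5.0, tol=1e-8):
    """Maximise EE(p) = rate(p) / (p + p_circuit) with rate(p) = log2(1 + gain * p)."""
    def rate(p):
        return math.log2(1.0 + gain * p)

    def best_power(lam, grid=10000):
        # inner subtractive problem: maximise rate(p) - lam * (p + p_circuit) over [0, p_max]
        return max((i * p_max / grid for i in range(grid + 1)),
                   key=lambda p: rate(p) - lam * (p + p_circuit))

    lam = 0.0
    while True:
        p = best_power(lam)
        f = rate(p) - lam * (p + p_circuit)
        if f < tol:                      # converged: lam is the optimal energy efficiency
            return lam, p
        lam = rate(p) / (p + p_circuit)  # Dinkelbach update

ee, p = dinkelbach_ee()
print(f"optimal EE ~ {ee:.3f} (bit/s/Hz per watt) at p ~ {p:.3f} W")
```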

250 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: This paper forms a Markov decision process (MDP) to find dynamic transmission scheduling schemes, with the purpose of minimizing the long-run average age, and proposes both optimal off-line and online scheduling algorithms for the finite-approximate MDPs, depending on knowledge of time-varying arrivals.
Abstract: Age of information is a newly proposed metric that captures delay from an application layer perspective. The age measures the amount of time that has elapsed from the moment the most recently received update was generated until the present time. In this paper, we study an age minimization problem over a wireless broadcast network with many users, where only one user can be served at a time. We formulate a Markov decision process (MDP) to find dynamic transmission scheduling schemes, with the purpose of minimizing the long-run average age. While showing that an optimal scheduling algorithm for the MDP is a simple stationary switch-type policy, we propose a sequence of finite-state approximations for our infinite-state MDP and prove its convergence. We then propose both optimal off-line and online scheduling algorithms for the finite-approximate MDPs, depending on knowledge of time-varying arrivals.
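For intuition, the sketch below simulates a simple greedy baseline for this setting: each slot, transmit to the user whose information is stalest, with hypothetical per-user success probabilities and updates assumed always available. This is only a baseline estimate of the long-run average age, not the paper's MDP-optimal switch-type policy.

```python
import random

def avg_age_max_age_first(p_success, slots=100000, seed=0):
    """Greedy baseline: each slot transmit to the user with the largest current age."""
    rng = random.Random(seed)
    ages = [1] * len(p_success)
    total = 0
    for _ in range(slots):
        u = max(range(len(ages)), key=lambda i: ages[i])   # pick the stalest user
        delivered = rng.random() < p_success[u]
        ages = [1 if (i == u and delivered) else a + 1 for i, a in enumerate(ages)]
        total += sum(ages)
    return total / slots

# hypothetical per-user channel success probabilities
print("long-run average total age:", round(avg_age_max_age_first([0.9, 0.7, 0.5]), 2))
```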

241 citations


Journal ArticleDOI
TL;DR: This paper generates asymptotically optimal schedules that are tolerant to out-of-date network knowledge, thereby relieving stringent feedback requirements; the approach is able to dramatically reduce feedback at no cost in optimality.
Abstract: Mobile edge computing is of particular interest to Internet of Things (IoT), where inexpensive simple devices can get complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, congested air interface, and more prominently, prohibitive feedback from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent requirements on feedback. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge on the data and energy backlogs of all devices. The knapsack problem is relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partially out-of-date knowledge preserves asymptotic optimality and allows devices to self-nominate for feedback. Corroborated by simulations, our approach is able to dramatically reduce feedback at no cost in optimality. The number of devices that need to feed back is reduced to less than 60 out of a total of 5000 IoT devices.
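The per-slot knapsack mentioned here could be solved with the standard 0/1 dynamic program below, selecting the set of devices that maximizes a backlog-weighted utility within the slot's resource budget; the utilities, costs, and budget are toy values, not the paper's formulation.

```python
def knapsack_schedule(utilities, costs, budget):
    """0/1 knapsack DP: pick the devices maximising total utility within the slot's resource budget."""
    n = len(utilities)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if costs[i - 1] <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - costs[i - 1]] + utilities[i - 1])
    # trace back which devices were scheduled
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return sorted(chosen), best[n][budget]

# toy per-device utilities (e.g. backlog-weighted rates) and resource-block costs
print(knapsack_schedule(utilities=[6.0, 3.5, 4.0, 2.0], costs=[3, 2, 2, 1], budget=5))
```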

230 citations


Journal ArticleDOI
TL;DR: The proposed algorithm combines the advantages of an evolutionary genetic algorithm with heuristic approaches; it outperforms three well-known heuristic algorithms in terms of makespan and a recent meta-heuristic algorithm in terms of execution time.

221 citations


Journal ArticleDOI
TL;DR: In this paper, a modular energy management system and its integration to a grid-connected battery-based microgrid is presented, where a power generation-side strategy is defined as a general mixed-integer linear programming by taking into account two stages for proper charging of the storage units.
Abstract: Microgrids are energy systems that aggregate distributed energy resources, loads, and power electronics devices in a stable and balanced way. They rely on energy management systems to schedule optimally the distributed energy resources. Conventionally, many scheduling problems have been solved by using complex algorithms that, even so, do not consider the operation of the distributed energy resources. This paper presents the modeling and design of a modular energy management system and its integration to a grid-connected battery-based microgrid. The scheduling model is a power generation-side strategy, defined as a general mixed-integer linear programming by taking into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour ahead forecast data. The operation of the microgrid is complemented with a supervisory control stage that compensates any mismatch between the offline scheduling process and the real time microgrid operation. The proposal has been tested experimentally in a hybrid microgrid at the Microgrid Research Laboratory, Aalborg University.

Journal ArticleDOI
TL;DR: In this article, a power-efficient resource allocation for multicarrier non-orthogonal multiple access (NOMA) systems is studied, which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power.
Abstract: In this paper, we study power-efficient resource allocation for multicarrier non-orthogonal multiple access systems. The resource allocation algorithm design is formulated as a non-convex optimization problem which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power. The proposed framework takes into account the imperfection of channel state information at the transmitter and quality of service requirements of users. To facilitate the design of the optimal SIC decoding policy on each subcarrier, we define a channel-to-noise ratio outage threshold. Subsequently, the considered non-convex optimization problem is recast as a generalized linear multiplicative programming problem, for which a globally optimal solution is obtained via the branch-and-bound approach. The optimal resource allocation policy serves as a system performance benchmark due to its high computational complexity. To strike a balance between system performance and computational complexity, we propose a suboptimal iterative resource allocation algorithm based on difference of convex programming. Simulation results demonstrate that the suboptimal scheme achieves a close-to-optimal performance. Also, both proposed schemes provide significant transmit power savings compared with conventional orthogonal multiple access schemes.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a tractable approach to analyze the delay in the heterogeneous cellular networks with spatio-temporal random arrival of traffic, and evaluated the effect of different scheduling policies on the delay performance.
Abstract: Emergence of new types of services has led to various traffic and diverse delay requirements in fifth-generation (5G) wireless networks. Meeting diverse delay requirements is one of the most critical goals for the design of 5G wireless networks. Though the delay of point-to-point communications has been well investigated, the delay of multi-point to multi-point communications has not been thoroughly studied, since it is a complicated function of all links in the network. In this paper, we propose a novel tractable approach to analyze the delay in the heterogeneous cellular networks with spatio-temporal random arrival of traffic. Specifically, we propose the notion of delay outage and evaluate the effect of different scheduling policies on the delay performance. Our numerical analysis reveals that offloading policy based on the cell range expansion greatly reduces the macrocell traffic, while bringing a small amount of growth for the picocell traffic. Our results also show that the delay performance of round-robin scheduling outperforms first-in first-out scheduling for heavy traffic, and it is reversed for light traffic. In summary, this analytical framework provides an understanding and a rule-of-thumb for the practical deployment of 5G systems, where delay requirement is increasingly becoming a key concern.

Journal ArticleDOI
TL;DR: Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of the proposed cooperative MCR-NOMA scheme, and it is demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.
Abstract: Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.

Journal ArticleDOI
TL;DR: The survey explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.
Abstract: This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and Earliest Deadline First (EDF) scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.

Proceedings ArticleDOI
12 Oct 2017
TL;DR: LAVEA is a system built on top of an edge computing platform that offloads computation between clients and edge nodes and coordinates nearby edge nodes to provide low-latency video analytics at places closer to the users.
Abstract: Along the trend of pushing computation from the network core to the edge, where most of the data are generated, edge computing has shown its potential in reducing response time, lowering bandwidth usage, improving energy efficiency, and so on. At the same time, low-latency video analytics is becoming more and more important for applications in public safety, counter-terrorism, self-driving cars, VR/AR, etc. As those tasks are either computation intensive or bandwidth hungry, edge computing fits in well here with its ability to flexibly utilize computation and bandwidth from and between each layer. In this paper, we present LAVEA, a system built on top of an edge computing platform, which offloads computation between clients and edge nodes and coordinates nearby edge nodes to provide low-latency video analytics at places closer to the users. We have utilized an edge-first design and formulated an optimization problem for offloading task selection and prioritized offloading requests received at the edge node to minimize the response time. In case of a saturating workload on the front edge node, we have designed and compared various task placement schemes that are tailored for inter-edge collaboration. We have implemented and evaluated our system. Our results reveal that the client-edge configuration has a speedup ranging from 1.3x to 4x (1.2x to 1.7x) against running locally (against the client-cloud configuration). The proposed shortest scheduling latency first scheme yields the best overall task placement performance for inter-edge collaboration.

Journal ArticleDOI
TL;DR: This work identifies challenges and studies existing algorithms from the perspective of the scheduling models they adopt as well as the resource and application model they consider, and a detailed taxonomy that focuses on features particular to clouds is presented.
Abstract: Large-scale scientific problems are often modeled as workflows. The ever-growing data and compute requirements of these applications have led to extensive research on how to efficiently schedule and deploy them in distributed environments. The emergence of the latest distributed systems paradigm, cloud computing, brings with it tremendous opportunities to run scientific workflows at low costs without the need of owning any infrastructure. It provides a virtually infinite pool of resources that can be acquired, configured, and used as needed and are charged on a pay-per-use basis. However, along with these benefits come numerous challenges that need to be addressed to generate efficient schedules. This work identifies these challenges and studies existing algorithms from the perspective of the scheduling models they adopt as well as the resource and application models they consider. A detailed taxonomy that focuses on features particular to clouds is presented, and the surveyed algorithms are classified according to it. In this way, we aim to provide a comprehensive understanding of existing literature and aid researchers by providing an insight into future directions and open issues.

Journal ArticleDOI
TL;DR: An online optimal operation approach for CCHP microgrids based on model predictive control, with feedback correction to compensate for prediction error, is proposed; a case study demonstrates its effectiveness, with better matching between demand and supply.
Abstract: Combined cooling, heating, and power (CCHP) systems have been widely applied in various kinds of buildings. Most operation strategies for CCHP microgrids are designed based on day-ahead profiles. However, prediction error for renewable energy resources (RES) and load leads to suboptimal operation in dispatch scheduling. In this paper, we propose an online optimal operation approach for CCHP microgrids based on model predictive control with feedback correction to compensate for prediction error. This approach includes two hierarchies: 1) rolling optimization; and 2) feedback correction. In the rolling part, a hybrid algorithm based on integrating time series analysis and Kalman filters is used to forecast the power for RES and load. A rolling optimization model is established to schedule operation according to the latest forecast information. The rolling dispatch scheduling is then adjusted based on ultra-short-term error prediction. The feedback correction model is applied to minimize the adjustments and to compensate for prediction error. A case study demonstrates the effectiveness of the proposed approach with better matching between demand and supply.
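A minimal receding-horizon sketch of the rolling-optimization / feedback-correction split is shown below: plan battery dispatch from noisy forecasts, apply only the first step, and re-plan after observing the measured net load. The greedy horizon "optimizer", the toy forecast-noise model, and the battery parameters are stand-ins for the paper's scheduling model and Kalman-filter-based forecasts.

```python
import random

def mpc_dispatch(net_load, horizon=4, capacity=10.0, rate=3.0, seed=0):
    """Receding-horizon sketch: plan from noisy forecasts, apply only the first step,
    then re-plan at the next slot using the measured value (feedback correction)."""
    rng = random.Random(seed)
    soc, grid = capacity / 2, []
    for t, actual in enumerate(net_load):
        # toy forecast: true value plus noise that grows with look-ahead distance
        fc = [net_load[min(t + k, len(net_load) - 1)] + rng.gauss(0, 0.3 * k) for k in range(horizon)]
        mean = sum(fc) / len(fc)
        if fc[0] > mean:                      # predicted peak -> discharge the battery
            action = min(rate, soc)
        else:                                 # predicted valley -> charge the battery
            action = -min(rate, capacity - soc)
        action = min(action, actual)          # correction: never discharge more than the measured load
        soc -= action
        grid.append(actual - action)          # power drawn from the grid this slot
    return grid

load = [2, 3, 8, 9, 4, 2, 1, 7, 8, 3]         # hypothetical net load profile (demand minus renewables)
print([round(g, 1) for g in mpc_dispatch(load)])
```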

Journal ArticleDOI
TL;DR: This paper presents an in-depth analysis of the Particle Swarm Optimization-based task and workflow scheduling schemes proposed for the cloud environment in the literature and provides a classification of the proposed scheduling schemes based on the type of the PSO algorithms which have been applied and illuminates their objectives, properties and limitations.
Abstract: Cloud computing provides effective mechanisms for distributing the computing tasks to the virtual resources. To provide cost-effective executions and achieve objectives such as load balancing, availability and reliability in the cloud environment, appropriate task and workflow scheduling solutions are needed. Various metaheuristic algorithms are applied to deal with the problem of scheduling, which is an NP-hard problem. This paper presents an in-depth analysis of the Particle Swarm Optimization (PSO)-based task and workflow scheduling schemes proposed for the cloud environment in the literature. Moreover, it provides a classification of the proposed scheduling schemes based on the type of the PSO algorithms which have been applied in these schemes and illuminates their objectives, properties and limitations. Finally, the critical future research directions are outlined.
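Many of the surveyed schemes encode a task-to-VM mapping as a continuous particle position and round it to VM indices when evaluating fitness. The sketch below is a basic PSO in that style with a makespan objective; the parameters and execution-time matrix are illustrative and not taken from any particular surveyed scheme.

```python
import random

def pso_schedule(exec_time, n_vms, particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Basic PSO sketch: each particle encodes a task->VM mapping as continuous positions."""
    rng = random.Random(seed)
    n_tasks = len(exec_time)

    def makespan(pos):
        loads = [0.0] * n_vms
        for t, x in enumerate(pos):
            vm = min(n_vms - 1, max(0, int(round(x))))   # round position to a VM index
            loads[vm] += exec_time[t][vm]
        return max(loads)

    X = [[rng.uniform(0, n_vms - 1) for _ in range(n_tasks)] for _ in range(particles)]
    Vel = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=makespan)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(n_tasks):
                r1, r2 = rng.random(), rng.random()
                Vel[i][d] = (w * Vel[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                             + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += Vel[i][d]
            if makespan(X[i]) < makespan(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=makespan)[:]
    return [min(n_vms - 1, max(0, int(round(x)))) for x in gbest], makespan(gbest)

# hypothetical execution time of each task on each of 3 VMs
exec_time = [[4, 6, 8], [3, 2, 5], [7, 4, 3], [2, 5, 6], [5, 3, 2]]
print(pso_schedule(exec_time, n_vms=3))
```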

Journal ArticleDOI
01 Feb 2017
TL;DR: A non-dominance sort based Hybrid Particle Swarm Optimization (HPSO) algorithm to handle the workflow scheduling problem with multiple conflicting objective functions on IaaS clouds and the performance of proposed heuristic is compared with state-of-art multi-objective meta-heuristics.
Abstract: Nowadays, cloud computing is a technology that avoids upfront provisioning cost while providing scalability and elasticity of accessible resources on a pay-per-use basis. To satisfy the increasing demand for computing power to execute large-scale scientific workflow applications, workflow scheduling is the main challenging issue in Infrastructure-as-a-Service (IaaS) clouds. As workflow scheduling is an NP-complete problem, meta-heuristic approaches are the preferred option. Users often specify deadline and budget constraints for scheduling these workflow applications over cloud resources. But these constraints are in conflict with each other, i.e., the cheaper resources are slow as compared to the expensive resources. Most of the existing studies try to optimize only one of the objectives, i.e., either time minimization or cost minimization under user-specified Quality of Service (QoS) constraints. But due to the complexity of workflows and the dynamic nature of the cloud, a trade-off solution is required to make a balance between execution time and processing cost. To address these issues, this paper presents a non-dominance sort based Hybrid Particle Swarm Optimization (HPSO) algorithm to handle the workflow scheduling problem with multiple conflicting objective functions on IaaS clouds. The proposed algorithm is a hybrid of our previously proposed Budget and Deadline constrained Heterogeneous Earliest Finish Time (BDHEFT) algorithm and multi-objective PSO. The HPSO heuristic tries to optimize two conflicting objectives, namely, makespan and cost under the deadline and budget constraints. Along with these two conflicting objectives, the energy consumption of the created workflow schedule is also minimized. The proposed algorithm gives a set of Pareto-optimal solutions from which the user can choose the best solution. The performance of the proposed heuristic is compared with state-of-the-art multi-objective meta-heuristics like NSGA-II, MOPSO, and ε-FDPSO. The simulation analysis substantiates that the solutions obtained with the proposed heuristic deliver better convergence and uniform spacing among the solutions as compared to others. Hence it is applicable to solve a wide class of multi-objective optimization problems for scheduling scientific workflows over IaaS clouds.
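The non-dominance sort that HPSO (like NSGA-II and MOPSO) relies on can be implemented, for minimization objectives such as makespan and cost, as in the simple sketch below; the candidate schedules are hypothetical (makespan, cost) pairs.

```python
def non_dominated_sort(points):
    """Partition solutions (tuples of objectives to minimise) into successive Pareto fronts."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append([points[i] for i in front])
        remaining = [i for i in remaining if i not in front]
    return fronts

# hypothetical (makespan, cost) pairs of candidate workflow schedules
schedules = [(120, 9.0), (100, 12.0), (150, 6.0), (130, 10.0), (140, 8.0), (110, 14.0)]
for k, front in enumerate(non_dominated_sort(schedules)):
    print(f"front {k}: {front}")
```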

Journal ArticleDOI
TL;DR: Reinforcement learning with a Q-factor algorithm is used to enhance the performance of the scheduling method proposed for the dynamic job shop scheduling (DJSS) problem, which considers random job arrivals and machine breakdowns.

Proceedings ArticleDOI
19 Mar 2017
TL;DR: Simulation results show that task offloading scheduling is more critical when the available radio and computational resources in MEC systems are relatively balanced, and it is shown that the proposed algorithm achieves near-optimal execution delay along with a substantial device energy saving.
Abstract: Mobile-edge computing (MEC) has emerged as a prominent technique to provide mobile services with high computation requirements, by migrating the computation-intensive tasks from the mobile devices to the nearby MEC servers. To reduce the execution latency and device energy consumption, in this paper, we jointly optimize task offloading scheduling and transmit power allocation for MEC systems with multiple independent tasks. A low-complexity sub-optimal algorithm is proposed to minimize the weighted sum of the execution delay and device energy consumption based on alternating minimization. Specifically, given the transmit power allocation, the optimal task offloading scheduling, i.e., to determine the order of offloading, is obtained with the help of flow shop scheduling theory. Besides, the optimal transmit power allocation with a given task offloading scheduling decision will be determined using convex optimization techniques. Simulation results show that task offloading scheduling is more critical when the available radio and computational resources in MEC systems are relatively balanced. In addition, it is shown that the proposed algorithm achieves near-optimal execution delay along with a substantial device energy saving.
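The flow-shop view treats each offloaded task as passing through two stages: wireless transmission and then server execution. The classic result for ordering jobs in a two-machine flow shop is Johnson's rule, sketched below with hypothetical per-task times; it illustrates the flow-shop idea rather than reproducing the paper's exact scheduling sub-algorithm.

```python
def johnson_order(jobs):
    """Johnson's rule for a two-stage flow shop (stage 1: uplink transmission, stage 2: execution)."""
    first = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]), key=lambda j: jobs[j][0])
    last = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]), key=lambda j: jobs[j][1], reverse=True)
    return first + last

def makespan(order, jobs):
    t1 = t2 = 0.0
    for j in order:
        t1 += jobs[j][0]               # stage-1 finish time
        t2 = max(t2, t1) + jobs[j][1]  # stage 2 starts after stage 1 and the previous stage-2 job
    return t2

# hypothetical (transmission time, execution time) per offloaded task
jobs = {"A": (2, 5), "B": (4, 1), "C": (3, 3), "D": (1, 4)}
order = johnson_order(jobs)
print(order, makespan(order, jobs))
```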

Journal ArticleDOI
TL;DR: The experimental results demonstrate that compared with the existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of private CDC while meeting the delay bounds of all the tasks.
Abstract: The economy of scale provided by cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for private CDC to cost-effectively schedule delay bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper takes into account the cost minimization problem for private CDC in hybrid clouds, where the energy price of private CDC and execution price of public clouds both show the temporal diversity. Then, this paper proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm-optimization. The experimental results demonstrate that compared with the existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of private CDC while meeting the delay bounds of all the tasks.

Journal ArticleDOI
TL;DR: A novel architecture for task selection and scheduling at the edge of the network using container-as-a-service (CoaaS) is presented and a multi-objective function is developed in order to reduce the energy consumption and makespan by considering different constraints such as memory, CPU, and the user's budget.
Abstract: In the last few years, we have witnessed the huge popularity of one of the most promising technologies of the modern era: the Internet of Things. In IoT, various smart objects (smart sensors, embedded devices, PDAs, and smartphones) share their data with one another irrespective of their geographical locations using the Internet. The amount of data generated by these connected smart objects will be on the order of zettabytes in the coming years. This huge amount of data creates challenges with respect to storage and analytics given the resource constraints of these smart devices. Additionally, to process the large volume of information generated, the traditional cloud-based infrastructure may lead to long response time and higher bandwidth consumption. To cope with these challenges, a new powerful technology, edge computing, promises to support data processing and service availability to end users at the edge of the network. However, the integration of IoT and edge computing is still in its infancy. Task scheduling will play a pivotal role in this integrated architecture. To handle all the above-mentioned issues, we present a novel architecture for task selection and scheduling at the edge of the network using container-as-a-service (CoaaS). We solve the problem of task selection and scheduling by using cooperative game theory. For this purpose, we developed a multi-objective function in order to reduce the energy consumption and makespan by considering different constraints such as memory, CPU, and the user's budget. We also present a real-time internal and external container migration technique for minimizing the energy consumption. For task selection and scheduling, we have used lightweight containers instead of the conventional virtual machines to reduce the overhead and response time as well as the overall energy consumption of fog devices, that is, nano data centers (nDCs). Our empirical results demonstrate that the proposed scheme reduces the energy consumption and the average number of SLA violations by 21.75 and 11.82 percent, respectively.

Journal ArticleDOI
Changsheng Yu1, Li Yu1, Yuan Wu1, Yanfei He1, Qun Lu1 
TL;DR: The results show that the proposed uplink link adaptation scheme for NB-IoT systems outperforms the repetition-dominated method and the straightforward method, particularly for good channel conditions and larger packet sizes, and can save more than 14% of the active time and resource consumption.
Abstract: Narrowband Internet of Things (NB-IoT) is a new narrowband radio technology introduced in Third Generation Partnership Project (3GPP) Release 13, as part of the evolution toward the fifth generation (5G), to provide low-power wide-area IoT. In NB-IoT systems, repeating transmission of data or control signals has been considered as a promising approach for enhancing coverage. Considering the new feature of repetition, link adaptation for NB-IoT systems needs to be performed in 2-D, i.e., over the modulation and coding scheme (MCS) and the repetition number. Therefore, existing link adaptation schemes without consideration of the repetition number are no longer applicable. In this paper, a novel uplink link adaptation scheme with repetition number determination is proposed, which is composed of inner loop link adaptation and outer loop link adaptation, to guarantee transmission reliability and improve the throughput of NB-IoT systems. In particular, the inner loop link adaptation is designed to cope with block error ratio variation by periodically adjusting the repetition number. The outer loop link adaptation coordinates the MCS level selection and the repetition number determination. Besides, key technologies of uplink scheduling, such as power control and transmission gap, are analyzed, and a simple single-tone scheduling scheme is proposed. Link-level simulations are performed to validate the performance of the proposed uplink link adaptation scheme. The results show that our proposed uplink link adaptation scheme for NB-IoT systems outperforms the repetition-dominated method and the straightforward method, particularly for good channel conditions and larger packet sizes. Specifically, it can save more than 14% of the active time and resource consumption compared with the repetition-dominated method and save more than 46% of the active time and resource consumption compared with the straightforward method.
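A toy sketch of the inner-loop idea: periodically compare the measured block error ratio against a target and step the repetition number up or down accordingly. The BLER model, combining-gain assumption, thresholds, and step rule below are illustrative assumptions, not the paper's controller or any 3GPP-specified procedure.

```python
import random

def inner_loop_repetition(snr_db_trace, target_bler=0.1, period=50, seed=0):
    """Toy inner loop: adjust the repetition number from the BLER measured over each period."""
    rng = random.Random(seed)
    allowed = [1, 2, 4, 8, 16, 32]                  # NB-IoT repetition numbers are powers of two
    reps, history = 2, []
    errors = transmissions = 0
    for snr_db in snr_db_trace:
        # toy BLER model: assume ~3 dB combining gain per doubling of the repetition number
        eff_snr = snr_db + 3.0 * allowed.index(reps)
        bler = 1.0 / (1.0 + 10 ** (eff_snr / 10.0))
        errors += rng.random() < bler
        transmissions += 1
        if transmissions == period:                 # end of an adjustment period
            measured = errors / transmissions
            idx = allowed.index(reps)
            if measured > target_bler and idx < len(allowed) - 1:
                reps = allowed[idx + 1]             # too many errors -> repeat more
            elif measured < target_bler / 2 and idx > 0:
                reps = allowed[idx - 1]             # comfortably below target -> repeat less
            history.append((round(measured, 2), reps))
            errors = transmissions = 0
    return history

trace = [(-2.0 if t < 500 else 6.0) for t in range(1000)]   # hypothetical SNR drop/rise (dB)
print(inner_loop_repetition(trace)[:6], "...")
```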

Journal ArticleDOI
TL;DR: A novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, cost of execution of the workflows, and utilize idle resources of cloud effectively is proposed.
Abstract: Multi-tenancy is one of the key features of cloud computing, which provides scalability and economic benefits to the end-users and service providers by sharing the same cloud platform and its underlying infrastructure with the isolation of shared network and compute resources. However, resource management in the context of multi-tenant cloud computing is becoming one of the most complex tasks due to the inherent heterogeneity and resource isolation. This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, and cost of execution of the workflows, and utilize idle cloud resources effectively. The proposed algorithm is compared with the state-of-the-art algorithms, i.e., First Come First Served (FCFS), EASY Backfilling, and Minimum Completion Time (MCT) scheduling policies to evaluate the performance. Further, a proof-of-concept experiment of real-world scientific workflow applications is performed to demonstrate the scalability of the CWSA, which verifies the effectiveness of the proposed solution. The simulation results show that the proposed scheduling policy improves the workflow performance and outperforms the aforementioned alternative scheduling policies under typical deployment scenarios.
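Minimum Completion Time (MCT), one of the baselines CWSA is compared against, assigns each arriving task to the resource that would finish it earliest. A minimal sketch with hypothetical ready times and execution-time estimates:

```python
def mct_assign(tasks, n_resources, exec_time):
    """Minimum Completion Time heuristic: map each task to the resource with the earliest finish."""
    ready = [0.0] * n_resources                   # time at which each resource becomes free
    placement = {}
    for t in tasks:                               # tasks considered in arrival order
        finish = [ready[r] + exec_time[t][r] for r in range(n_resources)]
        r = min(range(n_resources), key=lambda i: finish[i])
        ready[r] = finish[r]
        placement[t] = (r, finish[r])
    return placement, max(ready)

# hypothetical execution-time estimates of five tasks on three resources
exec_time = {"t1": [5, 7, 9], "t2": [3, 2, 4], "t3": [6, 5, 3], "t4": [2, 4, 5], "t5": [4, 4, 4]}
placement, makespan = mct_assign(list(exec_time), 3, exec_time)
print(placement, "makespan:", makespan)
```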

Posted Content
TL;DR: In this paper, a power-efficient resource allocation for multicarrier non-orthogonal multiple access (MC-NOMA) systems is studied, which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power.
Abstract: In this paper, we study power-efficient resource allocation for multicarrier non-orthogonal multiple access (MC-NOMA) systems. The resource allocation algorithm design is formulated as a non-convex optimization problem which jointly designs the power allocation, rate allocation, user scheduling, and successive interference cancellation (SIC) decoding policy for minimizing the total transmit power. The proposed framework takes into account the imperfection of channel state information at transmitter (CSIT) and quality of service (QoS) requirements of users. To facilitate the design of optimal SIC decoding policy on each subcarrier, we define a channel-to-noise ratio outage threshold. Subsequently, the considered non-convex optimization problem is recast as a generalized linear multiplicative programming problem, for which a globally optimal solution is obtained via employing the branch-and-bound approach. The optimal resource allocation policy serves as a system performance benchmark due to its high computational complexity. To strike a balance between system performance and computational complexity, we propose a suboptimal iterative resource allocation algorithm based on difference of convex programming. Simulation results demonstrate that the suboptimal scheme achieves a close-to-optimal performance. Also, both proposed schemes provide significant transmit power savings compared with conventional orthogonal multiple access (OMA) schemes.

Journal ArticleDOI
TL;DR: A hybrid multi-objective discrete grey wolf optimizer (HMOGWO) is proposed to solve the dynamic welding scheduling problem and outperforms other algorithms in terms of convergence, spread and coverage.

Proceedings ArticleDOI
14 Oct 2017
TL;DR: ZYGOS is presented, a system optimized for μs-scale, in-memory computing on multicore servers that implements a work-conserving scheduler within a specialized operating system designed for high request rates and a large number of network connections.
Abstract: This paper focuses on the efficient scheduling on multicore systems of very fine-grain networked tasks, which are the typical building block of online data-intensive applications. The explicit goal is to deliver high throughput (millions of remote procedure calls per second) for tail latency service-level objectives that are a small multiple of the task size. We present ZYGOS, a system optimized for μs-scale, in-memory computing on multicore servers. It implements a work-conserving scheduler within a specialized operating system designed for high request rates and a large number of network connections. ZYGOS uses a combination of shared-memory data structures, multi-queue NICs, and inter-processor interrupts to rebalance work across cores. For an aggressive service-level objective expressed at the 99th percentile, ZYGOS achieves 75% of the maximum possible load determined by a theoretical, zero-overhead model (centralized queueing with FCFS) for 10μs tasks, and 88% for 25μs tasks. We evaluate ZYGOS with a networked version of Silo, a state-of-the-art in-memory transactional database, running TPC-C. For a service-level objective of 1000μs latency at the 99th percentile, ZYGOS can deliver a 1.63x speedup over Linux (because of its dataplane architecture) and a 1.26x speedup over IX, a state-of-the-art dataplane (because of its work-conserving scheduler).

Journal ArticleDOI
TL;DR: Experimental results show that compared with traditional algorithms, the performance of ProLiS is very competitive and L-ACO performs the best in terms of execution costs and success ratios of meeting deadlines.
Abstract: Nowadays it is becoming more and more attractive to execute workflow applications in the cloud because it enables workflow applications to use computing resources on demand. Meanwhile, it also challenges traditional workflow scheduling algorithms that only concentrate on optimizing the execution time. This paper investigates how to minimize the execution cost of a workflow in clouds under a deadline constraint and proposes a metaheuristic algorithm L-ACO as well as a simple heuristic ProLiS. ProLiS distributes the deadline to each task, proportionally to a novel definition of probabilistic upward rank, and follows a two-step list scheduling methodology: rank tasks and sequentially allocate each task a service which meets the sub-deadline and minimizes the cost. L-ACO employs ant colony optimization to carry out deadline-constrained cost optimization: the ant constructs an ordered task list according to the pheromone trail and probabilistic upward rank, and uses the same deadline distribution and service selection methods as ProLiS to build solutions. Moreover, the deadline is relaxed to guide the search of L-ACO towards constrained optimization. Experimental results show that compared with traditional algorithms, the performance of ProLiS is very competitive and L-ACO performs the best in terms of execution costs and success ratios of meeting deadlines.
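A much-simplified sketch of the two-step list-scheduling idea behind ProLiS: distribute the deadline across tasks in proportion to their rank, then give each task (in rank order) the cheapest service that still meets its sub-deadline. The deterministic ranks, the flat task set with no precedence constraints, and the service catalogue below are illustrative assumptions; the paper's probabilistic upward rank and workflow structure are not modeled.

```python
def list_schedule(tasks, services, deadline):
    """Two-step list-scheduling sketch: split the deadline by rank share, then pick for each
    task the cheapest service whose runtime fits the sub-deadline (ignores precedence)."""
    # step 1: sub-deadlines proportional to each task's share of the total (here: deterministic) rank
    total_rank = sum(tasks.values())
    sub_deadline = {t: deadline * rank / total_rank for t, rank in tasks.items()}
    # step 2: rank tasks, then allocate the cheapest feasible service
    plan, cost = {}, 0.0
    for t in sorted(tasks, key=tasks.get, reverse=True):
        feasible = [(price, name) for name, (speed, price) in services.items()
                    if tasks[t] / speed <= sub_deadline[t]]
        if feasible:
            name = min(feasible)[1]
        else:                                     # fall back to the fastest service
            name = max(services, key=lambda s: services[s][0])
        plan[t] = name
        cost += services[name][1]
    return plan, cost

tasks = {"t1": 40.0, "t2": 25.0, "t3": 15.0}       # hypothetical task workloads, reused as ranks
services = {"small": (1.0, 1.0), "medium": (2.0, 2.5), "large": (4.0, 6.0)}  # (speed, price per task)
print(list_schedule(tasks, services, deadline=60.0))
```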