
Showing papers on "Scheduling (computing)" published in 2019


Journal ArticleDOI
TL;DR: Simulation results show that the proposed edge VM allocation and task scheduling approach can achieve near-optimal performance with very low complexity and the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.
Abstract: Internet of Things (IoT) computing offloading is a challenging issue, especially in remote areas where common edge/cloud infrastructure is unavailable. In this paper, we present a space-air-ground integrated network (SAGIN) edge/cloud computing architecture for offloading the computation-intensive applications considering remote energy and computation constraints, where flying unmanned aerial vehicles (UAVs) provide near-user edge computing and satellites provide access to cloud computing. First, for UAV edge servers, we propose a joint resource allocation and task scheduling approach to efficiently allocate the computing resources to virtual machines (VMs) and schedule the offloaded tasks. Second, we investigate the computing offloading problem in SAGIN and propose a learning-based approach to learn the optimal offloading policy from the dynamic SAGIN environments. Specifically, we formulate the offloading decision making as a Markov decision process where the system state considers the network dynamics. To cope with the system dynamics and complexity, we propose a deep reinforcement learning-based computing offloading approach to learn the optimal offloading policy on-the-fly, where we adopt the policy gradient method to handle the large action space and the actor-critic method to accelerate the learning process. Simulation results show that the proposed edge VM allocation and task scheduling approach can achieve near-optimal performance with very low complexity and the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.

537 citations
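
The abstract frames offloading as a Markov decision process solved with a policy-gradient method. As a rough illustration only (not the paper's neural actor-critic model), the sketch below runs tabular REINFORCE on an invented toy MDP with three offloading actions (local, UAV edge, satellite cloud) and a handful of queue-length states; all states, costs, and transition rules are assumptions made up for this example:

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3               # queue-length buckets x {local, UAV edge, satellite cloud}
theta = np.zeros((n_states, n_actions))  # softmax policy parameters
alpha, gamma = 0.05, 0.95

def step(s, a):
    # Invented dynamics: offloading drains the queue faster but costs more energy/latency.
    drain = [1, 2, 3][a]
    cost = [1.0, 1.5, 2.5][a] + 0.5 * s   # per-action cost plus a queueing penalty
    s_next = int(np.clip(s - drain + rng.integers(0, 3), 0, n_states - 1))
    return s_next, -cost                  # reward is the negative cost

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

for episode in range(3000):
    s, traj = int(rng.integers(n_states)), []
    for _ in range(20):                   # fixed-horizon episode
        p = policy(s)
        a = int(rng.choice(n_actions, p=p))
        s_next, r = step(s, a)
        traj.append((s, a, r, p))
        s = s_next
    G = 0.0
    for s_t, a_t, r_t, p_t in reversed(traj):   # REINFORCE: return-weighted log-prob gradient
        G = r_t + gamma * G
        grad = -p_t
        grad[a_t] += 1.0
        theta[s_t] += alpha * G * grad

print("greedy offloading action per queue state:", theta.argmax(axis=1))

The paper's actual approach replaces the table with neural networks and adds an actor-critic to reduce the variance of such Monte Carlo updates.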


Proceedings ArticleDOI
19 Aug 2019
TL;DR: Decima, as discussed by the authors, uses reinforcement learning and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective, such as minimizing average job completion time, showing that RL techniques can generate highly efficient policies automatically.
Abstract: Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems use simple, generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly-efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective, such as minimizing average job completion time. However, off-the-shelf RL techniques cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time by at least 21% over hand-tuned scheduling heuristics, achieving up to 2x improvement during periods of high cluster load.

310 citations


Journal ArticleDOI
TL;DR: A model-free approach based on deep reinforcement learning is proposed to determine the optimal charging strategy in the presence of randomness in traffic conditions, the user's commuting behavior, and the pricing process of the utility.
Abstract: Driven by the recent advances in electric vehicle (EV) technologies, EVs have become important to the smart grid economy. When EVs participate in a demand response program with real-time pricing signals, the charging cost can be greatly reduced by taking full advantage of these pricing signals. However, it is challenging to determine an optimal charging strategy due to the randomness in traffic conditions, the user's commuting behavior, and the pricing process of the utility. Conventional model-based approaches require a forecast model of the uncertainty and an optimization procedure for the scheduling process. In this paper, we formulate this scheduling problem as a Markov Decision Process (MDP) with unknown transition probability. A model-free approach based on deep reinforcement learning is proposed to determine the optimal strategy for this problem. The proposed approach can adaptively learn the transition probability and does not require any system model information. The architecture of the proposed approach contains two networks: a representation network to extract discriminative features from the electricity prices and a Q network to approximate the optimal action-value function. Numerous experimental results demonstrate the effectiveness of the proposed approach.

277 citations
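
The two-network architecture described here (a representation network over the recent price sequence feeding a Q network) can be sketched as follows; the layer sizes, 24-hour price window, state-of-charge input, and three-action space are illustrative assumptions, not the paper's configuration:

import torch
import torch.nn as nn

class ChargingDQN(nn.Module):
    def __init__(self, price_window=24, n_actions=3):   # actions: e.g. {charge, idle, discharge}
        super().__init__()
        # Representation network: extracts features from the recent electricity-price sequence.
        self.repr_net = nn.Sequential(
            nn.Linear(price_window, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Q network: maps (price features, battery state of charge) to action values.
        self.q_net = nn.Sequential(
            nn.Linear(32 + 1, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, prices, soc):
        feat = self.repr_net(prices)
        return self.q_net(torch.cat([feat, soc], dim=1))

# One Bellman update on a dummy batch (random tensors stand in for replay-buffer samples).
model, target = ChargingDQN(), ChargingDQN()
target.load_state_dict(model.state_dict())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

prices = torch.randn(8, 24); soc = torch.rand(8, 1)
actions = torch.randint(0, 3, (8, 1)); rewards = torch.randn(8, 1)
next_prices = torch.randn(8, 24); next_soc = torch.rand(8, 1)

q = model(prices, soc).gather(1, actions)
with torch.no_grad():
    q_target = rewards + 0.99 * target(next_prices, next_soc).max(1, keepdim=True)[0]
loss = nn.functional.mse_loss(q, q_target)
opt.zero_grad(); loss.backward(); opt.step()
print("TD loss:", loss.item())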


Journal ArticleDOI
TL;DR: A comprehensive survey of task scheduling strategies and the associated metrics suitable for cloud computing environments is presented, and the various issues related to scheduling methodologies and the limitations that remain to be overcome are discussed.

272 citations


Journal ArticleDOI
TL;DR: An energy-efficient dynamic offloading and resource scheduling (eDors) policy is provided to reduce energy consumption and shorten application completion time; the eDors algorithm can effectively reduce the energy-efficiency cost (EEC) by optimally adjusting the CPU clock frequency of SMDs in local computing and adapting the transmission power to wireless channel conditions in cloud computing.
Abstract: Mobile cloud computing (MCC), as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy for smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under a hard constraint on application completion time remains a challenge. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem as an energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. We then propose a distributed eDors algorithm consisting of three subalgorithms for computation offloading selection, clock frequency control, and transmission power allocation. Next, we show that computation offloading selection depends not only on the computing workload of a task, but also on the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, we provide experimental results on a real testbed and demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs in local computing, and adapting the transmission power to the wireless channel conditions in cloud computing.

261 citations


Proceedings ArticleDOI
27 Oct 2019
TL;DR: This work introduces a unified abstraction and a Dependency Proxy mechanism to enable communication scheduling without breaking the original dependencies in framework engines, and introduces a Bayesian Optimization approach to auto-tune tensor partition size and other parameters for different training models under various networking conditions.
Abstract: We present ByteScheduler, a generic communication scheduler for distributed DNN training acceleration. ByteScheduler is based on our principled analysis that partitioning and rearranging the tensor transmissions can achieve optimal results in theory and good performance in real-world settings, even with scheduling overhead. To make ByteScheduler work generally for various DNN training frameworks, we introduce a unified abstraction and a Dependency Proxy mechanism to enable communication scheduling without breaking the original dependencies in framework engines. We further introduce a Bayesian Optimization approach to auto-tune tensor partition size and other parameters for different training models under various networking conditions. ByteScheduler now supports TensorFlow, PyTorch, and MXNet without modifying their source code, and works well with both Parameter Server (PS) and all-reduce architectures for gradient synchronization, using either TCP or RDMA. Our experiments show that ByteScheduler accelerates training with all experimented system configurations and DNN models, by up to 196% (i.e., up to 2.96x the original speed).

257 citations
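
To illustrate the auto-tuning step, here is a minimal sketch of Bayesian optimization over a tensor partition size and a credit (in-flight message) limit, assuming scikit-optimize is installed; the objective below is a made-up analytical stand-in for measured training throughput, not ByteScheduler's actual tuner:

from skopt import gp_minimize
from skopt.space import Integer

def negative_throughput(params):
    partition_kb, credits = params
    # Stand-in cost model: tiny partitions add per-message overhead, huge ones hurt overlap,
    # and too many outstanding messages congest the network. Purely illustrative.
    overhead = 50.0 / partition_kb
    overlap_loss = partition_kb / 2048.0
    congestion = 0.05 * max(0, credits - 4) ** 2
    return overhead + overlap_loss + congestion   # lower is better (proxy for -throughput)

space = [Integer(64, 4096, name="partition_kb"), Integer(1, 16, name="credits")]
result = gp_minimize(negative_throughput, space, n_calls=30, random_state=0)
print("best (partition KB, credits):", result.x, "objective:", result.fun)

In the real system the objective would be the measured per-iteration training speed under the current network conditions rather than a closed-form expression.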


Journal ArticleDOI
TL;DR: A novel decomposition based on the technique of Logic-Based Benders Decomposition is designed: it solves a relaxed master problem, with fewer constraints, and a subproblem whose resolution generates cuts that iteratively guide the master to tighten its search space.
Abstract: Multi-access edge computing (MEC) has recently emerged as a novel paradigm to facilitate access to advanced computing capabilities at the edge of the network, in close proximity to end devices, thereby enabling a rich variety of latency-sensitive services demanded by various emerging industry verticals. Internet-of-Things (IoT) devices, being highly ubiquitous and connected, can offload their computational tasks to be processed by applications hosted on the MEC servers due to their limited battery, computing, and storage capacities. Such IoT applications providing services to offloaded tasks of IoT devices are hosted on edge servers with limited computing capabilities. Given the heterogeneity in the requirements of the offloaded tasks (different computing requirements, latency, and so on) and the limited MEC capabilities, we jointly decide on the task offloading (task-to-application assignment) and scheduling (the order of executing them), which yields a challenging problem of a combinatorial nature. Furthermore, we jointly decide on the computing resource allocation for the hosted applications, and we refer to this problem as the Dynamic Task Offloading and Scheduling problem, encompassing the three subproblems mentioned earlier. We mathematically formulate this problem and, owing to its complexity, design a novel decomposition based on the technique of Logic-Based Benders Decomposition. This technique solves a relaxed master, with fewer constraints, and a subproblem whose resolution allows the generation of cuts which iteratively guide the master to tighten its search space. Ultimately, both the master and the subproblem converge to yield the optimal solution. We show that this technique offers improvements of more than two orders of magnitude (over 140 times) in run time for the studied instances. Another advantage of this method is its capability of providing solutions with performance guarantees. Finally, we use this method to highlight insightful performance trends for different vertical industries as a function of multiple system parameters, with a focus on delay-sensitive use cases.

238 citations
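
The master/subproblem interaction can be made concrete with a tiny toy instance, shown below using PuLP; the task durations, two servers, assignment costs, and capacity-style deadline check are all invented for illustration, and the cuts are simple no-good cuts rather than the paper's stronger Benders cuts:

import pulp

times = {1: 4, 2: 3, 3: 5, 4: 2}          # hypothetical task durations
cost = {0: 1, 1: 3}                        # per-task cost of running on server 0 / server 1
servers = [0, 1]
capacity = 7                               # per-server time budget (a crude deadline proxy)

cuts = []                                  # accumulated logic-based Benders (no-good) cuts

while True:
    # Relaxed master: assign each task to one server, minimizing assignment cost only.
    master = pulp.LpProblem("master", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(t, s) for t in times for s in servers], cat="Binary")
    master += pulp.lpSum(cost[s] * x[(t, s)] for t in times for s in servers)
    for t in times:
        master += pulp.lpSum(x[(t, s)] for s in servers) == 1
    for cut in cuts:                       # previously learned cuts tighten the master
        master += pulp.lpSum(x[pair] for pair in cut) <= len(cut) - 1
    master.solve(pulp.PULP_CBC_CMD(msg=False))

    # Subproblem: per server, check whether the assigned tasks fit inside the time budget.
    assigned = {s: [t for t in times if x[(t, s)].value() > 0.5] for s in servers}
    overloaded = [s for s in servers if sum(times[t] for t in assigned[s]) > capacity]
    if not overloaded:
        break                              # master and subproblem agree: stop
    for s in overloaded:                   # forbid repeating this exact overload pattern
        cuts.append([(t, s) for t in assigned[s]])

print("assignment:", assigned, "after", len(cuts), "cuts")

In the paper the subproblem is itself a scheduling problem whose resolution yields stronger cuts, but the iterate-until-agreement loop has the same shape.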


Posted Content
TL;DR: An analytical model is developed to characterize the performance of federated learning in wireless networks; it shows that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low.
Abstract: Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smart phones, tablets, or vehicles, as well as the increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), that allows a decoupling of data acquisition and computation at the central unit. Unlike centralized learning taking place in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable. Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration. Due to the shared nature of the wireless medium, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. Particularly, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for effects from both scheduling schemes and inter-cell interference. Using the developed analysis, the effectiveness of three different scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, thus confirming the importance of compression and quantization of the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and subchannel bandwidth under a fixed amount of available spectrum.

234 citations
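
The three scheduling policies compared in the analysis are easy to prototype. The sketch below simulates which UEs are scheduled each round under random scheduling, round robin, and proportional fair selection; the fading model, SINR threshold, and PF averaging constant are arbitrary illustrative assumptions rather than the paper's system model:

import numpy as np

rng = np.random.default_rng(1)
n_ue, k_per_round, rounds, sinr_threshold = 20, 5, 200, 1.0
mean_snr = rng.uniform(0.5, 4.0, n_ue)        # heterogeneous average channel quality (invented)
avg_rate = np.full(n_ue, 1e-3)                # PF: running average of served rate
successes = {"RS": 0, "RR": 0, "PF": 0}
rr_pointer = 0

for t in range(rounds):
    inst_snr = mean_snr * rng.exponential(1.0, n_ue)   # Rayleigh fading -> exponential power
    rate = np.log2(1.0 + inst_snr)

    sched = {}
    sched["RS"] = rng.choice(n_ue, k_per_round, replace=False)        # random scheduling
    sched["RR"] = (rr_pointer + np.arange(k_per_round)) % n_ue        # round robin
    rr_pointer = (rr_pointer + k_per_round) % n_ue
    sched["PF"] = np.argsort(rate / avg_rate)[-k_per_round:]          # proportional fair

    for name, chosen in sched.items():
        # An update only counts if the chosen UE's SINR clears the decoding threshold.
        successes[name] += int(np.sum(inst_snr[chosen] > sinr_threshold))

    # Update the PF running averages (scheduled UEs move toward their instantaneous rate).
    avg_rate *= 0.95
    avg_rate[sched["PF"]] += 0.05 * rate[sched["PF"]]

print("successful updates per policy:", successes)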


Journal ArticleDOI
TL;DR: This paper studies an unmanned aerial vehicle-assisted mobile edge computing (MEC) architecture, in which a UAV roaming around the area may serve as a computing server to help user equipment (UEs) compute their tasks or act as a relay for offloading their computation tasks to the access point (AP).
Abstract: In this paper, we study an unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) architecture, in which a UAV roaming around the area may serve as a computing server to help user equipment (UEs) compute their tasks or act as a relay for further offloading their computation tasks to the access point (AP). We aim to minimize the weighted sum energy consumption of the UAV and UEs subject to the task constraints, the information-causality constraints, the bandwidth allocation constraints and the UAV’s trajectory constraints. The required optimization is nonconvex, and an alternating optimization algorithm is proposed to jointly optimize the computation resource scheduling, bandwidth allocation, and the UAV’s trajectory in an iterative fashion. The numerical results demonstrate that significant performance gain is obtained over conventional methods. Also, the advantages of the proposed algorithm are more prominent when handling computation-intensive latency-critical tasks.

231 citations


Journal ArticleDOI
TL;DR: A systematic review and classification of the scheduling techniques proposed for cloud computing, along with their advantages and limitations, is provided.

220 citations


Journal ArticleDOI
TL;DR: This paper investigates the joint problem of partial offloading scheduling and resource allocation for MEC systems with multiple independent tasks, and proposes iterative algorithms for the resulting partial offloading scheduling and power allocation (POSP) problem.
Abstract: Mobile edge computing (MEC) is a promising technique to enhance computation capacity at the edge of mobile networks. The joint problem of partial offloading decision, offloading scheduling, and resource allocation for MEC systems is a challenging issue. In this paper, we investigate the joint problem of partial offloading scheduling and resource allocation for MEC systems with multiple independent tasks. A partial offloading scheduling and power allocation (POSP) problem in single-user MEC systems is formulated. The goal is to minimize the weighted sum of the execution delay and energy consumption while guaranteeing the transmission power constraint of the tasks. The execution delay of tasks running on both the MEC server and the mobile device is considered. The energy consumption of both task computing and task data transmission is considered as well. The formulated problem is a nonconvex mixed-integer optimization problem. In order to solve the formulated problem, we propose a two-level alternation method framework based on Lagrangian dual decomposition. The task offloading decision and offloading scheduling problem, given the allocated transmission power, is solved in the upper level using flow shop scheduling theory or a greedy strategy, and the suboptimal power allocation with the partial offloading decision is obtained in the lower level using convex optimization techniques. We propose iterative algorithms for the joint POSP problem. Numerical results demonstrate that the proposed algorithms achieve near-optimal delay performance with a large reduction in energy consumption.
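
The upper-level offloading-order step leans on flow shop scheduling theory. As a concrete, much simplified example, Johnson's rule orders tasks in a two-stage pipeline; here "transmit to the MEC server" and "execute at the MEC server" stand in for the two machines, with invented task times:

# Johnson's rule for a two-machine flow shop: tasks whose first-stage time is smaller go
# first (ascending by stage 1), the rest go last (descending by stage 2).
tasks = {"A": (2, 5), "B": (4, 1), "C": (3, 3), "D": (1, 4)}   # (transmit_time, compute_time)

front = sorted((t for t, (p1, p2) in tasks.items() if p1 <= p2), key=lambda t: tasks[t][0])
back = sorted((t for t, (p1, p2) in tasks.items() if p1 > p2), key=lambda t: tasks[t][1], reverse=True)
order = front + back

def makespan(order):
    # Stage 2 of a task starts only after its own stage 1 and the previous task's stage 2 finish.
    t1 = t2 = 0
    for t in order:
        p1, p2 = tasks[t]
        t1 += p1
        t2 = max(t2, t1) + p2
    return t2

print("Johnson order:", order, "makespan:", makespan(order))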

Journal ArticleDOI
TL;DR: A deep Q-network model in a multi-agent reinforcement learning setting is applied to guide the scheduling of multi-workflows over infrastructure-as-a-service clouds; experimental results suggest that the proposed approach outperforms traditional ones, e.g., non-dominated sorting genetic algorithm-II, multi-objective particle swarm optimization, and game-theoretic-based greedy algorithms, in terms of the optimality of the scheduling plans generated.
Abstract: Cloud computing provides an effective platform for executing large-scale and complex workflow applications with a pay-as-you-go model. Nevertheless, various challenges, especially its optimal scheduling for multiple conflicting objectives, are yet to be addressed properly. The existing multi-objective workflow scheduling approaches are still limited in many ways, e.g., encoding is restricted by prior experts' knowledge when handling a dynamic real-time problem, which strongly influences the performance of scheduling. In this paper, we apply a deep Q-network model in a multi-agent reinforcement learning setting to guide the scheduling of multi-workflows over infrastructure-as-a-service clouds. To optimize multi-workflow completion time and user's cost, we consider a Markov game model, which takes the number of workflow applications and heterogeneous virtual machines as state input and the maximum completion time and cost as rewards. The game model is capable of seeking a correlated equilibrium between makespan and cost criteria without prior experts' knowledge and converges to the correlated equilibrium policy in a dynamic real-time environment. To validate our proposed approach, we conduct extensive case studies based on multiple well-known scientific workflow templates and the Amazon EC2 cloud. The experimental results clearly suggest that our proposed approach outperforms traditional ones, e.g., non-dominated sorting genetic algorithm-II, multi-objective particle swarm optimization, and game-theoretic-based greedy algorithms, in terms of the optimality of the scheduling plans generated.

Journal ArticleDOI
TL;DR: It is proved that the task offloading scheduling problem is NP-hard, and centralized and distributed Greedy Maximal Scheduling algorithms are introduced to resolve the problem efficiently.
Abstract: Mobile Edge Cloud Computing (MECC) has become an attractive solution for augmenting the computing and storage capacity of Mobile Devices (MDs) by exploiting the available resources at the network edge. In this work, we consider computation offloading at the mobile edge cloud that is composed of a set of Wireless Devices (WDs), where each WD has energy harvesting equipment to collect renewable energy from the environment. Moreover, multiple MDs intend to offload their tasks to the mobile edge cloud simultaneously. We first formulate the multi-user multi-task computation offloading problem for green MECC, and use the Lyapunov optimization approach to determine the energy harvesting policy (how much energy to harvest at each WD) and the task offloading schedule: the set of computation offloading requests to be admitted into the mobile edge cloud, the set of WDs assigned to each admitted offloading request, and how much workload to process at the assigned WDs. We then prove that the task offloading scheduling problem is NP-hard, and introduce centralized and distributed Greedy Maximal Scheduling algorithms to resolve the problem efficiently. Performance bounds of the proposed schemes are also discussed. Extensive evaluations are conducted to test the performance of the proposed algorithms.

Journal ArticleDOI
TL;DR: This article constructs an energy-efficient scheduling framework for MEC-enabled IoVs that minimizes the energy consumption of RSUs under task latency constraints, so as to satisfy the heterogeneous communication, computation, and storage requirements in IoVs.
Abstract: Although modern transportation systems facilitate the daily life of citizens, the ever-increasing energy consumption and air pollution challenge the establishment of green cities. Current studies on green IoV generally concentrate on energy management of either battery-enabled RSUs or electric vehicles. However, computing tasks and load balancing among RSUs have not been fully investigated. In order to satisfy heterogeneous requirements of communication, computation, and storage in IoVs, this article constructs an energy-efficient scheduling framework for MEC-enabled IoVs to minimize the energy consumption of RSUs under task latency constraints. Specifically, a heuristic algorithm is put forward that jointly considers task scheduling among MEC servers and the downlink energy consumption of RSUs. To the best of our knowledge, this is one of the first works to focus on the energy consumption control issues of MEC-enabled RSUs. Performance evaluations demonstrate the effectiveness of our framework in terms of energy consumption, latency, and task blocking probability. Finally, this article elaborates on some major challenges and open issues toward energy-efficient scheduling in IoVs.

Journal ArticleDOI
TL;DR: This article constructs an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning, and develops a two-sided matching scheme and a deep reinforcement learning approach to schedule offloading requests and allocate network resources.
Abstract: The development of smart vehicles brings drivers and passengers a comfortable and safe environment. Various emerging applications are promising to enrich users’ traveling experiences and daily life. However, how to execute computing-intensive applications on resource-constrained vehicles still faces huge challenges. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize users’ Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of our constructed system.

Journal ArticleDOI
TL;DR: A novel multiobjective ant colony system based on a co-evolutionary multiple-populations-for-multiple-objectives framework is proposed, which adopts two colonies to deal with the two objectives (execution time and execution cost), respectively.
Abstract: Cloud workflow scheduling is significantly challenging due to not only the large scale of workflows but also the elasticity and heterogeneity of cloud resources. Moreover, the pricing model of clouds makes the execution time and execution cost two critical issues in the scheduling. This paper models the cloud workflow scheduling as a multiobjective optimization problem that optimizes both execution time and execution cost. A novel multiobjective ant colony system based on a co-evolutionary multiple populations for multiple objectives framework is proposed, which adopts two colonies to deal with these two objectives, respectively. Moreover, the proposed approach incorporates the following three novel designs to efficiently deal with the multiobjective challenges: 1) a new pheromone update rule based on a set of nondominated solutions from a global archive to guide each colony to search its optimization objective sufficiently; 2) a complementary heuristic strategy to avoid a colony only focusing on its corresponding single optimization objective, cooperating with the pheromone update rule to balance the search of both objectives; and 3) an elite study strategy to improve the solution quality of the global archive to help further approach the global Pareto front. Experimental simulations are conducted on five types of real-world scientific workflows and consider the properties of the Amazon EC2 cloud platform. The experimental results show that the proposed algorithm performs better than both some state-of-the-art multiobjective optimization approaches and the constrained optimization approaches.
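
Central to the design is a global archive of nondominated (time, cost) schedules shared by both colonies. A minimal sketch of how such an archive is maintained is shown below, with randomly generated candidate schedules standing in for the ants' constructions; the trade-off model is invented for illustration:

import random

random.seed(42)

def dominates(a, b):
    # For minimization: a dominates b if it is no worse in both objectives and differs in at least one.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_archive(archive, candidate):
    if any(dominates(kept, candidate) for kept in archive):
        return archive                                   # candidate is dominated: discard it
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)                            # keep only mutually nondominated points
    return archive

archive = []
for _ in range(200):
    # Stand-in for a schedule built by an ant: (execution time, execution cost) with a rough trade-off.
    time_ = random.uniform(10, 100)
    cost = random.uniform(5, 50) + 500.0 / time_
    archive = update_archive(archive, (round(time_, 1), round(cost, 1)))

for point in sorted(archive):
    print("nondominated (time, cost):", point)

In the algorithm itself this archive also drives each colony's pheromone update and the elite study step.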

Journal ArticleDOI
TL;DR: A novel scheme is proposed to guarantee the security of UAV-relayed wireless networks with caching by jointly optimizing the UAV trajectory and time scheduling, together with a benchmark scheme in which the minimum average secrecy rate among all users is maximized and no user has caching ability.
Abstract: Unmanned aerial vehicle (UAV) can be utilized as a relay to connect nodes with long distance, which can achieve significant throughput gain owing to its mobility and line-of-sight (LoS) channel with ground nodes. However, such LoS channels make UAV transmission easy to eavesdrop. In this paper, we propose a novel scheme to guarantee the security of UAV-relayed wireless networks with caching via jointly optimizing the UAV trajectory and time scheduling. For every two users that have cached the required file for the other, the UAV broadcasts the files together to these two users, and the eavesdropping can be disrupted. For the users without caching, we maximize their minimum average secrecy rate by jointly optimizing the trajectory and scheduling, with the secrecy rate of the caching users satisfied. The corresponding optimization problem is difficult to solve due to its non-convexity, and we propose an iterative algorithm via successive convex optimization to solve it approximately. Furthermore, we also consider a benchmark scheme in which we maximize the minimum average secrecy rate among all users by jointly optimizing the UAV trajectory and time scheduling when no user has the caching ability. Simulation results are provided to show the effectiveness and efficiency of our proposed scheme.

Journal ArticleDOI
TL;DR: A single-hop wireless network with a number of nodes transmitting time-sensitive information to a base station is considered and the problem of minimizing the expected weighted sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes is addressed.
Abstract: Age of Information (AoI) is a performance metric that captures the freshness of the information from the perspective of the destination. The AoI measures the time that elapsed since the generation of the packet that was most recently delivered to the destination. In this paper, we consider a single-hop wireless network with a number of nodes transmitting time-sensitive information to a base station and address the problem of minimizing the expected weighted sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes. We develop four low-complexity transmission scheduling policies that attempt to minimize AoI subject to minimum throughput requirements and evaluate their performance against the optimal policy. In particular, we develop a randomized policy, a Max-Weight policy, a Drift-Plus-Penalty policy, and a Whittle’s Index policy, and show that they are guaranteed to be within a factor of two, four, two, and eight, respectively, away from the minimum AoI possible. The simulation results show that Max-Weight and Drift-Plus-Penalty outperform the other policies, both in terms of AoI and throughput, in every network configuration simulated, and achieve near-optimal performance.
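
To make the Max-Weight idea concrete, the sketch below simulates a small network in which, each slot, the base station schedules the node with the largest weighted age-times-reliability index; the weights, success probabilities, and the simple index form w_i * p_i * h_i are illustrative assumptions rather than the exact policy analyzed in the paper:

import numpy as np

rng = np.random.default_rng(7)
n, slots = 4, 10000
w = np.array([1.0, 2.0, 1.0, 0.5])       # per-node AoI weights (invented)
p = np.array([0.9, 0.6, 0.8, 0.7])       # per-node transmission success probabilities (invented)
age = np.ones(n)
age_sum = np.zeros(n)

for t in range(slots):
    i = int(np.argmax(w * p * age))       # Max-Weight-style index: largest expected age reduction
    if rng.random() < p[i]:               # the scheduled transmission succeeds with probability p[i]
        age[i] = 0.0
    age += 1.0                            # every node's information ages by one slot
    age_sum += age

print("average weighted AoI:", float(np.sum(w * age_sum) / slots))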

Journal ArticleDOI
TL;DR: This paper studies the joint optimization of cost and makespan of scheduling workflows in IaaS clouds, and proposes a novel workflow scheduling scheme which closely integrates the fuzzy dominance sort mechanism with the list scheduling heuristic HEFT.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the system performance obtained by the proposed scheme can outperform the benchmark schemes, and the optimal parameter selections are concluded in the experimental discussion.
Abstract: Unmanned aerial vehicle (UAV) has been witnessed as a promising approach for offering extensive coverage and additional computation capability to smart mobile devices (SMDs), especially in the scenario without available infrastructures. In this paper, a UAV-assisted mobile edge computing system with stochastic computation tasks is investigated. The system aims to minimize the average weighted energy consumption of SMDs and the UAV, subject to the constraints on computation offloading, resource allocation, and flying trajectory scheduling of the UAV. Due to nonconvexity of the problem and the time coupling of variables, a Lyapunov-based approach is applied to analyze the task queue, and the energy consumption minimization problem is decomposed into three manageable subproblems. Furthermore, a joint optimization algorithm is proposed to iteratively solve the problem. Simulation results demonstrate that the system performance obtained by the proposed scheme can outperform the benchmark schemes, and the optimal parameter selections are concluded in the experimental discussion.

Journal ArticleDOI
TL;DR: In this paper, a distributively executed dynamic power allocation scheme is developed based on model-free deep RL for transmit power control in wireless networks, where each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly.
Abstract: This work demonstrates the potential of deep reinforcement learning techniques for transmit power control in wireless networks. Existing techniques typically find near-optimal power allocations by solving a challenging optimization problem. Most of these algorithms are not scalable to large networks in real-world scenarios because of their computational complexity and instantaneous cross-cell channel state information (CSI) requirement. In this paper, a distributively executed dynamic power allocation scheme is developed based on model-free deep reinforcement learning. Each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling. Both random variations and delays in the CSI are inherently addressed using deep Q-learning. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents. The proposed scheme is especially suitable for practical scenarios where the system model is inaccurate and CSI delay is non-negligible.

Journal ArticleDOI
TL;DR: In this article, an integer linear programming approach is proposed for scheduling the observations of time-domain imaging surveys, assigning targets to temporal blocks, enabling strict control of the number of exposures obtained per field and minimizing filter changes.
Abstract: We present a novel algorithm for scheduling the observations of time-domain imaging surveys. Our integer linear programming approach optimizes an observing plan for an entire night by assigning targets to temporal blocks, enabling strict control of the number of exposures obtained per field and minimizing filter changes. A subsequent optimization step minimizes slew times between each observation. Our optimization metric self-consistently weights contributions from time-varying airmass, seeing, and sky brightness to maximize the transient discovery rate. We describe the implementation of this algorithm on the surveys of the Zwicky Transient Facility and present its on-sky performance.
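
A stripped-down version of the block-assignment ILP can be written with PuLP as below; the fields, per-block scores (standing in for the airmass/seeing/sky-brightness metric), block capacity, and exposure requests are invented toy numbers, not the survey's actual inputs:

import pulp

fields = ["f1", "f2", "f3", "f4"]
blocks = ["b1", "b2", "b3"]
required = {"f1": 2, "f2": 1, "f3": 2, "f4": 1}     # exposures requested per field
capacity = 2                                        # exposures that fit in one temporal block
# Per-(field, block) score: stand-in for the airmass / seeing / sky-brightness weighting.
score = {(f, b): s for (f, b), s in zip(
    [(f, b) for f in fields for b in blocks],
    [3, 1, 2, 2, 3, 1, 1, 2, 3, 2, 2, 2])}

prob = pulp.LpProblem("night_plan", pulp.LpMaximize)
x = pulp.LpVariable.dicts("obs", [(f, b) for f in fields for b in blocks], cat="Binary")

prob += pulp.lpSum(score[f, b] * x[(f, b)] for f in fields for b in blocks)
for b in blocks:                                     # each block holds a limited number of exposures
    prob += pulp.lpSum(x[(f, b)] for f in fields) <= capacity
for f in fields:                                     # control the number of exposures per field
    prob += pulp.lpSum(x[(f, b)] for b in blocks) <= required[f]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
plan = [(f, b) for f in fields for b in blocks if x[(f, b)].value() > 0.5]
print("scheduled (field, block) pairs:", plan)

The actual scheduler adds filter-change considerations and a subsequent pass that reorders the observations within each block to minimize slew time, as the abstract describes.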

Journal ArticleDOI
TL;DR: A fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks is explored, and a novel model is proposed to depict users' willingness to contribute their resources to the public.
Abstract: In the past decade, network data communication has experienced rapid growth, which has led to explosive congestion in heterogeneous networks. Moreover, emerging industrial applications, such as automatic driving, put forward higher requirements on both networks and devices. However, running computation-intensive industrial applications locally is constrained by the limited resources of devices. Correspondingly, fog computing has recently emerged to reduce the congestion of content-centric networks. It has proven to be a good way to reduce network delay and processing time in industrial and traffic scenarios. In addition, device-to-device offloading is viewed as a promising paradigm to transmit network data in mobile environments, especially for autodriving vehicles. In this paper, jointly taking both the network traffic and computation workload of industrial traffic into consideration, we explore a fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks. In particular, when the available resource in mobile vehicles becomes a bottleneck, we propose a novel model to depict the users' willingness to contribute their resources to the public. We then formulate a cost minimization problem by exploiting the framework of Markov decision process (MDP) and propose the dynamic reinforcement learning scheduling algorithm and the deep dynamic scheduling algorithm to solve the offloading decision problem. By adopting different mobile trajectory traces, we conduct extensive simulations to evaluate the performance of the proposed algorithms. The results show that our proposed algorithms outperform other benchmark schemes in mobile edge networks.

Journal ArticleDOI
TL;DR: A new time division multiple access (TDMA) based workflow model is proposed, which allows parallel transmissions and executions in the UAV-assisted system, and an alternative algorithm is developed to set the initial point closer to the optimal solution.
Abstract: This paper considers a UAV-enabled mobile edge computing (MEC) system, where a UAV first powers Internet of Things devices (IoTDs) by utilizing Wireless Power Transfer (WPT) technology. Then each IoTD sends the collected data to the UAV for processing by using the energy harvested from the UAV. In order to improve the energy efficiency of the UAV, we propose a new time division multiple access (TDMA) based workflow model, which allows parallel transmissions and executions in the UAV-assisted system. We aim to minimize the total energy consumption of the UAV by jointly optimizing the IoTD association, computing resource allocation, UAV hovering time, wireless powering duration, and the service sequence of the IoTDs. The formulated problem is a mixed-integer non-convex problem, which is very difficult to solve in general. We transform and relax it into a convex problem and apply flow-shop scheduling techniques to address it. Furthermore, an alternative algorithm is developed to set the initial point closer to the optimal solution. Simulation results show that the total energy consumption of the UAV can be effectively reduced by the proposed scheme compared with the conventional systems.

Journal ArticleDOI
TL;DR: The proposed strategies demonstrate excellent real-time performance, satisfaction degree (SD), and energy consumption for computing services in smart manufacturing with edge computing support.
Abstract: At present, smart manufacturing computing frameworks face many challenges, such as the lack of an effective framework for fusing historical computing heritage and of a resource scheduling strategy that guarantees low-latency requirements. In this paper, we propose a hybrid computing framework and design an intelligent resource scheduling strategy to fulfill the real-time requirement in smart manufacturing with edge computing support. First, a four-layer computing system in a smart manufacturing environment is provided to support artificial intelligence task operation from a network perspective. Then, a two-phase algorithm for scheduling the computing resources in the edge layer is designed based on greedy and threshold strategies with latency constraints. Finally, a prototype platform was developed, and we conducted experiments on it to evaluate the performance of the proposed framework in comparison with traditionally used methods. The proposed strategies demonstrate excellent real-time performance, satisfaction degree (SD), and energy consumption for computing services in smart manufacturing with edge computing.

Journal ArticleDOI
TL;DR: This paper extends the classical DQN to address the decisions of multiple edge devices, and shows that the proposed method performs better than the other methods using only one dispatching rule.
Abstract: Manufacturing involves complex job shop scheduling problems (JSP). In smart factories, edge computing supports computing resources at the edge of production in a distributed way to reduce the response time of making production decisions. However, most works on JSP did not consider edge computing. Therefore, this paper proposes a smart manufacturing factory framework based on edge computing, and further investigates the JSP under such a framework. With the recent success of some AI applications, the deep Q network (DQN), which combines deep learning and reinforcement learning, has shown its great computing power to solve complex problems. Therefore, we adapt the DQN to an edge computing framework to solve the JSP. Different from the classical DQN with only one decision, this paper extends the DQN to address the decisions of multiple edge devices. Simulation results show that the proposed method performs better than the other methods using only one dispatching rule.

Journal ArticleDOI
01 Jul 2019
TL;DR: Computational results show that selection of genetic operation type has a great influence on the quality of solutions, and the proposed algorithm could generate better solutions compared to other developed algorithms in terms of computational times and objective values.
Abstract: Open-shop scheduling problem (OSSP) is a well-known topic with vast industrial applications and is one of the most important issues in the field of engineering. OSSP is NP-hard and has a wider solution space than other basic scheduling problems, i.e., job-shop and flow-shop scheduling. Due to this fact, the problem has attracted many researchers over the past decades, and numerous algorithms have been proposed for it. This paper investigates the effects of crossover and mutation operator selection in Genetic Algorithms (GA) for solving OSSP. The proposed algorithm, which is called EGA_OS, is evaluated and compared with other existing algorithms. Computational results show that the selection of genetic operation type has a great influence on the quality of solutions, and that the proposed algorithm generates better solutions than other developed algorithms in terms of computational times and objective values.
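
Since the paper's focus is the choice of crossover and mutation operators, the sketch below shows two standard permutation operators (order crossover and swap mutation) applied to an operation-priority sequence; this is generic GA machinery for illustration, not the specific EGA_OS encoding:

import random

random.seed(3)

def order_crossover(p1, p2):
    # OX: copy a random slice from parent 1, then fill the remaining slots in parent 2's order.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def swap_mutation(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:
        a, b = random.sample(range(len(perm)), 2)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

# A chromosome here is simply a priority order over all (job, machine) operations.
ops = [(j, m) for j in range(3) for m in range(3)]
parent1 = random.sample(ops, len(ops))
parent2 = random.sample(ops, len(ops))
child = swap_mutation(order_crossover(parent1, parent2))
print("child sequence:", child)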

Journal ArticleDOI
TL;DR: An evolutionary multiobjective robust scheduling algorithm is suggested, in which solutions obtained by a variant of a single-objective heuristic are incorporated into population initialization, and two novel crossover operators are proposed to take advantage of nondominated solutions.
Abstract: In various flow shop scheduling problems, it is very common that a machine suffers from breakdowns. Under this situation, a robust and stable suboptimal scheduling solution is of more practical interest than a global optimal solution that is sensitive to environmental changes. However, blocking lot-streaming flow shop (BLSFS) scheduling problems with machine breakdowns have not yet been well studied to date. This paper presents, for the first time, a multiobjective model of the above problem including robustness and stability criteria. Based on this model, an evolutionary multiobjective robust scheduling algorithm is suggested, in which solutions obtained by a variant of a single-objective heuristic are incorporated into population initialization and two novel crossover operators are proposed to take advantage of nondominated solutions. In addition, a rescheduling strategy based on local search is presented to further reduce the negative influence resulting from machine breakdowns. The proposed algorithm is applied to 22 test sets, and compared with state-of-the-art algorithms without machine breakdowns. Our empirical results demonstrate that the proposed algorithm can effectively tackle BLSFS scheduling problems in the presence of machine breakdowns by obtaining scheduling strategies that are robust and stable.

Journal ArticleDOI
TL;DR: An alternative method is presented for the cloud task scheduling problem, which aims to minimize the makespan required to schedule a number of tasks on different Virtual Machines (VMs); the proposed MSDE algorithm outperforms other algorithms according to the performance measures.
Abstract: This paper presents an alternative method for the cloud task scheduling problem, which aims to minimize the makespan required to schedule a number of tasks on different Virtual Machines (VMs). The proposed method is based on improving the Moth Search Algorithm (MSA) using Differential Evolution (DE). The MSA simulates the behavior of moths flying towards a source of light in nature through two concepts, phototaxis and Levy flights, which represent the exploration and exploitation abilities, respectively. However, the exploitation ability still needs to be improved; therefore, DE can be used as a local search method. In order to evaluate the performance of the proposed MSDE algorithm, a set of three experimental series is performed. The first experiment aims to compare the traditional MSA and the proposed algorithm in solving a set of twenty global optimization problems. Meanwhile, in the second and third experimental series, the performance of the proposed algorithm in solving the cloud task scheduling problem is compared against other heuristic and meta-heuristic algorithms for synthetic and real trace data, respectively. The results of the two experimental series show that the proposed algorithm outperformed the other algorithms according to the performance measures.
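
As a rough illustration of the DE-based search over a task-to-VM mapping, the sketch below encodes a schedule as a real-valued vector, decodes it to VM indices, and evaluates the makespan; the task lengths, VM speeds, and DE settings are invented, and this is plain DE rather than the combined MSDE algorithm:

import numpy as np

rng = np.random.default_rng(5)
task_len = rng.uniform(10, 100, 30)       # hypothetical task lengths (e.g., MI)
vm_speed = np.array([5.0, 10.0, 20.0])    # hypothetical VM speeds (e.g., MIPS)
n_tasks, n_vms = len(task_len), len(vm_speed)

def makespan(vec):
    vm_of = np.clip(vec, 0, n_vms - 1e-9).astype(int)   # decode real vector -> VM index per task
    load = np.zeros(n_vms)
    np.add.at(load, vm_of, task_len)
    return float(np.max(load / vm_speed))

pop_size, F, CR, gens = 20, 0.5, 0.9, 200
pop = rng.uniform(0, n_vms, (pop_size, n_tasks))
fit = np.array([makespan(ind) for ind in pop])

for _ in range(gens):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0, n_vms)       # DE/rand/1 mutation
        cross = rng.random(n_tasks) < CR
        trial = np.where(cross, mutant, pop[i])           # binomial crossover
        f_trial = makespan(trial)
        if f_trial <= fit[i]:                             # greedy selection
            pop[i], fit[i] = trial, f_trial

print("best makespan found:", round(fit.min(), 2))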

Journal ArticleDOI
TL;DR: This article proposes a joint collaborative caching and processing framework that supports Adaptive Bitrate (ABR) video streaming in MEC networks, along with practically efficient solutions, including a novel heuristic ABR-aware proactive cache placement algorithm for when video popularity is available.
Abstract: Mobile-Edge Computing (MEC) is a promising paradigm that provides storage and computation resources at the network edge in order to support low-latency and computation-intensive mobile applications. In this article, we propose a joint collaborative caching and processing framework that supports Adaptive Bitrate (ABR)-video streaming in MEC networks. We formulate an Integer Linear Program (ILP) that determines the placement of video variants in the caches and the scheduling of video requests to the cache servers so as to minimize the expected delay cost of video retrieval. The considered problem is challenging due to its NP-completeness and to the lack of a-priori knowledge about video request arrivals. Our approach decomposes the original problem into a cache placement problem and a video request scheduling problem while preserving the interplay between the two. We then propose practically efficient solutions, including: (i) a novel heuristic ABR-aware proactive cache placement algorithm when video popularity is available, and (ii) an online low-complexity video request scheduling algorithm that performs very closely to the optimal solution. Simulation results show that our proposed solutions achieve significant increase in terms of cache hit ratio and decrease in backhaul traffic and content access delay compared to the traditional approaches.