
Showing papers on "Scheduling (computing)" published in 2018


Journal ArticleDOI
TL;DR: In this paper, the minimum throughput over all ground users in the downlink communication was maximized by optimizing the multiuser communication scheduling and association jointly with the UAV's trajectory and power control.
Abstract: Due to their high maneuverability, flexible deployment, and low cost, unmanned aerial vehicles (UAVs) have recently attracted significant interest for assisting wireless communication. This paper considers a multi-UAV enabled wireless communication system, where multiple UAV-mounted aerial base stations are employed to serve a group of users on the ground. To achieve fair performance among users, we maximize the minimum throughput over all ground users in the downlink communication by optimizing the multiuser communication scheduling and association jointly with the UAVs' trajectories and power control. The formulated problem is a mixed integer nonconvex optimization problem that is challenging to solve. As such, we propose an efficient iterative algorithm for solving it by applying the block coordinate descent and successive convex optimization techniques. Specifically, the user scheduling and association, UAV trajectories, and transmit power are alternately optimized in each iteration. In particular, for the nonconvex UAV trajectory and transmit power optimization problems, two approximate convex optimization problems are solved, respectively. We further show that the proposed algorithm is guaranteed to converge. To speed up the algorithm convergence and achieve good throughput, a low-complexity and systematic initialization scheme is also proposed for the UAV trajectory design based on the simple circular trajectory and the circle packing scheme. Extensive simulation results are provided to demonstrate the significant throughput gains of the proposed design as compared to other benchmark schemes.

1,361 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients through unreliable channels and formulate a discrete-time decision problem to find a transmission scheduling policy that minimizes the expected weighted sum AoI of the clients in the network.
Abstract: In this paper, we consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients through unreliable channels. The Age of Information (AoI), namely the amount of time that elapsed since the most recently delivered packet was generated, captures the freshness of the information. We formulate a discrete-time decision problem to find a transmission scheduling policy that minimizes the expected weighted sum AoI of the clients in the network. We first show that in symmetric networks, a greedy policy, which transmits the packet for the client with the highest current age, is optimal. For general networks, we develop three low-complexity scheduling policies: a randomized policy, a Max-Weight policy and a Whittle’s Index policy, and derive performance guarantees as a function of the network configuration. To the best of our knowledge, this is the first work to derive performance guarantees for scheduling policies that attempt to minimize AoI in wireless networks with unreliable channels. Numerical results show that both the Max-Weight and Whittle’s Index policies outperform the other scheduling policies in every configuration simulated, and achieve near optimal performance.
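As a quick illustration of the index-style policies described above, the sketch below (not the authors' code; the weights, success probabilities, and horizon are made up) simulates a weighted greedy rule that always transmits to the client with the largest weighted age over unreliable Bernoulli channels, which is close in spirit to the Max-Weight policy.

```python
import random

def greedy_aoi_schedule(weights, success_prob, horizon, seed=0):
    """Simulate a weighted greedy policy: serve the client with the largest weighted age.

    weights[i]      -- importance weight of client i
    success_prob[i] -- probability that a transmission to client i is delivered
    Returns the time-averaged weighted sum AoI.
    """
    rng = random.Random(seed)
    n = len(weights)
    age = [1] * n                      # AoI of each client, in slots
    total = 0.0
    for _ in range(horizon):
        # Greedy choice: client whose weighted age is currently largest.
        i = max(range(n), key=lambda k: weights[k] * age[k])
        delivered = rng.random() < success_prob[i]
        # Age evolution: a successful delivery resets the age, otherwise it grows.
        age = [1 if (k == i and delivered) else age[k] + 1 for k in range(n)]
        total += sum(w * a for w, a in zip(weights, age))
    return total / horizon

print(greedy_aoi_schedule([1.0, 2.0, 1.0], [0.9, 0.6, 0.8], horizon=10000))
```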

379 citations


Posted Content
TL;DR: It is shown that modern machine learning techniques can generate highly-efficient policies automatically and improve average job completion time by at least 21% over hand-tuned scheduling heuristics, achieving up to 2x improvement during periods of high cluster load.
Abstract: Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly-efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves the average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to 2x improvement during periods of high cluster load.

303 citations


Journal ArticleDOI
TL;DR: The results showed that the proposed task-scheduling algorithm and reallocation scheme can effectively reduce task delays and increase the number of concurrent tasks on fog nodes.
Abstract: Fog computing has been proposed as an extension of cloud computing to provide computation, storage, and network services at the network edge. For smart manufacturing, fog computing can provide a wealth of computational and storage services, such as fault detection and state analysis of devices in assembly lines, if the middle layer between the industrial cloud and the terminal device is considered. However, limited resources and low-delay service requirements hinder the application of new virtualization technologies in the task scheduling and resource management of fog computing. Thus, we build a new task-scheduling model that considers the role of containers. Then, we construct a task-scheduling algorithm to ensure that tasks are completed on time and the number of concurrent tasks on each fog node is optimized. Finally, we propose a reallocation mechanism to reduce task delays in accordance with the characteristics of the containers. The results showed that our proposed task-scheduling algorithm and reallocation scheme can effectively reduce task delays and increase the number of concurrent tasks on fog nodes.

260 citations


Journal ArticleDOI
TL;DR: This paper investigates the optimal policy for user scheduling and resource allocation in HetNets powered by hybrid energy with the purpose of maximizing energy efficiency of the overall network and demonstrates the convergence property of the proposed algorithm.
Abstract: Dense deployment of various small-cell base stations in cellular networks to increase capacity will lead to heterogeneous networks (HetNets); meanwhile, embedding energy harvesting capabilities in base stations as an alternative energy supply is becoming a reality. Making efficient use of radio resources and renewable energy is a brand-new challenge. This paper investigates the optimal policy for user scheduling and resource allocation in HetNets powered by hybrid energy with the purpose of maximizing the energy efficiency of the overall network. Since wireless channel conditions and renewable energy arrival rates have stochastic properties and the environment’s dynamics are unknown, the model-free reinforcement learning approach is used to learn the optimal policy through interactions with the environment. To solve our problem with continuous-valued state and action variables, a policy-gradient-based actor-critic algorithm is proposed. The actor part uses the Gaussian distribution as the parameterized policy to generate continuous stochastic actions, and the policy parameters are updated with the gradient ascent method. The critic part uses compatible function approximation to estimate the performance of the policy and helps the actor learn the gradient of the policy. The advantage function is used to further reduce the variance of the policy gradient. Using numerical simulations, we demonstrate the convergence property of the proposed algorithm and analyze the network energy efficiency.
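The actor update with a Gaussian parameterized policy can be sketched as below. This is a toy numpy illustration under an assumed linear mean feature model, not the paper's implementation; the advantage value is assumed to come from a separate critic.

```python
import numpy as np

def gaussian_actor_step(theta, sigma, state, advantage, lr=1e-3):
    """One policy-gradient ascent step for a Gaussian policy N(theta^T s, sigma^2).

    For a sampled action a, grad log pi(a|s) = (a - theta^T s) * s / sigma^2,
    and the actor update is theta += lr * advantage * grad log pi.
    """
    mean = theta @ state
    action = np.random.normal(mean, sigma)          # continuous action (e.g., a power level)
    grad_log_pi = (action - mean) * state / sigma**2
    theta = theta + lr * advantage * grad_log_pi    # gradient ascent on expected return
    return theta, action

# Toy usage: 4-dimensional state (channel/energy features), advantage supplied by a critic.
theta = np.zeros(4)
state = np.array([0.3, 1.2, -0.5, 0.8])
theta, a = gaussian_actor_step(theta, sigma=0.2, state=state, advantage=1.5)
print(theta, a)
```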

256 citations


Proceedings ArticleDOI
16 Apr 2018
TL;DR: In this article, a joint eMBB and ultra-low-latency (URLLC) traffic scheduler is proposed to maximize the utility of eMBB traffic while satisfying instantaneous URLLC demands.
Abstract: Emerging 5G systems will need to efficiently support both broadband traffic (eMBB) and ultra-low-latency (URLLC) traffic. In these systems, time is divided into slots which are further sub-divided into minislots. From a scheduling perspective, eMBB resource allocations occur at slot boundaries, whereas to reduce latency URLLC traffic is pre-emptively overlapped at the minislot timescale, resulting in selective superposition/puncturing of eMBB allocations. This approach enables minimal URLLC latency at a potential rate loss to eMBB traffic. We study joint eMBB and URLLC schedulers for such systems, with the dual objectives of maximizing utility for eMBB traffic while satisfying instantaneous URLLC demands. For a linear rate loss model (loss to eMBB is linear in the amount of superposition/puncturing), we derive an optimal joint scheduler. Somewhat counter-intuitively, our results show that our dual objectives can be met by an iterative gradient scheduler for eMBB traffic that anticipates the expected loss from URLLC traffic, along with an URLLC demand scheduler that is oblivious to eMBB channel states, utility functions and allocation decisions of the eMBB scheduler. Next, we consider a more general class of (convex) loss models and study optimal online joint eMBB/URLLC schedulers within the broad class of channel state dependent but time-homogeneous policies. We validate the characteristics and benefits of our schedulers via simulation.
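A rough sketch of the idea that the eMBB gradient scheduler only needs to anticipate the expected URLLC loss (linear loss model): proportional-fair gradient weights are computed on rates discounted by the expected punctured fraction. All names and the single-winner allocation are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def embb_gradient_allocation(rates, avg_throughput, expected_puncture, n_rb):
    """Slot-level eMBB allocation that anticipates expected URLLC puncturing.

    rates[i]          -- achievable rate of eMBB user i per resource block
    avg_throughput[i] -- exponentially averaged throughput (for the PF gradient)
    expected_puncture -- expected fraction of the slot overlapped by URLLC
    Under a linear loss model the anticipated rate is simply scaled by
    (1 - expected_puncture); the gradient scheduler then maximizes the sum of
    utility gradients (proportional fairness: U = log throughput).
    """
    anticipated = rates * (1.0 - expected_puncture)
    # PF gradient weight of each user: marginal utility * anticipated rate.
    weights = anticipated / avg_throughput
    # Give the slot's resource blocks to the user with the largest gradient weight
    # (all RBs look identical in this toy model, so one user gets the whole slot).
    winner = int(np.argmax(weights))
    alloc = np.zeros_like(rates)
    alloc[winner] = n_rb
    return alloc

print(embb_gradient_allocation(np.array([2.0, 3.0, 1.5]),
                               np.array([1.0, 4.0, 1.2]),
                               expected_puncture=0.1, n_rb=50))
```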

246 citations


Journal ArticleDOI
TL;DR: The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence.
Abstract: This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application for continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multicell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Furthermore, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, in terms of both run-time efficiency and optimization results.
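For reference, the quadratic transform from Part I (which this discrete methodology builds on) can be stated as follows. This is a paraphrase of the standard form, assuming A_i(x) >= 0 and B_i(x) > 0, not a reproduction of the paper's notation.

```latex
% Sum-of-ratios objective and its quadratic-transform reformulation:
\max_{x}\ \sum_i \frac{A_i(x)}{B_i(x)}
\quad\Longleftrightarrow\quad
\max_{x,\,\{y_i\}}\ \sum_i \left( 2\, y_i \sqrt{A_i(x)} \;-\; y_i^2\, B_i(x) \right),
% with the auxiliary variables updated in closed form while x is held fixed:
\qquad y_i^{\star} \;=\; \frac{\sqrt{A_i(x)}}{B_i(x)} .
```

Alternating between the closed-form update of the auxiliary variables and the optimization over x is what gives the FP framework its monotone convergence.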

235 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents and is especially suitable for practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
Abstract: This work demonstrates the potential of deep reinforcement learning techniques for transmit power control in wireless networks. Existing techniques typically find near-optimal power allocations by solving a challenging optimization problem. Most of these algorithms are not scalable to large networks in real-world scenarios because of their computational complexity and instantaneous cross-cell channel state information (CSI) requirement. In this paper, a distributively executed dynamic power allocation scheme is developed based on model-free deep reinforcement learning. Each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling. Both random variations and delays in the CSI are inherently addressed using deep Q-learning. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents. The proposed scheme is especially suitable for practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
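A stripped-down sketch of the per-transmitter learning loop: the paper trains a deep Q-network on delayed neighbor CSI/QoS features, whereas this toy version quantizes a local state and keeps a Q-table, purely to show the epsilon-greedy action selection and the Q-learning update. All numbers and names are illustrative.

```python
import numpy as np

class PowerAgent:
    """Toy Q-learning power controller with discrete power levels.

    The state here is simply a quantized local-SINR index, to keep the sketch
    self-contained; the paper uses a neural network over richer features.
    """
    def __init__(self, n_states=10, power_levels=(0.0, 0.1, 0.5, 1.0),
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, len(power_levels)))
        self.power_levels = power_levels
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if np.random.rand() < self.eps:                  # epsilon-greedy exploration
            return np.random.randint(len(self.power_levels))
        return int(np.argmax(self.q[state]))

    def update(self, s, a, reward, s_next):
        # Standard Q-learning target: r + gamma * max_a' Q(s', a').
        target = reward + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (target - self.q[s, a])

agent = PowerAgent()
a = agent.act(state=3)
agent.update(s=3, a=a, reward=1.2, s_next=4)   # reward: weighted sum-rate contribution
```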

234 citations


Journal ArticleDOI
TL;DR: The proposed hybrid whale algorithm (HWA) is combined with the Nawaz–Enscore–Ham (NEH) heuristic to improve its performance, and HWA is observed to give competitive results compared with the existing algorithms.

230 citations


Journal ArticleDOI
TL;DR: This paper considers a system where a remote estimator receives the data packet sent by a sensor over a wireless network at each time instant, and an energy-constrained attacker designs the optimal DoS attack scheduling to maximize the attacking effect on the remote estimation performance.
Abstract: The recent years have seen a surge of security issues of cyber-physical systems (CPS). In this paper, denial-of-service (DoS) attack scheduling is investigated in depth. Specifically, we consider a system where a remote estimator receives the data packet sent by a sensor over a wireless network at each time instant, and an energy-constrained attacker that cannot launch DoS attacks all the time designs the optimal DoS attack scheduling to maximize the attacking effect on the remote estimation performance. Most of the existing works concerning DoS attacks focus on the ideal scenario in which data packets can be received successfully if there is no DoS attack. To capture the unreliability nature of practical networks, we study the packet-dropping network in which packet dropouts may occur even in the absence of attack. We derive the optimal attack scheduling scheme that maximizes the average expected estimation error, and the one which maximizes the expected terminal estimation error over packet-dropping networks. We also present some countermeasures against DoS attacks, and discuss the optimal defense strategy, and how the optimal attack schedule can serve for more effective and resource-saving countermeasures. We further investigate the optimal attack schedule with multiple sensors. The optimality of the theoretical results is demonstrated by numerical simulations.

225 citations


Journal ArticleDOI
TL;DR: The proposed ELBS method provides optimal scheduling and load balancing for the mixing work robots by using an improved particle swarm optimization algorithm and a multiagent system to achieve distributed scheduling of the manufacturing cluster.
Abstract: Due to the development of modern information technology, the emergence of fog computing enhances equipment computational power and provides new solutions for traditional industrial applications. Generally, it is impossible to establish a quantitative energy-aware model with a smart meter for load balancing and scheduling optimization in a smart factory. Focusing on the complex energy consumption problems of manufacturing clusters, this paper proposes an energy-aware load balancing and scheduling (ELBS) method based on fog computing. First, an energy consumption model related to the workload is established on the fog node, and an optimization function aiming at load balancing of the manufacturing cluster is formulated. Then, an improved particle swarm optimization algorithm is used to obtain an optimal solution, and task priorities are established for the manufacturing cluster. Finally, a multiagent system is introduced to achieve distributed scheduling of the manufacturing cluster. The proposed ELBS method is verified by experiments on a candy packing line, and the experimental results show that the proposed method provides optimal scheduling and load balancing for the mixing work robots.
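The optimizer inside ELBS is an improved particle swarm optimization; as a reference point, a plain (unimproved) PSO loop looks like the sketch below. The fitness function standing in for the load-imbalance objective is a placeholder.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0), seed=0):
    """Plain particle swarm optimization (the ELBS paper uses an improved variant).

    fitness -- function mapping a position vector (e.g., an encoded task-to-robot
               assignment) to the load-imbalance cost to be minimized.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, fitness(g)

# Toy fitness: variance of the per-node workload as an imbalance measure.
workload = lambda assign: np.var(assign)
print(pso_minimize(workload, dim=5))
```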

Journal ArticleDOI
TL;DR: In this article, the authors investigated the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation.
Abstract: This paper investigates the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation. Random beamforming is invoked for reducing the feedback overhead of the considered system. A non-convex optimization problem for maximizing the sum rate is formulated, which is proved to be NP-hard. The branch and bound approach is invoked to obtain the ε-optimal power allocation policy, which is proved to converge to a global optimal solution. To elaborate further, a low-complexity suboptimal approach is developed for striking a good computational complexity-optimality tradeoff, where the matching theory and successive convex approximation techniques are invoked for tackling the user scheduling and power allocation problems, respectively. Simulation results reveal that: 1) the proposed low complexity solution achieves a near-optimal performance and 2) the proposed mm-Wave NOMA system is capable of outperforming conventional mm-Wave orthogonal multiple access systems in terms of sum rate and the number of served users.

Journal ArticleDOI
TL;DR: A taxonomy of the load balancing algorithms in the cloud is presented in this paper, together with a brief explanation of the performance parameters considered in the literature and their effects.

Journal ArticleDOI
TL;DR: An online dynamic task-assignment scheduling scheme is proposed to investigate the tradeoff between energy consumption and execution delay for an MEC system with EH capability, and a dynamic online task-offloading strategy is developed to modify the data backlogs of the queues.
Abstract: Mobile-edge computing (MEC) has attracted significant attention for its ability to accelerate application execution and enrich the user's experience. With the increasing development of green computing, energy harvesting (EH) is considered an available technology to capture energy from the circumambient environment to supply extra energy for mobile devices. In this paper, we propose an online dynamic task-assignment scheduling scheme to investigate the tradeoff between energy consumption and execution delay for an MEC system with EH capability. We formulate it as the problem of minimizing the average weighted sum of energy consumption and execution delay of the mobile device, with the stability of the buffer queues and the battery level as constraints. Based on the Lyapunov optimization method, we obtain the optimal scheduling of the CPU-cycle frequencies of the mobile device and the transmit power for data transmission. Besides, a dynamic online task-offloading strategy is developed to modify the data backlogs of the queues. The performance analysis shows the stability of the battery energy level and the tradeoff between energy consumption and execution delay. Moreover, the MEC system with EH devices and task buffers achieves high energy efficiency and low-latency communication. The performance of the proposed online algorithm is validated with extensive trace-driven simulations.
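The per-slot decision produced by the Lyapunov (drift-plus-penalty) method can be sketched as follows: enumerate candidate CPU frequencies and transmit powers, discard those exceeding the battery level, and pick the pair minimizing V-weighted energy minus queue-weighted service. The rate and power models here are toy placeholders, not the paper's.

```python
import itertools

def drift_plus_penalty_decision(queue_backlog, battery, V,
                                cpu_freqs, tx_powers,
                                local_rate, offload_rate, power_cost):
    """Per-slot Lyapunov decision: minimize  V * energy - Q * service.

    queue_backlog -- current task buffer backlog Q (bits)
    battery       -- currently available harvested energy; infeasible actions are skipped
    V             -- Lyapunov tradeoff parameter (larger V favors energy saving)
    local_rate(f), offload_rate(p), power_cost(f, p) -- user-supplied toy models
    """
    best, best_val = None, float("inf")
    for f, p in itertools.product(cpu_freqs, tx_powers):
        energy = power_cost(f, p)
        if energy > battery:                      # respect the battery-level constraint
            continue
        served = local_rate(f) + offload_rate(p)  # bits served locally plus offloaded
        val = V * energy - queue_backlog * served
        if val < best_val:
            best, best_val = (f, p), val
    return best

choice = drift_plus_penalty_decision(
    queue_backlog=5e5, battery=2.0, V=1e8,
    cpu_freqs=[0.5e9, 1.0e9, 1.5e9], tx_powers=[0.0, 0.05, 0.1],
    local_rate=lambda f: f / 1000.0,              # toy model: 1000 CPU cycles per bit
    offload_rate=lambda p: 2e6 * p,               # toy model: rate grows with tx power
    power_cost=lambda f, p: 1e-27 * f**3 + p)     # dynamic CPU power plus radio power
print(choice)
```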

Journal ArticleDOI
TL;DR: An energy-aware multi-objective optimization algorithm (EA-MOA) is proposed for the hybrid flow shop (HFS) scheduling problem with setup energy consumption; comparison with several efficient algorithms from the literature shows that the proposed EA-MOA is highly effective.

Journal ArticleDOI
TL;DR: EABS, an event-aware backpressure scheduling scheme for EIoT, combines the shortest path with the backpressure scheme when selecting the next-hop node, and can reduce the average end-to-end delay and increase the average forwarding percentage.
Abstract: The backpressure scheduling scheme has been applied in the Internet of Things, as it can control network congestion effectively and increase the network throughput. However, in large-scale Emergency Internet of Things (EIoT), emergency packets may exist because of urgent events or situations. The traditional backpressure scheduling scheme will explore all possible routes between the source and destination nodes, which causes superfluously long paths for packets. Therefore, the end-to-end delay increases and the real-time performance of emergency packets cannot be guaranteed. To address this shortcoming, this paper proposes EABS, an event-aware backpressure scheduling scheme for EIoT. A backpressure queue model with emergency packets is first devised based on the analysis of the arrival process of different packets. Meanwhile, EABS combines the shortest path with the backpressure scheme when selecting the next-hop node. Emergency packets are forwarded along the shortest path while avoiding network congestion according to the queue backlog difference. The extensive experiment results verify that EABS can reduce the average end-to-end delay and increase the average forwarding percentage. For emergency packets, the real-time performance is guaranteed. Moreover, we compare EABS with two existing backpressure scheduling schemes, showing that EABS outperforms both of them.
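A compact sketch of the next-hop rule as described: emergency packets follow the shortest path toward the sink while skipping congested neighbors, and regular packets use the classic backpressure weight (backlog difference times link rate). The data structures and threshold are illustrative, not taken from EABS's implementation.

```python
def select_next_hop(node, packet_is_emergency, neighbors,
                    hop_count, backlog, link_rate, congestion_threshold):
    """Pick the next hop for a packet at `node` (EABS-style sketch).

    neighbors         -- candidate next-hop ids
    hop_count[v]      -- shortest-path hop distance from v to the sink
    backlog[v]        -- queue backlog at node v
    link_rate[(u, v)] -- rate of link u -> v
    """
    if packet_is_emergency:
        # Emergency traffic: shortest path first, skipping congested neighbors.
        candidates = [v for v in neighbors if backlog[v] < congestion_threshold]
        if candidates:
            return min(candidates, key=lambda v: hop_count[v])
    # Regular traffic (or all neighbors congested): plain backpressure weight.
    def weight(v):
        return (backlog[node] - backlog[v]) * link_rate[(node, v)]
    best = max(neighbors, key=weight)
    return best if weight(best) > 0 else None      # stay idle if no positive weight

print(select_next_hop(
    node="a", packet_is_emergency=True, neighbors=["b", "c"],
    hop_count={"b": 2, "c": 3}, backlog={"a": 9, "b": 4, "c": 1},
    link_rate={("a", "b"): 1.0, ("a", "c"): 1.5}, congestion_threshold=6))
```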

Proceedings ArticleDOI
02 Jul 2018
TL;DR: It is shown that while the problem is polynomial-time solvable without storage constraints, it is NP-hard even if each edge cloud has unlimited communication or computation resources; a constant-factor approximation algorithm is developed for the homogeneous case along with efficient heuristics for the general case.
Abstract: Mobile edge computing is an emerging technology to offer resource-intensive yet delay-sensitive applications from the edge of mobile networks, where a major challenge is to allocate limited edge resources to competing demands. While prior works often make a simplifying assumption that resources assigned to different users are non-sharable, this assumption does not hold for storage resources, where users interested in services (e.g., data analytics) based on the same set of data/code can share storage resource. Meanwhile, serving each user request also consumes non-sharable resources (e.g., CPU cycles, bandwidth). We study the optimal provisioning of edge services with non-trivial demands of both sharable (storage) and non-sharable (communication, computation) resources via joint service placement and request scheduling. In the homogeneous case, we show that while the problem is polynomial-time solvable without storage constraints, it is NP-hard even if each edge cloud has unlimited communication or computation resources. We further show that the hardness is caused by the service placement subproblem, while the request scheduling subproblem is polynomial-time solvable via maximum-flow algorithms. In the general case, both subproblems are NP-hard. We develop a constant-factor approximation algorithm for the homogeneous case and efficient heuristics for the general case. Our trace-driven simulations show that the proposed algorithms, especially the approximation algorithm, can achieve near-optimal performance, serving 2–3 times more requests than a baseline solution that optimizes service placement and request scheduling separately.
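The abstract notes that, for a fixed service placement, request scheduling reduces to a maximum-flow computation. Below is a small sketch of that reduction (requests on one side, edge clouds that store the required service on the other, cloud capacities on the sink edges) using networkx; the service and cloud names are made up.

```python
import networkx as nx

def schedule_requests(requests, placement, capacity):
    """Max-flow request scheduling for a *fixed* service placement (sketch).

    requests  -- list of (request_id, service_id) pairs, one unit of demand each
    placement -- dict: edge_cloud -> set of services whose data/code it stores
    capacity  -- dict: edge_cloud -> number of requests it can serve
                 (the non-sharable communication/computation budget)
    Returns (number of requests served, request -> edge_cloud assignment).
    """
    g = nx.DiGraph()
    for rid, svc in requests:
        g.add_edge("src", ("req", rid), capacity=1)
        for cloud, services in placement.items():
            if svc in services:                      # cloud stores the needed service
                g.add_edge(("req", rid), ("cloud", cloud), capacity=1)
    for cloud, cap in capacity.items():
        g.add_edge(("cloud", cloud), "sink", capacity=cap)

    served, flow = nx.maximum_flow(g, "src", "sink")
    assignment = {}
    for u, out_flows in flow.items():
        if isinstance(u, tuple) and u[0] == "req":
            for v, f in out_flows.items():
                if f > 0:
                    assignment[u[1]] = v[1]          # v is ("cloud", name)
    return served, assignment

print(schedule_requests(
    requests=[(1, "svcA"), (2, "svcA"), (3, "svcB")],
    placement={"e1": {"svcA"}, "e2": {"svcA", "svcB"}},
    capacity={"e1": 1, "e2": 1}))
```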

Journal ArticleDOI
TL;DR: This work investigates age minimization in a wireless network, proposes a novel approach of optimizing the scheduling strategy to deliver all messages as fresh as possible, and proves that the problem is NP-hard in general.
Abstract: Information age is a recently introduced metric to represent the freshness of information in communication systems. We investigate age minimization in a wireless network and propose a novel approach of optimizing the scheduling strategy to deliver all messages as fresh as possible. Specifically, we consider a set of links that share a common channel. The transmitter at each link contains a given number of packets with time stamps from an information source that generated them. We address the link transmission scheduling problem with the objective of minimizing the overall age. This minimum age scheduling problem (MASP) is different from minimizing the time or the delay for delivering the packets in question. We model the MASP mathematically and prove it is NP-hard in general. We also identify tractable cases as well as optimality conditions. An integer linear programming formulation is provided for performance benchmarking. Moreover, a steepest age descent algorithm with better scalability is developed. Numerical study shows that, by employing the optimal schedule, the overall age is significantly reduced in comparison to other scheduling strategies.
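To make the objective concrete, the snippet below brute-forces a toy instance with one time-stamped packet per link and one transmission per slot over the shared channel, accumulating the network age over a horizon. This is only my reading of the model on a simplified case, not the paper's MASP formulation or algorithm.

```python
from itertools import permutations

def total_age(order, gen_time, horizon):
    """Accumulated network age for a given transmission order (one packet per link).

    order[k]    -- link transmitting in slot k+1
    gen_time[l] -- generation time stamp of link l's packet
    The age of a link at slot t is t minus the generation time of its most recently
    delivered packet; before any delivery we simply count t (a simplifying convention).
    """
    delivered_at = {l: k + 1 for k, l in enumerate(order)}
    total = 0
    for t in range(1, horizon + 1):
        for l, g in gen_time.items():
            total += t - g if t >= delivered_at[l] else t
    return total

gen = {"l1": 0, "l2": 2, "l3": 3}                     # packet time stamps per link
best = min(permutations(gen), key=lambda o: total_age(o, gen, horizon=6))
print(best, total_age(best, gen, horizon=6))
```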

Journal ArticleDOI
TL;DR: Simulation results show that the proposed scheduling method effectively reduces the total electricity cost and improves the load-balancing process, and comparison with the particle swarm optimization algorithm proves that the method is promising for cost-saving energy management.
Abstract: The residential energy scheduling of solar energy is an important research area of smart grid. On the demand side, factors such as household loads, storage batteries, the outside public utility grid and renewable energy resources are combined together as a nonlinear, time-varying, indefinite and complex system, which is difficult to manage or optimize. Many nations have already applied residential real-time pricing to balance the burden on their grid. In order to enhance the electricity efficiency of the residential microgrid, this paper presents an action dependent heuristic dynamic programming (ADHDP) method to solve the residential energy scheduling problem. The highlights of this paper are listed below. First, the weather-type classification is adopted to establish three types of programming models based on the features of the solar energy. In addition, the priorities of different energy resources are set to reduce the loss of electrical energy transmissions. Second, three ADHDP-based neural networks, which can update themselves during applications, are designed to manage the flows of electricity. Third, simulation results show that the proposed scheduling method effectively reduces the total electricity cost and improves the load-balancing process. The comparison with the particle swarm optimization algorithm further proves that the present method is promising for cost-saving energy management.

Journal ArticleDOI
TL;DR: This paper investigates the dynamic user scheduling and power allocation problem as a stochastic optimization problem with the objective of minimizing the total power consumption of the whole network under the constraint of all users’ long-term rate requirements, and devises an efficient algorithm that can obtain the optimal control policies with low complexity.
Abstract: Nonorthogonal multiple access (NOMA) exhibits superiority in spectrum efficiency and device connections in comparison with the traditional orthogonal multiple access technologies. However, the nonorthogonality of NOMA also introduces intracell interference that has become the bottleneck limiting the performance to be further improved. To coordinate the intracell interference, we investigate the dynamic user scheduling and power allocation problem in this paper. Specifically, we formulate this problem as a stochastic optimization problem with the objective to minimize the total power consumption of the whole network under the constraint of all users’ long-term rate requirements. To tackle this challenging problem, we first transform it into a series of static optimization problems based on the stochastic optimization theory. Afterward, we exploit the special structure of the reformulated problem and adopt the branch-and-bound technique to devise an efficient algorithm, which can obtain the optimal control policies with a low complexity. As a good feature, the proposed algorithm can make decisions only according to the instantaneous system state and can guarantee the long-term network performance. Simulation results demonstrate that the proposed algorithm has good performance in convergence and outperforms other schemes in terms of power consumption and user satisfaction.

Journal ArticleDOI
TL;DR: This paper proposes a new MAC layer—RS-LoRa—to improve the reliability and scalability of LoRa wide-area networks (LoRaWANs), implements it in NS-3, and demonstrates the benefit of RS-LoRa over the legacy LoRaWAN in terms of packet error ratio, throughput, and fairness.
Abstract: Providing low power and long range (LoRa) connectivity is the goal of most Internet of Things networks, e.g., LoRa, but keeping communication reliable is challenging. LoRa networks are vulnerable to the capture effect. Cell-edge nodes have a high chance of losing packets due to collisions, especially when high spreading factors (SFs) are used that increase time on air. Moreover, LoRa networks face the problem of scalability when they connect thousands of nodes that access the shared channels randomly. In this paper, we propose a new MAC layer—RS-LoRa—to improve reliability and scalability of LoRa wide-area networks (LoRaWANs). The key innovation is a two-step lightweight scheduling : 1) a gateway schedules nodes in a coarse-grained manner through dynamically specifying the allowed transmission powers and SFs on each channel and 2) based on the coarse-grained scheduling information, a node determines its own transmission power, SF, and when and on which channel to transmit. Through the proposed lightweight scheduling, nodes are divided into different groups, and within each group, nodes use similar transmission power to alleviate the capture effect. The nodes are also guided to select different SFs to increase the network reliability and scalability. We have implemented RS-LoRa in NS-3 and evaluated its performance through extensive simulations. Our results demonstrate the benefit of RS-LoRa over the legacy LoRaWAN, in terms of packet error ratio, throughput, and fairness. For instance, in a single-cell scenario with 1000 nodes, RS-LoRa can reduce the packet error ratio of the legacy LoRaWAN by nearly 20%.
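The two-step scheduling can be caricatured as follows: the gateway groups nodes coarsely by received signal strength and publishes the allowed powers/SFs per group, and each node then picks its own parameters from the allowed set. The group boundaries, power/SF tables, and channels below are invented for illustration, not RS-LoRa's actual values.

```python
import random

# Step 1 (gateway, coarse-grained): group nodes by received signal strength and
# broadcast, per group, the transmission powers and spreading factors allowed.
def coarse_schedule(node_rssi, n_groups=3):
    ordered = sorted(node_rssi, key=node_rssi.get, reverse=True)   # strongest first
    group_of = {n: min(i * n_groups // len(ordered), n_groups - 1)
                for i, n in enumerate(ordered)}
    allowed = {0: {"powers_dBm": [2, 5],  "sfs": [7, 8]},    # near nodes: low power, short SF
               1: {"powers_dBm": [8, 11], "sfs": [9, 10]},
               2: {"powers_dBm": [14],    "sfs": [11, 12]}}  # far nodes: high power, long SF
    return group_of, allowed

# Step 2 (node, fine-grained): each node picks its own power/SF/channel from the
# set its group is allowed to use, spreading load across channels at random.
def node_choice(node, group_of, allowed, channels=(868.1, 868.3, 868.5)):
    g = allowed[group_of[node]]
    return {"power_dBm": random.choice(g["powers_dBm"]),
            "sf": random.choice(g["sfs"]),
            "channel_MHz": random.choice(channels)}

group_of, allowed = coarse_schedule({"n1": -70, "n2": -95, "n3": -120})
print(node_choice("n2", group_of, allowed))
```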

Journal ArticleDOI
TL;DR: The proper control mode for the current instant is updated to adapt to time-varying situations whenever the underlying joint-distribution-type changes, and thus previous implementations of control tasks with an unchanged control mode can be further relaxed in this paper.
Abstract: This paper is focused on the issue of scheduling stabilization of Takagi–Sugeno fuzzy control systems by digging much deeper into the implicit information in the underlying systems. An event-triggered real-time scheduler that decides which control mode should be executed at any given instant is constructed by periodically evaluating the joint-distribution-type of multi-instant normalized fuzzy weighting functions at every sampled instant. Profiting from the proposed event-triggered scheduling policy, the proper control mode for the current instant is updated to adapt to time-varying situations whenever the underlying joint-distribution-type changes, and thus previous implementations of control tasks with an unchanged control mode can be further relaxed in this paper. The effectiveness of our approach is verified by several simulation examples in the end.

Journal ArticleDOI
TL;DR: A detailed overview of CP Optimizer for scheduling is given: typical applications, modeling concepts, examples, automatic search, tools and performance.
Abstract: IBM ILOG CP Optimizer is a generic CP-based system to model and solve scheduling problems. It provides an algebraic language with simple mathematical concepts to capture the temporal dimension of scheduling problems in a combinatorial optimization framework. CP Optimizer implements a model-and-run paradigm that vastly reduces the burden on the user to understand CP or scheduling algorithms: modeling is by far the most important. The automatic search provides good performance out of the box and it is continuously improving. This article gives a detailed overview of CP Optimizer for scheduling: typical applications, modeling concepts, examples, automatic search, tools and performance.
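A minimal model-and-run example in the spirit described above, using the docplex.cp Python API (three tasks on one machine, one precedence constraint, and a makespan objective). The API calls are written from memory and require a local CP Optimizer installation, so treat this as a sketch to check against the documentation.

```python
# Minimal "model-and-run" sketch with the CP Optimizer Python API (docplex.cp).
# Assumes docplex and a local CP Optimizer solver are installed; function names
# as I recall them from the docplex examples -- verify against your version.
from docplex.cp.model import CpoModel, interval_var
from docplex.cp.modeler import end_before_start, no_overlap, end_of, minimize, max as cp_max

durations = {"cut": 3, "weld": 5, "paint": 2}          # three tasks on one machine

mdl = CpoModel(name="toy_single_machine")
itv = {t: interval_var(size=d, name=t) for t, d in durations.items()}

mdl.add(end_before_start(itv["cut"], itv["weld"]))     # precedence: cut before weld
mdl.add(no_overlap(list(itv.values())))                # one machine: tasks cannot overlap
mdl.add(minimize(cp_max([end_of(v) for v in itv.values()])))   # minimize the makespan

res = mdl.solve(TimeLimit=5)
if res:
    res.print_solution()
```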

Proceedings Article
25 Apr 2018
TL;DR: This work presents a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic and can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic.
Abstract: Modern mobile networks are facing unprecedented growth in demand due to a new class of traffic from Internet of Things (IoT) devices such as smart wearables and autonomous cars. Future networks must schedule delay-tolerant software updates, data backup, and other transfers from IoT devices while maintaining strict service guarantees for conventional real-time applications such as voice-calling and video. This problem is extremely challenging because conventional traffic is highly dynamic across space and time, so its performance is significantly impacted if all IoT traffic is scheduled immediately when it originates. In this paper, we present a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic. Using 4 weeks of real network data from downtown Melbourne, Australia, spanning diverse traffic patterns, we demonstrate that our RL scheduler can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic, and outperforms heuristic schedulers by more than 2x. Our work is a valuable step towards designing autonomous, "self-driving" networks that learn to manage themselves from past data.

Proceedings ArticleDOI
26 Jun 2018
TL;DR: In this article, the problem of minimizing average and peak AoI in wireless networks under general interference constraints is considered, and a stationary scheduling policy is shown to be peak age optimal when fresh information is always available for transmission.
Abstract: Age of information (AoI) is a recently proposed metric for measuring information freshness. AoI measures the time that elapsed since the last received update was generated. We consider the problem of minimizing average and peak AoI in wireless networks under general interference constraints. When fresh information is always available for transmission, we show that a stationary scheduling policy is peak age optimal. We also prove that this policy achieves average age that is within a factor of two of the optimal average age. In the case where fresh information is not always available, and packet/information generation rate has to be controlled along with scheduling links for transmission, we prove an important separation principle: the optimal scheduling policy can be designed assuming fresh information, and independently, the packet generation rate control can be done by ignoring interference. Peak and average AoI for discrete time G/Ber/1 queue is analyzed for the first time, which may be of independent interest.

Journal ArticleDOI
TL;DR: This paper presents a polynomial-time algorithm that combines a set of heuristic rules and a resource allocation technique in order to get good solutions on an affordable time scale and concludes that the method is suitable for run-time scheduling.

Journal ArticleDOI
TL;DR: The divide-and-conquer approach improves the proposed system, as is proven experimentally through comparison with the existing BATS and improved differential evolution algorithm (IDEA) frameworks when turnaround time and response time are used as performance metrics.
Abstract: Cloud computing is required by modern technology. Task scheduling and resource allocation are important aspects of cloud computing. This paper proposes a heuristic approach that combines the modified analytic hierarchy process (MAHP), bandwidth aware divisible scheduling (BATS) + BAR optimization, longest expected processing time preemption (LEPT), and divide-and-conquer methods to perform task scheduling and resource allocation. In this approach, each task is processed before its actual allocation to cloud resources using a MAHP process. The resources are allocated using the combined BATS + BAR optimization method, which considers the bandwidth and load of the cloud resources as constraints. In addition, the proposed system preempts resource intensive tasks using LEPT preemption. The divide-and-conquer approach improves the proposed system, as is proven experimentally through comparison with the existing BATS and improved differential evolution algorithm (IDEA) frameworks when turnaround time and response time are used as performance metrics.

Proceedings ArticleDOI
10 Jan 2018
TL;DR: In this article, a transmission scheduling algorithm for minimizing the long-run average age is proposed for a wireless broadcast network where a base-station updates many users on stochastic information arrivals.
Abstract: Age of information is a new concept that characterizes the freshness of information at end devices. This paper studies the age of information from a scheduling perspective. We consider a wireless broadcast network where a base-station updates many users on stochastic information arrivals. Suppose that only one user can be updated at each time. In this context, we aim at developing a transmission scheduling algorithm for minimizing the long-run average age. To develop a low-complexity transmission scheduling algorithm, we apply Whittle's framework for restless bandits. We successfully derive the Whittle index in a closed form and establish the indexability. Based on the Whittle index, we propose a scheduling algorithm, while experimentally showing that it closely approximates an age-optimal scheduling algorithm.

Journal ArticleDOI
TL;DR: A mathematical model is presented that can solve small instances to optimality and also serves as a problem representation, and a tabu search algorithm with specific neighborhood functions and a diversification structure is developed.

Proceedings ArticleDOI
16 Apr 2018
TL;DR: This paper presents a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations using optimal stopping theory, and evaluates the proposed dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics.
Abstract: Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to support emerging use cases such as, e.g., virtual reality and in-network image processing is to introduce virtualized network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation in the core network. While placement of vNFs has been studied before, it has so far mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), and not taking network conditions into consideration such as, e.g., end-to-end latency, the constantly changing network dynamics, or user mobility patterns. In this paper, we formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. We present a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations using optimal stopping theory. We then evaluate our dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared to other schedulers (e.g., periodic and always-on scheduling of a new placement), and offers Quality of Service guarantees by not exceeding a maximum number of latency violations that can be tolerated by certain applications.
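One simple way to realize a stopping-rule trigger of the kind described (not the paper's exact policy) is a look-back threshold: re-run the placement only when the recent latency degradation, accumulated over a window, is expected to outweigh the migration cost. The window, margin, and cost model below are assumptions for illustration.

```python
def should_reschedule(latency_samples, placement_cost, window=10, margin=1.2):
    """Threshold stopping rule (illustrative, not the paper's exact policy).

    latency_samples -- recent end-to-end latency measurements (most recent last)
    placement_cost  -- amortized cost of migrating vNFs to a new placement
    Re-place when the recent average latency exceeds the long-run baseline by a
    margin large enough to pay for the migration.
    """
    if len(latency_samples) < 2 * window:
        return False
    baseline = sum(latency_samples[:-window]) / (len(latency_samples) - window)
    recent = sum(latency_samples[-window:]) / window
    return (recent - baseline) * window > margin * placement_cost

# Latency jumps from ~10 ms to ~18 ms; the accumulated degradation exceeds the
# (assumed) migration cost, so a re-placement is triggered.
print(should_reschedule([10] * 30 + [18] * 10, placement_cost=25))
```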