Open Access · Book Chapter

Online scheduling via learned weights

TLDR
This work studies how predictive techniques can be used to break through worst-case barriers in online scheduling, and gives algorithms that, equipped with predictions with error η, achieve O(log η (log log m)^3) competitive ratios, breaking the Ω(log m) lower bound even for moderately accurate predictions.
Abstract
Online algorithms are a hallmark of worst-case optimization under uncertainty. On the other hand, in practice, the input is often far from worst case and has some predictable characteristics. A recent line of work has shown how to use machine-learned predictions to circumvent strong lower bounds on competitive ratios in classic online problems such as ski rental and caching. We study how predictive techniques can be used to break through worst-case barriers in online scheduling. The makespan minimization problem with restricted assignments is a classic problem in online scheduling theory. Worst-case analysis of this problem gives Ω(log m) lower bounds on the competitive ratio in the online setting. We identify a robust quantity that can be predicted and then used to guide online algorithms to achieve better performance. Our predictions are compact in size, having dimension linear in the number of machines, and can be learned using standard off-the-shelf methods. The performance guarantees of our algorithms depend on the accuracy of the predictions: given predictions with error η, we show how to construct O(log η)-competitive fractional assignments. We then give an online algorithm that rounds any fractional assignment into an integral schedule. Our algorithm is O((log log m)^3)-competitive, and we give a nearly matching Ω(log log m) lower bound for online rounding algorithms. Altogether, we give algorithms that, equipped with predictions with error η, achieve O(log η (log log m)^3) competitive ratios, breaking the Ω(log m) lower bound even for moderately accurate predictions.
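
To make the role of the predictions concrete, here is a minimal Python sketch, assuming the predictions take the form of one positive weight per machine (consistent with the abstract's "dimension linear in the number of machines") and that each arriving job is split across its feasible machines in proportion to those weights. This is an illustration of weight-guided fractional assignment, not the paper's exact algorithm; all names below are illustrative.

# Minimal sketch (illustrative, not the paper's exact algorithm): jobs with
# restricted assignments arrive online; each job is split fractionally across
# its feasible machines in proportion to predicted per-machine weights.
from collections import defaultdict

def fractional_assign(jobs, predicted_weights):
    """jobs: iterable of (job_id, feasible_machines, size), revealed online.
    predicted_weights: dict mapping machine -> positive learned weight.
    Returns the fractional assignment and the resulting fractional loads."""
    load = defaultdict(float)
    assignment = {}
    for job_id, feasible, size in jobs:
        total_w = sum(predicted_weights[i] for i in feasible)
        # Split the job over its feasible machines proportionally to the weights.
        fractions = {i: predicted_weights[i] / total_w for i in feasible}
        assignment[job_id] = fractions
        for i, frac in fractions.items():
            load[i] += frac * size
    return assignment, load

# Example: two machines, unit-size jobs restricted to subsets of machines.
jobs = [("j1", ["m1", "m2"], 1.0), ("j2", ["m2"], 1.0)]
assignment, load = fractional_assign(jobs, {"m1": 1.0, "m2": 1.0})
print(max(load.values()))  # fractional makespan under the predicted weights

An online rounding step, as described in the abstract, would then convert such a fractional assignment into an integral schedule.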



Citations
Proceedings Article

The Primal-Dual method for Learning Augmented Algorithms

TL;DR: This paper extends the primal-dual method for online algorithms to incorporate predictions that advise the online algorithm about the next action to take, and uses this framework to obtain novel algorithms for a variety of online covering problems.
Proceedings Article

Online Algorithms for Weighted Paging with Predictions

TL;DR: In this article, Lykouris and Vassilvitskii showed that neither a fixed lookahead nor knowledge of the next request for every page is sufficient information for an algorithm to overcome existing lower bounds in weighted paging.
Proceedings Article

Better and Simpler Learning-Augmented Online Caching

Alexander Wei
TL;DR: This work considers combining BlindOracle, an algorithm that naively follows the predictions, with an optimal competitive online caching algorithm in a black-box manner, and shows that combining BlindOracle with LRU is in fact optimal among deterministic algorithms for this problem.
Proceedings Article

Customizing ML Predictions for Online Algorithms

TL;DR: This paper shows that incorporating optimization benchmarks in ML loss functions leads to better performance, while maintaining a worst-case adversarial result when the advice is completely wrong, both through theoretical bounds and numerical simulations.
Posted Content

Secretaries with Advice

TL;DR: This paper presents a tight analysis of optimal algorithms for secretaries with samples, optimal algorithms when secretaries' qualities are drawn from a known distribution, and a new noisy binary advice model.
References
Book

Foundations of Machine Learning

TL;DR: This graduate-level textbook introduces fundamental concepts and methods in machine learning, and provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application.
Proceedings Article

Probabilistic computations: Toward a unified measure of complexity

TL;DR: Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem: the distributional complexity and the randomized complexity, respectively.
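
A common modern statement of the resulting principle (a standard phrasing, not a quotation from the paper): for every input distribution \mu and every randomized algorithm R,

    \max_x \mathbb{E}[T(R, x)] \;\ge\; \min_{A \in \mathcal{A}} \mathbb{E}_{x \sim \mu}[T(A, x)],

where \mathcal{A} is the class of deterministic algorithms and T denotes running time; taking the best distribution \mu on the right-hand side shows that the randomized complexity is at least the distributional complexity.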
Journal Article

Approximation algorithms for scheduling unrelated parallel machines

TL;DR: It is proved that no polynomial-time algorithm can achieve a worst-case ratio less than 3/2 unless P = NP, and a complexity classification is obtained for all special cases with a fixed number of processing times.
Journal Article

An approximation algorithm for the generalized assignment problem

TL;DR: The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs: each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij and incurs a cost of c_ij; each machine i is available for T_i time units; and the objective is to minimize the total cost incurred.
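
As a standard integer program (a textbook formulation consistent with the description above, not quoted from the paper), this reads:

    \min \sum_{i,j} c_{ij} x_{ij}
    \text{s.t. } \sum_{i} x_{ij} = 1 \quad \forall j, \qquad \sum_{j} p_{ij} x_{ij} \le T_i \quad \forall i, \qquad x_{ij} \in \{0, 1\},

where x_{ij} = 1 indicates that job j is assigned to machine i.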
Book

The Design of Approximation Algorithms

TL;DR: In this paper, the authors present a survey of the central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization.