
Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems

Abstract
We present prior robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a 1 − ε fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor ε approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here, which determines how fast ε approaches 0, is near optimal: we give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain. Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a 1 − ε fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary's distributions, then we design an algorithm that achieves a 1 − ε fraction of the weighted average of the optimal profits of the distributions the adversary picks. In the offline setting we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1 + ε) the mixed packing-covering problem with O(γ m log(n/δ)/ε²) oracle calls, where the constraint matrix of this LP has dimension n × m, the success probability of the algorithm is 1 − δ, and γ quantifies how significant a single request is when compared to the sum total of all requests. We discuss implications of our results for several special cases including online combinatorial auctions, network routing, and the adwords problem.


Near Optimal Online Algorithms and Fast Approximation
Algorithms for Resource Allocation Problems
Nikhil R. Devanur
Microsoft Research
Redmond, WA, USA
nikdev@microsoft.com
Kamal Jain
Microsoft Research
Redmond, WA, USA
kamalj@microsoft.com
Balasubramanian Sivan
Computer Sciences Dept.,
Univ. of Wisconsin-Madison
Madison, WI, USA
balu2901@cs.wisc.edu
Christopher A. Wilkens
Computer Science Division,
Univ. of California at Berkeley
Berkeley, CA, USA
cwilkens@cs.berkeley.edu
ABSTRACT
We present algorithms for a class of resource allocation problems
both in the online setting with stochastic input and in the offline
setting. This class of problems contains many interesting special cases such as the Adwords problem. In the online setting we introduce a new distributional model called the adversarial stochastic input model, which is a generalization of the i.i.d model with unknown distributions, where the distributions can change over time. In this model we give a 1 − O(ε) approximation algorithm for the resource allocation problem, with almost the weakest possible assumption: the ratio of the maximum amount of resource consumed by any single request to the total capacity of the resource, and the ratio of the profit contributed by any single request to the optimal profit, is at most ε² / (log(1/ε) · (2 log n + log(1/ε))), where n is the number of resources available. There are instances where this ratio is ε²/log n such that no randomized algorithm can have a competitive ratio of 1 − o(ε) even in the i.i.d model. The upper bound on the ratio that we require improves on the previous upper bound for the i.i.d case by a factor of n.
Our proof technique also gives a very simple proof that the greedy algorithm has a competitive ratio of 1 − 1/e for the Adwords problem in the i.i.d model with unknown distributions, and more generally in the adversarial stochastic input model, when there is no bound on the bid to budget ratio. All the previous proofs assume that either bids are very small compared to budgets or something very similar to this.
In the offline setting we give a fast algorithm to solve very large LPs with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1 + ε) the mixed packing-covering problem with O(γ m log n / ε²) oracle calls, where the constraint matrix of this LP has dimension n × m, and γ is a parameter which is very similar to the ratio described for the online setting.
We discuss several applications, and how our algorithms improve existing results in some of these applications.

∗ A full version of this paper, with all the proofs, is available at http://arxiv.org
† Part of this work was done while the author was at Microsoft Research, Redmond.
‡ Part of this work was done while the author was at Microsoft Research, Redmond.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
EC'11, June 5–9, 2011, San Jose, California, USA.
Copyright 2011 ACM 978-1-4503-0261-6/11/06 ...$10.00.
Categories and Subject Descriptors
F.2.0 [Analysis of Algorithms and Problem Complexity]: Gen-
eral; J.4 [Social and Behavioral Sciences]: Economics
General Terms
Algorithms, Economics, Theory
Keywords
Online algorithms, Stochastic input, Packing-Covering
1. INTRODUCTION
The results in this paper fall into distinct categories of compet-
itive algorithms for online problems and fast approximation algo-
rithms for offline problems. We have two main results in the online
framework and one result in the offline setting. However they all
share common techniques.
There has been an increasing interest in online algorithms moti-
vated by applications to online advertising. The most well known
is the Adwords problem introduced by Mehta et al. [MSVV05],
where the algorithm needs to assign keywords arriving online to
bidders to maximize profit, subject to budget constraints for the
bidders. The problem has been analyzed in the traditional frame-
work for online algorithms: worst-case competitive analysis. As
with many online problems, the worst-case competitive analysis is
not entirely satisfactory and there has been a drive in the last few
years to go beyond the worst-case analysis. The predominant ap-
proach has been to assume that the input satisfies some stochastic
property. For instance the random permutation model (introduced
by Goel and Mehta [GM08]) assumes that the adversary picks the set of keywords, but the order in which the keywords arrive is chosen uniformly at random. A closely related model is the i.i.d model: assume that the keywords are i.i.d samples from a fixed distribution, which is unknown to the algorithm. Stronger assumptions such as i.i.d samples from a known distribution have also been considered.
First Result.
A key parameter on which many of the algorithms for Adwords depend is the bid to budget ratio. For instance, in Mehta et al. [MSVV05] and Buchbinder, Jain and Naor [BJN07] the algorithm achieves a worst case competitive ratio that tends to 1 − 1/e as the bid to budget ratio (let's call it γ) tends to 0. (1 − 1/e is also the best competitive ratio that any randomized algorithm can achieve in the worst case.) Devanur and Hayes [DH09] showed that in the random permutation model, the competitive ratio tends to 1 as γ tends to 0. This result showed that the competitive ratio of algorithms in stochastic models could be much better than that of algorithms in the worst case. The important question since then has been to determine the optimal trade-off between γ and the competitive ratio. [DH09] showed how to get a 1 − O(ε) competitive ratio when γ is at most ε³ / (n log(mn/ε)), where n is the number of advertisers and m is the number of keywords. Subsequently Agrawal, Wang and Ye [AWY09] improved the bound on γ to ε² / (n log(mn/ε)). The papers of Feldman et al. [FHK+10] and Agrawal, Wang and Ye [AWY09] have also shown that the technique of [DH09] can be extended to other online problems.
The first main result in this paper is the following 3-fold improvement of previous results (Theorems 2–4):
1. We give an algorithm which improves the bound on γ to ε² / (log(1/ε) · (2 log(n) + log(1/ε))). This is almost optimal; we show a lower bound of ε² / log(n).
2. The bound applies to a more general model of stochastic input, called the adversarial stochastic input model. This is a generalization of the i.i.d model with unknown distribution, but is incomparable to the random permutation model.
3. It applies to a more general class of online problems that we call the resource allocation framework. A formal definition of the framework is presented in Section 2.2 and a discussion of many interesting special cases is presented in Section 7.
Regarding the bound on γ, the removal of the factor of n is significant. Consider for instance the Adwords problem and suppose that the bids are all in [0, 1]. The earlier bound implies that the budgets need to be of the order of n/ε² in order to get a 1 − ε competitive algorithm, where n is the number of advertisers. With realistic values for these parameters, it seems unlikely that this condition would be met. With the improved bounds presented in this paper, however, we only need the budget to be of the order of log n/ε², and this condition is met for reasonable values of the parameters.
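As a back-of-the-envelope illustration (the numbers here are ours, chosen only for concreteness and not taken from the paper), take n = 1000 advertisers, bids in [0, 1], and ε = 0.1. Then

\[
\frac{n}{\epsilon^2} = \frac{1000}{0.01} = 10^{5},
\qquad
\frac{\log n}{\epsilon^2} \approx \frac{6.9}{0.01} \approx 690,
\]

so the earlier bound asks for budgets on the order of a hundred thousand times the maximum bid, while the bound in this paper asks only for budgets a few hundred times the maximum bid.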
We note here that so far, all the algorithms for the i.i.d model
(with unknown distribution) were actually designed for the random
permutation model. It seems that any algorithm that works for one
should also work for the other. However we can only show that our
algorithm works in the i.i.d model, so the natural question is if our
algorithm works for the random permutation model. It would be
very surprising if it didn’t.
One drawback of the stochastic models considered so far is that they are time invariant, that is, the input distribution does not change over time. The adversarial stochastic input model allows the input distribution to change over time. The model is as follows: in every step the adversary picks a distribution, possibly adaptively depending on what the algorithm has done so far, and the actual keyword in that step is drawn from this distribution. The competitive ratio is defined with respect to the optimum fractional solution for an offline instance of the problem, called the distribution instance, which is defined by the distribution (see Section 2.2). In Section 2.2, where we define the distribution instance, we also prove that the optimal fractional solution for the distribution instance is at least as good as the commonly used benchmark of expected value of optimal fractional solution, where the expectation is with respect to the distribution. A detailed description of this model, how the adversary is constrained to pick its distributions and how it differs from the worst-case model is presented in Section 2.2.
Second Result.
Another important open problem is to improve the competitive ratio for the Adwords problem when there is no bound on γ. The best competitive ratio known for this problem is 1/2 in the worst case. Nothing better was known, even in the stochastic models. (For the special case of online bipartite matching, in the case of i.i.d input with a known distribution, a recent series of results achieves a ratio better than 1 − 1/e, for instance by Feldman et al. [FMMM09] and Bahmani and Kapralov [BK10]. The best ratio so far is 0.702, by Manshadi, Gharan and Saberi [MGS11].) The second result in this paper is that for the Adwords problem in the adversarial stochastic input model, with no assumption on γ, the greedy algorithm gets a competitive ratio of 1 − 1/e against the optimal fractional solution to the distribution instance (Theorem 5). The greedy algorithm is particularly interesting since it is a natural algorithm that is used widely for its simplicity. Because of its wide use, the performance of the greedy algorithm has previously been analyzed by Goel and Mehta [GM08], who showed that in the random permutation and the i.i.d models, it has a competitive ratio of 1 − 1/e with an assumption which is essentially that γ tends to 0.
Third Result.
Charles et al. [CCD+10] considered the following (offline) problem: given a lopsided bipartite graph G = (L, R, E), that is, a bipartite graph where m = |L| ≫ |R| = n, does there exist an assignment M : L → R with (j, M(j)) ∈ E for all j ∈ L, and such that for every vertex i ∈ R, |M⁻¹(i)| ≥ B_i, for some given values B_i? Even though this is a classic problem in combinatorial optimization with well known polynomial time algorithms, the instances of interest are too large to use traditional approaches to solve this problem. (The value of m in particular is very large.) The approach used by [CCD+10] was to essentially design an online algorithm in the i.i.d model: choose vertices from L uniformly at random and assign them to vertices in R in an online fashion. The online algorithm is guaranteed to be close to optimal, as long as sufficiently many samples are drawn. Therefore it can be used to solve the original problem (approximately): the online algorithm gets an almost satisfying assignment if and only if the original graph has a satisfying assignment (with high probability).
The third result in this paper is a generalization of this result to get fast approximation algorithms for a wide class of problems in the resource allocation framework (Theorem 6). Problems in the resource allocation framework where the instances are too large to use traditional algorithms occur fairly often, especially in the context of online advertising. Formal statements and a more detailed discussion are presented in Section 2.3.
The underlying idea used for all these results can be summarized at a high level as follows: consider a hypothetical algorithm called Pure-random that knows the distribution from which the input is drawn and uses an optimal solution w.r.t. this distribution. Now suppose that we can analyze the performance of Pure-random by considering a potential function and showing that it decreases by a certain amount in each step. Now we can design an algorithm that does not know the distribution as follows: consider the same potential function, and in every step choose the option that minimizes the potential function. Since the algorithm minimizes the potential in each step, the decrease in the potential for this algorithm is better than that for Pure-random and hence we obtain the same guarantee as that for Pure-random.
For instance, for the case where γ is small, the performance
of Pure-random is analyzed using Chernoff bounds. The Cher-
noff bounds are proven by showing bounds on the expectation of
the moment generating function of a random variable. Thus the
potential function is the sum of the moment generating functions
for all the random variables that we apply the Chernoff bounds to.
The proof shows that in each step this potential function decreases
by some multiplicative factor. The algorithm is then designed to
achieve the same decrease in the potential function. A particularly
pleasing aspect about this technique is that we obtain very simple
proofs. For instance, the proof of Theorem 5 is extremely sim-
ple: the potential function in this case is simply the total amount
of unused budgets and we show that this amount (in expectation)
decreases by a factor of 1 − 1/m in each step, where there are m steps in all.
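To make the arithmetic behind that sketch explicit (this is a simplified reading of the argument; the actual proof in Section 6 tracks a more carefully chosen quantity), a per-step multiplicative decrease of 1 − 1/m compounded over all m steps leaves at most

\[
\left(1 - \frac{1}{m}\right)^{m} \le \frac{1}{e}
\]

of the potential, i.e. at most a 1/e fraction of the budgets unused in expectation, which is where the 1 − 1/e guarantee of Theorem 5 comes from.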
On the surface, this technique and the resulting algorithms (at least for the case of small γ; it is not clear if this discussion applies to the case of large γ, that is, to Theorem 5) bear a close resemblance to the algorithms of Young [You95] for derandomizing randomized rounding and the fast approximation algorithms for solving covering/packing LPs of Plotkin, Shmoys and Tardos [PST91], Garg and Könemann [GK98] and Fleischer [Fle00]. In fact Arora, Hazan and Kale [AHK05] showed that all these algorithms are related to the multiplicative weights update method for solving the experts problem, and especially highlighted the similarity between the potential function used in the analysis of the multiplicative update method and the moment generating function used in the proof of Chernoff bounds and Young's algorithms. Hence it is no surprise that our algorithm is also a multiplicative update algorithm. It seems that our algorithm is closer in spirit to Young's algorithms than the others. It is possible that our algorithm can also be interpreted as an algorithm for the experts problem. In fact Mehta et al. [MSVV05] asked if there is a 1 − o(1) competitive algorithm for Adwords in the i.i.d model with small bid to budget ratio, and in particular if the algorithms for experts could be used. They also conjectured that such an algorithm would iteratively adjust a budget discount factor based on the rate at which the budget is spent. Our algorithms for resource allocation problems, when specialized for Adwords, look exactly like that, and with the connections to the experts framework, we answer the questions in [MSVV05] in the positive.
Organization: The rest of the paper is organized as follows. In Section 2, we define the resource allocation framework and the adversarial stochastic model, and state our results formally as theorems. We also discuss one special case of the resource allocation framework, the adwords problem, and formally state our results. In Section 3, we consider a simplified "min-max" version of the resource allocation framework and present the proofs for this version. The other results build upon this simple version. In Section 4 we give a fast approximation algorithm for the mixed covering-packing problem (Theorem 6). The 1 − O(ε) competitive online algorithm for the resource allocation framework with stochastic input (Theorem 2) is in Section 5. The 1 − 1/e competitive algorithm (Theorem 5) for the Adwords problem is in Section 6. Several special cases of the resource allocation framework are considered in Section 7. Section 8 concludes with some open problems and directions for future research.
2. PRELIMINARIES & MAIN RESULTS
2.1 Resource allocation framework
We consider the following framework of optimization problems. There are n resources, with resource i having a capacity of c_i. There are m requests; each request j can be satisfied by a vector x_j that is constrained to be in a polytope P_j. (We refer to the vector x_j as an option to satisfy a request, and the polytope P_j as the set of options.) The vector x_j consumes a_{i,j} · x_j amount of resource i, and gives a profit of w_j · x_j. Note that a_{i,j}, w_j and x_j are all vectors. The objective is to maximize the total profit subject to the capacity constraints on the resources. The following LP describes the problem:

maximize  Σ_j w_j · x_j
s.t.  ∀ i, Σ_j a_{i,j} · x_j ≤ c_i
      ∀ j, x_j ∈ P_j.
We assume that we have the following oracle available to us: given a request j and a vector v, the oracle returns the vector x_j that maximizes v · x_j among all vectors in P_j. Let

γ = max( {a_{i,j} · x_j / c_i}_{i,j} ∪ {w_j · x_j / W*}_j )

be the notion corresponding to the bid to budget ratio for Adwords. Here W* is the optimal offline objective to the distribution instance, defined in Section 2.2.
The canonical case is where each P_j is a unit simplex in R^K, i.e. P_j = {x_j ∈ R^K : x_j ≥ 0, Σ_k x_{j,k} = 1}. This captures the case where there are K discrete options, each with a given profit and consumption. This case captures most of the applications we are interested in, which are described in Section 7. All the proofs will be presented for this special case, for ease of exposition. The coordinates of the vectors a_{i,j} and w_j will be denoted by a(i, j, k) and w_{j,k} respectively, i.e., the k-th option consumes a(i, j, k) amount of resource i and gives a profit of w_{j,k}. For an example of an application that needs more general polytopes see Section 7.4.
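As a concreteness aid (our own sketch, not code from the paper), the oracle for the unit-simplex case reduces to picking the single best option, and γ can be read off the per-option data; the function names and the array layout below are assumptions made purely for illustration.

```python
import numpy as np

def simplex_oracle(v):
    """Oracle for the unit-simplex case P_j = {x in R^K : x >= 0, sum_k x_k = 1}:
    a linear objective v . x is maximized at a vertex of the simplex, i.e. at
    the single option k with the largest coefficient v_k."""
    x = np.zeros(len(v))
    x[int(np.argmax(v))] = 1.0
    return x

def gamma(a, w, c, w_star):
    """gamma for the discrete-options case: the largest of a(i, j, k) / c_i and
    w(j, k) / W*, where a has shape (n, m, K), w has shape (m, K), c has shape
    (n,), all numpy arrays; this data layout is an assumption of this sketch."""
    return max(float((a / c[:, None, None]).max()), float((w / w_star).max()))
```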
We consider two versions of the above problem. The first is an online version with stochastic input: requests are drawn from an unknown distribution. The second is when the number of requests is much larger than the number of resources, and our goal is to design a fast PTAS for the problem.
2.2 Online Algorithms with Stochastic Input
We now consider an online version of the resource allocation
framework. Here requests arrive online. We consider the i.i.d.
model, where each request is drawn independently from a given
distribution. The distribution is unknown to the algorithm. The al-
gorithm knows m, the total number of requests. The competitive
ratios we give for resource allocation problems with bounded γ are
with respect to an upper bound on the expected value of the fractional optimal solution, namely, the fractional optimal solution of the distribution instance, defined below.
Consider the following distribution instance of the problem. It is
an offline instance defined for any given distribution over requests
and the total number of requests m. The capacities of the resources
in this instance are the same as in the original instance. Every
request in the support of the distribution is also a request in this
instance. Suppose request j occurs with probability p_j. Assume w.l.o.g. that p_j ≤ 1/m. (If p_j > 1/m for some request then repeat that request ⌊mp_j⌋ times with probability 1/m each and one more time with probability p_j − ⌊mp_j⌋/m. This "breaking up" of a request j with mp_j > 1 is done in order not to increase γ.) The resource consumption of j in the distribution instance is given by mp_j a_{i,j} for all i and the profit is mp_j w_j. The intuition is that if the requests were drawn from this distribution then the expected number of times request j is seen is mp_j, and this is represented in the distribution instance by scaling the consumption and the profit vectors by mp_j. To summarize, the distribution instance is as follows.
maximize  Σ_{j in the support} mp_j w_j · x_j
s.t.  ∀ i, Σ_j mp_j a_{i,j} · x_j ≤ c_i
      ∀ j, x_j ∈ P_j.
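Since the scaling above is purely mechanical, a short sketch may help; representing each request as a (consumption vector, profit vector) pair stored as flat lists is our own assumption, not the paper's notation.

```python
def distribution_instance(support, probs, m):
    """Build the offline distribution instance: every request j in the support
    of the distribution, arriving with probability p_j <= 1/m, appears with its
    consumption vector a_j and profit vector w_j scaled by m * p_j; resource
    capacities c_i are left unchanged. Requests with p_j > 1/m are assumed to
    have been split beforehand, as described in the text."""
    scaled = []
    for (a_j, w_j), p_j in zip(support, probs):
        assert p_j <= 1.0 / m + 1e-12, "split heavy requests first"
        s = m * p_j
        scaled.append(([s * a for a in a_j], [s * w for w in w_j]))
    return scaled
```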
We now prove that the fractional optimal solution to the distribution
instance is an upper bound on the expectation of OPT, where OPT
is the offline fractional optimum of the actual sequence of requests.
LEMMA 1. OPT[Distribution instance] ≥ E[OPT].

PROOF. The average of optimal solutions for all possible sequences of requests should give a feasible solution to the distribution instance with a profit equal to E[OPT]. Thus the optimal profit for the distribution instance could only be larger.
The competitive ratio of an algorithm in the i.i.d model is defined
as the ratio of the expected profit of the algorithm to the fractional
optimal profit for the distribution instance. The main result is that
as γ tends to zero, the competitive ratio tends to 1. In fact, we give
the almost optimal trade-off.
THEOREM 2. For any ε > 0, we give an algorithm such that if γ = ε² / (log(1/ε) · (2 log(n) + log(1/ε))) then the competitive ratio of the algorithm is 1 − O(ε).

THEOREM 3. There exist instances with γ = ε² / log(n) such that no algorithm can get a competitive ratio of 1 − o(ε).

(The proof of Theorem 3 is obtained by a modification of a similar theorem for random permutations presented in [AWY09].)
Also, our algorithm works when the polytope P_j is obtained as an LP relaxation of the actual problem. (There may be trivial ways of defining P_j such that its vertices correspond to the actual options; the motivation for allowing nontrivial relaxations is computational: recall that we need to be able to optimize linear functions over P_j.) To be precise, suppose that the set of options that could be used to satisfy a given request corresponds to some set of vectors, say I_j. Let the polytope P_j ⊇ I_j be an α-approximate relaxation of I_j if, for the profit vector w_j and for all x_j ∈ P_j, there is an oracle that returns a y_j ∈ I_j such that w_j · y_j ≥ α w_j · x_j. Given such an oracle, our algorithm achieves a competitive ratio of α − O(ε).

THEOREM 4. Given a resource allocation problem with an α-approximate relaxation, and for any ε > 0, we give an algorithm such that if γ = ε² / (log(1/ε) · (2 log(n) + log(1/ε))) then the competitive ratio of the algorithm is α − O(ε).

We prove Theorem 4 in the full version of the paper.
In fact, our results hold for the following more general model,
the adversarial stochastic input model. In each step, the adversary
adaptively chooses a distribution from which the request in that step is drawn. The adversary is constrained to pick the distributions in one of the following two ways. In the first case, we assume that a target objective value OPT_T is given to the algorithm, and that the adversary is constrained to pick distributions such that the fractional optimum solution of each of the corresponding distribution instances is at least OPT_T (or at most OPT_T for minimization problems). The competitive ratio is defined with respect to OPT_T. In the second case, we are not given a target, but the adversary is constrained to pick distributions so that the fractional optimum of each of the corresponding distribution instances is the same, which is the benchmark with respect to which the competitive ratio is defined.
Note that while the i.i.d model can be reduced to the random permutation model, these generalizations are incomparable to the random permutation model as they allow the input to vary over time. Also, the constraint that each of the distribution instances has a large optimum value distinguishes this from the worst-case model. This constraint in general implies that the distribution must contain a sufficiently rich variety of requests in order for the corresponding distribution instance to have a high optimum. To truly simulate the worst-case model, in every step the adversary would choose a "deterministic distribution", that is, a distribution supported on a single request. Then the distribution instance will simply have m copies of this single request and hence will not have a high optimum. For instance, consider online bipartite b-matching where each resource is a node on one side of a bipartite graph with the capacity c_i denoting the number of nodes it can be matched to, and the requests are nodes on the other side of the graph and can be matched to at most one node. A deterministic distribution in this case corresponds to a single online node, and if that node is repeated m times then the optimum for that instance is just the weighted (by c_i) degree of that node. If the adversary only picks such deterministic distributions then he is constrained to pick nodes of very high degree, thus making it easy for the algorithm to match them.
We refer the reader to Section 7 for a discussion on several prob-
lems that are special cases of the resource allocation framework
and have been previously considered. Here, we discuss one special case: the adwords problem.
2.2.1 The Adwords problem
In the i.i.d Adwords problem, there are n bidders, and each bidder i has a daily budget of B_i dollars. Keywords arrive online, with keyword j having an (unknown) probability p_j of arriving in any given step. For every keyword j, each bidder submits a bid, b_{ij}, which is the profit obtained by the algorithm on allocating keyword j to bidder i. The objective is to maximize the profit, subject to the constraint that no bidder is charged more than his budget. Here, the resources are the daily budgets of the bidders, the requests are the keywords, and the options are once again the bidders. The amount of resource consumed and the profit are both b_{ij}.
For this problem, with no bounds on γ, we show that the greedy algorithm has a competitive ratio of 1 − 1/e. For our results for the adwords problem with bounded γ, see Section 7.1.
THEOREM 5. The greedy algorithm achieves a competitive ratio of 1 − 1/e for the Adwords problem in the adversarial stochastic input model with no assumptions on the bid to budget ratio.

We note here that the competitive ratio of 1 − 1/e is tight for the greedy algorithm [GM08]. It is however not known to be tight for an arbitrary algorithm.
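For concreteness, here is a minimal sketch of one standard formalization of the greedy rule analyzed in Theorem 5: allocate each arriving keyword to the bidder from whom the largest profit can still be collected. The data representation and tie-breaking are our own assumptions, and the sketch omits everything needed for the paper's analysis.

```python
def greedy_adwords(budgets, keyword_stream, bids):
    """Greedy for Adwords: allocate each arriving keyword j to the bidder i
    maximizing the profit collectible right now, i.e. min(b_ij, remaining
    budget of i). budgets: dict i -> B_i; bids: dict (i, j) -> b_ij."""
    remaining = dict(budgets)
    profit = 0.0
    for j in keyword_stream:
        best_i, best_gain = None, 0.0
        for i in remaining:
            gain = min(bids.get((i, j), 0.0), remaining[i])
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is not None:       # charge the chosen bidder and collect profit
            remaining[best_i] -= best_gain
            profit += best_gain
    return profit
```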
2.3 Fast algorithms for very large LPs
Charles et al. [CCD+10] consider the following problem: given a bipartite graph G = (L, R, E) where m = |L| ≫ |R| = n, does there exist an assignment M : L → R with (j, M(j)) ∈ E for all j ∈ L, and such that for every vertex i ∈ R, |M⁻¹(i)| ≥ B_i, for some given values B_i? They gave an algorithm that runs in time linear in the number of edges of an induced subgraph obtained by taking a random sample from R of size O(m log n / (min_i{B_i} ε²)), for a gap-version of the problem with gap ε. (In fact, the algorithm makes a single pass through this graph.) When min_i{B_i} is reasonably large, such an algorithm is very useful in a variety of applications involving ad assignment for online advertising.
We consider a generalization of the above problem (that corresponds to the resource allocation framework). In fact, we consider the following mixed covering-packing problem. Suppose that there are n_1 packing constraints, one for each i ∈ {1..n_1}, of the form Σ_{j=1}^m a_{i,j} · x_j ≤ c_i, and n_2 covering constraints, one for each i ∈ {1..n_2}, of the form Σ_{j=1}^m b_{i,j} · x_j ≥ d_i. Each x_j is constrained to be in P_j. Does there exist a feasible solution to this system of constraints? The gap-version of this problem is as follows. Distinguish between the two cases:
YES: There is a feasible solution.
NO: There is no feasible solution even if all of the c_i's are multiplied by 1 + ε and all of the d_i's are multiplied by 1 − ε.
We note that solving (offline) an optimization problem in the resource allocation framework can be reduced to the above problem through a binary search on the objective function value.
Suppose as in [CCD+10] that m is much larger than n. Assume that solving the following costs unit time: given j and v, find the x_j ∈ P_j that maximizes v · x_j. Let

γ = max( {a_{i,j} · x_j / c_i : i ∈ [n_1], j ∈ [m]} ∪ {b_{i,j} · x_j / d_i : i ∈ [n_2], j ∈ [m]} ).
THEOREM 6. There is an algorithm that solves the gap version of the mixed covering-packing problem with a running time of O(γ m log n / ε²).
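To make the reduction mentioned above concrete, here is a rough sketch of the binary search on the objective value; `gap_solver` is a hypothetical stand-in for the algorithm of Theorem 6, assumed to answer the YES/NO question when a given profit target is added as one extra covering constraint.

```python
def approx_optimize(gap_solver, lo, hi, eps):
    """Binary search on the objective value W: treat "profit >= W" as one more
    covering constraint and ask the gap solver whether the resulting mixed
    covering-packing system is feasible. gap_solver(W) is assumed to return
    True on YES instances and False on NO instances (up to the 1 +/- eps gap)."""
    while hi - lo > eps * max(1.0, abs(hi)):
        mid = (lo + hi) / 2.0
        if gap_solver(mid):
            lo = mid   # a solution of value about mid exists; search higher
        else:
            hi = mid   # no solution of value mid; search lower
    return lo
```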
Applications to online advertising:
The matching problem introduced by [CCD+10] was motivated by the problem of computing the available inventory for display ad allocation (see the original paper for details). In fact, the matching problem was a simplified version of the real problem, which fits into the resource allocation framework. Moreover, such algorithms are used in multiple ways. For instance, although the technique of Devanur and Hayes [DH09] was originally designed to solve the purely online problem, it can be used in the PAC model where the algorithm can make use of a prediction of the future arrival of requests (see for instance Vee, Vassilvitskii and Shanmugasundaram [VVS10]). The key technique is to formulate an LP relaxation of the problem and learn the optimal dual variables using the prediction; these duals can then be used for the allocation online. Even if the prediction is not entirely accurate, we note that such an approach has certain advantages. This motivates the problem of finding the optimal duals. We observe that our algorithm can also be used to compute near optimal duals, which can then be used to do the allocation online. Problems such as the Display ad allocation problem (please see the full version of the paper for details) can benefit from such an algorithm.
A similar approach was considered by Abrams, Mendelevitch and Tomlin [AMT07] for the following problem motivated by sponsored search auctions: for each query j, one can show an advertiser in each of the K slots. Each advertiser i bids a certain amount on each query j, and has a daily budget. However, the cost to an advertiser depends on the entire ordered set of advertisers shown (called a slate), based on the rules of the auction. Given the set of queries that arrive in a day (which in practice is an estimate of the queries expected rather than the actual queries), the goal is to schedule a slate of advertisers for each query such that the total cost to each advertiser is within the budget, while maximizing a given objective such as the total revenue or the social welfare. This problem is modeled as an LP and a column-generation approach is suggested to solve it. Also, many compromises are made, in terms of limiting the number of queries, etc., due to the difficulties in solving an LP of very large size. We observe that this LP fits in the resource allocation framework and thus can be solved quickly using our algorithm.
3. MIN-MAX VERSION
In this section, we solve a slightly simplified version of the gen-
eral online resource allocation problem, which we call the min-max
version. In this problem, m requests arrive online, and each of them
must be served. The objective is to minimize the maximum fraction
of any resource consumed. (There is no profit.) The following LP
describes it formally.
minimize  λ
s.t.  ∀ i, Σ_{j,k} a(i, j, k) x_{j,k} ≤ λ c_i
      ∀ j, Σ_k x_{j,k} = 1,
      ∀ j, k, x_{j,k} ≥ 0.
For ease of illustration, we assume that the requests arrive i.i.d
(unknown distribution) in the following proof. At the end of this
section, we show that the proof holds for the adversarial stochastic
input model also.
The algorithm proceeds in steps. Let λ* denote the fractional optimal objective value of the distribution instance of this problem. Let X_i^t be the random variable indicating the amount of resource i consumed during step t, that is, X_i^t = a(i, j, k) if in step t, request j was chosen and was served using option k. Let S_i^T = Σ_{t=1}^T X_i^t be the total amount of resource i consumed in the first T steps. Let γ = max_{i,j,k} {a(i, j, k)/c_i}, which implies that for all i, j and k, a(i, j, k) ≤ γ c_i. Let φ_i^t = (1 + ε)^{S_i^t/(γ c_i)}. For the sake of convenience, we let S_i^0 = 0 and φ_i^0 = 1 for all i. The algorithm is as follows.

ALG Min-max.
In step t + 1, on receiving request j, use option

arg min_k { Σ_i a(i, j, k) φ_i^t / c_i }.
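A minimal transliteration of ALG Min-max for the unit-simplex case follows (our own sketch, with an assumed data layout: for each arriving request j, a[i][k] holds a(i, j, k)); it maintains the potentials φ_i^t and picks the option minimizing the weighted consumption, exactly as in the rule above.

```python
def alg_min_max(requests, c, eps, gamma):
    """ALG Min-max (unit-simplex case): phi_i = (1 + eps) ** (S_i / (gamma * c_i));
    on each request choose the option k minimizing sum_i a(i, j, k) * phi_i / c_i.
    `requests` yields, for each arriving request j, a matrix a with a[i][k] = a(i, j, k)."""
    n = len(c)
    S = [0.0] * n                                   # S_i: resource i consumed so far
    for a in requests:
        phi = [(1 + eps) ** (S[i] / (gamma * c[i])) for i in range(n)]
        K = len(a[0])
        k_best = min(range(K),
                     key=lambda k: sum(a[i][k] * phi[i] / c[i] for i in range(n)))
        for i in range(n):                          # serve the request with option k_best
            S[i] += a[i][k_best]
    return max(S[i] / c[i] for i in range(n))       # fraction of the most-used resource
```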
LEMMA 7. The algorithm ALG Min-max described above approximates λ* within a factor of (1 + ε), with probability at least 1 − δ, where δ = n exp(−ε² λ* / (4γ)).
We will prove Lemma 7 through a series of lemmas, namely Lemmas 8, 9 and 10. Before we begin the proof, we give some intuition. Consider a hypothetical algorithm, call it Pure-random, that knows the distribution. Let x*_j denote the optimal fractional solution to the distribution instance. Pure-random is a non-adaptive algorithm which uses x*_j to satisfy request j, i.e., it serves request j using option k with probability x*_{j,k}. Suppose we wanted to prove a bound on the performance of Pure-random, that is, show that with high probability Pure-random is within 1 + O(ε) of the optimum,
References
[AHK05] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications.
[BJN07] N. Buchbinder, K. Jain, and J. Naor. Online primal-dual algorithms for maximizing ad-auctions revenue.
[DH09] N. R. Devanur and T. P. Hayes. The adwords problem: online keyword matching with budgeted bidders under random permutations.
[GK98] N. Garg and J. Könemann. Faster and simpler algorithms for multicommodity flow and other fractional packing problems.
[PST91] S. A. Plotkin, D. B. Shmoys, and É. Tardos. Fast approximation algorithms for fractional packing and covering problems.