Author

Martin Pál

Bio: Martin Pál is an academic researcher from Google. The author has contributed to research in the topics of approximation algorithms and stochastic optimization. The author has an h-index of 32, has co-authored 67 publications, and has received 4,736 citations. Previous affiliations of Martin Pál include Cornell University and Rutgers University.


Papers
Journal ArticleDOI
TL;DR: A randomized (1 - 1/e)-approximation is given for maximizing any monotone submodular function subject to an arbitrary matroid constraint in the value oracle model, using a continuous greedy process combined with a variant of pipage rounding.
Abstract: Let $f:2^X \rightarrow \cal R_+$ be a monotone submodular set function, and let $(X,\cal I)$ be a matroid. We consider the problem ${\rm max}_{S \in \cal I} f(S)$. It is known that the greedy algorithm yields a $1/2$-approximation [M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, Math. Programming Stud., no. 8 (1978), pp. 73-87] for this problem. For certain special cases, e.g., ${\rm max}_{|S| \leq k} f(S)$, the greedy algorithm yields a $(1-1/e)$-approximation. It is known that this is optimal both in the value oracle model (where the only access to $f$ is through a black box returning $f(S)$ for a given set $S$) [G. L. Nemhauser and L. A. Wolsey, Math. Oper. Res., 3 (1978), pp. 177-188] and for explicitly posed instances assuming $P \neq NP$ [U. Feige, J. ACM, 45 (1998), pp. 634-652]. In this paper, we provide a randomized $(1-1/e)$-approximation for any monotone submodular function and an arbitrary matroid. The algorithm works in the value oracle model. Our main tools are a variant of the pipage rounding technique of Ageev and Sviridenko [J. Combin. Optim., 8 (2004), pp. 307-328], and a continuous greedy process that may be of independent interest. As a special case, our algorithm implies an optimal approximation for the submodular welfare problem in the value oracle model [J. Vondrak, Proceedings of the 38th ACM Symposium on Theory of Computing, 2008, pp. 67-74]. As a second application, we show that the generalized assignment problem (GAP) is also a special case; although the reduction requires $|X|$ to be exponential in the original problem size, we are able to achieve a $(1-1/e-o(1))$-approximation for GAP, simplifying previously known algorithms. Additionally, the reduction enables us to obtain approximation algorithms for variants of GAP with more general constraints.
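For the cardinality-constrained special case ${\rm max}_{|S| \leq k} f(S)$ mentioned in the abstract, the $(1-1/e)$ guarantee is already achieved by the simple greedy algorithm. The minimal Python sketch below (ours, not from the paper; the coverage function f and all names are illustrative choices) shows that greedy; the paper's continuous greedy and pipage rounding machinery for general matroids is considerably more involved and is not reproduced here.

```python
def greedy_max(f, ground_set, k):
    """Greedy for max f(S) subject to |S| <= k: repeatedly add the element
    with the largest marginal gain. For monotone submodular f this gives a
    (1 - 1/e)-approximation."""
    chosen = set()
    for _ in range(k):
        best_elem, best_gain = None, 0.0
        for x in ground_set - chosen:
            gain = f(chosen | {x}) - f(chosen)  # marginal value of adding x
            if gain > best_gain:
                best_elem, best_gain = x, gain
        if best_elem is None:  # no element adds value; stop early
            break
        chosen.add(best_elem)
    return chosen

# Illustrative monotone submodular f: coverage of a small set system.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_max(f, set(sets), k=2))  # picks sets 3 and 1, covering 5 elements
```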

761 citations

Proceedings Article
01 Jan 2007
TL;DR: The generalized assignment problem (GAP) is a special case of the problem; although the reduction requires |N| to be exponential in the original problem size, the recent (1 - 1/e)-approximation for GAP by Fleischer et al. [10] can be interpreted in this framework.

437 citations

Proceedings ArticleDOI
23 Oct 2005
TL;DR: A quasi-polynomial time O(log OPT) approximation is obtained for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time, with interesting implications for the approximability of several basic optimization problems.
Abstract: Given an arc-weighted directed graph $G = (V, A, \ell)$ and a pair of nodes $s, t$, we seek to find an $s$-$t$ walk of length at most $B$ that maximizes some given function $f$ of the set of nodes visited by the walk. The simplest case is when we seek to maximize the number of nodes visited: this is called the orienteering problem. Our main result is a quasi-polynomial time algorithm that yields an $O(\log \mathrm{OPT})$ approximation for this problem when $f$ is a given submodular set function. We then extend it to the case when a node $v$ is counted as visited only if the walk reaches $v$ in its time window $[R(v), D(v)]$. We apply the algorithm to obtain several new results. First, we obtain an $O(\log \mathrm{OPT})$ approximation for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time. This captures the time window problem considered earlier, for which, even in undirected graphs, the best approximation ratio known [Bansal, N et al. (2004)] is $O(\log^2 \mathrm{OPT})$. The second application is an $O(\log^2 k)$ approximation for the k-TSP problem in directed graphs (satisfying the asymmetric triangle inequality). This is the first non-trivial approximation algorithm for this problem. The third application is an $O(\log^2 k)$ approximation (in quasi-polynomial time) for the group Steiner problem in undirected graphs, where $k$ is the number of groups. This improves earlier ratios (Garg, N et al.) by a logarithmic factor and almost matches the inapproximability threshold on trees (Halperin and Krauthgamer, 2003). This connection to group Steiner trees also enables us to prove that the problem we consider is hard to approximate to a ratio better than $\Omega(\log^{1-\epsilon} \mathrm{OPT})$, even in undirected graphs. Even though our algorithm runs in quasi-polynomial time, we believe that the implications for the approximability of several basic optimization problems are interesting.
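As a rough illustration of the recursive idea behind such quasi-polynomial algorithms (a sketch under our own simplifications, not the paper's algorithm): a walk visiting m nodes can be split at its middle node into two walks visiting about m/2 nodes each, so one can enumerate the midpoint and a budget split and recurse to logarithmic depth. The toy Python below maximizes the number of nodes visited (plain orienteering); the names best_walk and budgets are ours, dist is assumed to hold shortest-path distances obeying the triangle inequality, and the coarse budget discretization is our simplification.

```python
def budgets(b, steps=8):
    # Coarse discretization of the budget split; a simplification on our part.
    return [b * i / steps for i in range(steps + 1)]

def best_walk(s, t, budget, depth, nodes, dist):
    """Return a set of nodes visitable by an s-t walk of length <= budget."""
    if dist[s][t] > budget:
        return None                    # not even the direct walk fits
    if depth == 0:
        return {s, t}                  # base case: the direct s-t hop
    best = {s, t}
    for v in nodes:                    # guess the middle node of the walk
        for b1 in budgets(budget):     # guess how the budget is split
            left = best_walk(s, v, b1, depth - 1, nodes, dist)
            right = best_walk(v, t, budget - b1, depth - 1, nodes, dist)
            if left is not None and right is not None and len(left | right) > len(best):
                best = left | right
    return best

# Tiny example: path graph a-b-c with unit-length arcs in both directions.
dist = {"a": {"a": 0, "b": 1, "c": 2},
        "b": {"a": 1, "b": 0, "c": 1},
        "c": {"a": 2, "b": 1, "c": 0}}
print(best_walk("a", "c", 2, depth=2, nodes=set(dist), dist=dist))  # all 3 nodes
```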

272 citations

Proceedings ArticleDOI
13 Apr 2008
TL;DR: These are the first approximation algorithms in the literature with a tight worst-case guarantee for this NP-hard problem; in simulations they can obtain an aggregate throughput as much as 2.3 times more than that of the max-min fair allocation in 802.11b.
Abstract: In multi-rate wireless LANs, throughput-based fair bandwidth allocation can lead to drastically reduced aggregate throughput. To balance aggregate throughput while serving users in a fair manner, proportional fair or time-based fair scheduling has been proposed to apply at each access point (AP). However, since a realistic deployment of wireless LANs can consist of a network of APs, this paper considers proportional fairness in this much wider setting. Our technique is to intelligently associate users with APs to achieve optimal proportional fairness in a network of APs. We propose two approximation algorithms for periodical offline optimization. Our algorithms are the first approximation algorithms in the literature with a tight worst-case guarantee for the NP-hard problem. Our simulation results demonstrate that our algorithms can obtain an aggregate throughput which can be as much as 2.3 times more than that of the max-min fair allocation in 802.11b. While maintaining aggregate throughput, our approximation algorithms outperform the default user-AP association method in the 802.11b standard significantly in terms of fairness.
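For context, proportional fairness is commonly formalized as maximizing the sum of the logarithms of user throughputs. The sketch below is our illustration of that objective, not the paper's algorithms: it brute-forces the proportionally fair user-AP association on a tiny instance, assuming a toy model in which each AP time-shares equally among its associated users; all rates and names are hypothetical.

```python
# Toy model: a user with PHY rate r on an AP serving m users gets throughput
# r/m; proportional fairness maximizes the sum of log-throughputs.
from itertools import product
from math import log

rates = {("u1", "ap1"): 11.0, ("u1", "ap2"): 2.0,
         ("u2", "ap1"): 11.0, ("u2", "ap2"): 5.5,
         ("u3", "ap1"): 1.0,  ("u3", "ap2"): 5.5}
users, aps = ["u1", "u2", "u3"], ["ap1", "ap2"]

def pf_value(assoc):
    """Sum of log-throughputs under equal time-sharing at each AP."""
    load = {ap: sum(1 for u in users if assoc[u] == ap) for ap in aps}
    return sum(log(rates[u, assoc[u]] / load[assoc[u]]) for u in users)

# Brute-force the proportionally fair association on this tiny instance.
best = max((dict(zip(users, pick)) for pick in product(aps, repeat=len(users))),
           key=pf_value)
print(best, round(pf_value(best), 3))
```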

224 citations

Book ChapterDOI
09 Dec 2009
TL;DR: The insight is that ad impressions allow for free disposal, that is, advertisers are indifferent to, or prefer, being assigned more than n(a) impressions without changing the contract terms; an algorithm is provided that achieves a competitive ratio of 1 - 1/e.
Abstract: We study an online weighted assignment problem with a set of fixed nodes corresponding to advertisers and online arrival of nodes corresponding to ad impressions. Advertiser a has a contract for n(a) impressions, and each impression has a set of weighted edges to advertisers. The problem is to assign the impressions online so that while each advertiser a gets n(a) impressions, the total weight of edges assigned is maximized. Our insight is that ad impressions allow for free disposal, that is, advertisers are indifferent to, or prefer, being assigned more than n(a) impressions without changing the contract terms. This means that the value of an assignment only includes the n(a) highest-weighted items assigned to each node a. With free disposal, we provide an algorithm for this problem that achieves a competitive ratio of 1 - 1/e against the offline optimum, and show that this is the best possible ratio. We use a primal/dual framework to derive our results, applying a novel exponentially-weighted dual update rule. Furthermore, our algorithm can be applied to a general set of assignment problems, including the AdWords problem as a special case, matching the previously known 1 - 1/e competitive ratio.
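A simpler baseline conveys the free-disposal idea: greedily assign each arriving impression to the advertiser with the largest marginal gain, where the gain discounts the weight that would be displaced from an advertiser whose n(a) slots are full. Such a greedy is known to be 1/2-competitive; the paper's exponentially weighted dual update is what lifts the ratio to 1 - 1/e. The Python sketch below is ours, not the paper's algorithm, and all names are illustrative.

```python
import heapq

def assign(impression_edges, capacity, kept):
    """impression_edges: {advertiser: weight} for one arriving impression.
    kept: {advertiser: min-heap of currently kept weights}, each of size at
    most capacity[a]. Returns the advertiser chosen, or None."""
    best_a, best_gain = None, 0.0
    for a, w in impression_edges.items():
        # Weight displaced if a is full: its lowest currently kept weight.
        displaced = 0.0 if len(kept[a]) < capacity[a] else kept[a][0]
        if w - displaced > best_gain:
            best_a, best_gain = a, w - displaced
    if best_a is not None:
        if len(kept[best_a]) == capacity[best_a]:
            heapq.heappop(kept[best_a])   # free disposal: drop the lowest weight
        heapq.heappush(kept[best_a], impression_edges[best_a])
    return best_a

# Online stream of three impressions, two advertisers with one slot each.
capacity = {"a1": 1, "a2": 1}
kept = {"a1": [], "a2": []}
for edges in [{"a1": 3.0, "a2": 1.0}, {"a1": 5.0}, {"a2": 2.0}]:
    print(assign(edges, capacity, kept), kept)
```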

209 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Jan 2006
TL;DR: This book introduces fixed-parameter algorithms and parameterized complexity theory, presents algorithmic methods such as data reduction, depth-bounded search trees, dynamic programming, and tree decompositions, and works through selected case studies.
Abstract: Table of contents:
PART I: FOUNDATIONS. 1. Introduction to Fixed-Parameter Algorithms; 2. Preliminaries and Agreements; 3. Parameterized Complexity Theory - A Primer; 4. Vertex Cover - An Illustrative Example; 5. The Art of Problem Parameterization; 6. Summary and Concluding Remarks.
PART II: ALGORITHMIC METHODS. 7. Data Reduction and Problem Kernels; 8. Depth-Bounded Search Trees; 9. Dynamic Programming; 10. Tree Decompositions of Graphs; 11. Further Advanced Techniques; 12. Summary and Concluding Remarks.
PART III: SOME THEORY, SOME CASE STUDIES. 13. Parameterized Complexity Theory; 14. Connections to Approximation Algorithms; 15. Selected Case Studies; 16. Zukunftsmusik (German for "music of the future": an outlook).
References. Index.
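The book's illustrative example (chapter 4) pairs naturally with its chapter on depth-bounded search trees (chapter 8): for Vertex Cover, any edge (u, v) must have u or v in the cover, so branching on both choices to depth k decides covers of size at most k in O(2^k) branches. Below is a minimal Python sketch of this classic technique, written by us for illustration.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists.
    Classic depth-bounded search tree: O(2^k) branches."""
    if not edges:
        return set()                 # nothing left to cover
    if k == 0:
        return None                  # edges remain but the budget is exhausted
    u, v = edges[0]                  # branch on an arbitrary uncovered edge
    for pick in (u, v):              # one of u, v must be in any cover
        rest = [(x, y) for (x, y) in edges if pick not in (x, y)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# Example: a 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(vertex_cover(cycle, 2))  # e.g. {1, 3}
print(vertex_cover(cycle, 1))  # None
```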

1,730 citations

Journal ArticleDOI
TL;DR: This work shows that the uncoded optimum file assignment is NP-hard, develops a greedy strategy that is provably within a factor 2 of the optimum, and provides an efficient algorithm achieving a provably better approximation ratio of 1 - (1 - 1/d)^d, where d is the maximum number of helpers a user can be connected to.
Abstract: Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small-cell heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as “helpers”). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate this bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of 1 - (1 - 1/d)^d, where d is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex and can be further reduced to a linear program. We present numerical results comparing the proposed schemes.
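As a hedged illustration of the greedy placement idea, here is a sketch under a toy model of our own, not the paper's system model: each user fetches a file from a connected helper if some connected helper caches it, and from the base station otherwise; greedy repeatedly caches the (helper, file) pair that most reduces the popularity-weighted expected delay. All numbers, delays, and names below are hypothetical.

```python
# Toy model: hit via a connected helper costs D_HELPER, miss costs D_BS.
D_HELPER, D_BS = 1.0, 5.0
popularity = {"f1": 0.6, "f2": 0.3, "f3": 0.1}            # hypothetical Zipf-like
connected = {"u1": {"h1"}, "u2": {"h1", "h2"}, "u3": {"h2"}}
capacity = {"h1": 1, "h2": 1}

def expected_delay(cache):
    """Popularity-weighted expected delay over all users and files."""
    total = 0.0
    for u, helpers in connected.items():
        for f, p in popularity.items():
            hit = any(f in cache[h] for h in helpers)
            total += p * (D_HELPER if hit else D_BS)
    return total

# Greedy: while any helper has a free slot, cache the (helper, file) pair
# whose addition yields the smallest resulting expected delay.
cache = {h: set() for h in capacity}
while True:
    options = [(h, f) for h in capacity if len(cache[h]) < capacity[h]
               for f in popularity if f not in cache[h]]
    if not options:
        break
    h, f = min(options, key=lambda hf: expected_delay(
        {**cache, hf[0]: cache[hf[0]] | {hf[1]}}))
    cache[h].add(f)
print(cache, round(expected_delay(cache), 2))
```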

1,331 citations

Proceedings ArticleDOI
25 Mar 2012
TL;DR: The theoretical contribution of this paper lies in formalizing the distributed caching problem, showing that this problem is NP-hard, and presenting approximation algorithms that lie within a constant factor of the theoretical optimum.
Abstract: We suggest a novel approach to handle the ongoing explosive increase in the demand for video content in wireless/mobile devices. We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capacity. These helpers form a wireless distributed caching network that assists the macro base station by handling requests of popular files that have been cached. Due to the short distances between helpers and requesting devices, the transmission of cached files can be done very efficiently.

895 citations