Author

Carsten Lund

Other affiliations: Lynn University, AT&T Labs, Bell Labs
Bio: Carsten Lund is an academic researcher at AT&T. He has contributed to research on sampling (statistics) and traffic generation models, has an h-index of 41, and has co-authored 95 publications receiving 10,272 citations. Previous affiliations of Carsten Lund include Lynn University and AT&T Labs.


Papers
Journal ArticleDOI
TL;DR: It is proved that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P, and that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
Abstract: We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [1998], whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence, we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [1991]; hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees, and shortest superstring. We also improve upon the clique hardness results of Feige et al. [1996] and Arora and Safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
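In modern notation, the verifier described in this abstract yields the standard PCP characterization of NP (a restatement of the result above, not additional content from the paper):

```latex
% Every NP language has a probabilistic verifier using O(log n) random
% bits and O(1) proof queries, with completeness 1 and soundness 1/2.
\mathsf{NP} \;=\; \mathsf{PCP}\big(O(\log n),\, O(1)\big)
```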

1,501 citations

Journal ArticleDOI
TL;DR: It is proved that there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP, and that Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^{poly log n}).
Abstract: We prove results indicating that it is hard to compute efficiently good approximate solutions to the Graph Coloring, Set Covering, and other related minimization problems. Specifically, there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP. Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^{poly log n}). Similar results follow for related problems such as Clique Cover, Fractional Chromatic Number, Dominating Set, and others.
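Written out, the two hardness statements from this abstract are (display form only; thresholds exactly as stated above):

```latex
\exists\,\varepsilon > 0:\ \text{Graph Coloring is } n^{\varepsilon}\text{-inapproximable unless } \mathsf{P} = \mathsf{NP},
\\
\forall\,c < \tfrac{1}{4}:\ \text{Set Covering is } (c \log n)\text{-inapproximable unless } \mathsf{NP} \subseteq \mathsf{DTIME}\!\big(n^{\mathrm{poly}\log n}\big).
```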

1,025 citations

Journal ArticleDOI
TL;DR: This technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system, and it played a pivotal role in the recent proofs that IP = PSPACE and MIP = NEXP.
Abstract: A new algebraic technique for the construction of interactive proof systems is presented. Our technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system. This technique played a pivotal role in the recent proofs that IP = PSPACE [28] and that MIP = NEXP [4].
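The core of this algebraic technique is the sum-check protocol, in which a prover convinces a verifier of the value of an exponentially large sum while the verifier evaluates the underlying polynomial only once, at a random point. Below is a minimal, illustrative Python sketch of that protocol (the modulus, example polynomial, and helper names are mine, not the paper's):

```python
# Minimal sum-check protocol sketch. The prover convinces the verifier that
#     H = sum over (x1,...,xN) in {0,1}^N of g(x1,...,xN)  (mod P)
# while the verifier evaluates g at only one random point.
import random
from itertools import product

P = 2**31 - 1          # a prime modulus; an assumption of this sketch
N = 3                  # number of variables

def g(x):
    """Example low-degree polynomial in N variables over F_P."""
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x2 * x3 + x1 + 5) % P

def partial_sum_poly(g, fixed, i):
    """Honest prover: the univariate g_i(X) = sum over boolean suffixes of
    g(fixed..., X, b_{i+1}, ..., b_N), given by its values at X = 0, 1, 2."""
    def eval_at(t):
        total = 0
        for suffix in product([0, 1], repeat=N - i - 1):
            total = (total + g(list(fixed) + [t] + list(suffix))) % P
        return total
    return [eval_at(t) for t in (0, 1, 2)]   # point-value form

def eval_points(vals, t):
    """Evaluate the degree<=2 polynomial given by values at 0,1,2 (Lagrange)."""
    v0, v1, v2 = vals
    l0 = v0 * (t - 1) * (t - 2) * pow(2, P - 2, P)   # /(0-1)(0-2) = /2
    l1 = v1 * t * (t - 2) * (P - 1)                  # /(1-0)(1-2) = /-1
    l2 = v2 * t * (t - 1) * pow(2, P - 2, P)         # /(2-0)(2-1) = /2
    return (l0 + l1 + l2) % P

# Prover's claim: the full sum H over the boolean cube.
H = sum(g(list(x)) for x in product([0, 1], repeat=N)) % P

claim, fixed = H, []
for i in range(N):
    vals = partial_sum_poly(g, fixed, i)             # prover's message
    assert (vals[0] + vals[1]) % P == claim, "verifier rejects"
    r = random.randrange(P)                          # verifier's challenge
    claim = eval_points(vals, r)
    fixed.append(r)

# Final check: a single evaluation of g at the random point.
assert g(fixed) % P == claim
print("sum-check accepted: H =", H)
```

Soundness rests on the fact that a cheating prover must send a wrong low-degree polynomial in some round, and two distinct polynomials of degree d agree at a uniformly random point with probability at most d/P.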

751 citations

Journal ArticleDOI
TL;DR: It is shown that the class of languages having two-prover interactive proof systems is nondeterministic exponential time, and that to prove membership in languages in EXP, the honest provers need the power of EXP only.
Abstract: We determine the exact power of two-prover interactive proof systems introduced by Ben-Or, Goldwasser, Kilian, and Wigderson (1988). In this system, two all-powerful noncommunicating provers convince a randomizing polynomial-time verifier in polynomial time that the input x belongs to the language L. We show that the class of languages having two-prover interactive proof systems is nondeterministic exponential time. We also show that to prove membership in languages in EXP, the honest provers need the power of EXP only. The first part of the proof of the main result extends recent techniques of polynomial extrapolation used in the single-prover case by Lund, Fortnow, Karloff, Nisan, and Shamir. The second part is a verification scheme for multilinearity of a function in several variables held by an oracle, and can be viewed as an independent result on program verification. Its proof rests on combinatorial techniques employing a simple isoperimetric inequality for certain graphs.
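For reference, the property tested by the paper's second part, stated in the standard form (a definition supplied here for convenience, not quoted from the paper): a function of m variables is multilinear when it has degree at most one in each variable, i.e. it can be written as

```latex
% Multilinearity: degree at most 1 in each variable, so for every i and
% every fixing of the other coordinates, x_i -> f(...) is affine.
f(x_1,\dots,x_m) \;=\; \sum_{S \subseteq \{1,\dots,m\}} c_S \prod_{i \in S} x_i .
```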

601 citations

Journal ArticleDOI
TL;DR: This paper presents a model of traffic demands to support traffic engineering and performance debugging of large Internet Service Provider networks, and shows how to infer interdomain traffic demands using measurements collected at a smaller number of edge links: the peering links connecting the neighboring providers.
Abstract: Engineering a large IP backbone network without an accurate network-wide view of the traffic demands is challenging. Shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. In this paper, we present a model of traffic demands to support traffic engineering and performance debugging of large Internet Service Provider networks. By defining a traffic demand as a volume of load originating from an ingress link and destined to a set of egress links, we can capture and predict how routing affects the traffic traveling between domains. To infer the traffic demands, we propose a measurement methodology that combines flow-level measurements collected at all ingress links with reachability information about all egress links. We discuss how to cope with situations where practical considerations limit the amount and quality of the necessary data. Specifically, we show how to infer interdomain traffic demands using measurements collected at a smaller number of edge links: the peering links connecting the neighboring providers. We report on our experiences in deriving the traffic demands in the AT&T IP Backbone, by collecting, validating, and joining very large and diverse sets of usage, configuration, and routing data over extended periods of time. The paper concludes with a preliminary analysis of the observed dynamics of the traffic demands and a discussion of the practical implications for traffic engineering.
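As an illustration of the demand model (a minimal sketch with made-up record and table formats, not the paper's measurement pipeline): each demand is keyed by an ingress link and the set of egress links that can reach the flow's destination, and flow volumes are accumulated under that key.

```python
# Illustrative sketch of the traffic-demand model: a demand is a volume
# of load from an ingress link to a SET of feasible egress links.
# Record and table formats here are hypothetical, not the paper's.
from collections import defaultdict

# Flow-level measurements at ingress links: (ingress_link, dest_prefix, bytes)
flows = [
    ("in1", "10.0.0.0/8",     5_000),
    ("in1", "192.168.0.0/16", 2_000),
    ("in2", "10.0.0.0/8",     7_500),
]

# Reachability information: dest_prefix -> set of egress links that
# advertise a route to that prefix.
reachability = {
    "10.0.0.0/8":     frozenset({"out1", "out2"}),
    "192.168.0.0/16": frozenset({"out3"}),
}

def infer_demands(flows, reachability):
    """Aggregate flow volumes into demands keyed by
    (ingress link, set of egress links)."""
    demands = defaultdict(int)
    for ingress, prefix, volume in flows:
        egress_set = reachability.get(prefix)
        if egress_set is None:
            continue  # no route information for this prefix; skip the flow
        demands[(ingress, egress_set)] += volume
    return dict(demands)

for (ingress, egress_set), volume in infer_demands(flows, reachability).items():
    print(f"{ingress} -> {sorted(egress_set)}: {volume} bytes")
```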

484 citations


Cited by
Journal ArticleDOI
17 Aug 2008
TL;DR: This paper shows how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements and argues that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions.
Abstract: Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
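The interconnect this paper builds is a k-ary fat tree of identical k-port commodity switches; a quick sizing script using the standard fat-tree formulas (an illustration of the scaling claim, not code from the paper) shows where "tens of thousands of elements" comes from.

```python
# Back-of-the-envelope sizing for a k-ary fat tree built from identical
# k-port commodity switches.
def fat_tree(k: int):
    assert k % 2 == 0, "k must be even"
    hosts = k ** 3 // 4                  # (k/2) hosts per edge switch, (k/2) edge switches, k pods
    edge = aggregation = k * (k // 2)    # k pods, k/2 switches per layer per pod
    core = (k // 2) ** 2
    return {"hosts": hosts, "edge": edge, "aggregation": aggregation,
            "core": core, "switches": edge + aggregation + core}

# e.g. 48-port switches already support tens of thousands of hosts:
print(fat_tree(48))   # {'hosts': 27648, ..., 'switches': 2880}
```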

3,549 citations

MonographDOI
20 Apr 2009
TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.
Abstract: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory. Requiring essentially no background apart from mathematical maturity, the book can be used as a reference for self-study for anyone interested in complexity, including physicists, mathematicians, and other scientists, as well as a textbook for a variety of courses and seminars. More than 300 exercises are included with a selected hint set.

2,965 citations

01 Jan 1997
TL;DR: A survey of machine learning methods for handling data sets containing large amounts of irrelevant information can be found in this article, where the authors focus on two key issues: selecting relevant features and selecting relevant examples.
Abstract: In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area.
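As a concrete instance of the first issue the survey covers, here is a minimal filter-style feature selector (a generic illustration; the scoring function and names are mine, not a method from the survey): rank features by absolute correlation with the labels and keep the top k.

```python
# Minimal filter-style feature selection: score each feature by the
# absolute correlation of its column with the labels, keep the top k.
import numpy as np

def select_features(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k features most correlated with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1, denom)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))           # 10 features, mostly irrelevant
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.1, size=100)
print(select_features(X, y, k=2))        # expected: features 2 and 7
```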

2,947 citations

Journal ArticleDOI
TL;DR: It is proved that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms.
Abstract: Given a collection ℱ of subsets of S = {1,…,n}, set cover is the problem of selecting as few subsets as possible from ℱ such that their union covers S, and max k-cover is the problem of selecting k subsets from ℱ such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the approximation ratio achieved by the greedy algorithm (which is (1 - o(1)) ln n) and previous results of Lund and Yannakakis, which showed hardness of approximation within a ratio of (log₂ n)/2 ≈ 0.72 ln n. For max k-cover, we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that P ≠ NP.
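The greedy algorithm whose (1 - o(1)) ln n ratio this paper proves essentially optimal is easy to state; a standard sketch (not code from the paper): repeatedly choose the set covering the most still-uncovered elements.

```python
# Greedy set cover: repeatedly choose the set that covers the most
# still-uncovered elements. Achieves a (1 - o(1)) ln n approximation
# ratio, which this paper shows is essentially the best possible.
def greedy_set_cover(universe: set, subsets: list[set]) -> list[int]:
    """Return indices of chosen subsets whose union covers `universe`."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the subset covering the most uncovered elements.
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

S = set(range(1, 11))
F = [set(range(1, 6)), set(range(5, 11)), {1, 2}, {7}, set(range(3, 8))]
print(greedy_set_cover(S, F))   # [1, 0]: these two sets cover S
```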

2,941 citations