
Showing papers by "Asaf Shapira published in 2007"


Book ChapterDOI
Eyal Even-Dar1, Asaf Shapira2
12 Dec 2007
TL;DR: Very simple and efficient algorithms are provided for solving the spread maximization problem in the context of the well-studied probabilistic voter model, and it is shown that the most natural heuristic solution, which picks the nodes in the network with the highest degree, is indeed the optimal solution.
Abstract: We consider the spread maximization problem that was defined by Domingos and Richardson [6,15]. In this problem, we are given a social network represented as a graph and are required to find the set of the most "influential" individuals such that, by introducing them to a new technology, we maximize the expected number of individuals in the network that later adopt it. This problem has applications in viral marketing, where a company may wish to spread the rumor of a new product via the most influential individuals in popular social networks such as Myspace and Blogsphere. The spread maximization problem was recently studied in several models of social networks [10,11,13]. In this short paper we study this problem in the context of the well-studied probabilistic voter model. We provide very simple and efficient algorithms for solving this problem. An interesting special case of our result is that the most natural heuristic solution, which picks the nodes in the network with the highest degree, is indeed the optimal solution.
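A minimal sketch of the degree heuristic that the paper proves optimal under the voter model (illustrative only; the function name and toy graph below are hypothetical, not from the paper):

```python
def top_degree_seeds(adj, k):
    """Return the k highest-degree nodes of a graph given as an
    adjacency dict; under the probabilistic voter model this simple
    heuristic is, per the paper, an optimal seed set."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]

# Toy undirected social network as adjacency lists.
network = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b"],
    "d": ["a"],
}
print(top_degree_seeds(network, 2))  # ['a', 'b']
```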

189 citations


Journal Article
TL;DR: In this article, the problem of testing the expansion of graphs with bounded degree d in sublinear time was studied, and it was shown that the algorithm proposed by Goldreich and Ron [9] (ECCC-2000) can distinguish with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²).
Abstract: We study the problem of testing the expansion of graphs with bounded degree d in sublinear time. A graph is said to be an α-expander if every vertex set U ⊆ V of size at most ½|V| has a neighborhood of size at least α|U|. We show that the algorithm proposed by Goldreich and Ron [9] (ECCC-2000) for testing the expansion of a graph distinguishes with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²). This improves a recent result of Czumaj and Sohler [3] (FOCS-07), who showed that this algorithm can distinguish between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²/log n). It also improves a recent result of Kale and Seshadhri [12] (ECCC-2007), who showed that this algorithm can distinguish between α-expanders and graphs which are ε-far from having expansion at least Ω(α²) with twice the maximum degree. Our methods combine the techniques of [3], [9] and [12].
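The Goldreich–Ron tester analyzed above is built on counting collisions among endpoints of short random walks. A rough sketch of that statistic follows; the parameter names and the lazy-walk details are our own simplification, not the paper's exact algorithm:

```python
import random

def endpoint_collisions(adj, start, num_walks, walk_len, d):
    """Run num_walks lazy random walks of length walk_len from `start`
    in a graph of maximum degree d, and count pairwise collisions among
    the walk endpoints. On a good expander the endpoint distribution is
    close to uniform, so the collision count is near its minimum; a graph
    far from being an expander yields noticeably more collisions."""
    endpoints = []
    for _ in range(num_walks):
        v = start
        for _ in range(walk_len):
            # Lazy step: move to each neighbor with prob. 1/(2d), else stay put.
            if random.random() < len(adj[v]) / (2 * d):
                v = random.choice(adj[v])
        endpoints.append(v)
    return sum(
        endpoints[i] == endpoints[j]
        for i in range(len(endpoints))
        for j in range(i + 1, len(endpoints))
    )
```

The actual tester repeats this from random start vertices and compares the count against a threshold calibrated to the target expansion.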

50 citations


Proceedings ArticleDOI
07 Jan 2007
TL;DR: An O(n^2.575) time algorithm for the All-Pairs Bottleneck Paths (APBP) problem, with the exponent derived from that of fast matrix multiplication; a slight modification computes shortest paths of maximum bottleneck weight (APBSP) in O(n^2.86) time.
Abstract: Let G = (V, E, w) be a directed graph, where w : V → R is an arbitrary weight function defined on its vertices. The bottleneck weight, or the capacity, of a path is the smallest weight of a vertex on the path. For two vertices u, v the bottleneck weight, or the capacity, from u to v, denoted c(u, v), is the maximum bottleneck weight of a path from u to v. In the All-Pairs Bottleneck Paths (APBP) problem we have to find the bottleneck weights for all ordered pairs of vertices. Our main result is an O(n^2.575) time algorithm for the APBP problem. The exponent is derived from the exponent of fast matrix multiplication. Our algorithm is the first sub-cubic algorithm for this problem. Unlike the sub-cubic algorithm for the all-pairs shortest paths (APSP) problem, which only applies to bounded (or relatively small) integer edge or vertex weights, the algorithm presented for the APBP problem works for arbitrarily large vertex weights. The APBP problem has numerous applications, and several interesting problems that have recently attracted attention can be reduced to it with no asymptotic loss in the running times of the known algorithms for these problems. Some examples are a result of Vassilevska and Williams [STOC 2006] on finding a triangle of maximum weight, a result of Bender et al. [SODA 2001] on computing least common ancestors in DAGs, and a result of Kowaluk and Lingas [ICALP 2005] on finding maximum witnesses for boolean matrix multiplication. Thus, the APBP problem provides a uniform framework for these applications. For some of these problems, we can in fact show that their complexity is equivalent to that of the APBP problem. A slight modification of our algorithm enables us to compute shortest paths of maximum bottleneck weight. Let d(u, v) denote the (unweighted) distance from u to v, and let sc(u, v) denote the maximum bottleneck weight of a path from u to v having length d(u, v). The All-Pairs Bottleneck Shortest Paths (APBSP) problem is to compute sc(u, v) for all ordered pairs of vertices. We present an algorithm for the APBSP problem whose running time is O(n^2.86).
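For contrast with the paper's O(n^2.575) matrix-multiplication-based algorithm, a straightforward cubic baseline for APBP can be written as a Floyd–Warshall-style dynamic program (a hypothetical sketch, not the paper's algorithm):

```python
def all_pairs_bottleneck(w, edges):
    """Cubic DP for all-pairs bottleneck capacities with vertex weights:
    c[u][v] is the maximum, over u->v paths, of the smallest vertex
    weight on the path; unreachable pairs stay at minus infinity."""
    n = len(w)
    NEG = float("-inf")
    c = [[NEG] * n for _ in range(n)]
    for u in range(n):
        c[u][u] = w[u]                      # trivial one-vertex path
    for u, v in edges:
        c[u][v] = max(c[u][v], min(w[u], w[v]))
    for k in range(n):                      # allow k as an intermediate vertex
        for u in range(n):
            for v in range(n):
                c[u][v] = max(c[u][v], min(c[u][k], c[k][v]))
    return c

# Path 0 -> 1 -> 2 with vertex weights 5, 1, 7 has capacity min(5, 1, 7) = 1.
caps = all_pairs_bottleneck([5, 1, 7], [(0, 1), (1, 2)])
print(caps[0][2])  # 1
```

The inner update is a (max, min) product of the capacity matrix with itself, which is exactly the operation the paper speeds up via fast matrix multiplication.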

44 citations


Proceedings ArticleDOI
21 Oct 2007
TL;DR: For any r ≥ 3, an O(n) time randomized algorithm for constructing regular partitions of r-uniform hypergraphs is given, improving the previous O(n^{2r-1}) time (deterministic) algorithms; the algorithm will find a small regular partition in the case that one exists.
Abstract: We show that any partition-problem of hypergraphs has an O(n) time approximate partitioning algorithm and an efficient property tester. This extends the results of Goldreich, Goldwasser and Ron, who obtained similar algorithms for the special case of graph partition problems in their seminal paper (1998). The partitioning algorithm is used to obtain the following results:
- We derive a surprisingly simple O(n) time algorithmic version of Szemerédi's regularity lemma. Unlike all the previous approaches for this problem, which were only guaranteed to find partitions of tower-size, our algorithm will find a small regular partition in the case that one exists;
- For any r ≥ 3, we give an O(n) time randomized algorithm for constructing regular partitions of r-uniform hypergraphs, thus improving the previous O(n^{2r-1}) time (deterministic) algorithms.
The property testing algorithm is used to unify several previous results, and to obtain the partition densities for the above problems (rather than the partitions themselves) using only poly(1/ε) queries and constant running time.

37 citations


Proceedings ArticleDOI
07 Jan 2007
TL;DR: A simpler construction is given, which applies the replacement product (only twice!) to turn the Cayley expanders of [4], whose degree is polylog n, into constant-degree expanders.
Abstract: We describe a short and easy-to-analyze construction of constant-degree expanders. The construction relies on the replacement product, applied by [14] to give an iterative construction of bounded-degree expanders. Here we give a simpler construction, which applies the replacement product (only twice!) to turn the Cayley expanders of [4], whose degree is polylog n, into constant-degree expanders. This enables us to prove the required expansion using a new simple combinatorial analysis of the replacement product (instead of the spectral analysis used in [14]).
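A sketch of the replacement product itself, for simple regular graphs given as ordered adjacency lists (our own illustrative encoding, with the rotation map implicit in list positions; this is not the paper's code):

```python
def replacement_product(G, H):
    """Replacement product of a D-regular graph G with a D-vertex,
    d-regular graph H. Each vertex v of G becomes a "cloud", a copy of H
    on vertices (v, 0), ..., (v, D-1); inside a cloud the edges of H are
    kept, and (v, i) gains one inter-cloud edge to the cloud of G's i-th
    neighbor of v. The result is (d+1)-regular."""
    R = {(v, i): [(v, j) for j in H[i]] for v in G for i in range(len(G[v]))}
    for v, nbrs in G.items():
        for i, u in enumerate(nbrs):
            j = G[u].index(v)       # position of v in u's list (simple graph)
            R[(v, i)].append((u, j))
    return R

# Triangle (2-regular) combined with a single edge (1-regular on 2 vertices).
G = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
H = {0: [1], 1: [0]}
R = replacement_product(G, H)
print(len(R))  # 6 vertices, each of degree 1 + 1 = 2
```

The point of the operation is that the degree of the product is governed by H alone, which is how a polylog-degree expander can be driven down to constant degree.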

35 citations


Journal Article
TL;DR: It is shown that every hereditary graph property is testable with a constant number of queries provided that every sufficiently large induced subgraph of the input graph has poor expansion.
Abstract: We study graph properties that are testable for bounded-degree graphs in time independent of the input size. Our goal is to distinguish between graphs having a predetermined graph property and graphs that are far from every graph having that property. It is well known that in the bounded-degree graph model (where two graphs are considered "far" if they differ in $\varepsilon n$ edges for a positive constant $\varepsilon$), many graph properties cannot be tested even with a constant or even with a polylogarithmic number of queries. Therefore in this paper we focus our attention on testing graph properties for special classes of graphs. Specifically, we show that every hereditary graph property is testable with a constant number of queries provided that every sufficiently large induced subgraph of the input graph has poor expansion. This result implies that, for example, any hereditary property (e.g., $k$-colorability, $H$-freeness, etc.) is testable in the bounded-degree graph model for planar graphs, graphs with bounded genus, interval graphs, etc. No such results have been known before, and prior to our work, very few graph properties have been known to be testable with a constant number of queries for general graph classes in the bounded-degree graph model.

9 citations


Book ChapterDOI
16 Jul 2007
TL;DR: It turns out that for the standard notion of a regular partition one can construct a graph that has very distinct regular partitions, whereas for a stronger notion that has recently been studied, all regular partitions of the same graph must be very "similar".
Abstract: The regularity lemma of Szemerédi gives a concise approximate description of a graph via a so-called regular partition of its vertex set. In this paper we address the following problem: can a graph have two "distinct" regular partitions? It turns out that (as observed by several researchers) for the standard notion of a regular partition, one can construct a graph that has very distinct regular partitions. On the other hand, we show that for the stronger notion of a regular partition that has been recently studied, all such regular partitions of the same graph must be very "similar". En route, we also give a short argument for deriving a recent variant of the regularity lemma, obtained independently by Rödl and Schacht ([11]) and Lovász and Szegedy ([9],[10]), from a previously known variant of the regularity lemma due to Alon et al. [2]. The proof also provides a deterministic polynomial time algorithm for finding such partitions.

3 citations


Posted Content
TL;DR: The first result of this paper states that the edge-deletion problem can be efficiently approximated for any monotone property; the second shows that such approximation is essentially best possible, answering a question of Yannakakis [1981], who asked if it is possible to find a large and natural family of graph properties for which computing E_P is NP-hard.
Abstract: A graph property is monotone if it is closed under removal of vertices and edges. In this paper we consider the following edge-deletion problem: given a monotone property P and a graph G, compute the smallest number of edge deletions that are needed in order to turn G into a graph satisfying P. We denote this quantity by E_P(G). Our first result states that for any monotone graph property P, any \epsilon > 0 and n-vertex input graph G, one can approximate E_P(G) up to an additive error of \epsilon n^2. Our second main result shows that such approximation is essentially best possible: for most properties, it is NP-hard to approximate E_P(G) up to an additive error of n^{2-\delta}, for any fixed positive \delta. The proof requires several new combinatorial ideas and involves tools from extremal graph theory together with spectral techniques. Interestingly, prior to this work it was not even known that computing E_P(G) precisely for dense monotone properties is NP-hard. We thus answer (in a strong form) a question of Yannakakis raised in 1981.
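To make the quantity E_P(G) concrete, here is a brute-force (exponential-time) computation of it for small graphs, with triangle-freeness standing in for the monotone property P (hypothetical helper names; the paper's contribution is the additive \epsilon n^2 approximation, not this enumeration):

```python
from itertools import combinations

def edge_deletion_distance(edges, n, satisfies):
    """Brute-force E_P(G): the minimum number of edge deletions needed
    to turn the n-vertex graph with the given edge list into a graph
    satisfying the property checked by `satisfies`."""
    edges = list(edges)
    for k in range(len(edges) + 1):          # try 0, 1, 2, ... deletions
        for removed in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in set(removed)]
            if satisfies(kept, n):
                return k
    return len(edges)

def triangle_free(edges, n):
    """Monotone property P: no three vertices are pairwise adjacent."""
    es = set(map(frozenset, edges))
    return not any(
        {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= es
        for a, b, c in combinations(range(n), 3)
    )

# A triangle needs exactly one deletion to become triangle-free.
print(edge_deletion_distance([(0, 1), (1, 2), (0, 2)], 3, triangle_free))  # 1
```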

2 citations