Topic

Las Vegas algorithm

About: Las Vegas algorithm is a research topic. Over its lifetime, 130 publications have been published within this topic, receiving 4,340 citations.
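A Las Vegas algorithm always returns a correct answer; only its running time is a random variable. As a minimal, generic illustration of the concept (not drawn from any paper below), randomized quickselect finds the k-th smallest element in expected linear time while never returning a wrong answer:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs.

    Las Vegas behaviour: the answer is always correct; only the running time
    depends on the random pivot choices (expected O(n), worst case O(n^2)).
    """
    assert 0 <= k < len(xs)
    while True:
        pivot = random.choice(xs)
        lows = [x for x in xs if x < pivot]
        pivots = [x for x in xs if x == pivot]
        highs = [x for x in xs if x > pivot]
        if k < len(lows):
            xs = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            xs = highs

print(quickselect([7, 2, 9, 4, 4, 1], 2))  # -> 4
```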


Papers
Proceedings ArticleDOI
19 May 2002
TL;DR: The crucial new idea underlying the first three results above is that of confirming matches by convolving vectors obtained by coding characters in the alphabet with non-boolean entries; in contrast, almost all previous pattern matching algorithms consider only boolean codes for the alphabet.
Abstract: This paper obtains the following results on pattern matching problems in which the text has length n and the pattern has length m. An O(n log m) time deterministic algorithm for the String Matching with Wildcards problem, even when the alphabet is large. An O(k log^2 m) time Las Vegas algorithm for the Sparse String Matching with Wildcards problem, where k ≪ n is the number of non-zeros in the text; we also give Las Vegas algorithms for the higher-dimensional version of this problem. As an application of the above, an O(n log^2 m) time Las Vegas algorithm for the Subset Matching and Tree Pattern Matching problems, and a Las Vegas algorithm for the Geometric Pattern Matching problem. Finally, an O(n log^2 m) time deterministic algorithm for Subset Matching and Tree Pattern Matching. The crucial new idea underlying the first three results is that of confirming matches by convolving vectors obtained by coding characters in the alphabet with non-boolean (i.e., rational or even complex) entries; in contrast, almost all previous pattern matching algorithms consider only boolean codes for the alphabet. The crucial new idea underlying the fourth result is a simpler method of shifting characters which ensures that each character occurs as a singleton in some shift.

159 citations
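The convolution idea can be made concrete with a small sketch. The NumPy code below illustrates the general FFT-based approach to String Matching with Wildcards: encode the wildcard as 0 and report alignments i where the sum over j of p_j · t_{i+j} · (p_j − t_{i+j})^2 vanishes (in the style of later work by Clifford and Clifford). It is a hedged sketch for intuition, not the paper's O(n log m) algorithm or its Las Vegas variants.

```python
import numpy as np

def correlate(a, b):
    """Cross-correlation of text array a with pattern array b via FFT.

    Returns an array of length len(a) - len(b) + 1 whose i-th entry is
    sum_j a[i + j] * b[j].
    """
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b[::-1], size)
    full = np.fft.irfft(fa * fb, size)
    return full[len(b) - 1 : len(a)]

def wildcard_matches(text, pattern, wildcard='*'):
    """Return alignments where pattern matches text; '*' matches anything.

    Encodes the wildcard as 0 and every other symbol as a positive integer,
    then checks where sum_j p_j * t_{i+j} * (p_j - t_{i+j})^2 == 0, expanded
    into three correlations so each is computable by FFT.
    """
    enc = lambda s: np.array(
        [0 if ch == wildcard else ord(ch) - ord('a') + 1 for ch in s], dtype=float)
    t, p = enc(text), enc(pattern)
    s = correlate(t, p**3) - 2 * correlate(t**2, p**2) + correlate(t**3, p)
    return [i for i, v in enumerate(np.round(s)) if v == 0]

print(wildcard_matches("abcabcab", "a*c"))  # -> [0, 3]
```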

Proceedings ArticleDOI
05 Jan 2014
TL;DR: Both algorithms are deterministic, and thus the first deterministic (2 - ε)-approximation algorithm for the diameter that takes subquadratic time in sparse graphs is presented.
Abstract: The diameter is a fundamental graph parameter and its computation is necessary in many applications. The fastest known way to compute the diameter exactly is to solve the All-Pairs Shortest Paths (APSP) problem. In the absence of fast algorithms, attempts were made to seek fast algorithms that approximate the diameter. In a seminal result Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] designed an algorithm that computes in O(n^2 + m√n) time an estimate D' for the diameter D in directed graphs with nonnegative edge weights, such that [EQUATION], where M is the maximum edge weight in the graph. In recent work, Roditty and Vassilevska W. [STOC 13] gave a Las Vegas algorithm that has the same approximation guarantee but improves the (expected) runtime to O(m√n). Roditty and Vassilevska W. also showed that unless the Strong Exponential Time Hypothesis fails, no O(n^{2-ε}) time algorithm for sparse unweighted undirected graphs can achieve an approximation ratio better than 3/2. Thus their algorithm is essentially tight for sparse unweighted graphs. For weighted graphs, however, the approximation guarantee can be meaningless, as M can be arbitrarily large. In this paper we exhibit two algorithms that achieve a genuine 3/2-approximation for the diameter, one running in O(m^{3/2}) time and one running in O(mn^{2/3}) time. Furthermore, our algorithms are deterministic, and thus we present the first deterministic (2 - ε)-approximation algorithm for the diameter that takes subquadratic time in sparse graphs. In addition, we address the question of obtaining an additive c-approximation for the diameter, i.e. an estimate D' such that D - c ≤ D' ≤ D. An extremely simple O(mn^{1-ε}) time algorithm achieves an additive n^ε-approximation; no better results are known. We show that for any ε > 0, getting an additive n^ε-approximation algorithm for the diameter running in O(n^{2-δ}) time for any δ > 2ε would falsify the Strong Exponential Time Hypothesis. Thus the simple algorithm is probably essentially tight for sparse graphs, and moreover, obtaining a subquadratic time additive c-approximation for any constant c is unlikely. Finally, we consider the problem of computing the eccentricities of all vertices in an undirected graph, i.e. the largest distance from each vertex. Roditty and Vassilevska W. [STOC 13] show that in O(m√n) time, one can compute for each v ∈ V in an undirected graph an estimate ê(v) for the eccentricity e(v) such that max{R, 2/3 · e(v)} ≤ ê(v) ≤ min{D, 3/2 · e(v)}, where R = min_v e(v) is the radius of the graph. Here we improve the approximation guarantee by showing that a variant of the same algorithm can achieve estimates e'(v) with 3/5 · e(v) ≤ e'(v) ≤ e(v).

116 citations
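For intuition on why subquadratic diameter approximation is interesting: in a connected, unweighted, undirected graph a single BFS from any vertex v already gives ecc(v) with ecc(v) ≤ D ≤ 2·ecc(v) (by the triangle inequality), i.e. a 2-approximation in O(m + n) time. The sketch below shows this folklore baseline, not the 3/2-approximation algorithms of the paper above.

```python
from collections import deque

def eccentricity(adj, src):
    """BFS from src; return the largest hop distance reached (unweighted graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter_2_approx(adj):
    """Folklore baseline for connected undirected unweighted graphs:
    ecc(v) <= diameter <= 2 * ecc(v) for any vertex v, so one BFS gives
    a 2-approximation in O(m + n) time."""
    any_vertex = next(iter(adj))
    return eccentricity(adj, any_vertex)

# Path graph 0-1-2-3-4: true diameter 4; the estimate lies between 2 and 4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter_2_approx(adj))
```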

Book ChapterDOI
13 Sep 2010
TL;DR: A Las Vegas algorithm is presented that solves O(Δ)-interval coloring in O(log n) time with high probability; the paper also describes how to adapt the algorithm to dynamic networks where nodes may join or leave, and proves a lower bound on the time required by randomized algorithms to solve interval coloring in this model.
Abstract: We present the discrete beeping communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, do not have access to synchronized clocks and are woken up by an adversary. Moreover, instead of communicating through messages, they rely solely on carrier sensing to exchange information. This model is interesting from a practical point of view, because it is possible to implement it (or emulate it) even in extremely restricted radio network environments. From a theory point of view, it shows that complex problems (such as vertex coloring) can be solved efficiently even without strong assumptions on properties of the communication model. We study the problem of interval coloring, a variant of vertex coloring especially suited to the beeping model studied here. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources, such that neighboring nodes have disjoint resources. A k-interval coloring is one where every node gets at least a 1/k fraction of the resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. We present an O(1) time algorithm that with probability 1 produces an O(Δ)-interval coloring. This improves on an O(log n) time algorithm with the same guarantees presented in [17], and accentuates the unrealistic assumptions of the continuous model. Under the more realistic discrete model, we present a Las Vegas algorithm that solves O(Δ)-interval coloring in O(log n) time with high probability, and we describe how to adapt the algorithm to dynamic networks where nodes may join or leave. For constant-degree graphs we prove a lower bound of Ω(log n) on the time required to solve interval coloring in this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant-degree graphs.

104 citations
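As a rough, hypothetical illustration of the Las Vegas flavour of such coloring algorithms, the toy simulation below has every node repeatedly pick a random slot from a fixed pool and re-pick whenever a neighbor currently occupies the same slot (standing in for collision detection via carrier sensing). It assigns single slots rather than contiguous intervals, runs centrally and synchronously, and is not the paper's beeping protocol.

```python
import random

def random_slot_coloring(adj, num_slots, rng=random.Random(0)):
    """Toy simulation: each node holds a slot; in each round, any node that
    shares its slot with a neighbour re-picks uniformly at random.

    With num_slots >= 2 * (max degree) + 1 the process terminates with
    probability 1. Las Vegas behaviour: the returned assignment is always
    conflict-free, only the number of rounds is random.
    """
    slot = {v: rng.randrange(num_slots) for v in adj}
    rounds = 0
    while True:
        conflicted = [v for v in adj if any(slot[u] == slot[v] for u in adj[v])]
        if not conflicted:
            return slot, rounds
        for v in conflicted:
            slot[v] = rng.randrange(num_slots)
        rounds += 1

# 5-cycle with maximum degree 2; use 2 * 2 + 1 = 5 slots.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring, rounds = random_slot_coloring(adj, num_slots=5)
print(coloring, "after", rounds, "resampling rounds")
```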

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A Las Vegas algorithm is presented for dynamically maintaining a minimum spanning forest of an n-node graph undergoing edge insertions and deletions; it guarantees an O(n^{o(1)}) worst-case update time with high probability.
Abstract: We present a Las Vegas algorithm for dynamically maintaining a minimum spanning forest of an n-node graph undergoing edge insertions and deletions. Our algorithm guarantees an O(n^{o(1)}) worst-case update time with high probability. This significantly improves the two recent Las Vegas algorithms by Wulff-Nilsen \cite{Wulff-Nilsen16a} with update time O(n^{0.5-ε}) for some constant ε > 0 and, independently, by Nanongkai and Saranurak \cite{NanongkaiS16} with update time O(n^{0.494}) (the latter works only for maintaining a spanning forest). Our result is obtained by identifying the common framework that both previous algorithms rely on, and then improving and combining the ideas from both works. There are two main algorithmic components of the framework that are newly improved and critical for obtaining our result. First, we improve the update time from O(n^{0.5-ε}) in \cite{Wulff-Nilsen16a} to O(n^{o(1)}) for decrementally removing all low-conductance cuts in an expander undergoing edge deletions. Second, by revisiting the contraction technique by Henzinger and King \cite{HenzingerK97b} and Holm et al. \cite{HolmLT01}, we show a new approach for maintaining a minimum spanning forest in connected graphs with very few (at most (1+o(1))n) edges. This significantly improves the previous approach in \cite{Wulff-Nilsen16a, NanongkaiS16}, which is based on Frederickson's 2-dimensional topology tree \cite{Frederickson85}, and illustrates a new application of this old technique.

96 citations
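For context on the problem itself rather than the paper's technique: under insertions only, a minimum spanning forest can be maintained with the classic cycle rule (add the new edge; if it closes a cycle, drop the heaviest edge on that cycle). The naive sketch below pays O(n) per insertion for a path search, in contrast with the n^{o(1)} worst-case update time, under deletions as well, achieved above.

```python
class IncrementalMSF:
    """Naive insertion-only minimum spanning forest.

    Cycle rule: insert the new edge; if it closes a cycle in the forest,
    drop the heaviest edge on that cycle. Each insertion costs O(n) here.
    """

    def __init__(self):
        self.forest = {}  # vertex -> list of (neighbour, weight)

    def _path(self, s, t):
        """Return the forest path s..t as a list of (u, v, w) edges, or None."""
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v, w in self.forest.get(u, []):
                if v not in parent:
                    parent[v] = (u, w)
                    stack.append(v)
        if t not in parent:
            return None
        path, v = [], t
        while parent[v] is not None:
            u, w = parent[v]
            path.append((u, v, w))
            v = u
        return path

    def _add(self, u, v, w):
        self.forest.setdefault(u, []).append((v, w))
        self.forest.setdefault(v, []).append((u, w))

    def _remove(self, u, v, w):
        self.forest[u].remove((v, w))
        self.forest[v].remove((u, w))

    def insert(self, u, v, w):
        if u == v:
            return  # ignore self-loops
        self.forest.setdefault(u, [])
        self.forest.setdefault(v, [])
        path = self._path(u, v)
        if path is None:            # u and v were in different trees
            self._add(u, v, w)
            return
        heaviest = max(path, key=lambda e: e[2])
        if heaviest[2] > w:         # swap only if the new edge is lighter
            self._remove(*heaviest)
            self._add(u, v, w)

msf = IncrementalMSF()
for u, v, w in [(0, 1, 3), (1, 2, 5), (0, 2, 1)]:
    msf.insert(u, v, w)
print(msf.forest)  # MSF of the triangle keeps edges (0, 1, 3) and (0, 2, 1)
```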

Proceedings ArticleDOI
Kenneth L. Clarkson
24 Oct 1988
TL;DR: In this paper, a randomized algorithm for solving linear programming problems is given; the bound on its expected number of arithmetic operations is taken with respect to the random choices made by the algorithm and holds for any given input.
Abstract: An algorithm for solving linear programming problems is given, together with a bound on the expected number of arithmetic operations it requires. The expectation is with respect to the random choices made by the algorithm, and the bound holds for any given input. The technique can be extended to other convex programming problems.

84 citations
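Clarkson's method is based on random sampling of the constraints; as a flavour of Las Vegas linear programming in fixed dimension, here is a minimal sketch of the closely related randomized incremental method (Seidel's algorithm) in two dimensions. It is illustrative only, not Clarkson's algorithm, and it adds a bounding box and floating-point tolerances to keep the code short.

```python
import random

EPS = 1e-9
BOX = 1e6  # bounding box |x|, |y| <= BOX keeps the LP bounded

def solve_1d(line, constraints, c):
    """Maximize c.x on the line a.x == b subject to the given half-planes.

    The line (a, b) is parametrised as p + t*d; each half-plane clips the
    feasible t-interval. Returns the optimal point or None if infeasible.
    """
    a, b = line
    if abs(a[1]) > abs(a[0]):       # a point p on the line and a direction d along it
        p = (0.0, b / a[1])
    else:
        p = (b / a[0], 0.0)
    d = (-a[1], a[0])
    lo, hi = -4 * BOX, 4 * BOX
    for ca, cb in constraints:
        # ca.(p + t*d) <= cb  =>  t * (ca.d) <= cb - ca.p
        coef = ca[0] * d[0] + ca[1] * d[1]
        rhs = cb - (ca[0] * p[0] + ca[1] * p[1])
        if abs(coef) < EPS:
            if rhs < -EPS:
                return None         # constraint parallel to the line and violated
        elif coef > 0:
            hi = min(hi, rhs / coef)
        else:
            lo = max(lo, rhs / coef)
    if lo > hi + EPS:
        return None
    cd = c[0] * d[0] + c[1] * d[1]
    t = hi if cd >= 0 else lo
    return (p[0] + t * d[0], p[1] + t * d[1])

def seidel_lp(constraints, c, rng=random.Random(1)):
    """Maximize c.x subject to a_i.x <= b_i in 2D; Las Vegas, expected O(n) time."""
    box = [((1, 0), BOX), ((-1, 0), BOX), ((0, 1), BOX), ((0, -1), BOX)]
    x = (BOX if c[0] >= 0 else -BOX, BOX if c[1] >= 0 else -BOX)
    cons = constraints[:]
    rng.shuffle(cons)               # random insertion order gives the expected-time bound
    active = list(box)
    for a, b in cons:
        if a[0] * x[0] + a[1] * x[1] <= b + EPS:
            active.append((a, b))
            continue
        # Current optimum violates the new constraint: the new optimum lies
        # on its boundary line; solve a 1D LP over the previous constraints.
        x = solve_1d((a, b), active, c)
        if x is None:
            return None             # infeasible
        active.append((a, b))
    return x

# Maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x >= 0, y >= 0.
cons = [((1, 2), 4), ((3, 1), 6), ((-1, 0), 0), ((0, -1), 0)]
print(seidel_lp(cons, (1, 1)))      # optimum near (1.6, 1.2)
```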

Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (74% related)
Pathwidth: 8.3K papers, 252.2K citations (73% related)
Approximation algorithm: 23.9K papers, 654.3K citations (71% related)
Hash function: 31.5K papers, 538.5K citations (69% related)
Line graph: 11.5K papers, 304.1K citations (69% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    4
2020    9
2019    9
2018    4
2017    7