# Papers in SIAM Journal on Computing, 1988

••

TL;DR: A digital signature scheme based on the computational difficulty of integer factorization possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice cannot later forge the signature of even a single additional message.

Abstract: We present a digital signature scheme based on the computational difficulty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since in the folklore the properties of having forgery be equivalent to factoring and of being invulnerable to an adaptive chosen-message attack were considered contradictory. More generally, we show how to construct a signature scheme with such properties based on the existence of a "claw-free" pair of permutations--a potentially weaker assumption than the intractability of integer factorization. The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact.

2,994 citations

••

TL;DR: Any pseudorandom bit generator can be used to construct a block private-key cryptosystem which is secure against chosen plaintext attack, one of the strongest known attacks against a cryptosystem.

Abstract: We show how to efficiently construct a pseudorandom invertible permutation generator from a pseudorandom function generator. Goldreich, Goldwasser and Micali [“How to construct random functions,” P...
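
The Luby–Rackoff construction behind this result can be sketched in a few lines: a four-round Feistel network turns any pseudorandom function into an invertible permutation. The following is only an illustration, not the paper's formal construction; HMAC-SHA256 stands in for the pseudorandom function generator, and the 16-byte block size and key list are illustrative assumptions.

```python
import hmac
import hashlib

def prf(key: bytes, data: bytes) -> bytes:
    # Stand-in pseudorandom function: HMAC-SHA256 truncated to the half-block size.
    return hmac.new(key, data, hashlib.sha256).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(keys, block: bytes) -> bytes:
    # Each round maps (L, R) -> (R, L xor F_k(R)); the result is a
    # permutation on 16-byte blocks even though F itself is not invertible.
    left, right = block[:8], block[8:]
    for k in keys:
        left, right = right, xor(left, prf(k, right))
    return left + right

def feistel_decrypt(keys, block: bytes) -> bytes:
    # Run the rounds backwards: (L', R') -> (R' xor F_k(L'), L').
    left, right = block[:8], block[8:]
    for k in reversed(keys):
        left, right = xor(right, prf(k, left)), left
    return left + right
```

Invertibility holds for any round function; the content of the paper is that three or four rounds with *pseudorandom* round functions yield a *pseudorandom* permutation.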

1,021 citations

••

TL;DR: This paper investigates how a channel with perfect authenticity but no privacy can be used to repair the defects of a channel with imperfect privacy but no authenticity.

Abstract: In this paper, we investigate how the use of a channel with perfect authenticity but no privacy can be used to repair the defects of a channel with imperfect privacy but no authenticity. More preci...

890 citations

••


TL;DR: A parallel implementation of merge sort on a CREW PRAM that uses n processors and O(log n) time; the constant in the running time is small.

Abstract: We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and $O(\log n)$ time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and $O(\log n)$ time. The constant in the running time is still moderate, though not as small.
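
The parallelism in such merging comes from cross-ranking: an element's output position is its index in its own list plus its rank in the other list, and every rank can be computed independently. A sequential Python sketch of this primitive (the function name and the use of binary search are illustrative; this is not Cole's pipelined scheme, which computes the ranks in constant time per merge level):

```python
from bisect import bisect_left, bisect_right

def cross_rank_merge(a, b):
    # Merge two sorted lists by ranking.  On a PRAM, every iteration of
    # both loops can run simultaneously, since no position depends on
    # any other; here we simply loop sequentially.
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        # elements of a precede equal elements of b (strict rank)
        out[i + bisect_left(b, x)] = x
    for j, y in enumerate(b):
        # elements of b follow equal elements of a (non-strict rank)
        out[j + bisect_right(a, y)] = y
    return out
```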

821 citations

••


TL;DR: It immediately follows that the context-sensitive languages are closed under complementation, thus settling a question raised by Kuroda.

Abstract: In this paper we show that nondeterministic space $s(n)$ is closed under complementation for $s(n)$ greater than or equal to $\log n$. It immediately follows that the context-sensitive languages are closed under complementation, thus settling a question raised by Kuroda [Inform. and Control, 7 (1964), pp. 207–233].
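
The heart of the proof is inductive counting: the number of vertices reachable within i steps is computed from the count for i - 1 alone, and knowing the final count lets one *certify* non-reachability. A small Python sketch of the counting (the sets are stored explicitly here for clarity; the theorem's point is that they can be re-enumerated on the fly within O(log n) space):

```python
def inductive_counting(adj, s):
    # adj: dict mapping each vertex 0..n-1 to a list of successors.
    # Returns counts c_0..c_{n-1}, where c_i = |{v : dist(s, v) <= i}|,
    # together with the final reachable set.
    n = len(adj)
    layer, counts = {s}, [1]
    for _ in range(n - 1):
        layer = layer | {w for v in layer for w in adj[v]}
        counts.append(len(layer))
    # A vertex t is certified unreachable by exhibiting counts[-1]
    # reachable vertices, none of which is t -- this is what lets a
    # nondeterministic machine accept the *complement* of reachability.
    return counts, layer
```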

725 citations

••

TL;DR: A new model for weak random physical sources is presented that strictly generalizes previous models and provides a fruitful viewpoint on previously studied problems such as extracting almost-perfect bits from sources of weak randomness.

Abstract: A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g., the Santha and Vazirani model [27]). The sources considered output strings according to probability distributions in which no single string is too probable. The new model provides a fruitful viewpoint on problems studied previously, such as:

• Extracting almost-perfect bits from sources of weak randomness. The question of possibility as well as the question of efficiency of such extraction schemes are addressed.
• Probabilistic communication complexity. It is shown that most functions have linear communication complexity in a very strong probabilistic sense.
• Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [32], [33]).
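
For intuition about extraction, the classical von Neumann trick is the simplest example: it produces perfectly unbiased bits from an i.i.d. biased coin, a far more restrictive source than the model considered in this paper. A minimal sketch (the paper's sources are not i.i.d., so this extractor does not suffice for them):

```python
def von_neumann_extract(bits):
    # Look at disjoint pairs of input bits: output 0 for the pair (0, 1),
    # 1 for (1, 0), and discard (0, 0) and (1, 1).  For an i.i.d. coin of
    # any fixed bias, both surviving pairs are equally likely, so the
    # output bits are exactly unbiased.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```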

529 citations

••

TL;DR: These results include, in particular, linear-size data structures for range and rectangle counting in two dimensions with logarithmic query time and a redefinition of data structures in terms of functional specifications.

Abstract: We establish new upper bounds on the complexity of multidimensional searching. Our results include, in particular, linear-size data structures for range and rectangle counting in two dimensions with logarithmic query time. More generally, we give improved data structures for rectangle problems in any dimension, in a static as well as a dynamic setting. Several of the algorithms we give are simple to implement and might be the solutions of choice in practice. Central to this paper is the nonstandard approach followed to achieve these results. At its root we find a redefinition of data structures in terms of functional specifications.

385 citations

••

TL;DR: A family of polynomial-time algorithms is given for scheduling so that the last job to finish is completed as quickly as possible; each algorithm delivers a solution that is within a given relative error of the optimum.

Abstract: In this paper we present a polynomial approximation scheme for the minimum makespan problem on uniform parallel processors. More specifically, the problem is to find a schedule for a set of independent jobs on a collection of machines of different speeds so that the last job to finish is completed as quickly as possible. We give a family of polynomial-time algorithms $\{A_\epsilon\}$ such that $A_\epsilon$ delivers a solution that is within a relative error of $\epsilon$ of the optimum. The technique employed is the dual approximation approach, where infeasible but superoptimal solutions for a related (dual) problem are converted to the desired feasible but possibly suboptimal solution.
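
The dual-approximation idea can be illustrated with a deliberately simplified sketch: binary search on a target makespan T, accepting T when an approximate bin-packing check fits the jobs into machines of capacity (1 + ε)T. Everything here is an illustrative assumption — identical machines instead of the paper's uniform (different-speed) machines, and a first-fit-decreasing check instead of the paper's careful dual bin-packing routine — so no approximation guarantee is claimed for this code.

```python
def fits(jobs, m, capacity):
    # First-fit-decreasing: can the jobs be packed on m machines so that
    # no machine load exceeds `capacity`?  (An approximate "dual"
    # feasibility test for the makespan guess.)
    loads = [0.0] * m
    for job in sorted(jobs, reverse=True):
        for i in range(m):
            if loads[i] + job <= capacity:
                loads[i] += job
                break
        else:
            return False
    return True

def makespan_bound(jobs, m, eps=0.1):
    # Binary search on the target makespan T; a guess is accepted when
    # packing into relaxed capacity (1 + eps) * T succeeds.
    lo, hi = max(jobs), sum(jobs)
    while hi - lo > eps * lo:
        mid = (lo + hi) / 2
        if fits(jobs, m, (1 + eps) * mid):
            hi = mid
        else:
            lo = mid
    return (1 + eps) * hi
```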

367 citations

••

TL;DR: A general theory of public-key cryptography is developed that is based on the mathematical framework of complexity theory and two related approaches are taken to the development of this theory.

Abstract: A general theory of public-key cryptography is developed that is based on the mathematical framework of complexity theory. Two related approaches are taken to the development of this theory, and th...

336 citations

••

TL;DR: It is shown that computing the volume of a polyhedron given either as a list of facets or as a list of vertices is as hard as computing the permanent of a matrix.

Abstract: We show that computing the volume of a polyhedron given either as a list of facets or as a list of vertices is as hard as computing the permanent of a matrix.

320 citations

••

TL;DR: For both the RSA and Rabin encryption functions, inverting the function and guessing the least-significant bit of the plaintext are computationally equivalent, which implies that an adversary, given the RSA/Rabin ciphertext, cannot have a non-negligible advantage in guessing the least-significant bit of the plaintext, unless he can invert RSA/factor N.

Abstract: The RSA and Rabin encryption functions $E_N ( \cdot )$ are respectively defined by raising $x \in Z_N $ to the power e (where e is relatively prime to $\varphi (N)$) and squaring modulo N (i.e., $E_N (x) = x^e (\bmod N)$, $E_N (x) = x^2 (\bmod N)$, respectively). We prove that for both functions, the following problems are computationally equivalent (each is probabilistic polynomial-time reducible to the other): (1) Given $E_N (x)$, find x. (2) Given $E_N (x)$, guess the least-significant bit of x with success probability $\tfrac{1}{2} + \tfrac{1}{\operatorname{poly}(n)}$ (where n is the length of the modulus N). This equivalence implies that an adversary, given the RSA/Rabin ciphertext, cannot have a non-negligible advantage (over a random coin flip) in guessing the least-significant bit of the plaintext, unless he can invert RSA/factor N. The proof techniques also yield the simultaneous security of the $\log n$ least-significant bits. Our results improve the efficiency of pseudorandom number generation and...

••

TL;DR: This result approaches the $\Omega (n^{\lceil {{d / 2}} \rceil } )$ worst-case time required for any algorithm that constructs the Voronoi...

Abstract: An algorithm for closest-point queries is given. The problem is this: given a set S of n points in d-dimensional space, build a data structure so that given an arbitrary query point p, a closest point in S to p can be found quickly. The measure of distance is the Euclidean norm. This is sometimes called the post-office problem. The new data structure will be termed an RPO tree, from Randomized Post Office. The expected time required to build an RPO tree is $O(n^{\lceil {{d / 2}} \rceil (1 + \epsilon )} )$, for any fixed $\epsilon > 0$, and a query can be answered in $O(\log n)$ worst-case time. An RPO tree requires $O(n^{\lceil {{d / 2}} \rceil (1 + \epsilon )} )$ space in the worst case. The constant factors in these bounds depend on d and $\epsilon $. The bounds are average-case due to the randomization employed by the algorithm, and hold for any set of input points. This result approaches the $\Omega (n^{\lceil {{d / 2}} \rceil } )$ worst-case time required for any algorithm that constructs the Voronoi...

••

TL;DR: The complexity of sets formed by boolean operations (union, intersection, and complement) on NP sets is studied, showing that in some relativized worlds the boolean hierarchy is infinite, and that for every k there is a relativized world in which the boolean hierarchy extends exactly k levels.

Abstract: In this paper, we study the complexity of sets formed by boolean operations (union, intersection, and complement) on NP sets. These are the sets accepted by trees of hardware with NP predicates as leaves, and together these form the boolean hierarchy. We present many results about the structure of the boolean hierarchy: separation and immunity results, natural complete languages, and structural asymmetries between complementary classes. We show that in some relativized worlds the boolean hierarchy is infinite, and that for every k there is a relativized world in which the boolean hierarchy extends exactly k levels. We prove natural languages, variations of VERTEX COVER, complete for the various levels of the boolean hierarchy. We show the following structural asymmetry: though no set in the boolean hierarchy is ${\text{D}}^{\text{P}} $-immune, there is a relativized world in which the boolean hierarchy contains ${\text{coD}}^{\text{P}} $-immune sets. Thus, this paper explores the structural properties of the...

••

TL;DR: This work defines a novel scheduling problem, which leads to the first optimal logarithmic time PRAM algorithm for list ranking, and shows how to apply these results to obtain improved PRAM upper bounds for a variety of problems on graphs.

Abstract: We define a novel scheduling problem; it is solved in parallel by repeated, rapid, approximate reschedulings. This leads to the first optimal logarithmic time PRAM algorithm for list ranking. Companion papers show how to apply these results to obtain improved PRAM upper bounds for a variety of problems on graphs, including the following: connectivity, biconnectivity, Euler tour and $st$-numbering, and a number of problems on trees.
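
The standard illustration of parallel list ranking is Wyllie's pointer jumping, sketched below in Python: each round, every node doubles its pointer and accumulates its distance to the end of the list, and all updates in a round are independent (hence parallelizable on a PRAM). Note this simple version does O(n log n) work; the paper's contribution is precisely an *optimal* O(n)-work, O(log n)-time algorithm, which this sketch does not reproduce.

```python
def list_rank(nxt):
    # nxt[i] is the successor of node i, or None for the last node.
    # Returns rank[i] = number of hops from i to the end of the list.
    n = len(nxt)
    rank = [0 if nxt[i] is None else 1 for i in range(n)]
    ptr = list(nxt)
    while any(p is not None for p in ptr):
        # Synchronous round: read old arrays, write new ones, so every
        # node's update is independent of the others (PRAM-style).
        new_rank, new_ptr = rank[:], ptr[:]
        for i in range(n):
            if ptr[i] is not None:
                new_rank[i] = rank[i] + rank[ptr[i]]
                new_ptr[i] = ptr[ptr[i]]        # pointer doubling
        rank, ptr = new_rank, new_ptr
    return rank
```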

••

TL;DR: It is shown that a protocol by Broder and Dolev is insecure if RSA with a small exponent is used, and that the RSA cryptosystem used with a small exponent is not a good choice as a public-key cryptosystem in a large network.

Abstract: We consider the problem of solving systems of equations $P_i (x) \equiv 0(\bmod n_i )$, $i = 1 \cdots k$, where the $P_i $ are polynomials of degree d, the $n_i $ are distinct relatively prime numbers, and $x < \min (n_i )$. We show that if $k > {{d(d + 1)} / 2}$ we can recover x in polynomial time provided $\min (n_i ) > 2^{d^2 } $. As a consequence the RSA cryptosystem used with a small exponent is not a good choice to use as a public-key cryptosystem in a large network. We also show that a protocol by Broder and Dolev [Proceedings on the 25th Annual IEEE Symposium on the Foundations of Computer Science, 1984] is insecure if RSA with a small exponent is used.
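
The simplest instance of this weakness is the classic broadcast scenario: the same message sent under exponent e = 3 to three recipients with coprime moduli can be recovered with the Chinese remainder theorem and an integer cube root, no factoring required. A toy sketch (the moduli below are small coprime numbers chosen for illustration, not real RSA moduli):

```python
from math import prod

def icbrt(n):
    # Integer cube root by binary search.
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise-coprime moduli.
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = N // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python >= 3.8)
    return x % N

def broadcast_attack(ciphertexts, moduli):
    # If m**3 < prod(moduli), CRT recovers m**3 exactly over the integers,
    # and a plain cube root reveals m.
    return icbrt(crt(ciphertexts, moduli))
```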

••

TL;DR: Improved algorithms for several other computational geometry problems, including testing whether a polygon is simple, follow from the proposed $O(n\log \log n)$-time algorithm, improving on the previously best bound and showing that triangulation is not as hard as sorting.

Abstract: Given a simple n-vertex polygon, the triangulation problem is to partition the interior of the polygon into $n - 2$ triangles by adding $n - 3$ nonintersecting diagonals. We propose an $O(n\log \log n)$-time algorithm for this problem, improving on the previously best bound of $O(n\log n)$ and showing that triangulation is not as hard as sorting. Improved algorithms for several other computational geometry problems, including testing whether a polygon is simple, follow from our result.
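
For contrast with the paper's sophisticated algorithm, the baseline everyone starts from is ear clipping, which triangulates a simple polygon in O(n^2) time by repeatedly cutting off a convex vertex whose triangle contains no other vertex. A self-contained sketch (counterclockwise vertex order is assumed; this is the simple quadratic method, not the paper's O(n log log n) algorithm):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def triangulate(poly):
    # poly: counterclockwise list of (x, y) vertices of a simple polygon.
    # Returns n - 2 triangles as triples of vertex indices.
    verts = list(range(len(poly)))
    tris = []
    while len(verts) > 3:
        for k in range(len(verts)):
            i, j, l = verts[k - 1], verts[k], verts[(k + 1) % len(verts)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:          # reflex or flat vertex: not an ear
                continue
            if any(point_in_triangle(poly[v], a, b, c)
                   for v in verts if v not in (i, j, l)):
                continue                     # another vertex blocks the ear
            tris.append((i, j, l))
            del verts[k]                     # clip the ear
            break
    tris.append(tuple(verts))
    return tris
```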

••

TL;DR: It is believed that in foreseeable technologies the number of faults will grow with the size of the network while the degree remains practically fixed; this raises the question of whether the connectivity requirements can be avoided by slightly lowering expectations.

Abstract: Achieving processor cooperation in the presence of faults is a major problem in distributed systems. Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Indeed, Dolev [J. Algorithms, 3 (1982), pp. 14–30] and Hadzilacos [Issues of Fault Tolerance in Concurrent Computations, Ph.D. thesis, Harvard University, Cambridge, MA, 1984] have shown that $\Omega (t)$ connectivity is necessary if the requirement is that all nonfaulty processors decide unanimously, where t is the number of faults to be tolerated. We believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed. We therefore raise the question whether it is possible to avoid the connectivity requirements by slightly lowering our expectations. In many practical situations we may be willing to “lose” some correct processors and settle for cooperation between the vast majority of the processors. Thus motivated, ...

••

TL;DR: It is shown that if the Boolean hierarchy (BH) collapses, then there exists a sparse set S such that ${\text{co-NP}} \subseteq {\text{ NP}}^S $, and therefore the polynomial time hierarchy (PH) col...

Abstract: It is shown that if the Boolean hierarchy (BH) collapses, then there exists a sparse set S such that ${\text{co-NP}} \subseteq {\text{ NP}}^S $, and therefore the polynomial time hierarchy (PH) col...

••

TL;DR: This algorithm yields the first efficient deterministic polynomial time algorithm (and moreover a boolean $NC$-algorithm) for interpolating t-sparse polynomials over finite fields, and should be contrasted with the fact that efficient interpolation using a black box that only evaluates the polynomial at points in $GF[q]$ is not possible.

Abstract: The authors consider the problem of reconstructing (i.e., interpolating) a t-sparse multivariate polynomial given a black box which will produce the value of the polynomial for any value of the arguments. It is shown that, if the polynomial has coefficients in a finite field $GF[q]$ and the black box can evaluate the polynomial in the field $GF[q^{\lceil 2\log_{q}(nt)+3 \rceil}]$, where n is the number of variables, then there is an algorithm to interpolate the polynomial in $O(\log^3 (nt))$ boolean parallel time and $O(n^2 t^6 \log^2 nt)$ processors. This algorithm yields the first efficient deterministic polynomial time algorithm (and moreover boolean $NC$-algorithm) for interpolating t-sparse polynomials over finite fields and should be contrasted with the fact that efficient interpolation using a black box that only evaluates the polynomial at points in $GF[q]$ is not possible (cf. [M. Clausen, A. Dress, J. Grabmeier, and M. Karpinski, Theoret. Comput. Sci., 1990, to appear]). This algorithm, tog...

••

TL;DR: Three formal definitions of security for public-key cryptosystems, two by Goldwasser and Micali and one by Yao, are proved equivalent, providing evidence that the right formalization of the notion of security has been reached.

Abstract: Three very different formal definitions of security for public-key cryptosystems have been proposed—two by Goldwasser and Micali and one by Yao. We prove all of them to be equivalent. This equivalence provides evidence that the right formalization of the notion of security has been reached.

••

TL;DR: This work extensively studies the relationship between four shared memory models of parallel computation that allow simultaneous read/write access, and proves nontrivial separations and simulation results among them.

Abstract: Shared memory models of parallel computation (e.g., parallel RAMs) that allow simultaneous read/write access are very natural and already widely used for parallel algorithm design. The various models differ from each other in the mechanism by which they resolve write conflicts. To understand the effect of these communication primitives on the power of parallelism, we extensively study the relationship between four such models that appear in the literature, and prove nontrivial separations and simulation results among them.

••

TL;DR: It is shown that the problems of finding cardinality Steiner trees and connected dominating sets are polynomially solvable in a distance-hereditary graph.

Abstract: Distance-hereditary graphs have been introduced by Howorka and studied in the literature with respect to their metric properties. In this paper several equivalent characterizations of these graphs are given: in terms of existence of particular kinds of vertices (isolated, leaves, twins) and in terms of properties of connections, separators, and hangings. Distance-hereditary graphs are then studied from the algorithmic viewpoint: simple recognition algorithms are given and it is shown that the problems of finding cardinality Steiner trees and connected dominating sets are polynomially solvable in a distance-hereditary graph.
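
One of the known characterizations makes recognition easy to sketch: a connected graph is distance-hereditary exactly when it can be reduced to a single vertex by repeatedly deleting a pendant vertex or a twin (a vertex whose neighborhood coincides with another vertex's, up to the pair itself). The naive Python version below runs in roughly cubic time and assumes a connected input graph; the paper gives more careful recognition algorithms.

```python
def is_distance_hereditary(adj):
    # adj: dict mapping each vertex to the set of its neighbors.
    # Repeatedly delete a pendant vertex or a (true or false) twin;
    # the graph is distance-hereditary iff one vertex remains.
    adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
    while len(adj) > 1:
        for u in adj:
            if len(adj[u]) <= 1:                  # pendant vertex
                removable = u
                break
            # u and v are twins iff N(u) \ {v} == N(v) \ {u}
            # (covers both adjacent "true" and nonadjacent "false" twins)
            if any(u != v and adj[u] - {v} == adj[v] - {u} for v in adj):
                removable = u
                break
        else:
            return False                          # stuck: not reducible
        for v in adj[removable]:
            adj[v].discard(removable)
        del adj[removable]
    return True
```

Trees and cographs reduce all the way down, while any chordless cycle of length at least five gets stuck, matching the metric characterization.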

••


TL;DR: It is shown that the class of sets of small generalized Kolmogorov complexity is exactly the class of sets which are P-isomorphic to a tally language.

Abstract: P-printable sets arise naturally in the studies of generalized Kolmogorov complexity and data compression, as well as in other areas. We present new characterizations of the P-printable sets and present necessary and sufficient conditions for the existence of sparse sets in P that are not P-printable. As a corollary to one of our results, we show that the class of sets of small generalized Kolmogorov complexity is exactly the class of sets which are P-isomorphic to a tally language.

••

TL;DR: A general polynomial time algorithm is proposed to find small integer solutions to systems of linear congruences and will solve most problems when twice as much information as that necessary to uniquely determine the variables is available.

Abstract: We propose a general polynomial time algorithm to find small integer solutions to systems of linear congruences. We use this algorithm to obtain two polynomial time algorithms for reconstructing the values of variables $x_1 , \cdots ,x_k $ when we are given some linear congruences relating them together with some bits obtained by truncating the binary expansions of the variables. The first algorithm reconstructs the variables when either the high order bits or the low order bits of the $x_i $ are known. It is essentially optimal in its use of information in the sense that it will solve most problems almost as soon as the variables become uniquely determined by their constraints. The second algorithm reconstructs the variables when an arbitrary window of consecutive bits of the variables is known. This algorithm will solve most problems when twice as much information as that necessary to uniquely determine the variables is available. Two cryptanalytic applications of the algorithms are given: predicting li...

••

TL;DR: This paper studies the problem of protecting sensitive data in an n by n two-dimensional table of statistics, when the nonsensitive data are made public along with the row and column sums for the table.

Abstract: In this paper we study the problem of protecting sensitive data in an n by n two-dimensional table of statistics, when the nonsensitive data are made public along with the row and column sums for the table. A sensitive cell is considered unprotected if its exact value can be deduced from the nonsensitive cell values and the row and column sums. We give an efficient algorithm to identify all unprotected cells in a table. The algorithm runs in linear time if the sensitive values are known, and in $O(n^3 )$ time if they are not known. We then consider the problem of suppressing the fewest additional cell values to protect all the sensitive cells, when some cells are initially unprotected. We give a linear time algorithm for this problem in the useful special case that all cell values are strictly positive. We next consider the problem of computing the tightest upper and lower bounds on the values of sensitive cells. We show that each cell bound can be computed in $O(n^3 )$ time, but all $\Theta (n^2 )$ value...

••

TL;DR: The authors give efficient solutions to transportation problems motivated by the following robotics problem: a robot arm must rearrange m objects between n stations in the plane, where each object needs to be moved to another station.

Abstract: We give efficient solutions to transportation problems motivated by the following robotics problem. A robot arm has the task of rearranging m objects between n stations in the plane. Each object is initially at one of these n stations and needs to be moved to another station. The robot arm consists of a single link that rotates about a fixed pivot. The link can extend in and out (like a telescope) so that its length is a variable. At the end of this “telescoping” link lies a gripper that is capable of grasping any one of the m given objects (the gripper cannot be holding more than one object at the same time). The robot arm must transport each of the m objects to its destination and come back to where it started. Since the problem of scheduling the motion of the gripper so as to minimize the total distance traveled is NP-hard, we focus on the problem of minimizing only the total angular motion (rotation of the link about the pivot), or only the telescoping motion. We give algorithms for two different mode...

••

TL;DR: A new parallel algorithm is given to evaluate a straight line program over a commutative semi-ring R of degree d and size n in time O(log n(log nd)) using M(n) processors.

Abstract: A new parallel algorithm is given to evaluate a straight line program. The algorithm evaluates a program over a commutative semi-ring R of degree d and size n in time O(log n(log nd)) using M(n) processors, where M(n) is the number of processors required for multiplying n×n matrices over the semi-ring R in O(log n) time.

••

TL;DR: An algorithm is presented which given a graph G and a value k either determines that G is not k-planar or generates an appropriate embedding and associated minimum cover in O(c^k n) time, where c is a constant.

Abstract: The pair $(G,D)$ consisting of a planar graph $G = (V,E)$ with n vertices together with a subset of d special vertices $D \subseteq V$ is called k-planar if there is an embedding of G in the plane so that at most k faces of G are required to cover all of the vertices in D. Checking 1-planarity can be done in linear-time since it reduces to a problem of checking planarity of a related graph. We present an algorithm which given a graph G and a value k either determines that G is not k-planar or generates an appropriate embedding and associated minimum cover in $O(c^k n)$ time, where c is a constant. Hence, the algorithm runs in linear time for any fixed k. The fact that the time required by the algorithm grows exponentially in k is to be expected since we also show that for arbitrary k, the associated decision problem is strongly NP-complete, even when the planar graph has essentially a unique planar embedding, $d = \Theta (n)$, and all facial cycles have bounded length. These results provide a polynomial-t...

••

TL;DR: It is shown, for a particular language based on finite (CCS) terms, that the generalised equivalence coincides with observational equivalence; the more powerful observers do not lead to a finer equivalence.

Abstract: We re-examine the well-known observational equivalence between processes with a view to modifying it so as to distinguish between concurrent and purely nondeterministic processes.Observational equivalence is based on atomic actions or observations. In the first part of this paper we generalise these atomic observations so that arbitrary processes may act as observers. We show, for a particular language based on finite (CCS) terms, that the generalised equivalence coincides with observational equivalence; the more powerful observers do not lead to a finer equivalence.In the second part of the paper we consider observers which can distinguish the beginning and ending of atomic actions. The resulting equivalence distinguishes a concurrent process from the purely nondeterministic process obtained by interleaving its possible actions. We give a complete axiomatisation for the congruence generated by the new equivalence.

••

TL;DR: This work studies sets that are truth-table reducible to sparse sets in polynomial time; the results show that for every integer k > 0, there is a set L and a sparse set S such that $L...

Abstract: We study sets that are truth-table reducible to sparse sets in polynomial time. The principal results are as follows: (1) For every integer $k > 0$, there is a set L and a sparse set S such that $L...