
Showing papers in "Journal of the ACM in 1991"


Journal ArticleDOI
TL;DR: It is shown that the class of programs possessing a total well-founded model properly includes the previously studied classes of "stratified" and "locally stratified" programs; the method is also compared with other proposals in the literature.
Abstract: A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes the previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
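
Van Gelder's alternating-fixpoint construction gives a direct way to prototype this semantics. Below is a minimal Python sketch (the propositional program encoding and function names are illustrative, not from the paper): true atoms are the least fixpoint of the squared Gelfond-Lifschitz operator, and false atoms are the complement of its upper iterate.

```python
def least_model(rules):
    """Least Herbrand model of a negation-free program (unit propagation)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def gamma(program, interp):
    """Gelfond-Lifschitz operator: least model of the reduct w.r.t. interp."""
    reduct = [(head, pos) for head, pos, neg in program
              if not (set(neg) & interp)]
    return least_model(reduct)

def well_founded(program):
    """True atoms = lfp(Gamma^2); false atoms = complement of Gamma(true)."""
    atoms = ({h for h, _, _ in program}
             | {a for _, p, n in program for a in p + n})
    true = set()
    while True:
        nxt = gamma(program, gamma(program, true))
        if nxt == true:
            break
        true = nxt
    false = atoms - gamma(program, true)
    return true, false, atoms - true - false

# p :- not q.   q :- not p.   a :- not b.
prog = [("p", [], ["q"]), ("q", [], ["p"]), ("a", [], ["b"])]
print(well_founded(prog))  # ({'a'}, {'b'}, {'p', 'q'}): p and q undefined
```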

1,908 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that all languages in NP have zero-knowledge interactive proofs, which are probabilistic and interactive proofs that, for the members of a language, efficiently demonstrate membership in the language without conveying any additional knowledge.
Abstract: In this paper the generality and wide applicability of zero-knowledge proofs, a notion introduced by Goldwasser, Micali, and Rackoff, is demonstrated. These are probabilistic and interactive proofs that, for the members of a language, efficiently demonstrate membership in the language without conveying any additional knowledge. All previously known zero-knowledge proofs were only for number-theoretic languages in NP ∩ coNP. Under the assumption that secure encryption functions exist, or by using "physical means for hiding information," it is shown that all languages in NP have zero-knowledge proofs. Loosely speaking, it is possible to demonstrate that a CNF formula is satisfiable without revealing any other property of the formula; in particular, without yielding either a satisfying assignment or properties such as whether there is a satisfying assignment in which x1 = x3, etc. It is also demonstrated that zero-knowledge proofs exist "outside the domain of cryptography and number theory." Using no assumptions, it is shown that both graph isomorphism and graph nonisomorphism have zero-knowledge interactive proofs. The mere existence of an interactive proof for graph nonisomorphism is interesting, since graph nonisomorphism is not known to be in NP and hence no efficient proofs were known before for demonstrating that two graphs are not isomorphic.
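
For intuition, the graph-isomorphism protocol mentioned above is easy to sketch. The Python toy below uses illustrative names and collapses commitment and interaction into one function (a real protocol commits to H before the challenge); it shows why the verifier learns nothing: each round reveals only a random isomorphic copy of one graph.

```python
import random

def apply_perm(perm, edges):
    return {frozenset((perm[u], perm[v])) for u, v in map(tuple, edges)}

def compose(p, q):          # (p o q)(x) = p[q[x]]
    return [p[q[x]] for x in range(len(p))]

def invert(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return inv

def zk_round(g0, g1, pi, n):
    """Prover knows pi with g1 = pi(g0); verifier learns only one bit."""
    sigma = random.sample(range(n), n)        # prover's random permutation
    h = apply_perm(sigma, g0)                 # "commitment": H = sigma(G0)
    b = random.randrange(2)                   # verifier's challenge
    # response maps G_b onto H: sigma itself, or sigma o pi^-1
    resp = sigma if b == 0 else compose(sigma, invert(pi))
    return apply_perm(resp, g1 if b else g0) == h

n = 4
g0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
pi = [2, 3, 0, 1]
g1 = apply_perm(pi, g0)
print(all(zk_round(g0, g1, pi, n) for _ in range(20)))  # True
```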

1,366 citations


Journal ArticleDOI
TL;DR: The proof of correctness of the algorithm relies on recent theory of rapidly mixing Markov chains and isoperimetric inequalities to show that a certain random walk can be used to sample nearly uniformly from within K.
Abstract: A randomized polynomial-time algorithm for approximating the volume of a convex body K in n-dimensional Euclidean space is presented. The proof of correctness of the algorithm relies on recent theory of rapidly mixing Markov chains and isoperimetric inequalities to show that a certain random walk can be used to sample nearly uniformly from within K.
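
The paper's walk runs on a lattice of cubes inside K; as a loose illustration of the sampling idea only, here is a simple membership-oracle "ball walk" in Python (step size, iteration count, and names are arbitrary choices, not the paper's algorithm).

```python
import numpy as np

def ball_walk(inside, x, steps=10_000, delta=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(x)
    for _ in range(steps):
        d = rng.normal(size=n)                 # uniform point in a small
        d *= delta * rng.random() ** (1 / n) / np.linalg.norm(d)  # ball
        if inside(x + d):                      # propose the step;
            x = x + d                          # stay put if it leaves K
    return x

# K = unit Euclidean ball in R^5; after mixing, samples are ~uniform in K
inside = lambda p: np.linalg.norm(p) <= 1.0
print(ball_walk(inside, np.zeros(5)))
```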

702 citations


Journal ArticleDOI
TL;DR: It is proven that the problem of existence of stable models is NP-complete; the definition of stratified theories is extended, and efficient algorithms for testing whether a theory is stratified are proposed.
Abstract: Autoepistemic logic is one of the principal modes of nonmonotonic reasoning. It unifies several other modes of nonmonotonic reasoning and has important applications in logic programming. In the paper, a theory of autoepistemic logic is developed. This paper starts with a brief survey of some of the previously known results. Then, the nature of nonmonotonicity is studied by investigating how membership of autoepistemic statements in autoepistemic theories depends on the underlying objective theory. A notion similar to set-theoretic forcing is introduced. Expansions of autoepistemic theories are also investigated. Expansions serve as sets of consequences of an autoepistemic theory and they can also be used to define semantics for logic programs with negation. Theories that have expansions are characterized, and a normal form that allows the description of all expansions of a theory is introduced. Our results imply algorithms to determine whether a theory has a unique expansion. Sufficient conditions (stratification) that imply existence of a unique expansion are discussed. The definition of stratified theories is extended and (under some additional assumptions) efficient algorithms for testing whether a theory is stratified are proposed. The theorem characterizing expansions is applied to two classes of theories, K1-theories and ae-programs. In each case, a simple hypergraph characterization of expansions of theories from the class is given. Finally, connections with stable model semantics for logic programs with negation are discussed. In particular, it is proven that the problem of existence of stable models is NP-complete.—Authors' Abstract
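
The NP-completeness result suggests that, in the worst case, little beats search. A brute-force Python sketch (the propositional rule encoding is illustrative) checks each candidate atom set with the Gelfond-Lifschitz fixpoint test:

```python
from itertools import combinations

def minimal_model(rules):
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in rules:
            if h not in m and all(a in m for a in pos):
                m.add(h); changed = True
    return m

def stable_models(program):
    atoms = sorted({h for h, _, _ in program} |
                   {a for _, p, n in program for a in p + n})
    for r in range(len(atoms) + 1):
        for cand in combinations(atoms, r):
            i = set(cand)
            reduct = [(h, p) for h, p, n in program if not (set(n) & i)]
            if minimal_model(reduct) == i:      # Gelfond-Lifschitz test
                yield i

# p :- not q.   q :- not p.    Two stable models: {p} and {q}.
print(list(stable_models([("p", [], ["q"]), ("q", [], ["p"])])))
```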

484 citations


Journal ArticleDOI
TL;DR: In this paper, a modal temporal loglc based on time intervals is developed, a logic that can be viewed as a generalization of point-based modality temporal logic.
Abstract: In certain areas of artificial intelligence there is need to represent continuous change and to make statements that are interpreted with respect to time intervals rather than time points. To this end, a modal temporal logic based on time intervals is developed, a logic that can be viewed as a generalization of point-based modal temporal logic. Related logics are discussed, an intuitive presentation of the new logic is given, and its formal syntax and semantics are defined. No assumption is made about the underlying nature of time, allowing it to be discrete (such as the natural numbers) or continuous (such as the rationals or the reals), linear or branching, complete (such as the reals) or not (such as the rationals). It is shown, however, that there are formulas in the logic that allow us to distinguish all these situations. A translation of our logic into first-order logic is given, which allows the application of some results on first-order logic to our modal logic. Finally, the difficulty of validity problems for the logic is considered. This turns out to depend critically, and in surprising ways, on our assumptions about time. For example, if our underlying temporal structure is the rationals, then the validity problem is r.e.-complete; if it is the reals, then validity is Π₁¹-hard; and if it is the natural numbers, then validity is Π₁¹-complete.

424 citations


Journal ArticleDOI
TL;DR: In this article, an algorithm for minimum-cost matching on a general graph with integral edge costs is presented, which runs in time close to the fastest known bound for maximum-cardinality matching.
Abstract: An algorithm for minimum-cost matching on a general graph with integral edge costs is presented. The algorithm runs in time close to the fastest known bound for maximum-cardinality matching. Specifically, let n, m, and N denote the number of vertices, number of edges, and largest magnitude of a cost, respectively. The best known time bound for maximum-cardinality matching is O(√n m). The new algorithm for minimum-cost matching has time bound O(√(n α(m, n) log n) m log(nN)). A slight modification of the new algorithm finds a maximum-cardinality matching in O(√n m) time. Other applications of the new algorithm are given, including an efficient implementation of Christofides' traveling salesman approximation algorithm and efficient solutions to update problems that require the linear programming duals for matching.
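
For practical use, general-graph weighted matching is available in libraries. The sketch below assumes networkx is installed and uses its blossom-based max_weight_matching (not the paper's scaling algorithm) to get a minimum-cost maximum-cardinality matching by negating weights:

```python
import networkx as nx

G = nx.Graph()
edges = [(0, 1, 4), (0, 2, 1), (1, 3, 2), (2, 3, 5), (1, 2, 3)]
G.add_weighted_edges_from(edges)

# Minimum-cost maximum-cardinality matching via negated edge weights.
for u, v, w in edges:
    G[u][v]["neg"] = -w
M = nx.max_weight_matching(G, maxcardinality=True, weight="neg")
print(M, sum(G[u][v]["weight"] for u, v in M))  # e.g. {(2, 0), (1, 3)} 3
```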

328 citations


Journal ArticleDOI
TL;DR: An algorithm that constructs a (restricted) “shortest path map” with respect to a given source point is presented and exploits the fact that shortest paths obey Snell's Law of Refraction at region boundaries, a local optimality property of shortest paths that is well known from the analogous optics model.
Abstract: The problem of determining shortest paths through a weighted planar polygonal subdivision with n vertices is considered. Distances are measured according to a weighted Euclidean metric: The length of a path is defined to be the weighted sum of (Euclidean) lengths of the subpaths within each region. An algorithm that constructs a (restricted) “shortest path map” with respect to a given source point is presented. The output is a partitioning of each edge of the subdivision into intervals of ε-optimality, allowing an ε-optimal path to be traced from the source to any query point along any edge. The algorithm runs in worst-case time O(ES) and requires O(E) space, where E is the number of “events” in our algorithm and S is the time it takes to run a numerical search procedure. In the worst case, E is bounded above by O(n^4) (and we give an Ω(n^4) lower bound), but it is likely that E will be much smaller in practice. We also show that S is bounded by O(n^4 L), where L is the precision of the problem instance (including the number of bits in the user-specified tolerance ε). Again, the value of S should be smaller in practice. The algorithm applies the “continuous Dijkstra” paradigm and exploits the fact that shortest paths obey Snell's Law of Refraction at region boundaries, a local optimality property of shortest paths that is well known from the analogous optics model. The algorithm generalizes to the multi-source case to compute Voronoi diagrams.
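
The local optimality property is easy to see numerically. In the Python sketch below (a hypothetical one-boundary instance; ternary search stands in for the paper's numerical search procedure), the optimal crossing point of the x-axis satisfies w1·sin θ1 = w2·sin θ2:

```python
from math import hypot

def crossing(s, t, w1, w2, lo=-100.0, hi=100.0):
    # weighted length of the path s -> (x, 0) -> t
    cost = lambda x: w1 * hypot(x - s[0], s[1]) + w2 * hypot(t[0] - x, t[1])
    for _ in range(200):                       # the cost is unimodal in x
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) < cost(m2): hi = m2
        else: lo = m1
    return (lo + hi) / 2

s, t, w1, w2 = (0.0, 3.0), (6.0, -1.0), 1.0, 2.0
x = crossing(s, t, w1, w2)
sin1 = (x - s[0]) / hypot(x - s[0], s[1])      # sin of incidence angle
sin2 = (t[0] - x) / hypot(t[0] - x, t[1])      # sin of refraction angle
print(x, w1 * sin1, w2 * sin2)                 # w1*sin1 == w2*sin2
```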

327 citations


Journal ArticleDOI
TL;DR: A multiobjective generalization of the heuristic search algorithm A* is presented, and it is shown that MOA* is complete and, when used with a suitably defined set of admissible heuristic functions, admissible.
Abstract: A multiobjective generalization of the heuristic search algorithm A* is presented, called MOA*. The research is motivated by the observation that most real-world problems involve multiple, conflicting, and noncommensurate objectives. MOA* explicitly accommodates this observation by identifying the set of all nondominated paths from a specified start node to a given set of goal nodes in an OR graph. It is shown that MOA* is complete and, when used with a suitably defined set of admissible heuristic functions, admissible. Several results concerning the comparison of versions of MOA* directed by different sets of heuristic functions are provided. The implications of using a monotone or consistent set of heuristic functions in MOA* are also discussed. Two simple examples are used to illustrate the behavior of the algorithm.
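
The ingredient MOA* adds to A* is bookkeeping over sets of mutually nondominated cost vectors rather than single scalar costs. A minimal Python sketch of that dominance filter (conventions assumed here: tuple cost vectors, smaller is better):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def nondominated(vectors):
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors)]

costs = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
print(nondominated(costs))   # [(3, 5), (4, 4), (5, 3)]
```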

246 citations


Journal ArticleDOI
TL;DR: A new string-matching algorithm is presented, which can be viewed as intermediate between the classical algorithms of Knuth, Morris, and Pratt and of Boyer and Moore; it has the advantage of being remarkably simple, which in turn makes its analysis possible.
Abstract: A new string-matching algorithm is presented, which can be viewed as intermediate between the classical algorithms of Knuth, Morris, and Pratt on the one hand and of Boyer and Moore on the other. The algorithm is linear in time and uses constant space, like the algorithm of Galil and Seiferas. It has the advantage of being remarkably simple, which consequently makes its analysis possible. The algorithm relies on a previously known result in combinatorics on words, called the Critical Factorization Theorem, which relates the global period of a word to its local repetitions of blocks.
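
The theorem behind the algorithm is easy to check experimentally: the global period of a word equals the maximum of its "local periods" (the length of the shortest square centered at each internal position, with overhangs allowed). A naive Python check, with the definitions transcribed informally rather than taken from the paper:

```python
def global_period(w):
    n = len(w)
    return next(p for p in range(1, n + 1)
                if all(w[j] == w[j + p] for j in range(n - p)))

def local_period(w, i):
    """Shortest p with w[j] == w[j+p] on the overlap around boundary i."""
    n = len(w)
    for p in range(1, n + 1):
        if all(w[j] == w[j + p]
               for j in range(max(0, i - p), i) if j + p < n):
            return p
    return n

w = "abaab"
lps = [local_period(w, i) for i in range(1, len(w))]
print(global_period(w), lps, max(lps) == global_period(w))
# 3 [2, 3, 1, 3] True  -- boundaries with local period 3 are "critical"
```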

173 citations


Journal ArticleDOI
TL;DR: Polynomial algorithms for testing stability and stabilizability, and for constructing a stabilizing control law, are presented, and connections are established between the notions of invariance and the classical notions of A-invariance and (A, B)-invariance of linear systems.
Abstract: A finite-state automaton is adopted as a model for Discrete Event Dynamic Systems (DEDS). Stability is defined as visiting a given set E infinitely often. Stabilizability is defined as choosing state feedback such that the closed-loop system is stable. These notions are proposed as properties of resiliency or error-recovery. An important ingredient in stability is shown to be a notion of transition-function-invariance. Relations between our notions of stability and invariance and the notions of safety, fairness, livelock, deadlock, etc., in the computer science literature are pointed out. Connections are established between our notions of invariance and the classical notions of A-invariance and (A, B)-invariance of linear systems. Polynomial algorithms for testing stability and stabilizability, and for constructing a stabilizing control law, are also presented.

142 citations


Journal ArticleDOI
TL;DR: A proof method based on a notion of transfinite semantic trees is presented, and it is shown how to apply it to prove the completeness of refutational theorem-proving methods for first-order predicate calculus with equality.
Abstract: In this paper, a proof method based on a notion of transfinite semantic trees is presented, and it is shown how to apply it to prove the completeness of refutational theorem-proving methods for first-order predicate calculus with equality. To demonstrate how this method is used, the completeness of two theorem-proving strategies, both refinements of resolution and paramodulation, is proved. Neither of the strategies needs the functionally reflexive axioms or paramodulation into variables. Therefore, the Wos-Robinson conjecture follows as a corollary. Another strategy, for Horn logic with equality, is also presented.

Journal ArticleDOI
TL;DR: The polynomial-time counting hierarchy, a hierarchy of complexity classes related to the notion of counting, is studied, settling many open questions dealing with oracle characterizations, closure under Boolean operations, and relations with other complexity classes.
Abstract: The polynomial-time counting hierarchy, a hierarchy of complexity classes related to the notion of counting, is studied. Some of its structural properties are investigated, settling many open questions dealing with oracle characterizations, closure under Boolean operations, and relations with other complexity classes. A new combinatorial technique for obtaining relativized separations of some of the studied classes, which imply absolute separations for some logarithmic-time-bounded complexity classes, is developed.

Journal ArticleDOI
TL;DR: A general semantic model of knowledge is introduced, to allow reasoning about statements such as "He knows that I know whether or not she knows whether or not it is raining."
Abstract: A general semantic model of knowledge is introduced, to allow reasoning about statements such as "He knows that I know whether or not she knows whether or not it is raining." This approach models a state of knowledge more naturally than previous proposals (including Kripke structures). Using this notion of model, a model theory for knowledge is developed. This theory enables one to interpret the notion of a "finite amount of information."

Journal ArticleDOI
TL;DR: A general framework is developed for removing randomness from randomized NC algorithms whose analysis uses only polylogarithmic independence; one application is an NC algorithm for the set discrepancy problem, which can be used to obtain many other NC algorithms, including a better NC edge-coloring algorithm.
Abstract: We develop a general framework for removing randomness from randomized NC algorithms whose analysis uses only polylogarithmic independence. Previously, no techniques were known to derandomize those RNC algorithms depending on more than constant independence. One application of our techniques is an NC algorithm for the set discrepancy problem, which can be used to obtain many other NC algorithms, including a better NC edge-coloring algorithm. As another application of our techniques, we provide an NC algorithm for the hypergraph coloring problem.

Journal ArticleDOI
TL;DR: Constraints on the local structure of the network give, by a counting argument and a construction, lower and upper bounds for K(m) that are both linear in m.
Abstract: Let K(m) denote the smallest number with the property that every m-state finite automaton can be built as a neural net using K(m) or fewer neurons. A counting argument shows that K(m) is at least Ω((m log m)^(1/3)), and a construction shows that K(m) is at most O(m^(3/4)). The counting argument and the construction allow neural nets with arbitrarily complex local structure and thus may require neurons that themselves amount to complicated networks. Mild, and in practical situations almost necessary, constraints on the local structure of the network give, again by a counting argument and a construction, lower and upper bounds for K(m) that are both linear in m.

Journal ArticleDOI
TL;DR: A broadcast communication network that interconnects multiple hosts is considered; it is revealed that an allocation algorithm can be employed to solve the system-level load-balancing problem.
Abstract: A broadcast communication network that interconnects multiple hosts is considered. A job generated by a host is either generic, in which case it can be processed at any of the hosts in the network, or dedicated, in which case it must be processed at the host from which it originates. It is assumed that each host can generate several types of dedicated jobs, where each type imposes a constraint on its average response time. The optimal design objective is twofold: first, to redistribute the generic jobs among the host computers in order to minimize the average response time, which shall be referred to as the load-balancing problem; second, for each host computer, to design a control that schedules the dedicated and the generic jobs so that the response-time constraints are met for each of the dedicated traffic types. Despite its complexity, the model described above has some attractive features. Specifically, for a given allocation of the generic traffic, the scheduling problem at each host can be solved as a polymatroid optimization problem. The underlying polymatroid structure leads to an efficient algorithm to determine, as a function of the offered load of generic traffic, the average delay of the generic jobs at any given host. Further analysis reveals that the delay functions at each of the hosts are convex and increasing. Therefore, an allocation algorithm can be employed to solve the system-level load-balancing problem. An example is provided indicating that a substantial improvement in performance can be obtained by incorporating scheduling into the load-balancing procedure.

Journal ArticleDOI
TL;DR: It is shown that to every arborescence there corresponds a family of extended Horn sets, where ordinary Horn sets correspond to stars with a root at the center; this derives from a theorem of Chandrasekaran that characterizes when an integer solution of a system of inequalities can be found by rounding a real solution in a certain way.
Abstract: The class of Horn clause sets in propositional logic is extended to a larger class for which the satisfiability problem can still be solved by unit resolution in linear time. It is shown that to every arborescence there corresponds a family of extended Horn sets, where ordinary Horn sets correspond to stars with a root at the center. These results derive from a theorem of Chandrasekaran that characterizes when an integer solution of a system of inequalities can be found by rounding a real solution in a certain way. A linear-time procedure is provided for identifying “hidden” extended Horn sets (extended Horn but for complementation of variables) that correspond to a specified arborescence. Finally, a way to interpret extended Horn sets in applications is suggested.
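
Unit resolution itself is a few lines: repeatedly pick a unit clause, assign it, and simplify. A Python sketch follows, with an illustrative clause encoding (integers, negative meaning complemented); for Horn and extended Horn sets the procedure decides satisfiability, and any variables left unassigned may be set false.

```python
def unit_resolution(clauses):
    clauses = [set(c) for c in clauses]
    assignment = {}
    while True:
        if any(not c for c in clauses):
            return None                      # empty clause: unsatisfiable
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment                # no units left: satisfiable
        lit = next(iter(unit))
        assignment[abs(lit)] = lit > 0
        # drop satisfied clauses, strip the falsified literal elsewhere
        clauses = [c - {-lit} for c in clauses if lit not in c]

# Horn set: (x1) & (~x1 | x2) & (~x2 | ~x3 | x4) & (~x1 | x3)
print(unit_resolution([[1], [-1, 2], [-2, -3, 4], [-1, 3]]))
# {1: True, 2: True, 3: True, 4: True}
```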

Journal ArticleDOI
TL;DR: In this article, a linear-time algorithm is presented for finding an appropriate embedding of a directed planar graph G and a corresponding face-on-vertex covering of cardinality O(p), where p is the minimum cardinality of a subset of the faces that cover all vertices.
Abstract: An algorithm is presented for generating a succinct encoding of all-pairs shortest-path information in a directed planar graph G with real-valued edge costs but no negative cycles. The algorithm runs in O(pn) time, where n is the number of vertices in G, and p is the minimum cardinality of a subset of the faces that cover all vertices, taken over all planar embeddings of G. The algorithm is based on a decomposition of the graph into O(pn) outerplanar subgraphs satisfying certain separator properties. Linear-time algorithms are presented for various subproblems, including that of finding an appropriate embedding of G and a corresponding face-on-vertex covering of cardinality O(p), and that of generating all-pairs shortest-path information in a directed outerplanar graph.

Journal ArticleDOI
TL;DR: This paper presents the theoretical foundations of several related approaches to circuit verification based on logic simulation, which exploit the three-valued modeling capability found in most logic simulators, where the third value, X, indicates a signal with unknown digital value.
Abstract: A logic simulator can prove the correctness of a digital circuit if it can be shown that only circuits fulfilling the system specification will produce a particular response to a sequence of simulation commands. This style of verification has advantages over other proof methods in being readily automated and requiring less attention on the part of the user to the low-level details of the design. It has advantages over other approaches to simulation in providing more reliable results, often at a comparable cost. This paper presents the theoretical foundations of several related approaches to circuit verification based on logic simulation. These approaches exploit the three-valued modeling capability found in most logic simulators, where the third value X indicates a signal with unknown digital value. Although the circuit verification problem is NP-hard as measured in the size of the circuit description, several techniques can reduce the simulation complexity to a manageable level for many practical circuits.
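
The three-valued domain {0, 1, X} is simple to model. A Python sketch of conservative gate evaluation follows (the gate set and mux example are illustrative, not the paper's simulator):

```python
X = "X"

def t_not(a):
    return X if a == X else 1 - a

def t_and(a, b):
    if a == 0 or b == 0: return 0     # a controlling 0 resolves the output
    if a == 1 and b == 1: return 1
    return X                          # otherwise the output stays unknown

def t_or(a, b):
    return t_not(t_and(t_not(a), t_not(b)))

# A mux z = (s & a) | (~s & b): with s = 1 the unknown b is irrelevant.
def mux(s, a, b):
    return t_or(t_and(s, a), t_and(t_not(s), b))

print(mux(1, 0, X))   # 0: proved regardless of the unknown input
print(mux(X, 1, 1))   # X: three-valued simulation is conservative here
```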

Journal ArticleDOI
TL;DR: A parallel version of the shortest augmenting path algorithm for the assignment problem is described; it was tested on a 14-processor Butterfly Plus computer, on problems with up to 900 million variables, and the speedup obtained increases with problem size.
Abstract: We describe a parallel version of the shortest augmenting path algorithm for the assignment problem. While generating the initial dual solution and partial assignment in parallel does not require substantive changes in the sequential algorithm, using several augmenting paths in parallel does require a new dual variable recalculation method. The parallel algorithm was tested on a 14-processor Butterfly Plus computer, on problems with up to 900 million variables. The speedup obtained increases with problem size. The algorithm was also embedded into a parallel branch-and-bound procedure for the traveling salesman problem on a directed graph, which was tested on the Butterfly Plus on problems involving up to 7,500 cities. To our knowledge, these are the largest assignment problems and traveling salesman problems solved so far.

Journal ArticleDOI
TL;DR: Any language recognizable in deterministic exponential time has an interactive proof that uses only logarithmic space, and it is shown that any language in BC-TIME(t(n)) has an interactive proof that uses time polynomial in t(n) but space only logarithmic in t(n).
Abstract: New results on the power of space-bounded probabilistic game automata are presented. Space-bounded analogues of Arthur-Merlin games and interactive proof systems, denoted BC and BP respectively (for Bounded random game automata with Complete and with Partial information), are considered. The main results are that BC-SPACE(s(n)) ⊆ BP-SPACE(log s(n)) for s(n) = Ω(n), and ∪_c DTIME(2^(c·s(n))) = BC-SPACE(s(n)) for s(n) = Ω(log n), where s(n) is space-constructible. A consequence of these results is that any language recognizable in deterministic exponential time has an interactive proof that uses only logarithmic space. The power of games with simultaneous time and space bounds is also studied, and it is shown that any language in BC-TIME(t(n)) has an interactive proof that uses time polynomial in t(n) but space only logarithmic in t(n).

Journal ArticleDOI
TL;DR: Certain problems related to the length of cycles and paths modulo a given integer are studied, and linear-time algorithms are presented that determine whether all cycles in an undirected graph are of length P mod Q.
Abstract: Certain problems related to the length of cycles and paths modulo a given integer are studied. Linear-time algorithms are presented that determine whether all cycles in an undirected graph are of length P mod Q and whether all paths between two specified nodes are of length P mod Q, for fixed integers P, Q. These results are compared to those for directed graphs.
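
The simplest instance of such a test is Q = 2: all cycles have even length exactly when the graph is bipartite, which a linear-time BFS labeling decides. A Python sketch of that special case follows (the paper handles general residues P mod Q; this is only the familiar base case):

```python
from collections import deque

def all_cycles_even(adj):
    color = {}
    for s in adj:                         # handle each connected component
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = color[u] ^ 1
                    q.append(v)
                elif color[v] == color[u]:
                    return False          # odd cycle found
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(all_cycles_even(square), all_cycles_even(triangle))  # True False
```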

Journal ArticleDOI
TL;DR: A general technique is presented for updating the maximum (minimum) value of a decomposable function as elements are inserted into and deleted from the set S, under a semi-online model of dynamization in which, when an element is inserted, we are told how long it will stay.
Abstract: Let S be a set, f: S×S→R+ a bivariate function, and f(x,S) the maximum value of f(x,y) over all elements y∈S. We say that f is decomposable with respect to the maximum if f(x,S) = max{f(x,S1), f(x,S2), …, f(x,Sk)} for any decomposition S = S1 ∪ S2 ∪ … ∪ Sk. Computing the maximum (minimum) value of a decomposable function is inherent in many problems of computational geometry and robotics. In this paper, a general technique is presented for updating the maximum (minimum) value of a decomposable function as elements are inserted into and deleted from the set S. Our result holds for a semi-online model of dynamization: when an element is inserted, we are told how long it will stay. Applications of this technique include efficient algorithms for dynamically computing the diameter or closest pair of a set of points, minimum separation among a set of rectangles, smallest distance between a set of points and a set of hyperplanes, and largest or smallest area (perimeter) rectangles determined by a set of points. These problems are fundamental to application areas such as robotics, VLSI masking, and optimization.

Journal ArticleDOI
TL;DR: A new technique for clipping, called virtual clipping, is provided, whose overhead per window W depends logarithmically on the number of intersections between the borders of W and the input segments, in contrast to the linear overhead of the conventional clipping technique.
Abstract: Randomized, optimal algorithms to find a partition of the plane induced by a set of algebraic segments of bounded degree, and by a set of linear chains of bounded degree, are given. This paper also provides a new technique for clipping, called virtual clipping, whose overhead per window W depends logarithmically on the number of intersections between the borders of W and the input segments. In contrast, the overhead of the conventional clipping technique depends linearly on this number of intersections. As an application of virtual clipping, a new, simple, and efficient algorithm for planar point location is given.

Journal ArticleDOI
TL;DR: An algebraic framework for the study of recursion is developed; for the first time, the query answer can be expressed in an explicit algebraic form within an algebraic structure.
Abstract: An algebraic framework for the study of recursion is developed. For immediate linear recursion, a Horn clause is represented by a relational algebra operator. It is shown that the set of all such operators forms a closed semiring. In this formalism, query answering corresponds to solving a linear equation. For the first time, the query answer can be expressed in an explicit algebraic form within an algebraic structure. The manipulative power thus afforded has several implications for the implementation of recursive query processing algorithms. Several possible decompositions of a given operator are presented that improve the performance of the algorithms, as well as several transformations that make it possible to take into account any selections or projections present in a given query. In addition, it is shown that mutual linear recursion can also be studied within a closed semiring, by using relation vectors and operator matrices. Regarding nonlinear recursion, it is first shown that Horn clauses always give rise to multilinear recursion, which can always be reduced to bilinear recursion. Bilinear recursion is then shown to form a nonassociative closed semiring. Finally, several sufficient and necessary-and-sufficient conditions for bilinear recursion to be equivalent to a linear one of a specific form are given. One of the sufficient conditions is derived by embedding the bilinear recursion in an algebra.
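
In this formalism the answer to a linear recursive query is the least solution of x = a(x) ∪ b, i.e., x = a*(b). A Python sketch for the classic ancestor program, where the operator is relational composition and the closure is reached by fixpoint iteration (the encoding is illustrative, not from the paper):

```python
def compose(r, s):
    """Relational composition: r . s = {(x, z) : (x, y) in r, (y, z) in s}."""
    return {(x, z) for x, y1 in r for y2, z in s if y1 == y2}

def solve_linear(a, b):
    """Least solution of x = (a . x) | b by fixpoint iteration (a* b)."""
    x = set(b)
    while True:
        nxt = compose(a, x) | b
        if nxt == x:
            return x
        x = nxt

parent = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}
# ancestor(X,Y) :- parent(X,Y).  ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
print(sorted(solve_linear(parent, parent)))
```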

Journal ArticleDOI
TL;DR: This paper shows that partial correctness logic can be viewed as a special case of the equational logic of iteration theories, and the familiar rules for the structured programming constructs of composition, if-then-else and while-do are shown valid in all guarded iteration theories.
Abstract: What is special about the rules of Hoare logic? This paper shows that partial correctness logic can be viewed as a special case of the equational logic of iteration theories [6, 7, 24]. It is shown how to formulate a partial correctness assertion {α} f {β} as an equation between iteration theory terms. The guards (α, β) that appear in partial correctness assertions are equationally axiomatized, and a new representation theorem for Boolean algebras is derived. The familiar rules for the structured programming constructs of composition, if-then-else, and while-do are shown valid in all guarded iteration theories. A new system of partial correctness logic is described that applies to all flowchart programs. The invariant guard condition, weaker than the well-known condition of expressiveness, is found to be both necessary and sufficient for the completeness of these rules. The Cook completeness theorem [19] follows as an easy corollary. The role played by weakest liberal preconditions in connection with completeness is examined.

Journal ArticleDOI
TL;DR: In this article, the authors present a parallel algorithm for computing the portion of a simple polygonal chain with n vertices visible from a point in the plane in O(log n) time using O(n/log n) processors in the CREW-PRAM computational model.
Abstract: We present a parallel algorithm for computing the portion of a simple polygonal chain with n vertices visible from a point in the plane. The algorithm runs in O(log n) time using O(n/log n) processors in the CREW-PRAM computational model, and hence is asymptotically optimal.

Journal ArticleDOI
TL;DR: It is proved that if the constants explicitly involved in any operation performed in the tree are restricted to be “0” and “1” (and any other constant must be computed), then an Ω(log log n) lower bound holds on the depth of any computation tree with operations { +, -, *, /, mod, < } that decides whether gcd(a, b) = 1.
Abstract: It is proved that no finite computation tree with operations { +, -, *, /, mod, < } can decide whether the greatest common divisor (gcd) of a and b is one, for all pairs of integers a and b. This settles a problem posed by Grötschel et al. Moreover, if the constants explicitly involved in any operation performed in the tree are restricted to be “0” and “1” (and any other constant must be computed), then we prove an Ω(log log n) lower bound on the depth of any computation tree with operations { +, -, *, /, mod, < } that decides whether the gcd of a and b is one, for all pairs of n-bit integers a and b. A novel technique for handling the truncation operation is implicit in the proof of this lower bound. In a companion paper, other lower bounds for a large class of problems are proved using a similar technique.

Journal ArticleDOI
TL;DR: This work studies nonmonotonic logics based on various sets of defaults and presents a necessary and sufficient condition for a nonmonotonic modal theory to be degenerate, which provides several alternative descriptions of degenerate theories.
Abstract: Conclusions by failure to prove the opposite are frequently used in reasoning about an incompletely specified world. This naturally leads to logics for default reasoning which, in general, are nonmonotonic, i.e., introducing new facts can invalidate previously made conclusions. Accordingly, a nonmonotonic theory is called (nonmonotonically) degenerate, if adding new axioms does not invalidate already proved theorems. We study nonmonotonic logics based on various sets of defaults and present a necessary and sufficient condition for a nonmonotonic modal theory to be degenerate. In particular, this condition provides several alternative descriptions of degenerate theories. Also we establish some closure properties of sets of defaults defining a nonmonotonic modal logic.

Journal ArticleDOI
TL;DR: The condition for stability of the system is first precisely specified; the degree of parallelism, expressed as the asymptotic average number of processors that work concurrently, is computed; and various design and simulation aspects concerning parallel processing systems are considered.
Abstract: The general problem of parallel (concurrent) processing is investigated from a queuing-theoretic point of view. As a basic simple model, consider infinitely many processors that can work simultaneously, and a stream of arriving jobs, each carrying a processing time requirement. Upon arrival, a job is allocated to a processor and starts being executed, unless it is blocked by another one already in the system. Indeed, any job can be randomly blocked by any preceding one, in the sense that it cannot start being processed before the one that blocks it leaves. After execution, the job leaves the system. The arrival times, the processing times, and the blocking structures of the jobs form a stationary and ergodic sequence. The random precedence constraints capture the essential operational characteristic of parallel processing and allow a unified treatment of concurrent processing systems from such diverse areas as parallel computation, database concurrency control, queuing networks, and flexible manufacturing systems. The above basic model includes the G/G/1 and G/G/∞ queuing systems as special extreme cases. Although there is an infinite number of processors, the precedence constraints induce a queuing phenomenon, which, depending on the loading conditions, can lead to stability or instability of the system. In this paper, the condition for stability of the system is first precisely specified. The asymptotic behavior, at large times, of the quantities associated with the performance of the system is then studied, and the degree of parallelism, expressed as the asymptotic average number of processors that work concurrently, is computed. Finally, various design and simulation aspects concerning parallel processing systems are considered, and the case of a finite number of processors is discussed. The results proved for the basic model are then extended to cover more complex and realistic parallel processing systems, where each job has a random internal structure of subtasks to be executed according to some internal precedence constraints.