
Showing papers in "Journal of the ACM in 1972"


Journal ArticleDOI
TL;DR: New algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem are presented; Dinic (1970) independently showed that, in a network with n nodes and p arcs, a maximum flow can be computed in O(n²p) primitive operations by an algorithm which augments along shortest augmenting paths.
Abstract: This paper presents new algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem. Upper bounds on the numbers of steps in these algorithms are derived, and are shown to compare favorably with upper bounds on the numbers of steps required by earlier algorithms. First, the paper states the maximum flow problem, gives the Ford-Fulkerson labeling method for its solution, and points out that an improper choice of flow augmenting paths can lead to severe computational difficulties. Then rules of choice that avoid these difficulties are given. We show that, if each flow augmentation is made along an augmenting path having a minimum number of arcs, then a maximum flow in an n-node network will be obtained after no more than ¼(n³ − n) augmentations; and then we show that if each flow change is chosen to produce a maximum increase in the flow value then, provided the capacities are integral, a maximum flow will be determined within at most 1 + log_{M/(M−1)} f*(t, s) augmentations, where f*(t, s) is the value of the maximum flow and M is the maximum number of arcs across a cut. Next a new algorithm is given for the minimum-cost flow problem, in which all shortest-path computations are performed on networks with all weights nonnegative. In particular, this algorithm solves the n × n assignment problem in O(n³) steps. Following that we explore a "scaling" technique for solving a minimum-cost flow problem by treating a sequence of derived problems with "scaled down" capacities. It is shown that, using this technique, the solution of a Hitchcock transportation problem with m sources and n sinks, m ≤ n, and maximum flow B, requires at most (n + 2) log₂ (B/n) flow augmentations. Similar results are also given for the general minimum-cost flow problem. An abstract stating the main results of the present paper was presented at the Calgary International Conference on Combinatorial Structures and Their Applications, June 1969. In a paper by Dinic (1970) a result closely related to the main result of Section 1.2 is obtained. Dinic shows that, in a network with n nodes and p arcs, a maximum flow can be computed in O(n²p) primitive operations by an algorithm which augments along shortest augmenting paths. KEY WORDS AND PHRASES: network flows, transportation problem, analysis of algorithms CR CATEGORIES: 5.3, 5.4, 8.3
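The shortest-augmenting-path rule analyzed above is what is now commonly called the Edmonds-Karp refinement of the Ford-Fulkerson labeling method. The following minimal Python sketch illustrates that rule under an assumed dict-of-dicts capacity representation; the names and data structures are illustrative, not taken from the paper.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Shortest-augmenting-path (BFS) max flow; capacity[u][v] gives arc
    capacities, and residual capacities are kept in a separate dict."""
    residual = {u: dict(arcs) for u, arcs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)  # ensure reverse arcs
    flow = 0
    while True:
        # Breadth-first search finds an augmenting path with fewest arcs.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                  # no augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path, then augment.
        bottleneck = float('inf')
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Example: the maximum flow from 's' to 't' in this four-node network is 3.
print(max_flow({'s': {'a': 2, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 1}}, 's', 't'))
```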

2,186 citations


Journal ArticleDOI
TL;DR: It is proved that, in fuzzy logic, a set of clauses is unsatisfiable iff it is unsatisfiable in two-valued logic, and it is also shown that if the most unreliable clause of a set of clauses has truth-value a > 0.5, then all the logical consequences obtained by repeatedly applying the resolution principle have truth-values never smaller than a.
Abstract: The relationship between fuzzy logic and two-valued logic in the context of the first order predicate calculus is discussed. It is proved that if every clause in a set of clauses is something more than a “half-truth” and the most reliable clause has truth-value a and the most unreliable clause has truth-value b, then we are guaranteed that all the logical consequences obtained by repeatedly applying the resolution principle will have truth-values between a and b. The significance of this theorem is also discussed.

292 citations


Journal ArticleDOI
TL;DR: A switchyard is modeled as an acyclic directed graph with a unique source and a unique sink, in which each vertex represents a siding with indefinite storage space that may behave as a stack, a queue, or a deque.
Abstract: Inspired by Knuth [2, p. 234], we wish to consider the following problem: Suppose we are presented with the layout of a railroad switchyard (Figure 1). If a train is driven into one end of the yard, what rearrangements of the cars may be made before the train comes out the other end? In order to get a handle on the problem, we must introduce some formalization. A switchyard is an acyclic directed graph, with a unique source and a unique sink (Figure 2). Each vertex represents a siding. The vertex/siding is assumed to have indefinite storage space and may be a stack, a queue, or a deque of some sort (see Knuth [2, p. 234]). A stack is a siding which has the property that the last element inserted is the first to be removed. A queue has the property that the first element inserted is the first to be removed. In the switchyard, the sidings associated with the source and sink are assumed to be queues. Suppose a finite sequence of numbers s = (s1, s2, ..., sn) is placed in the source queue of a switchyard (Figure 3). We may rearrange s by moving the elements of s through the switchyard. At each step, an element is moved from some siding to another siding along an arc of the switchyard. After a suitable number of such moves, all elements will be in the sink queue. If they are in order, smallest to largest, we have sorted the sequence s using the switchyard. We wish to analyze the sequences s which may be sorted in a switchyard Y. We lose nothing in our formalism by allowing storage only on the vertices, and not on the arcs of the switchyard. We ignore questions concerning the finite size of the sidings; assuming small sidings complicates the problem considerably. A circuit in the switchyard will allow us to sort any sequence; thus we do not allow circuits. Having established our model, we proceed to discover its properties.
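In the simplest nontrivial switchyard, a single stack siding between the source and sink queues, a sequence is sortable exactly when the greedy strategy of popping whenever the top of the stack is the next value needed succeeds. The sketch below illustrates that special case; it is an illustration of the model, not an algorithm from the paper, and assumes the input is a permutation of 1..n.

```python
def sortable_with_one_stack(seq):
    """Return True if seq can be rearranged into sorted order by pushing
    elements (in input order) onto a single stack siding and popping
    them to the sink queue at any time."""
    stack = []
    next_needed = 1                # assumes seq is a permutation of 1..n
    for x in seq:
        stack.append(x)
        # Greedily move cars to the sink while the right one is on top.
        while stack and stack[-1] == next_needed:
            stack.pop()
            next_needed += 1
    while stack and stack[-1] == next_needed:
        stack.pop()
        next_needed += 1
    return not stack

# Example: (2, 3, 1) cannot be sorted with one stack, but (3, 1, 2) can.
assert not sortable_with_one_stack([2, 3, 1])
assert sortable_with_one_stack([3, 1, 2])
```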

225 citations


Journal ArticleDOI
TL;DR: Efficient algorithms for finding a maximum size clique and a minimum coloration of transitive graphs are presented and shown to be applicable in solving problems in memory allocation and circuit layout.
Abstract: A graph G with vertex set N = {1, 2, ..., n} is called a permutation graph if there exists a permutation P on N such that for i, j ∈ N, (i − j)[P⁻¹(i) − P⁻¹(j)] < 0 if and only if i and j are joined by an edge in G. A structural relationship is established between permutation graphs and transitive graphs. An algorithm for determining whether a given graph is a permutation graph is given. Efficient algorithms for finding a maximum size clique and a minimum coloration of transitive graphs are presented. These algorithms are then shown to be applicable in solving problems in memory allocation and circuit layout.
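As a concrete illustration of the definition, the sketch below (illustrative code, not taken from the paper) constructs the permutation graph of a permutation P by joining i and j exactly when (i − j)[P⁻¹(i) − P⁻¹(j)] < 0.

```python
from itertools import combinations

def permutation_graph(perm):
    """Edge set of the permutation graph of perm, a tuple containing
    each of 1..n exactly once; perm[k-1] is P(k)."""
    n = len(perm)
    inverse = {perm[k]: k + 1 for k in range(n)}          # P^{-1}
    edges = set()
    for i, j in combinations(range(1, n + 1), 2):
        if (i - j) * (inverse[i] - inverse[j]) < 0:        # order reversed by P
            edges.add((i, j))
    return edges

# Example: P = (2, 3, 1) reverses the pairs (1,2) and (1,3) but not (2,3).
print(permutation_graph((2, 3, 1)))   # {(1, 2), (1, 3)}
```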

186 citations


Journal ArticleDOI
TL;DR: Topological characterizations of sets of graphs which can be generated by context-free or linear grammars are given, and it is shown that the set of all planar graphs cannot be generated by a context-free grammar while the set of all outerplanar graphs can.
Abstract: Topological characterizations of sets of graphs which can be generated by context-free or linear grammars are given. It is shown, for example, that the set of all planar graphs cannot be generated by a context-free grammar while the set of all outerplanar graphs can.

124 citations


Journal ArticleDOI
TL;DR: In this paper, a new search algorithm for finding the simple cycles of any finite directed graph is presented, and the validity of the algorithm is proven.
Abstract: In many applications of directed graph theory, it is desired to obtain a list of the simple cycles of the graph. In this paper, a new search algorithm for finding the simple cycles of any finite directed graph is presented, and the validity of the algorithm is proven. The algorithm has been implemented experimentally in Snobol3, and tests indicate that the algorithm is reasonably fast. (The simple cycles of a 193 vertex graph were obtained in 6.8 seconds on an IBM 7094 computer.) KEY WORDS AND PHRASES: directed graphs, cycles, path examination, feedback paths, program segmentation, flow-chart analysis, algorithms, search algorithms, Snobol CR CATEGORIES: 5.32
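The paper's own search algorithm is not reproduced here, but a simple backtracking enumeration in the same spirit can be sketched: grow an elementary path from a root vertex, report a cycle whenever an arc returns to the root, and only extend to vertices larger than the root and not already on the path. The representation below is an assumption for illustration.

```python
def simple_cycles(adj):
    """Enumerate the simple (elementary) cycles of a directed graph given
    as adj[v] = list of successors.  Each cycle is reported exactly once,
    rooted at its smallest vertex, as a list of vertices."""
    cycles = []

    def search(root, v, path, on_path):
        for w in adj.get(v, []):
            if w == root:
                cycles.append(path[:])                # closed a simple cycle
            elif w > root and w not in on_path:       # keep root minimal, path simple
                path.append(w)
                on_path.add(w)
                search(root, w, path, on_path)
                on_path.remove(w)
                path.pop()

    for root in sorted(adj):
        search(root, root, [root], {root})
    return cycles

# Example: a 3-cycle and a 2-cycle sharing vertex 1.
print(simple_cycles({1: [2], 2: [3, 1], 3: [1]}))
# [[1, 2, 3], [1, 2]]
```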

99 citations


Journal ArticleDOI
John E. Savage1
TL;DR: In this paper, measures of the computational work and computational delay required by machines to compute functions are given, and many exchange inequalities involving storage, time, and other important parameters of computation are developed.
Abstract: Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.

98 citations


Journal ArticleDOI
TL;DR: It is shown that the relabeling associated with a basis change can be minimized by defining yet another function called the successor function, and the algorithms for labeling and relabeling are then specialized for the specific case of transportation problems.
Abstract: Adjacent extreme point problems involving a tree basis (e.g. the transportation problem) require the determination of cycles which are created when edges not belonging to the basis are added to the basis-tree. This paper offers an improvement over the predecessor-index method for finding such cycles and involves the use of a distance function defined on the nodes of the tree, in addition to the predecessor labels. It is shown that the relabeling associated with a basis change can be minimized by defining yet another function called the successor function. The algorithms for labeling and relabeling are then specialized for the specific case of transportation problems.
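The use of a distance (depth) label alongside the predecessor label can be illustrated with a hypothetical sketch: to find the cycle created by adding an out-of-tree edge (u, v), first walk the deeper endpoint up until both endpoints are at equal depth, then climb both in lock-step until they meet at their common ancestor. The function and label names below are invented for illustration.

```python
def basis_cycle(pred, depth, u, v):
    """Cycle created when the non-basic edge (u, v) is added to a basis
    tree.  pred[x] is the predecessor (parent) of node x and depth[x] its
    distance from the root; returns the paths from u and from v up to
    their common ancestor."""
    path_u, path_v = [u], [v]
    # Equalize depths first, using the distance labels to avoid scanning
    # all the way to the root as the predecessor-index method alone would.
    while depth[u] > depth[v]:
        u = pred[u]
        path_u.append(u)
    while depth[v] > depth[u]:
        v = pred[v]
        path_v.append(v)
    # Climb in lock-step until the two paths meet.
    while u != v:
        u = pred[u]
        path_u.append(u)
        v = pred[v]
        path_v.append(v)
    return path_u, path_v

# Example tree rooted at 1, with predecessor and depth labels given explicitly.
pred = {1: None, 2: 1, 3: 1, 4: 2, 5: 2}
depth = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
print(basis_cycle(pred, depth, 4, 3))   # ([4, 2, 1], [3, 1])
```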

84 citations


Journal ArticleDOI
Alan G. Konheim1, Bernd Meister1
TL;DR: Two related measures of the grade of service are considered: the average queue length and the average virtual waiting time at each station.
Abstract: The statistical behavior of a loop service system is studied. The system consists of a main station, a server, and N stations arranged on a loop. Customers arrive at each station according to a random process. The server makes successive tours along the loop, bringing customers from the N stations to the main station. Two related measures of the grade of service are considered: the average queue length and the average virtual waiting time at each station.

83 citations


Journal ArticleDOI
Shi-Kuo Chang1
TL;DR: Experimental studies indicate that this iterative method, which generates a minimal tree with a Steiner topology in at most n² steps, produces trees close to optimal Steiner minimal trees.
Abstract: An iterative method is described which generates a minimal tree with a Steiner topology in at most n² steps, where n is the number of fixed vertices. The SI algorithm is formulated. When n < 4, the SI algorithm converges to a proper tree. Experimental studies indicate that this algorithm generates trees close to optimal Steiner minimal trees.
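For the smallest nontrivial case, a single Steiner point joining three fixed vertices, the optimal junction minimizes the total Euclidean distance to the three vertices and can be approximated by a simple fixed-point (Weiszfeld-style) iteration. The sketch below is only an illustration of that subproblem under assumed coordinates; it is not the SI algorithm of the paper.

```python
import math

def steiner_point(points, iterations=200):
    """Approximate the point minimizing the sum of Euclidean distances to
    the given fixed points (the junction of a three-terminal Steiner tree)."""
    # Start at the centroid of the fixed vertices.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        weights = []
        for px, py in points:
            d = math.hypot(x - px, y - py)
            weights.append(1.0 / d if d > 1e-12 else 0.0)
        total = sum(weights)
        if total == 0:
            break
        # Weiszfeld update: distance-weighted average of the fixed points.
        x = sum(w * px for w, (px, py) in zip(weights, points)) / total
        y = sum(w * py for w, (px, py) in zip(weights, points)) / total
    return x, y

# Three fixed vertices; the junction lands near the Fermat point in the interior.
print(steiner_point([(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]))
```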

79 citations


Journal ArticleDOI
TL;DR: Counterexamples to Augustson and Minker's version of the Bierstone algorithm for finding the set of cliques of a finite undirected linear graph are presented, together with a corrected version of the algorithm.
Abstract: Recently Augustson and Minker presented a version of the Bierstone algorithm for finding the set of cliques of a finite undirected linear graph. Their version contains two errors. In this paper the counterexamples to their version and the modified version of the Bierstone algorithm are presented.
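The Bierstone algorithm itself is not reproduced here, but the task it addresses, listing every clique (maximal complete subgraph) of an undirected graph, can be illustrated with a short recursive enumeration in the style of Bron and Kerbosch, assuming an adjacency-set representation; it is not the Bierstone procedure.

```python
def maximal_cliques(adj):
    """Yield every maximal clique of an undirected graph, where adj[v]
    is the set of neighbors of v."""
    def expand(clique, candidates, excluded):
        if not candidates and not excluded:
            yield sorted(clique)                 # no further extension possible
            return
        for v in list(candidates):
            yield from expand(clique | {v},
                              candidates & adj[v],
                              excluded & adj[v])
            candidates.remove(v)
            excluded.add(v)
    yield from expand(set(), set(adj), set())

# Example: a triangle 1-2-3 plus a pendant edge 3-4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(list(maximal_cliques(adj)))   # [[1, 2, 3], [3, 4]]
```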

Journal ArticleDOI
TL;DR: This result implies that serial storage may be used to handle files requiring several points of immediate two-way read-write access without interruptions for rewinds, etc., and yields simplified proofs of several results in the literature of computational complexity.
Abstract: The main result of this paper is that, given a Turing machine with several read-write heads per tape, one can effectively construct an equivalent multitape Turing machine with a single read-write head per tape, which runs at precisely the same speed. This result implies that serial storage may be used to handle files requiring several points of immediate two-way read-write access without interruptions for rewinds, etc. It also yields simplified proofs of several results in the literature of computational complexity.

Journal ArticleDOI
TL;DR: The establishment of lower bounds on the number of comparisons necessary to solve various combinatorial problems is considered; in particular, the maximum of a set of n real numbers cannot be computed in fewer than n − 1 comparisons if comparisons of only linear functions of the numbers are permitted.
Abstract: The establishment of lower bounds on the number of comparisons necessary to solve various combinatorial problems is considered. Some of the new results are: (a) given two finite sets of real numbers, A and B, where n = max(|A|, |B|), O(n log n) comparisons are required to determine if A = B, even when comparisons are allowed between linear functions of the numbers; and (b) the maximum of a set of n real numbers cannot be computed in fewer than n − 1 comparisons if comparisons of only linear functions of the numbers are permitted, but the maximum can be computed in ⌈log₂ n⌉ comparisons if comparisons are allowed between exponential functions of the numbers.

Journal ArticleDOI
TL;DR: Some consequences of the Blum axioms for step counting functions are investigated, and complexity classes of recursive functions are introduced analogous to the Hartmanis-Stearns classes of recursive sequences.
Abstract: Some consequences of the Blum axioms for step counting functions are investigated. Complexity classes of recursive functions are introduced analogous to the Hartmanis-Stearns classes of recursive sequences. Arbitrarily large "gaps" are shown to occur throughout any complexity hierarchy.

Journal ArticleDOI
TL;DR: A formal definition of one grammar "covering" another grammar is presented, and it is argued that this definition has the property that G' covers G when and only when the ability to parse G' suffices for parsing G.
Abstract: A formal definition of one grammar "covering" another grammar is presented. It is argued that this definition has the property that G' covers G when and only when the ability to parse G' suffices for parsing G. It is shown that every grammar may be covered by a grammar in canonical two form. Every Λ-free grammar is covered by an operator normal form grammar, while there exist grammars which cannot be covered by any grammar in Greibach form. Any grammar may be covered by an invertible grammar. Each Λ-free and chain-reduced LR(k) (bounded right context) grammar is covered by a precedence detectable, LR(k) (bounded right context) reducible grammar.

Journal ArticleDOI
TL;DR: A fairly general approach is presented for building the theories of equality, partial ordering, and sets into resolution theorem provers by replacing most of their axioms with refutation complete, valid, and efficient inference rules.
Abstract: To make further progress, resolution principle programs need to make better inferences and to make them faster. This paper presents a fairly general approach for taking some advantage of the structure of special theories, for example, the theories of equality, partial ordering, and sets. The object of the approach is to replace some of the axioms of a given theory by (refutation) complete, valid, efficient (in time) inference rules. The author believes that the new rules are efficient because: (1) inference for inference, a computer program embodying the rules should be faster than a resolution program computing with the corresponding axioms; (2) more importantly, certain troublesome inferences made by resolution are avoided by the new rules. In this paper, the three main applications of the approach concern "building in" the theories of equality, partial ordering, and sets and may be stated roughly as follows. (1) If only {x = x} is retained from the equality axioms, and if the others are replaced by the functionally reflexive axioms and the rule of renamable paramodulation, refutation completeness is preserved. (2) If only {x ⊆ x} is retained from the eight (not all independent) partial ordering axioms for {=, ⊆, ⊂} and if the other seven are replaced by the rules r(⊆, ⊂) and r(⊂), refutation completeness is preserved. (3) If a certain seven of the twenty-four set axioms are retained and if the remaining seventeen are replaced by the rules r(⊆, ∈), r(⊂), r(⊇), r(∁) (complement), r(∪), r(∩), and r(u) (unit sets), refutation completeness is preserved. KEY WORDS AND PHRASES: theorem proving, built-in theories, theories with equality, partial ordering, set theory, resolution principle, paramodulation, refutation completeness, inference rules, artificial intelligence, deduction, mathematical logic CR CATEGORIES: 3.60, 3.64, 3.66, 5.21

Journal ArticleDOI
TL;DR: Sufficient conditions are given under which preservation of the local state space structure (weak morphism) also forces the preservation of component interaction, providing a rationale for making valid inferences about the local structure of a system when that of a behaviorally valid model is known.
Abstract: A simulation consists of a triple of automata (system to be simulated, model of this system, computer realizing the model). In a valid simulation these elements are connected by behavior and structure preserving morphisms. Informational and complexity considerations motivate the development of structure preserving morphisms which can preserve not only global, but also local dynamic structure. A formalism for automaton structure assignment and the relevant weak and strong structure preserving morphisms are introduced. It is shown that these preservation notions properly refine the usual automaton homomorphism concepts. Sufficient conditions are given under which preservation of the local state space structure (weak morphism) also forces the preservation of component interaction. The strong sense in which these conditions are necessary is also demonstrated. This provides a rationale for making valid inferences about the local structure of a system when that of a behaviorally valid model is known.


Journal ArticleDOI
TL;DR: These examples display the great versatility of the results and demonstrate the flexibility available for the intelligent design of discriminatory treatment among jobs (in favor of short jobs and against long jobs) in time shared computer systems.
Abstract: Scheduling algorithms for time shared computing facilities are considered in terms of a queueing theory model. The extremely useful limit of "processor sharing" is adopted, wherein the quantum of service shrinks to zero; this approach greatly simplifies the problem. A class of algorithms is studied for which the scheduling discipline may change for a given job as a function of the amount of service received by that job. These multilevel disciplines form a natural extension to many of the disciplines previously considered. The average response time for jobs, conditioned on their service requirement, is solved for. Explicit solutions are given for the system M/G/1 in which levels may be first come first served (FCFS), feedback (FB), or round-robin (RR) in any order. The service time distribution is restricted to be a polynomial times an exponential for the case of RR. Examples are described for which the average response time is plotted. These examples display the great versatility of the results and demonstrate the flexibility available for the intelligent design of discriminatory treatment among jobs (in favor of short jobs and against long jobs) in time shared computer systems.
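For the simplest member of this family, pure processor sharing of an M/G/1 queue, the conditional average response time is linear in the service requirement: a job needing x seconds of service has average response time x/(1 − ρ), where ρ is the server utilization. The snippet below evaluates this well-known limiting formula; it is an illustration of the processor-sharing limit, not of the multilevel solutions derived in the paper.

```python
def ps_response_time(x, arrival_rate, mean_service):
    """Average response time of a job with service requirement x under
    M/G/1 processor sharing (the quantum shrunk to zero)."""
    rho = arrival_rate * mean_service        # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("system is unstable (rho >= 1)")
    return x / (1.0 - rho)

# A job needing 2 s of service, with 0.25 jobs/s arriving and 2 s mean service:
print(ps_response_time(2.0, 0.25, 2.0))      # 4.0 seconds (rho = 0.5)
```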

Journal ArticleDOI
TL;DR: A demonstration that versions of the linked conjunct, resolution, matrix reduction, and model elimination proof procedures can be highly related in their design.
Abstract: The linked conjunct, resolution, matrix reduction, and model elimination proof procedures constitute a nearly exhaustive list of the basic Herbrand proof procedures introduced in the 1960's. Each was introduced as a hopefully efficient complete procedure for the first order predicate calculus for the purpose of mechanical theorem proving. This paper contains a demonstration that versions of these procedures can be highly related in their design. S-linear resolution, a particular strategy of resolution previously proposed, is seen to possess a natural refinement isomorphic at ground level to a refinement of the model elimination procedure. There is also an isomorphism at the general level between a less natural s-linear resolution refinement and the model elimination refinement. The model elimination procedure is also interpreted within the linked conjunct and matrix reduction procedures. An alternate interpretation of these results is that, very roughly, the procedures other than resolution can be viewed as forms of linear resolution. KEY WORDS AND PHRASES: mechanical theorem proving, Herbrand proof procedure, linked conjunct, resolution, model elimination, matrix reduction procedure, linear resolution procedure CR CATEGORIES: 3.64, 3.66, 5.21

Journal ArticleDOI
TL;DR: A set of sufficient conditions on tape functions L1(n) and L2(n) is presented which guarantees the existence of a set accepted by an L1(n)-tape bounded nondeterministic Turing machine that is not accepted by any L2(n)-tape bounded nondeterministic Turing machine.
Abstract: A set of sufficient conditions on tape functions L1(n) and L2(n) is presented that guarantees the existence of a set accepted by an L1(n)-tape bounded nondeterministic Turing machine, but not accepted by any L2(n)-tape bounded nondeterministic Turing machine. Interesting corollaries arise. For example, it is shown that, for integers m ≥ 0, p > 1, and q > 1, there is a set accepted by an n^(m+p/q)-tape bounded nondeterministic Turing machine that is not accepted by any n^(m+p/(q+1))-tape bounded nondeterministic Turing machine.

Journal ArticleDOI
TL;DR: A (computer programming) algorithm is presented which is based on Dijkstra's principle for finding the lengths of all shortest paths from either a fixed node or from all nodes in N-node nonnegative-distance complete networks.
Abstract: A (computer programming) algorithm is presented which is based on Dijkstra's principle for finding the lengths of all shortest paths from either a fixed node or from all nodes in N-node nonnegative-distance complete networks. This algorithm is more efficient than other available (computer programming) algorithms known to the author for solving the above two problems. An empirical study on a computer has confirmed its superiority.
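For reference, Dijkstra's principle can be sketched compactly with a binary heap. The code below is a generic single-source implementation under an assumed adjacency-list representation; it does not reproduce the particular bookkeeping proposed in the paper.

```python
import heapq

def shortest_path_lengths(adj, source):
    """Lengths of shortest paths from source in a nonnegative-distance
    network, where adj[u] is a list of (v, distance) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                          # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest distances from node 'a' in a small network.
adj = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1), ('d', 4)], 'c': [('d', 1)]}
print(shortest_path_lengths(adj, 'a'))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```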

Journal ArticleDOI
TL;DR: The computation of a number of picture properties which involve connectivity and component counting is considered, and a noncomputable property is presented.
Abstract: The computation of a number of picture properties which involve connectivity and component counting is considered. The computational model consists of a one-dimensional array of finite-state automata which scans a digital picture, one row at a time, in one pass. The inherent complexity of a picture property is reflected in the memory requirements of each element of the corresponding scanner and the number of neighboring automata with which it must communicate. A noncomputable property is presented.

Journal ArticleDOI
TL;DR: The tradeoff between the structural complexity of programming languages, measured by the subrecursive class of functions which characterize the language, and computational complexity is examined using abstract language models.
Abstract: The structural complexity of programming languages, and therefore of programs as well, can be measured by the subrecursive class of functions which characterize the language. Using such a measure of structural complexity, we examine the tradeoff relationship between structural and computational complexity. Since measures of structural complexity directly related to high level languages interest us most, we use abstract language models which approximate highly structured programming languages.

Journal ArticleDOI
James C. Beatty1
TL;DR: An axiomatic approach is proposed as a means of specifying precisely what liberties are permitted in evaluating expressions, and two algorithms are given for finding optimal equivalent forms of an expression not having multiple references to any variable.
Abstract: An axiomatic approach is proposed as a means of specifying precisely what liberties are permitted in evaluating expressions. Specific axiom systems are introduced for arithmetic expressions, which permit free grouping of terms within parentheses, in the spirit of American National Standard Fortran. Using these axiom systems, two algorithms are given for finding optimal equivalent forms of an expression not having multiple references to any variable. The first algorithm is intended for highly parallel computers and is a slight generalization of that of Baer and Bovet. The concept of delay is introduced as a measure of the serial dependency of a computation and the algorithm is shown to minimize delay. This provides, as a special case, a proof of the level minimality claimed by Baer and Bovet. The second algorithm is shown to produce an equivalent expression which can be evaluated with a minimal number of instructions on a computer of the IBM System/360 type. It is an extension of a result of Sethi and Ullman, which relates only to commutative and associative operations. KEY WORDS AND PHRASES: arithmetic expression, associativity, code generation, compilers, commutativity, language processing, object-code optimization, parallel processors, parsing, programming languages, semantics, trees CR CATEGORIES: 4.12, 4.22, 5.19, 5.29, 5.39, 5.49
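The Sethi-Ullman result that the second algorithm extends rests on a simple tree labeling (Ershov numbers): a leaf needs one register, and an interior node needs the larger of its children's labels when they differ, or one more when they are equal. The sketch below illustrates that classical numbering as an assumed-representation example, not Beatty's extension.

```python
class Node:
    """Expression-tree node: a leaf (operand) or a binary operator."""
    def __init__(self, op=None, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def register_need(node):
    """Minimum number of registers needed to evaluate the subtree without
    storing intermediate results to memory (Ershov / Sethi-Ullman number)."""
    if node.left is None and node.right is None:
        return 1                                   # a leaf needs one register
    l = register_need(node.left)
    r = register_need(node.right)
    return max(l, r) if l != r else l + 1

# (a + b) * (c - d): each operand subtree needs 2 registers, the product needs 3.
leaf = Node
expr = Node('*', Node('+', leaf(), leaf()), Node('-', leaf(), leaf()))
print(register_need(expr))   # 3
```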

Journal ArticleDOI
TL;DR: The conclusion is that D-charts are in one technical sense more restrictive than general flowcharts, but not if one allows the introduction of additional variables which represent a history of control flow.
Abstract: This paper discusses the expression of algorithms by flowcharts, and in particular by flowcharts without explicit go-to's (D-charts). For this purpose we introduce a machine independent definition of algorithm which is broader than usual. Our conclusion is that D-charts are in one technical sense more restrictive than general flowcharts, but not if one allows the introduction of additional variables which represent a history of control flow. The term "algorithm" is used in many different ways. Sometimes we speak of an algorithm as a process in the abstract, without reference to a particular computer. It is in this sense, for example, that we speak of the "radix exchange sort algorithm," or the "simplex algorithm." Often we identify an algorithm with a particular sequence of instructions for a particular computer. In this paper we shall present a new definition of algorithm which emphasizes the sequence of commands associated with a particular "input." We then define the notion "expression" of algorithms by general flowcharts and flowcharts without explicit go-to's (D-charts). Some theorems are given which exhibit some of the relationships between algorithms, flowcharts, and D-charts.

Journal ArticleDOI
TL;DR: It is shown that randomness is a sufficient but not necessary condition for attainment of the 1 + k/(N − k + 1) performance for all k, and a necessary and sufficient condition is given.
Abstract: Hashing functions are considered which store data "randomly" in a fixed size hash table with no auxiliary storage. It is known that if k out of the N locations have been previously filled randomly, then the expected number of locations which must be looked at until an empty one is found is 1 + k/(N − k + 1). It is shown that there exist nonrandom hashing functions which are more efficient for certain values of k and N. However, it is shown that the 1 + k/(N − k + 1) figure is a "lower bound" on hashing performance in the following sense. If h is any hashing function such that the expected number of trials to insert the (k + 1)-st item into a table of size N is C(k, N), and if for some k, C(k, N) < 1 + k/(N − k + 1), then there exists k' < k such that C(k', N) > 1 + k'/(N − k' + 1). Finally, it is shown that randomness is a sufficient but not necessary condition for attainment of the 1 + k/(N − k + 1) performance for all k, and a necessary and sufficient condition is given.
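The 1 + k/(N − k + 1) figure is the expected waiting time for the first empty cell when probing distinct locations in random order: with N − k of the N cells empty, the expected number of probes is (N + 1)/(N − k + 1) = 1 + k/(N − k + 1). A small Monte Carlo check of that formula (illustrative code, not from the paper):

```python
import random

def expected_probes(k, N):
    """Expected probes to find an empty cell when k of N cells are full and
    distinct locations are probed in uniformly random order."""
    return 1 + k / (N - k + 1)

def simulate(k, N, trials=200_000):
    total = 0
    for _ in range(trials):
        cells = [True] * k + [False] * (N - k)   # True marks an occupied cell
        random.shuffle(cells)
        probes = 0
        for occupied in cells:                    # probe cells in random order
            probes += 1
            if not occupied:
                break
        total += probes
    return total / trials

print(expected_probes(7, 10), simulate(7, 10))   # 2.75 vs. roughly 2.75
```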

Journal ArticleDOI
TL;DR: It is proved that no recursive operator can increase every recursive bound enough to reach new computations, and the Operator Gap Theorem proved here is shown to be the strongest possible gap theorem for general recursive operators.
Abstract: This paper continues investigations pertaining to recursive bounds on computing resources (such as time or memory) and the amount by which these bounds must be increased if new computations are to occur within the new bound. The paper proves that no recursive operator can increase every recursive bound enough to reach new computations. In other words, given any general recursive operator F[·], there is an arbitrarily large recursive t(·) such that between bound t(·) and bound F[t](·) there is a gap in which no new computation runs. This demonstrates that the gap phenomenon first discovered by Borodin for composition is a deeply intrinsic property of computational complexity measures. Moreover, the Operator Gap Theorem proved here is shown to be the strongest possible gap theorem for general recursive operators. The proof involves a priority argument but is sufficiently self-contained that it can easily be read by a wide audience. The paper also discusses interesting connections between the Operator Gap Theorem and McCreight and Meyer's important result that every complexity class can be named by a function from a measured set.

Journal ArticleDOI
TL;DR: This paper supplies a smaller upper bound on the work required to find the maximum flow in a network and shows examples for which the upper bound is reached.
Abstract: Edmonds and Karp proposed an algorithm for finding the maximal flow in a network. Simultaneously they found an upper bound on the amount of work required to solve the max flow problem for any network as a function of the number of nodes and arcs. This paper supplies a smaller upper bound and shows examples for which the upper bound is reached.

Journal ArticleDOI
TL;DR: Two generalizations of the simple precedence grammars are considered: the weak precedence grammars generate exactly the simple precedence languages, while the "simple" mixed strategy precedence grammars generate exactly the deterministic context-free languages.
Abstract: Two generalizations of the simple precedence grammars are considered. The first, the weak precedence grammars, are shown to generate exactly the simple precedence languages. The second, the "simple" mixed strategy precedence grammars, are shown to generate exactly the deterministic context-free languages. Algorithms for their implementation are given, and the complexity of these algorithms studied. KEY WORDS AND PHRASES: bottom-up parsing, shift-reduce parsing, weak precedence grammars, simple precedence grammars, mixed strategy precedence grammars, bounded right context grammars, weak precedence languages, deterministic context-free languages CR CATEGORIES: 4.12, 5.23