
Showing papers on "Time complexity published in 1974"


Journal ArticleDOI
TL;DR: An optimized tree is defined and an algorithm to accomplish optimization in n log n time is presented; searching is guaranteed to be fast in optimized trees.
Abstract: The quad tree is a data structure appropriate for storing information to be retrieved on composite keys. We discuss the specific case of two-dimensional retrieval, although the structure is easily generalised to arbitrary dimensions. Algorithms are given both for straightforward insertion and for a type of balanced insertion into quad trees. Empirical analyses show that the average time for insertion is logarithmic with the tree size. An algorithm for retrieval within regions is presented along with data from empirical studies which imply that searching is reasonably efficient. We define an optimized tree and present an algorithm to accomplish optimization in n log n time. Searching is guaranteed to be fast in optimized trees. Remaining problems include those of deletion from quad trees and merging of quad trees, which seem to be inherently difficult operations.
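As a concrete picture of the structure, here is a minimal point-quadtree insertion sketch in Python. The node layout, quadrant numbering, and names are our own assumptions; the paper's balanced and optimized insertion algorithms are not reproduced.

```python
# Minimal point-quadtree sketch for 2-D keys.  Each node splits the plane
# at its own (x, y); children are indexed NW=0, NE=1, SW=2, SE=3.
class Node:
    def __init__(self, x, y, data=None):
        self.x, self.y, self.data = x, y, data
        self.child = [None] * 4

def quadrant(node, x, y):
    # North if y >= node.y, east if x >= node.x.
    return (0 if y >= node.y else 2) + (1 if x >= node.x else 0)

def insert(root, x, y, data=None):
    if root is None:
        return Node(x, y, data)
    cur = root
    while True:
        if (x, y) == (cur.x, cur.y):
            cur.data = data                  # same composite key: replace
            return root
        q = quadrant(cur, x, y)
        if cur.child[q] is None:
            cur.child[q] = Node(x, y, data)  # average depth ~ log(size)
            return root
        cur = cur.child[q]
```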

2,048 citations



Journal ArticleDOI
TL;DR: This paper points out that all of these heuristic algorithms fall into the class of minimum spanning tree (MST) problems, constrained by traffic or response time requirements, and most of the algorithms can be unified into a modified Kruskal's MST algorithm.
Abstract: The problem of designing minimum-cost multidrop lines which connect remote terminals to a concentrator or a central data-processing computer is studied. In some cases, optimal solutions can be obtained by using either linear integer programming or a branch-and-bound method. These approaches are not practical, since they lack flexibility and require an enormous amount of computer time for most practical problems. As a consequence, heuristic algorithms have been developed by various authors. In this paper, we point out that all of these algorithms fall into the class of minimum spanning tree (MST) problems, constrained by traffic or response time requirements. The difference between them is mainly the sequential order with which a branch or a line is selected into the tree. Without the constraints, all algorithms converge to an MST. With the constraints, they form different subtrees. Most of the algorithms can be unified into a modified Kruskal's MST algorithm. In the modified algorithm, a weight is associated with each terminal. Let w_i be the weight associated with terminal i, and d_{ij} be the cost for the line directed from terminal i to terminal j. When the algorithm fetches the cost for the line, it replaces it with d_{ij} - w_i. In some cases, the w_i's need to be readjusted in the middle of the algorithm. The difference between all existing heuristic algorithms is in the way the w_i's are defined. If w_i is zero for all i, the algorithm reduces to the unmodified Kruskal's algorithm; if w_i is set to zero whenever a line incident to terminal i is selected as a tree branch, the algorithm reduces to Prim's MST algorithm. An extension of the algorithm to the solution of an associated problem of partitioning the terminals with respect to a predetermined set of concentrators, multiplexers, terminal interface processors, or central computers is also derived. The efficiency of an algorithm depends greatly on how it is implemented. The computational complexity of the unified algorithm is on the order of N^2 log N for the most general case, where N is the number of terminals. By using good heuristics, it reduces to K_1 N log N + K_2 N, where K_1 and K_2 are constants, for many practical applications. The algorithm has been applied to large networks with over 1000 terminals, yielding excellent results and using only 15 seconds of computer time on a CDC 6600 computer. Designs obtained by using different w_i's are compared.
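The unified scheme above is easy to sketch. The following Python is a minimal illustration in which each fetched cost d_{ij} is replaced by d_{ij} - w_i before the usual greedy selection; the feasible() constraint check and all names are our own assumptions, and the mid-run readjustment of the w_i's mentioned in the abstract is omitted.

```python
# Minimal sketch of the unified (modified Kruskal) scheme: edge costs
# d[i][j] are replaced by d[i][j] - w[i] before greedy selection.
# `feasible` stands in for the traffic/response-time constraints and is
# an assumption of this sketch, not the paper's exact formulation.
def modified_kruskal(n, d, w, feasible):
    parent = list(range(n))           # union-find forest over terminals

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # Modified costs; with w[i] == 0 for all i this is plain Kruskal.
    edges = sorted((d[i][j] - w[i], i, j)
                   for i in range(n) for j in range(n) if i != j)
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj and feasible(tree, (i, j)):
            parent[ri] = rj
            tree.append((i, j))
    return tree
```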

68 citations


Proceedings ArticleDOI
30 Apr 1974
TL;DR: Further evidence is presented in support of the conjecture that SP cannot be recognized using storage (log n)^k for any k; a game on directed acyclic graphs (dags) is used to show that a fairly general machine recognizing SP requires O(n^{1/4}) storage.
Abstract: A striking example of practical tradeoffs between storage space and execution time is provided by the IBM 1401 Fortran compiler. On another level, there is an interesting relation between the time and storage required to recognize context-free languages. The recognition algorithm in [Y] requires time no more than O(n^3) but at least linear storage, whereas the algorithm in [L] requires recognition space no more than O((log n)^2) but more than polynomial time. An intriguing question is whether (log n)^2 space is enough to recognize all languages recognizable in deterministic polynomial time. The above question has been narrowed down in [C] to the storage required to recognize a particular language called SP. This paper presents further evidence in support of the conjecture that SP cannot be recognized using storage (log n)^k for any k. In Section 2 we consider a game on directed acyclic graphs (dags) and show that at least O(n^{1/4}) markers are needed to play the game on some n-node dags. The O(n^{1/4}) bound is used in Section 3 to show that a fairly general machine to recognize SP also requires O(n^{1/4}) storage.

61 citations


Proceedings ArticleDOI
Kurt Mehlhorn
30 Apr 1974
TL;DR: Polynomial time computable operators are defined, generalizing Cook's definition to arbitrary function inputs; polynomial classes are defined in terms of these operators and their properties are investigated.
Abstract: We define polynomial time computable operators. Our definition generalizes Cook's definition to arbitrary function inputs. Polynomial classes are defined in terms of these operators; the properties of these classes are investigated. Honest polynomial classes are generated by running times. They possess a modified Ritchie-Cobham property. A polynomial class is a complexity class iff it is honest. Starting from the observation that many results about subrecursive classes hold for all reducibility relations studied so far (e.g. primitive recursive in, elementary recursive in), we define abstract subrecursive reducibility relations. Many results hold for all abstract subrecursive reducibilities.

61 citations


Journal ArticleDOI
TL;DR: This work considers nondeterministic multitape acceptors which are both reversal-bounded and also operate in linear time and shows that such an acceptor need have only three pushdown stores as auxiliary storage, each pushdown store need make only one reversal, and the acceptor can operate in real time.
Abstract: A Turing machine whose behavior is restricted so that each read-write head can change its direction only a bounded number of times is reversal-bounded. Here we consider nondeterministic multitape acceptors which are both reversal-bounded and also operate in linear time. Our main result shows that such an acceptor need have only three pushdown stores as auxiliary storage, each pushdown store need make only one reversal, and the acceptor can operate in real time.

47 citations


01 Jan 1974
TL;DR: A lower bound of cN log N is proved for the mean time complexity of an on-line multitape Turing machine multiplying N-digit binary integers; this compares favorably with known upper bounds of the form cN(log N)^k, and for some classes the upper and lower bounds coincide.

Abstract: A lower bound of cN log N is proved for the mean time complexity of an on-line multitape Turing machine performing the multiplication of N-digit binary integers. For a more general class of machines, which includes some models of random-access machines, the corresponding bound is cN log N / log log N. These bounds compare favorably with known upper bounds of the form cN(log N)^k, and for some classes the upper and lower bounds coincide. The proofs are based on the 'overlap' argument due to Cook and Aanderaa.

43 citations


Proceedings ArticleDOI
14 Oct 1974
TL;DR: The class of P-complete problems is studied and it is shown that for any constant ε > 0 there is a P-complete problem for which an ε-approximate solution can be found in linear time.
Abstract: We study the class of P-complete problems and show the following: (i) for any constant ε > 0 there is a P-complete problem for which an ε-approximate solution can be found in linear time; (ii) there exist P-complete problems for which linear-time approximate solutions that get closer and closer to the optimal (with increasing problem size) can be found; (iii) there exist P-complete problems for which the approximation problems are also P-complete.

43 citations


Proceedings ArticleDOI
14 Oct 1974
TL;DR: This paper develops techniques for proving that functions of n inputs and O(n) outputs have nonlinear combinational complexity when only OR and AND operations are allowed, demonstrating in particular that binary sorting requires O(n log n) operations.
Abstract: An important open question in the field of computational complexity is the development of nontrivial lower bounds on the number of logical operations required to compute switching functions. Although counting arguments can be used to show that most Boolean functions of n inputs and O(n) or fewer outputs have complexity growing exponentially in n, no one has yet exhibited a particular such function whose unlimited fan-out combinational complexity is known to grow faster than linearly in n when a functionally complete set of primitive operations is allowed. In this paper, we consider the class of monotone increasing Boolean functions. These correspond to the functions which can be computed using only two-input OR and AND operations, an incomplete set of primitives. We develop techniques for proving that functions of n inputs and O(n) outputs have nonlinear combinational complexity if only OR and AND operations are allowed. We do this by demonstrating that binary sorting requires O(n log n) operations, and by exhibiting a set of n Boolean sums over n variables which requires O(n^{3/2}) operations.
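The monotone view of sorting can be made concrete: on 0/1 inputs a comparator is exactly an (AND, OR) pair, since min(a, b) = a AND b and max(a, b) = a OR b. The Python sketch below (our illustration, not the paper's construction) sorts bits with a naive bubble network of O(n^2) such gates; the abstract's result is that roughly n log n gates are unavoidable.

```python
# Binary sorting with {AND, OR} only: each comparator outputs
# (a | b, a & b), i.e. (max, min) on bits.  This naive bubble network
# uses O(n^2) comparators; the paper shows ~n log n are necessary.
def sort_bits(bits):
    b = list(bits)
    n = len(b)
    for _ in range(n):
        for j in range(n - 1):
            b[j], b[j + 1] = b[j] | b[j + 1], b[j] & b[j + 1]  # one comparator
    return b

assert sort_bits([0, 1, 1, 0, 1]) == [1, 1, 1, 0, 0]  # descending 0/1 order
```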

40 citations


Journal ArticleDOI
TL;DR: Some open problems in the theory of cellular automata are considered: the tradeoff between machine complexity and interconnection complexity, linear time pattern recognition and transformation problems, and the noncomputability of the constant of linearity of linear time problems.
Abstract: Some open problems in the theory of cellular automata are considered: the tradeoff between machine complexity and interconnection complexity, linear time pattern recognition and transformation problems, and the noncomputability of the constant of linearity of linear time problems.

40 citations


Proceedings ArticleDOI
30 Apr 1974
TL;DR: All nontrivial predicates for certain specific classes of languages are shown to be hard, and it is shown that a dpda can always be converted in polynomial time into an equivalent dpda that always halts.
Abstract: This paper presents a complexity theory of formal languages. The main technique used is that of embedding “={0,1}*”, “=0*”, and “=φ” into other linguistic predicates. In Section 2, the undecidability of “={0,1}*” for cfl's is exploited to provide sufficient conditions for the undecidability of predicates on the cfl's. In Section 3, the same techniques are applied to regular sets. Predicates satisfying conditions similar to those of Section 2 are shown to be hard, where how hard depends on the descriptors used to enumerate the regular sets. Section 4 concentrates on the equivalence and containment problems for cfl's. For cfl's, regular sets, and linear cfl's, the complexity of determining equivalence to a fixed language is linked to whether the fixed language is finite, infinite but bounded, or unbounded. In Section 5, the ability of cfg's to generate finite languages whose strings are exponential in the size of the grammar is used to obtain exponential lower bounds on several decidable problems for cfg's generating finite sets. In Section 6, all nontrivial predicates for certain specific classes of languages are shown to be hard. In Section 7, we show that a dpda can always be converted in polynomial time into an equivalent dpda that always halts. Therefore the predicate “={0,1}*” is in P for dpda's, and embedding this problem into other predicates on the dpda's will not yield nonpolynomial lower bounds. In Section 8, some of the preceding results are generalized to other families of languages.

Journal ArticleDOI
TL;DR: Two new algorithms are presented for list structure copying using bounded workspace, showing that without cell tag bits the task can be performed in time n^2, and demonstrating that marking can be done in average time n log n without the aid of supplemental tag bits or stacks.
Abstract: Two new algorithms are presented for list structure copying using bounded workspace. The first, of primarily theoretical interest, shows that without cell tag bits the task can be performed in time n^2. The second algorithm, assuming one tag bit in each cell, delivers attractive practical speed. Any noncyclic structure is copied in linear time, while cyclic structures are copied in average time less than n log n. No foreknowledge of cycle absence is necessary to achieve linear time. A variation of the second algorithm solves an open problem concerning list structure marking. That result demonstrates that marking can be done in average time n log n without the aid of supplemental tag bits or stacks.

Proceedings ArticleDOI
30 Apr 1974
TL;DR: A hierarchy similar in form to Kleene's arithmetic hierarchy may be shown to correspond to the Ritchie functions.
Abstract: The complexity of decision procedures for the Weak Monadic Second-Order Theories of the Natural Numbers is considered. If only successor is allowed as a primitive, then every alternation of second-order quantifiers causes an exponential increase in the complexity of deciding the validity of a formula. Thus a hierarchy similar in form to Kleene's arithmetic hierarchy may be shown to correspond to the Ritchie functions. On the other hand, if first-order less-than is allowed as a primitive, one existential quantifier suffices for arbitrarily complex (in the Ritchie hierarchy) decision problems. This leads to a normal form, in which every sentence in the theory is equivalent in polynomial time to a sentence with less-than but only one existential second-order quantifier.

Book ChapterDOI
29 Jul 1974
TL;DR: In this article, the time complexity of the recognition problem for some “moderate” extensions of context-free grammars or pushdown automata is studied.
Abstract: In this article we study the time complexity of the recognition problem for some “moderate” extensions of context-free grammars or pushdown automata. It is well known that, for a given context-free grammar G, the recognition problem “x ∈ L(G)” can be decided in O(|x|^3) steps by a suitable algorithm. How do extensions behave in this respect? In particular, do they admit recognition algorithms whose time is polynomially bounded by the length of the input?
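For context, the O(|x|^3) recognition bound mentioned here is achieved by the classical CYK dynamic program. A minimal sketch, assuming the grammar is already in Chomsky normal form; the dict-based encoding and names are our own.

```python
# CYK recognition in O(|x|^3): unary maps a terminal to the heads of
# rules A -> a; binary maps a pair (B, C) to the heads of rules A -> B C.
def cyk(x, unary, binary, start="S"):
    n = len(x)
    if n == 0:
        return False
    T = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(x):
        T[i][i] = set(unary.get(a, ()))
    for span in range(2, n + 1):          # O(n) span lengths
        for i in range(n - span + 1):     # O(n) start positions
            j = i + span - 1
            for k in range(i, j):         # O(n) split points
                for B in T[i][k]:
                    for C in T[k + 1][j]:
                        T[i][j] |= binary.get((B, C), set())
    return start in T[0][n - 1]

# S -> A B, A -> 'a', B -> 'b' accepts "ab".
assert cyk("ab", {"a": {"A"}, "b": {"B"}}, {("A", "B"): {"S"}})
```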

Journal ArticleDOI
TL;DR: The algorithm for discretization in the time dimension is described; using the example of a parabolic time element, the coefficients required to form the global system are given, and the efficiency of the process is examined by comparison with the customary difference method.
Abstract: A program is demonstrated which apart from linear finite elements in time also includes elements with shape functions of the second and third degree. The algorithm for discretization in the time dimension is described and, using the example of a parabolic time element, the coefficients required to form the global system are given. By various test examples the efficiency of the process is examined by comparison with the customary difference method. Generally, with finite elements in time, the solution has better stability. Comparing the time required for calculation with the accuracy of the solution it would appear that in examining problems where boundary conditions are constant in time, higher order time elements are no improvement over the linear time element. However, for the purpose of reproducing periodic processes, higher order time elements offer an advantage in that one is not limited to linear variations of the boundary conditions within the element. Thus, for example, the temperature curve for parabolic variation of the surface temperature can be reproduced with close approximation by two time elements per period and a shape function of the third degree.

Book ChapterDOI
29 Jul 1974
TL;DR: The classes L_k of all functions which are computable in time c·n^k for a fixed k ∈ ℕ depend very much on the machine model taken as a basis for the complexity measure.
Abstract: Complexity measures usually are based on a machine model (1-tape Turing machine, multi-tape Turing machine, bounded activity machine, random access machine). The class of all functions which are computable in polynomial time is the same for a wide variety of machines (for characterizations of this class see A. Cobham [2], D.B. Thompson [8], and S.A. Cook [3]). On the other hand, the classes L_k of all functions which are computable in time c·n^k for a fixed k ∈ ℕ depend very much on the machine model taken as a basis for the complexity measure.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of testing a given open-loop system for time dependence, based on the concept of the "evolutionary cross spectra," and show that the mechanics of the tests are formally equivalent to a two-factor multivariate analysis of variance (MANOVA) procedure.
Abstract: In studies of linear open-loop systems, the assumption of time invariance is often tacitly made. In this paper we show how a time dependent system can arise quite naturally. The estimation of a time dependent transfer function, on the basis of a single realization, when the input/output processes are nonstationary, is considered. We also consider the problem of testing a given open-loop system for time dependence. The tests described here make use of the concept of the "evolutionary cross spectra," and rest essentially on testing the "uniformity" of a set of vectors whose components consist of the "evolutionary gain spectra" and "evolutionary phase spectra." Using a logarithmic transformation on the evolutionary gain spectra, we show that the mechanics of the tests are formally equivalent to a two-factor multivariate analysis of variance (MANOVA) procedure. Numerical illustrations, from real and simulated data, of the proposed tests are included.

Proceedings ArticleDOI
30 Apr 1974
TL;DR: The concept of a contiguent is defined and it is shown that a contiguent forming algorithm may be used as the basis for a stable sort; a stable merging algorithm is also described which requires O(N) time and O(log N) bits of extra space.
Abstract: An earlier work describes the open problem of stable sorting with no more than O(log^2 N) bits of extra space and less than O(N^2) computation time. In this paper we define the concept of a contiguent and show that a contiguent forming algorithm may be used as the basis for a stable sort. A class of such contiguent forming algorithms is described, the most naive of which requires O(log N) bits of extra space and O(N log^2 N) computation time. We also describe a stable merging algorithm which requires O(N) time and O(log N) bits of extra space, but which is not applicable to all cases. It is shown, however, that this merge may be combined with a contiguent compilation algorithm to yield a generally applicable stable sorting algorithm. One such combination provides the basis for a stable sorting algorithm which requires O(N log N · G(N)) time and O(log N) bits of extra space. Another such combination provides the basis for a stable sorting algorithm which requires O(N log N) time and O(log^2 N) bits of extra space.
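To make the stability requirement concrete, here is a textbook stable merge sort in Python; unlike the paper's technique it uses O(N) extra cells rather than O(log N) bits, and it illustrates only what stability means, not the contiguent-based method.

```python
# Stable merge: among equal keys, elements of the left run come first,
# so the original order of equal elements is preserved.
def stable_merge(left, right, key=lambda v: v):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):   # '<=' is what buys stability
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def stable_sort(a, key=lambda v: v):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return stable_merge(stable_sort(a[:mid], key), stable_sort(a[mid:], key), key)
```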


Journal ArticleDOI
TL;DR: At the Eighth International Symposium on Mathematical Programming (August 1973 at Stanford), Hugo Scolnik suggested a line of reasoning leading to an algorithm which he thought might solve the line.
Abstract: At the Eighth International Symposium on Mathematical Programming (August 1973 at Stanford), Hugo Scolnik suggested a line of reasoning leading to an algorithm which he thought might solve the line...