
Showing papers on "Time complexity published in 1995"


Proceedings ArticleDOI
Gabriel Taubin1
15 Sep 1995
TL;DR: A very simple surface signal low-pass filter algorithm that applies to surfaces of arbitrary topology, runs in linear time and space, and yields a very effective fair surface design technique.
Abstract: In this paper we describe a new tool for interactive free-form fair surface design. By generalizing classical discrete Fourier analysis to two-dimensional discrete surface signals – functions defined on polyhedral surfaces of arbitrary topology – we reduce the problem of surface smoothing, or fairing, to low-pass filtering. We describe a very simple surface signal low-pass filter algorithm that applies to surfaces of arbitrary topology. As opposed to other existing optimization-based fairing methods, which are computationally more expensive, this is a linear time and space complexity algorithm. With this algorithm, fairing very large surfaces, such as those obtained from volumetric medical data, becomes affordable. By combining this algorithm with surface subdivision methods we obtain a very effective fair surface design technique. We then extend the analysis, and modify the algorithm accordingly, to accommodate different types of constraints. Some constraints can be imposed without any modification of the algorithm, while others require the solution of a small associated linear system of equations. In particular, vertex location constraints, vertex normal constraints, and surface normal discontinuities across curves embedded in the surface, can be imposed with this technique.
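The low-pass filtering idea above is easy to prototype. Below is a minimal sketch of the two-step Laplacian ("λ|μ") smoothing commonly associated with this approach, assuming the surface is given as an array of vertex positions and a vertex adjacency list; the function names and the λ, μ values are illustrative, not the paper's implementation.

```python
import numpy as np

def laplacian_step(verts, neighbors, factor):
    """One umbrella-operator step: move each vertex by `factor` times the
    vector from it to the average of its neighbors."""
    out = verts.copy()
    for i, nbrs in enumerate(neighbors):
        if nbrs:
            avg = verts[nbrs].mean(axis=0)
            out[i] = verts[i] + factor * (avg - verts[i])
    return out

def taubin_smooth(verts, neighbors, lam=0.5, mu=-0.53, iterations=10):
    """Alternate a shrink step (lam > 0) and an inflate step (mu < 0),
    acting as a low-pass filter without the shrinkage of plain Laplacian
    smoothing.  Each pass is linear in the number of vertices and edges."""
    v = np.asarray(verts, dtype=float)
    for _ in range(iterations):
        v = laplacian_step(v, neighbors, lam)
        v = laplacian_step(v, neighbors, mu)
    return v

# Tiny example: a noisy hexagon fan around a center vertex.
verts = [[0, 0, 0.3], [1, 0, -0.2], [0.5, 0.9, 0.1], [-0.5, 0.9, -0.1],
         [-1, 0, 0.2], [-0.5, -0.9, -0.3], [0.5, -0.9, 0.0]]
neighbors = [[1, 2, 3, 4, 5, 6], [0, 2, 6], [0, 1, 3], [0, 2, 4],
             [0, 3, 5], [0, 4, 6], [0, 5, 1]]
print(taubin_smooth(verts, neighbors))
```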

2,004 citations


Proceedings ArticleDOI
25 Jan 1995
TL;DR: The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem.
Abstract: The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
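As a toy illustration of the "dataflow as graph reachability" idea, the sketch below solves an intraprocedural gen/kill problem (reaching definitions) by a single worklist pass over (node, fact) pairs; the full framework in the paper handles procedures via an exploded supergraph, which this simplification omits. The example CFG and all names are hypothetical.

```python
from collections import deque

# A toy control-flow graph: each node has gen/kill sets of definition facts.
cfg_succ = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}
gen  = {0: {"d1"}, 1: set(), 2: {"d2"}, 3: {"d3"}, 4: set()}
kill = {0: set(), 1: set(), 2: {"d1"}, 3: set(), 4: set()}

def reaching_definitions(cfg_succ, gen, kill):
    """Solve the distributive gen/kill problem as plain reachability over
    (node, fact) pairs, using one BFS-style worklist pass."""
    reach = {n: set(gen[n]) for n in cfg_succ}     # facts live at node exits
    work = deque((n, f) for n in cfg_succ for f in gen[n])
    while work:
        n, f = work.popleft()
        for m in cfg_succ[n]:
            if f not in kill[m] and f not in reach[m]:
                reach[m].add(f)                    # fact f survives into m
                work.append((m, f))
    return reach

print(reaching_definitions(cfg_succ, gen, kill))
```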

1,154 citations


Journal ArticleDOI
TL;DR: Two serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated to a trajectory optimization problem of the following type are presented.
Abstract: We present serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated to a trajectory optimization problem of the following type. A vehicle starts at a prespecified point $x_0$ and follows a unit speed trajectory $x(t)$ inside a region in $\mathbb{R}^m$ until an unspecified time $T$ at which the region is exited. A trajectory minimizing a cost function of the form $\int_0^T r(x(t))\,dt + q(x(T))$ is sought. The discretized Hamilton-Jacobi equation corresponding to this problem is usually solved using iterative methods. Nevertheless, assuming that the function $r$ is positive, we are able to exploit the problem structure and develop one-pass algorithms for the discretized problem. The first algorithm resembles Dijkstra's shortest path algorithm and runs in time $O(n \log n)$, where $n$ is the number of grid points. The second algorithm uses a somewhat different discretization and borrows some ideas from a variation of Dial's shortest path algorithm (1969) that we develop here; it runs in time $O(n)$, which is the best possible, under some fairly mild assumptions. Finally, we show that the latter algorithm can be efficiently parallelized: for two-dimensional problems and with $p$ processors, its running time becomes $O(n/p)$, provided that $p = O(\sqrt{n}/\log n)$.
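The Dijkstra-like one-pass idea can be sketched on a uniform grid with a positive running cost r: treat each cell as a node, expand cells in order of increasing value, and never revisit a settled cell. This is a simplified grid surrogate with 4-neighbor moves, not the paper's exact discretization; all names are illustrative.

```python
import heapq

def one_pass_value(r, exits, h=1.0):
    """Dijkstra-like single-pass solver: r[i][j] > 0 is the running cost,
    `exits` maps boundary cells to their exit cost q.  Returns the value
    function on the grid; O(n log n) with a binary heap, n = #cells."""
    rows, cols = len(r), len(r[0])
    INF = float("inf")
    value = [[INF] * cols for _ in range(rows)]
    pq = []
    for (i, j), q in exits.items():
        value[i][j] = q
        heapq.heappush(pq, (q, i, j))
    while pq:
        v, i, j = heapq.heappop(pq)
        if v > value[i][j]:
            continue                       # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < rows and 0 <= b < cols:
                cand = v + h * r[a][b]     # cost of stepping into (a, b)
                if cand < value[a][b]:
                    value[a][b] = cand
                    heapq.heappush(pq, (cand, a, b))
    return value

r = [[1, 1, 4], [1, 2, 4], [1, 1, 1]]
print(one_pass_value(r, exits={(0, 0): 0.0}))
```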

816 citations


Journal ArticleDOI
Peter W. Shor1
TL;DR: In this paper, the author considers factoring integers and finding discrete logarithms on a quantum computer and gives efficient randomized algorithms for both problems, which take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
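The quantum computer is needed only for the order-finding subroutine; the classical post-processing that turns an order-finding routine into factors can be sketched as follows, with a brute-force classical order finder standing in for the quantum step (so this toy version is practical only for tiny numbers).

```python
import math
import random

def order(a, n):
    """Brute-force multiplicative order of a mod n -- the step a quantum
    computer performs efficiently in Shor's algorithm."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, tries=50):
    """Classical post-processing: pick a random a coprime to n, find the
    order r of a mod n, and try gcd(a^(r/2) - 1, n) as a factor."""
    if n % 2 == 0:
        return 2
    for _ in range(tries):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                       # lucky: a already shares a factor
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:
                g = math.gcd(y - 1, n)
                if 1 < g < n:
                    return g
    return None

print(shor_factor(15), shor_factor(21), shor_factor(91))
```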

723 citations


Journal ArticleDOI
TL;DR: This work determines the complexity of testing whether a finite state, sequential or concurrent probabilistic program satisfies its specification expressed in linear-time temporal logic and addresses questions for specifications described by ω-automata or formulas in extended temporal logic.
Abstract: We determine the complexity of testing whether a finite state, sequential or concurrent probabilistic program satisfies its specification expressed in linear-time temporal logic. For sequential programs, we present an algorithm that runs in time linear in the program and exponential in the specification, and also show that the problem is in PSPACE, matching the known lower bound. For concurrent programs, we show that the problem can be solved in time polynomial in the program and doubly exponential in the specification, and prove that it is complete for double exponential time. We also address these questions for specifications described by ω-automata or formulas in extended temporal logic.

664 citations


Journal ArticleDOI
TL;DR: This work defines the W hierarchy of parameterized problems, shows that INDEPENDENT SET is complete for W[1], and identifies complete problems for the classes W[t] for t ⩾ 2.

659 citations


Journal ArticleDOI
TL;DR: Two linear time algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit pixels in the image.
Abstract: Two linear time (and hence asymptotically optimal) algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented. The algorithms are based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit (feature) pixels in the image. The first algorithm, which is of primarily theoretical interest, constructs the complete Voronoi diagram. The second, more practical, algorithm constructs the Voronoi diagram where it intersects the horizontal lines passing through the image pixel centers. Extensions to higher dimensional images and to other distance functions are also discussed.

457 citations


Journal ArticleDOI
TL;DR: Two structural theorems are proven: one characterizes the problems that can be checked, and the other establishes equivalence classes of problems such that whenever one problem in a class is checkable, all problems in the class are checkable.
Abstract: A program correctness checker is an algorithm for checking the output of a computation. That is, given a program and an instance on which the program is run, the checker certifies whether the output of the program on that instance is correct. This paper defines the concept of a program checker. It designs program checkers for a few specific and carefully chosen problems in the class FP of functions computable in polynomial time. Problems in FP for which checkers are presented in this paper include Sorting, Matrix Rank and GCD. It also applies methods of modern cryptography, especially the idea of a probabilistic interactive proof, to the design of program checkers for group theoretic computations. Two structural theorems are proven here. One is a characterization of problems that can be checked. The other theorem establishes equivalence classes of problems such that whenever one problem in a class is checkable, all problems in the class are checkable.
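A checker for Sorting, one of the problems mentioned above, is particularly simple: it certifies a single output rather than proving the program correct, by verifying that the output is ordered and is a permutation of the input. A minimal sketch with illustrative names:

```python
from collections import Counter

def check_sort(sort_program, xs):
    """Program checker for Sorting: run the (untrusted) program on xs and
    certify this particular output -- it must be ordered and must be a
    permutation of the input."""
    ys = sort_program(list(xs))
    ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    permutation = Counter(ys) == Counter(xs)
    return ordered and permutation

buggy_sort = lambda xs: sorted(set(xs))        # drops duplicates
print(check_sort(sorted, [3, 1, 2, 1]))        # True
print(check_sort(buggy_sort, [3, 1, 2, 1]))    # False: not a permutation
```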

378 citations


Book ChapterDOI
19 May 1995
TL;DR: The purpose of this article is to introduce Monadic Second-order Logic as a practical means of specifying regularity; the authors have built a tool, MONA, which acts as a decision procedure and as a translator to finite-state automata.
Abstract: The purpose of this article is to introduce Monadic Second-order Logic as a practical means of specifying regularity. The logic is a highly succinct alternative to the use of regular expressions. We have built a tool MONA, which acts as a decision procedure and as a translator to finite-state automata. The tool is based on new algorithms for minimizing finite-state automata that use binary decision diagrams (BDDs) to represent transition functions in compressed form. A byproduct of this work is an algorithm that matches the time but improves the space of Sieling and Wegener's algorithm to reduce OBDDs in linear time.

377 citations


Proceedings ArticleDOI
23 Oct 1995
TL;DR: A duality theorem is proved which expresses the genomic distance in terms of easily computable parameters reflecting different combinatorial properties of sets of strings and leads to a polynomial time algorithm for computing most parsimonious rearrangement scenarios for human-mouse evolution.
Abstract: Many people believe that transformations of humans into mice happen only in fairy tales. However, despite some differences in appearance and habits, men and mice are genetically very similar. In a pioneering paper, J.H. Nadeau and B.A. Taylor (1984) estimated that surprisingly few genomic rearrangements (178±39) happened since the divergence of human and mouse 80 million years ago. However, their analysis is nonconstructive and no rearrangement scenario for human-mouse evolution has been suggested yet. The problem is complicated by the fact that rearrangements in multichromosomal genomes include inversions, translocations, fusions and fissions of chromosomes, a rather complex set of operations. As a result, at first glance, a polynomial algorithm for the genomic distance problem with all these operations looks almost as improbable as the transformation of a (real) man into a (real) mouse. We prove a duality theorem which expresses the genomic distance in terms of easily computable parameters reflecting different combinatorial properties of sets of strings. This theorem leads to a polynomial time algorithm for computing most parsimonious rearrangement scenarios. Based on this result and the latest comparative physical mapping data we have constructed a scenario of human-mouse evolution with 131 reversals/translocations/fusions/fissions. A combination of the genome rearrangement algorithm with the recently proposed experimental technique called ZOO-FISH suggests a new constructive approach to the 100 year old problem of reconstructing mammalian evolution.

349 citations


Proceedings ArticleDOI
21 Jun 1995
TL;DR: The main result of this paper shows that the problem of checking the solvability of BMIs is NP-hard, and hence it is rather unlikely to find a polynomial time algorithm for solving general BMI problems.
Abstract: In this paper, it is shown that the problem of checking the solvability of a bilinear matrix inequality (BMI) is NP-hard. A matrix valued function, F(X,Y), is called bilinear if it is linear with respect to each of its arguments, and an inequality of the form F(X,Y)>0 is called a bilinear matrix inequality. Recently, it was shown that the static output feedback problem, the fixed order controller problem, the reduced order H∞ controller design problem, and several other control problems can be formulated as BMIs. The main result of this paper shows that the problem of checking the solvability of BMIs is NP-hard, and hence it is rather unlikely to find a polynomial time algorithm for solving general BMI problems. As an independent result, it is also shown that simultaneous stabilization with static output feedback is an NP-hard problem; namely, for n given plants, the problem of checking the existence of a static gain matrix which stabilizes all of the n plants is NP-hard.

Journal ArticleDOI
TL;DR: An algorithm for computing the spatial similarity between two symbolic images that is robust in the sense that it can deal with translation, scale, and rotational variances in images, and has quadratic time complexity in terms of the total number of objects in both the database and query images.
Abstract: Similarity-based retrieval of images is an important task in many image database applications. A major class of users' requests requires retrieving those images in the database that are spatially similar to the query image. We propose an algorithm for computing the spatial similarity between two symbolic images. A symbolic image is a logical representation of the original image where the image objects are uniquely labeled with symbolic names. Spatial relationships in a symbolic image are represented as edges in a weighted graph referred to as spatial-orientation graph. Spatial similarity is then quantified in terms of the number of, as well as the extent to which, the edges of the spatial-orientation graph of the database image conform to the corresponding edges of the spatial-orientation graph of the query image.The proposed algorithm is robust in the sense that it can deal with translation, scale, and rotational variances in images. The algorithm has quadratic time complexity in terms of the total number of objects in both the database and query images. We also introduce the idea of quantifying a system's retrieval quality by having an expert specify the expected rank ordering with respect to each query for a set of test queries. This enables us to assess the quality of algorithms comprehensively for retrieval in image databases. The characteristics of the proposed algorithm are compared with those of the previously available algorithms using a testbed of images. The comparison demonstrated that our algorithm is not only more efficient but also provides a rank ordering of images that consistently matches with the expert's expected rank ordering.
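A rough sketch of the spatial-orientation-graph idea: edges of the complete graph over labeled object centroids record the direction between the two objects, and similarity measures how well corresponding edge directions agree. The version below is translation- and scale-invariant but does not perform the global rotation alignment described in the paper; all names and the scoring rule are hypothetical simplifications.

```python
import math
from itertools import combinations

def orientation_edges(objects):
    """objects: {label: (x, y)} -> {(label_a, label_b): edge angle in radians},
    with the label pair sorted so both images use the same edge direction."""
    edges = {}
    for a, b in combinations(sorted(objects), 2):
        pa, pb = objects[a], objects[b]
        edges[(a, b)] = math.atan2(pb[1] - pa[1], pb[0] - pa[0])
    return edges

def spatial_similarity(db_image, query_image):
    """Score in [0, 1]: average agreement (cosine of angle difference,
    clipped at 0) over edges whose endpoint labels occur in both images.
    Quadratic in the number of common objects."""
    e1, e2 = orientation_edges(db_image), orientation_edges(query_image)
    common = e1.keys() & e2.keys()
    if not common:
        return 0.0
    score = sum(max(0.0, math.cos(e1[k] - e2[k])) for k in common)
    return score / len(common)

query   = {"house": (0, 0), "tree": (4, 0), "car": (2, 3)}
shifted = {"house": (10, 5), "tree": (18, 5), "car": (14, 11)}  # translated, scaled 2x
print(spatial_similarity(query, query), spatial_similarity(query, shifted))
```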

01 Sep 1995
TL;DR: This dissertation examines complete search algorithms for SAT, the satisfiability problem for propositional formulas in conjunctive normal form, and argues that any efficient SAT search algorithm should perform only a few key technique at each node of the search tree.
Abstract: In this dissertation, we examine complete search algorithms for SAT, the satisfiability problem for propositional formulas in conjunctive normal form. SAT is NP-complete, easy to think about, and one of the most important computational problems in the field of Artificial Intelligence. From an empirical perspective, the central problem associated with these algorithms is to implement one that runs as quickly as possible on a wide range of hard SAT problems. This in turn requires identifying a set of useful techniques and programming guidelines. Another important problem is to identify the techniques that do not work well in practice, and provide qualitative reasons for their poor performance whenever possible. This dissertation addresses all four of these problems. Our thesis is that any efficient SAT search algorithm should perform only a few key techniques at each node of the search tree. Furthermore, any implementation of such an algorithm should perform these techniques in quadratic time total down any path of the search tree, and use only a linear amount of space. We have justified these claims by writing POSIT (for PrOpositional SatIsfiability Testbed), a SAT tester which runs more quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3_U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^(n/18.7)). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research.
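The core of such complete search algorithms is DPLL-style backtracking with unit propagation. The sketch below is a compact illustration of that core, not POSIT itself; it omits the heuristics and data structures that make POSIT fast.

```python
def dpll(clauses, assignment=None):
    """Complete SAT search with unit propagation.  Clauses are lists of
    nonzero ints (DIMACS style: -3 means 'not x3').  Returns a satisfying
    set of true literals, or None if unsatisfiable."""
    assignment = set() if assignment is None else set(assignment)
    clauses = [list(c) for c in clauses]
    # Unit propagation: repeatedly satisfy clauses that have a single literal.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment.add(unit)
        new = []
        for c in clauses:
            if unit in c:
                continue                       # clause satisfied
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None                    # conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment
    # Branch on the first literal of the first clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], assignment)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```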

Proceedings ArticleDOI
29 May 1995
TL;DR: A unified framework for designing polynomial time approximation schemes (PTASs) for “dense” instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability is presented.
Abstract: We present a unified framework for designing polynomial time approximation schemes (PTASs) for “dense” instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By dense graphs we mean graphs with minimum degree Ω(n), although our algorithms solve most of these problems so long as the average degree is Ω(n). Denseness for non-graph problems is defined similarly. The unified framework begins with the idea of exhaustive sampling: picking a small random set of vertices, guessing where they go on the optimum solution, and then using their placement to determine the placement of everything else. The approach then develops into a PTAS for approximating certain smooth integer programs where the objective function and the constraints are “dense” polynomials of constant degree.
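The exhaustive-sampling idea can be sketched for MAX CUT on a dense graph: pick a small random sample S, try every placement of S, and place each remaining vertex greedily according to its edges into the placed vertices. The parameters and names below are illustrative, and no approximation guarantee is claimed for this simplified version.

```python
import random
from itertools import product

def dense_maxcut(adj, sample_size=8, seed=0):
    """Exhaustive-sampling heuristic for MAX CUT on dense graphs:
    guess the side of a small random sample S (all 2^|S| ways), then put
    each remaining vertex on the side that cuts more of its edges into
    already-placed vertices.  adj: dict vertex -> set of neighbours."""
    rng = random.Random(seed)
    vertices = list(adj)
    S = rng.sample(vertices, min(sample_size, len(vertices)))
    rest = [v for v in vertices if v not in S]
    best_cut, best_side = -1, None
    for bits in product((0, 1), repeat=len(S)):
        side = dict(zip(S, bits))
        for v in rest:
            to0 = sum(1 for u in adj[v] if side.get(u) == 0)
            to1 = sum(1 for u in adj[v] if side.get(u) == 1)
            side[v] = 0 if to1 >= to0 else 1   # cut the larger share
        cut = sum(1 for v in adj for u in adj[v]
                  if v < u and side[v] != side[u])
        if cut > best_cut:
            best_cut, best_side = cut, dict(side)
    return best_cut, best_side

# Small dense example: complete bipartite-ish graph plus some noise edges.
adj = {v: set() for v in range(10)}
for v in range(10):
    for u in range(v + 1, 10):
        if (v < 5) != (u < 5) or random.random() < 0.2:
            adj[v].add(u); adj[u].add(v)
print(dense_maxcut(adj)[0])
```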

Journal ArticleDOI
TL;DR: For unordered trees, it is shown that the alignment problem can be solved in polynomial time if the trees have a bounded degree and becomes MAX SNP-hard if one of the trees is allowed to have an arbitrary degree.

Proceedings ArticleDOI
13 Dec 1995
TL;DR: It is shown that some basic linear control design problems are NP-hard, implying that, unless P=NP, they cannot be solved by polynomial time algorithms.
Abstract: We show that some basic linear control design problems are NP-hard, implying that, unless P=NP, they cannot be solved by polynomial time algorithms. The problems that we consider include simultaneous stabilization by output feedback, stabilization by state or output feedback in the presence of bounds on the elements of the gain matrix, and decentralized control. These results are obtained by first showing that checking the existence of a stable matrix in an interval family of matrices is an NP-hard problem.

Journal ArticleDOI
TL;DR: It is proved that SCS does not have a polynomial time linear approximation algorithm, unless {\bf P} = {\bf NP}, and a new method for analyzing the average-case performance of algorithms for sequences, based on Kolmogorov complexity, is introduced.
Abstract: The problems of finding shortest common supersequences (SCS) and longest common subsequences (LCS) are two well-known {\bf NP}-hard problems that have applications in many areas including computational molecular biology, data compression, robot motion planning and scheduling, text editing, etc. A lot of fruitless effort has been spent in searching for good approximation algorithms for these problems. In this paper, we show that these problems are inherently hard to approximate in the worst case. In particular, we prove that (i) SCS does not have a polynomial time linear approximation algorithm, unless {\bf P} = {\bf NP}; (ii) There exists a constant $\delta > 0$ such that, if SCS has a polynomial time approximation algorithm with ratio $\log^{\delta} n$, where $n$ is the number of input sequences, then {\bf NP} is contained in {\bf DTIME}$(2^{\polylog n})$; (iii) There exists a constant $\delta > 0$ such that, if LCS has a polynomial time approximation algorithm with performance ratio $n^{\delta}$, then {\bf P} = {\bf NP}. The proofs utilize the recent results of Arora et al. [Proc. 23rd IEEE Symposium on Foundations of Computer Science, 1992, pp. 14-23] on the complexity of approximation problems. In the second part of the paper, we introduce a new method for analyzing the average-case performance of algorithms for sequences, based on Kolmogorov complexity. Despite the above nonapproximability results, we show that near optimal solutions for both SCS and LCS can be found on the average. More precisely, consider a fixed alphabet $\Sigma$ and suppose that the input sequences are generated randomly according to the uniform probability distribution and are of the same length $n$. Moreover, assume that the number of input sequences is polynomial in $n$. Then, there are simple greedy algorithms which approximate SCS and LCS with expected additive errors $O(n^{0.707})$ and $O(n^{\frac{1}{2}+\epsilon})$ for any $\epsilon > 0$, respectively. Incidentally, our analyses also provide tight upper and lower bounds on the expected LCS and SCS lengths for a set of random sequences, solving a generalization of another well-known open question on the expected LCS length for two random sequences [K. Alexander, The rate of convergence of the mean length of the longest common subsequence, 1992, manuscript],[V. Chvatal and D. Sankoff, J. Appl. Probab., 12 (1975), pp. 306-315], [D. Sankoff and J. Kruskall, eds., Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, Reading, MA, 1983].
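One simple greedy algorithm of the kind analyzed for SCS is often called "majority merge": repeatedly output the symbol that currently heads the largest number of remaining sequences and strip it from those fronts. The sketch below assumes this variant (the paper's exact greedy may differ) and includes a subsequence check to confirm the output is a valid supersequence.

```python
from collections import Counter

def majority_merge_scs(seqs):
    """Greedy 'majority merge' for the shortest common supersequence:
    at each step append the symbol that is the first symbol of the
    largest number of remaining sequences, then drop it from those fronts."""
    seqs = [list(s) for s in seqs if s]
    out = []
    while seqs:
        counts = Counter(s[0] for s in seqs)
        ch = counts.most_common(1)[0][0]
        out.append(ch)
        seqs = [s[1:] if s[0] == ch else s for s in seqs]
        seqs = [s for s in seqs if s]
    return "".join(out)

def is_supersequence(sup, s):
    """True if s is a subsequence of sup."""
    it = iter(sup)
    return all(c in it for c in s)

seqs = ["abcab", "bcaab", "acbab"]
sup = majority_merge_scs(seqs)
print(sup, all(is_supersequence(sup, s) for s in seqs))
```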

Journal ArticleDOI
TL;DR: The range of approximability stretches from APX-complete problems, which can be approximated within a constant but not within every constant unless P = NP, to NPO PB-complete ones, which are as hard to approximate as all NP optimization problems with polynomially bounded objective functions.

Journal ArticleDOI
TL;DR: The following tree-matching problem is considered: Given labeled trees P and T, can P be obtained from T by deleting nodes?
Abstract: The following tree-matching problem is considered: Given labeled trees $P$ and $T$, can $P$ be obtained from $T$ by deleting nodes? Deleting a node $u$ entails removing all edges incident to $u$ and, if $u$ has a parent $v$, replacing the edge from $v$ to $u$ by edges from $v$ to the children of $u$. The problem is motivated by the study of query languages for structured text databases. Simple solutions to this problem require exponential time. For ordered trees an algorithm is presented that requires $O(|P| |T|)$ time and space. The corresponding problem for unordered trees is also considered and a proof of its NP-completeness is given. An algorithm is presented for the unordered problem. This algorithm works in $O(|P| |T|)$ time if the out-degrees of the nodes in $P$ are bounded by a constant, and in polynomial time if they are $O(\log |T|)$.
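The deletion operation that defines this tree-inclusion problem is easy to state in code: removing a node splices its children into its parent's child list. A minimal sketch (class and function names are illustrative; it does not implement the matching algorithm itself):

```python
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def delete_node(root, target):
    """Delete `target` from the tree rooted at `root`: target's children are
    spliced into its parent's child list in target's position (the operation
    under which tree inclusion is defined).  Deleting the root is not handled."""
    for i, child in enumerate(root.children):
        if child is target:
            root.children[i:i + 1] = child.children
            return True
        if delete_node(child, target):
            return True
    return False

def show(node):
    if not node.children:
        return node.label
    return node.label + "(" + ",".join(show(c) for c in node.children) + ")"

# T = a(b(d,e),c); deleting b yields a(d,e,c)
d, e = Node("d"), Node("e")
b = Node("b", [d, e]); c = Node("c")
a = Node("a", [b, c])
delete_node(a, b)
print(show(a))   # a(d,e,c)
```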

Journal ArticleDOI
TL;DR: A simple algorithm is given that finds a spanning tree simultaneously approximating a shortest-path tree and a minimum spanning tree, and it obtains the best-possible tradeoff.
Abstract: We give a simple algorithm to find a spanning tree that simultaneously approximates a shortest-path tree and a minimum spanning tree. The algorithm provides a continuous tradeoff: given the two trees and a γ > 0, the algorithm returns a spanning tree in which the distance between any vertex and the root of the shortest-path tree is at most 1+√2·γ times the shortest-path distance, and yet the total weight of the tree is at most 1+√2/γ times the weight of a minimum spanning tree. Our algorithm runs in linear time and obtains the best-possible tradeoff. It can be implemented on a CREW PRAM to run in logarithmic time using one processor per vertex.

Journal ArticleDOI
01 Aug 1995
TL;DR: It is shown that a tree 1-spanner, if it exists, in a weighted graph with $m$ edges and $n$ vertices is a minimum spanning tree and can be found in $O(m \log \beta(m, n))$ time, and the problem of determining the existence of a tree $t$-spanner in a weighted graph is proven to be NP-complete.
Abstract: A tree $t$-spanner $T$ of a graph $G$ is a spanning tree in which the distance between every pair of vertices is at most $t$ times their distance in $G$. This notion is motivated by applications in communication networks, distributed systems, and network design. This paper studies graph-theoretic, algorithmic, and complexity issues about tree spanners. It is shown that a tree 1-spanner, if it exists, in a weighted graph with $m$ edges and $n$ vertices is a minimum spanning tree and can be found in $O(m \log \beta(m, n))$ time, where $\beta(m, n) = \min\{i\mid\log^{(i)}n \leq m/n\}$. On the other hand, for any fixed $t > 1$, the problem of determining the existence of a tree $t$-spanner in a weighted graph is proven to be NP-complete. For unweighted graphs, it is shown that constructing a tree 2-spanner takes linear time, whereas determining the existence of a tree $t$-spanner is NP-complete for any fixed $t \geq 4$. A theorem that captures the structure of tree 2-spanners is presented for unweighted graphs. For digraphs, an $O((m + n)\alpha(m, n))$ algorithm is provided for finding a tree $t$-spanner with $t$ as small as possible, where $\alpha(m, n)$ is a functional inverse of Ackermann's function. The results for tree spanners on undirected graphs are extended to "quasi-tree spanners" on digraphs. Furthermore, linear-time algorithms are derived for verifying tree spanners and quasi-tree spanners.
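Verifying a candidate tree t-spanner is simpler than finding one: since tree paths are unique, it suffices to check the stretch of each graph edge, and the triangle inequality then bounds the stretch of every vertex pair. The sketch below does exactly that, recomputing tree distances per edge, so it is quadratic rather than the paper's linear-time verification; names are illustrative.

```python
def tree_dist(tree_adj, src):
    """Distances from src in a weighted tree via DFS (tree paths are unique)."""
    dist, stack = {src: 0.0}, [src]
    while stack:
        u = stack.pop()
        for v, w in tree_adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                stack.append(v)
    return dist

def is_tree_t_spanner(graph_edges, tree_edges, t):
    """Check that the spanning tree stretches no graph edge by more than t;
    by the triangle inequality this bounds the stretch of every vertex pair."""
    tree_adj = {}
    for u, v, w in tree_edges:
        tree_adj.setdefault(u, []).append((v, w))
        tree_adj.setdefault(v, []).append((u, w))
    for u, v, w in graph_edges:
        if tree_dist(tree_adj, u).get(v, float("inf")) > t * w:
            return False
    return True

# 4-cycle with unit weights; a path tree is a 3-spanner but not a 2-spanner.
G = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
T = [(0, 1, 1), (1, 2, 1), (2, 3, 1)]
print(is_tree_t_spanner(G, T, 3), is_tree_t_spanner(G, T, 2))
```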

Proceedings ArticleDOI
29 May 1995
TL;DR: In this paper, it was shown that a unit-cost RAM with a word length of bits can sort integers in the range in time, for arbitrary!, a significant improvement over the bound of " # $ achieved by the fusion trees of Fredman and Willard, provided that % &'( *),+., for some fixed /102, the sorting can even be accomplished in linear expected time with a randomized algorithm.
Abstract: We show that a unit-cost RAM with a word length of w bits can sort n integers in the range 0..2^w − 1 in O(n log log n) time, for arbitrary w ≥ log n, a significant improvement over the bound of O(n √(log n)) achieved by the fusion trees of Fredman and Willard. Provided that w ≥ (log n)^(2+ε), for some fixed ε > 0, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of w bits. The first one yields an algorithm that uses O(log n) time and O(n log log n) operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses O(log n) expected time and O(n) expected operations on a randomized EREW PRAM, provided that w ≥ (log n)^(2+ε) for some fixed ε > 0. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: Experimental results confirm the viability and usefulness of the approach in minimizing power consumption during the register assignment phase of the behavioral synthesis process.
Abstract: This paper describes a technique for calculating the switching activity of a set of registers shared by different data values. Based on the assumption that the joint pdf (probability density function) of the primary input random variables is known or that a sufficiently large number of input vectors has been given, the register assignment problem for minimum power consumption is formulated as a minimum cost clique covering of an appropriately defined compatibility graph (which is shown to be transitively orientable). The problem is then solved optimally (in polynomial time) using a max-cost flow algorithm. Experimental results confirm the viability and usefulness of the approach in minimizing power consumption during the register assignment phase of the behavioral synthesis process.

Book ChapterDOI
29 May 1995
TL;DR: The resulting codes are faster than the previous codes, and much faster on some problem families, due to the combination of heuristics used in the implementation of the push-relabel method.
Abstract: We study efficient implementations of the push-relabel method for the maximum flow problem. The resulting codes are faster than the previous codes, and much faster on some problem families. The speedup is due to the combination of heuristics used in our implementation. We also exhibit a family of problems for which all known methods seem to have almost quadratic time growth rate.
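A bare-bones FIFO push-relabel implementation, without the gap and global-relabeling heuristics that the paper's codes rely on, looks roughly as follows; it is meant only to show the push and relabel operations.

```python
from collections import deque

def push_relabel_maxflow(n, edges, s, t):
    """Basic FIFO push-relabel (no gap/global-relabel heuristics).
    edges: list of (u, v, capacity).  Returns the max-flow value."""
    cap = [dict() for _ in range(n)]            # residual capacities
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)                 # reverse residual edge
    height = [0] * n
    excess = [0] * n
    height[s] = n
    active = deque()
    for v, c in list(cap[s].items()):           # saturate all source edges
        if c > 0:
            cap[s][v] -= c
            cap[v][s] += c
            excess[v] += c
            if v not in (s, t):
                active.append(v)
    while active:
        u = active.popleft()
        while excess[u] > 0:                    # discharge u
            pushed = False
            for v, c in cap[u].items():
                if c > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], c)       # push d units along (u, v)
                    cap[u][v] -= d
                    cap[v][u] += d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        active.append(v)        # v just became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:                      # relabel u
                height[u] = 1 + min(height[v] for v, c in cap[u].items() if c > 0)
    return excess[t]

edges = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
print(push_relabel_maxflow(4, edges, 0, 3))     # expect 5
```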

Journal ArticleDOI
TL;DR: It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions.
Abstract: We consider in this paper the problem of identifying min T(ƒ) and max F(ƒ) of a positive (i.e., monotone) Boolean function ƒ, by using membership queries only, where min T(ƒ) (max F(ƒ)) denotes the set of minimal true vectors (maximal false vectors) of ƒ. It is shown that the existence of an incrementally polynomial algorithm for this problem is equivalent to the existence of the following algorithms, where ƒ and g are positive Boolean functions:
• An incrementally polynomial algorithm to dualize ƒ;
• An incrementally polynomial algorithm to self-dualize ƒ;
• A polynomial algorithm to decide if ƒ and g are mutually dual;
• A polynomial algorithm to decide if ƒ is self-dual;
• A polynomial algorithm to decide if ƒ is saturated;
• A polynomial algorithm in |min T(ƒ)| + |max F(ƒ)| to identify min T(ƒ) only.
Some of these are already well known open problems in the respective fields. Other related topics, including various equivalent problems encountered in hypergraph theory and theory of coteries (used in distributed systems), are also discussed.
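For intuition, the dual of a positive function ƒ is ƒ^d(x) = ¬ƒ(¬x), and mutual duality of ƒ and g can be checked by brute force over all 2^n inputs; the point of the paper is the complexity of such checks without exponential enumeration. A small sketch with illustrative functions:

```python
from itertools import product

def dual_value(f, x):
    """f^d(x) = not f(complement of x), the dual of a Boolean function."""
    return not f(tuple(1 - b for b in x))

def mutually_dual(f, g, n):
    """Brute-force check that g equals f^d on all 2^n inputs (exponential;
    the paper concerns doing such checks in polynomial time)."""
    return all(g(x) == dual_value(f, x) for x in product((0, 1), repeat=n))

# f = x1 or (x2 and x3); its dual is x1 and (x2 or x3).
f = lambda x: x[0] or (x[1] and x[2])
g = lambda x: x[0] and (x[1] or x[2])
print(mutually_dual(f, g, 3))            # True
print(mutually_dual(f, f, 3))            # False: f is not self-dual
```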

Journal ArticleDOI
TL;DR: A restricted set of constraints is identified which gives rise to a class of tractable problems, generalizing the notion of a Horn formula in propositional logic to larger domain sizes, and it is proved that the class of problems generated by any larger set of constraints is NP-complete.

Journal ArticleDOI
Joseph Y. Halpern1
TL;DR: It is shown that the PSPACE-completeness results of Ladner and Halpern and Moses hold for the modal logics K_n, T_n, S4_n (n ⩾ 1) and K45_n, KD45_n, S5_n (n ⩾ 2), even if there is only one primitive proposition in the language.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the learnability of boolean functions from membership and equivalence queries and developed the Monotone Theory that proves (1) any boolean function is learnable in polynomial time in its minimal disjunctive normal form size, its minimal conjunctive norm size, and the number of variables n. In particular, decision trees are learnable.
Abstract: We study the learnability of boolean functions from membership and equivalence queries. We develop the Monotone Theory that proves (1) Any boolean function is learnable in polynomial time in its minimal disjunctive normal form size, its minimal conjunctive normal form size, and the number of variables n. In particular, (2) Decision trees are learnable. Our algorithms are in the model of exact learning with membership queries and unrestricted equivalence queries. The hypotheses to the equivalence queries and the output hypotheses are depth 3 formulas.

Proceedings ArticleDOI
05 Jul 1995
TL;DR: It is proved that the proposed algorithm can efficiently learn distributions generated by the subclass of APFAs it considers, and it is shown that the KL-divergence between the distribution generated by the target source and the distribution generated by the authors' hypothesis can be made arbitrarily small with high confidence in polynomial time.
Abstract: We propose and analyze a distribution learning algorithm for a subclass of acyclic probabilistic finite automata (APFA). This subclass is characterized by a certain distinguishability property of the automata's states. Though hardness results are known for learning distributions generated by general APFAs, we prove that our algorithm can efficiently learn distributions generated by the subclass of APFAs we consider. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made arbitrarily small with high confidence in polynomial time. We present two applications of our algorithm. In the first, we show how to model cursively written letters. The resulting models are part of a complete cursive handwriting recognition system. In the second application we demonstrate how APFAs can be used to build multiple-pronunciation models for spoken words. We evaluate the APFA-based pronunciation models on labeled speech data. The good performance (in terms of the log-likelihood obtained on test data) achieved by the APFAs and the little time needed for learning suggests that the learning algorithm of APFAs might be a powerful alternative to commonly used probabilistic models.

Journal ArticleDOI
TL;DR: It is shown that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); the same hardness result is obtained for several variants of this problem.