
Showing papers in "Information & Computation in 1997"


Journal ArticleDOI
TL;DR: The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+/-) has a much smaller loss if only a few components of the input are relevant for the predictions; experiments show that the worst-case upper bounds are quite tight already on simple artificial data.
Abstract: We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+/-). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+/-) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+/-) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+/-) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data.
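The two update rules are simple enough to state directly. Below is a minimal Python sketch of one trial of each algorithm under squared loss; the learning rate `eta` and the renormalization to a fixed total weight `U` follow the usual presentation of EG(+/-), but the exact parameterization is an illustrative assumption, not the paper's tuned choice.

```python
import numpy as np

def gd_update(w, x, y, eta=0.1):
    """Gradient Descent: subtract the gradient of the squared error."""
    grad = 2 * (w @ x - y) * x
    return w - eta * grad

def eg_pm_update(wp, wm, x, y, eta=0.1):
    """EG(+/-): multiplicative update with gradient components in exponents.
    Maintains positive and negative halves; the prediction is (wp - wm) @ x."""
    grad = 2 * ((wp - wm) @ x - y) * x
    rp = wp * np.exp(-eta * grad)
    rm = wm * np.exp(eta * grad)
    U = wp.sum() + wm.sum()          # total weight is kept fixed
    Z = rp.sum() + rm.sum()          # normalization constant
    return U * rp / Z, U * rm / Z
```

The multiplicative form is what makes EG(+/-) favor sparse relevant components: weights of irrelevant components decay geometrically rather than linearly.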

878 citations


Journal ArticleDOI
TL;DR: A region-based dynamic semantics for a skeletal programming language extracted from Standard ML is defined; the inference system which specifies where regions can be allocated and de-allocated is presented, along with a detailed proof that the system is sound with respect to a standard semantics.
Abstract: This paper describes a memory management discipline for programs that perform dynamic memory allocation and de-allocation. At runtime, all values are put into regions. The store consists of a stack of regions. All points of region allocation and de-allocation are inferred automatically, using a type and effect based program analysis. The scheme does not assume the presence of a garbage collector. The scheme was first presented in 1994 (M. Tofte and J.-P. Talpin, in “Proceedings of the 21st ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” pp. 188–201); subsequently, it has been tested in The ML Kit with Regions, a region-based, garbage-collection free implementation of the Standard ML Core language, which includes recursive datatypes, higher-order functions and updatable references (L. Birkedal, M. Tofte, and M. Vejlstrup, 1996, in “Proceedings of the 23rd ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” pp. 171–183). This paper defines a region-based dynamic semantics for a skeletal programming language extracted from Standard ML. We present the inference system which specifies where regions can be allocated and de-allocated, and a detailed proof that the system is sound with respect to a standard semantics. We conclude by giving some advice on how to write programs that run well on a stack of regions, based on practical experience with the ML Kit.
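As an illustration of the store model only (not of the type-and-effect inference, which is the paper's contribution), here is a toy Python sketch of a stack of regions: pushing opens a fresh region, values are allocated into a named region, and ending the region frees everything in it at once. The method names are mine, not the paper's.

```python
class RegionStack:
    """Toy stack-of-regions store: the store is a stack of regions, values
    are put into regions, and popping a region deallocates all its values."""

    def __init__(self):
        self.stack = []

    def letregion(self):
        """Push a fresh region; return its index (a 'region variable')."""
        self.stack.append([])
        return len(self.stack) - 1

    def alloc(self, rho, value):
        """Store a value in region rho; return its address (rho, offset)."""
        self.stack[rho].append(value)
        return (rho, len(self.stack[rho]) - 1)

    def endregion(self):
        """Pop the top region, deallocating every value in it at once."""
        return self.stack.pop()
```

In the Tofte–Talpin scheme the allocation and deallocation points of such regions are inferred statically, so no garbage collector is needed at runtime.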

640 citations


Journal ArticleDOI
TL;DR: This paper focuses on stable model semantics, shows that the set of stable models coincides with the family of unfounded-free models, and proves that stable models can be defined equivalently by a property of their false literals.
Abstract: Disjunctive logic programs have become a powerful tool in knowledge representation and commonsense reasoning. This paper focuses on stable model semantics, currently the most widely acknowledged semantics for disjunctive logic programs. After presenting a new notion of unfounded sets for disjunctive logic programs, we provide two declarative characterizations of stable models in terms of unfounded sets. One shows that the set of stable models coincides with the family of unfounded-free models (i.e., a model is stable iff it contains no unfounded atoms). The other proves that stable models can be defined equivalently by a property of their false literals, as a model is stable iff the set of its false literals coincides with its greatest unfounded set. We then generalize the well-founded operator W_P to disjunctive logic programs, give a fixpoint semantics for disjunctive stable models and present an algorithm for computing the stable models of function-free programs. The algorithm's soundness and completeness are proved and some complexity issues are discussed.
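For intuition, the stable models of a small disjunctive program can be checked by brute force against the Gelfond–Lifschitz definition; the paper's unfounded-set characterizations are alternative, declarative formulations of exactly this test. The rule encoding below is my own assumption.

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def stable_models(atoms, rules):
    """Brute-force stable models of a disjunctive program.
    Each rule (head, pos, neg) of frozensets reads: h1 v h2 v ... :- pos, not neg.
    I is stable iff I is a minimal model of the reduct of the program w.r.t. I."""
    def satisfies(I, reduct):
        # a model of the reduct makes some head atom true whenever pos holds
        return all(head & I or not pos <= I for head, pos in reduct)

    models = []
    for cand in map(set, powerset(atoms)):
        # Gelfond-Lifschitz reduct: drop rules blocked by neg, then drop neg
        reduct = [(h, p) for h, p, n in rules if not (n & cand)]
        if satisfies(cand, reduct) and not any(
                satisfies(set(J), reduct)
                for J in powerset(cand) if len(J) < len(cand)):
            models.append(cand)
    return models
```

For example, the single rule a v b has exactly the stable models {a} and {b}: {a, b} satisfies the rule but is not minimal.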

223 citations


Journal ArticleDOI
TL;DR: A complexity analysis of concept satisfiability and subsumption for a wide class of concept languages and algorithms for these inferences that comply with the worst-case complexity of the reasoning task they perform.
Abstract: A basic feature of Terminological Knowledge Representation Systems is to represent knowledge by means of taxonomies, here called terminologies, and to provide a specialized reasoning engine to do inferences on these structures. The taxonomy is built through a representation language called a concept language (or description logic ), which is given a well-defined set-theoretic semantics. The efficiency of reasoning has often been advocated as a primary motivation for the use of such systems. The main contributions of the paper are: (1) a complexity analysis of concept satisfiability and subsumption for a wide class of concept languages; (2) algorithms for these inferences that comply with the worst-case complexity of the reasoning task they perform.

210 citations


Journal ArticleDOI
TL;DR: This paper investigates a peculiar intuitionistic modal logic, called Propositional Lax Logic (PLL), which has promising applications to the formal verification of computer hardware, and investigates some of its proof-theoretic properties and defines a new class of fallible two-frame Kripke models for PLL.
Abstract: We investigate a peculiar intuitionistic modal logic, called Propositional Lax Logic (PLL), which has promising applications to the formal verification of computer hardware. The logic has emerged from an attempt to express correctness up to behavioural constraints, a central notion in hardware verification, as a logical modality. As a modal logic it is special since it features a single modal operator ◯ that has a flavour both of possibility and of necessity. In the paper we provide the motivation for PLL and present several technical results. We investigate some of its proof-theoretic properties, presenting a cut-elimination theorem for a standard Gentzen-style sequent presentation of the logic. We go on to define a new class of fallible two-frame Kripke models for PLL. These models are unusual since they feature worlds with inconsistent information; furthermore, the only frame condition imposed is that the modal frame relation be a subrelation of the intuitionistic one. We give a natural translation of these models into Goldblatt's J-space models of PLL. Our completeness theorem for these models yields a Gödel-style embedding of PLL into a classical bimodal theory of type (S4, S4) and underpins a simple proof of the finite model property. We proceed to prove soundness and completeness of several theories for specialized classes of models. We conclude with a brief exploration of two concrete and rather natural types of model from hardware verification for which the modality ◯ models correctness up to timing constraints. We obtain decidability of the ◯-free fragment of the logic of the first type of model, which coincides with the stable form of Maksimova's intermediate logic.

168 citations


Journal ArticleDOI
TL;DR: It is shown that time-abstracted equivalence is decidable for the calculus of Wang, using classical methods based on a finite-state symbolic, structured operational semantics.
Abstract: In the last few years a number of real-time process calculi have emerged with the purpose of capturing important quantitative aspects of real-time systems. In addition, a number of process equivalences sensitive to time-quantities have been proposed, among these the notion of timed (bisimulation) equivalence. In this paper, we introduce a time-abstracting (bisimulation) equivalence and investigate its properties with respect to the real-time process calculus of Wang (Real-time behaviour of asynchronous agents, in “Proceedings of CONCUR90,” Lecture Notes in Computer Science, Vol. 458, Springer-Verlag, Berlin/New York, 1990). Seemingly, such an equivalence would yield very little information (if any) about the timing properties of a process. However, time-abstracted reasoning about a composite process may yield important information about the relative timing-properties of the components of the system. In fact, we show as a main theorem that such implicit reasoning will reveal all timing aspects of a process. More precisely, we prove that two processes are interchangeable in any context up to time-abstracted equivalence precisely if the two processes are themselves timed equivalent. As our second main theorem, we prove that time-abstracted equivalence is decidable for the calculus of Wang, using classical methods based on a finite-state symbolic, structured operational semantics.

99 citations


Journal ArticleDOI
TL;DR: It is shown that a very simple system of interaction combinators, with only three symbols and six rules, is a universal model of distributed computation, in a sense that will be made precise.
Abstract: It is shown that a very simple system of interaction combinators, with only three symbols and six rules, is a universal model of distributed computation, in a sense that will be made precise. This paper is the continuation of the author's work on interaction nets, inspired by Girard's proof nets for linear logic, but no preliminary knowledge of these topics is required for its reading.

81 citations


Journal ArticleDOI
TL;DR: This paper considers a time-constrained scheduling problem and proposes and analyses several approximation algorithms with constant absolute worst-case ratio for graphs that can be colored in polynomial time.
Abstract: In this paper we consider the following time constrained scheduling problem. Given a set of jobs J with execution times e(j) ∈ (0, 1] and an undirected graph G = (J, E), we consider the problem of finding a schedule for the jobs such that adjacent jobs (j, j′) ∈ E are assigned to different machines and the total execution time for each machine is at most 1. The goal is to find a minimum number of machines to execute all jobs under this time constraint. This scheduling problem is a natural generalization of the classical bin-packing problem. We propose and analyse several approximation algorithms with constant absolute worst case ratio for graphs that can be colored in polynomial time.
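The problem combines bin packing (machine loads at most 1) with graph coloring (conflicting jobs on different machines). As a point of comparison only, not one of the paper's algorithms, here is a naive first-fit heuristic in Python:

```python
def first_fit_schedule(exec_time, conflicts):
    """Greedy first-fit: place each job on the first machine that has room
    (load stays <= 1) and holds no conflicting job; otherwise open a machine.
    exec_time: dict job -> time in (0, 1]; conflicts: set of frozenset pairs."""
    machines = []  # each machine is [set_of_jobs, load]
    for job in sorted(exec_time, key=exec_time.get, reverse=True):
        for m in machines:
            if m[1] + exec_time[job] <= 1 and all(
                    frozenset((job, other)) not in conflicts for other in m[0]):
                m[0].add(job)
                m[1] += exec_time[job]
                break
        else:
            machines.append([{job}, exec_time[job]])
    return [m[0] for m in machines]
```

First-fit gives no constant absolute ratio in general (it inherits the hardness of coloring); the paper's point is that constant absolute ratios are achievable when the conflict graph can be colored in polynomial time.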

81 citations


Journal ArticleDOI
TL;DR: A way to transform the All Pairs Shortest Distances problem where the edge lengths are integers with small (⩽M) absolute value into a problem with edge lengths in {−1, 0, 1}.
Abstract: There is a way to transform the All Pairs Shortest Distances (APSD) problem where the edge lengths are integers with small (⩽M) absolute value into a problem with edge lengths in {−1, 0, 1}. This transformation allows us to use the algorithms we developed earlier ([1]) and yields quite efficient algorithms. In this paper we give new improved algorithms for these problems. For n = |V| the number of vertices, M the bound on edge length, and ω the exponent of matrix multiplication, we get the following results: 1. A directed nonnegative APSD(n, M) algorithm which runs in O(T(n, M)) time, where [formula]. 2. An undirected APSD(n, M) algorithm which runs in O(M^((ω+1)/2) n^ω log(Mn)) time.

77 citations


Journal ArticleDOI
TL;DR: The complexity status of the MINIMUM t-SPANNER problem for various values of t is completely settled, and approximation algorithms for the bandwidth minimization problem on convex bipartite graphs and split graphs using the notion of tree spanners are provided.
Abstract: A t-spanner of a graph G is a spanning subgraph H such that the distance between any two vertices in H is at most t times their distance in G. Spanners arise in the context of approximating the original graph with a sparse subgraph (Peleg, D., and Schaffer, A. A. (1989), J. Graph Theory 13(1), 99–116). The MINIMUM t-SPANNER problem seeks to find a t-spanner with the minimum number of edges for the given graph. In this paper, we completely settle the complexity status of this problem for various values of t, on chordal graphs, split graphs, bipartite graphs and convex bipartite graphs. Our results settle an open question raised by L. Cai (1994, Discrete Appl. Math. 48, 187–194) and also greatly simplify some of the proofs presented by Cai and by L. Cai and M. Keil (1994, Networks 24, 233–249). We also give a factor 2 approximation algorithm for the MINIMUM 2-SPANNER problem on interval graphs. Finally, we provide approximation algorithms for the bandwidth minimization problem on convex bipartite graphs and split graphs using the notion of tree spanners.

63 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that integers in the range 1, …, m can be sorted in O((log n)²) time with O(n) operations on an EREW PRAM using a nonstandard word length of O(log n log log n log m) bits, thereby greatly improving the upper bound on the word length necessary to sort integers with a linear time–processor product.
Abstract: We show that n integers in the range 1, …, n can be sorted stably on an EREW PRAM using O(t) time and O(n(log n log log n + (log n)²/t)) operations, for arbitrary given t ⩾ log n log log n, and on a CREW PRAM using O(t) time and O(n(log n + log n/2^(t/log n))) operations, for arbitrary given t ⩾ log n. In addition, we are able to sort n arbitrary integers on a randomized CREW PRAM within the same resource bounds with high probability. In each case our algorithm is a factor of almost Θ(log n) closer to optimality than all previous algorithms for the stated problem in the stated model, and our third result matches the operation count of the best previous sequential algorithm. We also show that n integers in the range 1, …, m can be sorted in O((log n)²) time with O(n) operations on an EREW PRAM using a nonstandard word length of O(log n log log n log m) bits, thereby greatly improving the upper bound on the word length necessary to sort integers with a linear time–processor product, even sequentially. Our algorithms were inspired by, and in one case directly use, the fusion trees of Fredman and Willard.

Journal ArticleDOI
TL;DR: This paper studies the λ-calculus with explicit recursion, i.e., cyclic λ-graphs. Confluence breaks down in an essential way, and a restraining mechanism on the substitution operation is introduced to restore it, leading to a family of λ-graph calculi that extend the λσ-calculi of explicit substitution.
Abstract: This paper is concerned with the study of the λ-calculus with explicit recursion, namely of cyclic λ-graphs. The starting point is to treat a λ-graph as a system of recursion equations involving λ-terms and to manipulate such systems in an unrestricted manner, using equational logic, just as is possible for first-order term rewriting. Surprisingly, now the confluence property breaks down in an essential way. Confluence can be restored by introducing a restraining mechanism on the substitution operation. This leads to a family of λ-graph calculi, which can be seen as an extension of the family of λσ-calculi (λ-calculi with explicit substitution). While the λσ-calculi treat the let-construct as a first-class citizen, our calculi support the letrec, a feature that is essential to reason about time and space behavior of functional languages and also about compilation and optimizations of programs.

Journal ArticleDOI
TL;DR: The Input/Output automaton model and the theory of testing are analyzed in the framework of transition systems and the new reversed MUST preorder is shown to coincide with the quiescent preorder on strongly convergent, finitely branching automata.
Abstract: Two different formalisms for concurrency are compared and are shown to have common foundations. The Input/Output automaton model and the theory of testing are analyzed in the framework of transition systems. The relationship between the fair and quiescent preorders of I/O automata is investigated and the two preorders are shown to coincide on a large class of automata. I/O automata are encoded into the theory of testing and the reversed MUST preorder is shown to be equivalent to the quiescent preorder for strongly convergent, finitely branching automata up to encoding. Conversely, a theory of testing is defined directly on I/O automata, and the new reversed MUST preorder is shown to coincide with the quiescent preorder on strongly convergent, finitely branching automata. Finally, some considerations are given on the issue of divergence, and on other existing theories with an I/O distinction.

Journal ArticleDOI
TL;DR: A quasi-polynomial-time algorithm for sampling almost uniformly at random from the n-slice of the language L(G) generated by an arbitrary context-free grammar G, where |G| is a natural measure of the size of the grammar G.
Abstract: A quasi-polynomial-time algorithm is presented for sampling almost uniformly at random from the n-slice of the language L(G) generated by an arbitrary context-free grammar G. (The n-slice of a language L over an alphabet Σ is the subset L ∩ Σ^n of words of length exactly n.) The time complexity of the algorithm is ε^−2 (n|G|)^O(log n), where the parameter ε bounds the variation of the output distribution from uniform, and |G| is a natural measure of the size of the grammar G. The algorithm applies to a class of language sampling problems that includes slices of context-free languages as a proper subclass. For the restricted case of homogeneous languages expressed by regular expressions without Kleene star, a truly polynomial-time algorithm is presented.

Journal ArticleDOI
TL;DR: The family of frontiers of recognizable picture languages is exactly the family of context-sensitive languages.
Abstract: The theorem stating that the family of frontiers of recognizable tree languages is exactly the family of context-free languages (see J. Mezei and J. B. Wright, 1967, Inform. and Comput. 11, 3–29) is a basic result in the theory of formal languages. In this article, we prove a similar result: the family of frontiers of recognizable picture languages is exactly the family of context-sensitive languages.

Journal ArticleDOI
TL;DR: It is shown that this question is PSPACE-hard for all equivalences that lie between strong bisimulation and trace equivalence, and that the problem is NP-hard and co-NP-hard even for a class of very simple finite agents.
Abstract: A concurrent system of synchronous communicating agents is assembled from simpler sequential agents by parallel composition and hiding. For example, hide a1, …, al in (p1 ‖ p2 ‖ … ‖ pn) describes the system of communicating agents p1, …, pn in which the communication events a1, …, al are hidden. Consider descriptions of two systems p and q of synchronously communicating finite state agents. Assume that one wants to check whether p ∼ q for one of the commonly used equivalences ∼. We show that this question is PSPACE-hard for all equivalences that lie between strong bisimulation and trace equivalence. For some equivalences exponential lower and upper bounds are proven. We also show that this problem is NP-hard and co-NP-hard even for a class of very simple finite agents.

Journal ArticleDOI
TL;DR: It is shown that every graph that does not contain K_{2,r} as a minor has treewidth at most 2r − 2, and that graphs allowing k-label Interval Routing Schemes under dynamic edge costs have treewidth at most 4k.
Abstract: In this paper, we investigate which processor networks allow k-label Interval Routing Schemes, under the assumption that costs of edges may vary. We show that for each fixed k ⩾ 1, the class of graphs allowing such routing schemes is closed under minor-taking in the domain of connected graphs, and hence has a linear time recognition algorithm. This result connects the theory of compact routing with the theory of graph minors and treewidth. We show that every graph that does not contain K_{2,r} as a minor has treewidth at most 2r − 2. As a consequence, graphs that allow k-label Interval Routing Schemes under dynamic cost edges have treewidth at most 4k. Similar results are shown for other types of Interval Routing Schemes.

Journal ArticleDOI
TL;DR: This paper defines a general scheme for recursive definitions and proves that, for all systems satisfying this scheme, every term typeable without using the type constant ω is strongly normalizable, and that all typeable terms have a (weak) head-normal form.
Abstract: In this paper we introduce Curryfied term rewriting systems, and a notion of partial type assignment on terms and rewrite rules that uses intersection types with sorts and ω. Three operations on types (substitution, expansion, and lifting) are used to define type assignment and are proved to be sound. With this result the system is proved closed for reduction. Using a more liberal approach to recursion, we define a general scheme for recursive definitions and prove that, for all systems that satisfy this scheme, every term typeable without using the type constant ω is strongly normalizable. We also show that, under certain restrictions, all typeable terms have a (weak) head-normal form, and that terms whose type does not contain ω are normalizable.

Journal ArticleDOI
TL;DR: The sequences U for which the successor function associated with U is a left, resp. a right, sequential function are characterized, and it is shown that the odometer associated to U is continuous if and only if the successor function is right sequential.
Abstract: Let U be a strictly increasing sequence of integers. By a greedy algorithm, every nonnegative integer has a greedy U-representation. The successor function maps the greedy U-representation of N onto the greedy U-representation of N + 1. We characterize the sequences U such that the successor function associated with U is a left, resp. a right, sequential function. We also show that the odometer associated to U is continuous if and only if the successor function is right sequential.

Journal ArticleDOI
TL;DR: Two models of on-line learning of binary-valued functions from drifting distributions due to Bartlett are considered; it is shown that if each example is drawn from a joint distribution which changes in total variation distance by at most O(ε³/(d log(1/ε))) between trials, then an algorithm can achieve a probability of a mistake at most ε worse than the best function in a class of VC-dimension d.
Abstract: We consider two models of on-line learning of binary-valued functions from drifting distributions due to Bartlett. We show that if each example is drawn from a joint distribution which changes in total variation distance by at most O(ε³/(d log(1/ε))) between trials, then an algorithm can achieve a probability of a mistake at most ε worse than the best function in a class of VC-dimension d. We prove a corresponding necessary condition of O(ε³/d). Finally, in the case that a fixed function is to be learned from noise-free examples, we show that if the distributions on the domain generating the examples change by at most O(ε²/(d log(1/ε))), then any consistent algorithm learns to within accuracy ε.

Journal ArticleDOI
TL;DR: It is shown that the general problem is NP-complete for biconnected planar graphs, and an approximation algorithm is presented that triangulates triconnected planar graphs such that the maximum degree of the triangulation is at most d + 8, where d is the maximum degree of the input graph.
Abstract: In this paper we consider the problem of how to augment a planar graph to a triangulated planar graph while minimizing the maximum degree increase. We show that the general problem is NP-complete for biconnected planar graphs. An approximation algorithm is presented to triangulate triconnected planar graphs such that the maximum degree of the triangulation is at most d + 8, where d is the maximum degree of the input graph. Generalizing this result yields a triangulation algorithm for general planar graphs with maximum degree at most an additional constant larger than existing lower bounds.

Journal ArticleDOI
TL;DR: A new technique is presented to infer strong normalization of a notion of reduction in a typed λ-calculus from weak normalization of the same notion of reduction, giving hope for a positive answer to the Barendregt–Geuvers conjecture stating that every pure type system which is weakly normalizing is also strongly normalizing.
Abstract: For some typed λ-calculi it is easier to prove weak normalization than strong normalization. Techniques to infer the latter from the former have been invented over the last twenty years by Nederpelt, Klop, Khasidashvili, Karr, de Groote, and Kfoury and Wells. However, these techniques infer strong normalization of one notion of reduction from weak normalization of a more complicated notion of reduction. This paper presents a new technique to infer strong normalization of a notion of reduction in a typed λ-calculus from weak normalization of the same notion of reduction. The technique is demonstrated to work on some well-known systems, including the second-order λ-calculus and the system of positive, recursive types. It gives hope for a positive answer to the Barendregt–Geuvers conjecture stating that every pure type system which is weakly normalizing is also strongly normalizing. The paper also analyzes the relationship between the techniques mentioned above, and reviews, in less detail, other techniques for proving strong normalization.

Journal ArticleDOI
TL;DR: Results are presented that relate topological properties of learnable classes to that of intrinsic complexity and ordinal mind change complexity and show that a class that is complete according to the reductions for intrinsic complexity has infinite elasticity.
Abstract: Recently, rich subclasses of elementary formal systems (EFS) have been shown to be identifiable in the limit from only positive data. Examples of these classes are Angluin's pattern languages, unions of pattern languages by Wright and Shinohara, and classes of languages definable by length-bounded elementary formal systems studied by Shinohara. The present paper employs two distinct bodies of abstract studies in the inductive inference literature to analyze the learnability of these concrete classes. The first approach uses constructive ordinals to bound the number of mind changes; ω denotes the first limit ordinal. An ordinal mind change bound of ω means that identification can be carried out by a learner that after examining some element(s) of the language announces an upper bound on the number of mind changes it will make before converging; a bound of ω·2 means that the learner reserves the right to revise this upper bound once; a bound of ω·3 means the learner reserves the right to revise this upper bound twice, and so on. A bound of ω² means that identification can be carried out by a learner that announces an upper bound on the number of times it may revise its conjectured upper bound on the number of mind changes. It is shown in the present paper that the ordinal mind change complexity for identification of languages formed by unions of up to n pattern languages is ω^n. It is also shown that this bound is essential. Similar results are also shown to hold for classes definable by length-bounded elementary formal systems with up to n clauses. The second approach employs reductions to study the intrinsic complexity of learnable classes. It is shown that the class of languages formed by taking unions of up to n + 1 pattern languages is a strictly more difficult learning problem than the class of languages formed by the union of up to n pattern languages. It is also shown that a similar hierarchy holds for the bound on the number of clauses in the case of languages definable by length-bounded EFS. In addition to building bridges between three distinct areas of inductive inference, viz., learnability of EFS subclasses, ordinal mind change complexity, and intrinsic complexity, this paper also presents results that relate topological properties of learnable classes to that of intrinsic complexity and ordinal mind change complexity. For example, it is shown that a class that is complete according to the reductions for intrinsic complexity has infinite elasticity. Since EFS languages and their learnability results have counterparts in traditional logic programming, the present paper demonstrates the possibility of using abstract results of inductive inference to gain insights into inductive logic programming.

Journal ArticleDOI
TL;DR: The communication power of the one-way and two-way edge-disjoint path modes for broadcast and gossip is investigated; some upper bounds are obtained for the one-way mode, and the complete binary trees meet the upper bound for broadcast.
Abstract: The communication power of the one-way and two-way edge-disjoint path modes for broadcast and gossip is investigated. The complexity of communication algorithms is measured by the number of communication steps (rounds). The main results achieved are the following: 1. For each connected graph G_n of n nodes, the complexity of broadcast in G_n, B_min(G_n), satisfies ⌈log₂ n⌉ ≤ B_min(G_n) ≤ ⌈log₂ n⌉ + 1. The complete binary trees meet the upper bound, and all graphs containing a Hamiltonian path meet the lower bound. 2. For each connected graph G_n of n nodes, the one-way (two-way) gossip complexity R(G_n) (R2(G_n)) satisfies ⌈log₂ n⌉ ≤ R2(G_n) ≤ 2·⌈log₂ n⌉ + 1 and 1.44…·log₂ n ≤ R(G_n) ≤ 2·⌈log₂ n⌉ + 2. All these lower and upper bounds are shown to be sharp up to 1. 3. All planar graphs of n nodes and degree h have a two-way gossip complexity of at least 1.5 log₂ n − log₂ log₂ n − 0.5 log₂ h − 8, and the two-dimensional grid of n nodes has gossip complexity 1.5 log₂ n − log₂ log₂ n ± O(1); i.e., two-dimensional grids are optimal gossip structures among planar graphs of bounded degree. Some upper bounds are also obtained for the one-way mode. 4. The d-dimensional grid, d ≥ 3, of n nodes has the two-way gossip complexity (1 + 1/d)·log₂ n − log₂ log₂ n ± O(d).
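For intuition on result 1: in any round-based mode where each informed node can inform at most one new node per round, the number of informed nodes can at best double, which forces at least ⌈log₂ n⌉ rounds. A tiny sketch of that counting argument:

```python
def broadcast_rounds(n):
    """Doubling bound: informed nodes at best double each round, so
    broadcasting to n nodes needs at least ceil(log2 n) rounds."""
    informed, rounds = 1, 0
    while informed < n:
        informed = min(2 * informed, n)  # every informed node informs one more
        rounds += 1
    return rounds
```

The paper's contribution is that in the edge-disjoint path mode this generic lower bound is nearly achievable: every connected graph broadcasts in at most ⌈log₂ n⌉ + 1 rounds.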

Journal ArticleDOI
TL;DR: The results imply that when considering nulls in relational database design we need not assume that NINDs are noncircular; the implication problem for NFDs and NINDs is proved to be decidable and EXPTIME-complete.
Abstract: Functional dependencies (FDs) and inclusion dependencies (INDs) are the most fundamental integrity constraints that arise in practice in relational databases. We introduce null inclusion dependencies (NINDs) to cater for the situation when a database is incomplete and contains null values. We show that the implication problem for NINDs is the same as that for INDs. We then present a sound and complete axiom system for null functional dependencies (NFDs) and NINDs, and prove that the implication problem for NFDs and NINDs is decidable and EXPTIME-complete. By contrast, when no nulls are allowed, this implication problem is undecidable. This undecidability result has motivated several researchers to restrict their attention to FDs and noncircular INDs, in which case the implication problem was shown to be EXPTIME-complete. Our results imply that when considering nulls in relational database design we need not assume that NINDs are noncircular.

Journal ArticleDOI
TL;DR: The addition of multi-exit iteration to BPA yields a more expressive language than that obtained by augmenting BPA with the standard binary Kleene star (BPA*); as a consequence, the proof of completeness of the proposed equational axiomatization for this language is much more involved than that for BPA*.
Abstract: This paper presents an equational axiomatization of bisimulation equivalence over the language of Basic Process Algebra (BPA) with multi-exit iteration. Multi-exit iteration is a generalization of the standard binary Kleene star operation that allows for the specification of agents that, up to bisimulation equivalence, are solutions of systems of recursion equations of the form X₁ =def P₁X₂ + Q₁, …, Xₙ =def PₙX₁ + Qₙ, where n is a positive integer and the Pᵢ and the Qᵢ are process terms. The addition of multi-exit iteration to BPA yields a more expressive language than that obtained by augmenting BPA with the standard binary Kleene star (BPA*). As a consequence, the proof of completeness of the proposed equational axiomatization for this language, although standard in its general structure, is much more involved than that for BPA*. An expressiveness hierarchy for the family of k-exit iteration operators proposed by Bergstra, Bethke, and Ponse is also offered.

Journal ArticleDOI
TL;DR: This paper presents the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model, and proves that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi - random.
Abstract: This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the “typical” labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings.
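The on-line prediction model described above can be sketched as follows. This is not the paper's algorithm: the learner below is a naive baseline that simply memorizes the output bit observed after each input prefix (sound with resets, since a deterministic machine's state is determined by the prefix). The automaton size, walk lengths, and default prediction are illustrative assumptions.

```python
import random

random.seed(0)
n_states, alphabet = 8, [0, 1]
# Random transition graph and random {0,1} labeling of the states.
delta = {(q, a): random.randrange(n_states)
         for q in range(n_states) for a in alphabet}
label = [random.randrange(2) for _ in range(n_states)]

seen = {}        # input prefix -> output bit observed at the reached state
mistakes = 0
for _ in range(200):                  # 200 walks, each starting from a reset
    q, prefix = 0, ()
    for _ in range(10):               # random input sequence of length 10
        a = random.choice(alphabet)
        prefix += (a,)
        q = delta[(q, a)]
        guess = seen.get(prefix, 0)   # predict before seeing the output
        if guess != label[q]:
            mistakes += 1
        seen[prefix] = label[q]       # then observe and memorize
print("mistakes:", mistakes)
```

The learner errs at most once per distinct prefix whose default guess is wrong, which illustrates why the mistake count, not a final hypothesis, is the natural performance measure in this model.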

Journal Article
TL;DR: A new method is proposed for extracting general rules from an artificial neural network, which is trained by destructive learning, that can obtain a simpler and smaller size network structure and extract a relatively small rule set by using pruned network.
Abstract: A new method is proposed for extracting general rules from an artificial neural network that is trained by destructive learning. The method consists of two phases: preprocessing and rule extraction. The preprocessing phase contains three parts: dynamic modification, clustering, and pruning. The dynamic modification automatically generates or constructs, from an initial rule set, a fully connected or non-fully-connected preliminary topological network with one hidden layer. Redundant and unimportant hidden units and links are deleted from the trained network in the clustering and pruning parts, respectively, and the link weights remaining in the network are then retrained to obtain the same MSE. We thus obtain a simpler and smaller network structure and can extract a relatively small rule set from the pruned network. The method is applied to meteorological cloud atlas data from AD reports of the USA. Test results on a test data set show the correctness and effectiveness of the proposed method, which is simple and feasible.
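A minimal sketch of the pruning idea in the abstract above: delete hidden units whose outgoing weights are small in magnitude and keep the reduced network. The threshold, layer sizes, and weight values are illustrative assumptions, not the paper's exact procedure (which also clusters units and retrains the surviving weights).

```python
import numpy as np

rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(4, 6))   # input layer (4) -> hidden layer (6)
# Hidden -> output weights; units 1, 3, 5 contribute almost nothing.
W_out = np.array([0.9, 0.01, -1.2, 0.02, 0.7, -0.03])

threshold = 0.1
keep = np.abs(W_out) > threshold     # "unimportant" = tiny outgoing weight
W_hidden_pruned = W_hidden[:, keep]  # drop the corresponding input weights too
W_out_pruned = W_out[keep]
print("kept units:", int(keep.sum()))  # kept units: 3
```

After such a step the surviving weights would be retrained so the smaller network matches the original error, which is what makes the extracted rule set both small and faithful.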

Journal ArticleDOI
TL;DR: This paper develops a denotational semantics for constraint logic programming with dynamic scheduling, where the denotation of an atom or goal is a set of closure operators, where different closure operators correspond to different sequences of rule choices.
Abstract: The first logic programming languages, such as Prolog, used a fixed left-to-right atom scheduling rule. Recent logic programming languages, however, provide more flexible scheduling in which there is a default computation rule such as left-to-right, but in which some calls are dynamically “delayed” until their arguments are sufficiently instantiated to allow the call to run efficiently. Such languages include constraint logic programming languages, since most implementations of these languages delay constraints which are “too hard.” From the semantic point of view, the fact that an atom must be delayed under certain conditions causes the standard semantics of (constraint) logic programming to no longer be adequate to capture the meaning of a program. In this paper we attack this problem and develop a denotational semantics for constraint logic programming with dynamic scheduling. The key idea is that the denotation of an atom or goal is a set of closure operators, where different closure operators correspond to different sequences of rule choices.
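To make the closure-operator idea concrete, here is a small sketch in which a constraint store is a set of constraints and an operator adds the consequences of a fixed rule set until a fixed point is reached. The rules and constraint names are illustrative assumptions; the point is only that such an operator is extensive, monotone, and idempotent, the defining properties of a closure operator.

```python
# If the premises (left) are all in the store, add the conclusion (right).
rules = {
    frozenset({"x>0"}): "x>=0",
    frozenset({"x>=0", "x<=0"}): "x=0",
}

def close(store):
    """Least fixpoint: repeatedly add rule consequences to the store."""
    store = set(store)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if premises <= store and conclusion not in store:
                store.add(conclusion)
                changed = True
    return frozenset(store)

s = close({"x>0", "x<=0"})
print(sorted(s))          # the store gains x>=0, and then x=0
assert close(s) == s      # idempotent: closing again changes nothing
assert s >= {"x>0", "x<=0"}   # extensive: the input store is contained
```

Delayed atoms fit this picture naturally: a delayed call contributes an operator that has no effect until the store is instantiated enough for its premises to fire.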

Journal ArticleDOI
Martin J. Strauss1
TL;DR: A notion of measure at P is given in a paradigm that differs somewhat from the standard theory, including the density and immunity characteristics of a random language, and it is argued that these results are parallel to previous measure results at exponential time.
Abstract: We give a notion of measure at P in a paradigm that differs somewhat from the standard theory. Our new notion overcomes some limitations of earlier formulations, specifically, concerning closure of null sets under union. First, we analyze formally some of the difficulties in defining measure at P. We then present the new definitions and determine the basic properties of the notion, including the density and immunity characteristics of a random language. We argue that these results are parallel to previous measure results at exponential time.