
Showing papers in "Information & Computation in 2007"


Journal ArticleDOI
TL;DR: The first technical result shows that the schedulability checking problem will be undecidable if the following three conditions hold: the execution times of tasks are intervals, the precise finishing time of a task instance may influence new task releases, and a task is allowed to preempt another running task.
Abstract: We present a model, task automata, for real time systems with non-uniformly recurring computation tasks. It is an extended version of timed automata with asynchronous processes that are computation tasks generated (or triggered) by timed events. Compared with classical task models for real time systems, task automata may be used to describe tasks (1) that are generated non-deterministically according to timing constraints in timed automata, (2) that may have interval execution times representing the best case and the worst case execution times, and (3) whose completion times may influence the releases of task instances. We generalize the classical notion of schedulability to task automata. A task automaton is schedulable if there exists a scheduling strategy such that all possible sequences of events generated by the automaton are schedulable in the sense that all associated tasks can be computed within their deadlines. Our first technical result is that the schedulability for a given scheduling strategy can be checked algorithmically for the class of task automata when the best case and the worst case execution times of tasks are equal. The proof is based on a decidable class of suspension automata: timed automata with bounded subtraction in which clocks may be updated by subtractions within a bounded zone. We shall also study the borderline between decidable and undecidable cases. Our second technical result shows that the schedulability checking problem will be undecidable if the following three conditions hold: (1) the execution times of tasks are intervals, (2) the precise finishing time of a task instance may influence new task releases, and (3) a task is allowed to preempt another running task.
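The schedulability notion above can be made concrete with a toy example. The sketch below simulates preemptive earliest-deadline-first (EDF) scheduling over a fixed, finite set of job releases in unit time steps; it only illustrates what "all task instances complete within their deadlines" means, and is not the task-automata model or the decision procedure of the paper (the job triples and the discrete-time granularity are assumptions made for the illustration).

```python
def edf_schedulable(jobs):
    """Deadline check for jobs (release, exec_time, deadline) under
    preemptive EDF on one processor, in unit-length time steps."""
    remaining = {i: c for i, (r, c, d) in enumerate(jobs)}
    horizon = max(d for _, _, d in jobs)
    for t in range(horizon):
        # jobs released by time t and not yet finished
        ready = [i for i, (r, c, d) in enumerate(jobs)
                 if r <= t and remaining[i] > 0]
        if not ready:
            continue
        # EDF: run the ready job with the earliest absolute deadline
        i = min(ready, key=lambda k: jobs[k][2])
        remaining[i] -= 1
        if remaining[i] == 0 and t + 1 > jobs[i][2]:
            return False  # instance finished after its deadline
    # anything unfinished at the horizon has missed its deadline
    return all(v == 0 for v in remaining.values())
```

For example, two jobs released at time 0 with execution time 2 and deadline 4 are schedulable, while two jobs needing 3 units each by deadline 3 are not.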

181 citations


Journal ArticleDOI
TL;DR: A symbolic framework for the verification of probabilistic timed automata against the probabilistic, timed temporal logic PTCTL is presented, together with the results of its application to the CSMA/CD and FireWire root contention protocol case studies.
Abstract: Probabilistic timed automata are timed automata extended with discrete probability distributions, and can be used to model timed randomised protocols or fault-tolerant systems. We present symbolic model-checking algorithms for probabilistic timed automata to verify both qualitative temporal logic properties, corresponding to satisfaction with probability 0 or 1, and quantitative properties, corresponding to satisfaction with arbitrary probability. The algorithms operate on zones, which represent sets of valuations of the probabilistic timed automaton's clocks. Our method considers only those system behaviours which guarantee the divergence of time with probability 1. The paper presents a symbolic framework for the verification of probabilistic timed automata against the probabilistic, timed temporal logic PTCTL. We also report on a prototype implementation of the algorithms using Difference Bound Matrices, and present the results of its application to the CSMA/CD and FireWire root contention protocol case studies.

157 citations


Journal ArticleDOI
TL;DR: This paper presents a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra and shows decidability of should-testing (on the basis of the denotational characterisation); its advantages are demonstrated by the application to a number of examples.
Abstract: In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner's observation congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a property long sought-after; it is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows the use of the proof techniques of observation congruence, but also supports additional laws and techniques. Moreover, we show decidability of should-testing (on the basis of the denotational characterisation). Finally, we demonstrate its advantages by applying it to a number of examples, including a scheduling problem, a version of the Alternating Bit protocol, and fair lossy communication channels.

142 citations


Journal ArticleDOI
TL;DR: It is shown how good properties of first-order rewriting survive the extension, by giving an efficient rewriting algorithm, a critical pair lemma, and a confluence theorem for orthogonal systems.
Abstract: Nominal rewriting is based on the observation that if we add support for α-equivalence to first-order syntax using the nominal-set approach, then systems with binding, including higher-order reduction schemes such as λ-calculus beta-reduction, can be smoothly represented. Nominal rewriting maintains a strict distinction between variables of the object-language (atoms) and of the meta-language (variables or unknowns). Atoms may be bound by a special abstraction operation, but variables cannot be bound, giving the framework a pronounced first-order character, since substitution of terms for variables is not capture-avoiding. We show how good properties of first-order rewriting survive the extension, by giving an efficient rewriting algorithm, a critical pair lemma, and a confluence theorem for orthogonal systems.

126 citations


Journal ArticleDOI
TL;DR: The Tyrolean Termination Tool incorporates several new refinements of the dependency pair method that are easy to implement, increase the power of the method, result in simpler termination proofs, and make the method more efficient.
Abstract: The Tyrolean Termination Tool (TTT for short) is a powerful tool for automatically proving termination of rewrite systems. It incorporates several new refinements of the dependency pair method that are easy to implement, increase the power of the method, result in simpler termination proofs, and make the method more efficient. TTT employs polynomial interpretations with negative coefficients, like x-1 for a unary function symbol or x-y for a binary function symbol, which are useful for extending the class of rewrite systems that can be proved terminating automatically. Besides a detailed account of these techniques, we describe the convenient web interface of TTT and provide some implementation details.
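To give a flavour of interpretations like x-1 (truncated to the naturals), here is a hedged sketch: a hypothetical rule pred(s(x)) -> x, an interpretation with a negative coefficient, and a sampling check that the interpreted left-hand side strictly exceeds the right-hand side. Sampling is only a necessary condition; a tool like TTT establishes the inequality symbolically for all naturals. The symbols and interpretations here are invented for illustration, not taken from the paper.

```python
from itertools import product

# Hypothetical signature and interpretation; the negative coefficient in
# 'pred' is truncated to the naturals with max(., 0).
INTERP = {
    's':    lambda x: x + 2,
    'pred': lambda x: max(x - 1, 0),
}

def value(term, env):
    """Evaluate a term: a variable name, or a (symbol, arg, ...) tuple."""
    if isinstance(term, str):
        return env[term]
    f, *args = term
    return INTERP[f](*(value(a, env) for a in args))

def decreases(lhs, rhs, variables, samples=range(20)):
    """Necessary condition for orienting lhs -> rhs: [lhs] > [rhs] on all
    sampled assignments (a real prover shows this for every natural)."""
    return all(value(lhs, dict(zip(variables, vals))) >
               value(rhs, dict(zip(variables, vals)))
               for vals in product(samples, repeat=len(variables)))

# pred(s(x)) -> x: interpreted as max(x + 2 - 1, 0) = x + 1 > x
print(decreases(('pred', ('s', 'x')), 'x', ['x']))  # True
```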

109 citations


Journal ArticleDOI
TL;DR: A new automata-theoretic technique is used to show PSPACE decidability of linear-time temporal logic with constraints interpreted over a concrete domain, and it is shown that the logic becomes undecidable when one considers constraint systems that allow a counting mechanism.
Abstract: We consider an extension of linear-time temporal logic (LTL) with constraints interpreted over a concrete domain. We use a new automata-theoretic technique to show PSPACE decidability of the logic for the constraint systems (Z,<,=) and (N,<,=). Along the way, we give an automata-theoretic proof of a result of Balbiani and Condotta when the constraint system satisfies the completion property. Our decision procedures extend easily to handle extensions of the logic with past-time operators and constants, as well as an extension of the temporal language itself to monadic second order logic. Finally we show that the logic becomes undecidable when one considers constraint systems that allow a counting mechanism.

103 citations


Journal ArticleDOI
TL;DR: In this article, the authors define reactive simulatability for general asynchronous systems, which is a type of refinement that preserves particularly strong properties, in particular confidentiality, and define the reactive runtime via a realization by Turing machines such that notions like polynomial time are composable.
Abstract: We define reactive simulatability for general asynchronous systems. Roughly, simulatability means that a real system implements an ideal system (specification) in a way that preserves security in a general cryptographic sense. Reactive means that the system can interact with its users multiple times, e.g., in many concurrent protocol runs or a multi-round game. In terms of distributed systems, reactive simulatability is a type of refinement that preserves particularly strong properties, in particular confidentiality. A core feature of reactive simulatability is composability, i.e., the real system can be plugged in instead of the ideal system within arbitrary larger systems; this is shown in follow-up papers, and so is the preservation of many classes of individual security properties from the ideal to the real systems. A large part of this paper defines a suitable system model. It is based on probabilistic IO automata (PIOA) with two main new features: One is generic distributed scheduling. Important special cases are realistic adversarial scheduling, procedure-call-type scheduling among colocated system parts, and special schedulers such as for fairness, also in combinations. The other is the definition of the reactive runtime via a realization by Turing machines such that notions like polynomial-time are composable. The simple complexity of the transition functions of the automata is not composable. As specializations of this model we define security-specific concepts, in particular a separation between honest users and adversaries and several trust models. The benefit of IO automata as the main model, instead of only interactive Turing machines as usual in cryptographic multi-party computation, is that many cryptographic systems can be specified with an ideal system consisting of only one simple, deterministic IO automaton without any cryptographic objects, as many follow-up papers show. 
This enables the use of classic formal methods and automatic proof tools for proving larger distributed protocols and systems that use these cryptographic systems.

103 citations


Journal ArticleDOI
TL;DR: It is shown that ρ(n)≤n and there are at most 0.67n runs with periods larger than 87, which supports the conjecture that the number of all runs is smaller than n.
Abstract: A run in a string is a nonextendable (with the same minimal period) periodic segment in a string. The set of runs corresponds to the structure of internal periodicities in a string. Periodicities in strings were extensively studied and are important both in theory and practice (combinatorics of words, pattern-matching, computational biology). Let ρ(n) be the maximal number of runs in a string of length n. It has been shown that ρ(n)=O(n), but the proof was very complicated and the constant coefficient in O(n) was not given explicitly. We demystify the proof of the linear upper bound for ρ(n) and propose a new approach to the analysis of runs based on the properties of subperiods: the periods of the periodic parts of the runs. We show that ρ(n)≤n and there are at most 0.67n runs with periods larger than 87. This supports the conjecture that the number of all runs is smaller than n. We also give a completely new proof of the linear bound and discover several new interesting "periodicity lemmas".
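The definition of a run can be made concrete with a small brute-force enumerator: for each segment, compute its minimal period, keep it if its length is at least twice the period, and discard it if it can be extended in either direction with the same period. This cubic-time sketch is purely illustrative and is unrelated to the combinatorial analysis of the paper.

```python
def minimal_period(s):
    """Smallest p with s[i] == s[i - p] for all i >= p."""
    return next(p for p in range(1, len(s) + 1)
                if all(s[i] == s[i - p] for i in range(p, len(s))))

def runs(w):
    """All runs (i, j, p): w[i:j] has minimal period p, length >= 2p,
    and cannot be extended in either direction keeping period p."""
    n, found = len(w), []
    for i in range(n):
        for j in range(i + 2, n + 1):
            p = minimal_period(w[i:j])
            if j - i < 2 * p:
                continue  # exponent below 2: not a repetition
            if i > 0 and w[i - 1] == w[i - 1 + p]:
                continue  # extendable to the left with period p
            if j < n and w[j] == w[j - p]:
                continue  # extendable to the right with period p
            found.append((i, j, p))
    return found
```

For instance, "abaab" contains the single run "aa" (positions 2 to 4 with period 1), while "abab" is itself a run with period 2.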

67 citations


Journal ArticleDOI
TL;DR: In this article, a simple and clean incremental congruence closure algorithm for ground equational theories is presented, which runs in the best known time O(n log n).
Abstract: Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi-)decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of input equations, the small subset that explains the equivalence of a given pair of terms. In this paper we present an algorithm satisfying all these requirements. First, building on ideas from abstract congruence closure algorithms, we present a very simple and clean incremental congruence closure algorithm and show that it runs in the best known time O(n log n). After that, we introduce a proof-producing union-find data structure that is then used for extending our congruence closure algorithm, without increasing the overall O(n log n) time, in order to produce a k-step explanation for a given equation in almost optimal time (quasi-linear in k). Finally, we show that the previous algorithms can be smoothly extended, while still obtaining the same asymptotic time bounds, in order to support the interpreted function symbols successor and predecessor, which have been shown to be very useful in applications such as microprocessor verification.
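A minimal sketch of the "explaining" ingredient: a union-find that, alongside the usual forest, records each merging input equation as an edge of a proof forest, so the reasons for an equality can be read off the path between two elements. This BFS-based version is far simpler and slower than the paper's almost-optimal data structure, and it omits congruence closure itself; the class and method names are invented for the illustration.

```python
from collections import deque

class ExplainingUnionFind:
    """Union-find that can report which input equations merged two elements."""
    def __init__(self):
        self.parent = {}
        self.edges = {}   # proof forest: undirected edges labelled by reason

    def find(self, x):
        self.parent.setdefault(x, x)
        self.edges.setdefault(x, [])
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b, reason):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already equal: record nothing, keep a forest
        self.parent[ra] = rb
        self.edges[a].append((b, reason))
        self.edges[b].append((a, reason))
        return True

    def explain(self, a, b):
        """Input equations along the proof-forest path from a to b."""
        self.find(a), self.find(b)
        queue, seen = deque([(a, [])]), {a}
        while queue:
            x, path = queue.popleft()
            if x == b:
                return path
            for y, reason in self.edges[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append((y, path + [reason]))
        return None  # not provably equal
```

After asserting a=b, c=d and b=c, the explanation for a=d is exactly the three input equations on the path between a and d.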

63 citations


Journal ArticleDOI
TL;DR: A new method for automatically proving termination of left-linear term rewriting systems on a given regular language of terms is presented, a generalization of the match bound method for string rewriting and two methods to construct the enrichment are presented.
Abstract: We present a new method for automatically proving termination of left-linear term rewriting systems on a given regular language of terms. It is a generalization of the match bound method for string rewriting. To prove that a term rewriting system terminates we first construct an enriched system over a new signature that simulates the original derivations. The enriched system is an infinite system over an infinite signature, but it is locally terminating: every restriction of the enriched system to a finite signature is terminating. We then construct iteratively a finite tree automaton that accepts the enriched given regular language and is closed under rewriting modulo the enriched system. If this procedure stops, then the enriched system is compact: every enriched derivation involves only a finite signature. Therefore, the original system terminates. We present two methods to construct the enrichment: roof heights for left-linear systems, and match heights for linear systems. For linear systems, the method is strengthened further by a forward closure construction. Using these methods, we give examples for automated termination proofs that cannot be obtained by standard methods.

62 citations


Journal ArticleDOI
TL;DR: In this article, a similarity measure, Eros (extended Frobenius norm), an index structure, Muse (multilevel distance-based index structure for Eros), and a feature subset selection technique, Ropes, are presented.
Abstract: Multivariate time series (MTS) datasets are common in various multimedia, medical and financial applications. In order to efficiently perform k nearest neighbor searches for MTS datasets, we present a similarity measure, Eros (extended Frobenius norm), an index structure, Muse (multilevel distance-based index structure for Eros), and a feature subset selection technique, Ropes (recursive feature elimination on common principal components for Eros). Eros is based on principal component analysis, and computes the similarity between two MTS items by measuring how close the corresponding principal components are using the eigenvalues as weights. Muse constructs each level as a distance-based index structure without using the weights, up to z levels, which are combined at the query time with the weights. Ropes utilizes both the common principal components and the weights recursively in order to select a subset of features for Eros. The experimental results show the superiority of our techniques as compared to earlier approaches.

Journal ArticleDOI
TL;DR: The edit distance (or Levenshtein distance) between two words is the smallest number of substitutions, insertions, and deletions of symbols that can be used to transform one of the words into the other.
Abstract: The edit distance (or Levenshtein distance) between two words is the smallest number of substitutions, insertions, and deletions of symbols that can be used to transform one of the words into the other. In this paper, we consider the problem of computing the edit distance of a regular language (the set of words accepted by a given finite automaton). This quantity is the smallest edit distance between any pair of distinct words of the language. We show that the problem is of polynomial time complexity. In particular, for a given finite automaton A with n transitions, over an alphabet of r symbols, our algorithm operates in time O(n^2 r^2 q^2 (q + r)), where q is either the diameter of A (if A is deterministic) or the square of the number of states in A (if A is nondeterministic). Incidentally, we also obtain an upper bound on the edit distance of a regular language in terms of the automaton accepting the language.
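For intuition, the word-to-word edit distance has the classic dynamic-programming solution below, and for a finite language the paper's quantity is just the minimum over distinct pairs. The paper's contribution is computing this for infinite regular languages directly from the automaton; the brute-force pairwise minimum shown here obviously does not extend to that case.

```python
from itertools import combinations

def edit_distance(u, v):
    """Levenshtein distance via the classic dynamic program (two rows)."""
    m, n = len(u), len(v)
    prev = list(range(n + 1))  # distances from the empty prefix of u
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete u[i-1]
                         cur[j - 1] + 1,                        # insert v[j-1]
                         prev[j - 1] + (u[i - 1] != v[j - 1]))  # substitute
        prev = cur
    return prev[n]

def language_edit_distance(words):
    """Smallest edit distance between distinct words of a finite language."""
    return min(edit_distance(u, v) for u, v in combinations(words, 2))
```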

Journal ArticleDOI
TL;DR: In this article, it was shown that the state hierarchy of d-state automata is not contiguous, and that there are holes in the hierarchy, i.e., magic numbers in between values that are not magic.
Abstract: A number d is magic for n if there is no regular language for which an optimal nondeterministic finite state automaton (nfa) uses exactly n states and, at the same time, the optimal deterministic finite state automaton (dfa) uses exactly d states. We show that, in the case of unary regular languages, the state hierarchy of dfa's, for the family of languages accepted by n-state nfa's, is not contiguous. There are some "holes" in the hierarchy, i.e., magic numbers in between values that are not magic. This solves, for automata with a single-letter input alphabet, the open problem of the existence of magic numbers. In fact, most numbers are magic in the unary case. As an additional bonus, we also get a new universal lower bound for the conversion of unary d-state dfa's into equivalent nfa's: nondeterminism does not reduce the number of states below log^2 d, not even in the best case.

Journal ArticleDOI
TL;DR: A polynomial-time method for solving the problem of testing whether a given regular expression E with numeric occurrence indicators is 1-unambiguous or not and a formal proof of its correctness.
Abstract: Regular expressions with numeric occurrence indicators are an extension of traditional regular expressions, which let the required minimum and the allowed maximum number of iterations of subexpressions be described with numeric parameters. We consider the problem of testing whether a given regular expression E with numeric occurrence indicators is 1-unambiguous or not. This condition means, informally, that any prefix of any word accepted by expression E determines a unique path of matching symbol positions in E. One-unambiguity appears as a validity constraint in popular document schema languages such as SGML and XML DTDs (document type definitions) and XML Schema; the last one both includes numeric occurrence indicators and requires one-unambiguity of expressions. Previously published solutions for testing the one-unambiguity of regular expressions with numeric occurrence indicators are either erroneous or require exponential time. The main contribution of this paper is a polynomial-time method for solving this problem, and a formal proof of its correctness.
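The flavour of the problem can be sketched in a few lines. Python's re uses the same {m,n} numeric indicators, and a tiny counting function shows why expressions mixing indicators can match a word along more than one path; more than one full matching already implies ambiguity, and hence failure of 1-unambiguity. The expression a{1,2}a{1,2} below is an invented illustration, not an example from the paper.

```python
import re

# Python's {m,n} has the same meaning as numeric occurrence indicators:
# between m and n iterations of the subexpression.
assert re.fullmatch(r'(ab){2,3}', 'ababab')
assert not re.fullmatch(r'(ab){2,3}', 'ab')

def matchings(word, parts):
    """Number of ways to factor word taking one factor from each part.
    More than one full matching means the concatenation is ambiguous,
    and hence not 1-unambiguous."""
    if not parts:
        return 1 if word == '' else 0
    return sum(matchings(word[len(f):], parts[1:])
               for f in parts[0] if word.startswith(f))

# a{1,2} followed by a{1,2}: the word "aaa" matches along two paths,
# so a matcher cannot resolve symbol positions from prefixes alone.
print(matchings('aaa', [{'a', 'aa'}, {'a', 'aa'}]))  # 2
```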

Journal ArticleDOI
TL;DR: The operational behaviour of the calculus and some of its fundamental properties such as confluence, preservation of strong normalisation, strong normalisation of simply typed terms, step-by-step simulation of β-reduction and full composition are shown.
Abstract: We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear Logic's proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties such as confluence, preservation of strong normalisation, strong normalisation of simply typed terms, step-by-step simulation of β-reduction and full composition.

Journal ArticleDOI
TL;DR: It is proved that, without recursion, the linear and exponential versions of the logic correspond to significant fragments of first-order (FO) and monadic second-order (MSO) logics; the two versions are actually equivalent to FO and MSO on graphs representing strings.
Abstract: We investigate the complexity and expressive power of a spatial logic for reasoning about graphs. This logic was previously introduced by Cardelli, Gardner and Ghelli, and provides the simplest setting in which to explore such results for spatial logics. We study several forms of the logic: the logic with and without recursion, and with either an exponential or a linear version of the basic composition operator. We study the combined complexity and the expressive power of the four combinations. We prove that, without recursion, the linear and exponential versions of the logic correspond to significant fragments of first-order (FO) and monadic second-order (MSO) logics; the two versions are actually equivalent to FO and MSO on graphs representing strings. However, when the two versions are enriched with μ-style recursion, their expressive power is sharply increased. Both are able to express PSPACE-complete problems, although their combined complexity and data complexity still belong to PSPACE.

Journal ArticleDOI
TL;DR: The operational semantics of qCCS is given in terms of probabilistic labeled transition system, which has many different features compared with the proposals in the available literature in order to describe the input and output of quantum systems which are possibly correlated with other components.
Abstract: Modeling and reasoning about concurrent quantum systems is very important for both distributed quantum computing and quantum protocol verification. As a consequence, a general framework formally describing communication and concurrency in complex quantum systems is necessary. For this purpose, we propose a model named qCCS. It is a natural quantum extension of classical value-passing CCS which can deal with input and output of quantum states, and unitary transformations and measurements on quantum systems. The operational semantics of qCCS is given in terms of a probabilistic labeled transition system. Compared with proposals in the available literature, this semantics has many distinctive features designed to describe the input and output of quantum systems that may be correlated with other components. Based on this operational semantics, the notions of strong probabilistic bisimulation and weak probabilistic bisimulation between quantum processes are introduced. Furthermore, some properties of these two probabilistic bisimulations, such as congruence under various combinators, are examined.

Journal ArticleDOI
TL;DR: This work presents a novel game-based approach to abstraction-refinement for the full μ-calculus, interpreted over 3-valued semantics, where a novel notion of non-losing strategy is introduced and exploited for refinement.
Abstract: This work presents a novel game-based approach to abstraction-refinement for the full μ-calculus, interpreted over 3-valued semantics. A novel notion of non-losing strategy is introduced and exploited for refinement. Previous works on refinement in the context of 3-valued semantics require a direct algorithm for solving a 3-valued model checking game. This was necessary in order to have the information needed for refinement available on one game board. In contrast, while still considering a 3-valued model checking game, here we reduce the problem of solving the game to solving two 2-valued model checking (parity) games. In case the result is indefinite (don't know), the corresponding non-losing strategies, when combined, hold all the information needed for refinement. This approach is beneficial since it can use any solver for 2-valued parity games. Thus, it can take advantage of newly developed such algorithms with improved complexity.

Journal ArticleDOI
TL;DR: FMG (Fraenkel-Mostowski Generalised) set theory is introduced, a generalisation of FM set theory which allows binding of infinitely many names instead of just finitely many names.
Abstract: We introduce FMG (Fraenkel-Mostowski Generalised) set theory, a generalisation of FM set theory which allows binding of infinitely many names instead of just finitely many names. We apply this generalisation to show how three presentations of syntax (de Bruijn indices, FM sets, and name-carrying syntax) have a relation generalising to all sets, not only sets of syntax trees. We also give syntax-free accounts of Barendregt representatives, scope extrusion, and other phenomena associated to α-equivalence. Our presentation is based not on a theory but on a concrete model U.

Journal ArticleDOI
TL;DR: It is shown that monadic second-order logic has the selection and the uniformization properties over the extensions of (Nat,<) by monadic predicates and a self-contained proof of this result is provided.
Abstract: A fundamental result of Büchi states that the set of monadic second-order formulas true in the structure (Nat, <) is decidable. A natural question is: which monadic predicates (sets) can be added to (Nat, <) while preserving decidability? Elgot and Rabin found many interesting predicates P for which the monadic theory of (Nat, <, P) is decidable. The Elgot-Rabin automata-theoretic method has been generalized and sharpened over the years, and their results were extended to a variety of unary predicates. We give a sufficient and necessary model-theoretical condition for the decidability of the monadic theory of (Nat, <, P_1, ..., P_n). We reformulate this condition in an algebraic framework and show that a sufficient condition proposed previously by O. Carton and W. Thomas is actually necessary. A crucial argument in the proof is that monadic second-order logic has the selection and the uniformization properties over the extensions of (Nat, <) by monadic predicates. We provide a self-contained proof of this result.

Journal ArticleDOI
TL;DR: This work addresses the problem of proving hardness results for (fully) dense problems, which has been neglected despite the fruitful effort put into upper bounds, and proves hardness results for dense instances of a broad family of CSP problems, as well as a broad family of ranking problems referred to as CSP-Rank.
Abstract: In the past decade, there has been a stream of work in designing approximation schemes for dense instances of NP-hard problems. These include the work of Arora, Karger and Karpinski from 1995 and that of Frieze and Kannan from 1996. We address the problem of proving hardness results for (fully) dense problems, which has been neglected despite the fruitful effort put into upper bounds. In this work, we prove hardness results for dense instances of a broad family of CSP problems, as well as a broad family of ranking problems which we refer to as CSP-Rank. Our techniques involve a construction of a pseudorandom hypergraph coloring, which generalizes the well-known Paley graph, recently used by Alon to prove hardness of feedback arc-set in tournaments.
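For readers unfamiliar with it, the Paley graph mentioned above is easy to construct: its vertices are the integers mod q (q a prime congruent to 1 mod 4, which makes -1 a quadratic residue and hence adjacency symmetric), with an edge between x and y whenever x - y is a nonzero square mod q. The sketch below is just this textbook construction, not the paper's pseudorandom hypergraph coloring.

```python
def paley_graph(q):
    """Paley graph on Z_q for a prime q with q % 4 == 1: x ~ y iff x - y
    is a nonzero quadratic residue mod q."""
    residues = {(x * x) % q for x in range(1, q)}  # nonzero squares mod q
    return {(x, y) for x in range(q) for y in range(q)
            if x != y and (x - y) % q in residues}

# q = 13 gives a 6-regular graph: each vertex has degree (q - 1) / 2
edges = paley_graph(13)
degree = {x: sum(1 for a, b in edges if a == x) for x in range(13)}
```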

Journal ArticleDOI
TL;DR: A natural subclass of regular languages (Alphabetic Pattern Constraints, APC) which is effectively closed under permutation rewriting, i.e., under iterative application of rules of the form ab->ba is proposed.
Abstract: We propose a natural subclass of regular languages (Alphabetic Pattern Constraints, APC) which is effectively closed under permutation rewriting, i.e., under iterative application of rules of the form ab->ba. It is well-known that regular languages do not have this closure property, in general. Our result can be applied for example to regular model checking, for verifying properties of parametrized linear networks of regular processes, and for modeling and verifying properties of asynchronous distributed systems. We also consider the complexity of testing membership in APC and show that the question is complete for PSPACE when the input is an NFA, and complete for NLOGSPACE when it is a DFA. Moreover, we show that both the inclusion problem and the question of closure under permutation rewriting are PSPACE-complete when we restrict to the class APC.
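Permutation rewriting itself is simple to explore by brute force on a single word: repeatedly apply ab -> ba for the allowed pairs and collect everything reachable. The sketch below does exactly that; it says nothing about the APC class or the closure construction of the paper, which works on whole regular languages rather than single words.

```python
from collections import deque

def permutation_closure(word, rules):
    """All words reachable from word by rewriting ab -> ba whenever the
    pair (a, b) is in rules (iterated in any order, a simple BFS)."""
    seen, queue = {word}, deque([word])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in rules:
                v = w[:i] + w[i + 1] + w[i] + w[i + 2:]  # swap the pair
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen
```

With the single rule ab -> ba, the closure of "aabb" is all six arrangements of two a's and two b's, while "ba" is already a normal form.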

Journal ArticleDOI
TL;DR: It is shown that the number of words of length n on a finite alphabet that avoid p grows exponentially with n as long as the alphabet has at least four letters.
Abstract: We study words on a finite alphabet avoiding a finite collection of patterns. Given a pattern p in which every letter that occurs in p occurs at least twice, we show that the number of words of length n on a finite alphabet that avoid p grows exponentially with n as long as the alphabet has at least four letters. Moreover, we give lower bounds describing this exponential growth in terms of the size of the alphabet and the number of letters occurring in p. We also obtain analogous results for the number of words avoiding a finite collection of patterns. We conclude by giving some questions.
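The simplest pattern in which every letter occurs at least twice is xx, whose avoiders are the square-free words. A brute-force counter over a three-letter alphabet (where squares are already avoidable, by Thue's classical result) illustrates the kind of growth the paper quantifies; the paper's theorem itself concerns arbitrary such patterns and alphabets with at least four letters.

```python
from itertools import product

def has_square(w):
    """True iff w contains a factor xx, i.e. an image of the pattern 'pp'."""
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for L in range(1, n // 2 + 1)
               for i in range(n - 2 * L + 1))

def count_avoiding(n, alphabet='abc'):
    """Number of length-n words over the alphabet with no square factor."""
    return sum(1 for w in product(alphabet, repeat=n)
               if not has_square(''.join(w)))

# Over three letters the counts 3, 6, 12, 18, 30, ... keep growing.
print([count_avoiding(n) for n in range(1, 5)])  # [3, 6, 12, 18]
```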

Journal ArticleDOI
TL;DR: The semantic theory of a foundational language for modelling applications over global computers whose interconnection structure can be explicitly manipulated is developed and an alternative characterisation in terms of a labelled bisimulation is provided.
Abstract: We develop the semantic theory of a foundational language for modelling applications over global computers whose interconnection structure can be explicitly manipulated. Together with process distribution, process mobility and remote asynchronous communication through distributed data repositories, the language has primitives for explicitly modelling inter-node connections and for dynamically activating and deactivating them. For the proposed language, we define natural notions of extensional observations and study their closure under operational reductions and/or language contexts to obtain barbed congruence and may testing equivalence. We then focus on barbed congruence and provide an alternative characterisation in terms of a labelled bisimulation. To test practical usability of the semantic theory, we model a system of communicating mobile devices and use the introduced proof techniques to verify one of its key properties.

Journal ArticleDOI
TL;DR: This paper fully extends Winskel's approach to single-pushout grammars, providing them with a categorical concurrent semantics expressed as a coreflection between the category of (semi-weighted) graph grammars and the category of prime algebraic domains, which factorises through the category of occurrence grammars and the category of asymmetric event structures.
Abstract: Several attempts have been made to extend to graph grammars the unfolding semantics originally developed by Winskel for (safe) Petri nets, but only partial results were obtained. In this paper, we fully extend Winskel's approach to single-pushout grammars, providing them with a categorical concurrent semantics expressed as a coreflection between the category of (semi-weighted) graph grammars and the category of prime algebraic domains, which factorises through the category of occurrence grammars and the category of asymmetric event structures. For general, possibly non-semi-weighted single-pushout grammars, we define an analogous functorial concurrent semantics, which, however, is not characterised as an adjunction. Similar results can be obtained for double-pushout graph grammars, under the assumption that nodes are never deleted.

Journal ArticleDOI
TL;DR: The results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs among the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and the learner's choice of hypothesis space.
Abstract: U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we therefore consider the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs among the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and the learner's choice of hypothesis space.
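The past-tense example above can be pictured as a toy trace: the learner's output for the past tense of "go" passes through the three stages psychologists observe. This is an illustrative sketch invented for this summary, not a construction from the paper; the stage names and the `past_tense` function are made up for the example.

```python
# Toy illustration of a U-shaped conjecture sequence: the learner's
# output is first correct (a rote-memorised form), then wrong after it
# induces the regular "-ed" rule, then correct again once the
# exception is re-learned alongside the rule.

def past_tense(verb, stage):
    irregular = {"go": "went"}
    if stage == "rote":                         # memorised forms only
        return irregular.get(verb)
    if stage == "overregularised":              # blanket "-ed" rule
        return verb + "ed"
    return irregular.get(verb, verb + "ed")     # rule plus exceptions

trace = [past_tense("go", s) for s in ("rote", "overregularised", "final")]
# trace == ["went", "goed", "went"]: correct, incorrect, correct again
```

The non-monotonicity is visible in the trace itself: the correct answer is produced, abandoned, and then recovered.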

Journal ArticleDOI
TL;DR: This paper presents a fully typed λ-calculus based on the intersection-type discipline, a counterpart à la Church of the type assignment system invented by Coppo and Dezani.
Abstract: In this paper, we present a fully typed λ-calculus based on the intersection-type discipline, a counterpart à la Church of the type assignment system invented by Coppo and Dezani. The relationship between the typed calculus and the intersection type assignment system is the standard isomorphism between typed and type assignment systems, and so the typed language inherits from the untyped system all the good properties, such as subject reduction and strong normalization. Moreover, both type checking and type reconstruction are decidable.
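As an illustrative sketch (invented for this summary, not the paper's system), a Church-style checker for a tiny calculus with intersection types shows the hallmark of the discipline: self-application, untypable with simple types, becomes typable once the bound variable is annotated with an intersection such as (A→B) ∧ A.

```python
from dataclasses import dataclass

# Types: variables, arrows, and binary intersections.
@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

@dataclass(frozen=True)
class Inter:
    left: object
    right: object

def components(t):
    """All intersection components of a type (∧-elimination)."""
    if isinstance(t, Inter):
        return components(t.left) | components(t.right)
    return {t}

# Terms: ("var", x) | ("lam", x, annotation, body) | ("app", fun, arg)
def infer(term, env):
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, ann, body = term
        return Arrow(ann, infer(body, {**env, x: ann}))
    if tag == "app":
        _, fun, arg = term
        arg_parts = components(infer(arg, env))
        for f in components(infer(fun, env)):
            if isinstance(f, Arrow) and f.dom in arg_parts:
                return f.cod
        raise TypeError("no arrow component matches the argument")

A, B = TVar("A"), TVar("B")
# λx:(A→B)∧A. x x  —  typable here, with type ((A→B)∧A) → B
delta = ("lam", "x", Inter(Arrow(A, B), A),
         ("app", ("var", "x"), ("var", "x")))
infer(delta, {})
```

The application rule picks any arrow component of the function's type whose domain is a component of the argument's type, which is exactly what lets `x` be used both as a function and as its own argument.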

Journal ArticleDOI
TL;DR: This paper investigates the word problem for inverse monoids generated by a set Γ subject to relations of the form e=f, where e and f are both idempotents in the free inverse monoid generated by Γ, and shows that for every fixed monoid of this form the word problem can be solved both in linear time on a RAM and in deterministic logarithmic space.
Abstract: This paper investigates the word problem for inverse monoids generated by a set Γ subject to relations of the form e=f, where e and f are both idempotents in the free inverse monoid generated by Γ. It is shown that for every fixed monoid of this form the word problem can be solved in linear time on a RAM as well as in deterministic logarithmic space, which solves an open problem of Margolis and Meakin. For the uniform word problem, where the presentation is part of the input, EXPTIME-completeness is shown. For the Cayley graphs of these monoids, it is shown that the first-order theory with regular path predicates is decidable. Regular path predicates allow one to state that there is a path from a node x to a node y that is labeled with a word from some regular language. As a corollary, the decidability of the generalized word problem is deduced.

Journal ArticleDOI
TL;DR: Algorithms are obtained that are almost optimal in both the worst and the average case simultaneously, with complexities very close to the best current results for the case where only rotations, but not lighting invariance, are supported.
Abstract: We address the problem of searching for a two-dimensional pattern in a two-dimensional text (or image), such that the pattern can be found even if it appears rotated and is brighter or darker than its occurrence. Furthermore, we consider approximate matching under several tolerance models. We obtain algorithms that are almost optimal in both the worst and the average case simultaneously. The complexities we obtain are very close to the best current results for the case where only rotations, but not lighting invariance, are supported. These are the first results for this problem under a combinatorial approach.
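A minimal sketch of the lighting-invariance part of the problem (brute force, with rotations and tolerance models omitted; the function name and image representation are invented here): an occurrence is a window of the text that differs from the pattern by a single constant brightness offset.

```python
def lighting_invariant_occurrences(text, pattern):
    """Brute-force 2D search: the pattern occurs at (i, j) if the text
    window at (i, j) equals the pattern shifted by one constant
    brightness offset c.  Images are lists of rows of grey levels."""
    n, m = len(text), len(text[0])
    p, q = len(pattern), len(pattern[0])
    hits = []
    for i in range(n - p + 1):
        for j in range(m - q + 1):
            c = text[i][j] - pattern[0][0]   # candidate offset
            if all(text[i + r][j + s] - pattern[r][s] == c
                   for r in range(p) for s in range(q)):
                hits.append((i, j, c))
    return hits

image = [[1, 2, 3],
         [4, 9, 6],
         [7, 8, 9]]
lighting_invariant_occurrences(image, [[0, 1], [3, 8]])
# [(0, 0, 1)]: the top-left 2×2 window is the pattern brightened by 1
```

This naive scan costs O(nm · pq) per rotation angle; the point of the paper is to get close to the optimal bounds while supporting the offset c, rotations, and approximate matching at once.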

Journal ArticleDOI
TL;DR: In this article, the authors consider metric extensions of qualitative TLs of the real line that are at most PSpace-complete, and analyze the transition from NP to PSpace for such logics.
Abstract: In many cases, the addition of metric operators to qualitative temporal logics (TLs) increases the complexity of satisfiability by at least one exponential: while common qualitative TLs are complete for NP or PSpace, their metric extensions are often ExpSpace-complete or even undecidable. In this paper, we exhibit several metric extensions of qualitative TLs of the real line that are at most PSpace-complete, and analyze the transition from NP to PSpace for such logics. Our first result is that the logic obtained by extending since-until logic of the real line with the operators 'sometime within n time units in the past/future' is still PSpace-complete. In contrast to existing results, we also capture the case where n is coded in binary and the finite variability assumption is not made. To establish containment in PSpace, we use a novel reduction technique that can also be used to prove tight upper complexity bounds for many other metric TLs in which the numerical parameters to metric operators are coded in binary. We then consider metric TLs of the reals that do not offer any qualitative temporal operators. In such languages, the complexity turns out to depend on whether binary or unary coding of parameters is assumed: satisfiability is still PSpace-complete under binary coding, but only NP-complete under unary coding.
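The operator 'sometime within n time units in the future' can be illustrated on a finite timed trace, a simplified discrete stand-in for the paper's real-line models (the function and trace format are invented for the example).

```python
def eventually_within(trace, i, prop, n):
    """Metric 'sometime within n time units in the future': does prop
    hold at some sampled point at most n time units after position i?
    trace is a list of (timestamp, set-of-propositions) pairs in
    increasing time order."""
    t0 = trace[i][0]
    return any(prop in props for t, props in trace[i:] if t - t0 <= n)

trace = [(0.0, {"p"}), (1.5, set()), (2.0, {"q"}), (5.0, {"q"})]
eventually_within(trace, 0, "q", 3)     # True: q holds at time 2.0
eventually_within(trace, 1, "q", 0.4)   # False: the nearest q is 0.5 away
```

Note how the bound n enters only through a numeric comparison; the complexity question studied in the paper is precisely how expensive such bounds become when n is written in binary rather than unary.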