
Showing papers in "Fundamenta Informaticae in 1996"


Journal ArticleDOI
TL;DR: In tolerance approximation spaces the lower and upper set approximations are defined, and the tolerance relation defined by the so-called uncertainty function, or the positive region of a given partition of objects, is chosen as the invariant in the attribute reduction process.
Abstract: We generalize the notion of an approximation space introduced in [8]. In tolerance approximation spaces we define the lower and upper set approximations. We investigate some attribute reduction problems for tolerance approximation spaces determined by tolerance information systems. The tolerance relation defined by the so-called uncertainty function or the positive region of a given partition of objects have been chosen as invariants in the attribute reduction process. We obtain the solutions of the reduction problems by applying Boolean reasoning [1]. The solutions are represented by tolerance reducts and relative tolerance reducts.
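The lower and upper approximations in a tolerance approximation space admit a compact illustration. Below is a minimal sketch assuming the standard definitions, in which an uncertainty function I(x) returns the tolerance class of x; the universe, tolerance threshold, and target set are invented for the example and are not taken from the paper.

```python
# Minimal sketch (standard definitions, example data invented): lower and
# upper approximations of a set X in a tolerance approximation space, where
# the uncertainty function I(x) returns the tolerance class of object x.

def lower_approximation(universe, I, X):
    """Objects whose whole tolerance class is contained in X."""
    return {x for x in universe if I(x) <= X}

def upper_approximation(universe, I, X):
    """Objects whose tolerance class intersects X."""
    return {x for x in universe if I(x) & X}

U = {1, 2, 3, 4, 5}
I = lambda x: {y for y in U if abs(x - y) <= 1}   # tolerance: differ by at most 1
X = {2, 3}
print(lower_approximation(U, I, X))   # set()        -> no class fits inside X
print(upper_approximation(U, I, X))   # {1, 2, 3, 4} -> classes touching X
```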

955 citations


Journal ArticleDOI
TL;DR: The concept of application conditions introduced by Ehrig and Habel is restricted to contextual conditions, especially negative ones, and local confluence and the Parallelism Theorem for derivations with application conditions are stated.
Abstract: In each graph-grammar approach it is defined how and under which conditions graph productions can be applied to a given graph in order to obtain a derived graph. The conditions under which productions can be applied are called application conditions. Although the generative power of most of the known general graph-grammar approaches is sufficient to generate any recursively enumerable set of graphs, it is often convenient to have specific application conditions for each production. Such application conditions, on the one hand, include context conditions like the existence or non-existence of nodes, edges, or certain subgraphs in the given graph as well as embedding restrictions concerning the morphisms from the left-hand side of the production to the given graph. In this paper, the concept of application conditions introduced by Ehrig and Habel is restricted to contextual conditions, especially negative ones. In addition to the general concept, we state local confluence and the Parallelism Theorem for derivations with application conditions. Finally we study context-free graph grammars with application conditions with respect to their generative power.
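As a rough illustration of a negative application condition, the sketch below applies a production to a graph only when a forbidden edge is absent; the dictionary-of-edges encoding and the specific rule are assumptions made for this listing, not the paper's categorical formalism.

```python
# Illustrative sketch (toy encoding, not the double-pushout formalism):
# a production fires only if its negative application condition holds,
# i.e. a forbidden piece of context is absent from the host graph.

def nac_holds(edges, forbidden_edge):
    """Negative condition: the forbidden edge must not occur in the graph."""
    return forbidden_edge not in edges

def apply_production(edges, delete_edge, add_edge, forbidden_edge):
    """Replace delete_edge by add_edge, but only when the NAC is satisfied."""
    if delete_edge not in edges or not nac_holds(edges, forbidden_edge):
        return None   # production not applicable
    return (set(edges) - {delete_edge}) | {add_edge}

G = {("a", "b"), ("b", "c")}
# Applicable: the forbidden edge ("a", "c") is not yet present.
print(apply_production(G, ("a", "b"), ("a", "c"), forbidden_edge=("a", "c")))
# Blocked: once ("a", "c") exists, the same production can no longer fire.
print(apply_production(G | {("a", "c")}, ("a", "b"), ("a", "c"), ("a", "c")))
```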

348 citations


Journal Article
TL;DR: A process-based semantics for graph grammars is developed, which represents each equivalence class of derivations as a graph process, which can be seen as an acyclic graph grammar, plus a mapping of its items onto the items of the given grammar.
Abstract: We first give a new definition of graph grammars, which, although following the algebraic double-pushout approach, is more general than the classical one because of the use of a graph of types where all involved graphs are mapped to. Then, we develop a process-based semantics for such (typed) graph grammars, in the line of processes as normally used for providing a semantics to Petri nets. More precisely, we represent each equivalence class of derivations as a graph process, which can be seen as an acyclic graph grammar, plus a mapping of its items onto the items of the given grammar. Finally, we show that such processes represent exactly the equivalence classes of derivations up to the well-known shift-equivalence, which have always been used in the literature on graph grammars. Therefore graph processes are attractive alternative representatives for such classes. The advantage of dealing with graph processes instead of equivalence classes (or also representatives belonging to the equivalence classes) is that dependency and/or concurrency of derivation steps is explicitly represented.

229 citations


Journal ArticleDOI
TL;DR: This paper is an extension of the paper presented at the international workshop on Rough Sets and Knowledge Discovery (RSKD'93) and provides a synthesis of the two important data modeling techniques: conceptual scaling of Formal Concept Analysis, and Entity-Relationship database modeling.
Abstract: The theory introduced, presented and developed in this paper, is concerned with Rough Concept Analysis. This theory is a synthesis of the theory of Rough Sets pioneered by Zdzislaw Pawlak [10] with the theory of Formal Concept Analysis pioneered by Rudolf Wille [11]. The central notion in this paper of a rough formal concept combines in a natural fashion the two notions of rough set and formal concept — to use a slogan: “rough set + formal concept = rough formal concept”. This paper is an extension of the paper [5] presented at the international workshop on Rough Sets and Knowledge Discovery (RSKD'93). A related paper [8] using distributed constraints provides a synthesis of the two important data modeling techniques: conceptual scaling of Formal Concept Analysis, and Entity-Relationship database modeling. A follow-up paper [9] will extend rough concept analysis from formal contexts to distributed constraints.

137 citations


Journal ArticleDOI
TL;DR: It is observed that rough algebra is more structured than a tqBa, and two more structures, viz. pre-rough algebra and rough algebra, are defined.
Abstract: While studying rough equality within the framework of the modal system S5, an algebraic structure called rough algebra [1] came up. Its features were abstracted to yield a topological quasi-Boolean algebra (tqBa). In this paper, it is observed that rough algebra is more structured than a tqBa. Thus, enriching the tqBa with additional axioms, two more structures, viz. pre-rough algebra and rough algebra, are defined. Representation theorems of these algebras are also obtained. Further, the corresponding logical systems L1 and L2 are proposed and eventually, L2 is proved to be sound and complete with respect to a rough set semantics.

134 citations


Journal ArticleDOI
TL;DR: The definition of rough function for the domain of real numbers is introduced and its properties are investigated in detail including the generalization of the standard notion of function continuity known in the theory of real functions.
Abstract: The paper explores the concepts of approximate relations and functions in the framework of the theory of rough sets. The difficulties with the application of the idea of rough relation to general rough function definition are discussed. The definition of rough function for the domain of real numbers is introduced and its properties are investigated in detail including the generalization of the standard notion of function continuity known in the theory of real functions.

132 citations


Journal ArticleDOI
TL;DR: The generalization of the rough set approach consists in handling multiple descriptors for both condition and decision attributes; the first three types of uncertainty are modelled using fuzzy sets, which reduces them to the fourth type, called, for short, multiple descriptors.
Abstract: Rough set theory refers to classification of objects described by well-defined values of qualitative and quantitative attributes. The values of attributes defined for each pair [object, attribute], called descriptors, are assumed to be unique and precise. In practice, however, these attribute values may be neither unique nor precise, i.e. they can be uncertain. We distinguish four types of uncertainty affecting values of attributes: uncertain discretization of quantitative attributes, imprecision of values of numerical attributes, unknown (missing) values of attributes, and multiple values possible for one pair [object, attribute]. We propose a special way of modelling the first three types of uncertainty using fuzzy sets, which boils them down to the fourth type, called, for short, multiple descriptors. Thus, the generalization of the rough set approach consists in handling the case of multiple descriptors for both condition and decision attributes. The generalization preserves all characteristic features of the rough set approach while enabling reasoning about uncertain data. This capacity is illustrated by a simple example.
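To make the reduction to multiple descriptors concrete, the sketch below maps an imprecise numeric reading to every discretization interval it overlaps; the interval labels, tolerance parameter, and crisp overlap test are simplifying assumptions for illustration, not the paper's fuzzy-set construction.

```python
# Illustrative sketch (assumed crisp simplification of the fuzzy modelling):
# an imprecise value is replaced by the set of all discretization intervals
# it overlaps, i.e. by multiple descriptors.

INTERVALS = {"low": (0, 10), "medium": (8, 20), "high": (18, 30)}

def descriptors(value, tolerance):
    """Labels of all intervals overlapping the imprecise value [v - tol, v + tol]."""
    lo, hi = value - tolerance, value + tolerance
    return {label for label, (a, b) in INTERVALS.items() if lo <= b and a <= hi}

print(descriptors(9.0, 0.5))    # {'low', 'medium'} -> multiple descriptors
print(descriptors(25.0, 0.5))   # {'high'}          -> a single descriptor
```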

95 citations


Journal ArticleDOI
TL;DR: In this paper, an equational framework for term graph rewriting with cycles is presented, where the usual notion of homomorphism is phrased in terms of the notion of bisimulation, which is well-known in process algebra and concurrency theory.
Abstract: We present an equational framework for term graph rewriting with cycles. The usual notion of homomorphism is phrased in terms of the notion of bisimulation, which is well-known in process algebra and concurrency theory. Specifically, a homomorphism is a functional bisimulation. We prove that the bisimilarity class of a term graph, partially ordered by functional bisimulation, is a complete lattice. It is shown how Equational Logic induces a notion of copying and substitution on term graphs, or systems of recursion equations, and also suggests the introduction of hidden or nameless nodes in a term graph. Hidden nodes can be used only once. The general framework of term graphs with copying is compared with the more restricted copying facilities embodied in the μ-rule. Next, orthogonal term graph rewrite systems, also in the presence of copying and hidden nodes, are shown to be confluent.

94 citations


Journal ArticleDOI
TL;DR: A generalization of the original idea of rough sets as introduced by Pawlak, called the Variable Precision Rough Sets Model with Asymmetric Bounds, is aimed at modeling decision situations characterized by uncertain information expressed in terms of probability distributions estimated from frequency distributions observed in empirical data.
Abstract: We present a generalization of the original idea of rough sets as introduced by Pawlak. The generalization, called the Variable Precision Rough Sets Model with Asymmetric Bounds, is aimed at modeling decision situations characterized by uncertain information expressed in terms of probability distributions estimated from frequency distributions observed in empirical data. The model presented is a direct extension of the previous concept, the Variable Precision Rough Sets Model. The properties of the extended model are investigated and compared to the original model. Also, a real-life problem of identifying the factors which most affect the likelihoods of specified events in the steel industry is discussed in the context of this theory.
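A minimal sketch of the asymmetric-bounds idea, assuming the usual variable precision definitions in which an equivalence class enters the lower approximation when its relative overlap with the target set reaches an upper threshold u, and enters the upper approximation when the overlap exceeds a lower threshold l (0 <= l < u <= 1); the partition and thresholds below are invented.

```python
# Illustrative sketch (standard variable-precision definitions, example data
# invented): lower/upper approximations with asymmetric bounds l < u.

def overlap(E, X):
    """Relative overlap |E ∩ X| / |E| of an equivalence class E with X."""
    return len(E & X) / len(E)

def vprs_approximations(classes, X, l, u):
    assert 0 <= l < u <= 1
    lower = set().union(*[E for E in classes if overlap(E, X) >= u])
    upper = set().union(*[E for E in classes if overlap(E, X) > l])
    return lower, upper

classes = [frozenset({1, 2, 3, 4}), frozenset({5, 6}), frozenset({7, 8, 9, 10})]
X = {1, 2, 3, 5, 7}
print(vprs_approximations(classes, X, l=0.3, u=0.7))
# ({1, 2, 3, 4}, {1, 2, 3, 4, 5, 6})
```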

93 citations


Journal ArticleDOI
TL;DR: This paper shows that many classical results can be transferred to order-based GAs; the analysis includes the Schema Theorem and Markov chain modelling of order-based GAs.
Abstract: A lot of research on genetic algorithms theory is concentrated on the classical, binary case. However, there are many other types of useful genetic algorithms (GA), e.g. tree-based (genetic programming) or order-based ones. This paper shows that many classical results can be transferred to order-based GAs. The analysis includes the Schema Theorem and Markov chain modelling of order-based GAs.
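For readers unfamiliar with order-based GAs, the sketch below shows order crossover (OX), a typical recombination operator on permutation chromosomes; it is a generic example chosen for this listing, not an operator or experiment taken from the paper.

```python
import random

# Illustrative sketch (generic order-based operator, not from the paper):
# order crossover (OX) copies a slice from one parent and fills the rest
# with the other parent's genes in their relative order, so the child is
# always a valid permutation.

def order_crossover(p1, p2, rng=random):
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]                    # inherited slice from p1
    remaining = [g for g in p2 if g not in child]   # p2's order for the rest
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

random.seed(0)
print(order_crossover([0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0]))
```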

83 citations


Journal ArticleDOI
Yiyu Yao, Xining Li
TL;DR: It is argued that the rough-set model corresponds to the modal logic system S5 and the interval-set model to Kleene's three-valued logic K3, and that the two models extend set theory in the same manner as S5 and K3 extend standard propositional logic.
Abstract: In the rough-set model, a set is represented by a pair of ordinary sets called the lower and upper approximations. In the interval-set model, a pair of sets is referred to as the lower and upper bounds which define a family of sets. A significant difference between these models lies in the definition and interpretation of their extended set-theoretic operators. The operators in the rough-set model are not truth-functional, while the operators in the interval-set model are truth-functional. Within the framework of possible-worlds analysis, we show that the rough-set model corresponds to the modal logic system S5, while the interval-set model corresponds to Kleene's three-valued logic system K3. It is argued that these two models extend set theory in the same manner as the logic systems S5 and K3 extend standard propositional logic. Their relationships to probabilistic reasoning are also examined.
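The truth-functionality of interval-set operators can be illustrated directly: an interval set is a pair (lower bound, upper bound) and its operations are computed componentwise. The encoding and example sets below are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch (assumed encoding): an interval set is a pair
# (lower, upper) with lower ⊆ upper, denoting all sets between the bounds.
# Its operators are truth-functional: they act componentwise on the bounds.

def interval_intersection(A, B):
    (Al, Au), (Bl, Bu) = A, B
    return (Al & Bl, Au & Bu)

def interval_union(A, B):
    (Al, Au), (Bl, Bu) = A, B
    return (Al | Bl, Au | Bu)

def interval_complement(A, universe):
    Al, Au = A
    return (universe - Au, universe - Al)

U = {1, 2, 3, 4, 5}
A = ({1}, {1, 2, 3})        # any set between {1} and {1, 2, 3}
B = ({2, 3}, {2, 3, 4})     # any set between {2, 3} and {2, 3, 4}
print(interval_intersection(A, B))   # (set(), {2, 3})
print(interval_complement(A, U))     # ({4, 5}, {2, 3, 4, 5})
```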

Journal ArticleDOI
TL;DR: The weak versions of Apt/Blair/Walker's stratified semantics M_P^supp and of Van Gelder/Ross/Schlipf's well-founded semantics WFS are investigated and it is shown that credulous entailment for both semantics is NP-complete (consequently, sceptical entailment is co-NP-complete).
Abstract: It is well known that Minker's semantics GCWA for positive disjunctive programs P, i.e. to decide if a literal is true in all minimal models of P, is Π_2^P-complete. This is in contrast to the same entailment problem for semantics of non-disjunctive programs such as STABLE and SUPPORTED (both are co-NP-complete) as well as M_P^supp and WFS (which are even polynomial). Recently, the idea of reducing disjunctive to non-disjunctive programs by using so-called shift-operations was introduced independently by Bonatti, Dix/Gottlob/Marek, and Schaerf. In fact, Schaerf associated to each semantics SEM for normal programs a corresponding semantics Weak-SEM for disjunctive programs and asked for the properties of these weak semantics, in particular for the complexity of their entailment relations. While Schaerf concentrated on Weak-STABLE and Weak-SUPPORTED, we investigate the weak versions of Apt/Blair/Walker's stratified semantics M_P^supp and of Van Gelder/Ross/Schlipf's well-founded semantics WFS. We show that credulous entailment for both semantics is NP-complete (consequently, sceptical entailment is co-NP-complete). Thus, unlike GCWA, the complexity of these semantics belongs to the first level of the polynomial hierarchy. Note that, unlike Weak-WFS, the semantics Weak-M_P^supp is not always defined: testing consistency of Weak-M_P^supp is also NP-complete. We also show that Weak-WFS and Weak-M_P^supp are cumulative and rational and that, in addition, Weak-WFS satisfies some of the well-behaved principles introduced by Dix.

Journal ArticleDOI
TL;DR: This paper investigates a Rough Sets System as a finite semi-simple Nelson algebra whose structure is inherently described using the properties of the underlying Approximation Space, and reveals the weak Nelson negation to be a dual pseudocomplementation in the lattice of Rough Sets.
Abstract: Any Rough Sets System induced by an Approximation Space can be given several logic-algebraic interpretations. In this paper a Rough Sets System is investigated as a finite semi-simple Nelson algebra whose structure is inherently described using the properties of the underlying Approximation Space. Moreover, some of the most characterizing features of Rough Sets Systems are derived from this interpretation in logic-algebraic terms. In particular, the logic-algebraic structure given to the Rough Sets System, qua Nelson algebra, is equipped with a weak negation and a strong negation and, since it is a finite distributive lattice, it can also be regarded as a Heyting algebra equipped with its own pseudocomplementation. Moreover, the weak Nelson negation turns out to be a dual pseudocomplementation in the lattice of Rough Sets. In this way we are able, for instance, to recover the well-known fact that Rough Sets Systems are double Stone algebras, and to exploit both their properties and the general properties of Nelson algebras in order to analyse the notions of "definable set" and "rough top (bottom) equality" in Approximation Spaces.

Journal ArticleDOI
TL;DR: It is shown that crowns are intractable, and a feasible poset is defined to be one whose potential obstacles to satisfiability are representable by a certain formula of first-order logic extended by the least fixpoint operator.
Abstract: We consider tractable and intractable cases of the satisfiability problem for conjunctions of inequalities between variables and constants in a fixed finite poset. We show that crowns are intractable. We study members and closure properties of the class of tractable posets. We define a feasible poset to be one whose potential obstacles to satisfiability are representable by a certain formula of first-order logic extended by the least fixpoint operator. For bipartite posets we give a complete classification of hard posets and feasible ones.

Journal ArticleDOI
TL;DR: It is shown that some natural families of two-dimensional languages (finite languages, regular languages, locally testable languages) are recognizable and that the emptiness problem is undecidable for this family of languages.
Abstract: The purpose of this paper is to investigate a new notion of finite state recognizability for two-dimensional (picture) languages. This notion takes as starting point the characterization of one-dimensional recognizable languages in terms of local languages and projections. Such a notion can be extended in a natural way to the two-dimensional case. We first introduce a notion of local picture language and then we define a recognizable picture language as a projection of a local picture language. The family of recognizable picture languages is denoted by REC. We study some combinatorial and language-theoretic properties of the family REC. In particular we prove some closure properties with respect to different kinds of operations. From this, we derive that some natural families of two-dimensional languages (finite languages, regular languages, locally testable languages) are recognizable. Further, we give some necessary conditions for recognizability which provide tools to show that certain languages are not recognizable. Although REC shares several properties of recognizable string languages, differently from the case of words we prove here that REC is not closed under complementation and that the emptiness problem is undecidable for this family of languages. Finally, we report some characterizations of the family REC by means of machine-based models and logic-based formalisms.
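The local-language-plus-projection mechanism can be sketched with a toy tiling check: a picture belongs to a local language when every 2x2 sub-block of its '#'-bordered version lies in an allowed tile set, and a recognizable language is a projection of such a language. The encoding below (lists of strings, a sample-derived tile set) is an assumption for illustration only.

```python
# Illustrative sketch (toy encoding, example data invented): membership in a
# local picture language via 2x2 tiles of the '#'-bordered picture, plus a
# symbol projection giving a recognizable picture language.

def tiles(picture):
    """All 2x2 sub-blocks of the picture once it is surrounded by '#'."""
    width = len(picture[0])
    rows = ["#" * (width + 2)] + ["#" + r + "#" for r in picture] + ["#" * (width + 2)]
    return {(rows[i][j:j + 2], rows[i + 1][j:j + 2])
            for i in range(len(rows) - 1) for j in range(width + 1)}

def in_local_language(picture, allowed_tiles):
    return tiles(picture) <= allowed_tiles

def project(picture, pi):
    """Apply a symbol-to-symbol projection pi to every cell of the picture."""
    return ["".join(pi[c] for c in row) for row in picture]

allowed = tiles(["ab", "ab"])                      # tiles read off a sample picture
print(in_local_language(["ab"] * 3, allowed))      # True: same column pattern
print(in_local_language(["ba", "ab"], allowed))    # False: introduces new tiles
print(project(["ab", "ab"], {"a": "x", "b": "x"})) # ['xx', 'xx']
```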

Journal ArticleDOI
TL;DR: A new formal logic system based on these axioms, called First-Order Logic for Rough Approximation or simply Rough Logic, is proposed; it integrates imperfect observations into an approximation of the actual world.
Abstract: Earlier the authors have shown that rough sets can be characterized by six topological properties. In this paper, a new formal logic system based on such axioms is proposed. It will be called First-Order Logic for Rough Approximation or simply Rough Logic. The axiom schemas of rough logic turn out to be the same as those of the modal logic S5. In other words, topological and modal logic considerations led to the same conclusion, so rough logic must have captured the intrinsic meaning of approximate reasoning. However, their interpretations are different. To reflect the differences in semantics, possible worlds are renamed as observable worlds. Each observable world represents a different rough observation of the actual world. Rough logic also provides a framework for approximation. It integrates imperfect observations (observable worlds) into an approximation of the actual world. Any good approximation theory should have a convergence theorem; its details are deferred to the next paper. A sample theorem is as follows: if there is a convergent sequence of rough observations (namely, equivalence relations), then the corresponding rough models converge to the Tarskian model of first-order classical logic.

Journal ArticleDOI
TL;DR: A new logic based framework for the formal treatment of graph rewriting systems as special cases of programmed rewriting systems for arbitrary relational structures that surpasses almost all variants of nonparallel algebraic as well as algorithmic graph grammar approaches by offering set-oriented pattern matching facilities and nonmonotonic reasoning capabilities for checking pre- and postconditions of rewrite rules.
Abstract: This paper presents a new logic based framework for the formal treatment of graph rewriting systems as special cases of programmed rewriting systems for arbitrary relational structures. Considering its expressive power, the new formalism surpasses almost all variants of nonparallel algebraic as well as algorithmic graph grammar approaches by offering set-oriented pattern matching facilities as well as nonmonotonic reasoning capabilities for checking pre- and postconditions of rewrite rules. Furthermore, the formalism closes the gap between the operation-oriented manipulation of data structures by means of rewrite rules and the declaration-oriented description of data structures by means of logic based languages. Finally, the formalism even offers recursively defined (sub-)programs, by means of which the application of rewrite rules may be regulated. A denotational semantics definition for these (sub-)programs relies on a special variant of fixpoint theory for noncontinuous but monotonic functions.

Journal ArticleDOI
TL;DR: The main objective of machine discovery is the determination of relations between data and of data models; the paper describes a method for discovering data models, represented by concurrent systems, from experimental tables.
Abstract: The main objective of machine discovery is the determination of relations between data and of data models. In the paper we describe a method for discovery of data models represented by concurrent systems from experimental tables. The basic step consists in a determination of rules which yield a decomposition of experimental data tables; the components are then used to define fragments of the global system corresponding to a table. The method has been applied to automatic discovery of data models from experimental tables, with Petri nets as models for concurrency.

Journal ArticleDOI
TL;DR: A set-theoretical approach to finding reducts of composed information systems is presented, and it is shown how the search space can be represented in the form of a pair of boundaries.
Abstract: A set-theoretical approach to finding reducts of composed information systems is presented. It is shown how the search space can be represented in the form of a pair of boundaries. It is also shown how reducts of the composing information systems can be used to reduce the search space of the composed system. The solutions presented follow directly from the properties of composed monotonic Boolean functions.

Journal ArticleDOI
TL;DR: A notion of type assignment on Curryfied Term Rewriting Systems is introduced that uses Intersection Types of Rank 2, and in which all function symbols are assumed to have a type.
Abstract: A notion of type assignment on Curryfied Term Rewriting Systems is introduced that uses Intersection Types of Rank 2, and in which all function symbols are assumed to have a type. Type assignment will consist of specifying derivation rules that describe how types can be assigned to terms, using the types of function symbols. Using a modified unification procedure, for each term the principal pair (of basis and type) will be defined in the following sense: from these all admissible pairs can be generated by chains of operations on pairs, consisting of the operations substitution, copying, and weakening. In general, given an arbitrary typeable GTRS, the subject reduction property does not hold. Using the principal type for the left-hand side of a rewrite rule, a sufficient and decidable condition will be formulated that typeable rewrite rules should satisfy in order to obtain this property.

Journal ArticleDOI
TL;DR: It turns out that a small number of generic goals and a small number of data structures, when combined recursively, can lead to complex discovery processes and to the discovery of complex theories.
Abstract: We define the problem of empirical search for knowledge by interaction with a setup experiment, and we present a solution implemented in the FAHRENHEIT discovery system. FAHRENHEIT autonomously explores multi-dimensional empirical spaces of numerical parameters, making experiments, generalizing them into empirical equations, finding the scope of applications for each equation, and setting new discovery goals, until it reaches the empirically complete theory. It turns out that a small number of generic goals and a small number of data structures, when combined recursively, can lead to complex discovery processes and to the discovery of complex theories. We present FAHRENHEIT's knowledge representation and the ways in which the discovery mechanism interacts with the emerging knowledge. Brief descriptions of several real-world applications demonstrate the system's discovery potential.

Journal ArticleDOI
TL;DR: This note introduces subclasses of even linear languages for which there exist inference algorithms using positive samples only.
Abstract: This note introduces subclasses of even linear languages for which there exist inference algorithms using positive samples only.

Journal ArticleDOI
TL;DR: A generalization of Ackermann's Lemma is capitalized on in order to deal with a subclass of universal formulas called semi-Horn formulas, and a fixpoint calculus is presented which is shown to be sound and complete for bounded fixpoint formulas.
Abstract: Circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic and commonsense reasoning, but difficult to apply in practice due to the use of second-order formulas. One proposal for dealing with the computational problems is to identify classes of first-order formulas whose circumscription can be shown to be equivalent to a first-order formula. In previous work, we presented an algorithm which reduces certain classes of second-order circumscription axioms to logically equivalent first-order formulas. The basis for the algorithm is an elimination lemma due to Ackermann. In this paper, we capitalize on the use of a generalization of Ackermann's Lemma in order to deal with a subclass of universal formulas called semi-Horn formulas. Our results subsume previous results by Kolaitis and Papadimitriou regarding a characterization of circumscribed definite logic programs which are first-order expressible. The method for distinguishing which formulas are reducible is based on a boundedness criterion. The approach we use is to first reduce a circumscribed semi-Horn formula to a fixpoint formula which is reducible if the formula is bounded, otherwise not. In addition to a number of other extensions, we also present a fixpoint calculus which is shown to be sound and complete for bounded fixpoint formulas.

Journal ArticleDOI
TL;DR: The task of information retrieval in response to a query that refers to several levels of a power set hierarchy built from the set of keywords is defined and retrieval strategies based on rough sets are proposed and discussed.
Abstract: The purpose of this paper is to define and investigate document retrieval strategies based on rough sets. A subject classification index is modelled by means of a family of set-theoretic information systems. The family induces a power set hierarchy built from the set of keywords. In the paper, the task of information retrieval in response to a query that refers to several levels of that hierarchy is defined. Retrieval strategies are proposed and discussed. The strategies are, roughly speaking, based on various types of similarity between sets of keywords included in a query and in a document. It is not necessarily required that the document matches the query; we allow for approximate matching. Measures of relevance of documents to a query are proposed for the given strategies.
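The graded matching idea behind the retrieval strategies can be sketched as a simple comparison of keyword sets, from exact match down to partial overlap; the grading below is an illustrative simplification invented for this listing, not the rough-set-based strategies defined in the paper.

```python
# Illustrative sketch (simplified grading, not the paper's strategies): grade
# how well a document's keyword set matches a query's keyword set, allowing
# approximate matches when only part of the query is covered.

def relevance(query_keywords, doc_keywords):
    q, d = set(query_keywords), set(doc_keywords)
    if q == d:
        return "exact match"
    if q <= d:
        return "document covers the whole query"
    if q & d:
        return f"approximate match: {len(q & d)}/{len(q)} query keywords found"
    return "no match"

print(relevance({"rough sets", "retrieval"}, {"rough sets", "retrieval", "indexing"}))
print(relevance({"rough sets", "retrieval"}, {"rough sets", "fuzzy sets"}))
# -> document covers the whole query
# -> approximate match: 1/2 query keywords found
```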

Journal ArticleDOI
TL;DR: This paper shows how the Autoepistemic Logic of Beliefs, AEB, obtained by augmenting classical propositional logic with a belief operator, β, can be revised to prevent belief-originated inconsistencies, and also to introduce declarative language-level control over the revision level of beliefs.
Abstract: In order to be able to explicitly reason about beliefs, we've introduced a non-monotonic formalism, called the Autoepistemic Logic of Beliefs, AEB, obtained by augmenting classical propositional logic with a belief operator, β. For this language we've defined the static autoepistemic expansions semantics. The resulting nonmonotonic knowledge representation framework turned out to be rather simple and yet quite powerful. Moreover, it has some very natural properties which sharply contrast with those of Moore's AEL. While static expansions seem to provide a natural and intuitive semantics for many belief theories, and, in particular, for all affirmative belief theories (which include the class of all normal and disjunctive logic programs), they often can lead to inconsistent expansions for theories in which (subjective) beliefs clash with the known (objective) information or with some other beliefs. In particular, this applies to belief theories (and to logic programs) with strong or explicit negation. In this paper we generalize AEB to avoid the acceptance of inconsistency-provoking beliefs. We show how such AEB theories can be revised to prevent belief-originated inconsistencies, and also to introduce declarative language-level control over the revision level of beliefs, and apply it to the domains of diagnosis and declarative debugging. The generality of our AEB framework allows it to capture and justify the methods that have been deployed to solve similar revision problems within the logic programming paradigm.

Journal ArticleDOI
TL;DR: This paper presents a formal description of rough sets within the framework of the generalized set theory, which is interpreted in the set approximation theory; the rough sets are interpreted as approximations defined by means of Pawlak's rough sets.
Abstract: In the paper we present a formal description of rough sets within the framework of the generalized set theory, which is interpreted in the set approximation theory. The rough sets are interpreted as approximations, which are defined by means of Pawlak's rough sets.

Journal ArticleDOI
TL;DR: The aim of the method is to provide tools for data filtering when there is no directly available geometric structure in the data set.
Abstract: We propose a method called analytical morphology for data filtering. The method was created on the basis of some ideas of rough set theory and mathematical morphology. Mathematical morphology makes an essential use of geometric structure of objects while the aim of our method is to provide tools for data filtering when there is no directly available geometric structure in the data set.

Journal ArticleDOI
TL;DR: This work considers contextual grammars with parallel derivations, in which the whole current string participates in a derivation step in the sense that it is split into substrings to which contexts are adjoined in a parallel manner.
Abstract: Continuing the work begun in [14], we consider contextual grammars (as introduced in [6] with linguistic motivation) with parallel derivations, in which the whole current string participates in a derivation step in the sense that it is split into substrings to which contexts are adjoined in a parallel manner. The generative power of such grammars is investigated, when the parallelism is total or partial, and when the selection of contexts is limited to strings in sets of a given type (finite, regular, etc.). Then we consider the languages consisting of strings which cannot be further derived (we call them blocking languages). Some open problems are also formulated.
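A rough sketch of one total parallel derivation step, under the simplifying assumptions that selectors are finite sets of strings and that every factor of the split must lie in some selector whose context is then adjoined around it; the grammar below is invented and the reading of the definitions is this listing's, not the paper's exact formalization.

```python
# Illustrative sketch (assumed finite selectors, invented grammar): all results
# of one total parallel derivation step. The current string is split into
# factors, each factor must belong to some selector, and that selector's
# context (u, v) is adjoined around the factor.

def parallel_steps(word, rules):
    """rules: list of (selector_set, (u, v)) pairs; yields derived strings."""
    if word == "":
        yield ""
        return
    for selector, (u, v) in rules:
        for k in range(1, len(word) + 1):
            head, tail = word[:k], word[k:]
            if head in selector:
                for rest in parallel_steps(tail, rules):
                    yield u + head + v + rest

rules = [({"a"}, ("a", "b")),    # around factor "a" adjoin ("a", "b")
         ({"ab"}, ("", "c"))]    # around factor "ab" adjoin ("", "c")
print(sorted(set(parallel_steps("aab", rules))))   # ['aababc']
```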

Journal ArticleDOI
TL;DR: A new family of languages named synchronization languages is introduced which is used to give a precise semantic description for SEs, and it is conjectured that closure under the given rewriting rules is a sufficient condition for a regular st-language to be a synchronization language.
Abstract: New constructs for synchronization termed synchronization expressions (SEs) have been developed as high-level language constructs for parallel programming languages [8, 9]. Statements that are constrained by certain synchronization requirements are tagged, and synchronization requests are specified as expressions of statement tags. In this paper, we introduce a new family of languages named synchronization languages which we use to give a precise semantic description for SEs. Under this description, relations such as equivalence and inclusion between SEs can be easily understood and tested. In practice, it also provides us with a systematic way for the implementation as well as the simplification of SEs in parallel programming languages. We show that each synchronization language is closed under the following rewriting rules: (1) a_s b_s → b_s a_s, (2) a_t b_t → b_t a_t, (3) a_s b_t → b_t a_s, (4) a_t a_s b_t b_s → b_t b_s a_t a_s, and also h(a_t a_s b_t b_s) → h(b_t b_s a_t a_s) for any morphism h that satisfies certain conditions which will be specified in the paper. We conjecture that closure under the above rewriting rules is a sufficient condition for a regular st-language to be a synchronization language. Several other properties of synchronization languages are also studied.
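The closure properties can be explored mechanically: the sketch below encodes st-words as (letter, tag) tokens with tag 's' for start and 't' for termination, applies a simplified reading of rules (1)-(3) as adjacent swaps between distinct letters, and computes the closure of a word under them; the encoding and the restriction to distinct letters are assumptions made here, not details from the paper.

```python
# Illustrative sketch (assumed token encoding and a simplified reading of
# rules (1)-(3)): closure of a set of st-words under the rewriting rules
# a_s b_s -> b_s a_s, a_t b_t -> b_t a_t and a_s b_t -> b_t a_s (a != b).

def swaps(word):
    """Yield every word obtained by one application of rules (1)-(3)."""
    for i in range(len(word) - 1):
        (a, x), (b, y) = word[i], word[i + 1]
        if a != b and (x, y) in {("s", "s"), ("t", "t"), ("s", "t")}:
            yield word[:i] + (word[i + 1], word[i]) + word[i + 2:]

def closure(words):
    """Smallest set containing `words` and closed under the swap rules."""
    todo, seen = list(words), set(words)
    while todo:
        for w in swaps(todo.pop()):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return seen

w = (("a", "s"), ("b", "s"), ("a", "t"), ("b", "t"))   # the st-word a_s b_s a_t b_t
print(len(closure({w})))   # how many st-words are reachable from w
```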

Journal ArticleDOI
TL;DR: A change theory based on abductive reasoning is described; a key feature of the theory is that it presents a unified view of standard change operators and abductive change operators rather than a new and independent change theory for abductive changes.
Abstract: This paper describes a change theory based on abductive reasoning. We take the AGM postulates for revisions, expansions and contractions, and the Katsuno and Mendelzon postulates for updates, and incorporate abduction into them. A key feature of the theory is that it presents a unified view of standard change operators and abductive change operators rather than a new and independent change theory for abductive changes. Abductive operators reduce to standard change operators in the limiting cases.