
Showing papers in "Annals of Mathematics and Artificial Intelligence in 1999"


Journal ArticleDOI
TL;DR: It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view.
Abstract: Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings the advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated into stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.
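The following Python sketch is a hedged illustration of the stable model semantics the paradigm rests on: it brute-forces the Gelfond–Lifschitz definition for a tiny hand-written ground program. It is meant only to make the definition concrete; it is not the Smodels implementation, and the example program is ours.

from itertools import chain, combinations

# A ground normal rule: (head, positive_body, negative_body)
# "p :- q, not r."  ->  ("p", {"q"}, {"r"})
rules = [
    ("p", set(), {"q"}),   # p :- not q.
    ("q", set(), {"p"}),   # q :- not p.
    ("r", {"p"}, set()),   # r :- p.
]
atoms = {a for h, pos, neg in rules for a in {h} | pos | neg}

def least_model(definite_rules):
    """Forward-chain the least model of a negation-free program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """Gelfond-Lifschitz check: candidate == least model of the reduct."""
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

stable_models = [set(x) for x in powerset(atoms) if is_stable(set(x))]
print(stable_models)   # [{'q'}, {'p', 'r'}] in some order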

967 citations


Journal ArticleDOI
TL;DR: The study of equivalent transformations of programs with nested expressions shows that any such program is equivalent to a set of disjunctive rules, possibly with negation as failure in the heads.
Abstract: We extend the answer set semantics to a class of logic programs with nested expressions permitted in the bodies and heads of rules. These expressions are formed from literals using negation as failure, conjunction (,) and disjunction (;) that can be nested arbitrarily. Conditional expressions are introduced as abbreviations. The study of equivalent transformations of programs with nested expressions shows that any such program is equivalent to a set of disjunctive rules, possibly with negation as failure in the heads. The generalized answer set semantics is related to the Lloyd–Topor generalization of Clark’s completion and to the logic of minimal belief and negation as failure.
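As a hedged illustration of the kind of equivalent transformation studied here (the general results are the paper's; the concrete rule below is our own toy example), a rule whose body contains a disjunction can be replaced by two ordinary rules with the same answer sets:

p ← q, (r ; not s)    is equivalent to the pair    p ← q, r    and    p ← q, not s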

361 citations


Journal ArticleDOI
TL;DR: An extension of logic programming (LP) that is suitable not only for the “rational” component of a single agent but also for the "reactive" component and that can encompass multi‐agent systems is presented.
Abstract: In this paper we present an extension of logic programming (LP) that is suitable not only for the “rational” component of a single agent but also for the “reactive” component and that can encompass multi-agent systems. We modify an earlier abductive proof procedure and embed it within an agent cycle. The proof procedure incorporates abduction, definitions and integrity constraints within a dynamic environment, where changes can be observed as inputs. The definitions allow rational planning behaviour and the integrity constraints allow reactive, condition-action type behaviour. The agent cycle provides a resource-bounded mechanism that allows the agent’s thinking to be interrupted for the agent to record and assimilate observations as input and execute actions as output, before resuming further thinking. We argue that these extensions of LP, accommodating multi-theories embedded in a shared environment, provide the necessary multi-agent functionality. We argue also that our work extends Shoham’s Agent0 and the BDI architecture.
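A minimal Python sketch of the observe-think-act cycle described above follows; the class, its fixed per-cycle thinking budget, and the placeholder goal reduction are illustrative assumptions of ours, not the paper's abductive proof procedure.

import collections

class Agent:
    """Schematic resource-bounded agent cycle: observe, think a bounded
    number of steps, act.  The 'think' step stands in for the paper's
    proof procedure; here it is only a placeholder queue of goals."""

    def __init__(self, thinking_budget=3):
        self.thinking_budget = thinking_budget   # resource bound per cycle
        self.beliefs = set()
        self.goals = collections.deque()
        self.actions_out = []

    def observe(self, inputs):
        # record and assimilate observations before resuming thinking
        self.beliefs.update(inputs)

    def think(self):
        # interruptible reasoning: at most `thinking_budget` goal reductions
        for _ in range(self.thinking_budget):
            if not self.goals:
                break
            goal = self.goals.popleft()
            # placeholder "reduction": a goal already believed needs no action
            if goal not in self.beliefs:
                self.actions_out.append(("achieve", goal))

    def act(self):
        actions, self.actions_out = self.actions_out, []
        return actions

    def cycle(self, inputs):
        self.observe(inputs)
        self.think()
        return self.act()

agent = Agent()
agent.goals.extend(["door_closed", "light_on"])
print(agent.cycle({"light_on"}))   # [('achieve', 'door_closed')]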

164 citations


Journal ArticleDOI
TL;DR: Prohairetic Deontic Logic (PDL), a preference‐based dyadic deontic logic, is introduced, which uses the definition of “α should be (done) if β is ( done)” to formalize contrary‐to‐duty reasoning.
Abstract: In this paper we introduce Prohairetic Deontic Logic (PDL), a preference-based dyadic deontic logic. In our preference-based interpretation of obligations, “a should be (done) if b is (done)” is true if (1) no ¬a ∧ b state is as preferable as an a ∧ b state and (2) the preferred b states are a states. We show that this representation solves different problems of deontic logic. The first part of the definition is used to formalize contrary-to-duty reasoning, which, for example, occurs in Chisholm’s and Forrester’s notorious deontic paradoxes. The second part is used to make deontic dilemmas inconsistent.
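The two-part definition can be made concrete with a small model checker; the numeric preference ranks below are our own simplification of the logic's preference ordering, used only to show how conditions (1) and (2) are evaluated over states.

from itertools import product

# States are truth assignments to {a, b}; `pref` gives a numeric preference
# rank (higher = more preferred).  The numeric ranking is our simplification,
# chosen just to make the check concrete.
states = [dict(zip("ab", bits)) for bits in product([True, False], repeat=2)]
pref = {(True, True): 3, (False, False): 2, (True, False): 1, (False, True): 0}
rank = lambda s: pref[(s["a"], s["b"])]

def obliged(a, b):
    """O(a | b) following the abstract's two-part definition."""
    ab     = [s for s in states if s[a] and s[b]]        # a ∧ b states
    not_ab = [s for s in states if not s[a] and s[b]]    # ¬a ∧ b states
    b_only = [s for s in states if s[b]]                 # b states
    # (1) no ¬a∧b state is as preferable as an a∧b state
    cond1 = not any(rank(s) >= rank(t) for s in not_ab for t in ab)
    # (2) the most preferred b states are a states
    best = max(rank(s) for s in b_only)
    cond2 = all(s[a] for s in b_only if rank(s) == best)
    return cond1 and cond2

print(obliged("a", "b"))   # True under the ranking above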

97 citations


Journal ArticleDOI
TL;DR: This paper describes tabling as it is implemented in the XSB system and shows how it can be used to construct meta‐interpreters (or preprocessors) for two sample formalisms: the Well‐Founded Semantics with Explicit Negation, and Generalized Annotated Logic Programs.
Abstract: Non-monotonic extensions add power to logic programs. However, the main logic programming language, Prolog, is widely recognized as inadequate to implement these extensions due to its weak termination and complexity properties. By extending Prolog’s SLD resolution with tabling, Prolog can be improved in several ways. Tabling can allow a logic programming system to compute the well-founded semantics for programs with bounded term depth, and to do so with polynomial data complexity. By exploiting these properties, tabling allows a variety of non-monotonic extensions to be efficiently implemented, and used to solve practical problems. In this paper we describe tabling as it is implemented in the XSB system and show how it can be used to construct meta-interpreters (or preprocessors) for two sample formalisms: the Well-Founded Semantics with Explicit Negation, and Generalized Annotated Logic Programs. We also describe how non-monotonic extensions are used in practical applications such as psychiatric diagnosis, extraction of information from poorly structured textual data, and model checking.
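A hedged illustration of why tabling matters: the left-recursive reachability program below loops under plain SLD resolution but terminates once path/2 is tabled. The Python analogue computes the same answer table by fixpoint iteration, imitating the memoization idea; it is not XSB's SLG resolution.

# Left-recursive reachability, the textbook example of a program that loops
# under plain Prolog/SLD but terminates when path/2 is tabled:
#
#   :- table path/2.
#   path(X, Y) :- path(X, Z), edge(Z, Y).
#   path(X, Y) :- edge(X, Y).
#
edge = {("a", "b"), ("b", "c"), ("c", "a")}   # a cyclic graph

def tabled_path(edges):
    table = set(edges)             # answers from the base rule
    while True:
        new = {(x, y) for (x, z) in table for (z2, y) in edges if z == z2}
        if new <= table:           # no new answers: fixpoint reached
            return table
        table |= new

print(sorted(tabled_path(edge)))
# all 9 pairs over {a, b, c}, despite the cycle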

87 citations


Journal ArticleDOI
TL;DR: The classes of EPH indicate some astonishing relationships in light of earlier results on the expressive power of non-monotonic logics presented by Gottlob as well as Bonatti and Eiter: Moore’s autoepistemic logic and prerequisite-free default logic are of equal expressive power and less expressive than Reiter's default logic and Marek and Truszczyński's strong autoepistemic logic.
Abstract: This paper concentrates on comparing the expressive powers of five non-monotonic logics that have appeared in the literature. For this purpose, the concept of a polynomial, faithful and modular (PFM) translation function is adopted from earlier work by Gottlob, but a weaker notion of faithfulness is proposed. The existence of a PFM translation function from one non-monotonic logic to another is interpreted to indicate that the latter logic is capable of expressing everything that the former logic does. Several translation functions are presented in the paper and shown to be PFM. Moreover, it is shown that PFM translation functions are impossible in certain cases, which indicates that the expressive powers of the logics involved differ strictly. The comparisons made in terms of PFM translation functions give rise to an exact classification of non-monotonic logics, which is named the expressive power hierarchy (EPH) of non-monotonic logics. Three syntactically restricted variants of default logic are also analyzed, and EPH is refined accordingly. Most importantly, the classes of EPH indicate some astonishing relationships in light of earlier results on the expressive power of non-monotonic logics presented by Gottlob as well as Bonatti and Eiter: Moore’s autoepistemic logic and prerequisite-free default logic are of equal expressive power and less expressive than Reiter’s default logic and Marek and Truszczyński’s strong autoepistemic logic.

42 citations


Journal ArticleDOI
TL;DR: A new testing policy is proposed that can be executed in polynomial time in the input size, and it is shown that it is cost-minimal in the average case sense, for certain double regular systems that include regular (in particular, threshold) systems with identical components.
Abstract: We consider the problem of testing sequentially the components of a multi-component system, when the testing of each component is costly. We propose a new testing policy that can be executed in polynomial time in the input size, and show that it is cost-minimal in the average case sense, for certain double regular systems that include regular (in particular, threshold) systems with identical components. This result generalizes known results for series, parallel, and, more generally, for k-out-of-n systems.
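As a hedged illustration, the classical special case for a series system already shows the flavour of such policies: testing components in nondecreasing order of cost divided by failure probability is expected-cost optimal, which the sketch below checks against brute force on a toy instance of our own; the paper's policy for double regular systems is more general.

from itertools import permutations

# Expected cost of testing a SERIES system in a given order: test components
# one by one, stopping at the first failure (the system is then known to be
# down) or after all pass.  p[i] = probability component i works, c[i] = cost.
def expected_cost(order, p, c):
    cost, prob_all_ok_so_far = 0.0, 1.0
    for i in order:
        cost += prob_all_ok_so_far * c[i]   # i is tested only if all before it passed
        prob_all_ok_so_far *= p[i]
    return cost

p = [0.9, 0.5, 0.7]
c = [4.0, 3.0, 1.0]

# Classical rule for series systems: nondecreasing c_i / (1 - p_i).
greedy = sorted(range(len(p)), key=lambda i: c[i] / (1 - p[i]))
best = min(permutations(range(len(p))), key=lambda o: expected_cost(o, p, c))

print(greedy, expected_cost(greedy, p, c))
print(list(best), expected_cost(best, p, c))   # same expected cost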

41 citations


Journal ArticleDOI
TL;DR: The entailment problem is proven to be decidable, based on a suggested algorithm for computing sound and complete disjunctions of monotonicity and equality constraints that hold in the intensional database.
Abstract: Datalog (i.e., function-free logic) programs with monotonicity constraints on extensional predicates are considered. A monotonicity constraint states that one argument of a predicate or a constant is always less than another argument or a constant, according to some strict partial order. Relations of an extensional database are required to satisfy the monotonicity constraints imposed on their predicates. More specifically, a strict partial order is defined on the domain (i.e., set of constants) of the database, and every tuple of each relation satisfies the monotonicity constraints imposed on its predicate. This paper focuses on the problem of entailment of monotonicity constraints in the intensional database from monotonicity constraints in the extensional database. The entailment problem is proven to be decidable, based on a suggested algorithm for computing sound and complete disjunctions of monotonicity and equality constraints that hold in the intensional database. It is also shown that the entailment of monotonicity constraints in programs is a complete problem for exponential time. For linear programs, this problem is complete for polynomial space.

28 citations


Journal ArticleDOI
TL;DR: A knowledge representation language which extends logic programming with disjunction, inheritance, true negation and modularization is proposed, and the results show that inheritance does not increase the complexity of any fragment of the language at all, while it does increase the expressive power of some fragments.
Abstract: The paper proposes a knowledge representation language which extends logic programming with disjunction, inheritance, true negation and modularization. The resulting language is called Disjunctive Ordered Logic (DOL for short). A model-theoretic semantics for DOL is given, and it is shown to extend the stable model semantics of disjunctive logic programs. A number of examples show the suitability of DOL for knowledge representation and commonsense reasoning. Among other things, the proposed language appears to be a powerful tool for the description of diagnostic processes which are based on stepwise refinements. The complexity and the expressiveness of the language are carefully analyzed. The analysis pays particular attention to the relative power and complexity of inheritance, negation and disjunction. An interesting result of this analysis concerns the role played by inheritance. Indeed, our results show that inheritance does not increase the complexity of any fragment of the language at all, while it does increase the expressive power of some DOL fragments.

21 citations


Journal ArticleDOI
TL;DR: This work provides set-theoretic completeness results for a number of epistemic and conditional logics, and contrasts the expressive power of the syntactic and set-theoretic approaches.
Abstract: The standard approach to logic in the literature in philosophy and mathematics, which has also been adopted in computer science, is to define a language (the syntax), an appropriate class of models together with an interpretation of formulas in the language (the semantics), a collection of axioms and rules of inference characterizing reasoning (the proof theory), and then relate the proof theory to the semantics via soundness and completeness results. Here we consider an approach that is more common in the economics literature, which works purely at the semantic, set-theoretic level. We provide set-theoretic completeness results for a number of epistemic and conditional logics, and contrast the expressive power of the syntactic and set-theoretic approaches.

21 citations


Journal ArticleDOI
TL;DR: The paper presents an algorithm to combine two arbitrary autarkies to form a larger autarky; the resulting Parallel Modoc system achieves speedup greater than the number of processors for many of the formulas.
Abstract: A parallel satisfiability testing algorithm called Parallel Modoc is presented. Parallel Modoc is based on Modoc, which is based on propositional Model Elimination with an added capability to prune away certain branches that cannot lead to a successful subrefutation. The pruning information is encoded in a partial truth assignment called an autarky. Parallel Modoc executes multiple instances of Modoc as separate processes and allows processes to cooperate by sharing lemmas and autarkies as they are found. When a Modoc process finds a new autarky or a new lemma, it makes the information available to other Modoc processes via a “blackboard”. Combining autarkies generally is not straightforward because two autarkies found by two separate processes may have conflicting assignments. The paper presents an algorithm to combine two arbitrary autarkies to form a larger autarky. Experimental results show that for many of the formulas, Parallel Modoc achieves speedup greater than the number of processors. Formulas that could not be solved in an hour by Modoc were often solved by Parallel Modoc on the order of minutes and, in some cases, in seconds.
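The autarky-combination algorithm itself is the paper's contribution and is not reproduced here; the sketch below only makes the notion of an autarky concrete for CNF formulas (a partial assignment that satisfies every clause it touches), using an encoding and example of our own.

# A CNF clause is a set of literals; a positive int is a positive literal,
# a negative int its negation.  A partial assignment is a set of literals
# assumed true.  It is an autarky for F if every clause containing a variable
# it assigns is satisfied by it; such clauses can then be removed from F
# without affecting satisfiability.
def is_autarky(assignment, clauses):
    assigned_vars = {abs(l) for l in assignment}
    for clause in clauses:
        touched = any(abs(l) in assigned_vars for l in clause)
        satisfied = any(l in assignment for l in clause)
        if touched and not satisfied:
            return False
    return True

F = [{1, 2}, {-1, 3}, {-3, 2}, {4, 5}, {-4, -5}]
print(is_autarky({1, 3, 2}, F))   # True: every touched clause is satisfied
print(is_autarky({1, -3}, F))     # False: clause {-1, 3} is touched but not satisfied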

Journal ArticleDOI
TL;DR: In this article, a hybrid system composed of a connectionist module and an agent-based module is proposed to combine sub-symbolic and symbolic levels to represent musical knowledge.
Abstract: The system presented here shows the feasibility of modeling the knowledge involved in a complex musical activity by integrating sub-symbolic and symbolic processes. This research focuses on the question of whether there is any advantage in integrating a neural network together with a distributed artificial intelligence approach within the music domain. The primary purpose of our work is to design a model that describes the different aspects a user might be interested in considering when involved in a musical activity. The approach we suggest in this work enables the musician to encode his knowledge, intuitions, and aesthetic taste into different modules. The system captures these aspects by computing and applying three distinct functions: rules, fuzzy concepts, and learning. As a case study, we began experimenting with first species two-part counterpoint melodies. We have developed a hybrid system composed of a connectionist module and an agent-based module to combine the sub-symbolic and symbolic levels to achieve this task. The technique presented here to represent musical knowledge constitutes a new approach for composing polyphonic music.

Journal ArticleDOI
TL;DR: This paper formalizes and analyzes cognitive transitions between artificial perceptions that consist of an analogical or metaphorical transference of perception, and shows how structural aspects of ‘better’ analogies and metaphors can be captured and evaluated by the same categorical setting.
Abstract: This paper formalizes and analyzes cognitive transitions between artificial perceptions that consist of an analogical or metaphorical transference of perception. The formalization is performed within a mathematical framework that has been used before to formalize other aspects of artificial perception and cognition. The mathematical infrastructure consists of a basic category of ‘artificial perceptions’. Each ‘perception’ consists of a set of ‘world elements’, a set of ‘connotations’, and a three valued (true, false, undefined) predicative connection between the two sets. ‘Perception morphisms’ describe structure preserving paths between perceptions. Quite a few artificial cognitive processes can be viewed and formalized as perception morphisms or as other categorical constructs. We show here how analogical transitions can be formalized in a similar way. A factorization of every analogical transition is shown to formalize metaphorical perceptions that are inspired by the analogy. It is further shown how structural aspects of ‘better’ analogies and metaphors can be captured and evaluated by the same categorical setting, as well as generalizations that emerge from analogies. The results of this study are then embedded in the existing mathematical formalization of other artificial cognitive processes within the same premises. A fallout of the rigorous unified mathematical theory is that structured analogies and metaphors share common formal aspects with other perceptually acute cognitive processes.

Journal ArticleDOI
TL;DR: It is proved that the error surface of the two-layer XOR network with two hidden units has a number of regions with local minima such that one or both hidden nodes are saturated for at least two patterns.
Abstract: All local minima of the error surface of the 2-2-1 XOR network are described. A local minimum is defined as a point such that all points in a neighbourhood have an error value greater than or equal to the error value in that point. It is proved that the error surface of the two-layer XOR network with two hidden units has a number of regions with local minima. These regions of local minima occur for combinations of the weights from the inputs to the hidden nodes such that one or both hidden nodes are saturated for at least two patterns. However, boundary points of these regions of local minima are saddle points. It will be concluded that from each finite point in weight space a strictly decreasing path exists to a point with error zero. This also explains why experiments using higher numerical precision find fewer “local minima”.
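A minimal numpy sketch of the object under study, the sum-of-squares error of a 2-2-1 sigmoid network on XOR, follows; the two probe weight vectors are illustrative choices of ours (one roughly solving XOR, one with all outputs stuck at 0.5), not points analyzed in the paper.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 0], dtype=float)          # XOR targets
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def error(w):
    """Sum-of-squares error of a 2-2-1 sigmoid network with biases.
    w = (W1: 2x2, b1: 2, W2: 2, b2: scalar), 9 weights in total."""
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    hidden = sigmoid(X @ W1.T + b1)
    out = sigmoid(hidden @ W2 + b2)
    return 0.5 * np.sum((out - T) ** 2)

good = np.array([6, 6, 6, 6, -2.5, -8.5, 10, -10, -4.5])  # roughly solves XOR
flat = np.zeros(9)                                         # all-zero probe
print(round(error(good), 3), round(error(flat), 3))
# ~0.001 (near a solution) vs 0.5 (every output stuck at 0.5)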

Journal ArticleDOI
TL;DR: This paper proposes a computational mechanism for extended abduction by introducing a transaction program, a set of non-deterministic production rules that declaratively specify addition and deletion of abductive hypotheses.
Abstract: To explain observations from nonmonotonic background theories, one often needs removal of some hypotheses as well as addition of other hypotheses. Moreover, some observations should not be explained, while some are to be explained. In order to formalize these situations, extended abduction was introduced by Inoue and Sakama (1995) to generalize traditional abduction in the sense that it can compute negative explanations by removing hypotheses and anti-explanations to unexplain negative observations. In this paper, we propose a computational mechanism for extended abduction. When a background theory is written in a normal logic program, we introduce its transaction program for computing extended abduction. A transaction program is a set of non-deterministic production rules that declaratively specify addition and deletion of abductive hypotheses. Abductive explanations are then computed by the fixpoint of a transaction program using a bottom-up model generation procedure. The correctness of the proposed procedure is shown for the class of acyclic covered abductive logic programs. In the context of deductive databases, a transaction program provides a declarative specification of database update.
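A hedged brute-force sketch of what extended abduction asks for, on a tiny definite program of our own: search over hypotheses to add and hypotheses to remove so that the observation becomes entailed. This spells out the specification only; it is not the paper's transaction-program procedure, and it omits anti-explanations.

from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def least_model(rules):
    """Least model of a definite (negation-free) program by forward chaining."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

rules = [("flies", {"bird", "healthy"})]       # flies <- bird, healthy
current_hyps = {"bird", "injured"}             # hypotheses already in the theory
addable_hyps = {"healthy"}                     # hypotheses that may be added

def explanations(observation):
    """Pairs (add, remove) of hypothesis sets making the observation entailed."""
    found = []
    for add in subsets(addable_hyps):
        for remove in subsets(current_hyps):
            facts = (current_hyps - remove) | add
            theory = rules + [(h, set()) for h in facts]
            if observation in least_model(theory):
                found.append((set(add), set(remove)))
    return found

print(explanations("flies"))
# [({'healthy'}, set()), ({'healthy'}, {'injured'})] -- add 'healthy',
# optionally also removing 'injured'; removing 'bird' never works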

Journal ArticleDOI
TL;DR: A new semantics for intention is presented that is both dynamic and causal in the sense that it is given in terms of the relation of an intention to both previous and subsequent mental states as well as to the choice of physical action.
Abstract: This paper explores the design of rational agent architectures from the perspective of the dynamics of information change. The procedural elements that guide an agent’s behavior and that reflect the evolution of pro-attitudes (for example, from desire to intention to plan) are described in terms of McCarthy’s notion of a reified mental action. The function of each module of an agent architecture is exactly specified by identifying processes with each module and then describing the effects of those processes or mental actions (such as updating beliefs, elaborating plans, deliberating, reconsidering, revising intentions, filtering intentions, and monitoring) in the same way as one would describe the effects of physical actions. A new semantics for intention is presented that is both dynamic and causal in the sense that it is given in terms of the relation of an intention to both previous and subsequent mental states as well as to the choice of physical action. Desires are given a syntactic analysis, while the pro-attitude of intentions-that, which has been proposed in the SharedPlans framework of Grosz and Kraus, is axiomatized in terms of an evolving commitment to certain deliberative, mental actions that evolve as a function of knowledge of the state of the joint activity.

Journal ArticleDOI
TL;DR: The paper introduces and experiments with an “iterative learning” algorithm which records additional constraints uncovered during search, and shows that on a class of randomly generated maintenance scheduling problems, iterative learning reduces the time required to find a good schedule.
Abstract: The paper focuses on evaluating constraint satisfaction search algorithms on application-based random problem instances. The application we use is a well-studied problem in the electric power industry: optimally scheduling preventive maintenance of power generating units within a power plant. We show how these scheduling problems can be cast as constraint satisfaction problems and used to define the structure of randomly generated non-binary CSPs. The random problem instances are then used to evaluate several previously studied algorithms. The paper also demonstrates how constraint satisfaction can be used for optimization tasks. To find an optimal maintenance schedule, a series of CSPs are solved with successively tighter cost-bound constraints. We introduce and experiment with an “iterative learning” algorithm which records additional constraints uncovered during search. The constraints recorded during the solution of one instance with a certain cost-bound are used again on subsequent instances having tighter cost-bounds. Our results show that on a class of randomly generated maintenance scheduling problems, iterative learning reduces the time required to find a good schedule.

Journal ArticleDOI
TL;DR: Applications of some recent developments in the theory of logic programming to knowledge representation and reasoning in common sense domains are illustrated, and realization theorems are discussed which allow specifications built by applying specification constructors to be transformed into declarative logic programs.
Abstract: The main goal of this paper is to illustrate applications of some recent developments in the theory of logic programming to knowledge representation and reasoning in common sense domains. We are especially interested in better understanding the process of development of such representations together with their specifications. We build on the previous work of Gelfond and Przymusinska in which the authors suggest that, at least in some cases, a formal specification of the domain can be obtained from specifications of its parts by applying certain operators on specifications called specification constructors, and that a better understanding of these operators can substantially facilitate the programming process by providing the programmer with useful heuristic guidance. We discuss some of these specification constructors and their realization theorems which allow us to transform specifications built by applying these constructors to declarative logic programs. Proofs of two such theorems, previously announced in a paper by Gelfond and Gabaldon, appear here for the first time. The method of specifying knowledge representation problems via specification constructors and of using these specifications for the development of their logic programming representations is illustrated by the design of a simple but fairly powerful program representing simple hierarchical domains.

Journal ArticleDOI
TL;DR: A new algorithm called Modoc has achieved performance comparable to the fastest known satisfiability methods, including stochastic search methods, on planning problems that have been reported by other researchers, as well as on formulas derived from other applications.
Abstract: Classical STRIPS-style planning problems are formulated as theorems to be proven from a new point of view: that the problem is not solvable. The result for a refutation-based theorem prover may be a propositional formula that is to be proven unsatisfiable. This formula is identical to the formula that may be derived directly by various “SAT compilers”, but the theorem-proving view provides valuable additional information not in the formula, namely, the theorem to be proven. Traditional satisfiability methods, most of which are based on model search, are unable to exploit this additional information. However, a new algorithm called “Modoc” is able to exploit this information and has achieved performance comparable to the fastest known satisfiability methods, including stochastic search methods, on planning problems that have been reported by other researchers, as well as formulas derived from other applications. Unlike most theorem provers, Modoc performs well on both satisfiable and unsatisfiable formulas. Modoc works by a combination of back-chaining from the theorem clauses and forward-chaining on tractable subformulas. In some cases, Modoc is able to solve a planning problem without finding a complete assignment because the back-chaining methodology is able to ignore irrelevant clauses. Although back-chaining is well known in the literature, a high level of search redundancy existed in previous methods. Modoc incorporates a new technique called “autarky pruning”, which reduces search redundancy to manageable levels, permitting the benefits of back-chaining to emerge, for certain problem classes. Experimental results are presented for planning problems and formulas derived from other applications.

Journal ArticleDOI
TL;DR: Experimental data show that on random 3CNF formulas at the “hard” ratio of 4.27 clauses per variable, Modoc is not as effective as recently reported model-searching methods, however, on more structured formulas from applications, such as circuit-fault detection, it is superior.
Abstract: This paper describes new “lemma” and “cut” strategies that are efficient to apply in the setting of propositional Model Elimination. Previous strategies for managing lemmas and C-literals in Model Elimination were oriented toward first-order theorem proving. The original “cumulative” strategy remembers lemmas forever, and was found to be too inefficient. The previously reported C-literal and unit-lemma strategies, such as “strong regularity”, forget them unnecessarily soon in the propositional domain. An intermediate strategy, called “quasi-persistent” lemmas, is introduced. Supplementing this strategy, methods for “eager” lemmas and two forms of controlled “cut” provide further efficiencies. The techniques have been incorporated into “Modoc”, which is an implementation of Model Elimination, extended with a new pruning method that is designed to eliminate certain refutation attempts that cannot succeed. Experimental data show that on random 3CNF formulas at the “hard” ratio of 4.27 clauses per variable, Modoc is not as effective as recently reported model-searching methods. However, on more structured formulas from applications, such as circuit-fault detection, it is superior.

Journal ArticleDOI
TL;DR: A general approach to characterizing minimal information in a modal context is given; three characterizations of minimal information are provided, along with conditions under which these characterizations are equivalent.
Abstract: We give a general approach to characterizing minimal information in a modal context. Our modal treatment can be used for many applications, but is especially relevant under epistemic interpretations of the operator □. Relative to an arbitrary modal system, we give three characterizations of minimal information and provide conditions under which these characterizations are equivalent. We then study information orders based on bisimulations and Ehrenfeucht–Fraïssé games. Moving to the area of epistemic logics, we show that for one of these orders almost all systems trivialize the notion of minimal information. Another order which we present is much more promising, as it permits minimization with respect to positive knowledge. In S5, the resulting notion of minimal knowledge coincides with well-established proposals. For S4 we compare the two orders.

Journal ArticleDOI
TL;DR: The correct statement of the theorem is that F is an almost-uniform frame iff every instance of Miller’s principle is valid in F.
Abstract: There is an error in theorem 7.1 in “The relationship between knowledge, belief, and certainty” (Annals of Mathematics and Artificial Intelligence 4 (1991) 301–322). The theorem says that F is a uniform frame iff every instance of Miller’s principle is valid in F. It is true that every instance of Miller’s principle is valid in a uniform frame. The converse does not hold in general. The problem in the proof of theorem 7.1 occurs in the proof of claim (6) on p. 320. It is claimed that if a ≠ b, then we can find an interval I = [d, e] such that a ∈ I, b ∉ I, and d > 0. This is false if a = 0. The proof is correct as long as a ≠ 0 and, in fact, this argument is basically the key to proving the corrected version of the theorem, as we now show. The correct statement of the theorem is that F is an almost-uniform frame iff every instance of Miller’s principle is valid in F. An almost-uniform frame is one where uniformity holds almost everywhere in the following sense. Using the notation of the paper, given a frame F = (S, PR), let [s] = {t ∈ S: PR(s) = PR(t)}. Note the sets [s] form a partition of S. Let G_F = {s: PR(s)([s]) = 1}. Then F is almost uniform if

Journal ArticleDOI
TL;DR: TCSPs for which subsets of the minimal network can be computed without having to compute the whole network are characterized, and it is shown that the sim/2-tree characterization is a minimal set of conditions.
Abstract: Temporal Constraint Satisfaction Problems (TCSP) is a well-known approach for representing and processing temporal knowledge. Important properties of the knowledge can be inferred by computing the minimal networks of TCSPs. Consistency and feasible values are immediately obtained; computing solutions can be assisted. Yet, in general, computing the minimal network of a disjunctive TCSP is intractable. The minimal network approach requires computation of the full network in order to answer a query. In this paper we characterize TCSPs for which subsets of the minimal network can be computed without having to compute the whole network. The partial computation is enabled by decomposition of the problem into a tree of sub-problems that share at most pairs of time points. Such decompositions are termed sim/2-tree decompositions. For TCSPs that have sim/2-tree decompositions, minimal constraints of input propositions can be computed by independent computations of the minimal networks of the sub-problems at most twice. It is also shown that the sim/2-tree characterization is a minimal set of conditions. The sim/2-tree decomposition extends former results about decomposition of a TCSP into bi-connected components. An algorithm for identifying a sim/2-tree decomposition of a TCSP is provided as well. Finally, the sim/2-tree decomposition is generalized in an inductive manner, which enables components of a decomposition to be further decomposed. For that purpose a model of Structured Temporal Constraint Satisfaction Problems (STCSP^(n), n ≥ 0) is introduced, where STCSP^(0) is simply TCSP, STCSP^(1) is a set of STCSP^(0)s, and in general, STCSP^(n) for n ≥ 1 is a set of STCSP^(n−1)s.
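For the non-disjunctive special case (a Simple Temporal Problem, one interval per constraint) the minimal network is just the shortest-path closure of the distance graph, which the hedged sketch below computes with Floyd–Warshall on a toy instance of our own; disjunctive TCSPs and the paper's sim/2-tree decomposition go beyond this.

import math

# Simple Temporal Problem: constraint (i, j, [a, b]) means a <= x_j - x_i <= b.
# In the distance graph this becomes the edges  i->j with weight b  and
# j->i with weight -a; the minimal network is its shortest-path closure.
def minimal_stp(n, constraints):
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, (a, b) in constraints:
        d[i][j] = min(d[i][j], b)
        d[j][i] = min(d[j][i], -a)
    for k in range(n):                     # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    if any(d[i][i] < 0 for i in range(n)):
        return None                        # negative cycle: inconsistent
    # minimal constraint on x_j - x_i is [-d[j][i], d[i][j]]
    return {(i, j): (-d[j][i], d[i][j])
            for i in range(n) for j in range(n) if i < j}

cons = [(0, 1, (10, 20)), (1, 2, (30, 40)), (0, 2, (10, 60))]
print(minimal_stp(3, cons))
# {(0, 1): (10, 20), (0, 2): (40, 60), (1, 2): (30, 40)}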

Journal ArticleDOI
TL;DR: Reduction strategies are introduced for the future fragment of a temporal propositional logic on linear discrete time, named FNext; based on information collected from the syntactic structure of the formula, they make it possible to improve the performance of any automated theorem prover.
Abstract: Reduction strategies are introduced for the future fragment of a temporal propositional logic on linear discrete time, named FNext. These reductions are based on the information collected from the syntactic structure of the formula, which allows the development of efficient strategies to decrease the size of temporal propositional formulas, viz. new criteria to detect the validity or unsatisfiability of subformulas, and a strong generalisation of the pure literal rule. These results, used as an inner processing step, make it possible to improve the performance of any automated theorem prover.
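The paper's generalisation of the pure literal rule is specific to FNext; the sketch below shows only the classical propositional rule it builds on (repeatedly delete every clause containing a literal whose complement never occurs), applied to a CNF example of our own.

# Classical propositional pure-literal reduction on a CNF formula (clauses as
# sets of integer literals): a literal whose complement occurs nowhere can be
# set true, so every clause containing it disappears.  Repeat to a fixpoint.
def pure_literal_reduce(clauses):
    clauses = [set(c) for c in clauses]
    while True:
        literals = {l for c in clauses for l in c}
        pure = {l for l in literals if -l not in literals}
        if not pure:
            return clauses
        clauses = [c for c in clauses if not (c & pure)]

F = [{1, -2}, {2, 3}, {-3, 4}, {-1, -4, 5}]
print(pure_literal_reduce(F))   # [] : the formula reduces away entirely, so it is satisfiable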

Journal ArticleDOI
TL;DR: It is shown that, for the imperfect monitoring case, there exists an efficient stochastic policy that ensures that the competitive ratio is obtained for all agents at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence.
Abstract: We consider a group of several non-Bayesian agents that can fully coordinate their activities and share their past experience in order to obtain a joint goal in the face of uncertainty. The reward obtained by each agent is a function of the environment state but not of the action taken by other agents in the group. The environment state (controlled by Nature) may change arbitrarily, and the reward function is initially unknown. Two basic feedback structures are considered. In one of them – the perfect monitoring case – the agents are able to observe the previous environment state as part of their feedback, while in the other – the imperfect monitoring case – all that is available to the agents are the rewards obtained. Both of these settings refer to partially observable processes, where the current environment state is unknown. Our study refers to the competitive ratio criterion. It is shown that, for the imperfect monitoring case, there exists an efficient stochastic policy that ensures that the competitive ratio is obtained for all agents at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is also shown that if the agents are restricted only to deterministic policies then such a policy does not exist, even in the perfect monitoring case.

Journal ArticleDOI
TL;DR: Two Pspace-hard optimization versions of propositional planning with Strips-style operators are studied and tight upper and lower bounds on their approximability are provided.
Abstract: The computational complexity of planning with Strips-style operators has received a considerable amount of interest in the literature. However, the approximability of such problems has only received minute attention. We study two Pspace-hard optimization versions of propositional planning and provide tight upper and lower bounds on their approximability.

Journal ArticleDOI
TL;DR: This paper proposes new algorithms for the generation of a GLB and gives a precise characterization of the computational complexity of the problem of generating such lower bounds, thus addressing in a formal way the question “how many queries are needed to amortize the overhead of compilation?”
Abstract: Propositional greatest lower bounds (GLBs) are logically defined approximations of a knowledge base. They were defined in the context of Knowledge Compilation, a technique developed for addressing the high computational cost of logical inference. A GLB allows for polynomial-time complete on-line reasoning, although soundness is not guaranteed. In this paper we propose new algorithms for the generation of a GLB. Furthermore, we give a precise characterization of the computational complexity of the problem of generating such lower bounds, thus addressing in a formal way the question “how many queries are needed to amortize the overhead of compilation?”
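A hedged toy illustration of the lower-bound query discipline behind GLBs: the hand-picked Horn theory below is merely a lower bound of the toy knowledge base, not necessarily its greatest one, and generating GLBs is precisely what the paper's algorithms address.

from itertools import product

VARS = ["a", "b", "c"]

def models(formula):
    """All truth assignments (as dicts) satisfying a Python-evaluable formula."""
    result = []
    for bits in product([True, False], repeat=len(VARS)):
        env = dict(zip(VARS, bits))
        if eval(formula, {}, dict(env)):
            result.append(env)
    return result

kb      = "(a or b) and ((not a) or c)"   # the knowledge base
horn_lb = "a and c"                        # a Horn lower bound: it entails kb

# Lower bound property: every model of horn_lb is a model of kb.
kb_models = models(kb)
assert all(m in kb_models for m in models(horn_lb))

def query(alpha):
    """Answer 'does kb entail alpha?' using only the lower bound.
    Complete (never misses a true entailment) but possibly unsound."""
    return all(eval(alpha, {}, dict(m)) for m in models(horn_lb))

print(query("b or c"))   # True  -- kb really does entail it
print(query("a"))        # True  -- unsound: kb does not entail a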

Journal ArticleDOI
TL;DR: A problem in default reasoning in Reiter’s Default Logic and related systems is identified: elements which are similar given the axioms only, become distinguishable in extensions.
Abstract: The paper identifies a problem in default reasoning in Reiter’s Default Logic and related systems: elements which are similar given the axioms only, become distinguishable in extensions. We explain why, sometimes, this is considered undesirable. Two approaches are presented for guaranteeing similarity preservation: One approach formalizes a way of uniformly applying the defaults to all similar elements by introducing generic extensions, which depend only on similarity types of objects. According to the second approach, for a restricted class of default theories, a default theory is viewed as a “shorthand notation” to what is “really meant” by its formulation. In this approach we propose a rewriting of defaults in a form that guarantees similarity preservation of the modified theory. It turns out that the above two approaches yield the same result.

Journal ArticleDOI
TL;DR: The optical thin-film multilayer (OTFM) model is capable of approximating virtually any kind of nonlinear mapping and can be used as a computational learning model.
Abstract: This paper describes a computational learning model inspired by the technology of optical thin-film multilayers from the field of optics. With the thicknesses of thin-film layers serving as adjustable “weights” for the computation, the optical thin-film multilayer (OTFM) model is capable of approximating virtually any kind of nonlinear mapping. This paper describes the architecture of the model and how it can be used as a computational learning model. Some sample simulation calculations that are typical of connectionist models, including pattern recognition of alphabetic characters, iris plant classification, and time series modelling of a gas furnace process, are given to demonstrate the model’s learning capability.

Journal ArticleDOI
TL;DR: A model for the representation of the parallel searches produced by clausal contraction‐based strategies and their parallelization by distributed search is presented, and the bounded‐search‐spaces approach to the measurement of search complexity in infinite search spaces is extended to distributed search.
Abstract: While various approaches to parallel theorem proving have been proposed, their usefulness is evaluated only empirically. This research is a contribution towards the goal of machine-independent analysis of theorem-proving strategies. This paper considers clausal contraction-based strategies and their parallelization by distributed search, with subdivision of the search space and propagation of clauses by message-passing (e.g., à la Clause-Diffusion). A model for the representation of the parallel searches produced by such strategies is presented, and the bounded-search-spaces approach to the measurement of search complexity in infinite search spaces is extended to distributed search. This involves capturing both its advantages, e.g., the subdivision of work, and disadvantages, e.g., the cost of communication, in terms of search space. These tools are applied to compare the evolution of the search space of a contraction-based strategy with that of its parallelization in the above sense.