
Showing papers in "Annals of Mathematics and Artificial Intelligence in 2006"


Journal ArticleDOI
TL;DR: This paper proposes a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions.
Abstract: Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection, and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark; and with a set of solvers for the Auction Winner Determination problem.
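
For intuition, here is a minimal sketch (in Python, with illustrative names not taken from the paper) of the central mixing step: a bandit-style solver blends the model-based time shares with a uniform share, shrinking the uniform component as the model improves.

```python
def mixed_time_shares(model_shares, eps):
    """Blend model-proposed machine-time shares with a uniform share.

    model_shares: dict mapping each candidate algorithm to its share in
    [0, 1] (summing to 1), as proposed by the partially trained runtime
    model.  eps is the weight of the uniform share; letting it decay
    toward 0 gradually increases the impact of the model-based shares.
    Illustrative sketch, not the paper's exact allocator.
    """
    n = len(model_shares)
    return {algo: (1 - eps) * share + eps / n
            for algo, share in model_shares.items()}

# Three SAT solvers; the model currently favours solver "a", but every
# solver keeps at least eps/n of the processor while the model trains.
print(mixed_time_shares({"a": 0.7, "b": 0.2, "c": 0.1}, eps=0.5))
```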

134 citations


Journal ArticleDOI
TL;DR: This work uses possibility theory to extend the non-monotonic stable-model semantics of logic programs with default negation, and, by means of a possibility distribution, defines a clear semantics for such programs by introducing the notion of a possibilistic stable model.
Abstract: In this work, we introduce a new framework able to deal with reasoning that is at the same time non-monotonic and uncertain. In order to take into account a certainty level associated with each piece of knowledge, we use possibility theory to extend the non-monotonic semantics of stable models for logic programs with default negation. By means of a possibility distribution we define a clear semantics for such programs by introducing the notion of a possibilistic stable model. We also propose a syntactic process based on a fix-point operator to compute these particular models, which represent the deductions of the program and their certainty. We then show how this introduction of a certainty level on each rule of a program can be used to restore its consistency in case the program has no stable model at all. Furthermore, we explain how possibilistic stable models can be computed using available software for Answer Set Programming, and we describe the main lines of the system that we have developed to achieve this goal.
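
As a rough illustration of the fix-point computation, here is a sketch for the negation-free fragment (default negation would first be eliminated by a reduct, as in the stable-model construction); the rule representation and names are illustrative:

```python
def possibilistic_consequences(rules):
    """Least fixpoint of a possibilistic immediate-consequence operator.

    rules: list of (head, body, alpha), with body a tuple of atoms and
    alpha in (0, 1] the necessity degree attached to the rule.  A derived
    atom gets the best (max) over rules of min(rule degree, body degrees).
    Sketch for the negation-free fragment only.
    """
    val = {}
    changed = True
    while changed:
        changed = False
        for head, body, alpha in rules:
            if all(b in val for b in body):
                new = min([alpha] + [val[b] for b in body])
                if new > val.get(head, 0.0):
                    val[head] = new
                    changed = True
    return val

# "a" is certain to degree 0.9; the rule b <- a holds with certainty 0.6,
# so b is derived with necessity min(0.6, 0.9) = 0.6.
print(possibilistic_consequences([("a", (), 0.9), ("b", ("a",), 0.6)]))
```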

87 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to demonstrate that A-Prolog is a powerful language for the construction of reasoning systems by describing in detail the design of USA-Advisor, an A-Prolog based decision support system for the Space Shuttle flight controllers.
Abstract: The aim of this paper is to demonstrate that A-Prolog is a powerful language for the construction of reasoning systems. In fact, A-Prolog allows one to specify the initial situation, the domain model, the control knowledge, and the reasoning modules. Moreover, it is efficient enough to be used for practical tasks and can be nicely integrated with programming languages such as Java. An extension of A-Prolog (CR-Prolog) makes it possible to further improve the quality of reasoning by specifying requirements that the solutions should satisfy if at all possible. The features of A-Prolog and CR-Prolog are demonstrated by describing in detail the design of USA-Advisor, an A-Prolog based decision support system for the Space Shuttle flight controllers.

77 citations


Journal ArticleDOI
TL;DR: A model of distributed performance measures for distributed search algorithms is presented, and the delay of messages is found to have a strong negative effect on single-search-process algorithms, whether synchronous or asynchronous.
Abstract: Distributed constraint satisfaction problems (DisCSPs) are composed of agents, each holding its own variables, that are connected by constraints to variables of other agents. Due to the distributed nature of the problem, message delay can have unexpected effects on the behavior of distributed search algorithms on DisCSPs. This has been recently shown in experimental studies of asynchronous backtracking algorithms (Bejar et al., Artif. Intell., 161:117–148, 2005; Silaghi and Faltings, Artif. Intell., 161:25–54, 2005). To evaluate the impact of message delay on the run of DisCSP search algorithms, a model of distributed performance measures is presented. The model counts the number of non-concurrent constraint checks needed to arrive at a solution, as a non-concurrent measure of distributed computation. A simpler version measures distributed computation cost by the non-concurrent number of steps of computation. An algorithm for computing these distributed measures of computational effort is described. The realization of the model for measuring the performance of distributed search algorithms is a simulator which includes the cost of message delays. Two families of distributed search algorithms on DisCSPs are investigated: algorithms that run a single search process, and algorithms that run multiple search processes. The two families of algorithms are described and associated with existing algorithms. The performance of three representative algorithms of these two families is measured on randomly generated instances of DisCSPs with delayed messages. The delay of messages is found to have a strong negative effect on single-search-process algorithms, whether synchronous or asynchronous. Multiple-search-process algorithms, on the other hand, are affected only lightly by message delay.
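
The bookkeeping behind the non-concurrent measure can be pictured as a Lamport-style logical clock carried by each agent; the sketch below (illustrative names, simplified delay handling) shows the two update rules such a simulator applies.

```python
class SimulatedAgent:
    """Carries a non-concurrent constraint-check (NCCC) counter.

    Sketch of logical-clock bookkeeping in the spirit of the model:
    local work advances the agent's own counter, while receiving a
    message synchronizes it with the sender's counter plus the simulated
    message delay, so concurrent work is not double-counted.
    """

    def __init__(self):
        self.nccc = 0

    def perform_checks(self, checks):
        self.nccc += checks          # sequential work on this agent

    def receive(self, sender_nccc, delay):
        self.nccc = max(self.nccc, sender_nccc + delay)
```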

68 citations


Journal ArticleDOI
TL;DR: The statement of the theorem on loop formulas is simplified and generalized, making the idea of a loop formula applicable to stable models in the sense of a very general definition that covers disjunctive programs, programs with nested expressions, and more.
Abstract: The theorem on loop formulas due to Fangzhen Lin and Yuting Zhao shows how to turn a logic program into a propositional formula that describes the program's stable models. In this paper we simplify and generalize the statement of this theorem. The simplification is achieved by modifying the definition of a loop in such a way that a program is turned into the corresponding propositional formula by adding loop formulas directly to the conjunction of its rules, without the intermediate step of forming the program's completion. The generalization makes the idea of a loop formula applicable to stable models in the sense of a very general definition that covers disjunctive programs, programs with nested expressions, and more.
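
For reference, the shape of a loop formula in the non-disjunctive case (standard Lin–Zhao notation; the paper's generalization covers richer syntax): for a loop $L$ of a program $\Pi$, with external support rules $ES(L) = \{\, a \leftarrow B \in \Pi : a \in L,\ B^{+} \cap L = \emptyset \,\}$, where $B^{+}$ denotes the positive body, the loop formula is

$$\bigvee_{a \in L} a \;\rightarrow\; \bigvee_{(a \leftarrow B)\,\in\, ES(L)} B .$$

Under the simplified definition, such implications are conjoined directly with the rules of $\Pi$, bypassing the program's completion.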

55 citations


Journal ArticleDOI
Thomas Dean1
TL;DR: The technical challenges involved in combining key features from several theories of the visual cortex in a single coherent model are addressed and the resulting model is a hierarchical Bayesian network factored into modular component networks embedding variable-order Markov models.
Abstract: We address the technical challenges involved in combining key features from several theories of the visual cortex in a single coherent model. The resulting model is a hierarchical Bayesian network factored into modular component networks embedding variable-order Markov models. Each component network has an associated receptive field corresponding to components residing in the level directly below it in the hierarchy. The variable-order Markov models account for features that are invariant to naturally occurring transformations in their inputs. These invariant features give rise to increasingly stable, persistent representations as we ascend the hierarchy. The receptive fields of proximate components on the same level overlap to restore selectivity that might otherwise be lost to invariance.

43 citations


Journal ArticleDOI
TL;DR: The concepts of strong and uniform equivalence of logic programs can be generalized to an abstract algebraic setting of operators on complete lattices, yielding characterizations of strong and uniform equivalence for several nonmonotonic logics including logic programming with aggregates, default logic and a version of autoepistemic logic.
Abstract: We show that the concepts of strong and uniform equivalence of logic programs can be generalized to an abstract algebraic setting of operators on complete lattices. Our results imply characterizations of strong and uniform equivalence for several nonmonotonic logics including logic programming with aggregates, default logic and a version of autoepistemic logic.

41 citations


Journal ArticleDOI
TL;DR: This work presents an effective method for finding a PPA in which the share of processor time allocated to each algorithm is fixed, gives bounds on the performance of the PPA over random instances, and evaluates the performance empirically on a collection of 23 state-of-the-art SAT algorithms.
Abstract: A wide range of combinatorial optimization algorithms have been developed for complex reasoning tasks. Frequently, no single algorithm outperforms all the others. This has raised interest in leveraging the performance of a collection of algorithms to improve performance. We show how to accomplish this using a Parallel Portfolio of Algorithms (PPA). A PPA is a collection of diverse algorithms for solving a single problem, all running concurrently on a single processor until a solution is produced. The performance of the portfolio may be controlled by assigning different shares of processor time to each algorithm. We present an effective method for finding a PPA in which the share of processor time allocated to each algorithm is fixed. Finding the optimal static schedule is shown to be an NP-complete problem for a general class of utility functions. We present bounds on the performance of the PPA over random instances and evaluate the performance empirically on a collection of 23 state-of-the-art SAT algorithms. The results show significant performance gains over the fastest individual algorithm in the collection.
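
A toy model of a static PPA's behaviour (an illustrative runtime model, not the paper's utility functions): running an algorithm at processor share $s$ stretches a standalone CPU time $t$ into wall-clock time $t/s$, and the portfolio finishes when its first member does.

```python
import random

def portfolio_time(runtimes, shares):
    """Wall-clock time of a static parallel portfolio on one instance.

    runtimes[i]: standalone CPU time algorithm i needs on the instance;
    shares[i]: its fixed fraction of the single processor.  The portfolio
    finishes when the first algorithm finds a solution.
    """
    return min(t / s for t, s in zip(runtimes, shares) if s > 0)

# Monte Carlo estimate of the expected portfolio time for two algorithms
# with heavy-tailed (lognormal) runtimes and an even 50/50 split.
est = sum(portfolio_time([random.lognormvariate(0, 2) for _ in range(2)],
                         [0.5, 0.5]) for _ in range(10000)) / 10000
print(est)
```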

39 citations


Journal ArticleDOI
TL;DR: Within this modeling framework one can express data clustering models, logic programs, ordinary and stochastic differential equations, branching processes, graph grammars, and stochastic chemical reaction kinetics, which makes the framework particularly suitable for applications in machine learning and multiscale scientific modeling.
Abstract: We define a class of probabilistic models in terms of an operator algebra of stochastic processes, and a representation for this class in terms of stochastic parameterized grammars. A syntactic specification of a grammar is formally mapped to semantics given in terms of a ring of operators, so that composition of grammars corresponds to operator addition or multiplication. The operators are generators for the time-evolution of stochastic processes. The dynamical evolution occurs in continuous time but is related to a corresponding discrete-time dynamics. An expansion of the exponential of such time-evolution operators can be used to derive a variety of simulation algorithms. Within this modeling framework one can express data clustering models, logic programs, ordinary and stochastic differential equations, branching processes, graph grammars, and stochastic chemical reaction kinetics. The mathematical formulation connects these apparently distant fields to one another and to mathematical methods from quantum field theory and operator algebra. Such broad expressiveness makes the framework particularly suitable for applications in machine learning and multiscale scientific modeling.
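
In notation resembling the paper's setup (the symbols here are generic), a grammar's semantics is a time-evolution generator $H$ acting on a probability distribution $p(t)$ over system states, with grammar composition mapping to operator addition:

$$\frac{d}{dt}\,p(t) = H\,p(t), \qquad p(t) = e^{tH}\,p(0), \qquad H = \sum_{r} H_r ,$$

where each $H_r$ is the generator contributed by one grammar rule; expanding the exponential term by term is what yields the family of simulation algorithms.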

39 citations


Journal Article
TL;DR: In this paper, the authors reformulate algorithm selection as a time allocation problem, where all candidate algorithms are run in parallel, and their relative priorities are continually updated based on runtime information, with the aim of minimizing the time to reach a desired performance level.
Abstract: Traditional Meta-Learning requires long training times, and is often focused on optimizing performance quality, neglecting computational complexity. Algorithm Portfolios are more robust, but present similar limitations. We reformulate algorithm selection as a time allocation problem: all candidate algorithms are run in parallel, and their relative priorities are continually updated based on runtime information, with the aim of minimizing the time to reach a desired performance level. Each algorithm’s priority is set based on its current time to solution, estimated according to a parametric model that is trained and used while solving a sequence of problems, gradually increasing its impact on the priority attribution. The use of censored sampling makes it possible to train the model efficiently.
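
Censored sampling here means that a run stopped before it finds a solution still informs the runtime model. A minimal sketch, assuming an exponential runtime model (an illustrative choice, not necessarily the paper's parametric family):

```python
def exponential_mle_censored(times, completed):
    """MLE of an exponential runtime rate from censored samples.

    times[i]: how long the algorithm ran on problem i; completed[i]:
    True if it solved the problem (uncensored), False if it was stopped
    first (right-censored).  For an exponential model the MLE is
    (#solved) / (total time), so unfinished runs still inform the fit.
    """
    solved = sum(completed)
    total = sum(times)
    return solved / total if total > 0 else 0.0

# Two finished runs and one run stopped at t=5 without a solution.
rate = exponential_mle_censored([2.0, 3.0, 5.0], [True, True, False])
print(1 / rate)  # estimated mean time to solution
```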

37 citations


Journal ArticleDOI
TL;DR: This work defines the approximate containment, and its variants k-containment and reliable containment, and shows that the latter is polynomial-time equivalent to the notorious limitedness problem in distance automata.
Abstract: We give a general framework for approximate query processing in semistructured databases. We focus on regular path queries, which are an integral part of most of the query languages for semistructured databases. To enable approximations, we allow the regular path queries to be distorted. The distortions are expressed in the system by using weighted regular expressions, which correspond to weighted regular transducers. After defining the notion of weighted approximate answers we show how to compute them in order of their proximity to the query. In the new approximate setting, query containment has to be redefined in order to take into account the quantitative proximity information in the query answers. For this, we define the approximate containment, and its variants k-containment and reliable containment. Then, we give an optimal algorithm for deciding the k-containment. Regarding the reliable approximate containment, we show that it is polynomial-time equivalent to the notorious limitedness problem in distance automata.

Journal ArticleDOI
TL;DR: Finite axiomatisations of functional, multivalued, and both functional and multivalued dependencies in nested databases supporting record and list constructors are proposed.
Abstract: The impact of the list constructor on two important classes of relational dependencies is investigated. Lists represent an inevitable data structure whenever order matters and data is allowed to occur repeatedly. The list constructor is therefore supported by many advanced data models such as genomic sequence, deductive and object-oriented data models, including XML. The article proposes finite axiomatisations of functional, multivalued, and both functional and multivalued dependencies in nested databases supporting record and list constructors. In order to capture different data models at a time, an abstract algebraic approach based on nested attributes is taken. The presence of the list constructor calls for a new inference rule which makes it possible to infer non-trivial functional dependencies from multivalued dependencies. Further differences from the relational theory become apparent when the independence of the inference rules is investigated. The extension of the relational theory to nested databases makes it possible to specify more real-world constraints and therefore increases the number of application domains.

Journal ArticleDOI
TL;DR: It is proved that, when the multi-issue utility functions are linear, the problem of computing the equilibrium is tractable, with complexity polynomial in the number of issues and linear in the deadline of bargaining.
Abstract: In this paper we study multi-issue alternating-offers bargaining in a perfect-information finite-horizon setting, we determine the pertinent subgame perfect equilibrium, and we provide an algorithm to compute it. The equilibrium is determined by making a novel use of backward induction together with convex programming techniques in multi-issue settings. We show that the agents reach an agreement immediately and that such an agreement is Pareto efficient. Furthermore, we prove that, when the multi-issue utility functions are linear, the problem of computing the equilibrium is tractable and the related complexity is polynomial in the number of issues and linear in the deadline of bargaining.
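
For intuition, backward induction in the classical single-issue, finite-horizon case (a worked toy version, not the paper's multi-issue setting): with deadline $T$ and discount factor $\delta$, let $x_t$ be the share of the surplus kept by the proposer at round $t$. The responder at round $t$ accepts any offer worth at least its discounted continuation payoff $\delta\,x_{t+1}$, so

$$x_T = 1, \qquad x_t = 1 - \delta\, x_{t+1} \quad (t < T),$$

and the round-1 proposal is accepted immediately, mirroring the immediate, Pareto-efficient agreement of the multi-issue equilibrium.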

Journal ArticleDOI
TL;DR: CoLPs integrate, in one unifying framework, the best of both the logic programming paradigm (a flexible rule-based representation and nonmonotonicity by means of negation as failure) and the description logics paradigm (decidable open domain reasoning).
Abstract: Open answer set programming (OASP) solves the lack of modularity in closed world answer set programming by allowing for the grounding of logic programs with an arbitrary non-empty countable superset of the program's constants. However, OASP is, in general, undecidable: the undecidable domino problem can be reduced to it. In order to regain decidability, we restrict the shape of logic programs, yielding conceptual logic programs (CoLPs). CoLPs are logic programs with unary and binary predicates (possibly inverted) where rules have a tree shape. Decidability of satisfiability checking of predicates w.r.t. CoLPs is shown by a reduction to non-emptiness checking of two-way alternating tree automata. We illustrate the expressiveness of CoLPs by simulating the description logic $\mathcal{SHIQ}$. CoLPs thus integrate, in one unifying framework, the best of both the logic programming paradigm (a flexible rule-based representation and nonmonotonicity by means of negation as failure) and the description logics paradigm (decidable open domain reasoning).

Journal ArticleDOI
TL;DR: This proposal provides the foundational grounds for developing computational methods for implementing the proposed semantics and makes it clearer how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning.
Abstract: This paper presents a novel revision of the framework of Hybrid Probabilistic Logic Programming, along with a complete semantics characterization, to enable the encoding of and reasoning about real-world applications. The language of the Hybrid Probabilistic Logic Programs framework is extended to allow the use of non-monotonic negation, and two alternative semantic characterizations are defined: stable probabilistic model semantics and probabilistic well-founded semantics. These semantics generalize the stable model semantics and well-founded semantics of traditional normal logic programs, and they reduce to the semantics of Hybrid Probabilistic Logic Programs for programs without negation. This is the first time that two different semantics for Hybrid Probabilistic Programs with non-monotonic negation, as well as their relationships, have been described. This proposal provides the foundational grounds for developing computational methods for implementing the proposed semantics. Furthermore, it makes it clearer how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a signed theory for repairing inconsistent databases, which can be used by a variety of off-the-shelf computational models in order to compute the corresponding solutions.
Abstract: We introduce a simple and practical method for repairing inconsistent databases. Given a possibly inconsistent database, the idea is to properly represent the underlying problem, i.e., to describe the possible ways of restoring its consistency. We do so by what we call signed formulae, and show how the "signed theory" that is obtained can be used by a variety of off-the-shelf computational models in order to compute the corresponding solutions, i.e., consistent repairs of the database.

Journal ArticleDOI
TL;DR: A tableau-based algorithm for obtaining a Büchi automaton from a formula in Dynamic Linear Time Temporal Logic (DLTL), a logic which extends LTL by indexing the until operator with regular programs.
Abstract: We present a tableau-based algorithm for obtaining a Büchi automaton from a formula in Dynamic Linear Time Temporal Logic (DLTL), a logic which extends LTL by indexing the until operator with regular programs. The construction of the states of the automaton is similar to the standard construction for LTL, but a different technique must be used to verify the fulfillment of until formulas. The resulting automaton is a Büchi automaton rather than a generalized one. The construction can be done on-the-fly, while checking for the emptiness of the automaton. We also extend the construction to the Product Version of DLTL.

Journal ArticleDOI
TL;DR: A composite approach is developed that symmetrically approximates the primal and dual optimization variables (effectively approximating both the objective function and the feasible region of the LP), leading to a formulation that is computationally feasible and suitable for solving constrained MDPs.
Abstract: A weakness of classical Markov decision processes (MDPs) is that they scale very poorly due to the flat state-space representation. Factored MDPs address this representational problem by exploiting problem structure to specify the transition and reward functions of an MDP in a compact manner. However, in general, solutions to factored MDPs do not retain the structure and compactness of the problem representation, forcing approximate solutions, with approximate linear programming (ALP) emerging as a promising MDP-approximation technique. To date, most ALP work has focused on the primal-LP formulation, while the dual LP, which forms the basis for solving constrained Markov problems, has received much less attention. We show that a straightforward linear approximation of the dual optimization variables is problematic, because some of the required computations cannot be carried out efficiently. Nonetheless, we develop a composite approach that symmetrically approximates the primal and dual optimization variables (effectively approximating both the objective function and the feasible region of the LP), leading to a formulation that is computationally feasible and suitable for solving constrained MDPs. We empirically show that this new ALP formulation also performs well on unconstrained problems.
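
For orientation, this is the standard primal ALP with a linear value-function approximation $V(s) \approx \sum_i w_i \phi_i(s)$ (the paper's contribution lies in treating the dual variables symmetrically):

$$\min_{w}\ \sum_{s}\alpha(s)\sum_{i} w_i\phi_i(s) \quad \text{s.t.} \quad \sum_{i} w_i\phi_i(s) \;\ge\; R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\sum_{i} w_i\phi_i(s') \qquad \forall s, a ,$$

where $\alpha$ is a state-relevance weighting; in factored MDPs the basis functions $\phi_i$ have small scope, which keeps the constraint set compactly representable.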

Journal ArticleDOI
TL;DR: This question is addressed for closed update strategies, which are based upon the constant-complement approach of Bancilhon and Spyratos, by way of a more general question: that of characterizing the complexity of axiomatization of views relative to the complexity of axiomatization of the main schema.
Abstract: It is well known that the complexity of testing the correctness of an arbitrary update to a database view can be far greater than the complexity of testing a corresponding update to the main schema. However, views are generally managed according to some protocol which limits the admissible updates to a subset of all possible changes. The question thus arises as to whether there is a more tractable relationship between these two complexities in the presence of such a protocol. In this paper, this question is addressed for closed update strategies, which are based upon the constant-complement approach of Bancilhon and Spyratos. The approach is to address a more general question: that of characterizing the complexity of axiomatization of views, relative to the complexity of axiomatization of the main schema. For schemata constrained by denial or consistency constraints, that is, statements which rule out certain situations, such as the equality-generating dependencies (EGDs) or, more specifically, the functional dependencies (FDs) of the relational model, a broad and comprehensive result is obtained in a very general framework which is not tied to the relational model in any way. It states that every such schema is governed by an equivalent set of constraints which embed into the component views, and which are no more complex than the original set. For schemata constrained by generating dependencies, of which tuple-generating dependencies (TGDs) in general and, more specifically, both join dependencies (JDs) and inclusion dependencies (INDs) are examples within the relational model, a similar result is obtained, but only within a context known as meet-uniform decompositions, which fails to recapture some important situations. To address the all-important case of relational schemata constrained by both FDs and INDs, a hybrid approach is also developed, in which the general theory regarding denial constraints is blended with a focused analysis of a special but very practical subset of the INDs known as fanout-free unary inclusion dependencies (fanout-free UINDs), to obtain results parallel to the above-mentioned cases: every such schema is governed by an equivalent set of constraints which embed into the component views, and which are no more complex than the original set. In all cases, the question of view update complexity is then answered via a corollary to this main result.

Journal ArticleDOI
TL;DR: A formalization in Coq of Common Knowledge Logic is presented and its adequacy is checked on case studies, which allow exploring experimentally the proof-theoretic side of Common Knowledge Logic.
Abstract: This paper presents a formalization in Coq of Common Knowledge Logic and checks its adequacy on case studies. These studies make it possible to explore experimentally the proof-theoretic side of Common Knowledge Logic. This work is original in that nobody has considered Higher Order Common Knowledge Logic from the point of view of proofs performed on a proof assistant. As a matter of fact, it is experimental by nature, as it tries to draw conclusions from experiments.

Journal ArticleDOI
TL;DR: This paper defines the novel notion of strong order equivalence for logic programs with preferences (ordered logic programs), and gives, for several semantics for preference handling, necessary and sufficient conditions for programs to be strongly order equivalent.
Abstract: Recently, strong equivalence for Answer Set Programming has been studied intensively, and was shown to be beneficial for modular programming and automated optimization. In this paper we define the novel notion of strong order equivalence for logic programs with preferences (ordered logic programs). Based on this definition we give, for several semantics for preference handling, necessary and sufficient conditions for programs to be strongly order equivalent. These results also allow us to associate a so-called SOE structure to each ordered logic program, such that two ordered logic programs are strongly order equivalent if and only if their SOE structures coincide. We also present the relationships among the studied semantics with respect to strong order equivalence, which differ considerably from their relationships with respect to preferred answer sets. Furthermore, we study the computational complexity of several reasoning tasks associated to strong order equivalence. Finally, based on the obtained results, we present, for the first time, simplification methods for ordered logic programs.

Journal ArticleDOI
TL;DR: New sequent forms of the famous Herbrand theorem are proved for first-order classical logic without equality; these forms give an approach to the construction and theoretical investigation of computer-oriented calculi for efficient logical inference search in the signature of an initial theory.
Abstract: New sequent forms of the famous Herbrand theorem are proved for first-order classical logic without equality. These forms use the original notion of an admissible substitution and a certain modification of the Herbrand universe, which is constructed from constants, special variables, and functional symbols occurring only in the signature of an initial theory. Other well-known forms of the Herbrand theorem are obtained as special cases of the sequent ones. Besides, the sequent forms give an approach to the construction and theoretical investigation of computer-oriented calculi for efficient logical inference search in the signature of an initial theory. In a comparatively simple way, they provide a technique for proving the completeness and soundness of the calculi.
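
For comparison, the classical form that the sequent versions generalize: for a quantifier-free formula $\varphi$ in first-order logic without equality,

$$\vdash \exists \bar{x}\,\varphi(\bar{x}) \quad\Longleftrightarrow\quad \vdash\ \varphi(\bar{t}_1) \vee \dots \vee \varphi(\bar{t}_m) \ \ \text{for some tuples of Herbrand terms } \bar{t}_1,\dots,\bar{t}_m ,$$

i.e., provability of an existential sentence reduces to the propositional provability of a finite disjunction of ground instances drawn from the Herbrand universe.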

Journal ArticleDOI
TL;DR: This paper presents a family of logics, based on an alternative interpretation of a default conditional, which are weaker than the commonly-accepted base conditional approach for defeasible reasoning.
Abstract: In nonmonotonic reasoning, a default conditional α ⇒ β has most often been informally interpreted as a defeasible version of a classical conditional, usually the material conditional. There is, however, an alternative interpretation, in which a default is regarded essentially as a rule, leading from premises to conclusion. In this paper, we present a family of logics based on this alternative interpretation. A general semantic framework under this rule-based interpretation is developed, and associated proof theories for a family of weak conditional logics are specified. Nonmonotonic inference is easily defined in these logics. Interestingly, the logics presented here are weaker than the commonly accepted base conditional approach for defeasible reasoning. However, this approach resolves problems that have been associated with previous approaches.

Journal ArticleDOI
TL;DR: A semantic model, Probabilistic Constraint Nets (PCN), is developed for probabilistic hybrid systems, allowing systems with discrete and continuous time/variables, synchronous as well as asynchronous event structures and uncertain dynamics to be modeled in a unitary framework.
Abstract: The development of autonomous agents, such as mobile robots and software agents, has generated considerable research in recent years. Robotic systems, which are usually built from a mixture of continuous (analog) and discrete (digital) components, are often referred to as hybrid dynamical systems. Traditional approaches to real-time hybrid systems usually define behaviors purely in terms of determinism or sometimes non-determinism. However, this is insufficient as real-time dynamical systems very often exhibit uncertain behavior. To address this issue, we develop a semantic model, Probabilistic Constraint Nets (PCN), for probabilistic hybrid systems. PCN captures the most general structure of dynamic systems, allowing systems with discrete and continuous time/variables, synchronous as well as asynchronous event structures, and uncertain dynamics to be modeled in a unitary framework. Based on a formal mathematical paradigm exploiting abstract algebra, topology and measure theory, PCN provides a rigorous formal programming semantics for the design of hybrid real-time embedded systems exhibiting uncertainty.

Journal ArticleDOI
TL;DR: Temporal Representation and Reasoning are vital elements within many of the most interesting and useful developments in Computer Science and Artificial Intelligence, and are becoming increasingly important with the advent of ubiquitous computing, grid computing and the Internet.
Abstract: Temporal Representation and Reasoning are vital elements within many of the most interesting and useful developments in Computer Science and Artificial Intelligence. These areas are becoming increasingly important with the advent of ubiquitous computing, grid computing and the Internet, where large amounts of information and processes are available, and where all these may be evolving in time. Temporal techniques are particularly important in: handling large quantities of data, especially analysing and mining this data; in simulating and analysing the temporal evolution of natural processes and organisms; in assessing security and safety aspects; in dynamic knowledge management; in representing and reasoning about service description and composition on the WWW; and in ensuring the dependability of complex dynamic and distributed systems. All these aspects occur within growing areas of research and

Journal ArticleDOI
TL;DR: An adaptive approach to merging possibilistic knowledge bases that deploys multiple operators instead of a single operator in the merging process, resulting in a possibilistic knowledge base which contains more information than that obtained by the t-conorm based merging methods.
Abstract: We propose an adaptive approach to merging possibilistic knowledge bases that deploys multiple operators instead of a single operator in the merging process. The merging approach consists of two steps: the splitting step and the combination step. The splitting step splits each knowledge base into two subbases, and then in the second step, different classes of subbases are combined using different operators. Our merging approach is applied to knowledge bases which are self-consistent and results in a knowledge base which is also consistent. Two operators are proposed based on two different splitting methods. Both operators result in a possibilistic knowledge base which contains more information than that obtained by the t-conorm (such as the maximum) based merging methods. In the flat case, one of the operators provides a good alternative to syntax-based merging operators in classical logic.

Journal ArticleDOI
TL;DR: It is proved that reasoning on temporal class diagrams is an undecidable problem, both on unrestricted models and on finite ones.
Abstract: This paper introduces a temporal class diagram language useful to model temporally varying data. The atemporal portion of the language contains the core constructors available in both EER diagrams and UML class diagrams. The temporal part of the language is able to distinguish between temporal and atemporal constructs, and it has the ability to represent dynamic constraints between classes. The language is characterized by a model-theoretic (temporal) semantics. Reasoning services such as logical implication and satisfiability are also defined. We show that reasoning on finite models is different from reasoning on unrestricted ones. Then, we prove that reasoning on temporal class diagrams is an undecidable problem, both on unrestricted models and on finite ones.

Journal ArticleDOI
TL;DR: A representation language is defined which enables us to handle each temporal point as a complex object enriched with all the structure it is immersed in; the language is then used to provide a Presburger semantics for classes of symbolic languages coping with periodicity.
Abstract: In several areas, including Temporal DataBases (TDB), Presburger arithmetic has been chosen as a standard reference for the semantics of languages representing periodic time, and to study their expressiveness. On the other hand, the proposal of most symbolic languages in the AI literature has not been paired with an adequate semantic counterpart, making the task of studying the expressiveness of such languages and of comparing them a very complex one. In this paper, we first define a representation language which enables us to handle each temporal point as a complex object enriched with all the structure it is immersed in, and then we use it in order to provide a Presburger semantics for classes of symbolic languages coping with periodicity. Finally, we use the semantics to compare a few AI and TDB symbolic approaches.
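
As a small example of the kind of periodicity Presburger arithmetic captures (a generic illustration, not an example from the paper): the set of time points "every 7th day, starting at day 2" is definable without multiplication as

$$P(t) \;\equiv\; \exists k\,\bigl(k \ge 0 \,\wedge\, t = 7k + 2\bigr),$$

where $7k$ abbreviates the sum $k + \dots + k$; equivalently, $t \equiv 2 \pmod 7$.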

Journal ArticleDOI
TL;DR: This paper presents an explicit semidefinite programming problem with dimension linear in the size of the Tseitin instance, and proves that it characterizes the satisfiability of these instances, thus providing an explicit certificate of satisfiability or unsatisfiability.
Abstract: This paper is concerned with the application of semidefinite programming to the satisfiability problem, and in particular with using semidefinite liftings to efficiently obtain proofs of unsatisfiability. We focus on the Tseitin satisfiability instances which are known to be hard for many proof systems. For Tseitin instances based on toroidal grid graphs, we present an explicit semidefinite programming problem with dimension linear in the size of the Tseitin instance, and prove that it characterizes the satisfiability of these instances, thus providing an explicit certificate of satisfiability or unsatisfiability.
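
The general shape of such a lifting (a generic $\{\pm 1\}$ relaxation, not the paper's specific construction for toroidal grids): model each Boolean variable as $x_i \in \{-1, +1\}$ and relax the rank-one matrix $xx^{\mathsf T}$ to a positive semidefinite variable,

$$X \succeq 0, \qquad X_{ii} = 1 \ \ (i = 1,\dots,n),$$

with additional linear constraints on the entries of $X$ encoding the parity constraints of the instance; infeasibility of the resulting SDP is then an explicit certificate of unsatisfiability.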

Journal ArticleDOI
TL;DR: It is shown how any ECTL+ formula can be translated to a normal form whose structure was initially defined for CTL and then used for ECTL, which enables the application to ECTL+ of the clausal resolution technique defined over the set of clauses.
Abstract: We expand the applicability of the clausal resolution technique to the branching-time temporal logic ECTL+. ECTL+ is strictly more expressive than the basic computation tree logic CTL and its extension, ECTL, as it allows Boolean combinations of fairness and single temporal operators. We show how any ECTL+ formula can be translated to a normal form whose structure was initially defined for CTL and then used for ECTL. This enables us to apply to ECTL+ a resolution technique defined over the set of clauses. Both the correctness of the method and the complexity of the transformation procedure are given.