
Showing papers in "Annals of Mathematics and Artificial Intelligence in 2001"


Journal ArticleDOI
TL;DR: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space, and organizes the coverage algorithms into heuristic, approximate, partial-approximate and exact cellular decompositions.
Abstract: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms.
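The approximate cellular decomposition approaches surveyed here discretize the free space into a grid of cells and plan a path that visits every free cell. As a minimal illustration of the idea (our sketch, not a specific algorithm from the survey; the grid representation and the boustrophedon sweep are assumptions for the example), consider covering an occupancy grid with a back-and-forth sweep:

```python
# Minimal sketch of grid-based coverage: visit every free cell of an
# occupancy grid with a boustrophedon (back-and-forth) sweep.
# Illustrative only; a real planner would insert connecting sub-paths
# when obstacles split a row into disconnected segments.

def boustrophedon_cover(grid):
    """grid[r][c] is True for free cells; returns a visit order of free cells."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if row[c]:                    # skip occupied cells
                path.append((r, c))
    return path

if __name__ == "__main__":
    grid = [[True, True, False],
            [True, True, True],
            [True, False, True]]
    print(boustrophedon_cover(grid))
```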

1,206 citations


Journal ArticleDOI
TL;DR: A general formula for the density of a vine dependent distribution is derived, which generalizes the well-known density formula for belief nets based on the decomposition of belief nets into cliques and allows a simple proof of the Information Decomposition Theorem for a regular vine.
Abstract: A vine is a new graphical model for dependent random variables. Vines generalize the Markov trees often used in modeling multivariate distributions. They differ from Markov trees and Bayesian belief nets in that the concept of conditional independence is weakened to allow for various forms of conditional dependence. A general formula for the density of a vine dependent distribution is derived. This generalizes the well-known density formula for belief nets based on the decomposition of belief nets into cliques. Furthermore, the formula allows a simple proof of the Information Decomposition Theorem for a regular vine. The problem of (conditional) sampling is discussed, and Gibbs sampling is proposed to carry out sampling from conditional vine dependent distributions. The so-called ‘canonical vines’ built on highest degree trees offer the most efficient structure for Gibbs sampling.
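For orientation, the density factorization referred to above takes the following form for a canonical vine on n variables (this is the standard pair-copula formula from the later vine literature, quoted here as a sketch rather than verbatim from the paper):

```latex
f(x_1,\dots,x_n)
  = \prod_{k=1}^{n} f_k(x_k)
    \prod_{j=1}^{n-1} \prod_{i=1}^{n-j}
      c_{j,\,j+i \mid 1,\dots,j-1}\bigl(
        F(x_j \mid x_1,\dots,x_{j-1}),\,
        F(x_{j+i} \mid x_1,\dots,x_{j-1}) \bigr),
```

where the f_k are marginal densities, the c terms are bivariate copula densities attached to the edges of the vine, and F(· | ·) denotes conditional distribution functions.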

836 citations


Journal ArticleDOI
TL;DR: This paper clarifies a pervasive confusion between possibility theory axioms and fuzzy set basic connectives by demonstrating that any belief representation where compositionality is taken for granted is bound to at worst collapse to a Boolean truth assignment and at best to a poorly expressive tool.
Abstract: There has been a long-lasting misunderstanding in the literature of artificial intelligence and uncertainty modeling, regarding the role of fuzzy set theory and many-valued logics. The recurring question is that of the mathematical and pragmatic meaningfulness of a compositional calculus and the validity of the excluded middle law. This confusion pervades the early developments of probabilistic logic, despite early warnings of some philosophers of probability. This paper tries to clarify this situation. It emphasizes three main points. First, it suggests that the root of the controversies lies in the unfortunate confusion between degrees of belief and what logicians call “degrees of truth”. The latter are usually compositional, while the former cannot be so. This claim is first illustrated by laying bare the non-compositional belief representation embedded in the standard propositional calculus. It turns out to be an all-or-nothing version of possibility theory. This framework is then extended to discuss the case of fuzzy logic versus graded possibility theory. Next, it is demonstrated that any belief representation where compositionality is taken for granted is bound to at worst collapse to a Boolean truth assignment and at best to a poorly expressive tool. Lastly, some claims pertaining to an alleged compositionality of possibility theory are refuted, thus clarifying a pervasive confusion between possibility theory axioms and fuzzy set basic connectives.
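The non-compositionality of degrees of belief admits a two-line counterexample (ours, for illustration). Toss a fair coin and let A be “heads”:

```latex
B = A:\quad P(A) = P(B) = \tfrac12,\; P(A \wedge B) = \tfrac12;
\qquad
B = \neg A:\quad P(A) = P(B) = \tfrac12,\; P(A \wedge B) = 0.
```

Since both cases agree on P(A) and P(B) but differ on P(A ∧ B), no function g can satisfy P(A ∧ B) = g(P(A), P(B)): degrees of belief cannot be truth-functional.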

387 citations


Journal ArticleDOI
TL;DR: This paper surveys the temporal extensions of description logics appearing in the literature and considers a large spectrum of approaches, from the loosely coupled ones to the most principled ones, which adopt a combined semantics for the abstract and the temporal domains.
Abstract: This paper surveys the temporal extensions of description logics appearing in the literature. The analysis considers a large spectrum of approaches appearing in the temporal description logics area: from the loosely coupled approaches – which comprise, for example, the enhancement of simple description logics with a constraint-based mechanism – to the most principled ones – which consider a combined semantics for the abstract and the temporal domains. It will be shown how these latter approaches have a strict connection with temporal logics. Advantages of using temporal description logics are their high expressivity combined with desirable computational properties – such as decidability, soundness and completeness of deduction procedures. In this survey the computational properties of various families of temporal description logics will be pointed out.

204 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new sensing modality for multi-robot exploration based on using a pair of robots that observe each other, and act in concert to reduce odometry errors.
Abstract: This paper presents a new sensing modality for multi-robot exploration. The approach is based on using a pair of robots that observe each other and act in concert to reduce odometry errors. We assume the robots can both directly sense nearby obstacles and see each other. The proposed approach improves the quality of the map by reducing the inaccuracies that occur over time from dead reckoning errors. Furthermore, by exploiting the ability of the robots to see each other, we can detect opaque obstacles in the environment independently of their surface reflectance properties. Two different algorithms, chosen according to the size of the environment, are introduced, together with a complexity analysis and experimental results both in simulation and with real robots.

161 citations


Journal ArticleDOI
TL;DR: This work studies the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance, and studies two simple real-time search methods that differ only in how the markings are updated.
Abstract: Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build. This makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Ant robots cannot use conventional planning methods due to their limited sensing and computational capabilities. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, and cannot plan complete paths. The ant robots do not even need to be localized, which completely eliminates the need to solve difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing this (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising. We show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations even in simple terrains that correspond to (planar) undirected trees.
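The two marking-update rules studied in this line of work correspond, in the real-time search literature, to node counting and LRTA*-style value updates. The sketch below is our reconstruction of the scheme on a grid (names and the tie-breaking rule are ours, not the paper's code): each agent moves to the neighbor with the smallest marking and updates the marking of the cell it leaves.

```python
# Sketch of marking-based coverage (our reconstruction). An agent always
# moves to the neighbor with the smallest marking; the two update rules
# from the real-time search literature are:
#   node counting:  u(s) += 1
#   LRTA*-style:    u(s) = u(best neighbor) + 1

def step(pos, marks, neighbors, rule="lrta"):
    nbrs = neighbors(pos)
    best = min(nbrs, key=lambda s: marks.get(s, 0))
    if rule == "count":
        marks[pos] = marks.get(pos, 0) + 1        # node counting
    else:
        marks[pos] = marks.get(best, 0) + 1       # LRTA*-style update
    return best

def cover(width, height, rule="lrta", steps=100_000):
    def neighbors(p):
        x, y = p
        cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(a, b) for a, b in cand if 0 <= a < width and 0 <= b < height]
    marks, pos, seen = {}, (0, 0), {(0, 0)}
    for _ in range(steps):
        pos = step(pos, marks, neighbors, rule)
        seen.add(pos)
        if len(seen) == width * height:
            return True                           # terrain fully covered
    return False

print(cover(5, 5, "lrta"), cover(5, 5, "count"))
```

Note how the agent needs no map, no memory beyond its current cell, and no localization: everything it consults is stored in the terrain itself.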

159 citations


Journal ArticleDOI
TL;DR: TALplanner is presented: a forward-chaining planner based on the use of domain-dependent search control knowledge represented as formulas in the Temporal Action Logic (TAL), a narrative-based linear metric time logic used for reasoning about action and change in incompletely specified dynamic environments.
Abstract: We present TALplanner, a forward-chaining planner based on the use of domain-dependent search control knowledge represented as formulas in the Temporal Action Logic (TAL). TAL is a narrative-based linear metric time logic used for reasoning about action and change in incompletely specified dynamic environments. TAL is used as the formal semantic basis for TALplanner, where a TAL goal narrative with control formulas is input to TALplanner, which then generates a TAL narrative that entails the goal and control formulas. The sequential version of TALplanner is presented. The expressivity of plan operators is then extended to deal with an interesting class of resource types. An algorithm for generating concurrent plans, where operators have varying durations and internal state, is also presented. All versions of TALplanner have been implemented. The potential of these techniques is demonstrated by applying TALplanner to a number of standard planning benchmarks in the literature.
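The core control loop, forward state-space search pruned by declarative control knowledge, can be conveyed generically. The sketch below is an illustration of that idea only: `control_ok` stands in for TALplanner's entailment check of TAL control formulas against the evolving narrative, and all names are ours.

```python
from collections import deque

# Generic sketch of forward-chaining planning pruned by domain-dependent
# control rules (illustrates the idea; not TAL syntax or TALplanner code).

def plan(init, goal_test, applicable, apply_op, control_ok, limit=100_000):
    """control_ok(state, plan_prefix) prunes any branch that violates a
    control rule, which is what makes forward search tractable."""
    frontier = deque([(init, [])])
    seen = {init}
    while frontier and limit > 0:
        limit -= 1
        state, prefix = frontier.popleft()
        if goal_test(state):
            return prefix
        for op in applicable(state):
            nxt = apply_op(state, op)
            if nxt in seen or not control_ok(nxt, prefix + [op]):
                continue                  # pruned by control knowledge
            seen.add(nxt)
            frontier.append((nxt, prefix + [op]))
    return None
```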

159 citations


Journal ArticleDOI
TL;DR: It is shown in this paper how the emergence of fuzzy set theory and the theory of monotone measures considerably expanded the framework for formalizing uncertainty and suggested many new types of uncertainty theories.
Abstract: It is shown in this paper how the emergence of fuzzy set theory and the theory of monotone measures considerably expanded the framework for formalizing uncertainty and suggested many new types of uncertainty theories. The paper focuses on issues regarding the measurement of the amount of relevant uncertainty (predictive, prescriptive, diagnostic, etc.) in nondeterministic systems formalized in terms of the various uncertainty theories. It is explained how information produced by an action can be measured by the reduction of uncertainty produced by the action. Results regarding measures of uncertainty (and uncertainty-based information) in possibility theory, Dempster–Shafer theory, and the various theories of imprecise probabilities are surveyed. The significance of these results in developing sound methodological principles of uncertainty and uncertainty-based information is discussed.
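One concrete instance of such a measure: for a finite possibility distribution r_1 = 1 ≥ r_2 ≥ … ≥ r_n (with r_{n+1} = 0 by convention), the U-uncertainty, the standard nonspecificity measure of possibility theory, is

```latex
U(r) = \sum_{i=2}^{n} (r_i - r_{i+1}) \log_2 i ,
```

which reduces to the Hartley measure log_2 |A| when r is the characteristic function of a crisp set A. (Quoted from the standard literature for orientation; the paper surveys this and several further measures.)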

145 citations


Journal ArticleDOI
TL;DR: A dialectical argumentation framework for qualitative representation of epistemic uncertainty in scientific domains is articulated, and such a formalism for representing uncertainty has value in domains with only limited knowledge, where experimental evidence is ambiguous or conflicting, or where agreement between different stakeholders on the quantification of uncertainty is difficult to achieve.
Abstract: We articulate a dialectical argumentation framework for qualitative representation of epistemic uncertainty in scientific domains. The framework is grounded in specific philosophies of science and theories of rational mutual discourse. We study the formal properties of our framework and provide it with a game-theoretic semantics. With this semantics, we examine the relationship between snapshots of the debate in the framework and the long-run position of the debate, and prove a result directly analogous to the standard (Neyman–Pearson) approach to statistical hypothesis testing. We believe this formalism for representing uncertainty has value in domains with only limited knowledge, where experimental evidence is ambiguous or conflicting, or where agreement between different stakeholders on the quantification of uncertainty is difficult to achieve. All three of these conditions are found in assessments of carcinogenic risk for new chemicals.

92 citations


Journal ArticleDOI
TL;DR: It is shown how the concept of separoid unifies a variety of notions of ‘irrelevance’ arising out of different formalisms for representing uncertainty in Probability, Statistics, Artificial Intelligence, and other fields.
Abstract: We introduce an axiomatic definition of a mathematical structure that we term a separoid. We develop some general mathematical properties of separoids and related axiom systems, as well as connections with other mathematical structures, such as distributive lattices, Hilbert spaces, and graphs. And we show, by means of a detailed account of a number of models of the separoid axioms, how the concept of separoid unifies a variety of notions of ‘irrelevance’ arising out of different formalisms for representing uncertainty in Probability, Statistics, Artificial Intelligence, and other fields.
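For orientation, the most familiar axiom system of this kind is the semi-graphoid system for conditional independence X ⊥ Y | Z; the separoid axioms of the paper are stated over a join semilattice and are not identical to it, but the flavor is the same:

```latex
\begin{aligned}
&\text{Symmetry:}      && X \perp Y \mid Z \;\Rightarrow\; Y \perp X \mid Z\\
&\text{Decomposition:} && X \perp (Y,W) \mid Z \;\Rightarrow\; X \perp Y \mid Z\\
&\text{Weak union:}    && X \perp (Y,W) \mid Z \;\Rightarrow\; X \perp Y \mid (Z,W)\\
&\text{Contraction:}   && X \perp Y \mid Z \ \text{ and } \ X \perp W \mid (Z,Y) \;\Rightarrow\; X \perp (Y,W) \mid Z
\end{aligned}
```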

82 citations


Journal ArticleDOI
TL;DR: This paper describes the TAME strategies, their use, and how their implementation exploits the structure of specifications and various PVS features, and describes several features, currently unsupported in PVS, that would either allow additional “natural” proof steps in TAME or allow existing TAME proof steps to be improved.
Abstract: TAME (Timed Automata Modeling Environment), an interface to the theorem proving system PVS, is designed for proving properties of three classes of automata: I/O automata, Lynch–Vaandrager timed automata, and SCR automata. TAME provides templates for specifying these automata, a set of auxiliary theories, and a set of specialized PVS strategies that rely on these theories and on the structure of automata defined using the templates. Use of the TAME strategies simplifies the process of proving automaton properties, particularly state and transition invariants. TAME provides two types of strategies: strategies for “automatic” proof and strategies designed to implement “natural” proof steps, i.e., proof steps that mimic the high-level steps in typical natural language proofs. TAME's “natural” proof steps can be used both to mechanically check hand proofs in a straightforward way and to create proof scripts that can be understood without executing them in the PVS proof checker. Several new PVS features can be used to obtain better control and efficiency in user-defined strategies such as those used in TAME. This paper describes the TAME strategies, their use, and how their implementation exploits the structure of specifications and various PVS features. It also describes several features, currently unsupported in PVS, that would either allow additional “natural” proof steps in TAME or allow existing TAME proof steps to be improved. Lessons learned from TAME relevant to the development of similar specialized interfaces to PVS or other theorem provers are discussed.

Journal ArticleDOI
TL;DR: This paper details the general, principled Broadcast of Local Eligibility (BLE) technique for role assumption in behavior-space-situated systems, and provides experimental results from the CMOMMT target-tracking task.
Abstract: Ant-like systems take advantage of agents' situatedness to reduce or eliminate the need for centralized control or global knowledge. This reduces the need for complexity of individuals and leads to robust, scalable systems. Such insect-inspired situated approaches have proven effective both for task performance and task allocation. The desire for general, principled techniques for situated interaction has led us to study the exploitation of abstract situatedness – situatedness in non-physical environments. The port-arbitrated behavior-based control approach provides a well-structured abstract behavior space in which agents can participate in situated interaction. We focus on the problem of role assumption, distributed task allocation in which each agent selects its own task-performing role. This paper details our general, principled Broadcast of Local Eligibility (BLE) technique for role-assumption in such behavior-space-situated systems, and provides experimental results from the CMOMMT target-tracking task.
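The BLE mechanism itself is compact: every robot broadcasts a locally computed eligibility for each role, and a robot claims a role exactly when its own eligibility is the best it has heard, thereby inhibiting that role on its peers. The sketch below is our centralized, single-process rendition of that selection rule (names and the tie-break by robot id are our assumptions):

```python
# Single-process sketch of Broadcast of Local Eligibility (BLE) style role
# assumption: repeatedly let the globally best (robot, role) claim stand,
# inhibiting that role elsewhere and committing that robot.

def assign_roles(eligibility):
    """eligibility[robot][role] -> score; returns {role: robot}."""
    triples = sorted(
        ((score, robot, role)
         for robot, scores in eligibility.items()
         for role, score in scores.items()),
        key=lambda t: (-t[0], t[1]))      # best score first; id breaks ties
    assignment, busy = {}, set()
    for score, robot, role in triples:
        if role in assignment or robot in busy:
            continue                      # role inhibited / robot committed
        assignment[role] = robot
        busy.add(robot)
    return assignment

print(assign_roles({
    "r1": {"track_A": 0.9, "track_B": 0.4},
    "r2": {"track_A": 0.8, "track_B": 0.7},
}))   # {'track_A': 'r1', 'track_B': 'r2'}
```

In the real system this selection emerges from message passing over behavior ports rather than from a central sort.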

Journal ArticleDOI
TL;DR: A construction algorithm is obtained that, even for RRBNs representing models with complex first-order and statistical dependencies, generates standard Bayesian networks of size polynomial in the size of the domain given in a specific application instance.
Abstract: A number of representation systems have been proposed that extend the purely propositional Bayesian network paradigm with representation tools for some types of first-order probabilistic dependencies. Examples of such systems are dynamic Bayesian networks and systems for knowledge based model construction. We can identify the representation of probabilistic relational models as a common well-defined semantic core of such systems. Recursive relational Bayesian networks (RRBNs) are a framework for the representation of probabilistic relational models. A main design goal for RRBNs is to achieve greatest possible expressiveness with as few elementary syntactic constructs as possible. The advantage of such an approach is that a system based on a small number of elementary constructs will be much more amenable to a thorough mathematical investigation of its semantic and algorithmic properties than a system based on a larger number of high-level constructs. In this paper we show that with RRBNs we have achieved our goal by showing, first, how to solve within that framework a number of non-trivial representation problems. In the second part of the paper we show how to construct, from an RRBN and a specific query, a standard Bayesian network in which the answer to the query can be computed with standard inference algorithms. Here the simplicity of the underlying representation framework greatly facilitates the development of simple algorithms and correctness proofs. As a result we obtain a construction algorithm that, even for RRBNs representing models with complex first-order and statistical dependencies, generates standard Bayesian networks of size polynomial in the size of the domain given in a specific application instance.

Journal ArticleDOI
TL;DR: It turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and one gets the “natural” axioms for many different (besides probability) conditional measures.
Abstract: Our starting point is a definition of conditional event E|H which differs from many seemingly “similar” ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same “third” value u (“undetermined”) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, “compulsory” – choice of the relevant operations among conditional events) the “natural” axioms for many different (besides probability) conditional measures.
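Concretely (our rendering of the standard de Finetti-style definition, with the modification described above): E|H is evaluated as

```latex
t(E \mid H) =
\begin{cases}
1 & \text{if } E \wedge H \text{ occurs},\\
0 & \text{if } E^{c} \wedge H \text{ occurs},\\
P(E \mid H) & \text{if } H^{c} \text{ occurs},
\end{cases}
```

so that the “third” value is no longer a fixed symbol u but a number in [0, 1] depending on E|H, which is what allows t to serve as a conditional uncertainty measure.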


Journal ArticleDOI
TL;DR: A complete characterization of closed sets of cardinality constraints is obtained and Armstrong databases are constructed for these constraint sets, which are of special interest for example-based deduction in database design.
Abstract: In database design, integrity constraints are used to express database semantics. They specify the way in which the elements of a database are associated with each other. The implication problem asks whether a given set of constraints entails further constraints. In this paper, we study the finite implication problem for cardinality constraints. Our main result is a complete characterization of closed sets of cardinality constraints. Similar results are obtained for constraint sets containing not only cardinality constraints, but also key and functional dependencies. Moreover, we construct Armstrong databases for these constraint sets, which are of special interest for example-based deduction in database design.
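A small example of such an implication (our illustration; we read card(R, X) ≤ b as “at most b tuples of R agree on all attributes in X”, which may differ from the paper's exact notation):

```latex
\mathrm{card}(R, X) \le b \;\models\; \mathrm{card}(R, X \cup Y) \le b ,
```

since any set of tuples agreeing on all of X ∪ Y in particular agrees on X, and hence has at most b elements.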

Journal ArticleDOI
TL;DR: The paper analyzes the expressiveness of known symbolic formalisms for the representation of granularities, using the mathematical characterization as a reference model, and proposes a significant extension to the collection formalism defined in [15] in order to capture a practically interesting class of periodic granularities.
Abstract: In the recent literature on time representation, an effort has been made to characterize the notion of time granularity and the relationships between granularities. The main goals are having a common framework for their specification, and allowing the interoperability of systems adopting different time granularities. This paper considers the mathematical characterization of finite and periodic time granularities, and investigates the requirements for a user-friendly symbolic formalism that could be used for their specification. Instead of proposing yet another formalism, the paper analyzes the expressiveness of known symbolic formalisms for the representation of granularities, using the mathematical characterization as a reference model. Based on this analysis, a significant extension to the collection formalism defined in [15] is proposed, in order to capture a practically interesting class of periodic granularities.

Journal ArticleDOI
TL;DR: A logical language to represent and reason with knowledge about dynamic worlds in which actions have uncertain effects and the notion of Randomly Reactive Automata is developed in order to specify the semantics of the Probabilistic Situation Calculus.
Abstract: In this article we propose a Probabilistic Situation Calculus logical language to represent and reason with knowledge about dynamic worlds in which actions have uncertain effects. Uncertain effects are modeled by dividing an action into two subparts: a deterministic (agent produced) input and a probabilistic reaction (produced by nature). We assume that the probabilities of the reactions have known distributions. Our logical language is an extension of the Situation Calculus in the style proposed by Raymond Reiter. There are three aspects to this work. First, we extend the language in order to accommodate the necessary distinctions (e.g., the separation of actions into inputs and reactions). Second, we develop the notion of Randomly Reactive Automata in order to specify the semantics of our Probabilistic Situation Calculus. Finally, we develop a reasoning system in MATHEMATICA capable of performing temporal projection in the Probabilistic Situation Calculus.
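The input/reaction split lends itself to a simple operational reading (our Monte Carlo illustration in Python, not the paper's MATHEMATICA system): the agent supplies a deterministic input, nature samples a reaction from its known distribution, and temporal projection estimates the probability of a fluent after a sequence of inputs.

```python
import random

# Sketch of the input/reaction decomposition: an action is a deterministic
# agent input plus a probabilistic reaction drawn by nature from a known
# distribution. Temporal projection by Monte Carlo. Illustrative only.

def do(state, inp, reactions):
    """reactions[inp]: list of (probability, effect_fn) pairs summing to 1."""
    r = random.random()
    for p, effect in reactions[inp]:
        r -= p
        if r <= 0:
            return effect(state)
    return state

def project(state, inputs, reactions, query, trials=10_000):
    hits = 0
    for _ in range(trials):
        s = state
        for inp in inputs:
            s = do(s, inp, reactions)
        hits += query(s)
    return hits / trials

# Hypothetical example: a "grasp" input succeeds with probability 0.9.
reactions = {"grasp": [(0.9, lambda s: {**s, "holding": True}),
                       (0.1, lambda s: s)]}
print(project({"holding": False}, ["grasp", "grasp"],
              reactions, lambda s: s["holding"]))   # close to 0.99
```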

Journal ArticleDOI
TL;DR: This paper presents a taxonomy of parallel theorem-proving methods based on the control of search, the granularity of parallelism and the nature of the method, and analyzes how the different approaches to parallelization affect theControl of search.
Abstract: This paper presents a taxonomy of parallel theorem-proving methods based on the control of search (e.g., master–slaves versus peer processes), the granularity of parallelism (e.g., fine, medium and coarse grain) and the nature of the method (e.g., ordering-based versus subgoal-reduction). We analyze how the different approaches to parallelization affect the control of search: while fine and medium-grain methods, as well as master–slaves methods, generally do not modify the sequential search plan, parallel-search methods may combine sequential search plans (multi-search) or extend the search plan with the capability of subdividing the search space (distributed search). Precisely because the search plan is modified, the latter methods may produce radically different searches than their sequential base, as exemplified by the first distributed proof of the Robbins theorem generated by the Modified Clause-Diffusion prover Peers-mcd. An overview of the state of the field and directions for future research conclude the paper.

Journal ArticleDOI
TL;DR: It is proved that maintaining the object orientation imposed by the prior model will increase the learning speed in object-oriented domains, and a method is proposed to efficiently estimate the probability parameters in domains that are not strictly object oriented.
Abstract: This paper describes a method for parameter learning in Object-Oriented Bayesian Networks (OOBNs). We propose a methodology for learning parameters in OOBNs, and prove that maintaining the object orientation imposed by the prior model will increase the learning speed in object-oriented domains. We also propose a method to efficiently estimate the probability parameters in domains that are not strictly object oriented. Finally, we attack type uncertainty, a special case of model uncertainty typical to object-oriented domains.
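The speed-up from maintaining object orientation comes from parameter tying: all instances of a class share one conditional probability table, so their observed counts pool. A minimal complete-data maximum-likelihood sketch of the tying step (ours; the paper's method handles incomplete data via EM and is more general):

```python
from collections import Counter

# Parameter tying across object instances: instances of the same class
# share one CPT, so counts from all of them are pooled before normalizing.

def learn_tied_cpt(cases, instances):
    """cases: list of dicts mapping node name -> observed value.
    instances: (child, (parents...)) tuples, all of the SAME class."""
    joint, parent = Counter(), Counter()
    for case in cases:
        for child, parents in instances:       # pool counts over instances
            pa = tuple(case[p] for p in parents)
            joint[(case[child], pa)] += 1
            parent[pa] += 1
    return {key: n / parent[key[1]] for key, n in joint.items()}

cases = [{"x1": 1, "y1": 1, "x2": 0, "y2": 0},
         {"x1": 1, "y1": 0, "x2": 1, "y2": 1}]
# y1|x1 and y2|x2 are instances of one class, hence share their CPT.
print(learn_tied_cpt(cases, [("y1", ("x1",)), ("y2", ("x2",))]))
```

With n instances of a class, each data case contributes n pooled counts to the shared table, which is the source of the faster learning the paper proves.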

Journal ArticleDOI
TL;DR: It is shown that most of the query equivalences of classical relational algebra carry over to the algebra on probabilistic complex value relations, which means that query optimization techniques for Classical relational algebra can easily be applied to optimize queries on Probabilisticcomplex value relations.
Abstract: We present a probabilistic data model for complex values. More precisely, we introduce probabilistic complex value relations, which combine the concept of probabilistic relations with the idea of complex values in a uniform framework. We elaborate a model-theoretic definition of probabilistic combination strategies, which has a rigorous foundation on probability theory. We then define an algebra for querying database instances, which comprises the operations of selection, projection, renaming, join, Cartesian product, union, intersection, and difference. We prove that our data model and algebra for probabilistic complex values generalizes the classical relational data model and algebra. Moreover, we show that under certain assumptions, all our algebraic operations are tractable. We finally show that most of the query equivalences of classical relational algebra carry over to our algebra on probabilistic complex value relations. Hence, query optimization techniques for classical relational algebra can easily be applied to optimize queries on probabilistic complex value relations.
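For orientation, the commonly used conjunction strategies from the probabilistic-database literature, operating on interval probabilities [l, u], look as follows (our listing; the paper defines its strategies model-theoretically and its exact set may differ):

```python
# Common probabilistic conjunction strategies on interval probabilities
# [l, u]; which strategy is sound depends on what is known about the
# dependence between the two underlying events.

def conj_independence(a, b):               # events independent
    (l1, u1), (l2, u2) = a, b
    return (l1 * l2, u1 * u2)

def conj_positive_correlation(a, b):       # one event implies the other
    (l1, u1), (l2, u2) = a, b
    return (min(l1, l2), min(u1, u2))

def conj_ignorance(a, b):                  # no dependence assumption (Fréchet)
    (l1, u1), (l2, u2) = a, b
    return (max(0.0, l1 + l2 - 1.0), min(u1, u2))

p, q = (0.7, 0.7), (0.8, 0.8)
print(conj_independence(p, q))             # (0.56, 0.56)
print(conj_positive_correlation(p, q))     # (0.7, 0.7)
print(conj_ignorance(p, q))                # (0.5, 0.7)
```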

Journal ArticleDOI
TL;DR: This work reviews this highly parallel and distributed form of information processing, discussing its potential sophistication, its actual performance in various groups of social insects, its general strengths and liabilities, and finally, the adaptations that compensate for these liabilities.
Abstract: In a social insect colony, large numbers of individuals all follow the same set of behavioral rules. Without centralized control, these individuals' interactions with each other and with their environment result in the allocation of individuals to various tasks, and in the distribution of foragers among available food sources. We review this highly parallel and distributed form of information processing, discussing its potential sophistication, its actual performance in various groups of social insects, its general strengths and liabilities, and finally, the adaptations that compensate for these liabilities.

Journal ArticleDOI
TL;DR: Ethology has accumulated numerous results showing that animals' interactions could be rather simple signals and it is possible to interact with animals not only by mimicking their behaviors but also by making specially designed and often simple artifacts, so robotics in turn may help ethology to explore animal behavior.
Abstract: In this paper we try to define – as ethologists – the easiest ways for creating such a synergy around a common project: mixed groups of interacting animals and robots. The following aspects are explored. (1) During this century, ethology has accumulated numerous results showing that animals' interactions could be rather simple signals, and that it is possible to interact with animals not only by mimicking their behaviors but also by making specially designed and often simple artifacts. (2) The theory of self-organization in animal societies shows that very simple, but numerous, interactions taking place between individuals may ensure complex performances and produce Collective Intelligence (CI) at the level of the group. This context is the most interesting one in which to develop mixed animal–robot interactions. (3) An experiment using an artifact interacting within a CI system in the wild (gull flocks) is developed. (4) Cases of robots producing CI on their own have been developed. (5) Considering (3) and (4), we discuss the expected difficulties of mixing robots and animals in CI systems. (6) Why develop such mixed societies? The control of interactions between artificial systems and living organisms is a key aspect in the design of artificial systems, as well as in many agricultural, medical, scientific and technical fields. Such developments refer generally to human–robot interactions, leading to further complexity of the behavior and algorithms of robots. However, complex performances do not always require complex individual behavior, and interesting developments may also rely on simpler interactions. As far as we know, experiments studying animal–robot interactions are rather anecdotal, adopt a naive point of view on animal behavior, and are often published in non-scientific journals. However, we are convinced that robotics has much to learn from ethology, while robotics in turn may surely help ethology to explore animal behavior.

Journal ArticleDOI
Gert de Cooman
TL;DR: It is shown that a possibility measure is fully conglomerable and satisfies Walley's regularity axiom for conditioning, ensuring that it can be coherently extended to a conditional possibility measure using both the methods of natural and regular extension.
Abstract: The paper discusses integration and some aspects of conditioning in numerical possibility theory, where possibility measures have the behavioural interpretation of upper probabilities, that is, systems of upper betting rates. In such a context, integration can be used to extend upper probabilities to upper previsions. It is argued that the role of the fuzzy integral in this context is limited, as it can only be used to define a coherent upper prevision if the associated upper probability is 0–1-valued, in which case it moreover coincides with the Choquet integral. These results are valid for arbitrary coherent upper probabilities, and therefore also relevant for possibility theory. It follows from the discussion that in a numerical context, the Choquet integral is better suited than the fuzzy integral for producing coherent upper previsions starting from possibility measures. At the same time, alternative expressions for the Choquet integral associated with a possibility measure are derived. Finally, it is shown that a possibility measure is fully conglomerable and satisfies Walley's regularity axiom for conditioning, ensuring that it can be coherently extended to a conditional possibility measure using both the methods of natural and regular extension.
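For orientation, the Choquet integral of a bounded non-negative gamble f with respect to a monotone set function μ is the standard

```latex
(C)\!\int f \,\mathrm{d}\mu \;=\; \int_{0}^{\infty} \mu(\{x : f(x) \ge t\})\,\mathrm{d}t ,
```

and in the possibilistic case one takes μ = Π with Π(A) = sup_{x ∈ A} π(x); the abstract's point is that this Choquet functional, unlike the fuzzy integral in general, is suited to producing coherent upper previsions from possibility measures.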

Journal ArticleDOI
TL;DR: An axiomatic characterization of these independence models is given and compared to the classic ones, and the case of (finite) discrete random variables is studied.
Abstract: A definition of stochastic independence which avoids the inconsistencies (related to events of probability 0 or 1) of the classic one has been proposed by Coletti and Scozzafava for two events. We extend it to conditional independence among finite sets of events. In particular, the case of (finite) discrete random variables is studied. We check which of the relevant properties connected with graphical structures hold. Hence, an axiomatic characterization of these independence models is given and it is compared to the classic ones.

Journal ArticleDOI
TL;DR: It is possible for bugs to capture their prey without all bugs simultaneously doing so, even for non-collinear initial positions; however, if the initial positions are picked randomly according to a smooth probability distribution, the probability of a non-mutual capture is zero.
Abstract: In cyclic pursuit n bugs chase each other in cyclic order, each moving at unit speed. Mathematical problems and puzzles of pursuit, and cyclic pursuit in particular, have attracted interest for many years. In 1971 Klamkin and Newman [17] showed that if n=3 and the initial positions of the bugs are not collinear, then all three bugs capture their prey simultaneously, i.e., no bug captures its prey prior to the moment when the pursuit collapses to a single point. They asked whether the result generalizes to more bugs. Behroozi and Gagnon [4] showed that it does generalize to n=4 if the bugs' initial positions form a convex polygon. In this paper we resolve the general question in k dimensions: It is possible for bugs to capture their prey without all bugs simultaneously doing so even for non-collinear initial positions. The set of initial conditions which give rise to non-mutual captures is, however, a sub-manifold in the manifold of all possible initial conditions. Hence, if the initial positions are picked randomly according to a smooth probability distribution, then the probability that a non-mutual capture will occur is zero.
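Cyclic pursuit is easy to explore numerically. The sketch below (ours) integrates the unit-speed dynamics with a simple Euler step: bug i always heads straight for bug i+1 (mod n).

```python
import math

# Euler integration of cyclic pursuit in the plane (illustration only):
# bug i moves at unit speed directly toward bug i+1 (mod n).

def pursue(points, dt=1e-3, steps=20_000):
    pts = [list(p) for p in points]
    n = len(pts)
    for _ in range(steps):
        vels = []
        for i in range(n):
            dx = pts[(i + 1) % n][0] - pts[i][0]
            dy = pts[(i + 1) % n][1] - pts[i][1]
            d = math.hypot(dx, dy)
            vels.append((0.0, 0.0) if d < 1e-9       # capture: stop
                        else (dx / d, dy / d))       # unit speed toward prey
        for i, (vx, vy) in enumerate(vels):
            pts[i][0] += dt * vx
            pts[i][1] += dt * vy
    return pts

# Three non-collinear bugs spiral inward toward a simultaneous capture.
print(pursue([(0.0, 0.0), (1.0, 0.0), (0.2, 1.0)]))
```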

Journal ArticleDOI
TL;DR: This work considers some easy cases of PSAT, where it is possible to give a compact representation of the set of consistent probability assignments, and follows two different approaches, based on two different representations of CNF formulas.
Abstract: The Probabilistic Satisfiability problem (PSAT) can be considered as a probabilistic counterpart of the classical SAT problem. In a PSAT instance, each clause in a CNF formula is assigned a probability of being true; the problem consists in checking the consistency of the assigned probabilities. Actually, PSAT turns out to be computationally much harder than SAT, e.g., it remains difficult for some classes of formulas where SAT can be solved in polynomial time. A column generation approach has been proposed in the literature, where the pricing sub-problem reduces to a Weighted Max-SAT problem on the original formula. Here we consider some easy cases of PSAT, where it is possible to give a compact representation of the set of consistent probability assignments. We follow two different approaches, based on two different representations of CNF formulas. First we consider a representation based on directed hypergraphs. By extending a well-known integer programming formulation of SAT and Max-SAT, we solve the case in which the hypergraph does not contain cycles; a linear time algorithm is provided for this case. Then we consider the co-occurrence graph associated with a formula. We provide a solution method for the case in which the co-occurrence graph is a partial 2-tree, and we show how to extend this result to partial k-trees with k>2.
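PSAT's consistency check is a linear feasibility problem over the 2^n truth assignments: is there a probability distribution over assignments under which each clause holds with exactly its assigned probability? A brute-force sketch for tiny instances (ours, using scipy's LP solver; the column generation approach mentioned above exists precisely to avoid this exponential enumeration):

```python
from itertools import product
from scipy.optimize import linprog

# Brute-force PSAT: enumerate all 2^n assignments and look for a
# probability distribution giving each clause its assigned probability.

def psat_consistent(n_vars, clauses, probs):
    """clauses: lists of nonzero ints; literal v > 0 means x_v, v < 0
    means NOT x_v (variables numbered 1..n_vars)."""
    worlds = list(product([False, True], repeat=n_vars))
    sat = lambda cl, w: any(w[abs(v) - 1] == (v > 0) for v in cl)
    A_eq = [[1.0 if sat(cl, w) else 0.0 for w in worlds] for cl in clauses]
    A_eq.append([1.0] * len(worlds))           # probabilities sum to 1
    b_eq = list(probs) + [1.0]
    res = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(worlds))
    return res.status == 0                     # feasible <=> consistent

# P(x1)=P(x2)=0.8 forces P(x1 and x2) >= 0.6, so probability 0.9 for the
# last clause (i.e., P(x1 and x2)=0.1) is inconsistent:
print(psat_consistent(2, [[1], [2], [-1, -2]], [0.8, 0.8, 0.9]))  # False
print(psat_consistent(2, [[1], [2], [-1, -2]], [0.6, 0.6, 0.8]))  # True
```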

Journal ArticleDOI
TL;DR: This paper demonstrates the use of Stratego in eliminating intermediate data structures from functional programs via the warm fusion algorithm of Launchbury and Sheard, and provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems.
Abstract: Stratego is a domain-specific language for the specification of program transformation systems. The design of Stratego is based on the paradigm of rewriting strategies: user-definable programs in a little language of strategy operators determine where and in what order transformation rules are (automatically) applied to a program. The separation of rules and strategies supports modularity of specifications. Stratego also provides generic features for specification of program traversals. In this paper we present a case study of Stratego as applied to a non-trivial problem in program transformation. We demonstrate the use of Stratego in eliminating intermediate data structures from functional programs (also known as deforesting) via the warm fusion algorithm of Launchbury and Sheard. This algorithm has been specified in Stratego and embedded in a fully automatic transformation system for kernel Haskell. The entire system consists of about 2600 lines of specification code, which breaks down into 1850 lines for a general framework for Haskell transformation and 750 lines devoted to a highly modular, easily extensible specification of the warm fusion transformer itself. Its successful design and construction provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems, and that rewriting strategies are a good paradigm for the implementation of such systems.
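The paradigm itself, rewrite rules as first-class partial operations plus combinators that decide where and in what order they apply, fits in a few lines of Python (our illustration of the paradigm; Stratego's actual syntax and combinator library are richer):

```python
# Tiny rewriting-strategies kernel. Terms are ("name", arg, ...) tuples;
# a strategy returns the rewritten term, or None when it fails.

def seq(s1, s2):            # s1 then s2; fails if either fails
    return lambda t: None if (r := s1(t)) is None else s2(r)

def try_(s):                # never fails: identity when s fails
    return lambda t: r if (r := s(t)) is not None else t

def all_(s):                # apply s to every immediate subterm
    def go(t):
        if not isinstance(t, tuple):
            return t
        args = [s(a) for a in t[1:]]
        return None if any(a is None for a in args) else (t[0], *args)
    return go

def bottomup(s):            # generic traversal: rewrite children first
    return lambda t: seq(all_(bottomup(s)), s)(t)

def fold(t):                # sample rule: constant-fold integer additions
    if isinstance(t, tuple) and t[0] == "add" \
            and all(isinstance(a, int) for a in t[1:]):
        return sum(t[1:])
    return None

expr = ("add", ("add", 1, 2), ("add", 3, 4))
print(bottomup(try_(fold))(expr))   # 10
```

The separation the abstract describes is visible here: `fold` says what to rewrite, while `bottomup(try_(fold))` says where and in what order.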

Journal ArticleDOI
TL;DR: This work designs an asynchronous algorithm that can cope with failures of network edges and nodes, and that is self-stabilizing, in the sense that it can be started with arbitrary initializations, and scalable: new agents can be added while other agents are already running.
Abstract: We consider a problem of decentralized exploration of a faulty network by several simple, memoryless agents. The model we adopt for a network is a directed graph. We design an asynchronous algorithm that can cope with failures of network edges and nodes. The algorithm is self-stabilizing, in the sense that it can be started with arbitrary initializations, and scalable: new agents can be added while other agents are already running.
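A classical mechanism with the same flavor, memoryless agents steered entirely by state stored at the nodes, is the rotor-router (Propp machine) walk, shown below for orientation (it is not the authors' algorithm): each node cyclically advances a pointer over its outgoing edges, and from any pointer initialization the walk eventually traverses every edge of a strongly connected digraph.

```python
# Rotor-router walk: a memoryless agent directed purely by node-local
# rotors; robust to arbitrary initialization of the rotors. Orientation
# only -- not the algorithm of this paper.

def rotor_walk(out_edges, start, steps):
    """out_edges: {node: [successor, ...]}, a strongly connected digraph."""
    pointer = {v: 0 for v in out_edges}    # any initialization works
    visited, v = {start}, start
    for _ in range(steps):
        i = pointer[v]
        pointer[v] = (i + 1) % len(out_edges[v])   # advance the rotor
        v = out_edges[v][i]
        visited.add(v)
    return visited

g = {0: [1], 1: [2, 0], 2: [0, 1]}
print(sorted(rotor_walk(g, 0, 20)))   # [0, 1, 2]
```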

Journal ArticleDOI
TL;DR: The syntax of CAPSUL is described in detail, including its layers of abstraction and four types of constraints, and its semantics are discussed, including the concept of interference between patterns and the expressive power of the language.
Abstract: We use a constraint-based language to specify repeating temporal patterns. The Constraint-based Pattern Specification Language (CAPSUL) is simple to use, but allows a wide variety of patterns to be expressed. This paper describes in detail the syntax of CAPSUL, including its layers of abstraction and four types of constraints. We also discuss the semantics of CAPSUL, including the concept of interference between patterns and the expressive power of the language. We have implemented CAPSUL in a temporal-abstraction system called Resume, and used it in a graphical knowledge-acquisition tool to acquire domain-specific knowledge from experts about patterns to be found in large databases. We summarize the results of preliminary experiments using the pattern-specification and pattern-detection tools on data about patients who have cancer and have been seen at the Rush Presbyterian/St. Luke's Medical Center.