Showing papers in "Artificial Intelligence in 1998"


Journal ArticleDOI
TL;DR: A novel algorithm for solving POMDPs offline is outlined, and it is shown how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP.
Abstract: In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs offline and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to POMDPs, and some possibilities for finding approximate solutions.
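As a concrete illustration (not the authors' offline solver): belief-state tracking is what turns a POMDP into a continuous-state MDP, and the standard Bayesian belief update can be sketched as below. The transition and observation matrices are hypothetical placeholders.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One step of the standard POMDP belief update:
        b'(s') ~ Z[a, s', o] * sum_s T[a, s, s'] * b(s)
    b: (S,) current belief; T: (A, S, S) transition model P(s'|s,a);
    Z: (A, S, O) observation model P(o|s',a)."""
    predicted = b @ T[a]                  # predicted state distribution after action a
    unnormalized = Z[a, :, o] * predicted # weight by likelihood of the observation
    return unnormalized / unnormalized.sum()

# Tiny hypothetical model: 2 states, 2 actions, 2 observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.5, 0.5]]])
print(belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, Z=Z))
```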

3,746 citations


Journal ArticleDOI
TL;DR: This paper presents several solutions to the problem of task allocation among autonomous agents, and suggests that the agents form coalitions in order to perform tasks or improve the efficiency of their performance.
Abstract: Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However, it may also be beneficial when groups can perform tasks more efficiently than single agents. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms; they are simple, efficient and easy to implement.
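The distributed algorithms themselves are not reproduced in the abstract; the following is only a rough, centralized sketch of the greedy pattern such coalition-formation methods typically follow (repeatedly committing the coalition with the best value per member). The coalition_value function is a hypothetical, domain-specific stand-in.

```python
from itertools import combinations

def greedy_coalitions(agents, tasks, coalition_value, max_size=3):
    """Greedy, centralized sketch of coalition formation: repeatedly pick the
    (task, coalition) pair with the highest value per member, then remove the
    committed agents and the satisfied task. `coalition_value(task, coalition)`
    is a hypothetical evaluation function; agents must be sortable (e.g. strings)."""
    free, assignment = set(agents), {}
    for _ in range(len(tasks)):
        best = None
        for task in tasks:
            if task in assignment:
                continue
            for size in range(1, max_size + 1):
                for coalition in combinations(sorted(free), size):
                    value = coalition_value(task, coalition) / size
                    if best is None or value > best[0]:
                        best = (value, task, coalition)
        if best is None:
            break
        _, task, coalition = best
        assignment[task] = coalition
        free -= set(coalition)
    return assignment
```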

1,128 citations


Journal ArticleDOI
TL;DR: This paper describes an approach that integrates the grid-based and topological paradigms, gaining advantages from both worlds: accuracy/consistency and efficiency.
Abstract: Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments.
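The abstract does not spell out the naive Bayesian integration step; a common way to realize it, shown here only as a generic sketch, is to accumulate per-cell log-odds of occupancy from an interpreter (for example, a trained network) that maps each sensor reading to P(occupied | reading).

```python
import math

def integrate_reading(log_odds_map, cell_probs, prior=0.5):
    """Naive Bayesian (log-odds) integration of one sensor interpretation into a grid.
    log_odds_map: dict cell -> accumulated log-odds of occupancy.
    cell_probs:   dict cell -> P(occupied | current reading)."""
    prior_lo = math.log(prior / (1 - prior))
    for cell, p in cell_probs.items():
        p = min(max(p, 1e-6), 1 - 1e-6)   # keep the odds finite
        log_odds_map[cell] = (log_odds_map.get(cell, 0.0)
                              + math.log(p / (1 - p)) - prior_lo)
    return log_odds_map

def occupancy(log_odds_map, cell):
    """Recover P(occupied) for a cell from its accumulated log-odds."""
    lo = log_odds_map.get(cell, 0.0)
    return 1.0 / (1.0 + math.exp(-lo))
```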

1,072 citations


Journal ArticleDOI
TL;DR: This paper introduces a first-order framework for top-down induction of logical decision trees, extending decision tree learning towards the first-order logic context of Inductive Logic Programming.
Abstract: A first-order framework for top-down induction of logical decision trees is introduced. The expressivity of these trees is shown to be larger than that of the flat logic programs which are typically induced by classical ILP systems, and equal to that of first-order decision lists. These results are related to predicate invention and mixed variable quantification. Finally, an implementation of this framework, the TILDE system, is presented and empirically evaluated.

711 citations


Journal ArticleDOI
TL;DR: The Remote Agent is described, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future of space exploration.
Abstract: Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote agents. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future. This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraint-based temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA's first New Millennium mission, is scheduled for a period of a week in mid 1999. The development of the Remote Agent also provided the opportunity to reassess some of AI's conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper.

710 citations


Journal ArticleDOI
TL;DR: A logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals is presented and a general Automated Negotiation Agent is implemented, based on the logical model.
Abstract: In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore different methods to negotiate and argue in a noncooperative environment where no centralized mechanism for coordination exists. The development of negotiating agents in the framework of the Automated Negotiation Agent is illustrated with an example where the agents plan, act, and resolve conflicts via negotiation in a Blocks World environment.

579 citations


Journal ArticleDOI
Abstract: In the new AI of the 90s an important stream is artificial social intelligence. In this work basic ontological categories for social action, structure, and mind are introduced. Sociality (social action, social structure) is allowed to emerge from the action and intelligence of individual agents in a common world. Some aspects of the way-down—how emergent collective phenomena shape the individual mind—are also examined. First, interference and dependence are defined, and then different kinds of coordination (reactive versus anticipatory; unilateral versus bilateral; selfish versus collaborative) are characterised. “Weak social action”, based on beliefs about the minds of the other agents, and “strong social action”, based on goals about others' minds and their actions, are distinguished. Special attention is paid to Goal Delegation and Goal Adoption, which are considered the basic ingredients of social commitment and contract, and then of exchange, cooperation, group action, and organisation. Different levels of delegation, and hence of autonomy of the delegated agent, are described; and different levels of goal-adoption are shown to characterise true collaboration. Social goals in the minds of the group members are argued to be the real glue of joint activity, and the notion of social commitment, as distinct from individual and from collective commitment, is underlined. The necessity for modelling social objective structures and constraints is emphasised and the “shared mind” view of groups and organisations is criticised. The spontaneous and unaware emergence of a dependence structure is explained, as well as its feedback on the participants' minds and behaviours. Critical observations are presented on current confusions such as that between “social” and “collective” action, or between communication and social action. The main claims of the paper are the following: (a) The real foundation of all sociality (cooperation, competition, groups, organisation, etc.) is the individual social action and mind. One cannot reduce or connect action at the collective level to action at the individual level unless one passes through the social character of the individual action. (b) Important levels of coordination and cooperation necessarily require minds and cognitive agents (beliefs, desires, intentions, etc.). (c) However, cognition, communication and agreement are not enough for modelling and implementing cooperation: emergent pre-cognitive structures and constraints should be formalised, and emergent forms of cooperation are needed also among planning and deliberative agents. (d) We are going towards a synthetic paradigm in AI and Cognitive Science, reconciling situatedness and plans, reactivity and mental representations, cognition, emergence and self-organisation.

517 citations


Journal ArticleDOI
TL;DR: It is shown that, in general, the reasoning problem for recursive CARIN-ALCNR knowledge bases is undecidable, and the constructors of ALCNR causing the undecidability are identified.
Abstract: We describe CARIN, a novel family of representation languages that combine the expressive power of Horn rules and of description logics. We address the issue of providing sound and complete inference procedures for such languages. We identify existential entailment as a core problem in reasoning in CARIN, and describe an existential entailment algorithm for the ALCNR description logic. As a result, we obtain a sound and complete algorithm for reasoning in non-recursive CARIN-ALCNR knowledge bases, and an algorithm for rule subsumption over ALCNR. We show that in general, the reasoning problem for recursive CARIN-ALCNR knowledge bases is undecidable, and identify the constructors of ALCNR causing the undecidability. We show two ways in which CARIN-ALCNR knowledge bases can be restricted while obtaining sound and complete reasoning.

394 citations


Journal ArticleDOI
TL;DR: This work presents three model selection criteria using information-theoretic entropy in the spirit of the minimum description length principle; the main procedure is based on the principle of indifference combined with the maximum entropy principle, keeping external model assumptions to a minimum.
Abstract: The main statistic used in rough set data analysis, the approximation quality, is of limited value when there is a choice of competing models for predicting a decision variable. In keeping with the rough set philosophy of non-invasive data analysis, we present three model selection criteria, using information theoretic entropy in the spirit of the minimum description length principle. Our main procedure is based on the principle of indifference combined with the maximum entropy principle, thus keeping external model assumptions to a minimum. The applicability of the proposed method is demonstrated by a comparison of its error rates with results of C4.5, using 14 published data sets.

379 citations


Journal ArticleDOI
TL;DR: Artificial intelligence will have less difficulty in modelling the generation of new ideas than in automating their evaluation, and will be able to make transformations that enable the generation of previously impossible ideas.
Abstract: Creativity is a fundamental feature of human intelligence, and a challenge for AI. AI techniques can be used to create new ideas in three ways: by producing novel combinations of familiar ideas; by exploring the potential of conceptual spaces; and by making transformations that enable the generation of previously impossible ideas. AI will have less difficulty in modelling the generation of new ideas than in automating their evaluation.

331 citations


Journal ArticleDOI
TL;DR: This paper vindicates Belnap's thesis by showing that the logical role that the four-valued structure has among Ginsberg's bilattices is similar to the role that the two-valued algebra has among Boolean algebras.
Abstract: In his well-known paper “How a computer should think” Belnap (1977) argues that four-valued semantics is a very suitable setting for computerized reasoning. In this paper we vindicate this thesis by showing that the logical role that the four-valued structure has among Ginsberg's bilattices is similar to the role that the two-valued algebra has among Boolean algebras. Specifically, we provide several theorems that show that the most useful bilattice-valued logics can actually be characterized as four-valued inference relations. In addition, we compare the use of three-valued logics with the use of four-valued logics, and show that at least for the task of handling inconsistent or uncertain information, the comparison is in favor of the latter.
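For readers unfamiliar with the four-valued structure, the sketch below encodes Belnap's values (true, false, both, neither) and the usual conjunction, disjunction and negation on the truth ordering; it illustrates the lattice only, not the bilattice-valued inference relations studied in the paper.

```python
# Belnap's four values encoded as pairs (told_true, told_false):
TRUE, FALSE, BOTH, NEITHER = (1, 0), (0, 1), (1, 1), (0, 0)

def conj(x, y):
    # told true iff both are told true; told false iff either is told false
    return (x[0] & y[0], x[1] | y[1])

def disj(x, y):
    return (x[0] | y[0], x[1] & y[1])

def neg(x):
    return (x[1], x[0])

assert conj(BOTH, NEITHER) == FALSE   # meet in the truth ordering
assert disj(BOTH, NEITHER) == TRUE    # join in the truth ordering
assert neg(BOTH) == BOTH
```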

Journal ArticleDOI
TL;DR: A simple algebraic property is described which characterises all possible constraint types for which strong k-consistency is sufficient to ensure global consistency, for each k > 2.
Abstract: Although the constraint satisfaction problem is NP-complete in general, a number of constraint classes have been identified for which some fixed level of local consistency is sufficient to ensure global consistency. In this paper we describe a simple algebraic property which characterises all possible constraint types for which strong k-consistency is sufficient to ensure global consistency, for each k > 2. We give a number of examples to illustrate the application of this result.

Journal ArticleDOI
TL;DR: It is argued that similarity must be augmented by deeper adaptation knowledge about whether a case can be easily modified to fit a target problem, and this idea is implemented in a new technique, called adaptation-guided retrieval (AGR), which provides a direct link between retrieval similarity and adaptation needs.
Abstract: One of the major assumptions in Artificial Intelligence is that similar experiences can guide future reasoning, problem solving and learning; what we will call the similarity assumption. The similarity assumption is used in problem solving and reasoning systems when target problems are dealt with by resorting to a previous situation with common conceptual features. In this article, we question this assumption in the context of case-based reasoning (CBR). In CBR, the similarity assumption plays a central role when new problems are solved, by retrieving similar cases and adapting their solutions. The success of any CBR system is contingent on the retrieval of a case that can be successfully reused to solve the target problem. We show that it is often unwarranted to assume that the most similar case is also the most appropriate from a reuse perspective. We argue that similarity must be augmented by deeper adaptation knowledge about whether a case can be easily modified to fit a target problem. We implement this idea in a new technique, called adaptation-guided retrieval (AGR), which provides a direct link between retrieval similarity and adaptation needs. This technique uses specially formulated adaptation knowledge, which, during retrieval, facilitates the computation of a precise measure of a case's adaptation requirements. In closing, we assess the broader implications of AGR and argue that it is just one of a growing number of methods that seek to overcome the limitations of the traditional similarity assumption in an effort to deliver more sophisticated and scalable reasoning systems.
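AGR's actual adaptation-knowledge formalism is not given in the abstract; purely as an illustration of the general idea, retrieval can discount surface similarity by an estimated adaptation cost. Both scoring functions below are hypothetical callables.

```python
def adaptation_guided_rank(target, cases, similarity, adaptation_cost, alpha=0.5):
    """Rank cases for reuse by discounting surface similarity with an estimate of
    how hard each case is to adapt to the target problem.
    `similarity(target, case)` in [0, 1] and `adaptation_cost(target, case)` >= 0
    are hypothetical, domain-specific functions."""
    def score(case):
        return alpha * similarity(target, case) - (1 - alpha) * adaptation_cost(target, case)
    return sorted(cases, key=score, reverse=True)
```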

Journal ArticleDOI
TL;DR: Computational methods for the rough analysis of databases are discussed; rough set theory is a relatively new mathematical tool for use in computer applications in circumstances characterized by vagueness and uncertainty.
Abstract: Rough set theory is a relatively new mathematical tool for use in computer applications in circumstances which are characterized by vagueness and uncertainty. The technique called rough analysis can be applied very fruitfully in artificial intelligence and cognitive sciences. Although this methodology has been shown to be successful in dealing with the vagueness of many real-life applications, there are still several theoretical problems to be solved, and we also need to consider practical issues if we want to apply the theory. It is the latter set of issues we address here, in the context of handling and analysing large data sets during the knowledge representation process. Some of the associated problems (for example, the general problem of finding all “keys”) have been shown to be NP-hard. Thus, it is important to seek efficient computational methods for the theory. In rough set theory, a table called an information system or a database relation is used as a special kind of formal language to represent knowledge syntactically. Semantically, knowledge is defined as classifications of information systems. The use of rough analysis does not involve the details of rough set theory directly, but it uses the same basic classification techniques. We discuss computational methods for the rough analysis of databases.
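As background for the rough analysis discussed here, the following sketch computes the textbook lower and upper approximations of a target concept from the indiscernibility partition induced by a set of attributes; it follows standard rough set theory rather than any specific algorithm from the paper.

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Standard rough-set lower/upper approximations of `target` (a set of object ids)
    with respect to the indiscernibility relation induced by `attributes`.
    `objects` maps object id -> dict of attribute values."""
    classes = defaultdict(set)
    for obj, values in objects.items():
        classes[tuple(values[a] for a in attributes)].add(obj)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # class entirely inside the target concept
            lower |= cls
        if cls & target:         # class overlaps the target concept
            upper |= cls
    return lower, upper

# Hypothetical toy decision table.
objs = {1: {"colour": "red", "size": "big"},
        2: {"colour": "red", "size": "big"},
        3: {"colour": "blue", "size": "small"}}
print(approximations(objs, ["colour", "size"], target={1, 3}))  # ({3}, {1, 2, 3})
```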

Journal ArticleDOI
TL;DR: An experimental setup is introduced for concretising and validating specific mechanisms based on a set of principles and a general architecture that may explain how language and meaning may originate and complexify in a group of physically grounded distributed agents.
Abstract: The paper proposes a set of principles and a general architecture that may explain how language and meaning may originate and complexify in a group of physically grounded distributed agents. An experimental setup is introduced for concretising and validating specific mechanisms based on these principles. The setup consists of two robotic heads that watch static or dynamic scenes and engage in language games, in which one robot describes to the other what they see. The first results from experiments showing the emergence of distinctions, of a lexicon, and of primitive syntactic structures are reported. 0 1998 Published by Elsevier Science B.V. All rights reserved.

Journal ArticleDOI
Abstract: Multimodal interfaces combining natural language and graphics take advantage of both the individual strength of each communication mode and the fact that several modes can be employed in parallel. The central claim of this paper is that the generation of a multimodal presentation can be considered as an incremental planning process that aims to achieve a given communicative goal. We describe the multimodal presentation system WIP which allows the generation of alternate presentations of the same content taking into account various contextual factors. We discuss how the plan-based approach to presentation design can be exploited so that graphics generation influences the production of text and vice versa. We show that well-known concepts from the area of natural language processing like speech acts, anaphora, and rhetorical relations take on an extended meaning in the context of multimodal communication. Finally, we discuss two detailed examples illustrating and reinforcing our theoretical claims.

Journal ArticleDOI
TL;DR: A new algorithm, called Complete Karmarkar-Karp (CKK), is presented, that optimally solves the general number-partitioning problem, and significantly outperforms the best previously-known algorithms for large problem instances.
Abstract: Given a set of numbers, the two-way number partitioning problem is to divide them into two subsets, so that the sums of the numbers in the two subsets are as nearly equal as possible. The problem is NP-complete. Based on a polynomial-time heuristic due to Karmarkar and Karp, we present a new algorithm, called Complete Karmarkar-Karp (CKK), that optimally solves the general number-partitioning problem, and significantly outperforms the best previously-known algorithms for large problem instances. For numbers with twelve significant digits or less, CKK can optimally solve two-way partitioning problems of arbitrary size in practice. For numbers with greater precision, CKK first returns the Karmarkar-Karp solution, then continues to find better solutions as time allows. Over seven orders of magnitude improvement in solution quality is obtained in less than an hour of running time. Rather than building a single solution one element at a time, or modifying a complete solution, CKK constructs subsolutions, and combines them together in all possible ways. This approach may be effective for other NP-hard problems as well.
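The polynomial-time heuristic that CKK starts from is the Karmarkar-Karp set-differencing method; a minimal sketch of that heuristic (not of the complete CKK search itself) looks like this.

```python
import heapq

def karmarkar_karp(numbers):
    """Karmarkar-Karp differencing heuristic for two-way partitioning:
    repeatedly replace the two largest numbers by their difference
    (i.e. commit them to opposite subsets). Returns the final subset-sum difference."""
    heap = [-n for n in numbers]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        largest = -heapq.heappop(heap)
        second = -heapq.heappop(heap)
        heapq.heappush(heap, -(largest - second))
    return -heap[0] if heap else 0

print(karmarkar_karp([8, 7, 6, 5, 4]))   # 2; the optimal difference here is 0,
                                         # illustrating that KK alone is only a heuristic
```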

Journal ArticleDOI
TL;DR: It is shown that the epistemic operator formalizes procedural rules, as provided in many knowledge representation systems, and enables sophisticated query formulation, including various forms of closed-world reasoning.
Abstract: Description logics (also called terminological logics, or concept languages) are fragments of first-order logic that provide a formal account of the basic features of frame-based systems. However, there are aspects of frame-based systems—such as nonmonotonic reasoning and procedural rules—that cannot be characterized in a standard first-order framework. Such features are needed for real applications, and a clear understanding of the logic underlying them is necessary for principled implementations. We show how description logics enriched with an epistemic operator can formalize such aspects. The logic obtained is a fragment of a first-order nonmonotonic modal logic. We show that the epistemic operator formalizes procedural rules, as provided in many knowledge representation systems, and enables sophisticated query formulation, including various forms of closed-world reasoning. We provide an effective procedure for answering epistemic queries posed to a knowledge base expressed in a description logic and extend this procedure in order to deal with rules. We also address the computational complexity of reasoning with the epistemic operator, identifying cases in which an appropriate use of the epistemic operator can help in decreasing the complexity of reasoning.

Journal ArticleDOI
TL;DR: The computation on a general Bayesian network with convex sets of conditional distributions is formalized as a global optimization problem and it is shown that such a problem can be reduced to a combinatorial problem, suitable to exact algorithmic solutions.
Abstract: This paper addresses the problem of computing posterior probabilities in a discrete Bayesian network where the conditional distributions of the model belong to convex sets. The computation on a general Bayesian network with convex sets of conditional distributions is formalized as a global optimization problem. It is shown that such a problem can be reduced to a combinatorial problem, suitable to exact algorithmic solutions. An exact propagation algorithm for the updating of a polytree with binary variables is derived. The overall complexity is linear in the size of the network, when the maximum number of parents is fixed.

Journal ArticleDOI
TL;DR: This paper shows that approximating MAPs with a constant ratio bound is also NP-hard; the result applies to networks with constrained in-degree and out-degree, applies to randomized approximation, and even applies if the ratio bound, instead of being constant, is allowed to be a polynomial function of various aspects of the network topology.
Abstract: Finding maximum a posteriori (MAP) assignments, also called Most Probable Explanations, is an important problem on Bayesian belief networks. Shimony has shown that finding MAPs is NP-hard. In this paper, we show that approximating MAPs with a constant ratio bound is also NP-hard. In addition, we examine the complexity of two related problems which have been mentioned in the literature. We show that given the MAP for a belief network and evidence set, or the family of MAPs if the optimal is not unique, it remains NP-hard to find, or approximate, alternative next-best explanations. Furthermore, we show that given the MAP, or MAPs, for a belief network and an initial evidence set, it is also NP-hard to find, or approximate, the MAP assignment for the same belief network with a modified evidence set that differs from the initial set by the addition or removal of even a single node assignment. Finally, we show that our main result applies to networks with constrained in-degree and out-degree, applies to randomized approximation, and even still applies if the ratio bound, instead of being constant, is allowed to be a polynomial function of various aspects of the network topology.

Journal ArticleDOI
TL;DR: This work identifies restrictions on the underlying state-transition graph which can be tractably tested, presents a planning algorithm which is correct and runs in polynomial time under these restrictions, and gives an exhaustive map of the complexity results for planning under all combinations of four previously studied syntactical restrictions and five new structural restrictions.
Abstract: Computationally tractable planning problems reported in the literature so far have almost exclusively been defined by syntactical restrictions. To better exploit the inherent structure in problems, it is probably necessary to study also structural restrictions on the underlying state-transition graph. The exponential size of this graph, though, makes such restrictions costly to test. Hence, we propose an intermediate approach, using a state-variable model for planning and defining restrictions on the separate state-transition graphs for each state variable. We identify such restrictions which can tractably be tested and we present a planning algorithm which is correct and runs in polynomial time under these restrictions. The algorithm has been implemented and it outperforms Graphplan on a number of test instances. In addition, we present an exhaustive map of the complexity results for planning under all combinations of four previously studied syntactical restrictions and our five new structural restrictions. This complexity map considers both the optimal and non-optimal plan generation problem.

Journal ArticleDOI
TL;DR: The results show that using bilingual corpora for automated extraction of term equivalences in context outperforms dictionary-based methods and performs comparably to other statistical corpus-based methods.
Abstract: Translingual information retrieval (TLIR) consists of providing a query in one language and searching document collections in one or more different languages. This paper introduces new TLIR methods and reports on comparative TLIR experiments with these new methods and with previously reported ones in a realistic setting. Methods fall into two categories: query translation and statistical-IR approaches establishing translingual associations. The results show that using bilingual corpora for automated extraction of term equivalences in context outperforms dictionary-based methods. Translingual versions of the Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) perform well, as does translingual pseudo-relevance feedback (PRF) and Example-Based Term-in-context translation (EBT). All showed relatively small performance loss between monolingual and translingual versions, ranging from 87% to 101% of monolingual IR performance. Query translation based on a general machine-readable bilingual dictionary—heretofore the most popular method—did not match the performance of other, more sophisticated methods. Also, the previous very high LSI results in the literature based on “mate-finding” were superseded by more realistic relevance-based evaluations; LSI performance proved comparable to that of other statistical corpus-based methods.

Journal ArticleDOI
TL;DR: In this paper, it is shown how probabilistic conditionals allow a new and constructive approach to the principle of minimum cross-entropy, and four principles that describe their handling in a reasonable and consistent way are developed.
Abstract: The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool to build up complete probability distributions when only partial knowledge is available. The inputs it may be applied to are a prior distribution P and some new information R, and it yields as a result the one distribution P* that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a “best” solution to the problem “How to adjust P to R?” In this paper, we show how probabilistic conditionals allow a new and constructive approach to this important principle. Though popular and widely used for knowledge representation, conditionals quantified by probabilities are not easily dealt with. We develop four principles that describe their handling in a reasonable and consistent way, taking into consideration the conditional-logical as well as the numerical and probabilistic aspects. Finally, the ME-principle turns out to be the only method for adjusting a prior distribution to new conditional information that obeys all these principles. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are revealed clearly.
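For reference, the minimization performed by the ME-principle can be written in standard cross-entropy notation (this formulation is generic, not quoted from the paper):

```latex
P^{*} \;=\; \operatorname*{arg\,min}_{Q \,\models\, R} \;\sum_{\omega} Q(\omega)\,\log \frac{Q(\omega)}{P(\omega)}
```

Here the sum ranges over the possible worlds ω, and Q ⊨ R means that Q satisfies the new (conditional) information R.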

Journal ArticleDOI
Abstract: We describe a program called SketchIT that transforms a single sketch of a mechanical device into multiple families of new designs. It represents each of these families with a “BEP-Model”, a parametric model augmented with constraints that ensure the device produces the desired behavior. The program is based on qualitative configuration space (qc-space), a novel representation that captures mechanical behavior while abstracting away its implementation. The program employs a paradigm of abstraction and resynthesis: it abstracts the initial sketch into qc-space, then uses a library of primitive mechanical interactions to map from qc-space to new implementations.

Journal ArticleDOI
TL;DR: It is shown that equilibrium point strategies for optimal play exist for this model, an algorithm capable of computing such strategies is defined, and the model makes it possible to clearly state the limitations of such architectures in producing expert analysis.
Abstract: We examine search algorithms in games with incomplete information, formalising a best defence model of such games based on the assumptions typically made when incomplete information problems are analysed in expert texts. We show that equilibrium point strategies for optimal play exist for this model, and define an algorithm capable of computing such strategies. Using this algorithm as a reference we then analyse search architectures that have been proposed for the incomplete information game of Bridge. These architectures select strategies by analysing some statistically significant collection of complete information sub-games. Our model allows us to clearly state the limitations of such architectures in producing expert analysis, and to precisely formalise and distinguish the problems that lead to sub-optimality. We illustrate these problems with simple game trees and with actual play situations from Bridge itself.

Journal ArticleDOI
TL;DR: A framework in which different dimensions for temporal model-based diagnosis can be analyzed at the knowledge level is defined, pointing out the alternatives along each dimension and showing in which cases each of these alternatives is adequate.
Abstract: Model-based diagnosis (MBD) tackles the problem of troubleshooting systems starting from a description of their structure and function (or behavior). Time is a fundamental dimension in MBD: the behavior of most systems is time-dependent in one way or another. Temporal MBD, however, is a difficult task and indeed many simplifying assumptions have been adopted in the various approaches in the literature. These assumptions concern different aspects such as the type and granularity of the temporal phenomena being modeled, the definition of diagnosis, the ontology for time being adopted. Unlike the atemporal case, moreover, there is no general “theory” of temporal MBD which can be used as a knowledge-level characterization of the problem. In this paper we present a general characterization of temporal model-based diagnosis. We distinguish between different temporal phenomena that can be taken into account in diagnosis and we introduce a modeling language which can capture all such phenomena. Given a suitable logical semantics for such a modeling language, we introduce a general characterization of the notions of diagnostic problem and explanation, showing that in the temporal case these definitions involve different parameters. Different choices for the parameters lead to different approaches to temporal diagnosis. We define a framework in which different dimensions for temporal model-based diagnosis can be analyzed at the knowledge level, pointing out which are the alternatives along each dimension and showing in which cases each one of these alternatives is adequate. In the final part of the paper we show how various approaches in the literature can be classified within our framework. In this way, we propose some guidelines to choose which approach best fits a given application problem.

Journal ArticleDOI
Abstract: Object identification—the task of deciding that two observed objects are in fact one and the same object—is a fundamental requirement for any situated agent that reasons about individuals. Object identity, as represented by the equality operator between two terms in predicate calculus, is essentially a first-order concept. Raw sensory observations, on the other hand, are essentially propositional—especially when formulated as evidence in standard probability theory. This paper describes patterns of reasoning that allow identity sentences to be grounded in sensory observations, thereby bridging the gap. We begin by defining a physical event space over which probabilities are defined. We then introduce an identity criterion, which selects those events that correspond to identity between observed objects. From this, we are able to compute the probability that any two objects are the same, given a stream of observations of many objects. We show that the appearance probability, which defines how an object can be expected to appear at subsequent observations given its current appearance, is a natural model for this type of reasoning. We apply the theory to the task of recognizing cars observed by cameras at widely separated sites in a freeway network, with new heuristics to handle the inevitable complexity of matching large numbers of objects and with online learning of appearance probability models. Despite extremely noisy observations, we are able to achieve high levels of performance.

Journal ArticleDOI
TL;DR: A model-based Average-reward Reinforcement Learning method called H-learning is introduced and it is shown that it converges more quickly and robustly than its discounted counterpart in the domain of scheduling a simulated Automatic Guided Vehicle (AGV).
Abstract: Reinforcement Learning (RL) is the study of programs that improve their performance by receiving rewards and punishments from the environment. Most RL methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this paper, we introduce a model-based Average-reward Reinforcement Learning method called H-learning and show that it converges more quickly and robustly than its discounted counterpart in the domain of scheduling a simulated Automatic Guided Vehicle (AGV). We also introduce a version of H-learning that automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this “Auto-exploratory H-Learning” performs better than the previously studied exploration strategies. To scale H-learning to larger state spaces, we extend it to learn action models and reward functions in the form of dynamic Bayesian networks, and approximate its value function using local linear regression. We show that both of these extensions are effective in significantly reducing the space requirement of H-learning and making it converge faster in some AGV scheduling tasks.
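H-learning itself is model-based and has its own update rules; the sketch below shows only a simplified, generic average-reward backup of the kind such methods build on, using a learned tabular model, and should not be read as the authors' exact algorithm.

```python
def average_reward_update(h, rho, s, actions, model, reward, alpha=0.05):
    """One generic average-reward backup (a simplified sketch, not exactly H-learning):
        h(s) <- max_a [ r(s,a) - rho + sum_s' P(s'|s,a) h(s') ]
    together with a running estimate of the gain rho from the greedy backup.
    `model[(s, a)]` is a dict s' -> estimated P(s'|s,a); `reward[(s, a)]` the estimated reward."""
    def q(a):
        return reward[(s, a)] - rho + sum(p * h.get(s2, 0.0)
                                          for s2, p in model[(s, a)].items())
    best_a = max(actions, key=q)
    new_h = q(best_a)
    expected_next_h = sum(p * h.get(s2, 0.0) for s2, p in model[(s, best_a)].items())
    # at equilibrium, r(s, a*) + h(s') - h(s) is approximately the average reward rho
    rho = (1 - alpha) * rho + alpha * (reward[(s, best_a)] + expected_next_h - h.get(s, 0.0))
    h[s] = new_h
    return h, rho
```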

Journal ArticleDOI
TL;DR: It is proved that most approaches to tractable temporal constraint reasoning can be encoded as Horn DLRs, including the ORD-Horn algebra by Nebel and Bürckert and the simple temporal constraints by Dechter et al.
Abstract: We present a formalism, Disjunctive Linear Relations (DLRs), for reasoning about temporal constraints. DLRs subsume most of the formalisms for temporal constraint reasoning proposed in the literature and are therefore computationally expensive. We also present a restricted type of DLRs, Horn DLRs, which have a polynomial-time satisfiability problem. We prove that most approaches to tractable temporal constraint reasoning can be encoded as Horn DLRs, including the ORD-Horn algebra by Nebel and Bürckert and the simple temporal constraints by Dechter et al. Thus, DLRs constitute a suitable unifying formalism for reasoning about temporal constraints.

Journal ArticleDOI
TL;DR: The ‘4-D approach’, integrating expectation-based methods from systems dynamics and control engineering with methods from AI, has made it possible to create vehicles with unprecedented capabilities in the technical realm.
Abstract: A survey is given on two decades of developments in the field, encompassing an increase in computing power by four orders of magnitude. The ‘4-D approach’, integrating expectation-based methods from systems dynamics and control engineering with methods from AI, has made it possible to create vehicles with unprecedented capabilities in the technical realm: autonomous road vehicle guidance in public traffic on freeways at speeds beyond 130 km/h, on-board autonomous landing approaches of aircraft, and landmark navigation for AGVs, for road vehicles including turn-offs onto cross-roads, and for helicopters in low-level flight (real-time, hardware-in-the-loop simulations in the latter case).