
Towards a General Theory of Action and Time.

01 Jan 2005 - pp. 251-276
TL;DR: A formalism for reasoning about actions based on a temporal logic allows a much wider range of actions to be described than previous approaches such as the situation calculus; a framework for planning in a dynamic world with external events and multiple agents is also suggested.
Abstract: A formalism for reasoning about actions is proposed that is based on a temporal logic. It allows a much wider range of actions to be described than with previous approaches such as the situation calculus. This formalism is then used to characterize the different types of events, processes, actions, and properties that can be described in simple English sentences. In addressing this problem, we consider actions that involve non-activity as well as actions that can only be defined in terms of the beliefs and intentions of the actors. Finally, a framework for planning in a dynamic world with external events and multiple agents is suggested.
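
To make the interval-based representation concrete: the formalism asserts relations between time intervals rather than time points. The Python sketch below is purely illustrative (the Interval class and classify helper are hypothetical, not the paper's notation); it labels how two intervals with known numeric endpoints stand in one of Allen's thirteen basic interval relations.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float  # assumes start < end

def classify(a: Interval, b: Interval) -> str:
    # Return which of the thirteen basic interval relations holds
    # between a and b (hypothetical helper, for illustration only).
    if a.end < b.start:   return "before"
    if a.end == b.start:  return "meets"
    if b.end < a.start:   return "after"
    if b.end == a.start:  return "met-by"
    if (a.start, a.end) == (b.start, b.end):
        return "equal"
    if a.start == b.start:
        return "starts" if a.end < b.end else "started-by"
    if a.end == b.end:
        return "finishes" if a.start > b.start else "finished-by"
    if b.start < a.start and a.end < b.end:
        return "during"
    if a.start < b.start and b.end < a.end:
        return "contains"
    return "overlaps" if a.start < b.start else "overlapped-by"

# classify(Interval(0, 2), Interval(1, 3)) -> "overlaps"

In the logic itself the relations are asserted symbolically, with no numeric endpoints required; the numeric version above only fixes intuitions about what each relation means.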
Citations
Journal Article
TL;DR: Agent theory is concerned with the question of what an agent is, and with mathematical formalisms for representing and reasoning about the properties of agents; agent architectures can be thought of as software engineering models of agents; and agent languages are software systems for programming and experimenting with agents.
Abstract: The concept of an agent has become important in both Artificial Intelligence (AI) and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide these issues into three areas (though as the reader will see, the divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages may embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the most important issues, and point to work that elaborates on them. The article includes a short review of current and potential applications of agent technology.

6,714 citations
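
The survey's three-way split can be anchored with the usual abstract picture of an agent that theorists specify and architectures implement: a loop that revises internal state on each percept and then selects an action. The sketch below is a generic illustration under that reading, not an architecture from the paper, and every name in it is hypothetical.

from abc import ABC, abstractmethod
from typing import Any

class Agent(ABC):
    # An agent maps a stream of percepts to actions via internal
    # state (beliefs, goals, etc., per the chosen agent theory).

    def __init__(self) -> None:
        self.state: Any = None

    @abstractmethod
    def update(self, percept: Any) -> None:
        """Revise internal state in light of a new percept."""

    @abstractmethod
    def decide(self) -> Any:
        """Select the next action given the current state."""

    def step(self, percept: Any) -> Any:
        self.update(percept)
        return self.decide()

On this view, an agent theory constrains what update and decide may do (e.g., that intentions persist), an architecture is a concrete refinement of the loop, and an agent language hands the programmer the loop ready-made.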

Book
01 Nov 2001
TL;DR: A multi-agent system (MAS) is a distributed computing system of autonomous, interacting intelligent agents that coordinate their actions so as to achieve their goals jointly or competitively.
Abstract: From the Publisher: An agent is an entity with domain knowledge, goals and actions. A multi-agent system is a set of agents that interact in a common environment. Multi-agent systems deal with the construction of complex systems involving multiple agents and their coordination. A multi-agent system (MAS) is a distributed computing system with autonomous interacting intelligent agents that coordinate their actions so as to achieve their goals jointly or competitively.

3,003 citations

Journal Article
TL;DR: It is concluded that problems in Cognitive Science's theorizing about purposeful action as a basis for machine intelligence are due to the project of substituting plans for actions, and representations of the situation of action, for action's actual circumstances.
Abstract: This thesis considers two alternative views of purposeful action and shared understanding. The first, adopted by researchers in Cognitive Science, views the organization and significance of action as derived from plans, which are prerequisite to and prescribe action at whatever level of detail one might imagine. Mutual intelligibility on this view is a matter of the recognizability of plans, due to common conventions for the expression of intent, and common knowledge about typical situations and appropriate actions. The second view, drawn from recent work in social science, treats plans as derivative from situated action. Situated action as such comprises necessarily ad hoc responses to the actions of others and to the contingencies of particular situations. Rather than depend upon the reliable recognition of intent, successful interaction consists in the collaborative production of intelligibility through mutual access to situation resources, and through the detection, repair or exploitation of differences in understanding. As common sense formulations designed to accommodate the unforeseeable contingencies of situated action, plans are inherently vague. Researchers interested in machine intelligence attempt to remedy the vagueness of plans, to make them the basis for artifacts intended to embody intelligent behavior, including the ability to interact with their human users. The idea that computational artifacts might interact with their users is supported by their reactive, linguistic, and internally opaque properties. Those properties suggest the possibility that computers might 'explain themselves', thereby providing a solution to the problem of conveying the designer's purposes to the user, and a means of establishing the intelligence of the artifact itself. I examine the problem of human-machine communication through a case study of people using a machine designed on the planning model, and intended to be intelligent and interactive. A conversation analysis of "interactions" between users and the machine reveals that the machine's insensitivity to particular circumstances is a central design resource, and a fundamental limitation. I conclude that problems in Cognitive Science's theorizing about purposeful action as a basis for machine intelligence are due to the project of substituting plans for actions, and representations of the situation of action, for action's actual circumstances. Xerox PARC, ISL-6, February 1985.

2,485 citations

Journal Article
TL;DR: In this article, the authors explore principles governing the rational balance among an agent's beliefs, goals, actions, and intentions, and show how agents can avoid intending all the foreseen side-effects of what they actually intend.
Abstract: This paper explores principles governing the rational balance among an agent's beliefs, goals, actions, and intentions. Such principles provide specifications for artificial agents, and approximate a theory of human action (as philosophers use the term). By making explicit the conditions under which an agent can drop his goals, i.e., by specifying how the agent is committed to his goals, the formalism captures a number of important properties of intention. Specifically, the formalism provides analyses for Bratman's three characteristic functional roles played by intentions [7, 9], and shows how agents can avoid intending all the foreseen side-effects of what they actually intend. Finally, the analysis shows how intentions can be adopted relative to a background of relevant beliefs and other intentions or goals. By relativizing one agent's intentions in terms of beliefs about another agent's intentions (or beliefs), we derive a preliminary account of interpersonal commitments.

2,072 citations
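
The commitment conditions described above, that a goal, once adopted, is dropped only when the agent believes it achieved or believes it impossible, have a direct operational reading. The sketch below is only an illustration of that "persistent goal" idea in procedural form; the paper states it in a modal logic of belief and intention, and every name here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class CommittedAgent:
    beliefs: set = field(default_factory=set)
    goals: set = field(default_factory=set)

    def adopt(self, goal: str) -> None:
        self.goals.add(goal)

    def revise(self, believed_impossible: set) -> None:
        # Drop exactly those goals the agent believes achieved
        # or believes can no longer be achieved; keep the rest.
        self.goals = {g for g in self.goals
                      if g not in self.beliefs
                      and g not in believed_impossible}

agent = CommittedAgent()
agent.adopt("door-open")
agent.beliefs.add("door-open")          # the agent believes it succeeded
agent.revise(believed_impossible=set())
assert "door-open" not in agent.goals   # commitment rationally released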

References
Journal Article
TL;DR: An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation, which is notable in offering a delicate balance between space and time.
Abstract: An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between […]

7,466 citations
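
The reasoning algorithm referenced here propagates constraints through a transitivity (composition) table over the interval relations. The miniature below shows the propagation pattern on a deliberately tiny three-relation subset, {'<', '=', '>'}; the real algorithm uses all thirteen interval relations and Allen's full table, so everything named here is a simplified, hypothetical stand-in rather than the paper's implementation.

# Each edge of the network holds the SET of relations still possible
# between two nodes; composing edge i-k with edge k-j prunes edge i-j.
COMPOSE = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def propagate(net, nodes):
    # Repeatedly tighten every edge (i, j) against each intermediate
    # node k until nothing changes; an empty edge means inconsistency.
    changed = True
    while changed:
        changed = False
        for i in nodes:
            for j in nodes:
                for k in nodes:
                    if len({i, j, k}) < 3:
                        continue
                    allowed = set()
                    for r1 in net[(i, k)]:
                        for r2 in net[(k, j)]:
                            allowed |= COMPOSE[(r1, r2)]
                    tightened = net[(i, j)] & allowed
                    if tightened != net[(i, j)]:
                        net[(i, j)] = tightened
                        changed = True
    return net

nodes = ['a', 'b', 'c']
net = {(i, j): {'<', '=', '>'} for i in nodes for j in nodes if i != j}
net[('a', 'b')] = {'<'}; net[('b', 'a')] = {'>'}
net[('b', 'c')] = {'<'}; net[('c', 'b')] = {'>'}
propagate(net, nodes)
assert net[('a', 'c')] == {'<'}  # a before b, b before c => a before c

One reading of the "delicate balance" the TL;DR alludes to: caching a relation set for every pair of nodes costs quadratic space but makes each propagation step a cheap table lookup.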

Journal Article
01 Mar 1970 - Language
TL;DR: A theory of speech acts is proposed: a theory of the structure of illocutionary acts rather than a theory of language.
Abstract: Part I. A Theory of Speech Acts: 1. Methods and scope 2. Expressions, meaning and speech acts 3. The structure of illocutionary acts 4. Reference as a speech act 5. Predication Part II. Some Applications of the Theory: 6. Three fallacies in contemporary philosophy 7. Problems of reference 8. Deriving 'ought' from 'is' Index.

6,645 citations
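
The structural core of the theory, that an illocutionary act applies a force F to a propositional content p (the F(p) analysis), is the shape later borrowed by agent communication built from performatives plus content. The sketch below merely renders that structure as data; the class, the enum, and the three sample forces are hypothetical illustrations, not Searle's inventory.

from dataclasses import dataclass
from enum import Enum

class Force(Enum):
    ASSERT = "assert"    # commits the speaker to the truth of p
    REQUEST = "request"  # attempts to get the hearer to bring p about
    PROMISE = "promise"  # commits the speaker to bringing p about

@dataclass(frozen=True)
class SpeechAct:
    force: Force  # the F in F(p)
    content: str  # the propositional content p

# The same content under different forces yields different acts:
SpeechAct(Force.ASSERT, "the door is open")
SpeechAct(Force.REQUEST, "the door is open")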

Book Chapter
TL;DR: In this paper, the authors consider the problem of reasoning about whether a strategy will achieve a goal in a deterministic world and present a method to construct a sentence of first-order logic which will be true in all models of certain axioms if and only if a certain strategy can achieve a certain goal.
Abstract: A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted. Designing such a program requires commitments about what knowledge is and how it is obtained. Thus, some of the major traditional problems of philosophy arise in artificial intelligence. More specifically, we want a computer program that decides what to do by inferring in a formal language that a certain strategy will achieve its assigned goal. This requires formalizing concepts of causality, ability, and knowledge. Such formalisms are also considered in philosophical logic. The first part of the paper begins with a philosophical point of view that seems to arise naturally once we take seriously the idea of actually making an intelligent machine. We go on to the notions of metaphysically and epistemologically adequate representations of the world and then to an explanation of can, causes, and knows in terms of a representation of the world by a system of interacting automata. A proposed resolution of the problem of free will in a deterministic universe and of counterfactual conditional sentences is presented. The second part is mainly concerned with formalisms within which it can be proved that a strategy will achieve a goal. Concepts of situation, fluent, future operator, action, strategy, result of a strategy and knowledge are formalized. A method is given of constructing a sentence of first-order logic which will be true in all models of certain axioms if and only if a certain strategy will achieve a certain goal. The formalism of this paper represents an advance over McCarthy (1963) and Green (1969) in that it permits proof of the correctness of strategies that contain loops and strategies that involve the acquisition of knowledge; and it is also somewhat more concise. The third part discusses open problems in extending the formalism of part 2. The fourth part is a review of work in philosophical logic in relation to problems of artificial intelligence and a discussion of previous efforts to program 'general intelligence' from the point of view of this paper.

3,588 citations
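
The part-two formalization, situations, fluents, a result function taking an action and a situation to a successor situation, and strategies proved to achieve goals, can be sketched in a few lines. The toy below renders those concepts procedurally rather than in the paper's first-order logic; the one-fluent domain and all names are hypothetical.

S0 = ()  # initial situation: the empty history of actions

def result(action, s):
    # The situation reached by performing `action` in situation `s`.
    return s + (action,)

def light_on(s):
    # Fluent: a property whose truth value depends on the situation.
    return s.count("toggle") % 2 == 1

def achieves(strategy, goal, s):
    # Does executing `strategy` from `s` reach a situation in which
    # `goal` holds?  (The paper proves such claims as logical theorems.)
    for action in strategy:
        s = result(action, s)
    return goal(s)

assert achieves(["toggle"], light_on, S0)
assert not achieves(["toggle", "toggle"], light_on, S0)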

Book
01 Jan 1969
TL;DR: Most psychologists take it for granted that a scientific account of the behavior of organisms must begin with the definition of fixed, recognizable, elementary units of behavior, which is the essence of the highly successful strategy called scientific analysis.
Abstract: Most psychologists take it for granted that a scientific account of the behavior of organisms must begin with the definition of fixed, recognizable, elementary units of behavior—something a psychologist can use as a biologist uses cells, or an astronomer uses stars, or a physicist uses atoms, and so on. Given a simple unit, complicated phenomena are then describable as lawful compounds. That is the essence of the highly successful strategy called “scientific analysis.”

2,124 citations