
Showing papers by "Nils J. Nilsson published in 1994"


Posted Content
TL;DR: Teleo-reactive (T-R) programs, whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based, also support parameter binding and recursion.
Abstract: A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary difference between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We briefly describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.
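The core execution model described in the abstract — an ordered list of condition/action rules, continuously re-evaluated, with the first satisfied condition selecting the action — can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation; the names and the one-dimensional example world are invented for the sketch, and the real formalism adds durative actions, parameter binding, and recursion.

```python
# Minimal sketch of a teleo-reactive (T-R) program interpreter.
# A T-R program here is an ordered list of (condition, action) rules;
# on each sense-act cycle the interpreter fires the action of the FIRST
# rule whose condition holds in the current world state.

def tr_step(program, state):
    """Return the action of the first rule whose condition is true."""
    for condition, action in program:
        if condition(state):
            return action
    raise RuntimeError("no applicable rule (T-R programs normally end with a catch-all)")

# Toy example: drive an agent toward a goal position on a line.
goal = 10

program = [
    (lambda s: s["pos"] == goal, lambda s: s),                          # goal holds: null action
    (lambda s: s["pos"] < goal,  lambda s: {**s, "pos": s["pos"] + 1}), # move right
    (lambda s: True,             lambda s: {**s, "pos": s["pos"] - 1}), # catch-all: move left
]

state = {"pos": 7}
for _ in range(20):  # continuous feedback loop (bounded here for the demo)
    state = tr_step(program, state)(state)
```

Because conditions are re-tested on every cycle rather than once, the agent recovers automatically if the environment perturbs its state mid-run — the property the abstract calls continuous feedback.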

349 citations


Book
07 Mar 1994
TL;DR: It is shown that the probability of Q is under-determined in this case but can be bounded, as verified by a simple calculation on a Venn diagram.
Abstract: Before beginning the research that led to "Probabilistic logic" [11], I had participated with Richard Duda, Peter Hart, and Georgia Sutherland on the PROSPECTOR project [3]. There, we used Bayes' rule (with some assumptions about conditional independence) to deduce the probabilities of hypotheses about ore deposits given (sometimes uncertain) geologic evidence collected in the field [4]. At that time, I was also familiar with the use of "certainty factors" by Shortliffe [18], the use of "fuzzy logic" by Zadeh [20], and the Dempster/Shafer formalism [16]. All of these methods made (sometimes implicit and unacknowledged) assumptions about underlying joint probability distributions, and I wanted to know how the mathematics would work out if no such assumptions were made. I began by asking how modus ponens would generalize when one assigned probabilities (instead of binary truth values) to P and P ⊃ Q. As can be verified by simple calculations using a Venn diagram, the probability of Q is under-determined in this case but can be bounded as follows:
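The bounds the excerpt refers to are the standard ones for probabilistic modus ponens, which can be checked on a Venn diagram: since Q entails P ⊃ Q (i.e., ¬P ∨ Q), and P ∧ (P ⊃ Q) entails Q,

```latex
p(P) + p(P \supset Q) - 1 \;\le\; p(Q) \;\le\; p(P \supset Q)
```

The lower bound follows from p(A ∧ B) ≥ p(A) + p(B) − 1 applied to A = P and B = P ⊃ Q; the upper bound holds because the region for Q lies inside the region for ¬P ∨ Q.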

70 citations


Book
07 Mar 1994
TL;DR: The robot, the environment, and the tasks performed by the system were sufficiently paradigmatic to enable initial explorations of many core issues in the development of intelligent autonomous systems.
Abstract: During the late 1960s and early 1970s, an enthusiastic group of researchers at the SRI AI Laboratory focused their energies on a single experimental project in which a mobile robot was being developed that could navigate and push objects around in a multi-room environment (Nilsson [11]). The project team consisted of many people over the years, including Steve Coles, Richard Duda, Richard Fikes, Tom Garvey, Cordell Green, Peter Hart, John Munson, Nils Nilsson, Bert Raphael, Charlie Rosen, and Earl Sacerdoti. The hardware consisted of a mobile cart, about the size of a small refrigerator, with touch-sensitive "feelers", a television camera, and an optical range-finder. The cart was capable of rolling around an environment consisting of large boxes in rooms separated by walls and doorways; it could push the boxes from one place to another in its world. Its suite of programs consisted of those needed for visual scene analysis (it could recognize boxes, doorways, and room corners), for planning (it could plan sequences of actions to achieve goals), and for converting its plans into intermediate-level and low-level actions in its world. When the robot moved, its television camera shook so much that it became affectionately known as "Shakey the Robot". The robot, the environment, and the tasks performed by the system were quite simple by today's standards, but they were sufficiently paradigmatic to enable initial explorations of many core issues in the development of intelligent autonomous systems. In particular, they provided the context and motivation for development of the A* search algorithm (Hart et al. [7]), the STRIPS (Fikes and Nilsson [4]) and ABSTRIPS (Sacerdoti [14]) planning systems, programs for generalizing and learning macro-operators

33 citations