
Showing papers by "Sebastian Thrun published in 1993"


Journal ArticleDOI
TL;DR: It is argued that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios, and two approaches are presented that both capture invariant knowledge about the robot and its environments.

600 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: First results are presented on COLUMBUS, an autonomous mobile robot that uses an instance-based learning technique to explore and model initially unknown environments efficiently while avoiding collisions with obstacles.
Abstract: The first results on COLUMBUS, an autonomous mobile robot, are presented. COLUMBUS operates in initially unknown structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments which the robot is assumed to face. Once trained, these networks allow for the transfer of knowledge across different environments the robot will face over its lifetime. Exploration is achieved by navigating to low confidence regions. A dynamic programming method is employed in the background to find minimal-cost paths that, when executed by the robot, maximize exploration.
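The abstract gives no code, but the exploration scheme it describes (dynamic programming over a map to find minimal-cost paths into low-confidence regions) can be illustrated as value iteration on a grid. Everything below, from the grid discretization and function names to the 4-neighbour motion model, is an illustrative assumption rather than COLUMBUS's actual implementation:

```python
import numpy as np

def exploration_values(confidence, free, gamma=0.95, eps=1e-4):
    """Value iteration over a grid map. Cells where the environment model
    has low confidence act as rewards, so following the value gradient
    steers the robot toward unexplored regions along minimal-cost paths."""
    reward = (1.0 - confidence) * free   # exploring unknown free cells pays off
    V = np.zeros_like(reward)
    while True:
        P = np.pad(V, 1)                 # zero border: no phantom neighbours
        nbr_max = np.maximum.reduce([P[:-2, 1:-1], P[2:, 1:-1],
                                     P[1:-1, :-2], P[1:-1, 2:]])
        V_new = np.where(free, reward + gamma * nbr_max, 0.0)
        if np.abs(V_new - V).max() < eps:
            return V_new
        V = V_new

def greedy_step(V, free, pos):
    """Move to the traversable neighbour with the highest exploration value."""
    r, c = pos
    nbrs = [(i, j) for i, j in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
            if 0 <= i < V.shape[0] and 0 <= j < V.shape[1] and free[i, j]]
    return max(nbrs, key=lambda ij: V[ij]) if nbrs else pos
```

Replanning after each batch of sensor readings, as the confidence map rises, reproduces in miniature the explore-until-modelled behaviour described in the abstract.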

130 citations


01 May 1993
TL;DR: This paper describes an approach to neural network rule extraction based on Validity Interval Analysis (VI-Analysis), a generic tool for extracting symbolic knowledge from Backpropagation-style artificial neural networks, and presents techniques for generating and testing rule hypotheses.
Abstract: Although connectionist learning procedures have been applied successfully to a variety of real-world scenarios, artificial neural networks have often been criticized for exhibiting a low degree of comprehensibility. Mechanisms that automatically compile neural networks into symbolic rules offer a promising perspective for overcoming this practical shortcoming of neural network representations. This paper describes an approach to neural network rule extraction based on Validity Interval Analysis (VI-Analysis). VI-Analysis is a generic tool for extracting symbolic knowledge from Backpropagation-style artificial neural networks. It does this by propagating whole intervals of activations through the network in both the forward and backward directions. In the context of rule extraction, these intervals are used to prove or disprove the correctness of conjectured rules. We describe techniques for generating and testing rule hypotheses, and demonstrate these on some simple classification tasks, including the MONK's benchmark problems. Rules extracted by VI-Analysis are provably correct. No assumptions are made about the topology of the network at hand, nor about the procedure employed for training it.
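As a rough illustration of the forward half of such an interval analysis, the sketch below pushes a box of input activations through a feedforward sigmoid network and checks whether a conjectured rule is certified. The function names and the single output threshold are assumptions for the example; the full VI-Analysis also propagates intervals backward and refines them, which is omitted here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate_intervals(weights, biases, lo, hi):
    """Push a box [lo, hi] of input activations through a feedforward
    sigmoid network, layer by layer, keeping per-unit bounds that are
    guaranteed to contain every reachable activation."""
    for W, b in zip(weights, biases):        # W: (n_out, n_in), b: (n_out,)
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lin_lo = pos @ lo + neg @ hi + b     # each weight picks the interval
        lin_hi = pos @ hi + neg @ lo + b     # endpoint its sign calls for
        lo, hi = sigmoid(lin_lo), sigmoid(lin_hi)  # sigmoid is monotone
    return lo, hi

def rule_certified(weights, biases, lo, hi, out_unit, threshold=0.5):
    """Conjectured rule: 'inputs in the box => output unit above threshold'.
    If the certified lower bound clears the threshold, the rule holds for
    every input in the box; the bounds are sound, though not exact."""
    out_lo, _ = propagate_intervals(weights, biases, lo, hi)
    return out_lo[out_unit] >= threshold
```

Because the bounds are sound, a rule accepted this way is correct for the entire box, matching the provable-correctness claim in the abstract; a rejected rule may merely have bounds that are too loose.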

122 citations


Proceedings Article
28 Aug 1993
TL;DR: A learning method is presented that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations; based on a neural network representation of domain knowledge, it is robust to errors in the domain theory.
Abstract: Many researchers have noted the importance of combining inductive and analytical learning, yet we still lack combined learning methods that are effective in practice. We present here a learning method that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations. This method, called explanation-based neural network learning (EBNN), is based on a neural network representation of domain knowledge. Explanations are constructed by chaining together inferences from multiple neural networks. In contrast with symbolic approaches to explanation-based learning which extract weakest preconditions from the explanation, EBNN extracts the derivatives of the target concept with respect to the training example features. These derivatives summarize the dependencies within the explanation, and are used to bias the inductive learning of the target concept. Experimental results on a simulated robot control task show that EBNN requires significantly fewer training examples than standard inductive learning. Furthermore, the method is shown to be robust to errors in the domain theory, operating effectively over a broad spectrum from very strong to very weak domain theories.
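The key mechanism, biasing induction with derivatives extracted from the explanation, can be sketched as a slope-fitting loss in the style of TangentProp. The function name, the use of PyTorch autograd, and the single slope weight mu (EBNN actually weights the slope term per example by how accurate the domain theory appears) are all assumptions for this sketch:

```python
import torch

def ebnn_loss(net, x, y, slopes, mu=0.5):
    """Fit the observed target values AND the target derivatives extracted
    from the domain-theory explanation (slope fitting a la TangentProp).

    x:      (N, D) training inputs
    y:      (N, 1) observed target values
    slopes: (N, D) d(target)/d(input), obtained by chaining gradients
            through the domain-theory networks (assumed given here)
    mu:     weight of the slope term, a single constant in this sketch
    """
    x = x.requires_grad_(True)
    pred = net(x)                                        # (N, 1)
    grad = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    value_err = ((pred - y) ** 2).mean()                 # inductive fit
    slope_err = ((grad - slopes) ** 2).mean()            # analytical bias
    return value_err + mu * slope_err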

55 citations


Book ChapterDOI
27 Jun 1993
TL;DR: The EBNN algorithm is summarized, the correspondence between this neural-network-based EBL method and EBL methods based on symbolic representations is explored, and EBNN's robustness to errors in the domain theory is highlighted.
Abstract: Explanation-based learning has typically been considered a symbolic learning method. An explanation-based learning method that uses purely neural network representations (called EBNN) has recently been developed, and has been shown to have several desirable properties, including robustness to errors in the domain theory. This paper briefly summarizes the EBNN algorithm, then explores the correspondence between this neural-network-based EBL method and EBL methods based on symbolic representations.

24 citations