
Showing papers on "Unsupervised learning" published in 1986


Journal ArticleDOI
TL;DR: This paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework, which is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations.
Abstract: Induction of a concept description given noisy instances is difficult and is further exacerbated when the concepts may change over time. This paper presents a solution which has been guided by psychological and mathematical results. The method is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations. Two learning processes incrementally modify this description. One adjusts the characterization weights and another creates new characterizations. The latter process is described in terms of a search through the space of possibilities and is shown to require linear space with respect to the number of attribute-value pairs in the description language. The method utilizes previously acquired concept definitions in subsequent learning by adding an attribute for each learned concept to instance descriptions. A program called STAGGER fully embodies this method, and this paper reports on a number of empirical analyses of its performance. Since understanding the relationships between a new learning method and existing ones can be difficult, this paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework.
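As a loose illustration of the idea of a distributed concept description built from weighted, symbolic characterizations (this is a toy weighted-vote sketch of my own, not STAGGER's actual weighting scheme, and the features and parameters are invented):

```python
def matches(instance, concept, threshold=0.0):
    """Concept membership as a weighted vote of the symbolic
    characterizations (feature names) present in the instance."""
    score = sum(w for feature, w in concept.items() if feature in instance)
    return score > threshold

def train_step(instance, is_positive, concept, lr=0.5):
    """Incrementally adjust characterization weights; an unseen feature
    spawns a new characterization with weight zero before adjustment."""
    for feature in instance:
        delta = lr if is_positive else -lr
        concept[feature] = concept.get(feature, 0.0) + delta

# Hypothetical noisy training stream of (feature set, label) pairs.
concept = {}
for inst, label in [({"red", "round"}, True),
                    ({"red", "square"}, False),
                    ({"blue", "round"}, True)]:
    train_step(inst, label, concept)
```

After these three instances, "round" carries the dominant positive weight, so the description classifies round instances as members even though no single feature was labeled as the concept.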

490 citations


Journal ArticleDOI
King-Sun Fu
TL;DR: The basic concept of learning control is introduced, and the following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) Stochastic automata models.
Abstract: The basic concept of learning control is introduced. The following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models. Potential applications and problems for further research in learning control are outlined.

121 citations


Book
31 Dec 1986
TL;DR: A novel algorithm is examined that combines aspects of reinforcement learning and a data-directed search for useful weights, and is shown to outperform reinforcement-learning algorithms.
Abstract: The difficulties of learning in multilayered networks of computational units have limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed over the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in a consistent manner to several tasks. Recently developed algorithms, including Rumelhart et al.'s error back-propagation algorithm and Barto et al.'s reinforcement-learning algorithms, learn the solutions to the tasks much more successfully than methods of the past. A novel algorithm is examined that combines aspects of reinforcement learning and a data-directed search for useful weights, and is shown to outperform reinforcement-learning algorithms. A connectionist framework for the learning of strategies is described which combines the error back-propagation algorithm for learning in hidden units with Sutton's AHC algorithm to learn evaluation functions and with a reinforcement-learning algorithm to learn search heuristics. The generality of this hybrid system is demonstrated through successful applications.
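The core idea behind Sutton's AHC — a learned evaluation serving as a reinforcement baseline — can be caricatured in a few lines. This is an illustrative reinforcement-comparison sketch of my own, not the dissertation's algorithm; the two-action task and all payoff values are invented:

```python
import random

random.seed(0)  # deterministic run for illustration

def reinforcement_comparison(reward_fns, episodes=2000, lr=0.1, explore=0.1):
    """Action preferences move by (reward - baseline); the baseline is a
    crude learned evaluation tracking the average reward received."""
    pref = [0.0] * len(reward_fns)
    baseline = 0.0
    for _ in range(episodes):
        if random.random() < explore:
            a = random.randrange(len(reward_fns))  # occasional exploration
        else:
            a = max(range(len(reward_fns)), key=lambda i: pref[i])  # greedy
        r = reward_fns[a]()
        pref[a] += lr * (r - baseline)   # reinforce relative to expectation
        baseline += 0.05 * (r - baseline)
    return pref

# Two actions with fixed payoffs 0.2 and 0.8 (hypothetical task).
pref = reinforcement_comparison([lambda: 0.2, lambda: 0.8])
```

Because the baseline rises toward the better payoff, the inferior action eventually yields a negative reinforcement signal whenever it is tried, so its preference falls below the better action's.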

115 citations


Book
01 Jun 1986
TL;DR: The Judge: A Case-Based Reasoning System and some Approaches to Knowledge Acquisition are reviewed.
Abstract: Judge: A Case-Based Reasoning System.- Changing Language While Learning Recursive Descriptions from Examples.- Learning by Disjunctive Spanning.- Transfer of Knowledge between Teaching and Learning Systems.- Some Approaches to Knowledge Acquisition.- Analogical Learning with Multiple Models.- The World Modelers Project: Objectives and Simulator Architecture.- The Acquisition of Procedural Knowledge through Inductive Learning.- Learning Static Evaluation Functions by Linear Regression.- Plan Invention and Plan Transformation.- A Brief Overview of Explanatory Schema Acquisition.- The EG Project: Recent Progress.- Learning Causal Relations.- Functional Properties and Concept Formation.- Explanation-Based Learning in Logic Circuit Design.- A Proposed Method of Conceptual Clustering for Structured and Decomposable Objects.- Exploiting Functional Vocabularies to Learn Structural Descriptions.- Combining Numeric and Symbolic Learning Techniques.- Learning by Understanding Analogies.- Analogical Reasoning in the Context of Acquiring Problem Solving Expertise.- Planning and Learning in a Design Domain: The Problems Plan Interactions.- Inference of Incorrect Operators.- A Conceptual Framework for Concept Identification.- Neural Modeling as One Approach to Machine Learning.- Steps Toward Building a Dynamic Memory.- Learning by Composition.- Knowledge Acquisition: Investigations and General Principles.- Purpose-Directed Analogy: A Summary of Current Research.- Development of a Framework for Contextual Concept Learning.- On Safely Ignoring Hypotheses.- A Model of Acquiring Problem Solving Expertise.- Another Learning Problem: Symbolic Process Prediction.- Learning at LRI Orsay.- Coper: A Methodology for Learning Invariant Functional Descriptions.- Using Experience as a Guide for Problem Solving.- Heuristics as Invariants and its Application to Learning.- Components of Learning in a Reactive Environment.- The Development of Structures through Interaction.- Complex 
Learning Environments: Hierarchies and the use of Explanation.- Prediction and Control in an Active Environment.- Better Information Retrieval through Linguistic Sophistication.- Machine Learning Research in the Artificial Intelligence Laboratory at Illinois.- Overview of the Prodigy Learning Apprentice.- A Learning Apprentice System for VLSI Design.- Generalizing Explanations of Narratives into Schemata.- Why Are Design Derivations Hard to Replay?.- An Architecture for Experiential Learning.- Knowledge Extraction through Learning from Examples.- Learning Concepts with a Prototype-Based Model for Concept Representation.- Recent Progress on the Mathematician's Apprentice Project.- Acquiring Domain Knowledge from Fragments of Advice.- Calm: Contestation for Argumentative Learning Machine.- Directed Experimentation for Theory Revision and Conceptual Knowledge Acquisition.- Goal-Free Learning by Analogy.- A Scientific Approach to Practical Induction.- Exploring Shifts of Representation.- Current Research on Learning in Soar.- Learning Concepts in a Complex Robot World.- Learning Evaluation Functions.- Learning from Data with Errors.- Explanation-Based Manipulator Learning.- Learning Classical Physics.- Views and Causality in Discovery: Modelling Human Induction.- Learning Control Information.- An Investigation of the Nature of Mathematical Discovery.- Learning How to Reach a Goal: A Strategy for the Multiple Classes Classification Problem.- Conceptual Clustering Of Structured Objects.- Learning in Intractable Domains.- On Compiling Explainable Models of a Design Domain.- What Can Be Learned?.- Learning Heuristic Rules from Deep Reasoning.- Learning a Domain Theory by Completing Explanations.- Learning Implementation Rules with Operating-Conditions Depending on Internal Structures in VLSI Design.- Overview of the Odysseus Learning Apprentice.- Learning from Exceptions in Databases.- Learning Apprentice Systems Research at Schlumberger.- Language Acquisition: Learning 
Phrases in Context.- References.

108 citations


Book
03 Jan 1986
TL;DR: In this article, competitive learning is applied to parallel networks of neuron-like elements to discover salient, general features which can be used to classify a set of stimulus input patterns, and these feature detectors form the basis of a multilayer system that serves to learn categorizations of stimulus sets which are not linearly separable.
Abstract: This paper reports the results of our studies with an unsupervised learning paradigm which we have called "Competitive Learning." We have examined competitive learning using both computer simulation and formal analysis and have found that when it is applied to parallel networks of neuron-like elements, many potentially useful learning tasks can be accomplished. We were attracted to competitive learning because it seems to provide a way to discover the salient, general features which can be used to classify a set of patterns. We show how a very simple competitive mechanism can discover a set of feature detectors which capture important aspects of the set of stimulus input patterns. We also show how these feature detectors can form the basis of a multilayer system that can serve to learn categorizations of stimulus sets which are not linearly separable. We show how the use of correlated stimuli can serve as a kind of "teaching" input to the system to allow the development of feature detectors which would not develop otherwise. Although we find the competitive learning mechanism a very interesting and powerful learning principle, we do not, of course, imagine that it is the only learning principle. Competitive learning is an essentially nonassociative statistical learning scheme. We certainly imagine that other kinds of learning mechanisms will be involved in the building of associations among patterns of activation in a more complete neural network. We offer this analysis of these competitive learning mechanisms to further our understanding of how simple adaptive networks can discover features important in the description of the stimulus environment in which the system finds itself.
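The winner-take-all mechanism described above can be sketched minimally (a toy implementation of my own, not the paper's simulations; the patterns and parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(patterns, n_units=2, lr=0.1, epochs=50):
    """Winner-take-all learning: the unit with the largest response to a
    pattern shifts its (normalized) weights toward that pattern."""
    w = rng.random((n_units, patterns.shape[1]))
    w /= w.sum(axis=1, keepdims=True)        # each unit's weights sum to 1
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmax(w @ x)        # competition: biggest response wins
            w[winner] += lr * (x / x.sum() - w[winner])
    return w

# Two clusters of binary stimulus patterns (hypothetical data).
patterns = np.array([[1, 1, 0, 0], [1, 1, 0, 0],
                     [0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)
w = competitive_learning(patterns)
```

After training, each unit responds most strongly to one of the two clusters: the units have become feature detectors for the stimulus classes without any labels being supplied.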

105 citations


Proceedings Article
11 Aug 1986
TL;DR: A learning system that employs two different representations: one for learning and one for performance, and many fewer training instances are required to learn the concept, the biases of the learning program are very simple, and the learning system requires virtually no "vocabulary engineering" to learn concepts in a new domain.
Abstract: The task of inductive learning from examples places constraints on the representation of training instances and concepts. These constraints are different from, and often incompatible with, the constraints placed on the representation by the performance task. This incompatibility explains why previous researchers have found it so difficult to construct good representations for inductive learning—they were trying to achieve a compromise between these two sets of constraints. To address this problem, we have developed a learning system that employs two different representations: one for learning and one for performance. The learning system accepts training instances in the "performance representation," converts them into a "learning representation" where they are inductively generalized, and then maps the learned concept back into the "performance representation." The advantages of this approach are (a) many fewer training instances are required to learn the concept, (b) the biases of the learning program are very simple, and (c) the learning system requires virtually no "vocabulary engineering" to learn concepts in a new domain.

63 citations


Proceedings Article
01 Jan 1986
TL;DR: In this article, a soft matching function is provided by the vector-based [2, 3, 4] and the fuzzy set [5, 6] models to rank documents with respect to the degree of similarity between a surrogate document and the query.
Abstract: The fundamental problem in Information Retrieval (IR) is to identify the relevant documents from the nonrelevant ones in a collection of documents according to a particular user's information needs. One of the major difficulties in modelling information retrieval is to choose an appropriate (knowledge) representation of the content of an individual document. For example, it is common to describe each document by a set of (weighted) index terms or keywords obtained from an automatic indexing scheme [1, 2, 3]. Since these index terms (or some other similar "constructs") provide us only with partial knowledge about the contents of the documents, it is unrealistic to expect that the system would identify without uncertainty only those documents the user needs. Thus, any relevance judgment based on the surrogate documents and some highly model-dependent retrieval strategy is bound to be uncertain. In this regard, the search strategy adopted in the standard Boolean model, for example, used in most commercial systems is generally considered to be too restrictive. On the other hand, a soft matching function is provided by the vector-based [2, 3, 4] and the fuzzy set [5, 6] models to rank documents with respect to the degree of similarity between a surrogate document and the query. In these approaches, it is believed that the query formulated by the user (using whatever query language is provided by a particular IR model) can in fact accurately reflect a user's information requirements. However, in practice, more often than not a user may not be able to describe in a precise way the characteristics of the relevant information items even with a query language of sufficient expressive power [7]. This drawback, to some extent, can be remedied by incorporating some intuitive relevance feedback procedure [8] to improve the query formulation.
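The soft matching idea of the vector-based model can be made concrete with a standard cosine-similarity sketch (the toy vocabulary, weights, and document names are invented; this is not the paper's own system):

```python
import math

def cosine(u, v):
    """Degree of similarity between a surrogate document vector and a query."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Weighted index-term vectors over a toy vocabulary
# ("learning", "retrieval", "fuzzy") -- purely illustrative.
docs = {
    "d1": [0.9, 0.1, 0.0],
    "d2": [0.0, 0.8, 0.6],
}
query = [0.0, 1.0, 0.5]

# Rank documents by degree of similarity rather than a hard Boolean match.
ranking = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
```

Unlike a Boolean predicate, every document receives a graded score, so partially matching documents are ranked rather than discarded outright.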

26 citations



01 Feb 1986
TL;DR: This paper considers four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - describing each in terms of a proposed framework of four component tasks involved in learning from experience.
Pat Langley, Jaime G. Carbonell
Abstract: In this paper, we review recent progress in the field of machine learning and examine its implications for computational models of language acquisition. As a framework for understanding this research, we propose four component tasks involved in learning from experience - aggregation, clustering, characterization, and storage. We then consider four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - describing each in terms of our framework. After this, we turn to the problem of grammar acquisition, relating this problem to other learning tasks and reviewing four AI systems that have addressed the problem. Finally, we note some limitations of the earlier work and propose an alternative approach to modeling the mechanisms underlying language acquisition.

17 citations


Proceedings ArticleDOI
01 Sep 1986
TL;DR: In this chapter, a soft matching function is provided by the vector-based and the fuzzy set models to rank documents with respect to the degree of similarity between a surrogate document and the query.

16 citations


Journal ArticleDOI
01 Mar 1986
TL;DR: It is shown that unsupervised learning is adequate to compute converging estimates of the mean values of the MN random classification costs, one for each combination of M classes and N decisions.
Abstract: Pattern recognition with unknown costs of classification is formulated as a problem of adaptively learning the optimal scheme starting from an ad hoc decision scheme. It is shown that unsupervised learning is adequate to compute converging estimates of the mean values of the MN random classification costs, one for each combination of M classes and N decisions. The quantities required for estimation are 1) the decision taken, 2) the outcome of the cost random variable corresponding to the unknown class and the implemented decision, and 3) the a posteriori probabilities of all the classes. Some of the variations of the above learning scheme are discussed. An application of the proposed methodology for adaptively improving the performance of pattern-recognition trees is presented along with simulation results.
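A hedged sketch of how such estimates might be maintained: each observed cost is credited to every class in proportion to its a posteriori probability, giving a running posterior-weighted mean per (class, decision) pair. This illustrates the general idea only; it is not the paper's exact estimator, and the class/cost values below are invented:

```python
import numpy as np

class MeanCostEstimator:
    """Running estimates of the M*N mean classification costs, updated
    without ever observing the true class of a sample."""

    def __init__(self, M, N):
        self.mean = np.zeros((M, N))  # estimated mean cost per (class, decision)
        self.mass = np.zeros((M, N))  # accumulated posterior weight

    def update(self, decision, observed_cost, posteriors):
        """Credit the observed cost to every class, weighted by its
        a posteriori probability given the observation."""
        for i, p in enumerate(posteriors):
            if p <= 0.0:
                continue
            self.mass[i, decision] += p
            self.mean[i, decision] += (
                p * (observed_cost - self.mean[i, decision]) / self.mass[i, decision]
            )

est = MeanCostEstimator(M=2, N=2)
est.update(decision=0, observed_cost=5.0, posteriors=[1.0, 0.0])
est.update(decision=1, observed_cost=3.0, posteriors=[0.5, 0.5])
```

Each update uses exactly the three quantities the abstract lists: the decision taken, the observed cost, and the a posteriori class probabilities.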

Book ChapterDOI
01 Jun 1986
TL;DR: It is proposed that a neural modeling approach is reasonable for investigating certain low-level learning processes such as are exhibited by invertebrates, which include habituation, sensitization, classical conditioning, and operant conditioning.
Abstract: In this paper I propose that a neural modeling approach is reasonable for investigating certain low-level learning processes such as are exhibited by invertebrates. These include habituation, sensitization, classical conditioning, and operant conditioning. Recent work in invertebrate neurophysiology has begun to provide much knowledge about the underlying mechanisms of learning in these animals. Guided by these findings, I am constructing simulated organisms which will display these basic forms of learning.
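Of the low-level processes listed, habituation is simple enough to caricature in a few lines (a toy exponential-decay model of my own, not the author's simulated organisms; the decay rate is invented):

```python
def habituate(n_presentations, strength=1.0, decay=0.7):
    """Habituation: the response to a repeatedly presented (harmless)
    stimulus decreases with each presentation."""
    responses = []
    for _ in range(n_presentations):
        responses.append(strength)
        strength *= decay  # response strength decays after each exposure
    return responses

responses = habituate(5)
```

The same skeleton extends naturally to sensitization (a strong stimulus boosting the response) or to recovery of the response after a rest period.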


Book ChapterDOI
01 Jan 1986
TL;DR: Among several models of neural networks, layered structures are particularly appealing as they lead naturally to a hierarchical representation of the input sets, along with a reduced connectivity between individual cells, and the filtering properties of the network can be continuously tuned.
Abstract: Among several models of neural networks(1–4), layered structures are particularly appealing as they lead naturally to a hierarchical representation of the input sets, along with a reduced connectivity between individual cells. In Ref. 3 and 4, it was shown that such layered networks are able to memorize complicated input patterns, such as alphabetic characters, during unsupervised learning. On top of that, the filtering properties of the network can be continuously tuned from very sharp discrimination between similar patterns, to broad class aggregation when the selectivity of the cells is decreased. Unfortunately, it was also shown(4) that these properties are obtained with a reduced stability of the learning (the learning process does not converge for some values of the selectivity).