
Showing papers on "Active learning (machine learning)" published in 1986


Journal ArticleDOI
TL;DR: This paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework, which is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations.
Abstract: Induction of a concept description given noisy instances is difficult and is further exacerbated when the concepts may change over time. This paper presents a solution which has been guided by psychological and mathematical results. The method is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations. Two learning processes incrementally modify this description. One adjusts the characterization weights and another creates new characterizations. The latter process is described in terms of a search through the space of possibilities and is shown to require linear space with respect to the number of attribute-value pairs in the description language. The method utilizes previously acquired concept definitions in subsequent learning by adding an attribute for each learned concept to instance descriptions. A program called STAGGER fully embodies this method, and this paper reports on a number of empirical analyses of its performance. Since understanding the relationships between a new learning method and existing ones can be difficult, this paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework.
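As a rough sketch of the idea of a distributed concept description built from weighted, symbolic characterizations, with one process adjusting the weights incrementally from labelled instances, the fragment below may help. The class name, the additive weight update, and the decision threshold are illustrative assumptions, not STAGGER's actual weighting scheme.

```python
# Illustrative sketch: a concept represented as a set of weighted,
# symbolic characterizations (attribute-value tests) whose weights are
# adjusted incrementally from labelled instances. The simple additive
# update below is an assumption for illustration only.

class WeightedConcept:
    def __init__(self, characterizations):
        # characterizations: list of (attribute, value) tests
        self.characterizations = list(characterizations)
        self.weights = [0.0] * len(self.characterizations)

    def predict(self, instance):
        # Weighted vote of the characterizations that match the instance.
        score = sum(w for (attr, val), w in
                    zip(self.characterizations, self.weights)
                    if instance.get(attr) == val)
        return score > 0.0

    def update(self, instance, label):
        # Reward characterizations that agree with the label,
        # penalize those that disagree (one incremental pass).
        for i, (attr, val) in enumerate(self.characterizations):
            if instance.get(attr) == val:
                self.weights[i] += 1.0 if label else -1.0


concept = WeightedConcept([("color", "red"), ("size", "small")])
concept.update({"color": "red", "size": "large"}, True)
print(concept.predict({"color": "red", "size": "small"}))
```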

490 citations


01 Jun 1986
TL;DR: The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search.
Abstract: Rumelhart, Hinton and Williams (Rumelhart 86) describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in weight space. We give examples of the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. We give some preliminary results on scaling and show how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results illustrate the effects on learning speed of the amount of interaction between the weights. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, we discuss the relationship between our iterative networks and the analog networks described by Hopfield and Tank (Hopfield 85). The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search.
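The descent acceleration mentioned in the abstract is often realized as a momentum term on the weight updates; the sketch below shows back-propagation with momentum in a tiny layered network of sigmoid units. The XOR task, network size, learning rate, and momentum value are assumptions for illustration, not the paper's filters or experiments.

```python
import numpy as np

# Minimal sketch of gradient descent with a momentum ("acceleration")
# term in a small layered network of sigmoid units, trained on XOR.
# The architecture and hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
lr, mom = 0.5, 0.9

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through the layered network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared-error measure.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Momentum: each step carries a damped copy of the previous step,
    # which speeds descent along ravines of the error surface.
    vW2 = mom * vW2 - lr * (h.T @ d_out)
    vb2 = mom * vb2 - lr * d_out.sum(axis=0)
    vW1 = mom * vW1 - lr * (X.T @ d_h)
    vb1 = mom * vb1 - lr * d_h.sum(axis=0)
    W2 += vW2
    b2 += vb2
    W1 += vW1
    b1 += vb1

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```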

370 citations


Journal ArticleDOI
King-Sun Fu
TL;DR: The basic concept of learning control is introduced, and the following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) Stochastic automata models.
Abstract: The basic concept of learning control is introduced. The following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models. Potential applications and problems for further research in learning control are outlined.
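Of the five schemes listed, stochastic approximation is perhaps the easiest to sketch. The Robbins-Monro iteration below estimates the root of an unknown, noisily observed response using a decreasing gain sequence; the particular response function, noise level, and gain schedule are assumptions chosen for illustration.

```python
import random

# Sketch of stochastic approximation (Robbins-Monro): iteratively
# estimate the root of an unknown regression function from noisy
# observations, using a gain sequence a_n = 1/n.

def noisy_measurement(x):
    # Unknown system response observed with additive noise;
    # the true root of E[g(x)] = 0 is at x = 2.
    return (x - 2.0) + random.gauss(0.0, 0.1)

x = 0.0
for n in range(1, 2001):
    gain = 1.0 / n                     # decreasing gain sequence
    x -= gain * noisy_measurement(x)   # move against the observed error

print(round(x, 3))   # converges toward the true root, 2.0
```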

121 citations


Book
01 May 1986
Machine Learning Applications in Expert Systems and Information Retrieval

115 citations


Proceedings Article
11 Aug 1986
TL;DR: A learning system is described that employs two different representations, one for learning and one for performance; as a result, many fewer training instances are required to learn a concept, the biases of the learning program are very simple, and the learning system requires virtually no "vocabulary engineering" to learn concepts in a new domain.
Abstract: The task of inductive learning from examples places constraints on the representation of training instances and concepts. These constraints are different from, and often incompatible with, the constraints placed on the representation by the performance task. This incompatibility explains why previous researchers have found it so difficult to construct good representations for inductive learning—they were trying to achieve a compromise between these two sets of constraints. To address this problem, we have developed a learning system that employs two different representations: one for learning and one for performance. The learning system accepts training instances in the "performance representation," converts them into a "learning representation" where they are inductively generalized, and then maps the learned concept back into the "performance representation." The advantages of this approach are (a) many fewer training instances are required to learn the concept, (b) the biases of the learning program are very simple, and (c) the learning system requires virtually no "vocabulary engineering" to learn concepts in a new domain.
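A minimal sketch of the two-representation idea, assuming a toy domain: instances arrive in a performance representation (raw corner coordinates), are converted into a symbolic learning representation where a simple conjunctive generalization is induced, and the learned concept is applied back at the performance level. The feature vocabulary and generalization rule are illustrative assumptions, not the authors' system.

```python
# Sketch of learning with two representations: a "performance
# representation" (rectangles given by corner coordinates) and a
# "learning representation" (symbolic attribute-value pairs).

def to_learning_rep(rect):
    # Convert a performance-level instance into symbolic features.
    x1, y1, x2, y2 = rect
    w, h = abs(x2 - x1), abs(y2 - y1)
    return {"shape": "square" if w == h else "oblong",
            "size": "large" if w * h > 10 else "small"}

def generalize(instances):
    # Simple conjunctive generalization: keep only attribute values
    # shared by every positive training instance.
    first, *rest = [to_learning_rep(r) for r in instances]
    return {a: v for a, v in first.items()
            if all(other[a] == v for other in rest)}

def matches(rect, concept):
    # Map a performance-level instance into the learning
    # representation and test it against the learned concept.
    feats = to_learning_rep(rect)
    return all(feats.get(a) == v for a, v in concept.items())

concept = generalize([(0, 0, 2, 2), (1, 1, 4, 4)])   # both small squares
print(concept)
print(matches((0, 0, 3, 3), concept), matches((0, 0, 2, 6), concept))
```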

63 citations


Proceedings Article
01 Jan 1986
TL;DR: In this article, a soft matching function is provided by the vector-based [2, 3, 4] and the fuzzy set [5, 6] models to rank documents with respect to the degree of similarity between a surrogate document and the query.
Abstract: The fundamental problem in Information Retrieval (IR) is to identify the relevant documents from the nonrelevant ones in a collection of documents according to a particular user's information needs. One of the major difficulties in modelling information retrieval is to choose an appropriate (knowledge) representation of the content of an individual document. For example, it is common to describe each document by a set of (weighted) index terms or keywords obtained from an automatic indexing scheme [1, 2, 3]. Since these index terms (or some other similar "constructs") provide us only with partial knowledge about the contents of the documents, it is unrealistic to expect that the system would identify without uncertainty only those documents the user needs. Thus, any relevance judgment based on the surrogate documents and some highly model-dependent retrieval strategy is bound to be uncertain. In this regard, the search strategy adopted in the standard Boolean model, for example, used in most commercial systems, is generally considered to be too restrictive. On the other hand, a soft matching function is provided by the vector-based [2, 3, 4] and the fuzzy set [5, 6] models to rank documents with respect to the degree of similarity between a surrogate document and the query. In these approaches, it is believed that the query formulated by the user (using whatever query language is provided by a particular IR model) can in fact accurately reflect a user's information requirements. However, in practice, more often than not a user may not be able to describe in a precise way the characteristics of the relevant information items even with a query language of sufficient expressive power [7]. This drawback, to some extent, can be remedied by incorporating some intuitive relevance feedback procedure [8] to improve the query formulation.
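A minimal sketch of the soft matching the abstract contrasts with strict Boolean retrieval: weighted term vectors ranked by cosine similarity, followed by a simple Rocchio-style relevance feedback step. The toy collection, term weights, and the specific feedback formula are assumptions; the abstract cites a relevance feedback procedure [8] without specifying one.

```python
import math

# Rank documents by cosine similarity between weighted term vectors,
# then nudge the query toward a document the user judged relevant.

def cosine(a, b):
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in terms)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "d1": {"retrieval": 1.0, "boolean": 0.8},
    "d2": {"retrieval": 0.9, "fuzzy": 0.7, "ranking": 0.5},
    "d3": {"parsing": 1.0, "grammar": 0.9},
}
query = {"retrieval": 1.0, "ranking": 0.3}

print(sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True))

# Relevance feedback: add a fraction of the terms of a relevant
# document to the query and re-rank (beta = 0.5 is illustrative).
beta, relevant = 0.5, docs["d2"]
for term, w in relevant.items():
    query[term] = query.get(term, 0.0) + beta * w
print(sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True))
```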

26 citations


Pat Langley, Jaime G. Carbonell
01 Feb 1986
TL;DR: This paper considers four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - and describes each in terms of a proposed framework of four component tasks involved in learning from experience.
Abstract: In this paper, we review recent progress in the field of machine learning and examine its implications for computational models of language acquisition. As a framework for understanding this research, we propose four component tasks involved in learning from experience - aggregation, clustering, characterization, and storage. We then consider four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - describing each in terms of our framework. After this, we turn to the problem of grammar acquisition, relating this problem to other learning tasks and reviewing four AI systems that have addressed the problem. Finally, we note some limitations of the earlier work and propose an alternative approach to modeling the mechanisms underlying language acquisition.

17 citations


Proceedings ArticleDOI
01 Sep 1986
TL;DR: In this chapter, a soft matching function is provided by the vector-based and the fuzzy set models to rank documents with respect to the degree of similarity between a surrogate document and the query.

16 citations


Patent
Matsuo Amano, Seiji Suda, Nobuo Satou
18 Jul 1986
TL;DR: In this paper, a learning map on which the rewriting of data is performed in accordance with the results of an internal combustion engine controlling operation is provided, and the controlling of the engine is performed on the basis of the data written in the learning map.
Abstract: A learning map is provided on which data is rewritten in accordance with the results of an internal combustion engine control operation. When more than one piece of data has been written in this learning map, a corresponding weighting is applied to each unlearned region on the basis of the learning data in the already-learned regions, and data derived from the already-learned regions is written into the unlearned regions. The engine is then controlled on the basis of the data written in the learning map.
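A rough sketch of the learning-map idea, assuming a small two-dimensional grid of control values and inverse-distance weighting when filling unlearned regions from already-learned ones; the grid, the values, and the weighting rule are illustrative, not the patented scheme.

```python
# Fill unlearned cells of a "learning map" from already-learned cells,
# each learned cell weighted by the inverse of its distance.

def fill_unlearned(grid):
    learned = {cell: v for cell, v in grid.items() if v is not None}
    filled = dict(grid)
    for cell, value in grid.items():
        if value is None and len(learned) > 1:
            weights = {c: 1.0 / (abs(c[0] - cell[0]) + abs(c[1] - cell[1]))
                       for c in learned}
            total = sum(weights.values())
            filled[cell] = sum(weights[c] * learned[c] for c in learned) / total
    return filled

# 3x3 map of, say, fuel-correction values; None marks unlearned regions.
grid = {(i, j): None for i in range(3) for j in range(3)}
grid[(0, 0)] = 1.00   # values learned from actual engine operation
grid[(2, 2)] = 1.20

for cell, v in sorted(fill_unlearned(grid).items()):
    print(cell, round(v, 3))
```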

8 citations


Journal ArticleDOI
TL;DR: An algorithm for learning class parameters using a restricted updating programme is described, along with an investigation of its convergence for optimum learning.

7 citations


Journal ArticleDOI
TL;DR: Application of optimal control for planning a dynamic economy calls for the specification and estimation of a proper econometric model; passive learning control takes into consideration the parameter covariances but ignores the covariance between the state variables and the parameters of the system, whereas active learning also takes these covariances into account.
Abstract: Application of optimal control, for planning a dynamic economy, calls for the specification and estimation of a proper econometric model. This estimation is based upon information available up to the current period. Future information, however, should be taken into consideration, in order to allow the policy maker to adjust his response by revising the model according to the new information. In this case, we may have at each time instant an updating process of the economic system and an analogous revision process of the plan. In the control literature this type of control may be called either a passive learning or active learning process. In the first method, we may take into consideration the parameter covariances, but we ignore the covariance between the state variables and the parameters of the system. In the second method we take into account the above covariances, as well as future covariances of the state and control variables. This consideration of the future perturbations allows the establishing o...


Book ChapterDOI
22 Sep 1986
TL;DR: A natural language understanding program that integrates syntactic and semantic processing is presented; this article concerns syntactic parsing, using the deterministic analysis principle (see (Rady) for the justification).
Abstract: Our goal is to construct a natural language "understanding" program which integrates syntactic/semantic processing. The present article is about syntactic parsing. Of the various algorithms proposed (Winograd), we prefer the deterministic analysis principle (see (Rady) for the justification). In order to recognize the diverse grammatical templates of the French language, the processing rules are necessarily complex.

Journal ArticleDOI
TL;DR: It is argued that more machine learning researchers should focus their efforts on modeling human behavior, but it is not argued that the field should limit itself to this approach.
Abstract: Although science can be characterized in terms of search, some search methods let one explore multiple paths in parallel. We have argued that more machine learning researchers should focus their efforts on modeling human behavior, but we have not argued that the field should limit itself to this approach. For those interested in general principles, the study of nonhuman learning methods is also necessary for useful results. In terms of applications, some of machine learning's greatest achievements have involved nonincremental methods that are clearly poor models of human learning. Planes are terrible imitations of birds (and fly less efficiently), but there are still excellent reasons for using aircraft. However, we do believe that too little research has focused on results from the literature on human learning, and that greater attention in this direction would benefit the field as a whole. Science is a complex and bewildering process, and the scientist should employ all available knowledge to direct his steps in useful directions. This strategy seems especially important in young fields like machine learning, in which conflicting views and methods abound. We encourage the reader to join us in applying machine learning techniques to explain the mysteries of human behavior, and in using knowledge of human behavior to constrain our computational theories of learning.

Book ChapterDOI
01 Jun 1986
TL;DR: Precondition Analysis is a technique for learning control information from a single example that can be used in domains where operator inversion is difficult or impossible.
Abstract: Precondition Analysis is a technique for learning control information from a single example. Unlike many other analytic learning techniques, it can be used in domains where operator inversion is difficult or impossible. Precondition Analysis has been implemented in the domain of equation solving. The author is currently extending this work in various directions.

Book ChapterDOI
01 Jun 1986
TL;DR: It is proposed that a neural modeling approach is reasonable for investigating certain low-level learning processes such as are exhibited by invertebrates, which include habituation, sensitization, classical conditioning, and operant conditioning.
Abstract: In this paper I propose that a neural modeling approach is reasonable for investigating certain low-level learning processes such as are exhibited by invertebrates. These include habituation, sensitization, classical conditioning, and operant conditioning. Recent work in invertebrate neurophysiology has begun to provide much knowledge about the underlying mechanisms of learning in these animals. Guided by these findings, I am constructing simulated organisms which will display these basic forms of learning.
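As an illustration of one of the low-level processes named, the sketch below simulates habituation: the response to a repeated stimulus decays and partially recovers during rest. The decay and recovery rates are assumptions, not parameters taken from the neurophysiological findings the author cites.

```python
# Toy habituation model: response strength decays with repeated
# stimulation and recovers spontaneously when the stimulus is withheld.

def simulate(stimuli, decay=0.7, recovery=0.1):
    strength = 1.0              # efficacy of the reflex pathway
    responses = []
    for present in stimuli:
        if present:
            responses.append(strength)     # response scales with efficacy
            strength *= decay              # repeated use depresses it
        else:
            responses.append(0.0)
            strength = min(1.0, strength + recovery)   # partial recovery
    return responses

# Ten stimulations, a rest period, then stimulation again.
pattern = [1] * 10 + [0] * 5 + [1] * 3
print([round(r, 2) for r in simulate(pattern)])
```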


Journal ArticleDOI
TL;DR: A learning controller is outlined that exploits an analogy between a current unsolved problem and a similar but previously solved problem to simplify its search for a solution; simulation indicates that by using an analogy it can speed the search for input sequences that transfer initial states to goal states.
Abstract: This paper proposes a new method for knowledge acquisition, knowledge representation, generalization, and reasoning in learning control. A learning controller that exploits an analogy between a current unsolved problem and a similar but previously solved problem to simplify its search for a solution is outlined. As an example, this controller is applied to the control of multi-link systems, and simulation indicates that by using an analogy it can speed the search for input sequences that transfer initial states to goal states.
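A minimal sketch of the analogy idea under strong simplifying assumptions: the controller first replays the input sequence remembered from a similar solved problem and falls back to blind breadth-first search only if the replay misses the goal. The one-dimensional plant and the replay strategy are illustrative, not the paper's multi-link controller.

```python
from collections import deque

# Reuse a remembered input sequence from an analogous problem before
# resorting to search from scratch.

ACTIONS = (-1, 0, 1)

def step(state, u):
    return max(-10, min(10, state + u))   # simple bounded plant

def blind_search(start, goal):
    # Breadth-first search over input sequences (no analogy used).
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, inputs = frontier.popleft()
        if state == goal:
            return inputs
        for u in ACTIONS:
            nxt = step(state, u)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, inputs + [u]))
    return None

def solve_by_analogy(start, goal, remembered):
    # Replay the remembered solution first; keep it if it works.
    state = start
    for u in remembered:
        state = step(state, u)
    if state == goal:
        return remembered
    return blind_search(start, goal)       # fall back to blind search

first = blind_search(0, 4)                 # solved from scratch
second = solve_by_analogy(2, 6, first)     # analogous problem, plan reused
print(first, second)
```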