
Showing papers on "Unsupervised learning published in 1970"


Journal ArticleDOI
King-Sun Fu1
TL;DR: The basic concept of learning control is introduced, and the following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models.
Abstract: The basic concept of learning control is introduced. The following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models. Potential applications and problems for further research in learning control are outlined.

204 citations
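Of the five schemes surveyed, stochastic approximation is the easiest to show in miniature. Below is a minimal sketch under assumptions of my own (a toy scalar problem, not anything from the paper): a Robbins-Monro iteration that learns a controller gain from noisy gradient measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
target = 2.0  # unknown optimal controller gain (invented toy value)

theta = 0.0
for n in range(1, 2001):
    # Noisy measurement of the performance gradient at the current gain.
    grad = (theta - target) + rng.normal(0.0, 0.5)
    # Robbins-Monro step: the decreasing gain sequence 1/n gives convergence.
    theta -= grad / n

print("learned gain:", theta)
```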


Journal ArticleDOI
TL;DR: This paper suggests a learning scheme, "learning with a probabilistic teacher," which works with unclassified samples and is computationally feasible for many practical problems.
Abstract: The Bayesian learning scheme is computationally infeasible for most unsupervised learning problems. This paper suggests a learning scheme, "learning with a probabilistic teacher," which works with unclassified samples and is computationally feasible for many practical problems. In this scheme, a sample is probabilistically assigned to a class, with the assignment probabilities computed from all the information available; the sample is then used to learn the parameter values under that class assignment. The convergence of the scheme is established and a comparison with the best linear estimator is presented.

121 citations
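A minimal sketch of the probabilistic-teacher idea, under assumptions not in the paper (1-D data, two classes with unit variance and unknown means, equal priors): each unlabeled sample draws a label at random from its posterior, then updates the parameters as if that label were true.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class 1-D Gaussian problem with unknown means, known unit variance.
true_means = np.array([-2.0, 2.0])
labels = rng.integers(0, 2, size=500)
samples = rng.normal(true_means[labels], 1.0)

means = np.array([-0.5, 0.5])   # initial guesses for the class means
counts = np.ones(2)             # pseudo-counts seeding the running averages

for x in samples:
    # Posterior class probabilities under the current estimates.
    lik = np.exp(-0.5 * (x - means) ** 2)
    post = lik / lik.sum()
    # Probabilistic teacher: sample a label from the posterior.
    k = rng.choice(2, p=post)
    # Learn the mean of the chosen class via a running average.
    counts[k] += 1
    means[k] += (x - means[k]) / counts[k]

print("estimated means:", means)
```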


Journal ArticleDOI
TL;DR: It is demonstrated that identification theory implies unsupervised learning is possible in many important cases, and a general method is presented that is effective for all the many cases in which unsupervised learning is known to be possible.
Abstract: The first portion of this paper is tutorial. Beginning with a standard definition of an abstract pattern-recognition machine, "learning" is given a mathematical meaning and the distinction is made between supervised and unsupervised learning. The bibliography will help the interested reader retrace the history of learning in pattern recognition. The exposition now focuses attention on unsupervised learning. Carefully, it is explained how problems in this subject can be viewed as problems in the identification of finite mixtures, a statistical theory that has achieved some maturity. From this vantage point, it is demonstrated that identification theory implies unsupervised learning is possible in many important cases. The remaining sections present a general method for achieving unsupervised learning. Other authors have proposed schemes having greater computational convenience, but no method previously published is as inclusive as the one revealed here, which we demonstrate to be effective for all the many cases wherein unsupervised learning is known to be possible.

54 citations
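The mixture-identification viewpoint can be illustrated with the now-standard EM algorithm for a two-component Gaussian mixture. This is a modern stand-in of my choosing, not the estimator the paper develops:

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

w = np.array([0.5, 0.5])    # mixing proportions
mu = np.array([-1.0, 1.0])  # component means
var = np.array([1.0, 1.0])  # component variances

for _ in range(50):
    # E-step: responsibility of each component for each sample.
    dens = w * np.exp(-0.5 * (data[:, None] - mu) ** 2 / var) \
        / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the mixture parameters from soft assignments.
    n_k = resp.sum(axis=0)
    w = n_k / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k

print("weights:", w, "means:", mu, "variances:", var)
```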


Journal ArticleDOI
TL;DR: Simple mathematical expressions are derived for the improvement in supervised learning provided by additional nonsupervised learning when the number of learning samples is large so that asymptotic approximations are appropriate.
Abstract: This paper treats an aspect of the learning or estimation phase of statistical pattern recognition (and adaptive statistical decision making in general). Simple mathematical expressions are derived for the improvement in supervised learning provided by additional nonsupervised learning when the number of learning samples is large so that asymptotic approximations are appropriate. The paper consists largely of the examination of a specific example, but, as is briefly discussed, the same procedure can be applied to other parametric problems and generalization to nonparametric problems seems possible. The example treated has the additional interesting aspect that the data does not have structure that would enable the machine to learn in the nonsupervised mode alone; but the additional nonsupervised learning can provide substantial improvement over the results obtainable by supervised learning alone. A second purpose of the paper is to suggest that a new fruitful area of research is the analytical study of the possible benefits of combining supervised and nonsupervised learning.

35 citations
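A hedged illustration of the combined-learning theme, using a toy model of my own construction rather than the paper's example: class k has density N(theta + d_k, 1) with known offsets d_k and equal priors, so unlabeled samples also carry information about the unknown location theta, and pooling them with a few labeled samples sharpens the estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.5                               # unknown location parameter
offsets = np.array([-1.0, 1.0])           # known class offsets, equal priors

lab_y = rng.integers(0, 2, 20)            # a few labeled samples
lab_x = rng.normal(theta + offsets[lab_y], 1.0)
unl_y = rng.integers(0, 2, 2000)          # many unlabeled samples
unl_x = rng.normal(theta + offsets[unl_y], 1.0)

# Supervised estimate: subtract the known offset, then average.
theta_sup = np.mean(lab_x - offsets[lab_y])

# Combined estimate: with symmetric offsets and equal priors the unlabeled
# samples have mean theta, so naive pooling still estimates theta.
theta_comb = np.mean(np.concatenate([lab_x - offsets[lab_y], unl_x]))

print(f"supervised only: {theta_sup:.3f}, combined: {theta_comb:.3f}, "
      f"true: {theta}")
```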



Journal ArticleDOI
TL;DR: A non-parametric, unsupervised learning technique that uses a relation matrix to classify binary pattern vectors presented in random sequence, and that produces decision surfaces of a reasonable form.

5 citations
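The entry gives few details; as a loose guess at the flavor, the sketch below accumulates a bit-co-occurrence "relation matrix" from binary vectors presented in random order and scores new vectors against it. All specifics here are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0],   # two invented binary classes
                   [0, 0, 0, 0, 1, 1, 1, 1]])
R = np.zeros((8, 8))                           # "relation" (co-occurrence) matrix

# Present noisy binary vectors in random sequence and accumulate R.
for _ in range(200):
    v = protos[rng.integers(0, 2)].copy()
    flips = rng.random(8) < 0.1                # flip each bit with prob 0.1
    v[flips] = 1 - v[flips]
    R += np.outer(v, v)                        # count jointly active bit pairs

def score(v):
    # Vectors whose active bits frequently co-occur score high.
    return float(v @ R @ v)

print("in-class:   ", score(protos[0]))
print("cross-class:", score(np.array([1, 1, 0, 0, 1, 1, 0, 0])))
```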


Journal ArticleDOI
TL;DR: A previously unnoticed similarity is reported between an associative memory model recently proposed for the brain and some electronic systems that have been developed for practical learning machines.
Abstract: The letter reports on a previously unnoticed similarity between an associative memory model recently proposed for the brain and some electronic systems that have been developed for practical learning machines. This similarity is analysed, and its implications are discussed. Results of a comparison between the learning behaviour of an electronic network and a group of human subjects in a pattern-recognition task are given to complete the comparative study.

4 citations
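The letter names no equations here; the classic correlation-matrix associative memory is a plausible stand-in for the kind of brain model being compared (an assumption on my part, not necessarily the letter's model):

```python
import numpy as np

rng = np.random.default_rng(9)
keys = rng.choice([-1, 1], size=(3, 32))   # stored key patterns
vals = rng.choice([-1, 1], size=(3, 32))   # associated recollections

# Hebbian correlation matrix: superposition of key/value outer products.
M = vals.T @ keys

probe = keys[0].copy()
probe[:3] *= -1                            # corrupt three bits of a key
recalled = np.sign(M @ probe)              # one-shot associative recall
print("recall matches stored pattern:", np.array_equal(recalled, vals[0]))
```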


Proceedings ArticleDOI
C. Hilborn1
01 Dec 1970
TL;DR: The Bayes optimal m-ary digital communication receiver structure is derived using a recursive formula from the theory of unsupervised learning pattern classification; the receiver is optimal for a model that includes intersymbol interference, Markov symbol and noise sequences, and unknown parameters.
Abstract: The Bayes optimal m-ary digital communication receiver structure is derived using a recursive formula from the theory of unsupervised learning pattern classification. The receiver structure is optimal for a model which includes intersymbol interference, Markov symbol and noise sequences, and unknown parameters. The optimum receiver is found as a function of the noise density. In the particular case of Gauss-Markov noise, the receiver is shown to consist of (1) discrete-time pre-whitening, (2) correlation, (3) energy correction, (4) exponentiation, and (5) delay-feedback "filtering" followed by zero-memory linear operations and minimum selection.

3 citations
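The numbered stages suggest the following rough sketch for first-order Gauss-Markov (AR(1)) noise. The delay-feedback stage that handles Markov symbol dependence is omitted, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
a, sigma = 0.8, 1.0                         # AR(1) noise parameters (invented)
templates = np.array([[1., 1., 1., 1.],     # waveform for symbol 0
                      [1., -1., 1., -1.]])  # waveform for symbol 1

def whiten(x):
    # Stage 1: discrete-time pre-whitening (AR(1) innovations).
    return np.concatenate(([x[0]], x[1:] - a * x[:-1]))

def receive(x):
    xw = whiten(x)
    sw = np.array([whiten(s) for s in templates])
    corr = sw @ xw                               # stage 2: correlation
    energy = 0.5 * (sw ** 2).sum(axis=1)         # stage 3: energy correction
    post = np.exp((corr - energy) / sigma ** 2)  # stage 4: exponentiation
    return int(np.argmax(post))                  # final minimum-error selection

# Send symbol 1 through the Gauss-Markov noise channel and decode it.
noise = np.empty(4)
noise[0] = rng.normal(0.0, sigma)
for t in range(1, 4):
    noise[t] = a * noise[t - 1] + rng.normal(0.0, sigma)
print("decoded symbol:", receive(templates[1] + noise))
```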


Journal ArticleDOI
01 Jan 1970
TL;DR: A method combining unsupervised learning of lexical pattern frequencies with semantic information is proposed, aimed at improving the resolution of PP attachment ambiguity.
Abstract: In this paper, we propose a method combining unsupervised learning of lexical frequencies with semantic information, aiming at improving PP attachment ambiguity resolution. Using the output of a robust parser, i.e. the set of all possible attachments for a given sentence, we query the Web and obtain statistical information about the frequency distributions of the attachments as well as lexical signatures of the terms in the patterns. All this information is used to weight the dependencies yielded by the parser.

3 citations
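A toy sketch of the scoring idea: weight each candidate attachment by a frequency estimate for its lexical pattern. The counts below are invented; the paper obtains its statistics from Web queries, and its actual weighting scheme is richer.

```python
from collections import Counter

# Invented corpus counts for (head, preposition, object) patterns.
counts = Counter({
    ("eat", "with", "fork"): 120,        # verb attachment
    ("pizza", "with", "fork"): 3,        # noun attachment
    ("pizza", "with", "anchovies"): 95,
    ("eat", "with", "anchovies"): 8,
})

def best_attachment(candidates):
    # Prefer the attachment whose lexical pattern is most frequent,
    # with add-one smoothing for unseen patterns.
    return max(candidates, key=lambda c: counts[c] + 1)

# "ate pizza with a fork": does the PP attach to "eat" or to "pizza"?
print(best_attachment([("eat", "with", "fork"), ("pizza", "with", "fork")]))
```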


Journal ArticleDOI
TL;DR: An alternate method for achieving limited storage in unsupervised learning pattern recognition is proposed for a class of problems, and its performance is shown to converge to that of the optimum unlimited-storage system.
Abstract: In unsupervised learning pattern recognition problems, the need arises for updating conditional density functions of uncertain parameters using probability density function mixtures. In general, the form of the density mixtures is not reproducing, invoking the need for unlimited system storage requirements. One suboptimal method for achieving limited storage is to restrict the uncertain parameters in question to come from finite sets of values. An alternate method is proposed for a class of problems and its performance is shown to converge to that of the optimum unlimited storage system. A generalization of the procedure is also discussed.

1 citation
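A minimal sketch of the finite-set device the paper improves on, simplified here to a single unknown mean: restricting the parameter to a grid makes the posterior a finite vector that can be updated recursively with fixed storage.

```python
import numpy as np

rng = np.random.default_rng(4)
grid = np.linspace(-3, 3, 61)              # allowed parameter values (a grid)
post = np.full(grid.size, 1 / grid.size)   # uniform prior over the grid

true_mu = 1.2
for x in rng.normal(true_mu, 1.0, size=200):
    post *= np.exp(-0.5 * (x - grid) ** 2)  # likelihood at each grid value
    post /= post.sum()                      # renormalize; storage never grows

print("posterior mean:", float(post @ grid))
```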


Journal ArticleDOI
TL;DR: The learning rate for classifying parts produced by a machine with a varying parameter is analyzed, to investigate the effect of the learning rate on an unsupervised neural network applied to an inspection process.
Abstract: In this paper, we investigate the effect of the learning rate on an unsupervised neural network applied to an inspection process. The network we use is the Learning by Experience (LBE) network [1]. Here, we analyse the learning rate for classifying parts which are produced by a machine with a varying parameter. Experimental results using IC leadframes are included.
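The LBE network is not specified here; as an illustrative stand-in of my own, the sketch below uses a single competitive-learning prototype tracking a drifting machine parameter, showing how the learning rate trades tracking speed against noise sensitivity.

```python
import numpy as np

rng = np.random.default_rng(5)

def track(rate):
    proto, errs = 0.0, []
    for t in range(500):
        target = 0.01 * t                  # slowly drifting part dimension
        x = target + rng.normal(0.0, 0.2)  # noisy measurement of one part
        proto += rate * (x - proto)        # unsupervised prototype update
        errs.append((proto - target) ** 2)
    return np.mean(errs)

for rate in (0.01, 0.1, 0.5):
    print(f"rate={rate}: mean squared tracking error {track(rate):.4f}")
```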

Proceedings ArticleDOI
01 Dec 1970
TL;DR: Both deterministic decision directed learning and random decision directed learning algorithms for continuous data are obtained for optimal, unsupervised learning, adaptive pattern recognition of "lumped" gaussian signals in white gaussian noise.
Abstract: This paper constitutes Part II of a series of papers on adaptive pattern recognition and its applications. It pertains to optimal, unsupervised learning, adaptive pattern recognition of "lumped" gaussian signals in white gaussian noise. Specifically, both deterministic decision directed learning as well as random decision directed learning algorithms for continuous data are obtained. It is shown that the supervised learning results [1], in particular the partition theorem are applicable in the directed learning approach to the unsupervised case [2].