
Showing papers on "Unsupervised learning" published in 1969


Journal Article
TL;DR: The letter defines an algorithm for providing a good starting state for a learning machine and examines its performance in this initialisation task.
Abstract: Pattern recognition has usually been regarded as a two-part process requiring both measurement and classification. It has also been accepted by many workers that the classifier can best be designed using an automatic learning process. However, it is well known that many of the proposed learning schemes sometimes fail to make the most efficient use of the available storage. This difficulty usually arises whenever the initial classifier is a poor approximation to the teacher. The design of the classifier using such learning techniques would clearly be more successful if a method could be found of providing a good starting state for the learning machine. The letter defines such an algorithm and examines its performance in the task of initialising a learning machine.
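
The letter's algorithm itself is not given in the abstract; as a minimal sketch of the general idea, the following assumes (hypothetically) that a "good starting state" means a linear classifier initialised from the class means of a labelled seed set, then refined by an error-correction rule:

```python
# Illustrative sketch only: the letter's algorithm is not reproduced here.
# Assumption: "a good starting state" is taken to mean a linear classifier
# initialised from class means, then refined by a perceptron-style
# error-correction rule.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic pattern classes in a two-dimensional measurement space.
X0 = rng.normal(loc=-1.0, scale=1.0, size=(50, 2))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Good starting state: weights along the line joining the class means,
# with the decision boundary midway between them.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w = m1 - m0
b = -w @ (m0 + m1) / 2.0

# Error-correction refinement (perceptron rule).
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if w @ xi + b > 0 else 0
        if pred != yi:
            step = 1.0 if yi == 1 else -1.0
            w += step * xi
            b += step

acc = np.mean([(1 if w @ xi + b > 0 else 0) == yi for xi, yi in zip(X, y)])
print(f"training accuracy after initialised learning: {acc:.2f}")
```

Starting from the mean-difference direction typically leaves the error-correction phase far fewer mistakes to fix than a random start would.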

24 citations


Journal Article
TL;DR: A Bayes-optimal solution is found for sequential multicategory pattern recognition when unsupervised learning is required, and it is shown to be realizable in recursive form with fixed memory requirements.
Abstract: A recursive Bayes optimal solution is found for the problem of sequential multicategory pattern recognition when unsupervised learning is required. An unknown-parameter model is developed which, for the pattern classification problem, allows for 1) both constant and time-varying unknown parameters, 2) partially unknown probability laws of the hypotheses and time-varying parameter sequences, 3) dependence of the observations on past as well as present hypotheses and parameters, and, most significantly, 4) sequential dependencies in the observations arising from dependency in the pattern or information source (context dependence), in the observation medium (sequential measurement correlation), or both, these dependencies extending up to any finite Markov order. For finite parameter spaces, the solution that is Bayes optimal (minimum risk) at each step is found and shown to be realizable in recursive form with fixed memory requirements. The asymptotic properties of the optimal solution are studied, and conditions are established for the solution (in addition to making the best use of available data at each step) to converge in performance to operation with knowledge of the (unobservable) constant unknown parameters.
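
As an illustration of why a finite parameter space yields a fixed-memory recursion, the sketch below assumes (unlike the paper's Markov-dependent model) i.i.d. Gaussian observations with an unknown constant separation parameter confined to a finite set; the posterior over that set is a fixed-length vector updated at each step without supervision:

```python
# Illustrative sketch of the recursive, fixed-memory structure. Assumptions
# (unlike the paper's Markov-dependent model): i.i.d. scalar Gaussian
# observations, two equiprobable hypotheses, and an unknown constant
# separation parameter theta confined to a finite set. The posterior over
# theta is a fixed-length vector updated recursively at each step.
import numpy as np

rng = np.random.default_rng(1)

thetas = np.array([0.5, 1.0, 2.0])         # finite parameter space
post_theta = np.full(len(thetas), 1 / 3)   # prior over theta
prior_h = np.array([0.5, 0.5])             # P(H0), P(H1)
true_theta = 1.0

def lik(x, h, theta):
    """p(x | h, theta): unit-variance Gaussian, mean -theta (H0) or +theta (H1)."""
    mean = theta if h == 1 else -theta
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

correct, n = 0, 500
for _ in range(n):
    h_true = int(rng.integers(2))
    x = rng.normal((1 if h_true else -1) * true_theta, 1.0)

    # Bayes decision at this step, averaging the likelihood over the
    # current theta posterior (minimum risk under 0-1 loss).
    p_h = np.array([
        prior_h[h] * sum(post_theta[i] * lik(x, h, thetas[i])
                         for i in range(len(thetas)))
        for h in (0, 1)
    ])
    correct += int(p_h.argmax() == h_true)

    # Unsupervised recursive update (the true label is never observed):
    # p(theta | x_1..t) is proportional to
    # p(theta | x_1..t-1) * sum_h P(h) p(x_t | h, theta)
    marg = np.array([sum(prior_h[h] * lik(x, h, th) for h in (0, 1))
                     for th in thetas])
    post_theta *= marg
    post_theta /= post_theta.sum()

print("empirical accuracy:", correct / n)
print("posterior over theta:", dict(zip(thetas.tolist(), post_theta.round(3))))
```

Whatever the number of observations, the stored state is only the posterior vector over the finite parameter set, which is the sense in which the memory requirement stays fixed.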

16 citations


Journal Article
TL;DR: It is suggested that the methods and techniques employed in the project may be useful in mechanizing some problem-solving activities that can be reduced to pattern recognition, such as meteorological forecasting, medical diagnosis, traffic control, and so on.
Abstract: Five different but interrelated models of learning have been established within a complex computer program. These models incorporate mechanisms that optimize response patterns on algorithmic and heuristic bases; make abstractions at different levels; produce value judgements; recognize, modify, store, and retrieve geometrical patterns; and exhibit, in general, many aspects of intelligent behavior. Both the teacher and the learner are simulated in the machine. In one model, the program follows a qualitatively new kind of learning process in generating its own strategy and improving it on the basis of experience. The method enables the learner to exceed the playing quality of the teacher. It is suggested that the methods and techniques employed in the project may be useful in mechanizing some problem-solving activities that can be reduced to pattern recognition, such as meteorological forecasting, medical diagnosis, traffic control, and so on. No deliberate attempt has been made to imitate humans.

12 citations


Journal Article
Chi-Hau Chen
TL;DR: Efforts are made to simplify the implementation and improve the flexibility of Bayesian learning systems; using a truncated series expansion to represent a pattern class, a simplified structure with nearly optimal performance is shown.
Abstract: Efforts are made to simplify the implementation and to improve the flexibility of Bayesian learning systems. Using a truncated series expansion to represent a pattern class, a simplified structure is shown to give nearly optimal performance. A criterion for determining the learning sample size is given so that, after taking a sufficient number of learning observations, the system may elect to learn by itself without relying on external supervision. A time-varying random parameter is approximated by a polynomial with random coefficients. The Bayes estimates of the coefficients are obtained sequentially from the useful information in the learning observations. The condition for convergence of the unsupervised learning is established and shown to be closely related to the selection of the characteristic features. The system retains the same structure in both supervised and unsupervised learning processes with either a stationary or a time-varying random parameter.
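
One idea in the abstract, the system electing to learn by itself once enough supervised samples have arrived, can be sketched as decision-directed learning; the model, threshold, and names below are illustrative assumptions, not the paper's system:

```python
# Illustrative sketch (not the paper's system). Assumption: "learning by
# itself" is modelled as decision-directed learning: labelled samples are
# used until a preset sample size n_super is reached, after which the
# system's own decisions replace the external teacher.
import numpy as np

rng = np.random.default_rng(2)
n_super, n_unsup = 30, 300           # assumed sample-size criterion

mu_true = np.array([-1.5, 1.5])      # true class means (unknown to learner)
counts = np.zeros(2)
means = np.zeros(2)                  # sequential estimates of the class means

def update(label, x):
    """Sequential (running-mean) estimate, one observation at a time."""
    counts[label] += 1
    means[label] += (x - means[label]) / counts[label]

# Supervised phase: labels provided externally.
for _ in range(n_super):
    label = int(rng.integers(2))
    update(label, rng.normal(mu_true[label], 1.0))

# Unsupervised phase: the system's own decision is used as the label.
for _ in range(n_unsup):
    label = int(rng.integers(2))
    x = rng.normal(mu_true[label], 1.0)
    decision = int(abs(x - means[1]) < abs(x - means[0]))  # nearest mean
    update(decision, x)

print("estimated class means:", means.round(3), "true:", mu_true)
```

Note that the update rule is identical in both phases, which mirrors the abstract's point that the system retains the same structure whether or not external supervision is present.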

7 citations


Journal Article
TL;DR: Codes are constructed for communicating a two-letter alphabet over an unknown, stationary binary channel, and consistent estimators for the code words and their a priori probabilities are constructed at the receiver.
Abstract: Codes are constructed for communicating an alphabet consisting of two letters over an unknown, stationary binary channel. The code words, their a priori probabilities, and channel parameters are unknown at the receiver. Consistent estimators for the code words and their a priori probabilities are constructed at the receiver.
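
The paper's construction is not given in the abstract; as an illustrative sketch under assumed conditions (a binary symmetric channel with unknown crossover probability), the received words can be treated as a two-component mixture of bit-flipped codewords and estimated with an EM-style procedure:

```python
# Illustrative sketch, not the paper's construction. Assumptions: two unknown
# codewords sent through a binary symmetric channel with unknown crossover
# probability; the receiver sees only noisy words and recovers the codewords,
# their a priori probabilities, and the channel parameter with an EM-style
# mixture estimator.
import numpy as np

rng = np.random.default_rng(3)
n_bits, n_words, p_flip, prior1 = 16, 2000, 0.1, 0.3

codewords = rng.integers(0, 2, size=(2, n_bits))
labels = (rng.random(n_words) < prior1).astype(int)
received = codewords[labels] ^ (rng.random((n_words, n_bits)) < p_flip)

# Seed the two mixture components with two received words far apart in
# Hamming distance (very likely corrupted versions of different codewords).
first = received[0]
far = received[np.argmax((received != first).sum(axis=1))]
q = np.clip(np.stack([first, far]).astype(float), 0.1, 0.9)  # P(bit = 1)
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each word.
    log_lik = received @ np.log(q).T + (1 - received) @ np.log(1 - q).T
    log_post = np.log(pi) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate a priori probabilities and per-bit probabilities.
    pi = resp.mean(axis=0)
    q = np.clip((resp.T @ received) / resp.sum(axis=0)[:, None], 1e-3, 1 - 1e-3)

est = (q > 0.5).astype(int)  # round per-bit probabilities to codewords
disagree = np.stack([(received != est[j]).mean(axis=1) for j in (0, 1)], axis=1)
p_hat = (resp * disagree).sum() / n_words  # responsibility-weighted bit errors

print("estimated priors:", pi.round(3))
print("estimated crossover:", round(float(p_hat), 3))
print("codewords recovered:",
      all(any((est[j] == codewords[i]).all() for j in (0, 1)) for i in (0, 1)))
```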

2 citations