
Showing papers on "Unsupervised learning" published in 1968


Journal ArticleDOI
TL;DR: The main contributions of this paper are the extension of previous investigations of the unsupervised parametric pattern recognition problem to include cases where both constant and time-varying unknown parameter vectors are simultaneously present and the removal of the assumption of statistical independence between hypotheses for the sequence of observations.
Abstract: A Bayesian decision theory approach is applied to the solution of the problem of unsupervised parametric pattern recognition. The parametric model for this investigation includes the cases where both constant and time-varying unknown parameters are present, and, most significantly, the unknown hypotheses do not constitute a statistically independent sequence; they are restricted only to be from a source with finite-order Markov dependence. The resulting optimal learning system is found and shown to grow initially in size and memory until the Nth observation (where N is the highest Markov order), and subsequently to remain of fixed size and memory. It can, therefore, operate indefinitely and continue to improve its ability to recognize patterns while using only a fixed-size memory. In summary, the main contributions of this paper are the following: (1) the extension of previous investigations of the unsupervised parametric pattern recognition problem to include cases where both constant and time-varying unknown parameter vectors are simultaneously present; (2) the demonstration that the a priori probabilities of the hypotheses, the time-varying parameters, and their transition laws may, if constant, be expressed as functions of the constant unknown parameter and thus also be learned; and (3) the removal of the assumption of statistical independence between hypotheses for the sequence of observations.

23 citations
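The fixed-memory learning scheme described in the entry above can be illustrated with a toy first-order Markov case: a joint posterior over (unknown constant parameter, current hypothesis) is updated recursively, so memory stays fixed no matter how many observations arrive. Everything here (the finite candidate set for the parameter, the transition matrix, the unit-variance Gaussian class densities) is an illustrative assumption, not the paper's model:

```python
import numpy as np

def gauss(x, m):
    # Unit-variance Gaussian density
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

# Illustrative setup: two classes whose means are shifted by an unknown
# constant offset theta from a finite candidate set, with a first-order
# Markov hypothesis sequence (all numbers are assumptions).
rng = np.random.default_rng(0)
thetas = np.array([-1.0, 0.0, 1.0])          # finite parameter space
base_means = np.array([-2.0, 2.0])           # nominal class means
T = np.array([[0.9, 0.1],                    # Markov transition matrix
              [0.2, 0.8]])

# Simulate a hypothesis sequence and observations (true theta = 1.0)
true_theta = 1.0
h, xs = 0, []
for _ in range(300):
    h = rng.choice(2, p=T[h])
    xs.append(rng.normal(base_means[h] + true_theta, 1.0))

# Fixed-memory recursion: joint posterior over (theta, current hypothesis)
post = np.full((len(thetas), 2), 1.0 / 6.0)
for x in xs:
    pred = post @ T                          # one-step hypothesis prediction
    lik = gauss(x, base_means[None, :] + thetas[:, None])
    post = pred * lik
    post /= post.sum()                       # normalize; size never grows

# Marginal posterior over theta identifies the constant unknown parameter
theta_hat = thetas[post.sum(axis=1).argmax()]
```

The state carried between steps is only the 3-by-2 posterior array, which is the fixed-memory property the abstract emphasizes.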


Journal ArticleDOI
TL;DR: Using experimentally generated data and a method of estimating parameters developed by Massy, the linear learning model was tested as a predictor of brand repeat purchasing and the results indicate that the model may not be as useful as others have suggested.
Abstract: Using experimentally generated data and a method of estimating parameters developed by Massy, the linear learning model was tested as a predictor of brand repeat purchasing. The results differ from...

20 citations


Journal ArticleDOI
TL;DR: A "mixture approach" defined previously is used to define a class of unsupervised estimation problems and to state precisely the a priori knowledge used to define each problem.
Abstract: The unsupervised estimation problem has received considerable attention during the last three years. The problem usually considered, however, is only one of a class of unsupervised estimation problems. In this paper, a "mixture approach" defined previously is used to define this class of unsupervised estimation problems, and state precisely the a priori knowledge used to define each problem. After using available a priori knowledge to construct precisely the mixture appropriate to the unsupervised problem, the parameters characterizing this particular unsupervised problem can be estimated, or a Bayes minimum conditional risk receiver can be constructed. The class of unsupervised estimation problems includes the following cases: unknown number of pattern classes, dependent observation vectors, nonstationary class probabilities, more than one vector observation taken with a single class active, lack of synchronization, and unsupervised learning control and communications.

19 citations
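The mixture viewpoint in the entry above — model unlabeled data as a mixture built from a priori knowledge, then estimate the parameters characterizing the problem — can be sketched with a minimal EM-style estimator. The two-component, unit-variance Gaussian mixture with unknown mixing probability and means is an assumed illustration, not the paper's construction:

```python
import numpy as np

# Illustrative data: mixture of N(0,1) and N(4,1) with class probability 0.3
rng = np.random.default_rng(1)
n = 2000
labels = rng.random(n) < 0.3
x = np.where(labels, rng.normal(4.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Unsupervised estimation of the mixing probability p and the two means
p, m0, m1 = 0.5, -1.0, 1.0                   # initial guesses
for _ in range(100):
    # E-step: responsibility of component 1 for each sample
    l0 = (1 - p) * np.exp(-0.5 * (x - m0) ** 2)
    l1 = p * np.exp(-0.5 * (x - m1) ** 2)
    r = l1 / (l0 + l1)
    # M-step: re-estimate mixing probability and component means
    p = r.mean()
    m1 = (r * x).sum() / r.sum()
    m0 = ((1 - r) * x).sum() / (1 - r).sum()
```

No label is ever observed; the class probability and class-conditional means are recovered from the mixture density alone, which is the essence of the unsupervised estimation problem the abstract describes.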


Journal ArticleDOI
TL;DR: The asymptotic probability of error for an unsupervised learning system with two unknown mean vectors, which are estimated by "decision-directed" estimators, is investigated.
Abstract: The asymptotic probability of error for an unsupervised learning system with two unknown mean vectors, which are estimated by "decision-directed" estimators, is investigated. Theoretical solutions are obtained for various class probabilities and signal-to-noise ratios by utilizing a digital computer.

15 citations
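A decision-directed estimator of the kind analyzed above classifies each sample with the current estimates and then updates the estimate of the chosen class with that sample, so the means are learned without labels. The setup below (two 1-D classes at a high signal-to-noise ratio) is an illustrative sketch, not the paper's exact model:

```python
import numpy as np

# Illustrative data: two classes with unknown means -3 and +3, unit variance
rng = np.random.default_rng(2)
n = 5000
labels = rng.random(n) < 0.5
x = np.where(labels, rng.normal(3.0, 1.0, n), rng.normal(-3.0, 1.0, n))

m = np.array([-1.0, 1.0])                    # initial guesses for the means
counts = np.array([1, 1])
for xi in x:
    # Decide the class first, using the current estimates...
    k = int(abs(xi - m[1]) < abs(xi - m[0]))
    # ...then fold the sample into the running mean of the decided class
    counts[k] += 1
    m[k] += (xi - m[k]) / counts[k]
```

At high signal-to-noise ratio the decisions are almost always correct and the estimates converge near the true means; as the classes overlap more, misassigned samples bias the estimates, which is why the asymptotic error probability studied in the paper depends on the class probabilities and the signal-to-noise ratio.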


Patent
29 Nov 1968

13 citations


Journal ArticleDOI
01 Jun 1968
TL;DR: In this article, two versions of an unsupervised learning algorithm for pattern recognition are compared by means of numerical calculations based on two-dimensional ellipsoidal pattern distributions.
Abstract: Two versions of an unsupervised learning algorithm for pattern recognition are compared by means of numerical calculations based on two-dimensional ellipsoidal pattern distributions.

7 citations
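A numerical comparison of the kind described above can be sketched as follows: two elongated ("ellipsoidal") 2-D Gaussian pattern classes are clustered without labels by a batch and by a sequential (sample-by-sample) version of a nearest-mean rule, and the resulting decision accuracies are compared. All distributions, initializations, and the two specific algorithm variants here are illustrative assumptions, not the paper's:

```python
import numpy as np

# Two ellipsoidal 2-D pattern classes (illustrative parameters)
rng = np.random.default_rng(3)
cov = np.array([[4.0, 1.5], [1.5, 1.0]])     # elongated class shape
a = rng.multivariate_normal([-4.0, 0.0], cov, 500)
b = rng.multivariate_normal([4.0, 0.0], cov, 500)
x = np.vstack([a, b])
truth = np.repeat([0, 1], 500)

def accuracy(means):
    # Nearest-mean decision accuracy, insensitive to label swapping
    d = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    agree = (pred == truth).mean()
    return max(agree, 1.0 - agree)

# Version 1: batch nearest-mean updates (Lloyd-style iterations)
m = np.array([[-1.0, 0.0], [1.0, 0.0]])
for _ in range(10):
    d = np.linalg.norm(x[:, None, :] - m[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    m = np.array([x[pred == k].mean(axis=0) for k in (0, 1)])

# Version 2: sequential decision-directed updates
m2 = np.array([[-1.0, 0.0], [1.0, 0.0]])
counts = np.array([1, 1])
for xi in x[rng.permutation(len(x))]:
    k = int(np.linalg.norm(xi - m2[1]) < np.linalg.norm(xi - m2[0]))
    counts[k] += 1
    m2[k] += (xi - m2[k]) / counts[k]

acc_batch, acc_seq = accuracy(m), accuracy(m2)
```

With well-separated classes both variants do well; the interesting regime in such comparisons is when the ellipsoids overlap or are oriented toward one another, where the two update schedules can diverge in quality.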


Proceedings Article
01 Jan 1968

1 citation


Proceedings ArticleDOI
01 Dec 1968
TL;DR: A recursive Bayes optimal solution is found for the problem of sequential, multicategory pattern recognition, when unsupervised learning is required and is shown to be realizable in recursive form with fixed memory requirements.
Abstract: A recursive Bayes optimal solution is found for the problem of sequential, multicategory pattern recognition when unsupervised learning is required. An unknown-parameter model is developed which, for the pattern classification problem, allows for (i) both constant and time-varying unknown parameters, (ii) partially unknown probability laws of the hypotheses and time-varying parameter sequences, (iii) dependence of the observations on past as well as present hypotheses and parameters, and, most significantly, (iv) sequential dependencies in the observations arising from dependency in the pattern or information source (context dependence), in the observation medium (sequential measurement correlation), or both, these dependencies being of any finite Markov order. For finite parameter spaces, the solution that is Bayes optimal (minimum risk) at each step is found and shown to be realizable in recursive form with fixed memory requirements. The asymptotic properties of the optimal solution are studied, and conditions are established for the solution (in addition to making best use of available data at each step) to converge in performance to operation with knowledge of the (unobservable) constant unknown parameters.