
Showing papers on "Unsupervised learning" published in 1982


Journal ArticleDOI
TL;DR: This research investigates the problem of data compression using an unsupervised estimation algorithm; the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
Abstract: This research investigates the problem of data compression using an unsupervised estimation algorithm. It extends previous work on a hybrid source coder that combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme that is designed through an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed within a Bayesian structure. The decision-directed estimators, although not optimum, provide a realizable structure for estimating the parameters that define a distribution which has become active. These parameters are then used to scale the optimum (in the mean-square-error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. The approach was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
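
A minimal sketch of the decision-directed idea in a DPCM loop follows; it is not the paper's exact scheme. The trivial first-order predictor, the two Laplacian components with invented initial scales, and the smoothing constant of the recursive scale estimate are all assumptions made for illustration.

# Sketch (assumed details, not the authors' implementation): at each sample the
# prediction error's Laplacian likelihood decides which of two components is "active";
# that component's scale is updated recursively (for a Laplacian, E|e| equals the
# scale b) and used to rescale a fixed unit quantizer.
import numpy as np

def laplace_pdf(e, b):
    return np.exp(-abs(e) / b) / (2.0 * b)

def dd_dpcm_quantize(x, levels=np.array([-1.5, -0.5, 0.5, 1.5]), alpha=0.95):
    b = np.array([0.5, 2.0])        # assumed initial scales of the two mixture components
    pred, out = 0.0, []
    for s in x:
        e = s - pred                                          # DPCM prediction error
        k = int(laplace_pdf(e, b[1]) > laplace_pdf(e, b[0]))  # decision: active component
        b[k] = alpha * b[k] + (1 - alpha) * abs(e)            # recursive scale estimate
        q_levels = levels * b[k]                              # scale the unit quantizer
        e_q = q_levels[np.argmin(abs(q_levels - e))]          # nearest reconstruction level
        pred += e_q                                           # first-order predictor update
        out.append(pred)
    return np.array(out)

print(dd_dpcm_quantize(np.sin(np.linspace(0, 6, 20)))[:5])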

5 citations


Journal ArticleDOI
TL;DR: The analysis makes it possible to advance a hypothesis on the role of neurons with various types of receptive fields in information processing by a complex neuronal network.
Abstract: Two kinds of unsupervised learning processes occurring in formal neurons are analysed, and their relationship to supervised processes is also discussed. In the first case the neuron is considered as a filter that passes the signals occurring most frequently in the learning sequence {x[n]}. In the second case it is considered as a “detector of rareness” which, after a finite number of steps, operates as a filter passing only signals that occur rarely in the learning sequence {x[n]}. These two approaches result in different types of receptive fields of formal neurons. On the basis of the results obtained, it is possible to advance a hypothesis on the role of neurons with various types of receptive fields in information processing by a complex neuronal network.
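
The paper's formal-neuron equations are not reproduced here; the following toy sketch only illustrates the two behaviours described above under assumed learning rules: an Oja-style Hebbian update that tunes one unit to the most frequent input, and a habituation update that leaves a second unit responsive only to rare inputs.

# Illustrative sketch with assumed rules, patterns, and learning rate.
import numpy as np

rng = np.random.default_rng(0)
frequent = np.array([1.0, 0.0, 1.0, 0.0])
rare     = np.array([0.0, 1.0, 0.0, 1.0])
stream = [frequent if rng.random() < 0.9 else rare for _ in range(600)]

w_freq = rng.random(4) * 0.1   # "filter" neuron: becomes tuned to the frequent signal
w_rare = np.ones(4)            # "detector of rareness": starts responsive, habituates
eta = 0.01
for x in stream:
    y = w_freq @ x
    w_freq += eta * y * (x - y * w_freq)   # Oja-style Hebbian rule (assumed form)
    w_rare -= eta * (w_rare @ x) * x       # habituation suppresses frequent directions
    w_rare = np.clip(w_rare, 0.0, None)

print("responses to frequent pattern:", round(w_freq @ frequent, 2), round(w_rare @ frequent, 2))
print("responses to rare pattern:    ", round(w_freq @ rare, 2), round(w_rare @ rare, 2))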

5 citations


Journal ArticleDOI
TL;DR: It turns out that the iterative solution of the maximum likelihood equations has the best properties among the three approaches, but even this one fails to yield satisfactory results if the number of unknown parameters becomes large, as is usually the case in realistic problems of pattern recognition.
Abstract: Three well-known algorithms for unsupervised learning using a decision-directed approach are the random labeling of patterns according to the estimated a posteriori probabilities, the classification according to the estimated a posteriori probabilities, and the iterative solution of the maximum likelihood equations. The convergence properties of these algorithms are studied by using a sample of about 10 000 handwritten numerals. It turns out that the iterative solution of the maximum likelihood equations has the best properties among the three approaches. However, even this one fails to yield satisfactory results if the number of unknown parameters becomes large, as is usually the case in realistic problems of pattern recognition.
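
The three decision-directed schemes can be illustrated on a toy one-dimensional two-Gaussian mixture (the paper's handwritten-numeral setup is not reproduced; the priors, unit variances, sample sizes, and starting estimates below are assumptions):

# Each scheme re-estimates the class means from the same unlabeled sample in a different way.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])   # unlabeled sample

def posteriors(x, mu):
    lik = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)   # equal priors, unit variances
    return lik / lik.sum(axis=1, keepdims=True)

def means_from_labels(x, lab):
    return np.array([x[lab == k].mean() for k in (0, 1)])

def random_labeling(x, p):      # (1) draw each pattern's label from its estimated posterior
    return means_from_labels(x, (rng.random(len(x)) < p[:, 1]).astype(int))

def classification(x, p):       # (2) assign each pattern to the larger posterior
    return means_from_labels(x, p.argmax(axis=1))

def ml_equations(x, p):         # (3) posterior-weighted means (EM-style ML update)
    return (p * x[:, None]).sum(axis=0) / p.sum(axis=0)

for name, update in [("random labeling", random_labeling),
                     ("classification", classification),
                     ("ML equations", ml_equations)]:
    mu = np.array([-0.5, 0.5])  # deliberately poor starting estimates
    for _ in range(25):
        mu = update(x, posteriors(x, mu))
    print(f"{name:16s} -> estimated class means {np.round(mu, 2)}")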

5 citations


Journal ArticleDOI
TL;DR: Analytical expressions have been found for misclassification probability under Bayes' rule for the multivariate case under certain conditions of noise statistics, and the sensitivity of misclassification probability has been formulated to study the effect of small inaccuracies in parameter estimation.
Abstract: This work is concerned mainly with statistical pattern recognition in a noisy environment. Analytical expressions have been found for misclassification probability under Bayes' rule for the multivariate case under certain conditions of noise statistics. The case when the noise density is normal has been considered in detail and its properties studied with numerical results. The sensitivity of misclassification probability has been formulated to study the effect of small inaccuracies in parameter estimation on misclassification probability, for both ideal and noisy environments, and numerical results are presented for both cases. The problems of unsupervised learning and recognition, e.g. clustering, are also discussed for the noisy environment. The work is useful and important in practical pattern recognition problems.
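
A toy numerical sketch of the two quantities follows. It is univariate with equal priors, Gaussian class densities, and additive Gaussian noise; the paper's multivariate expressions are not reproduced, and all parameter values are assumed.

# With additive Gaussian noise of variance noise_s2, each class density stays normal with
# variance s2 + noise_s2, so the Bayes error is Phi(-|m1 - m0| / (2*sqrt(s2 + noise_s2))).
# Sensitivity to a small error in an estimated mean is probed by a finite difference.
from math import erf, sqrt

def bayes_error(m0, m1, s2, noise_s2):
    d = abs(m1 - m0) / (2.0 * sqrt(s2 + noise_s2))
    return 0.5 * (1.0 - erf(d / sqrt(2.0)))          # Phi(-d)

def threshold_error(m0, m1, s2, noise_s2, m1_hat):
    # error when the decision threshold is set from an inaccurate estimate of m1
    t = 0.5 * (m0 + m1_hat)
    s = sqrt(s2 + noise_s2)
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 0.5 * ((1.0 - phi((t - m0) / s)) + phi((t - m1) / s))

print("ideal:", round(bayes_error(0.0, 2.0, 1.0, 0.0), 4),
      "noisy:", round(bayes_error(0.0, 2.0, 1.0, 0.5), 4))
print("extra error from +0.2 inaccuracy in the estimated mean:",
      round(threshold_error(0.0, 2.0, 1.0, 0.5, 2.2) - bayes_error(0.0, 2.0, 1.0, 0.5), 5))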

4 citations


Proceedings Article
18 Aug 1982
TL;DR: A technique for learning new words uses expectations generated by the context and an ISA hierarchy to guide the inference process.
Abstract: A technique for learning new words is discussed. The technique uses expectations generated by the context and an ISA hierarchy to guide the inference process. The learning process uses the context of several independent occurrences of the word to converge on its meaning.
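
A hypothetical illustration of the converging-meaning idea follows; it is not the paper's system. The hand-made ISA hierarchy, the example sentences' expectations, and the set-intersection step are all invented for this sketch.

# Each occurrence of the unknown word yields a contextual expectation, mapped through the
# ISA hierarchy into a set of candidate concepts; intersecting candidates over several
# independent occurrences narrows the word's meaning.
ISA = {                                   # child -> parent
    "poodle": "dog", "beagle": "dog", "dog": "animal",
    "cat": "animal", "animal": "thing", "hammer": "tool", "tool": "thing",
}

def ancestors(c):
    out = {c}
    while c in ISA:
        c = ISA[c]
        out.add(c)
    return out

def candidates_for(expectation):
    # all concepts whose ISA chain reaches the expected concept
    return {c for c in set(ISA) | set(ISA.values()) if expectation in ancestors(c)}

# expectations generated by the context of three sentences containing the unknown word
occurrences = ["animal", "dog", "animal"]

candidates = None
for expected in occurrences:
    cands = candidates_for(expected)
    candidates = cands if candidates is None else candidates & cands

print(sorted(candidates))   # the meaning has narrowed to the 'dog' subtree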

4 citations


Journal ArticleDOI
E. J. Brody
TL;DR: A computational procedure for the unsupervised discovery of probabilistic distributional classes, using random text presentation, is shown to converge stochastically to the correct classification.
Abstract: A probabilistic version of word distributional equivalence, which includes the usual notion of syntactic distributional classes as a special case, is formulated. A computational procedure for the unsupervised discovery of probabilistic distributional classes, using random text presentation, is shown to converge stochastically to the correct classification. The results of a simulation experiment are presented. A geometrical interpretation of the procedure, in which words are represented as vectors in an infinite-dimensional inner product space, is discussed.
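
A minimal sketch of the distributional idea follows; the paper's stochastic-approximation procedure and convergence proof are not reproduced. The toy grammar, the use of right-context counts only, and the squared-distance threshold standing in for the inner-product formulation are assumptions for illustration.

# Each word is represented by the empirical distribution of the words that follow it,
# accumulated from randomly generated text; words with close context vectors are
# treated as belonging to the same distributional class.
import random
from collections import defaultdict

random.seed(0)
dets, nouns, verbs = ["the", "a"], ["dog", "cat", "bird"], ["runs", "sleeps", "sings"]

counts = defaultdict(lambda: defaultdict(int))
for _ in range(3000):                       # random presentation of generated text
    s = [random.choice(dets), random.choice(nouns), random.choice(verbs), "."]
    for w, nxt in zip(s, s[1:]):
        counts[w][nxt] += 1                 # right-context distribution per word

vocab = sorted({w for c in counts.values() for w in c} | set(counts))

def vec(w):
    tot = sum(counts[w].values()) or 1
    return [counts[w][v] / tot for v in vocab]   # normalized context vector

def same_class(a, b, tol=0.1):
    return sum((p - q) ** 2 for p, q in zip(vec(a), vec(b))) < tol

for a, b in [("the", "a"), ("dog", "cat"), ("dog", "runs")]:
    print(a, b, "same class:", same_class(a, b))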