Author

H. Kazmierczak

Bio: H. Kazmierczak is an academic researcher from Karlsruhe University of Applied Sciences. The author has contributed to research in topics: Pattern recognition (psychology) & Feature (machine learning). The author has an h-index of 1 and has co-authored 1 publication receiving 36 citations.

Papers
Journal Article
TL;DR: The perceptual superiority of the human visual system over automata is outlined by comparing the properties of both systems, and existing classification methods are reviewed and discussed with regard to adaptive systems.
Abstract: The perceptual superiority of the human visual system over automata is outlined by comparing the properties of both systems. The most effective properties with regard to pattern recognition are internal adaptability and the ability to abstract. Both are well developed in human beings. A mechanical perceptor for complex pattern recognition must also have these capabilities. The use of adaptation for pattern recognition is discussed. The realization of these properties by machines is difficult, especially the development of an adequate feature generator which performs the internal adaptation and thus solves the problem of identification-criteria invariance of patterns. This is assumed to be the main task in pattern recognition research. External teaching processes may be accomplished by adaptive categorizers. The existing classification methods are outlined and discussed with regard to adaptive systems. Adaptive categorizers of the learning matrix type and the perceptron type are compared as to structure, linear classification performance, and training routine. It is assumed, however, that the somewhat passive external adaptation of categorizers must be supplemented by a more active adaptation by the system itself.
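The perceptron-type training routine the abstract refers to survives essentially unchanged in modern terms: weights are corrected only when a sample is misclassified, an instance of the "external teaching" the paper describes. A minimal sketch in Python (the toy data and learning rate are illustrative assumptions, not from the paper):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron training routine: correct weights only on misclassified samples.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns a weight vector w with the bias folded in as the last component.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:       # misclassified (or on the boundary)
                w += lr * yi * xi        # external "teaching" correction
    return w

# Linearly separable toy problem: class +1 lies above the line x0 + x1 = 1.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w = train_perceptron(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

On linearly separable data such as this, the update rule is guaranteed to converge to a separating hyperplane, which is exactly the "linear classification performance" regime in which the paper compares learning-matrix and perceptron categorizers.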

36 citations


Cited by
Book
01 Jan 1996
TL;DR: The Bayes error and Vapnik-Chervonenkis theory are applied as a guide for empirical classifier selection, on the basis of explicit specification and explicit enforcement of the maximum likelihood principle.
Abstract: Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index
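The nearest neighbor rules that occupy several of these chapters are simple enough to sketch directly: classify a query point by majority vote among its k closest training points. A minimal illustration in Python (the toy data and k are illustrative assumptions, not from the book):

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    """k-nearest neighbor rule: majority vote among the k closest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each sample
    nearest = np.argsort(dists)[:k]               # indices of the k nearest neighbors
    votes = y_train[nearest]
    # Majority vote over class labels (ties go to the smallest label).
    return int(np.bincount(votes).argmax())

# Two well-separated 2-D clusters labeled 0 and 1.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array([0, 0, 1, 1, 1])
label = knn_classify(X_train, y_train, np.array([0.95, 1.0]), k=3)
```

The book's theoretical interest is in what happens as the sample size grows: the asymptotic error of such rules can be bounded in terms of the Bayes error, the best achievable error rate for the problem.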

3,598 citations

Journal Article
TL;DR: The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, and applications of locally weighted learning.
Abstract: This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
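Locally weighted linear regression, the survey's focus, fits a separate weighted least-squares model for each query point, with weights that fall off with distance from the query. A minimal sketch with a Gaussian weighting function (the bandwidth and toy data are illustrative assumptions, not from the survey):

```python
import numpy as np

def lwr_predict(X, y, xq, bandwidth=0.5):
    """Predict y at query point xq by locally weighted least squares.

    Training points are weighted by a Gaussian kernel of the given
    bandwidth (the survey's "smoothing parameter") centered on xq.
    """
    d = np.linalg.norm(X - xq, axis=1)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))   # Gaussian weighting function
    Xb = np.hstack([X, np.ones((len(X), 1))])      # local linear model with bias
    W = np.diag(w)
    # Solve the normal equations of the weighted least-squares fit.
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return float(np.append(xq, 1.0) @ beta)

# Noiseless linear data y = 2x: a local linear fit recovers it exactly.
X = np.linspace(0, 1, 11).reshape(-1, 1)
y = 2 * X.ravel()
pred = lwr_predict(X, y, np.array([0.5]))
```

Because the model is refit per query from stored data, this is a form of lazy, memory-based learning: nothing is learned until a prediction is requested, which is why distance functions, bandwidths, and efficient implementation all matter in the survey.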

1,863 citations

Book Chapter
01 Jan 1983
TL;DR: The study and computer modeling of learning processes in their multiple manifestations constitutes the subject matter of machine learning.
Abstract: Learning is a many-faceted phenomenon. Learning processes include the acquisition of new declarative knowledge, the development of motor and cognitive skills through instruction or practice, the organization of new knowledge into general, effective representations, and the discovery of new facts and theories through observation and experimentation. Since the inception of the computer era, researchers have been striving to implant such capabilities in computers. Solving this problem has been, and remains, a most challenging and fascinating long-range goal in artificial intelligence (AI). The study and computer modeling of learning processes in their multiple manifestations constitutes the subject matter of machine learning.

383 citations

Journal Article
George Nagy
01 Jan 1968
TL;DR: This paper reviews statistical, adaptive, and heuristic techniques used in laboratory investigations of pattern recognition problems, including correlation methods, discriminant analysis, maximum likelihood decisions, minimax techniques, perceptron-like algorithms, feature extraction, preprocessing, clustering, and nonsupervised learning.
Abstract: This paper reviews statistical, adaptive, and heuristic techniques used in laboratory investigations of pattern recognition problems. The discussion includes correlation methods, discriminant analysis, maximum likelihood decisions, minimax techniques, perceptron-like algorithms, feature extraction, preprocessing, clustering, and nonsupervised learning. Two-dimensional distributions are used to illustrate the properties of the various procedures. Several experimental projects, representative of prospective applications, are also described.
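For the maximum likelihood decisions in the list above, the two-dimensional Gaussian case the review uses for illustration reduces to assigning a sample to whichever class-conditional density is largest at that point. A minimal sketch (the class means and covariances are illustrative assumptions, not from the paper):

```python
import numpy as np

def ml_decision(x, means, covs):
    """Maximum likelihood decision: pick the class whose Gaussian
    density N(mean, cov) is largest at x."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = x - mu
        inv = np.linalg.inv(cov)
        # Log-density up to the shared 2*pi normalization constant.
        logp = -0.5 * (diff @ inv @ diff) - 0.5 * np.log(np.linalg.det(cov))
        scores.append(logp)
    return int(np.argmax(scores))

# Two 2-D classes with unit covariance, separated along the first axis.
means = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
cls = ml_decision(np.array([2.5, 0.2]), means, covs)
```

With equal covariances, as here, the decision boundary is linear, which is why such two-dimensional examples also serve to illustrate discriminant analysis and the perceptron-like algorithms the review covers.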

317 citations

Journal Article
TL;DR: A memory-based local modeling approach (locally weighted regression) is used to represent a learned model of the task to be performed, and an exploration algorithm is developed that explicitly deals with prediction accuracy requirements during exploration.
Abstract: Issues involved in implementing robot learning for a challenging dynamic task are explored in this article, using a case study from robot juggling. We use a memory-based local modeling approach (locally weighted regression) to represent a learned model of the task to be performed. Statistical tests are given to examine the uncertainty of a model, to optimize its prediction quality, and to deal with noisy and corrupted data. We develop an exploration algorithm that explicitly deals with prediction accuracy requirements during exploration. Using all these ingredients in combination with methods from optimal control, our robot achieves fast real-time learning of the task within 40 to 100 trials.
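The exploration idea, trusting the learned local model only where it is well supported by data, can be caricatured by gating predictions on how much training data lies near the query. This is a deliberately simplified sketch: the density threshold below stands in for the paper's statistical tests and is an illustrative assumption, not the authors' method.

```python
import numpy as np

def confident_prediction(X, y, xq, radius=0.3, min_neighbors=3):
    """Return the local mean prediction at xq, or None when too few
    training points fall within `radius` (prediction deemed unreliable)."""
    d = np.linalg.norm(X - xq, axis=1)
    near = d < radius
    if near.sum() < min_neighbors:
        return None              # too little local data: a target for exploration
    return float(y[near].mean())

# 1-D memory of past experience: dense near 0, a lone point near 2.
X = np.array([[0.0], [0.1], [0.2], [2.0]])
y = np.array([0.0, 0.2, 0.4, 4.0])
ok = confident_prediction(X, y, np.array([0.1]))    # dense region: prediction returned
bad = confident_prediction(X, y, np.array([2.0]))   # sparse region: None, explore here
```

Regions where the gate returns None are exactly where an exploration algorithm of this kind should direct new trials, so that prediction accuracy requirements are met before the model is used for control.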

270 citations