Topic: Statistical learning theory
About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.
19 May 2009. TL;DR: A fuzzy empirical risk minimization principle is proposed, and the key theorem of statistical learning theory with fuzzy samples is proven.
Abstract: The key theorem of statistical learning theory provides the theoretical basis for applied research on support vector machines and related methods, making it one of the most important theorems in learning theory. By combining fuzzy sets with statistical learning theory, we generalize the key theorem: random samples are replaced with fuzzy samples, a fuzzy empirical risk minimization principle is proposed, and the key theorem of statistical learning theory with fuzzy samples is proven.
01 Jan 2014. TL;DR: In this article, the authors formalize a latent variable inference problem called supervised pattern discovery, which is to find sets of observations that belong to a single "pattern," and prove uniform risk bounds for both versions of the problem.
Abstract: This paper formalizes a latent variable inference problem we call supervised pattern discovery, the goal of which is to find sets of observations that belong to a single “pattern.” We discuss two versions of the problem and prove uniform risk bounds for both. In the first version, collections of patterns can be generated in an arbitrary manner and the data consist of multiple labeled collections. In the second version, the patterns are assumed to be generated independently by identically distributed processes. These processes are allowed to take an arbitrary form, so observations within a pattern are not in general independent of each other. The bounds for the second version of the problem are stated in terms of a new complexity measure, the quasi-Rademacher complexity.
01 Apr 2007. TL;DR: In this article, the authors establish a new mathematical foundation for singular learning machines using resolution of singularities, by which the likelihood function can be represented in a standard form, and prove the asymptotic behavior of the generalization errors of the maximum likelihood method and Bayes estimation.
Abstract: A learning machine is called singular if its Fisher information matrix is singular. Almost all learning machines used in information processing are singular; for example, layered neural networks, normal mixtures, binomial mixtures, Bayes networks, hidden Markov models, Boltzmann machines, stochastic context-free grammars, and reduced rank regressions are all singular. In singular learning machines, the likelihood function cannot be approximated by any quadratic form of the parameter. Moreover, neither the distribution of the maximum likelihood estimator nor the Bayes a posteriori distribution converges to the normal distribution, even as the number of training samples tends to infinity. Therefore, conventional statistical learning theory does not hold for singular learning machines. This paper establishes a new mathematical foundation for singular learning machines. We show that, by using resolution of singularities, the likelihood function can be represented in a standard form, from which we prove the asymptotic behavior of the generalization errors of the maximum likelihood method and Bayes estimation. This result provides a basis on which training algorithms for singular learning machines can be devised and optimized.
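The singularity condition the abstract turns on can be stated concretely. A minimal sketch of the standard definitions, with notation assumed here rather than taken from the paper:

```latex
% Fisher information matrix of a parametric model p(x \mid \theta):
I(\theta) = \mathbb{E}_{x \sim p(\cdot \mid \theta)}\!\left[
  \nabla_\theta \log p(x \mid \theta)\,
  \nabla_\theta \log p(x \mid \theta)^{\top} \right].

% The machine is singular at \theta when
\det I(\theta) = 0,

% so the log-likelihood admits no valid quadratic (Laplace)
% approximation around such \theta. Resolution of singularities is
% invoked to bring the relevant function into a normal-crossing
% standard form in local coordinates u = (u_1, \dots, u_d):
K(g(u)) = u_1^{2k_1} \cdots u_d^{2k_d}.
```

In this standard form the asymptotics of the generalization error can be read off from the exponents, which is the route the paper's analysis takes.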
01 Jan 2015. TL;DR: This chapter proposes a new method of face recognition based on PCA and SVM, which applies PCA to extract face features and uses SVM combined with cross-validation (CV) to classify face images.
Abstract: The support vector machine (SVM) is a kind of machine learning based on statistical learning theory. It shows unique advantages in small-sample, nonlinear, and high-dimensional pattern recognition. Principal component analysis (PCA) is a multivariate analysis technique for feature extraction. In this chapter, we propose a new method of face recognition based on PCA and SVM: PCA is applied to extract face features, and an SVM combined with cross-validation (CV) classifies the face images. CV is a good method for parameter optimization in SVM. We conduct the recognition experiment on the Cambridge ORL database. Compared with other methods, the face recognition accuracy reaches 89.5%, showing the method to be effective.
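The pipeline the chapter describes (PCA for feature extraction, an RBF-kernel SVM for classification, hyperparameters chosen by cross-validation) can be sketched with scikit-learn. This is an illustrative sketch, not the authors' code; the ORL face database requires a separate download, so sklearn's bundled digits dataset stands in here, and the component counts and parameter grid are assumptions.

```python
# PCA + SVM with cross-validated hyperparameter search, in the spirit
# of the chapter's method. Swap in face images (e.g. the ORL database)
# for the real experiment.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

pipe = Pipeline([
    ("pca", PCA(n_components=30, whiten=True)),  # feature extraction
    ("svm", SVC(kernel="rbf")),                  # classifier
])

# Cross-validation selects the SVM hyperparameters, as in the chapter.
search = GridSearchCV(pipe,
                      {"svm__C": [1, 10, 100],
                       "svm__gamma": ["scale", 0.01]},
                      cv=5)
search.fit(X_train, y_train)
acc = search.best_estimator_.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```

Whitening the PCA output puts all retained components on the same scale, which matters for the RBF kernel's distance computations.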