Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
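As a concrete illustration of the combinatorial parameter involved (an illustrative sketch, not code from the paper; the interval concept class and the brute-force search are my own choices), the snippet below verifies that closed intervals on the real line shatter two points but no three, so their VC dimension is 2:

```python
def interval(a, b):
    """Concept: the closed interval [a, b] on the real line."""
    return lambda x: a <= x <= b

def dichotomies(points, concepts):
    """All labelings of `points` realized by some concept in the class."""
    return {tuple(c(x) for x in points) for c in concepts}

def is_shattered(points, concepts):
    return len(dichotomies(points, concepts)) == 2 ** len(points)

# A finite grid of endpoints suffices to realize every labeling that
# any interval can realize on a small finite point set.
grid = [i / 2 for i in range(-2, 12)]
concepts = [interval(a, b) for a in grid for b in grid if a <= b]

print(is_shattered([1.0, 2.0], concepts))       # True: two points shattered
print(is_shattered([1.0, 2.0, 3.0], concepts))  # False: (1, 0, 1) unrealizable
# Hence the VC dimension of intervals is 2: finite, so by the paper's
# criterion this class is distribution-free learnable.
```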


Citations
Book ChapterDOI
Dan Feldman
01 Jan 2020
TL;DR: In optimization or machine learning problems, one is given a set of items, usually points in some metric space, and the goal is to minimize or maximize an objective function over some space of candidate solutions.
Abstract: In optimization or machine learning problems we are given a set of items, usually points in some metric space, and the goal is to minimize or maximize an objective function over some space of candidate solutions. For example, in clustering problems, the input is a set of points in some metric space, and a common goal is to compute a set of centers in some other space (points, lines) that will minimize the sum of distances to these points. In database queries, we may need to compute such a sum for a specific query set of k centers.
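A minimal sketch of the query described above (my own illustration; the points, the Euclidean metric, and the query centers are arbitrary assumptions): the cost of a query set of k centers is the sum over input points of the distance to the nearest center.

```python
import math

def query_cost(points, centers):
    """Sum over points of the Euclidean distance to the nearest center,
    the k-median-style objective described above."""
    return sum(min(math.dist(p, c) for c in centers) for p in points)

points = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0), (10.0, 9.0)]
query = [(0.5, 0.0), (9.5, 9.0)]   # a query set of k = 2 centers
print(query_cost(points, query))   # 2.0: each point is 0.5 from a center
```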

20 citations

Journal ArticleDOI
01 May 2010
TL;DR: This paper improves the accuracy of SVM by proposing a non-linear combination of multiple RBF kernels that yields more flexible kernel functions; the new kernel is proved to be a Mercer kernel.
Abstract: Kernel functions are used in support vector machines (SVM) to compute inner products in a higher-dimensional feature space. SVM classification performance depends on the chosen kernel. The radial basis function (RBF) kernel is a distance-based kernel that has been successfully applied in many tasks. This paper focuses on improving the accuracy of SVM by proposing a non-linear combination of multiple RBF kernels to obtain more flexible kernel functions. Multi-scale RBF kernels are weighted and combined. The proposed kernel allows better discrimination in the feature space. This new kernel is proved to be a Mercer kernel. Furthermore, evolutionary strategies (ESs) are used for adjusting the hyperparameters of SVM. Training accuracy, the bound on generalization error, and subset cross-validation on training accuracy are considered as objective functions in the evolutionary process. The experimental results show that the accuracy of multi-scale RBF kernels is better than that of a single RBF kernel. Moreover, subset cross-validation on training accuracy is the most suitable objective and yields good results on benchmark datasets.
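A hedged sketch of the kernel construction described above (the scales, weights, and data are illustrative assumptions; the paper's evolutionary hyperparameter tuning is not reproduced). A nonnegative weighted sum of RBF kernels is again a Mercer kernel, since nonnegative combinations of Mercer kernels are Mercer; the snippet checks this numerically via the eigenvalues of the Gram matrix:

```python
import numpy as np

def rbf(X, Y, gamma):
    """Single RBF kernel matrix: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def multi_scale_rbf(X, Y, gammas, weights):
    """Nonnegative weighted combination of RBF kernels at several scales."""
    assert all(w >= 0 for w in weights)
    return sum(w * rbf(X, Y, g) for g, w in zip(gammas, weights))

# Illustrative data and hyperparameters (assumptions, not the paper's).
X = np.random.default_rng(0).normal(size=(5, 3))
K = multi_scale_rbf(X, X, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))  # True: Gram matrix is PSD
```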

20 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...It was originally defined by Vapnik and Chervonenkis (Vapnik and Chervonenkis 1971; Blumer et al. 1989)....


Posted Content
TL;DR: The problem of PAC learning quantum processes can be solved with $O((\log|C| + \log(1/\delta))/\epsilon^2)$ samples when the outputs are pure states and $O(\log^3|C|(\log|C|+\log(1/\delta))/\epsilon^2)$ samples if the outputs can be mixed, and approximate state discrimination can be solved in polynomially many samples even when the concept class size $|C|$ is exponential in the number of qubits.
Abstract: We generalize the PAC (probably approximately correct) learning model to the quantum world by generalizing the concepts from classical functions to quantum processes, defining the problem of \emph{PAC learning quantum processes}, and studying its sample complexity. In the problem of PAC learning quantum processes, we want to learn an $\epsilon$-approximation of an unknown quantum process $c^*$ from a known finite concept class $C$ with probability $1-\delta$ using samples $\{(x_1,c^*(x_1)),(x_2,c^*(x_2)),\dots\}$, where $\{x_1,x_2,\dots\}$ are computational basis states sampled from an unknown distribution $D$ and $\{c^*(x_1),c^*(x_2),\dots\}$ are the (possibly mixed) quantum states output by $c^*$. The special case of PAC learning quantum processes under constant input reduces to a natural problem which we call approximate state discrimination, where we are given copies of an unknown quantum state $c^*$ from a known finite set $C$, and we want to learn with probability $1-\delta$ an $\epsilon$-approximation of $c^*$ with as few copies of $c^*$ as possible. We show that the problem of PAC learning quantum processes can be solved with $$O\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right)$$ samples when the outputs are pure states and $$O\left(\frac{\log^3|C|(\log|C|+\log(1/\delta))}{\epsilon^2}\right)$$ samples if the outputs can be mixed. Some implications of our results are that we can PAC-learn a polynomial-sized quantum circuit in polynomially many samples, and that approximate state discrimination can be solved in polynomially many samples even when the concept class size $|C|$ is exponential in the number of qubits, an exponential improvement over full state tomography.
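To make these bounds concrete (a sketch only: the $O(\cdot)$ suppresses constants and the base of the logarithm, so the functions below evaluate the bounds' leading expressions rather than exact sample counts):

```python
import math

def pure_state_bound(C_size, eps, delta):
    """Leading expression of the pure-state bound:
    (log|C| + log(1/delta)) / eps^2 (natural logs; base only shifts constants)."""
    return (math.log(C_size) + math.log(1 / delta)) / eps ** 2

def mixed_state_bound(C_size, eps, delta):
    """Leading expression of the mixed-state bound:
    log^3|C| * (log|C| + log(1/delta)) / eps^2."""
    lc = math.log(C_size)
    return lc ** 3 * (lc + math.log(1 / delta)) / eps ** 2

# Even for |C| = 2**50 (exponential in 50 qubits) the bounds stay polynomial:
print(round(pure_state_bound(2 ** 50, eps=0.1, delta=0.01)))   # ~3,926
print(round(mixed_state_bound(2 ** 50, eps=0.1, delta=0.01)))  # ~1.6e8
```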

20 citations

Journal ArticleDOI
TL;DR: The VC dimension of the set of linear-threshold functions that have nonzero weights on at most s of n inputs is bounded, and the problem of minimizing the number of nonzero input weights used is shown to be both NP-hard and difficult to approximate.

20 citations


Cites background or methods from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...In section 2 we quantify the benefits of minimizing the number of features used, by giving an upper bound on the VC dimension of the set of linear threshold functions with n inputs and at most s non-zero input weights....


  • ...We have VCdim(H) = n+1 (Blumer et al., 1989)....


  • ...The definitions and theorems of this subsection are from Blumer et al. (1989)....


  • ...For learning algorithms that use a hypothesis space H consistently, these bounds are linear in VCdim(H) (Blumer et al., 1989)....


  • ...…against the possibility of obtaining improved sample-complexity bounds by approximating MIN FS. Lemma 5.6 of Haussler (1988) and Theorem 3.2.1 of Blumer et al. (1989) can be used to establish improved upper bounds on sample complexity when the approximation factor grows slowly with the number…...

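The "linear in VCdim(H)" sample bounds quoted in these excerpts can be made concrete. One commonly quoted form of the sufficient sample size from Blumer et al. (1989) for a consistent learner is max((4/ε)·log₂(2/δ), (8d/ε)·log₂(13/ε)) with d = VCdim(H); the sketch below simply evaluates it (my transcription of the bound, so treat the constants as indicative):

```python
import math

def sample_bound(d, eps, delta):
    """Sufficient sample size for a consistent learner over a hypothesis
    space of VC dimension d, in the form usually quoted from Blumer et
    al. (1989). Note the linear growth in d."""
    return max(4 / eps * math.log2(2 / delta),
               8 * d / eps * math.log2(13 / eps))

# Linear threshold functions on n = 10 inputs: VCdim(H) = n + 1 = 11,
# as quoted above.
print(math.ceil(sample_bound(d=11, eps=0.1, delta=0.05)))  # 6180
```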

Journal ArticleDOI
TL;DR: The author uses an approach developed by Haussler (1989) to show that under an alternative, feature-based representation, recognition is probably approximately correct (PAC) learnable from a feasible number of examples in a distribution-free manner.
Abstract: Previous results on nonlearnability of visual concepts relied on the assumption that such concepts are represented as sets of pixels. The author uses an approach developed by Haussler (1989) to show that under an alternative, feature-based representation, recognition is probably approximately correct (PAC) learnable from a feasible number of examples in a distribution-free manner.

20 citations

References
Book
01 Jan 1979
TL;DR: This quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968

17,939 citations

Book
01 Jan 1973
TL;DR: This book provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations