Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
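
The paper's central quantity invites a concrete illustration. The sketch below (hypothetical, not from the paper) brute-forces the VC dimension of two toy concept classes over a small domain and evaluates one commonly quoted form of the paper's sufficient sample size for (ε, δ)-learning, m = max((4/ε) log2(2/δ), (8d/ε) log2(13/ε)); the concept classes and domain are invented for the example.

```python
# Hypothetical illustration: brute-force shattering check plus one commonly
# quoted form of the paper's sufficient sample size bound.
from itertools import combinations
from math import ceil, log2

def shatters(concepts, points):
    """True if the concept class realizes all 2^|points| labelings of points."""
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Largest k such that some k-subset of domain is shattered (brute force)."""
    dim = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, k)):
            dim = k
        else:
            break
    return dim

def sample_bound(d, eps, delta):
    """Sufficient sample size max((4/eps)log2(2/delta), (8d/eps)log2(13/eps))."""
    return ceil(max(4 / eps * log2(2 / delta), 8 * d / eps * log2(13 / eps)))

domain = list(range(1, 7))
# Thresholds {x : x >= t} have VC dimension 1; intervals [a, b] have 2.
thresholds = [frozenset(x for x in domain if x >= t) for t in domain]
intervals = [frozenset(x for x in domain if a <= x <= b)
             for a in domain for b in domain]

print(vc_dimension(thresholds, domain))   # -> 1
print(vc_dimension(intervals, domain))    # -> 2
print(sample_bound(2, 0.1, 0.05))         # samples sufficient for (0.1, 0.05)-PAC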


Citations
Journal ArticleDOI
TL;DR: Using Vapnik's empirical risk minimization method, it is shown that if the family F of fusion rules has finite capacity, then under a bounded-error condition and for a sufficiently large sample, an estimate f̂ ∈ F can be obtained such that P[I(f̂) − I(f*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1.
Abstract: Consider sensors S_i, i = 1, 2, …, N, where S_i outputs y^(i) ∈ R^d according to an unknown probability density p_i(y^(i) | x), corresponding to an object with parameter x ∈ R^d. For the system of N sensors S_1, S_2, …, S_N, a training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y^(1)_i, y^(2)_i, …, y^(N)_i) and y^(j)_i is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f: R^(Nd) → R^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^(1), y^(2), …, y^(N))]^2 p(y^(1), y^(2), …, y^(N) | x) p(x) dy^(1) dy^(2) … dy^(N) dx is minimized over a family F of fusion rules. Let f* ∈ F minimize I(f). In general, f* cannot be computed since the underlying probability densities are unknown. Using Vapnik's empirical risk minimization method, we show that if F has finite capacity, then under a bounded-error condition and for a sufficiently large sample, an estimate f̂ can be obtained such that P[I(f̂) − I(f*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1.
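
As a minimal sketch of the empirical risk minimization step above, assume (hypothetically) that F is the family of linear fusion rules f(y) = Wy; the empirical squared error over the l-sample is then minimized in closed form by least squares. The sensor model and noise level below are invented for the example.

```python
# Minimal ERM sketch for the fusion problem, assuming a linear family F.
import numpy as np

rng = np.random.default_rng(0)
N, d, l = 3, 2, 500                    # sensors, parameter dimension, sample size

x = rng.normal(size=(l, d))            # object parameters x_i
# Each sensor reports a noisy copy of x; y_i concatenates the N outputs.
y = np.concatenate([x + 0.3 * rng.normal(size=(l, d)) for _ in range(N)], axis=1)

# ERM over linear rules: W_hat = argmin_W sum_i ||x_i - y_i W||^2
W_hat, *_ = np.linalg.lstsq(y, x, rcond=None)

empirical_risk = np.mean(np.sum((x - y @ W_hat) ** 2, axis=1))
print(f"empirical risk of f_hat: {empirical_risk:.4f}")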

11 citations

Journal ArticleDOI
TL;DR: It is shown that if M is a set of points quasi-uniformly distributed on the unit sphere S^(d-1), then there is a weak ε-net W ⊆ R^d of size O(k_d log²(1/ε)) for M, where k_d is exponential in d.
Abstract: A weak ε-net for a set of points M is a set of points W (not necessarily in M) such that every convex set containing ε|M| points of M must contain at least one point of W. Weak ε-nets have applications in diverse areas such as computational geometry, learning theory, optimization, and statistics. Here we show that if M is a set of points quasi-uniformly distributed on the unit sphere S^(d-1), then there is a weak ε-net W ⊆ R^d of size O(k_d log²(1/ε)) for M, where k_d is exponential in d. A set of points M is quasi-uniformly distributed on S^(d-1) if, for any spherical cap C ⊆ S^(d-1) with Vol(C) ≥ c_1/|M|, we have c_2 Vol(C) ≤ |C ∩ M| ≤ c_3 Vol(C) for three positive constants c_1, c_2, and c_3.
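
The paper's construction is not reproduced here, but the weak ε-net property can be spot-checked numerically. The sketch below uses an invented candidate net W and tests only halfplanes on S^1 (halfplanes are convex, so passing is a necessary condition for W being a weak ε-net, not a proof).

```python
# Randomized spot-check of the weak eps-net property against halfplanes.
import numpy as np

rng = np.random.default_rng(1)
m, eps = 2000, 0.2
theta = rng.uniform(0.0, 2.0 * np.pi, size=m)
M = np.column_stack([np.cos(theta), np.sin(theta)])   # ~uniform points on S^1

# Invented candidate net: grid points inside the unit disk.
g = np.linspace(-1.0, 1.0, 17)
W = np.array([(a, b) for a in g for b in g if a * a + b * b <= 1.0])

def violated(u, c):
    """Halfplane {p : <p, u> >= c} holds >= eps*m points of M but misses W?"""
    heavy = np.count_nonzero(M @ u >= c) >= eps * m
    hit = np.any(W @ u >= c)
    return heavy and not hit

fails = 0
for _ in range(10_000):                    # random halfplanes as a spot check
    phi = rng.uniform(0.0, 2.0 * np.pi)
    u = np.array([np.cos(phi), np.sin(phi)])
    c = rng.uniform(-1.0, 1.0)
    fails += violated(u, c)
print(f"violated halfplanes found: {fails}")   # 0 is consistent with a weak net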

11 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Let us recall some basic facts about ε-nets; see, for example, the papers [12], [4], and [7], or the books [2], [16], and [18]....

    [...]

  • ...Later, Blumer et al. [4] showed that Haussler and Welzl's upper bound could be lowered....

    [...]

Proceedings Article
12 Dec 2011
TL;DR: It is proved that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive boosting stages needed for learning, even by a single stage.
Abstract: We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation. Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior-point methods from convex optimization and fast parallel algorithms for matrix computations. We show that this algorithm learns an unknown γ-margin halfspace over n dimensions using poly(n, 1/γ) processors and runs in time Õ(1/γ) + O(log n). In contrast, naive parallel algorithms that learn a γ-margin halfspace in time that depends polylogarithmically on n have Ω(1/γ²) runtime dependence on γ. Our main negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We give an information-theoretic proof that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive stages of boosting that are required.
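
For context, the Ω(1/γ²) benchmark mentioned above matches the classic Perceptron mistake bound. The sketch below runs Perceptron on synthetic γ-margin data; it is the sequential baseline the paper improves on, not the paper's interior-point algorithm, and all parameters are invented.

```python
# Baseline sketch: Perceptron learns a gamma-margin halfspace over unit-norm
# points with at most 1/gamma^2 updates (the sequential benchmark).
import numpy as np

rng = np.random.default_rng(2)
n, m, gamma = 20, 1000, 0.05

w_star = rng.normal(size=n); w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(m, n)); X /= np.linalg.norm(X, axis=1, keepdims=True)
X = X[np.abs(X @ w_star) >= gamma]          # keep only points with margin >= gamma
y = np.sign(X @ w_star)

w, updates = np.zeros(n), 0
changed = True
while changed:                               # cycle until no mistakes remain
    changed = False
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:
            w += yi * xi
            updates += 1
            changed = True
print(f"updates: {updates} (theory: at most 1/gamma^2 = {1 / gamma**2:.0f})")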

11 citations


Additional excerpts

  • ...The third line of the table, included for comparison, is simply a standard sequential algorithm for learning a halfspace based on polynomial-time linear programming executed on one processor (Blumer et al., 1989; Karmarkar, 1984)....

    [...]

  • The comparison table from the paper (algorithm, processors, running time):

        Algorithm                                Processors     Running time
        Naive parallelization of Perceptron      poly(n, 1/γ)   Õ(1/γ²) + O(log n)
        Naive parallelization of [27]            poly(n, 1/γ)   Õ(1/γ²) + O(log n)
        Polynomial-time linear programming [2]   1              poly(n, log(1/γ))
        This paper                               poly(n, 1/γ)   Õ(1/γ) + O(log n)

    [...]

Proceedings ArticleDOI
14 Jul 2019
TL;DR: A Fully Connected Cascade Neural Network, trained by an enhanced Levenberg-Marquardt (LM) algorithm, is incorporated for ensemble learning, and its superior performance over several baseline schemes is demonstrated.
Abstract: In this paper, a short-term load forecasting framework with long short-term memory (LSTM)-based ensemble learning is proposed. To fully exploit the correlation in data for accurate load forecasting, the data is first clustered and each cluster is used to train an LSTM model. Then a Fully Connected Cascade (FCC) Neural Network is incorporated for ensemble learning, which is solved by an enhanced Levenberg-Marquardt (LM) training algorithm. The proposed framework is tested with a public dataset, where its superior performance over several baseline schemes is demonstrated.
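
A minimal sketch of the cluster-then-ensemble pipeline described above, on synthetic data: sliding windows are clustered with k-means, one LSTM is pretrained per cluster, and a small fully connected network fuses their predictions. Model sizes are invented, and Adam stands in for the paper's enhanced Levenberg-Marquardt training.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
rng = np.random.default_rng(0)

T, L, k = 2000, 24, 3                         # series length, lookback, clusters
load = np.sin(np.arange(T) * 2 * np.pi / 24) + 0.1 * rng.normal(size=T)
X = np.stack([load[i:i + L] for i in range(T - L)])   # sliding windows
y = load[L:]                                          # next-step targets

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

xb = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (n, L, 1)
yb = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)   # (n, 1)

lstms = [nn.LSTM(input_size=1, hidden_size=16, batch_first=True) for _ in range(k)]
heads = [nn.Linear(16, 1) for _ in range(k)]
fuser = nn.Sequential(nn.Linear(k, 8), nn.Tanh(), nn.Linear(8, 1))  # FCC stand-in

def predict(c, inp):
    h, _ = lstms[c](inp)                      # (batch, L, 16)
    return heads[c](h[:, -1, :])              # last hidden state -> scalar

for c in range(k):                            # pretrain one LSTM per cluster
    idx = torch.as_tensor(np.flatnonzero(labels == c))
    opt = torch.optim.Adam([*lstms[c].parameters(), *heads[c].parameters()], lr=1e-2)
    for _ in range(30):
        opt.zero_grad()
        loss = nn.functional.mse_loss(predict(c, xb[idx]), yb[idx])
        loss.backward()
        opt.step()

with torch.no_grad():                         # frozen per-cluster predictions
    feats = torch.cat([predict(c, xb) for c in range(k)], dim=1)   # (n, k)

opt = torch.optim.Adam(fuser.parameters(), lr=1e-2)   # Adam in place of enhanced LM
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(fuser(feats), yb)
    loss.backward()
    opt.step()
print(f"ensemble train MSE: {loss.item():.4f}")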

11 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...The Vapnik-Chervonenkis dimension (or VC dimension) is introduced to measure the capacity of neural networks [35]....

    [...]

Proceedings ArticleDOI
07 Jul 1992
TL;DR: By suitably designing a linear threshold function of the outputs of the individual learners, it is shown that the composite system can be made better than the best of the individual learners, with guarantees expressed in terms of the VC dimension of the hypothesis space of the fuser.
Abstract: Given N learners, each capable of learning concepts (subsets) in the sense of Valiant (1985), we are interested in combining them using a single fuser. We consider two cases. In open fusion, the fuser is given the sample and the hypotheses of the individual learners; we show that a fusion rule can be obtained by formulating this problem as another learning problem, and we give sufficient conditions ensuring that the composite system is better than the best of the individual learners. In closed fusion, the fuser has access to neither the training sample nor the hypotheses of the individual learners. Using a linear threshold fusion function (of the outputs of the individual learners), we show that the composite system can be made better than the best of statistically independent learners.
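
As a quick illustration of the closed-fusion claim, the simulation below applies the simplest linear threshold function of learner outputs, an equal-weight majority vote, to N statistically independent learners with invented accuracies; the fused accuracy exceeds that of the best individual learner.

```python
import numpy as np

rng = np.random.default_rng(3)
accs = np.array([0.60, 0.65, 0.70, 0.75, 0.80])   # invented learner accuracies
trials = 100_000

# correct[i, j] = True when learner j is right on trial i (independence assumed)
correct = rng.random((trials, accs.size)) < accs
majority = correct.sum(axis=1) > accs.size / 2    # equal-weight linear threshold

print(f"best individual: {accs.max():.3f}")       # 0.800
print(f"majority fuser:  {majority.mean():.3f}")  # ~0.840 > best individual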

11 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations