Vladimir Vapnik
Researcher at Princeton University
Publications - 101
Citations - 170176
Vladimir Vapnik is an academic researcher from Princeton University. The author has contributed to research in topics: Support vector machine & Generalization. The author has an h-index of 59 and has co-authored 101 publications receiving 159214 citations. Previous affiliations of Vladimir Vapnik include Facebook & Columbia University.
Papers
Proceedings Article
A new learning paradigm: Learning using privileged information
Vladimir Vapnik, Akshay Vashist +1 more
TL;DR: Details of the new paradigm and corresponding algorithms are discussed, some new algorithms are introduced, several specific forms of privileged information are considered, and the superiority of the new learning paradigm over the classical learning paradigm when solving practical problems is demonstrated.
Proceedings Article
Parallel Support Vector Machines: The Cascade SVM
TL;DR: An algorithm for support vector machines (SVMs) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors; the data can be spread over multiple processors with minimal communication overhead, and the method requires far less memory.
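The cascade structure summarized above can be sketched in a few lines: the training set is split into chunks, each chunk is reduced to its support vectors, and the surviving vectors are merged pairwise and re-trained until a single set remains. In this sketch `toy_support_vectors` is a hypothetical stand-in for a real SVM solver (it just keeps the boundary point of each class in 1-D); only the cascade wiring reflects the paper's idea.

```python
def toy_support_vectors(points):
    """Placeholder 'solver' for labelled 1-D points (x, label).

    Keeps the innermost point of each class as a stand-in for the
    support vectors a real SVM solver would return."""
    pos = sorted(x for x, label in points if label > 0)
    neg = sorted(x for x, label in points if label < 0)
    keep = set()
    if pos:
        keep.add((pos[0], 1))    # smallest positive example
    if neg:
        keep.add((neg[-1], -1))  # largest negative example
    return sorted(keep)

def cascade(points, n_chunks=4):
    # Layer 0: train independently on disjoint chunks of the data.
    chunks = [points[i::n_chunks] for i in range(n_chunks)]
    layer = [toy_support_vectors(c) for c in chunks]
    # Later layers: merge pairs of support-vector sets and re-train,
    # halving the number of partial solutions on each pass.
    while len(layer) > 1:
        merged = []
        for i in range(0, len(layer), 2):
            right = layer[i + 1] if i + 1 < len(layer) else []
            merged.append(toy_support_vectors(layer[i] + right))
        layer = merged
    return layer[0]
```

Because non-support vectors are discarded at every level, each solver in the cascade sees only a small fraction of the data, which is what makes the scheme parallelizable and memory-frugal.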
Proceedings Article
Model Selection for Support Vector Machines
Olivier Chapelle, Vladimir Vapnik +1 more
TL;DR: New functionals for parameter (model) selection of support vector machines are introduced, based on the span of the support vectors and rescaling of the feature space. Using these functionals, one can predict both the best choice of model parameters and the relative quality of performance for any parameter value.
Journal Article
Measuring the VC-dimension of a learning machine
TL;DR: A method for measuring the capacity of learning machines is described, based on fitting a theoretically derived function to empirical measurements of the maximal difference between the error rates on two separate data sets of varying sizes.
Journal Article
Boosting and other ensemble methods
TL;DR: A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.
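The boosting behaviour described above is easy to reproduce on a toy problem. Below is a minimal AdaBoost sketch with 1-D decision stumps; the dataset and helper names are illustrative, not from the paper, but the re-weighting scheme is the standard boosting recipe.

```python
import math

def stump_predict(threshold, sign, x):
    """Predict +1/-1 from a single 1-D threshold rule."""
    return sign if x > threshold else -sign

def best_stump(xs, ys, weights):
    """Pick the (threshold, sign) pair with minimal weighted error."""
    best = None
    for t in xs:
        for sign in (+1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(t, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best  # (weighted_error, threshold, sign)

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, t, sign = best_stump(xs, ys, weights)
        err = max(err, 1e-10)          # avoid log(0) on a perfect stump
        if err >= 0.5:                 # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        # Re-weight: increase the weight of examples this stump got wrong.
        weights = [w * math.exp(-alpha * y * stump_predict(t, sign, x))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps in the ensemble."""
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1
```

Tracking the ensemble's error on held-out data as the training set grows is exactly the kind of experiment behind the paper's observation that training error approaches the test error rate from below.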