Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Abstract:
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data.
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
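The soft-margin construction summarized in the abstract can be sketched in a few lines. Below is a minimal pure-Python illustration using subgradient descent on the soft-margin objective (a Pegasos-style training loop, not the quadratic-programming method of the paper); the toy data, hyperparameters, and function names are assumptions for demonstration only.

```python
import random

def train_soft_margin_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Subgradient descent on the soft-margin SVM objective:
    lam/2 * ||w||^2 + (1/n) * sum(max(0, 1 - y_i * (w . x_i)))."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(n), n):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # Subgradient step: always shrink w; add the example if it
            # violates the margin (hinge loss is active).
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Linearly separable toy data; the constant last component acts as a bias.
X = [[2.0, 2.0, 1.0], [1.5, 2.5, 1.0], [-2.0, -1.0, 1.0], [-1.0, -2.5, 1.0]]
y = [1, 1, -1, -1]
w = train_soft_margin_svm(X, y)
print([predict(w, x) for x in X])
```

On separable data like this the learned hyperplane classifies all training points correctly; the soft-margin objective additionally tolerates violations when the data are not separable, which is the extension the paper introduces.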
Citations
Book
Semi-Supervised Learning
TL;DR: Semi-supervised learning (SSL) as discussed by the authors is the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given).
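That middle ground can be illustrated with self-training, one of the simplest semi-supervised strategies: fit on the labeled data, pseudo-label the unlabeled point the model is most confident about, and repeat. The sketch below uses a hypothetical nearest-centroid classifier and toy data; it is illustrative, not a method from the book.

```python
def centroid_predict(labeled, x):
    """Assign x to the class with the nearest centroid.
    Returns (label, gap), where a larger gap between the two nearest
    centroids means a more confident prediction."""
    groups = {}
    for xi, yi in labeled:
        groups.setdefault(yi, []).append(xi)
    dists = {}
    for lbl, pts in groups.items():
        c = [sum(v) / len(pts) for v in zip(*pts)]
        dists[lbl] = sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
    ordered = sorted(dists.items(), key=lambda kv: kv[1])
    gap = ordered[1][1] - ordered[0][1] if len(ordered) > 1 else float("inf")
    return ordered[0][0], gap

def self_train(labeled, unlabeled):
    """Repeatedly pseudo-label the unlabeled point the model is most
    confident about, then treat it as labeled from then on."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    while unlabeled:
        scored = [(i,) + centroid_predict(labeled, x) for i, x in enumerate(unlabeled)]
        i, lbl, _ = max(scored, key=lambda t: t[2])  # largest gap first
        labeled.append((unlabeled.pop(i), lbl))
    return labeled

# Two labeled points per class, plus unlabeled points inside each cluster.
labeled = [((0.0, 0.0), "a"), ((0.5, 0.0), "a"), ((5.0, 5.0), "b"), ((5.5, 5.0), "b")]
unlabeled = [(0.2, 0.4), (4.8, 5.2), (1.0, 0.5), (4.5, 4.5)]
result = self_train(labeled, unlabeled)
print({x: lbl for x, lbl in result})
```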
Journal Article
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
Bjoern H. Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, N. Porz, Johannes Slotboom, Roland Wiest, Levente Lanczi, Elizabeth R. Gerstner, Marc-André Weber, Tal Arbel, Brian B. Avants, Nicholas Ayache, Patricia Buendia, D. Louis Collins, Nicolas Cordier, Jason J. Corso, Antonio Criminisi, Tilak Das, Hervé Delingette, Çağatay Demiralp, Christopher R. Durst, Michel Dojat, Senan Doyle, Joana Festa, Florence Forbes, Ezequiel Geremia, Ben Glocker, Polina Golland, Xiaotao Guo, Andac Hamamci, Khan M. Iftekharuddin, Raj Jena, Nigel M. John, Ender Konukoglu, Danial Lashkari, José Mariz, Raphael Meier, Sérgio Pereira, Doina Precup, Stephen J. Price, Tammy Riklin Raviv, Syed M. S. Reza, Michael Ryan, Duygu Sarikaya, Lawrence H. Schwartz, Hoo-Chang Shin, Jamie Shotton, Carlos A. Silva, Nuno Sousa, Nagesh K. Subbanna, Gábor Székely, Thomas J. Taylor, Owen M. Thomas, Nicholas J. Tustison, Gozde Unal, Flor Vasseur, Max Wintermark, Dong Hye Ye, Liang Zhao, Binsheng Zhao, Darko Zikic, Marcel Prastawa, Mauricio Reyes, Koen Van Leemput, et al.
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) as mentioned in this paper was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
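Segmentation benchmarks such as BRATS commonly score algorithms with the Dice overlap coefficient between a predicted mask and the ground truth. A minimal sketch (the masks and helper name below are illustrative, not taken from the benchmark):

```python
def dice_score(pred, truth):
    """Dice overlap between two binary masks, given as flat 0/1 lists:
    2 * |P intersect T| / (|P| + |T|). Returns 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
```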
Book
Applied Predictive Modeling
Max Kuhn, Kjell Johnson
TL;DR: This research presents a novel and scalable approach called "Smartfitting" that automates the labor-intensive, time-consuming, and therefore expensive process of designing and implementing statistical models for regression.
Book
Prediction, learning, and games
Nicolò Cesa-Bianchi, Gábor Lugosi
TL;DR: In this paper, the authors provide a comprehensive treatment of the problem of predicting individual sequences using expert advice, a general framework within which many related problems can be cast and discussed, such as repeated game playing, adaptive data compression, sequential investment in the stock market, sequential pattern analysis, and several other problems.
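The expert-advice framework is often introduced through the exponentially weighted average forecaster (the Hedge algorithm): keep one weight per expert and multiply it down by exp(-eta * loss) each round, so weight concentrates on the experts with the smallest cumulative loss. A minimal sketch; the learning rate and toy losses are illustrative assumptions.

```python
import math

def hedge(expert_losses, eta=0.5):
    """Exponentially weighted average forecaster.
    expert_losses: one row of per-expert losses per round.
    Returns the final normalized weights over experts."""
    n_experts = len(expert_losses[0])
    w = [1.0] * n_experts
    for losses in expert_losses:
        # Multiplicative update: experts with small loss keep their weight.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 incurs loss 1 every round; expert 1 incurs loss 0 every round.
rounds = [[1.0, 0.0]] * 10
weights = hedge(rounds)
print(weights)  # almost all weight on expert 1
```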
Journal Article
An introduction to kernel-based learning algorithms
TL;DR: This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods.
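The common thread in these methods is the kernel trick: a kernel evaluates an inner product in a high-dimensional feature space without ever constructing that space. For the degree-2 polynomial kernel this can be verified directly against an explicit feature map (the helper names and test vectors below are illustrative assumptions):

```python
def poly_kernel(x, z, degree=2):
    """Polynomial kernel k(x, z) = (x . z + 1)^degree."""
    return (sum(a * b for a, b in zip(x, z)) + 1.0) ** degree

def phi2(x):
    """Explicit feature map for 2-D input matching (x . z + 1)^2:
    [1, sqrt(2) x1, sqrt(2) x2, x1^2, x2^2, sqrt(2) x1 x2]."""
    r2 = 2 ** 0.5
    x1, x2 = x
    return [1.0, r2 * x1, r2 * x2, x1 * x1, x2 * x2, r2 * x1 * x2]

x, z = [1.0, 2.0], [3.0, -1.0]
lhs = poly_kernel(x, z)                             # computed in input space
rhs = sum(a * b for a, b in zip(phi2(x), phi2(z)))  # computed in feature space
print(lhs, rhs)  # both equal 4.0 here, since x . z = 1 and (1 + 1)^2 = 4
```

The same identity is what lets a support-vector machine train a linear classifier in the 6-dimensional feature space while only ever touching 2-dimensional inputs.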
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
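The weight-adjustment rule can be sketched for a single sigmoid unit trained by gradient descent on squared error; the learning rate, epoch count, and the OR task below are illustrative assumptions, not the paper's experiments.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_neuron(X, y, lr=1.0, epochs=2000):
    """Gradient descent on E = (t - out)^2 / 2 for one sigmoid unit.
    Per-example update (the delta rule):
    w_j += lr * (t - out) * out * (1 - out) * x_j."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            delta = (ti - out) * out * (1.0 - out)  # -dE/ds at this unit
            w = [wj + lr * delta * xj for wj, xj in zip(w, xi)]
    return w

# Learn logical OR; the constant 1.0 input plays the role of a bias weight.
X = [[0, 0, 1.0], [0, 1, 1.0], [1, 0, 1.0], [1, 1, 1.0]]
y = [0, 1, 1, 1]
w = train_neuron(X, y)
outs = [round(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))) for xi in X]
print(outs)
```

In a multi-layer network, back-propagation computes the same kind of delta for hidden units by propagating output-layer errors backwards through the weights.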
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Proceedings Article
A training algorithm for optimal margin classifiers
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including perceptrons, polynomials, and radial basis functions.
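The quantity this algorithm maximizes is the geometric margin: the smallest distance from any training point to the decision boundary. It can be computed directly for a given hyperplane (the separator and toy data below are illustrative assumptions):

```python
def geometric_margin(w, b, X, y):
    """Smallest signed distance from any training point to the hyperplane
    w . x + b = 0; positive iff every point is on its correct side.
    The optimal-margin classifier picks (w, b) to maximize this value."""
    norm = sum(wj * wj for wj in w) ** 0.5
    return min(yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) / norm
               for xi, yi in zip(X, y))

# The separator x1 = 0 (w = [1, 0], b = 0); every point is at distance >= 1.
X = [[1.0, 0.0], [2.0, 1.0], [-1.0, 0.5], [-3.0, -1.0]]
y = [1, 1, -1, -1]
print(geometric_margin([1.0, 0.0], 0.0, X, y))  # 1.0
```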
Book
Methods of Mathematical Physics
Richard Courant, David Hilbert
TL;DR: In this book, the authors develop the algebra of linear transformations and quadratic forms and apply it to eigenvalue problems.