Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
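The idea sketched in the abstract — a non-linear map to a high-dimensional feature space, a linear decision surface there, and a soft margin for non-separable data — can be illustrated with a modern library. Below is a minimal sketch using scikit-learn's `SVC`, which is an assumption on my part: the paper predates this library and describes its own training algorithm. The polynomial kernel plays the role of the paper's "polynomial input transformations", and `C` is the soft-margin penalty introduced for non-separable training data.

```python
# Minimal soft-margin SVM sketch with a polynomial kernel.
# scikit-learn is an assumption here; the paper's own algorithm
# predates it, but SVC implements the same soft-margin formulation.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# A two-group problem that is not linearly separable in input space.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# degree=3 polynomial kernel: implicit non-linear map to a
# high-dimensional feature space. C trades margin width against
# training errors (the paper's extension to non-separable data).
clf = SVC(kernel="poly", degree=3, C=1.0)
clf.fit(X, y)

# Only the support vectors determine the decision surface.
print("number of support vectors:", clf.support_vectors_.shape[0])
print("training accuracy:", clf.score(X, y))
```

Lowering `C` widens the margin and tolerates more training errors; raising it approaches the hard-margin (separable) case the earlier work covered.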
Citations
Journal Article
Protein function prediction via graph kernels
Karsten M. Borgwardt, Cheng Soon Ong, Stefan Schönauer, S. V. N. Vishwanathan, Alexander J. Smola, Hans-Peter Kriegel
TL;DR: A new approach that combines sequential, structural and chemical information into one graph model of proteins, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets.
Posted Content
TMVA - Toolkit for Multivariate Data Analysis
Andreas Hoecker, P. Speckmayer, J. Stelzer, Jan Therhaag, E. von Toerne, H. Voss, Moritz Backes, Tancredi Carli, O. Cohen, A. Christov, D. Dannheim, K. Danielowski, Sophie Henrot-Versille, M. Jachowski, K. Kraszewski, Attila Krasznahorkay, M. Kruk, Y. Mahalalel, Rustem Ospanov, X. Prudent, A. Robert, D. Schouten, F. Tegenfeldt, A. Voigt, K. Voss, Marcin Wladyslaw Wolter, Andrzej Zemla, et al.
TL;DR: The TMVA toolkit, as discussed by the authors, provides tools for multivariate classification and has been extended to the regression of a real-valued target vector, using the same user interfaces as classification.
Journal Article
Applications of Machine Learning in Cancer Prediction and Prognosis
Joseph A. Cruz, David S. Wishart
TL;DR: A broad survey is conducted of the different types of machine learning methods being used, the types of data being integrated, and the performance of these methods in cancer prediction and prognosis; trends identified include a growing dependence on protein biomarkers and microarray data, a strong bias towards applications in prostate and breast cancer, and a heavy reliance on "older" technologies.
Journal Article
Credit rating analysis with support vector machines and neural networks: a market comparative study
TL;DR: A relatively new machine learning technique, support vector machines (SVM), is introduced to the problem in an attempt to provide a model with better explanatory power; the relative importance of the input financial variables is also obtained from the neural network models.
Journal Article
Multivariate Analysis in Metabolomics.
Bradley Worley, Robert Powers
TL;DR: The use of multivariate analysis for metabolomics is discussed, as well as common pitfalls and misconceptions, and spectral features contributing most to variation or separation are identified for further analysis.
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Proceedings Article
A training algorithm for optimal margin classifiers
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Book
Methods of Mathematical Physics
Richard Courant, David Hilbert
TL;DR: In this book, the authors develop the algebra of linear transformations and quadratic forms and apply it to eigenvalue and variational problems.