Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
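The typical LIBSVM workflow the abstract describes (train a model, then predict) can be sketched with scikit-learn, whose SVC classifier is built on LIBSVM. This is a minimal illustrative example, not code from the article; the dataset and hyperparameter values are placeholders.

```python
# Minimal SVM train/predict sketch. scikit-learn's SVC wraps LIBSVM,
# so this mirrors the library's basic workflow. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF kernel with C and gamma, the two hyperparameters LIBSVM's
# documentation recommends tuning via cross-validation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

For multiclass problems, LIBSVM (and hence SVC) handles the extension internally with a one-vs-one decomposition, one of the issues the article covers.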
Citations
Book ChapterDOI
An Efficient Dense and Scale-Invariant Spatio-Temporal Interest Point Detector
TL;DR: In this article, the Hessian scale-invariant saliency measure is used to detect spatio-temporal interest points that are at the same time scale invariant and densely cover the video content.
Journal ArticleDOI
Load forecasting using support vector Machines: a study on EUNITE competition 2001
TL;DR: How SVM, a new learning technique, is successfully applied to load forecasting is discussed in detail. Important conclusions are that temperature might not be useful in such a mid-term load forecasting problem and that introducing time-series concepts may improve the forecast.
Proceedings ArticleDOI
Reliable Crowdsourcing and Deep Locality-Preserving Learning for Expression Recognition in the Wild
Shan Li, Weihong Deng, Junping Du
TL;DR: A new DLP-CNN (Deep Locality-Preserving CNN) method, which aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatters, is proposed.
Journal ArticleDOI
Predicting subcellular localization of proteins for Gram-negative bacteria by support vector machines based on n-peptide compositions
TL;DR: This method uses support vector machines trained by multiple feature vectors based on n-peptide compositions to predict subcellular localization for Gram-negative bacteria, and achieves the highest prediction rate ever reported.
Journal ArticleDOI
OP-ELM: Optimally Pruned Extreme Learning Machine
TL;DR: The proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM, and is still able to maintain an accuracy that is comparable to the performance of the SVM.
References
Journal ArticleDOI
Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Statistical learning theory
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Proceedings ArticleDOI
A training algorithm for optimal margin classifiers
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
A Practical Guide to Support Vector Classification
TL;DR: A simple procedure is proposed, which usually gives reasonable results and is suitable for beginners who are not familiar with SVM.
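The simple beginner procedure this guide proposes (scale the features, use an RBF kernel, and grid-search C and gamma with cross-validation) can be sketched as follows. This is an illustrative sketch using scikit-learn's LIBSVM-backed SVC; the dataset and parameter grid are placeholders, not values from the guide.

```python
# Sketch of the guide's recommended recipe: scale features, then
# cross-validated grid search over C and gamma for an RBF-kernel SVM.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Illustrative grid; the guide suggests searching exponentially
# growing sequences of C and gamma.
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print("best params:", search.best_params_)
print(f"cross-validation accuracy: {search.best_score_:.2f}")
```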
Journal ArticleDOI
A comparison of methods for multiclass support vector machines
Chih-Wei Hsu, Chih-Jen Lin
TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods that consider all data at once generally need fewer support vectors.