Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin
TLDR
Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract
LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
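Among the implementation details the article covers is multiclass classification: LIBSVM uses the one-vs-one strategy, training a binary classifier for every pair of classes and predicting by majority vote. A minimal sketch of the voting step in Python (the distance-based toy classifiers below are hypothetical stand-ins for trained binary SVMs, not LIBSVM's actual decision functions):

```python
from itertools import combinations

def one_vs_one_predict(x, classifiers, classes):
    """Predict a class by majority vote over all pairwise classifiers.

    classifiers[(a, b)](x) returns True if x looks more like class a than b.
    """
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        winner = a if classifiers[(a, b)](x) else b
        votes[winner] += 1
    # Ties are broken by class order, as in simple implementations.
    return max(classes, key=lambda c: votes[c])

# Toy 1-D example: three classes centred at 0, 5, and 10.
centres = {0: 0.0, 1: 5.0, 2: 10.0}
clfs = {(a, b): (lambda x, a=a, b=b: abs(x - centres[a]) < abs(x - centres[b]))
        for a, b in combinations(centres, 2)}
print(one_vs_one_predict(6.1, clfs, sorted(centres)))  # class 1
```

With k classes this scheme trains k(k-1)/2 classifiers, but each on only two classes' worth of data, which is one reason it scales well in practice.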
Citations
Journal ArticleDOI
Speech Emotion Recognition Using Fourier Parameters
TL;DR: Experimental results show that the proposed Fourier parameter (FP) features are effective in identifying various emotional states in speech signals and improve the recognition rates over the methods using Mel frequency cepstral coefficient features.
Proceedings Article
Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison
TL;DR: It is shown that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nyström method can yield an impressively better generalization error bound than the random Fourier features based approach.
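The random Fourier features approach compared here approximates a shift-invariant kernel by the inner product of randomized cosine features. A minimal pure-Python sketch for the RBF kernel (the bandwidth sigma, the sample points, and the feature count D are arbitrary illustrative choices):

```python
import math, random

def rff_map(x, ws, bs):
    """Map x to D random Fourier features whose inner products
    approximate the RBF kernel exp(-||x - y||^2 / (2 sigma^2))."""
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(sum(wj * xj for wj, xj in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

def approx_rbf(x, y, ws, bs):
    zx, zy = rff_map(x, ws, bs), rff_map(y, ws, bs)
    return sum(a * b for a, b in zip(zx, zy))

random.seed(0)
sigma, d, D = 1.0, 2, 4000
# Frequencies drawn from N(0, 1/sigma^2) per dimension, phases from [0, 2*pi).
ws = [[random.gauss(0.0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = [0.3, -0.2], [0.1, 0.5]
exact = math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))
print(exact, approx_rbf(x, y, ws, bs))  # the two values should be close
```

The Monte Carlo error shrinks like 1/sqrt(D), which is exactly the data-independent rate the paper contrasts with the spectrum-dependent behaviour of the Nyström method.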
Journal ArticleDOI
Universal Approximation Capability of Broad Learning System and Its Structural Variations
TL;DR: A mathematical proof of the universal approximation property of BLS is provided and the framework of several BLS variants with their mathematical modeling is given, which include cascade, recurrent, and broad–deep combination structures.
Proceedings ArticleDOI
Tagommenders: connecting users to items through tags
Shilad Sen, Jesse Vig, John Riedl
TL;DR: Algorithms combining tags with recommenders may deliver both the automation inherent in recommenders, and the flexibility and conceptual comprehensibility inherent in tagging systems, and they may lead to flexible recommender systems that leverage the characteristics of items users find most important.
Journal ArticleDOI
Advanced mathematical methods of SOC and SOH estimation for lithium-ion batteries
TL;DR: In this paper, a dual filter consisting of an interaction of a standard Kalman filter and an Unscented Kalman Filter is proposed to predict internal battery states and a support vector machine (SVM) algorithm is implemented and coupled with the dual filter.
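The dual-filter design above is specific to that paper's battery model, but the Kalman update it builds on is standard. A minimal scalar Kalman filter sketch (the noise variances and measurements are made up for illustration; this is not the paper's dual filter):

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: estimate a nearly constant state
    (e.g. a slowly varying battery state of charge) from noisy readings.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: state assumed (nearly) constant
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the measurement residual
        p *= (1 - k)
        estimates.append(x)
    return estimates

noisy = [0.9, 1.2, 0.8, 1.1, 1.0, 0.95, 1.05]
est = kalman_1d(noisy)
print(est[-1])  # converges toward the true value near 1.0
```

The Unscented Kalman Filter used in the paper generalizes this update to nonlinear models by propagating a set of sigma points instead of a single mean and variance.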
References
Journal ArticleDOI
Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Statistical learning theory
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.
Proceedings ArticleDOI
A training algorithm for optimal margin classifiers
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
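The original algorithm solves a quadratic program in the dual; as an illustration of the same margin-maximizing objective, here is a Pegasos-style subgradient sketch on the regularized hinge loss (the toy data and hyperparameters are arbitrary, and this is not the Boser-Guyon-Vapnik procedure itself):

```python
import random

def train_linear_svm(points, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent on the regularized
    hinge loss, an illustrative way to find a large-margin linear separator."""
    rng = random.Random(seed)
    d = len(points[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(points)), len(points)):
            t += 1
            eta = 1.0 / (lam * t)
            x, y = points[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            w = [wj - eta * lam * wj for wj in w]      # shrink toward small ||w||
            if margin < 1:                              # hinge-loss subgradient step
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

# Linearly separable toy data in 2-D, labels in {-1, +1}.
pts = [(0.0, 0.0), (1.0, 0.5), (3.0, 3.0), (4.0, 3.5)]
ys = [-1, -1, 1, 1]
w, b = train_linear_svm(pts, ys)
preds = [1 if sum(wj * xj for wj, xj in zip(w, p)) + b > 0 else -1 for p in pts]
print(preds)
```

Minimizing the hinge loss plus the (lam/2)||w||^2 regularizer is equivalent to the soft-margin SVM objective, so the recovered separator approaches the maximum-margin boundary on separable data.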
A Practical Guide to Support Vector Classification
TL;DR: A simple procedure is proposed, which usually gives reasonable results and is suitable for beginners who are not familiar with SVM.
Journal ArticleDOI
A comparison of methods for multiclass support vector machines
Chih-Wei Hsu, Chih-Jen Lin
TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods that consider all data at once generally need fewer support vectors.