Journal ArticleDOI

LIBSVM: A library for support vector machines

TLDR
Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract
LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
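To illustrate the kind of usage the abstract describes (multiclass classification with probability estimates), here is a minimal sketch in Python using scikit-learn's SVC, which wraps LIBSVM internally; the iris data, the scaling step, and the parameter values are illustrative assumptions, not details taken from the article.

# A minimal sketch (assumed example, not from the article): multiclass SVM
# classification with probability estimates via scikit-learn's SVC, which
# wraps LIBSVM internally.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                        # three-class toy problem
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)                   # scale features before training
clf = SVC(kernel="rbf", C=1.0, gamma="scale",            # RBF kernel; illustrative parameters
          probability=True)                              # enable probability estimates
clf.fit(scaler.transform(X_train), y_train)

print(clf.predict(scaler.transform(X_test[:5])))         # multiclass predictions
print(clf.predict_proba(scaler.transform(X_test[:5])))   # per-class probability estimates

SVC handles multiclass problems with the same one-vs-one decomposition LIBSVM uses, so the probability output has one column per class.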



Citations
Journal Article

An improved GLMNET for L1-regularized logistic regression

TL;DR: In this paper, an improved GLMNET is proposed to address some theoretical and implementation issues; it is shown to be more efficient than CDN for L1-regularized logistic regression.
Journal ArticleDOI

Urban traffic congestion estimation and prediction based on floating car trajectory data

TL;DR: A novel approach to estimating and predicting urban traffic congestion from floating car trajectory data, using a new fuzzy comprehensive evaluation method in which the weights of multiple indexes are assigned according to the traffic flows.
Journal ArticleDOI

Single-molecule spectroscopy of amino acids and peptides by recognition tunnelling

TL;DR: It is shown that single amino acids can be identified by trapping the molecules between two electrodes that are coated with a layer of recognition molecules and measuring the electron tunneling current across the junction, and a machine-learning algorithm is used to distinguish between the sets of electronic ‘fingerprints’ associated with each binding motif.
ReportDOI

A probabilistic model of redundancy in information extraction

TL;DR: A combinatorial "balls-and-urns" model is introduced that computes the impact of sample size, redundancy, and corroboration from multiple distinct extraction rules on the probability that an extraction is correct.
Journal ArticleDOI

Short-term electricity demand forecasting with MARS, SVR and ARIMA models using aggregated demand data in Queensland, Australia

TL;DR: Data-driven techniques for short-term electricity demand forecasting in Queensland, Australia, based on the Multivariate Adaptive Regression Spline, Support Vector Regression, and Autoregressive Integrated Moving Average models, are adopted and shown to be useful scientific tools for further exploration of real-time electricity demand data forecasting.
References
Journal ArticleDOI

Support-Vector Networks

TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.
Proceedings ArticleDOI

A training algorithm for optimal margin classifiers

TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.

A Practical Guide to Support Vector Classification

TL;DR: A simple procedure is proposed, which usually gives reasonable results and is suitable for beginners who are not familiar with SVM.
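The procedure this guide recommends (scale the data, use an RBF kernel, and pick C and gamma by cross-validated grid search) can be sketched as follows; the wine dataset and the exact grid bounds are assumptions chosen only for illustration.

# A hedged sketch of the beginner procedure described above: scaling, RBF
# kernel, and cross-validated grid search over C and gamma. The dataset and
# grid bounds are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {
    "svc__C": 2.0 ** np.arange(-5, 16, 2),      # exponentially spaced C candidates
    "svc__gamma": 2.0 ** np.arange(-15, 4, 2),  # exponentially spaced gamma candidates
}
search = GridSearchCV(pipe, param_grid, cv=5)   # 5-fold cross-validation picks the pair
search.fit(X, y)
print(search.best_params_, search.best_score_)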
Journal ArticleDOI

A comparison of methods for multiclass support vector machines

TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods that consider all data at once generally need fewer support vectors.