Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.


Citations
Journal ArticleDOI
TL;DR: Comparative studies on five large IQA databases show that the proposed BPRI model is comparable to the state-of-the-art opinion-aware and opinion-unaware (OU) BIQA models, and not only performs well on natural scene images but is also applicable to screen content images.
Abstract: Traditional full-reference image quality assessment (IQA) metrics generally predict the quality of the distorted image by measuring its deviation from a perfect-quality image called the reference image. When the reference image is not fully available, reduced-reference and no-reference IQA metrics may still be able to derive some characteristics of perfect-quality images and then measure the distorted image's deviation from these characteristics. In this paper, contrary to conventional IQA metrics, we utilize a new “reference” called the pseudo-reference image (PRI) and propose a PRI-based blind IQA (BIQA) framework. Different from a traditional reference image, which is assumed to have perfect quality, the PRI is generated from the distorted image and is assumed to suffer from the most severe distortion for a given application. Based on the PRI-based BIQA framework, we develop distortion-specific metrics to estimate blockiness, sharpness, and noisiness. The PRI-based metrics calculate the similarity between the distorted image's and the PRI's structures. An image suffering from more severe distortion has a higher degree of similarity with the corresponding PRI. Through a two-stage quality regression after a distortion identification framework, we then integrate the PRI-based distortion-specific metrics into a general-purpose BIQA method named the blind PRI-based (BPRI) metric. The BPRI metric is opinion-unaware (OU) and almost training-free except for the distortion identification process. Comparative studies on five large IQA databases show that the proposed BPRI model is comparable to the state-of-the-art opinion-aware and OU-BIQA models. Furthermore, BPRI not only performs well on natural scene images but is also applicable to screen content images. The MATLAB source code of BPRI and other PRI-based distortion-specific metrics will be publicly available.

223 citations


Cites methods from "LIBSVM: A library for support vecto..."

  • ...LIBSVM [50] is adopted to implement SVM with a radial basis function (RBF) kernel....

    [...]
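The excerpt above adopts LIBSVM with a radial basis function (RBF) kernel. As a minimal illustrative sketch (plain Python, not LIBSVM's actual C implementation), the RBF kernel value K(x, y) = exp(−γ‖x − y‖²), LIBSVM's `-t 2` kernel type, can be computed as:

```python
import math

def rbf_kernel(x, y, gamma):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2),
    the '-t 2' kernel in LIBSVM's notation."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical points give K = 1; distant points decay toward 0.
print(rbf_kernel([0.0, 0.0], [0.0, 0.0], gamma=0.5))  # -> 1.0
print(rbf_kernel([0.0, 0.0], [1.0, 1.0], gamma=0.5))  # exp(-1.0)
```

The bandwidth γ here corresponds to LIBSVM's `-g` option; larger γ makes the kernel more local.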

Journal ArticleDOI
TL;DR: Remaining useful life values for aircraft engines have been successfully predicted using the hybrid PSO–SVM-based model from the remaining measured parameters (input variables).

223 citations

Journal Article
TL;DR: The traditional active learning framework is extended to include feedback on features in addition to labeling instances, and an algorithm that interleaves labeling features and documents which significantly accelerates standard active learning in the authors' simulation experiments is devised.
Abstract: We extend the traditional active learning framework to include feedback on features in addition to labeling instances, and we execute a careful study of the effects of feature selection and human feedback on features in the setting of text categorization. Our experiments on a variety of categorization tasks indicate that there is significant potential in improving classifier performance by feature re-weighting, beyond that achieved via membership queries alone (traditional active learning) if we have access to an oracle that can point to the important (most predictive) features. Our experiments on human subjects indicate that human feedback on feature relevance can identify a sufficient proportion of the most relevant features (over 50% in our experiments). We find that on average, labeling a feature takes much less time than labeling a document. We devise an algorithm that interleaves labeling features and documents which significantly accelerates standard active learning in our simulation experiments. Feature feedback can complement traditional active learning in applications such as news filtering, e-mail classification, and personalization, where the human teacher can have significant knowledge on the relevance of features.

223 citations

Journal ArticleDOI
TL;DR: This paper introduces multiple pseudo reference images (MPRIs) by further degrading the distorted image in several ways and to certain degrees, and then compares the similarities between the distorted images and the MPRIs, and uses the full-reference IQA framework to compute the quality.
Abstract: Traditional blind image quality assessment (IQA) measures generally predict quality directly from a sole distorted image. In this paper, we first introduce multiple pseudo reference images (MPRIs) by further degrading the distorted image in several ways and to certain degrees, and then compare the similarities between the distorted image and the MPRIs. Via such distortion aggravation, we can have some references to compare with, i.e., the MPRIs, and utilize the full-reference IQA framework to compute the quality. Specifically, we apply four types and five levels of distortion aggravation to deal with the commonly encountered distortions. Local binary pattern features are extracted to describe the similarities between the distorted image and the MPRIs. The similarity scores are then utilized to estimate the overall quality. Greater similarity to a specific pseudo reference image (PRI) indicates quality closer to that of the PRI. Owing to the availability of the created multiple PRIs, we can reduce the influence of image content and infer the image quality more accurately and consistently. Validation is conducted on four mainstream natural scene image and screen content image quality assessment databases, and the proposed method is comparable to or outperforms the state-of-the-art blind IQA measures. The MATLAB source code of the proposed measure will be publicly available.
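The abstract above extracts local binary pattern (LBP) features to describe similarities. As an illustration of the basic 8-neighbor LBP code for a single 3×3 patch (the exact LBP variant used in the paper is an assumption here):

```python
def lbp_code(patch):
    """Compute the basic 8-neighbor LBP code of a 3x3 patch.
    Each neighbor >= center contributes one bit, read clockwise
    from the top-left corner."""
    center = patch[1][1]
    # clockwise neighbor coordinates starting at top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(lbp_code(patch))  # only the top-row neighbors set bits 0..2 -> 7
```

A full feature vector would histogram these codes over all pixels of the distorted image and each MPRI before comparing them.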

223 citations


Cites methods from "LIBSVM: A library for support vecto..."

  • ...(12) We use the LIBSVM [42] implementation of the SVR with a radial basis function (RBF) kernel....

    [...]

  • ...We use the LIBSVM [42] implementation of the SVR with a radial basis function (RBF) kernel....

    [...]

Journal ArticleDOI
TL;DR: Experimental results have demonstrated that the FKNN-based system greatly outperforms SVM-based approaches and other methods in the literature, and might serve as a new candidate of powerful tools for diagnosing PD with excellent performance.
Abstract: In this paper, we present an effective and efficient diagnosis system using fuzzy k-nearest neighbor (FKNN) for Parkinson's disease (PD) diagnosis. The proposed FKNN-based system is compared with support vector machine (SVM) based approaches. In order to further improve the diagnosis accuracy for detection of PD, principal component analysis was employed to construct the most discriminative new feature sets, on which the optimal FKNN model was constructed. The effectiveness of the proposed system has been rigorously estimated on a PD data set in terms of classification accuracy, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC). Experimental results have demonstrated that the FKNN-based system greatly outperforms SVM-based approaches and other methods in the literature. The best classification accuracy (96.07%) obtained by the FKNN-based system using a 10-fold cross validation method can ensure a reliable diagnostic model for detection of PD. Promisingly, the proposed system might serve as a powerful new candidate tool for diagnosing PD with excellent performance.
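The 96.07% accuracy above is obtained under 10-fold cross validation. A minimal sketch of the fold bookkeeping that scheme implies (index partitioning only; the classifier itself is omitted):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices 0..n_samples-1 into k contiguous,
    nearly equal folds; each fold serves once as the held-out test set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(25, k=10)
print([len(f) for f in folds])  # -> [3, 3, 3, 3, 3, 2, 2, 2, 2, 2]
```

In practice the indices are shuffled (often stratified by class) before partitioning; that step is left out here for brevity.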

223 citations


Cites methods from "LIBSVM: A library for support vecto..."

  • ...For SVM, the LIBSVM implementation is utilized, which was originally developed by Chang and Lin (2001)....

    [...]

References
Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated and the performance of the support- vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

37,861 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...yi ∈ {1, −1}, C-SVC [Boser et al. 1992; Cortes and Vapnik 1995] solves the following primal optimization problem:

        \min_{w,\,b,\,\xi} \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (1)

        \text{subject to} \quad y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, l,

    where \phi(x_i) maps x_i into a… (footnote 4: LIBSVM Tools: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools)...

    [...]
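The C-SVC primal quoted above minimizes (1/2)wᵀw + C Σᵢ ξᵢ under the margin constraints. A toy evaluation of that objective with a linear kernel (φ = identity), where each slack is set to its smallest feasible value max(0, 1 − yᵢ(wᵀxᵢ + b)); this is an illustrative sketch, not LIBSVM's solver:

```python
def c_svc_primal(w, b, C, X, y):
    """Evaluate the C-SVC primal objective
    0.5 * w^T w + C * sum(xi_i), where each slack
    xi_i = max(0, 1 - y_i * (w^T x_i + b)) is the smallest
    value satisfying y_i (w^T x_i + b) >= 1 - xi_i, xi_i >= 0."""
    reg = 0.5 * sum(wj * wj for wj in w)
    slacks = []
    for x_vec, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, x_vec)) + b)
        slacks.append(max(0.0, 1.0 - margin))
    return reg + C * sum(slacks)

# Two well-separated points plus one inside the margin (slack 0.5).
X = [[2.0], [-2.0], [0.5]]
y = [1, -1, 1]
print(c_svc_primal(w=[1.0], b=0.0, C=1.0, X=X, y=y))  # -> 1.0
```

Raising C weights the slack term more heavily, trading a larger ‖w‖ for fewer margin violations.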

01 Jan 1998
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.

26,531 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...Under given parameters C > 0 and \epsilon > 0, the standard form of support vector regression [Vapnik 1998] is

        \min_{w,\,b,\,\xi,\,\xi^*} \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i + C \sum_{i=1}^{l} \xi_i^*

        \text{subject to} \quad w^T \phi(x_i) + b - z_i \le \epsilon + \xi_i, \quad z_i - w^T \phi(x_i) - b \le \epsilon + \xi_i^*, \quad \xi_i,\, \xi_i^* \ge 0, \quad i = 1, \dots, l....

    [...]
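In the ε-SVR primal quoted above, the optimal slacks ξᵢ, ξᵢ* reduce to the ε-insensitive loss max(0, |wᵀφ(xᵢ) + b − zᵢ| − ε). A toy linear-kernel evaluation of the resulting objective (illustrative only, not LIBSVM's solver):

```python
def eps_svr_primal(w, b, C, eps, X, z):
    """Evaluate the epsilon-SVR primal objective
    0.5 * w^T w + C * sum(xi_i + xi_i*), where the optimal slacks
    collapse to the eps-insensitive loss max(0, |f(x_i) - z_i| - eps)."""
    reg = 0.5 * sum(wj * wj for wj in w)
    loss = 0.0
    for x_vec, zi in zip(X, z):
        f = sum(wj * xj for wj, xj in zip(w, x_vec)) + b
        loss += max(0.0, abs(f - zi) - eps)
    return reg + C * loss

# Only the middle sample deviates by more than eps from the fit f(x) = x.
X = [[1.0], [2.0], [3.0]]
z = [1.0, 2.5, 3.0]
print(eps_svr_primal(w=[1.0], b=0.0, C=1.0, eps=0.2, X=X, z=z))  # ≈ 0.8
```

Deviations within the ε-tube cost nothing, which is what distinguishes SVR from plain least-squares regression.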

  • ...It can be clearly seen that C-SVC and one-class SVM are already in the form of problem (11)....

    [...]

  • ..., l, in two classes, and a vector y ∈ R^l such that yi ∈ {1, −1}, C-SVC (Cortes and Vapnik, 1995; Vapnik, 1998) solves the following primal problem:...

    [...]

  • ...Then, according to the SVM formulation, svm_train_one calls a corresponding subroutine such as solve_c_svc for C-SVC and solve_nu_svc for ν-SVC....

    [...]

  • ...Note that b of C-SVC and ε-SVR plays the same role as −ρ in one-class SVM, so we define....

    [...]

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.

11,211 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...It can be clearly seen that C-SVC and one-class SVM are already in the form of problem (11)....

    [...]

  • ...Then, according to the SVM formulation, svm_train_one calls a corresponding subroutine such as solve_c_svc for C-SVC and solve_nu_svc for ν-SVC....

    [...]

  • ...Note that b of C-SVC and ε-SVR plays the same role as −ρ in one-class SVM, so we define....

    [...]

  • ...In Section 2, we describe SVM formulations supported in LIBSVM: C-Support Vector Classification (C-SVC), ....

    [...]

  • ...yi ∈ {1, −1}, C-SVC [Boser et al. 1992; Cortes and Vapnik 1995] solves the following primal optimization problem:

        \min_{w,\,b,\,\xi} \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (1)

        \text{subject to} \quad y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, l,

    where \phi(x_i) maps x_i into a higher-dimensional space and C > 0 is the regularization parameter. (Footnote 4: LIBSVM Tools: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools)...

    [...]

01 Jan 2008
TL;DR: A simple procedure is proposed, which usually gives reasonable results and is suitable for beginners who are not familiar with SVM.
Abstract: Support vector machine (SVM) is a popular technique for classification. However, beginners who are not familiar with SVM often get unsatisfactory results since they miss some easy but significant steps. In this guide, we propose a simple procedure, which usually gives reasonable results.

7,069 citations


"LIBSVM: A library for support vecto..." refers methods in this paper

  • ...A Simple Example of Running LIBSVM While detailed instructions of using LIBSVM are available in the README file of the package and the practical guide by Hsu et al. [2003], here we give a simple example....

    [...]

  • ...For instructions of using LIBSVM, see the README file included in the package, the LIBSVM FAQ,3 and the practical guide by Hsu et al. [2003]. LIBSVM supports the following learning tasks....

    [...]
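The practical guide referenced above recommends grid-searching C and γ over exponentially spaced values with cross validation. A hedged sketch of that loop, where `cv_accuracy` stands in for the user's real cross-validation routine (the `toy_score` function below is hypothetical, purely to make the sketch runnable):

```python
def grid_search(cv_accuracy, c_values, gamma_values):
    """Pick the (C, gamma) pair maximizing a cross-validation score,
    as the practical guide recommends. `cv_accuracy` is any callable
    scoring a (C, gamma) pair."""
    best = None
    for C in c_values:
        for gamma in gamma_values:
            score = cv_accuracy(C, gamma)
            if best is None or score > best[0]:
                best = (score, C, gamma)
    return best

# Hypothetical toy score peaking at C = 8, gamma = 0.125.
def toy_score(C, gamma):
    return -((C - 8.0) ** 2) - (gamma - 0.125) ** 2

# Exponentially spaced grids (C = 2^-5..2^15, gamma = 2^-15..2^3),
# the ranges suggested in the guide.
c_grid = [2.0 ** k for k in range(-5, 16, 2)]
g_grid = [2.0 ** k for k in range(-15, 4, 2)]
best = grid_search(toy_score, c_grid, g_grid)
print(best)  # picks C = 8.0, gamma = 0.125
```

A coarse grid like this is typically refined with a finer grid around the best point before the final model is trained on all data.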

Journal ArticleDOI
TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods that consider all data at once generally need fewer support vectors.
Abstract: Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend them for multiclass classification is still an ongoing research issue. Several methods have been proposed where typically we construct a multiclass classifier by combining several binary classifiers. Some authors also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods on large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required, so experiments have so far been limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods. We then compare their performance with three methods based on binary classifications: "one-against-all," "one-against-one," and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the "one-against-one" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems, methods that consider all data at once generally need fewer support vectors.

6,562 citations
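LIBSVM itself adopts the "one-against-one" strategy this reference finds practical. A minimal sketch of the max-wins vote that aggregates the k(k−1)/2 pairwise decisions (the binary outcomes are supplied as a plain dict here for illustration):

```python
from itertools import combinations

def one_vs_one_predict(classes, pairwise_winner):
    """Max-wins voting over all class pairs, as in the
    'one-against-one' multiclass strategy LIBSVM uses.
    pairwise_winner[(a, b)] gives the winner of the binary
    classifier trained on classes a and b."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise_winner[(a, b)]] += 1
    # Ties are broken by class order, mirroring a common convention.
    return max(classes, key=lambda c: votes[c])

# Toy pairwise outcomes for 3 classes: class 1 beats both others.
outcomes = {(0, 1): 1, (0, 2): 0, (1, 2): 1}
print(one_vs_one_predict([0, 1, 2], outcomes))  # -> 1
```

With k classes this requires training k(k−1)/2 binary machines, but each on only two classes' data, which is why the reference finds it practical for large problems.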