Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
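
Since the article's goal is to help users easily apply SVMs, a minimal usage sketch may be useful here. This is an illustrative sketch, not part of the article: it assumes the Python interface bundled with LIBSVM (svmutil.py, distributed on PyPI as libsvm-official) and the heart_scale example data file that ships with the LIBSVM package.

    # Minimal LIBSVM usage via its bundled Python interface (svmutil).
    # Assumes the 'libsvm-official' PyPI package (or the svmutil.py that
    # ships with the LIBSVM source); 'heart_scale' is the example data
    # file included in the LIBSVM distribution.
    from libsvm.svmutil import svm_read_problem, svm_train, svm_predict

    y, x = svm_read_problem('heart_scale')       # labels, sparse features
    model = svm_train(y[:200], x[:200], '-c 4')  # C-SVC with C = 4
    p_labels, p_acc, p_vals = svm_predict(y[200:], x[200:], model)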


Citations
Journal ArticleDOI
TL;DR: It was found that the SVM model produced better prediction performance for crash injury severity than did the OP model, and that for several variables, such as the length of the exit ramp and the shoulder width of the freeway mainline, the results of the SVM model are more reasonable than those of the OP model.

224 citations

Journal ArticleDOI
TL;DR: Individuals with UD and those with BD are differentiated by structural abnormalities in neural regions supporting emotion processing, and neuroimaging and multivariate pattern classification techniques are promising tools to differentiate UD from BD.
Abstract: Importance: The structural abnormalities in the brain that accurately differentiate unipolar depression (UD) and bipolar depression (BD) remain unidentified. Objectives: First, to investigate and compare morphometric changes in UD and BD, and to replicate the findings at 2 independent neuroimaging sites; second, to differentiate UD and BD using multivariate pattern classification techniques. Design, Setting, and Participants: In a 2-center cross-sectional study, structural gray matter data were obtained at 2 independent sites (Pittsburgh, Pennsylvania, and Munster, Germany) using 3-T magnetic resonance imaging. Voxel-based morphometry was used to compare local gray and white matter volumes, and a novel pattern classification approach was used to discriminate between UD and BD, training the classifier at one imaging site and testing it in an independent sample at the other site. The Pittsburgh sample of participants was recruited from the Western Psychiatric Institute and Clinic at the University of Pittsburgh from 2008 to 2012. The Munster sample was recruited from the Department of Psychiatry at the University of Munster from 2010 to 2012. Equally divided between the 2 sites were 58 currently depressed patients with bipolar I disorder, 58 age- and sex-matched unipolar depressed patients, and 58 matched healthy controls. Main Outcomes and Measures: Magnetic resonance imaging was used to detect structural differences between groups. Morphometric analyses were applied using voxel-based morphometry. Pattern classification techniques were used for a multivariate approach. Results: At both sites, individuals with BD showed reduced gray matter volumes in the hippocampal formation and the amygdala relative to individuals with UD (Montreal Neurological Institute coordinates x = −22, y = −1, z = 20; k = 1938 voxels; t = 4.75), whereas individuals with UD showed reduced gray matter volumes in the anterior cingulate gyrus compared with individuals with BD (Montreal Neurological Institute coordinates x = −8, y = 32, z = 3; k = 979 voxels; t = 6.37; all corrected P values significant). Conclusions and Relevance: Individuals with UD and those with BD are differentiated by structural abnormalities in neural regions supporting emotion processing. Neuroimaging and multivariate pattern classification techniques are promising tools to differentiate UD from BD and show promise as future diagnostic aids.

224 citations

Journal ArticleDOI
TL;DR: Raman spectroscopy allows non-invasive, continuous monitoring of cell death, which may help shed new light on complex pathophysiological or drug-induced cell death processes.
Abstract: Although apoptosis and necrosis have distinct features, the identification and discrimination of apoptotic and necrotic cell death in vitro is challenging. Immunocytological and biochemical assays represent the current gold standard for monitoring cell death pathways; however, these standard assays are invasive, require large numbers of cells, and impede continuous monitoring experiments. In this study, both room temperature (RT)-induced apoptosis and heat-triggered necrosis were analyzed in individual Saos-2 and SW-1353 cells by utilizing Raman microspectroscopy. A targeted analysis of defined cell death modalities, including early and late apoptosis as well as necrosis, was facilitated by combining Raman spectroscopy with fluorescence microscopy. Spectral shifts were identified in the two cell lines that reflect biochemical changes specific for either RT-induced apoptosis or heat-mediated necrosis. A supervised classification model specified apoptotic and necrotic cell death based on single-cell Raman spectra. To conclude, Raman spectroscopy allows non-invasive, continuous monitoring of cell death, which may help shed new light on complex pathophysiological or drug-induced cell death processes.

224 citations

Journal ArticleDOI
TL;DR: The standard machine learning techniques naive Bayes and SVM are applied to the domain of online Cantonese-written restaurant reviews to automatically classify user reviews as positive or negative; accuracy is found to be influenced by the interaction between the classification models and the feature options.
Abstract: Research highlights:
• Naive Bayes and SVM are used for Cantonese sentiment classification.
• Accuracy is influenced by interaction between classification models and features.
• The naive Bayes classifier achieves accuracy as good as or better than SVM.
• Character-based bigrams are better features than unigrams and trigrams in capturing Cantonese sentiment.
Cantonese is an important dialect in some regions of Southern China. Local online users often express their opinions and experiences on the web in written Cantonese. Although the information in those reviews is valuable to potential consumers and sellers, the huge volume of web reviews makes it difficult to give an unbiased evaluation of a product, and Cantonese reviews are unintelligible to Mandarin Chinese speakers. In this paper, the standard machine learning techniques naive Bayes and SVM are applied to the domain of online Cantonese-written restaurant reviews to automatically classify user reviews as positive or negative. The effects of feature presentations and feature sizes on classification performance are discussed. We find that accuracy is influenced by the interaction between the classification models and the feature options. The naive Bayes classifier achieves accuracy as good as or better than SVM. Character-based bigrams prove to be better features than unigrams and trigrams in capturing Cantonese sentiment orientation.
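
A rough sketch of the setup this paper describes, assuming scikit-learn rather than the authors' code; the two Cantonese reviews and their labels below are invented placeholders, not the paper's data.

    # Character-based bigram features feeding naive Bayes and a
    # linear-kernel SVM, as in the paper's comparison. Uses scikit-learn;
    # the toy reviews are invented placeholders.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import SVC

    reviews = ["好好味 服務一流", "難食 唔會再嚟"]  # placeholder reviews
    labels = [1, 0]                                  # 1 = positive, 0 = negative

    # Character-based bigrams, the feature set the paper found most effective
    vec = CountVectorizer(analyzer='char', ngram_range=(2, 2))
    X = vec.fit_transform(reviews)

    nb = MultinomialNB().fit(X, labels)
    svm = SVC(kernel='linear').fit(X, labels)
    print(nb.predict(vec.transform(["好味"])), svm.predict(vec.transform(["好味"])))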

224 citations

Journal ArticleDOI
TL;DR: Experiments performed on a machine fault simulator indicate that compared with the current state-of-the-art methods, the proposed convolutional discriminative feature learning method shows significant performance gains, and it is effective and efficient for induction motor fault diagnosis.
Abstract: A convolutional discriminative feature learning method is presented for induction motor fault diagnosis. The approach first uses a back-propagation (BP)-based neural network to learn local filters that capture discriminative information. Then, a feed-forward convolutional pooling architecture is built to extract the final features through these local filters. Due to the discriminative learning of the BP-based neural network, the learned local filters can discover potential discriminative patterns. Also, the convolutional pooling architecture is able to derive invariant and robust features. Therefore, the proposed method can learn robust and discriminative representations from the raw sensory data of induction motors in an efficient and automatic way. Finally, the learned representations are fed into a support vector machine classifier to identify six different fault conditions. Experiments performed on a machine fault simulator indicate that, compared with the current state-of-the-art methods, the proposed method shows significant performance gains and is effective and efficient for induction motor fault diagnosis.

224 citations


Cites methods from "LIBSVM: A library for support vecto..."

  • ...The LIBSVM tools are used for the implementation of the SVM classification [42], where the regularization term C that controls norms of model parameters and the Gaussian kernel parameter σ of the RBF-based SVM classifier are determined by cross-validation....

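The snippet above chooses the regularization parameter C and the RBF kernel parameter by cross-validation. Here is a minimal sketch of that selection using LIBSVM's Python interface, where the '-v 5' option makes svm_train return 5-fold cross-validation accuracy instead of a model; the grid values are arbitrary illustrations, not the cited paper's settings.

    # Grid search for C and gamma via LIBSVM's built-in cross-validation:
    # with '-v 5', svm_train returns CV accuracy rather than a model.
    from libsvm.svmutil import svm_read_problem, svm_train

    y, x = svm_read_problem('heart_scale')  # example file from the LIBSVM package
    best = (-1.0, None)
    for c_exp in range(-1, 7, 2):           # C = 2^-1, 2^1, 2^3, 2^5
        for g_exp in range(-7, 1, 2):       # gamma = 2^-7 .. 2^-1
            acc = svm_train(y, x, f'-c {2**c_exp} -g {2**g_exp} -v 5 -q')
            if acc > best[0]:
                best = (acc, (2**c_exp, 2**g_exp))
    print('best CV accuracy %.2f%% at (C, gamma) = %s' % best)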

References
Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
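
The "polynomial input transformations" mentioned here correspond to a polynomial kernel. A minimal sketch of two-group classification with such a kernel, using scikit-learn's SVC (which wraps LIBSVM) on synthetic data rather than the paper's OCR benchmark:

    # Two-group classification with a polynomial kernel, echoing the
    # paper's polynomial input transformations; data here is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = SVC(kernel='poly', degree=3, C=1.0).fit(X_tr, y_tr)
    print('test accuracy:', clf.score(X_te, y_te))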

37,861 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...{1,−1}, C-SVC [Boser et al. 1992; Cortes and Vapnik 1995] solves the following primal optimization problem (footnote 4: LIBSVM Tools: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools):

        \min_{w,b,\xi} \ \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (1)

        subject to \ y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, l,

    where \phi(x_i) maps x_i into a…...


01 Jan 1998
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.

26,531 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...Under given parameters C > 0 and \epsilon > 0, the standard form of support vector regression [Vapnik 1998] is

        \min_{w,b,\xi,\xi^*} \ \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i + C \sum_{i=1}^{l} \xi_i^*

        subject to \ w^T \phi(x_i) + b - z_i \le \epsilon + \xi_i,
                   \ z_i - w^T \phi(x_i) - b \le \epsilon + \xi_i^*,
                   \ \xi_i, \xi_i^* \ge 0, \quad i = 1, \dots, l....


  • ...It can be clearly seen that C-SVC and one-class SVM are already in the form of problem (11)....


  • ..., l, in two classes, and a vector y \in R^l such that y_i \in \{1, −1\}, C-SVC (Cortes and Vapnik, 1995; Vapnik, 1998) solves the following primal problem:...


  • ...Then, according to the SVM formulation, svm_train_one calls a corresponding subroutine such as solve_c_svc for C-SVC and solve_nu_svc for ν-SVC....


  • ...Note that b of C-SVC and ε-SVR plays the same role as −ρ in one-class SVM, so we define ....

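The first snippet above quotes the ε-SVR primal problem. A minimal sketch of fitting that formulation with LIBSVM's Python interface, where '-s 3' selects ε-SVR, '-c' sets C, and '-p' sets ε; the one-dimensional data is an invented toy problem:

    # epsilon-SVR as formulated above: '-s 3' selects epsilon-SVR in
    # LIBSVM, '-c' is C, '-p' is epsilon. Toy one-dimensional data.
    from libsvm.svmutil import svm_train, svm_predict

    x = [{1: v / 10.0} for v in range(50)]   # sparse {index: value} format
    z = [2.0 * xi[1] + 1.0 for xi in x]      # noiseless linear targets
    model = svm_train(z, x, '-s 3 -t 2 -c 1 -p 0.1')
    pred, _, _ = svm_predict(z, x, model)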

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
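
The abstract notes that the solution is a linear combination of supporting patterns, i.e. f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b. A sketch inspecting those pieces through scikit-learn's LIBSVM-backed SVC; the blob data is synthetic:

    # The decision function is a linear combination of supporting
    # patterns; SVC exposes the patterns, coefficients, and intercept.
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=100, centers=2, random_state=0)
    clf = SVC(kernel='rbf', C=1.0).fit(X, y)

    print('supporting patterns:', clf.support_vectors_.shape[0])
    print('coefficients alpha_i * y_i:', clf.dual_coef_)
    print('intercept b:', clf.intercept_)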

11,211 citations


"LIBSVM: A library for support vecto..." refers background in this paper

  • ...It can be clearly seen that C-SVC and one-class SVM are already in the form of problem (11)....


  • ...Then, according to the SVM formulation, svm_train_one calls a corresponding subroutine such as solve_c_svc for C-SVC and solve_nu_svc for ν-SVC....


  • ...Note that b of C-SVC and ε-SVR plays the same role as −ρ in one-class SVM, so we define ....


  • ...In Section 2, we describe SVM formulations supported in LIBSVM: C-Support Vector Classification (C-SVC), ....


  • ...{1,−1}, C-SVC [Boser et al. 1992; Cortes and Vapnik 1995] solves the following primal optimization problem (footnote 4: LIBSVM Tools: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools):

        \min_{w,b,\xi} \ \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (1)

        subject to \ y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, l,

    where \phi(x_i) maps x_i into a higher-dimensional space and C > 0 is the regularization parameter....


01 Jan 2008
TL;DR: A simple procedure is proposed, which usually gives reasonable results and is suitable for beginners who are not familiar with SVM.
Abstract: Support vector machine (SVM) is a popular technique for classification. However, beginners who are not familiar with SVM often get unsatisfactory results since they miss some easy but significant steps. In this guide, we propose a simple procedure, which usually gives reasonable results.
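
The guide's procedure is commonly summarized as: scale the features, try the RBF kernel, and pick C and γ by cross-validation over an exponential grid. A sketch of that recipe using scikit-learn rather than the grid.py script shipped with LIBSVM:

    # The practical guide's recipe, sketched with scikit-learn: scale
    # features, use the RBF kernel, select C and gamma by cross-validation.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    grid = {'svc__C': [2.0**k for k in range(-1, 7, 2)],
            'svc__gamma': [2.0**k for k in range(-7, 1, 2)]}
    search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
    print(search.best_params_, search.best_score_)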

7,069 citations


"LIBSVM: A library for support vecto..." refers methods in this paper

  • ...A Simple Example of Running LIBSVM While detailed instructions of using LIBSVM are available in the README file of the package and the practical guide by Hsu et al. [2003], here we give a simple example....


  • ...For instructions on using LIBSVM, see the README file included in the package, the LIBSVM FAQ (footnote 3), and the practical guide by Hsu et al. [2003]. LIBSVM supports the following learning tasks....


Journal ArticleDOI
TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods that consider all data at once generally need fewer support vectors.
Abstract: Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend them to multiclass classification is still an ongoing research issue. Several methods have been proposed where typically we construct a multiclass classifier by combining several binary classifiers. Some authors have also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods using large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required, so experiments have thus far been limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods. We then compare their performance with three methods based on binary classifications: "one-against-all," "one-against-one," and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the "one-against-one" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems, methods that consider all data at once generally need fewer support vectors.
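
LIBSVM itself adopts the "one-against-one" method this paper recommends, training k(k−1)/2 binary classifiers for k classes. A small sketch showing the pairwise decision values through scikit-learn's LIBSVM-backed SVC:

    # One-against-one multiclass SVM: k(k-1)/2 pairwise classifiers.
    # For iris (k = 3) that is 3 pairwise decision values per sample.
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    clf = SVC(kernel='rbf', decision_function_shape='ovo').fit(X, y)
    print(clf.decision_function(X[:2]).shape)  # -> (2, 3)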

6,562 citations