Journal ArticleDOI

Combining MLC and SVM classifiers for learning based decision making: analysis and evaluations

01 Jan 2015-Computational Intelligence and Neuroscience (Hindawi Publishing Corporation)-Vol. 2015, pp 423581-423581
TL;DR: MLC and SVM are combined in learning and classification, which yields probabilistic output for the SVM and facilitates soft decision making; results are reported to indicate how the combined classifier may work under various conditions.
Abstract: Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributions and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.
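The MLC half of this scheme admits a compact illustration. Below is a minimal sketch, assuming diagonal-covariance Gaussian class-conditional densities and invented toy data (none of it is from the paper): class parameters are estimated by maximum likelihood, and Bayes' rule converts likelihoods into the posterior probabilities that soft decision making relies on.

```python
# Minimal Gaussian MLC sketch: estimate per-class parameters by maximum
# likelihood, then obtain posteriors P(c|x) via Bayes' rule.
import numpy as np

def fit_mlc(X, y):
    """Estimate per-class mean, (diagonal) variance, and prior."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_proba_mlc(X, params):
    """Posterior P(c|x) from Gaussian log-likelihoods plus log-priors."""
    scores = []
    for c, (mu, var, prior) in params.items():
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(loglik + np.log(prior))
    scores = np.stack(scores, axis=1)
    scores -= scores.max(axis=1, keepdims=True)   # max-shift for stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

# invented two-class toy data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
proba = predict_proba_mlc(X, fit_mlc(X, y))
```

The log-domain computation with a max-shift avoids underflow when the likelihoods themselves are tiny.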

Citations
Journal ArticleDOI
18 Nov 2016
TL;DR: Novel generalized deep transfer networks (DTNs) are proposed that transfer label information across heterogeneous domains (textual to visual) and share labels between the two domains, generating both domain-specific and shared interdomain features.
Abstract: In recent years, deep neural networks have been successfully applied to model visual concepts and have achieved competitive performance on many tasks. Despite their impressive performance, traditional deep networks suffer degraded performance when sufficient training data are lacking. This problem becomes extremely severe for deep networks trained on a very small dataset, which overfit by capturing nonessential or noisy information in the training set. Toward this end, we propose novel generalized deep transfer networks (DTNs), capable of transferring label information across heterogeneous domains, from the textual domain to the visual domain. The proposed framework mitigates the problem of insufficient training images by bringing in rich labels from the textual domain. Specifically, to share labels between the two domains, we build parameter- and representation-shared layers. These layers generate domain-specific and shared interdomain features, making the architecture flexible and powerful in jointly capturing complex information from different domains. To evaluate the proposed method, we release a new dataset extended from NUS-WIDE at http://imag.njust.edu.cn/NUS-WIDE-128.html. Experimental results on this dataset show the superior performance of the proposed DTNs compared to existing state-of-the-art methods.
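As a rough illustration only (the authors' actual architecture and code are not shown here), the parameter-shared idea can be sketched with two domain-specific projections feeding one weight-shared layer; all layer sizes and weights below are invented:

```python
# Toy sketch of shared vs. domain-specific layers: each domain has its
# own projection, but both pass through one weight-shared layer, landing
# in a common space where label information can be shared.
import numpy as np

rng = np.random.default_rng(0)
W_text   = rng.normal(size=(128, 64))   # textual-domain-specific weights
W_image  = rng.normal(size=(256, 64))   # visual-domain-specific weights
W_shared = rng.normal(size=(64, 32))    # parameter-shared layer

def forward(x, W_domain):
    h = np.tanh(x @ W_domain)           # domain-specific representation
    return np.tanh(h @ W_shared)        # shared interdomain representation

text_feat  = forward(rng.normal(size=(1, 128)), W_text)
image_feat = forward(rng.normal(size=(1, 256)), W_image)
# both domains now live in the same 32-d shared space
```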

137 citations


Cites methods from "Combining MLC and SVM classifiers f..."

  • ...We compare the proposed generalized DTNs (sig-tDTNs and duft-tDTNs) to the following methods: (1) SVM: SVM is the conventional shallow structured classifier [Zhang et al. 2015a] and is set as the baseline for comparisons....

Journal ArticleDOI
TL;DR: Experimental results show that the proposed model outperforms five other state-of-the-art video saliency detection approaches and the proposed framework is found useful for other video content based applications such as video highlights.

130 citations


Cites methods from "Combining MLC and SVM classifiers f..."

  • ...For regression purpose, a linear SVM is adopted for its simplicity and effectiveness[23][102]....

Journal ArticleDOI
TL;DR: The validation test on UCI data sets demonstrates that for imbalanced medical data, the proposed method enhances the overall performance of the classifier while producing high accuracy in identifying both the majority and minority classes.
Abstract: Classification of class-imbalanced data has drawn significant interest in medical applications. Most existing methods are prone to categorizing samples into the majority class, resulting in bias, in particular insufficient identification of the minority class. A novel approach, class-weighted random forest, is introduced to address the problem by assigning individual weights to each class instead of a single weight. The validation test on UCI data sets demonstrates that for imbalanced medical data, the proposed method enhances the overall performance of the classifier while producing high accuracy in identifying both the majority and minority classes.
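A hedged sketch of the per-class weighting idea, using scikit-learn's RandomForestClassifier as a stand-in (the paper's exact weighting scheme is not reproduced, and the imbalanced toy data are invented): class_weight="balanced" weights each class inversely to its frequency, so minority-class errors cost more during tree construction.

```python
# Class-weighted random forest sketch for imbalanced data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# imbalanced toy data: 180 majority vs 20 minority samples
X = np.vstack([rng.normal(0, 1, (180, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 180 + [1] * 20)

# "balanced" reweights classes inversely to their frequencies
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
clf.fit(X, y)
```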

128 citations


Cites methods from "Combining MLC and SVM classifiers f..."

  • ...There were many popular algorithms concerning classifier combination, such as Bayesian [41], [42], Dempster–Shafer [43]–[47], Fuzzy Integral [48], [49], and Voting Methods [50]–[57]....

Journal ArticleDOI
TL;DR: The conclusion is that decision-level and pixel-level fusion approaches produce comparable classification results, and either procedure can be adopted in areas with inescapable cloud problems for updating crop inventories and acreage estimation at regional scales.
Abstract: Crop mapping becomes a daunting task in humid, tropical, or subtropical regions due to the unavailability of adequate cloud-free optical imagery. The objective of this study is to evaluate the comparative performance of decision-level and pixel-level data-fusion ensemble-classified maps using Landsat 8, Landsat 7, and Sentinel-2 data. This research implements parallel and concatenation approaches to ensemble-classify the images. The multiclassifier system comprises Maximum Likelihood, Support Vector Machines, and Spectral Information Divergence as base classifiers. Decision-level fusion is achieved by implementing the plurality voting method. Pixel-level fusion is achieved by implementing fusion by mosaicking, appending cloud-free pixels from either Sentinel-2 or Landsat 7. The comparison is based on the assessment of classification accuracy. Overall accuracy results show that decision-level fusion achieved an accuracy of 85.4%, whereas pixel-level fusion classification attained 82.5%; their respective kappa coefficients of 0.84 and 0.80 are not significantly different according to a Z-test at $\alpha = 0.05$. F1-score values reveal that decision-level fusion performed better on most individual classes than pixel-level fusion. The regression coefficient between planted areas from both approaches is 0.99. However, Support Vector Machines performed the best of the three classifiers. The conclusion is that both decision-level and pixel-level fusion approaches produced comparable classification results. Therefore, either procedure can be adopted in areas with inescapable cloud problems for updating crop inventories and acreage estimation at regional scales. Future work can focus on performing more comparison tests on different areas, running tests using different multiclassifier systems, and using different imagery.
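The decision-level step above can be sketched as a per-pixel plurality vote; the three label maps below are invented stand-ins for the Maximum Likelihood, SVM, and Spectral Information Divergence outputs:

```python
# Decision-level fusion by plurality voting over per-pixel labels.
import numpy as np

def plurality_vote(label_maps):
    """Per-pixel majority vote over a list of equally sized label maps."""
    stacked = np.stack(label_maps)                 # (n_classifiers, n_pixels)
    n_classes = stacked.max() + 1
    # count votes per class for every pixel, then take the argmax
    counts = np.apply_along_axis(np.bincount, 0, stacked, minlength=n_classes)
    return counts.argmax(axis=0)

mlc = np.array([0, 1, 2, 1])
svm = np.array([0, 1, 1, 1])
sid = np.array([2, 1, 2, 0])
fused = plurality_vote([mlc, svm, sid])            # -> [0, 1, 2, 1]
```

Ties are broken toward the lowest class index here; a production system would need an explicit tie-breaking rule.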

23 citations


Additional excerpts

  • ...[56] and Szuster et al....

Journal ArticleDOI
TL;DR: The results revealed that the supervised object-based NN approach using the visible and near-infrared bands of both satellite images produced the most homogeneous and accurate map of the methods compared.
Abstract: An impervious surface is generally defined as any surface through which water cannot infiltrate into the soil. Due to the impact of urban impervious surfaces (UIS) on environmental issues, the amount of impervious surface has been recognized as the most significant index of environmental quality. Detection and analysis of impervious surfaces within a watershed is one of the developing areas of scientific interest. This study evaluates and compares the accuracy and performance of five classification algorithms—supervised object-based nearest neighbour (NN) classification, supervised pixel-based maximum likelihood classification (MLC), supervised pixel-based spectral angle mapper (SAM), the band-ratioing normalized difference built-up index (NDBI), and the normalized difference impervious index (NDII)—in extracting urban impervious surfaces. Our first aim was to identify the most effective method for mapping UIS using Sentinel-2A and Landsat-8 satellite data. The second aim was to compare and reveal the efficiency of the spatial and spectral resolution of Sentinel-2A and Landsat-8 data in extracting UIS. The results revealed that the supervised object-based NN approach using the visible and near-infrared bands of both satellite images produced the most homogeneous and accurate map among the methods. The object-based NN algorithm achieved overall classification accuracies of 90.91% and 88.64%, and Kappa coefficients of 0.82 and 0.77, for the Sentinel-2 and Landsat-8 images, respectively. The study also showed that the Sentinel-2 image yielded better results than the pan-sharpened Landsat-8 image in extracting detail and classification accuracy. Comparing these methods in the selected challenging study area can provide insight into the selection of a classification method for rapid and reliable extraction of UIS.
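Of the five methods, the band-ratio indices are simple enough to sketch directly. Below is a hedged NDBI example with invented reflectance values (band numbering follows the usual Landsat-8 convention; the study's exact processing chain is not reproduced):

```python
# NDBI (normalized difference built-up index) from SWIR and NIR bands.
import numpy as np

def ndbi(swir, nir, eps=1e-12):
    """NDBI = (SWIR - NIR) / (SWIR + NIR); built-up pixels trend positive."""
    return (swir - nir) / (swir + nir + eps)

swir = np.array([0.30, 0.10])   # e.g. Landsat-8 band 6 (SWIR1) reflectance
nir  = np.array([0.20, 0.40])   # e.g. Landsat-8 band 5 (NIR) reflectance
print(ndbi(swir, nir))          # built-up pixel > 0, vegetated pixel < 0
```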

19 citations


Cites background or methods from "Combining MLC and SVM classifiers f..."

  • ...…impact of urban impervious surfaces on environmental issues such as water and air pollution, flooding, and urban climate, the amount of impervious surfaces (IS) has been recognized as the most significant index of environmental quality (Arnold Jr and Gibbons 1996; Weng 2012; Zhang et al. 2015a)....

  • ...…it is also reported that the distribution of IS plays a crucial role in estimating numerous socioeconomic factors such as urban development, population distribution and density, social conditions, and fluctuation of housing prices (Wu and Murray 2003; Yuan and Bauer 2007; Zhang et al. 2015a)....

  • ...This algorithm is based on Bayesian theory in estimating parameters of a probabilistic model (Zhang et al. 2015b)....

  • ...Nevertheless, accurate mapping of impervious surfaces using satellite passive sensor data has been a challenging task due to the diversity of urban land cover classes, where confusion often occurs between pervious and impervious surfaces (Weng 2012; Zhang et al. 2015a, 2016; Ma et al. 2017b)....

  • ...A number of studies on the extraction of IS, including Slonecker et al. (2001), Bauer et al. (2005), Yuan and Bauer (2007), Weng (2012), Wang et al. (2015), Zhang et al. (2015a), and Wei and Blaschke (2018), have shown the effectiveness and reliability of remote sensing in the monitoring of UIS....

References
Journal ArticleDOI
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
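LIBSVM's probability estimates can be exercised through scikit-learn's SVC, which wraps LIBSVM; this minimal sketch on invented data shows probability=True turning decision values into class probabilities (fitted internally via Platt-style scaling with cross-validation):

```python
# SVC (LIBSVM under the hood) with probability estimates enabled.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X, y)
proba = clf.predict_proba(X)   # each row is a probability distribution
```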

40,826 citations


"Combining MLC and SVM classifiers f..." refers methods in this paper

  • ...Among these four datasets, SamplesNew is a dataset of suspicious microcalcification clusters extracted from [16] and svmguide3 is a demo dataset of the practical SVM guide [28], whilst the sonar and splice datasets come from the UCI repository of machine learning databases [29]....

  • ...Stage 1: SVM for initial training and classification. The open-source library libSVM [28] is used for initial training and classification of the aforementioned four datasets, and both the linear and the Gaussian radial basis function (RBF) kernels are tested....

Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated and the performance of the support- vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
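The non-linear mapping idea can be sketched with a polynomial kernel on invented data: a circular class boundary is not linearly separable in the input space, but a degree-2 polynomial feature space admits a linear decision surface, with the soft margin controlled by C:

```python
# Soft-margin SVM with a polynomial kernel on a circularly separated toy set.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # circular boundary

# degree-2 kernel: quadratic monomials make the circle linearly
# separable in feature space; C sets the soft-margin trade-off
clf = SVC(kernel="poly", degree=2, C=10.0)
clf.fit(X, y)
```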

37,861 citations


"Combining MLC and SVM classifiers f..." refers background in this paper

  • ...In Cortes and Vapnik [21], the principles of SVM are comprehensively discussed....

  • ...In Cortes and Vapnik [22], the principles of SVM are comprehensively discussed....

  • ...Machine Learning, 2011 [22] Cortes, C., Vapnik, V., Support-vector networks, Machine Learning, 20: 273-297, 1995 [23] Hsu, C.-W., Lin, C.-J., A Comparison of Methods for Multiclass Support Vector Machines, IEEE Transactions on Neural Networks, 13(2): 415-425, 2002 [24] Lee, Y., Lin, Y., Wahba, G., Multicategory Support Vector Machines, Theory, and Application to the Classification of Microarray Data and Satellite Radiance Data, J. Amer....

Journal ArticleDOI
TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems, methods considering all data at once generally need fewer support vectors.
Abstract: Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend them for multiclass classification is still an ongoing research issue. Several methods have been proposed in which a multiclass classifier is typically constructed by combining several binary classifiers. Some authors have also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods using large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required, so until now experiments have been limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods. We then compare their performance with three methods based on binary classification: "one-against-all," "one-against-one," and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the "one-against-one" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems, methods considering all data at once generally need fewer support vectors.
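The "one-against-one" voting rule compared above can be sketched as follows; the pairwise classifier is stubbed out, since only the vote-aggregation logic is being illustrated:

```python
# One-against-one multiclass prediction: each of the k(k-1)/2 pairwise
# binary classifiers votes for one class; the class with most votes wins.
from itertools import combinations

def ovo_predict(pairwise_winner, classes):
    """pairwise_winner(i, j) returns the winning class of the (i, j) duel."""
    votes = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):
        votes[pairwise_winner(i, j)] += 1
    return max(votes, key=votes.get)

# stub: pretend every duel involving class 2 is won by class 2
winner = lambda i, j: 2 if 2 in (i, j) else min(i, j)
print(ovo_predict(winner, [0, 1, 2, 3]))   # class 2 collects the most votes
```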

6,562 citations


"Combining MLC and SVM classifiers f..." refers background or methods in this paper

  • ...Results from an RBF-kernelled SVM and the MLC: In this group of experiments, the RBF kernel is used for the SVM in the combined classifier as it is popularly used in various classification problems [16, 23]....

  • ...Some useful further readings can be found in [23], [24] and [25]....

01 Jan 1999

4,584 citations


"Combining MLC and SVM classifiers f..." refers methods in this paper

  • ...In Platt [25], a posterior class probability $p_i$ is estimated by a sigmoid function as follows:...

  • ...In Platt [26], a posterior class probability $p_i$ is estimated by the sigmoid function below....

  • ...[25] Crammer, K., Singer, Y., On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines, Journal of Machine Learning Research 2: 265–292, 2001 [26] Platt, J., Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods, In: A. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans (eds.)...

  • ...In addition, in Lin et al. [27] Platt's approach is further improved to avoid numerical difficulty, i.e. overflow or underflow, in determining $p_i$ when $E_i = A\,g_{SVM}(\mathbf{x}_i) + B$ is either too large or too small:
    $$p_i = \begin{cases} \dfrac{e^{-E_i}}{1+e^{-E_i}} & \text{if } E_i \ge 0,\\[4pt] \dfrac{1}{1+e^{E_i}} & \text{otherwise,} \end{cases} \qquad (24)$$
    Although there are significant differences between SVM and MLC, the probabilistic model above has uncovered the connection between these two classifiers....

  • ...Cambridge, MA., 2000 [27] Lin, H.-T., Lin, C. J., Weng, R. C., A note on Platt’s probabilistic outputs for support vector machines, Journal of Machine Learning, 68(3): 267-276, 2007....

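The numerically stable form from Lin et al. [27] quoted above can be sketched directly; both branches equal 1/(1+exp(E)) algebraically, but neither exponentiates a large positive number:

```python
# Overflow-safe evaluation of Platt's sigmoid posterior.
import math

def platt_probability(E):
    """p = 1/(1+exp(E)) computed without overflow, where E = A*g(x) + B."""
    if E >= 0:
        return math.exp(-E) / (1.0 + math.exp(-E))  # exp(-E) <= 1, safe
    return 1.0 / (1.0 + math.exp(E))                # exp(E) < 1, safe

print(platt_probability(0.0))       # 0.5
print(platt_probability(1000.0))    # ~0.0, no overflow
print(platt_probability(-1000.0))   # ~1.0, no overflow
```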
Journal Article
TL;DR: This paper describes the algorithmic implementation of multiclass kernel-based vector machines using a generalized notion of the margin to multiclass problems, and describes an efficient fixed-point algorithm for solving the reduced optimization problems and proves its convergence.
Abstract: In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin for multiclass problems. Using this notion we cast multiclass categorization problems as a constrained optimization problem with a quadratic objective function. Unlike most previous approaches, which typically decompose a multiclass problem into multiple independent binary classification tasks, our notion of margin yields a direct method for training multiclass predictors. By using the dual of the optimization problem we are able to incorporate kernels with a compact set of constraints and decompose the dual problem into multiple optimization problems of reduced size. We describe an efficient fixed-point algorithm for solving the reduced optimization problems and prove its convergence. We then discuss technical details that yield significant running time improvements for large datasets. Finally, we describe various experiments with our approach, comparing it to previously studied kernel-based methods. Our experiments indicate that for multiclass problems we attain state-of-the-art accuracy.
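The generalized margin notion can be illustrated by the multiclass hinge loss it induces (a sketch, not the paper's solver): the loss is zero only when the true-class score beats every other class's score by at least 1.

```python
# Crammer-Singer style multiclass hinge loss for one sample.
import numpy as np

def cs_hinge(scores, y):
    """scores: (n_classes,) decision values; y: true class index."""
    wrong = np.delete(scores, y)
    return max(0.0, 1.0 + wrong.max() - scores[y])

print(cs_hinge(np.array([2.5, 0.3, -1.0]), 0))   # margin satisfied -> 0.0
print(cs_hinge(np.array([0.2, 1.0, -1.0]), 0))   # violated -> 1.8
```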

2,214 citations


"Combining MLC and SVM classifiers f..." refers background in this paper

  • ...Some useful further readings can be found in [23], [24] and [25]....
