Journal ArticleDOI

Combining MLC and SVM classifiers for learning based decision making: analysis and evaluations

01 Jan 2015 - Computational Intelligence and Neuroscience (Hindawi Publishing Corporation) - Vol. 2015, pp. 423581-423581




Citations
Journal ArticleDOI


TL;DR: Experimental results show that the proposed model outperforms five other state-of-the-art video saliency detection approaches, and the proposed framework is also found useful for other video content based applications such as video highlights.
Abstract: Although research on detection of saliency and visual attention has been active over recent years, most of the existing work focuses on still images rather than video-based saliency. In this paper, a deep learning based hybrid spatiotemporal saliency feature extraction framework is proposed for saliency detection from video footage. The deep learning model is used to extract high-level features from raw video data, and these are then integrated with other high-level features. The deep learning network has been found to be far more effective at extracting hidden features than conventional handcrafted methodologies. The effectiveness of using hybrid high-level features for saliency detection in video is demonstrated in this work. Rather than using only one static image, the proposed deep learning model takes several consecutive frames as input, and both spatial and temporal characteristics are considered when computing saliency maps. The efficacy of the proposed hybrid feature framework is evaluated on five databases of complex scenes with human gaze data. Experimental results show that the proposed model outperforms five other state-of-the-art video saliency detection approaches. In addition, the proposed framework is found useful for other video content based applications such as video highlights. As a result, a large movie clip dataset together with labeled video highlights is generated.
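
As a rough illustration of the "several consecutive frames as input" idea described above, the sketch below stacks K frames along the channel axis before a small convolutional encoder that outputs a per-pixel saliency map. The layer sizes and the class name `SpatioTemporalSaliencyNet` are illustrative assumptions, not the architecture used in the cited paper.

```python
# Illustrative sketch (NOT the paper's architecture): a tiny network that
# consumes K consecutive RGB frames stacked along the channel axis and
# predicts a per-pixel saliency map, so spatial and temporal cues are
# learned jointly from raw video data.
import torch
import torch.nn as nn

class SpatioTemporalSaliencyNet(nn.Module):  # hypothetical name
    def __init__(self, num_frames: int = 5):
        super().__init__()
        in_ch = 3 * num_frames  # K stacked RGB frames
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # single-channel saliency map

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, K, 3, H, W) -> stack along channels -> (batch, 3K, H, W)
        b, k, c, h, w = frames.shape
        x = frames.reshape(b, k * c, h, w)
        return torch.sigmoid(self.head(self.encoder(x)))

# Example: a batch of 2 clips, 5 frames each, 64x64 pixels -> maps of shape (2, 1, 64, 64)
# maps = SpatioTemporalSaliencyNet()(torch.rand(2, 5, 3, 64, 64))
```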

120 citations


Cites methods from "Combining MLC and SVM classifiers f..."


Journal ArticleDOI


18 Nov 2016
TL;DR: Novel generalized deep transfer networks (DTNs) are proposed that transfer label information across heterogeneous domains, from the textual domain to the visual domain, and share labels between the two domains; they generate both domain-specific and shared interdomain features.
Abstract: In recent years, deep neural networks have been successfully applied to model visual concepts and have achieved competitive performance on many tasks. Despite their impressive performance, traditional deep networks suffer degraded performance when sufficient training data are lacking. This problem becomes extremely severe for deep networks trained on very small datasets, which overfit by capturing nonessential or noisy information in the training set. Toward this end, we propose novel generalized deep transfer networks (DTNs), capable of transferring label information across heterogeneous domains, from the textual domain to the visual domain. The proposed framework is able to substantially mitigate the problem of insufficient training images by bringing in rich labels from the textual domain. Specifically, to share labels between the two domains, we build parameter- and representation-shared layers. These are able to generate domain-specific and shared interdomain features, making the architecture flexible and powerful in jointly capturing complex information from different domains. To evaluate the proposed method, we release a new dataset extended from NUS-WIDE at http://imag.njust.edu.cn/NUS-WIDE-128.html. Experimental results on this dataset show the superior performance of the proposed DTNs compared to existing state-of-the-art methods.
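
A minimal sketch of the parameter- and representation-shared layout described above is given below, assuming a 300-dimensional text feature, a 2048-dimensional image feature, and a 128-label output; the class name `TinyDTN` and all layer sizes are illustrative assumptions, not the released DTN implementation.

```python
# Minimal sketch (assumed structure, not the released DTN code): two
# domain-specific encoders feed parameter-shared layers and a shared
# classifier, so labels learned from the textual domain can constrain
# the visual branch.
import torch
import torch.nn as nn

class TinyDTN(nn.Module):  # hypothetical name
    def __init__(self, text_dim=300, img_dim=2048, hidden=256, n_labels=128):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, hidden)    # domain-specific layer
        self.img_enc = nn.Linear(img_dim, hidden)      # domain-specific layer
        self.shared = nn.Sequential(                   # parameter-/representation-shared layers
            nn.ReLU(), nn.Linear(hidden, hidden), nn.ReLU()
        )
        self.classifier = nn.Linear(hidden, n_labels)  # shared label space

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        h = self.text_enc(x) if domain == "text" else self.img_enc(x)
        return self.classifier(self.shared(h))

# model = TinyDTN()
# text_logits  = model(torch.rand(4, 300), domain="text")    # (4, 128)
# image_logits = model(torch.rand(4, 2048), domain="image")  # (4, 128)
```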

92 citations


Cites methods from "Combining MLC and SVM classifiers f..."


Journal ArticleDOI


TL;DR: Validation tests on UCI data sets demonstrate that, for imbalanced medical data, the proposed method enhances the overall performance of the classifier while producing high accuracy in identifying both the majority and the minority class.
Abstract: Classification of class-imbalanced data has drawn significant interest in medical applications. Most existing methods are prone to categorize samples into the majority class, resulting in bias and, in particular, insufficient identification of the minority class. A novel approach, the class-weights random forest, is introduced to address this problem by assigning an individual weight to each class instead of a single weight. Validation tests on UCI data sets demonstrate that, for imbalanced medical data, the proposed method enhances the overall performance of the classifier while producing high accuracy in identifying both the majority and the minority class.
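
The per-class weighting idea can be sketched with scikit-learn's RandomForestClassifier, whose class_weight parameter accepts an individual weight for each class; the synthetic dataset and the 1:9 weights below are illustrative, and the paper's own weighting scheme may differ.

```python
# Sketch of assigning individual weights per class in a random forest,
# using scikit-learn's class_weight parameter (illustrative weights, not
# the paper's exact scheme).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced two-class problem: roughly 90% majority, 10% minority.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# One weight per class instead of a single global weight.
clf = RandomForestClassifier(n_estimators=200, class_weight={0: 1.0, 1: 9.0},
                             random_state=0)
clf.fit(X_tr, y_tr)

# Per-class precision/recall shows how well the minority class is identified.
print(classification_report(y_te, clf.predict(X_te)))
```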

72 citations


Cites methods from "Combining MLC and SVM classifiers f..."


Journal ArticleDOI


TL;DR: The conclusion is that decision-level and pixel-level fusion approaches produce comparable classification results, so either procedure can be adopted in areas with unavoidable cloud problems for updating crop inventories and acreage estimation at regional scales.
Abstract: Crop mapping becomes a daunting task in humid, tropical, or subtropical regions due to the unavailability of adequate cloud-free optical imagery. The objective of this study is to evaluate the comparative performance of decision-level and pixel-level data-fusion ensemble classified maps using Landsat 8, Landsat 7, and Sentinel-2 data. This research implements parallel and concatenation approaches to ensemble-classify the images. The multiclassifier system comprises Maximum Likelihood, Support Vector Machines, and Spectral Information Divergence as base classifiers. Decision-level fusion is achieved with a plurality voting method. Pixel-level fusion is achieved with a mosaicking approach, appending cloud-free pixels from either Sentinel-2 or Landsat 7. The comparison is based on classification accuracy. Overall accuracy results show that decision-level fusion achieved 85.4%, whereas pixel-level fusion attained 82.5%; their respective kappa coefficients of 0.84 and 0.80 are not significantly different according to a Z-test at $\alpha = 0.05$. F1-score values reveal that decision-level fusion performed better on most individual classes than pixel-level fusion. The regression coefficient between planted areas from the two approaches is 0.99. Support Vector Machines performed the best of the three classifiers. The conclusion is that both decision-level and pixel-level fusion approaches produce comparable classification results. Therefore, either procedure can be adopted in areas with unavoidable cloud problems for updating crop inventories and acreage estimation at regional scales. Future work can focus on performing more comparison tests on different areas, running tests with different multiclassifier systems, and using different imagery.
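
A generic plurality-voting rule for decision-level fusion can be sketched as follows: each base classifier (e.g. maximum likelihood, SVM, or spectral information divergence) produces a per-pixel label map, and the most frequently predicted label wins at every pixel. This is a generic vote counter for illustration, not the study's implementation.

```python
# Generic plurality-vote fusion of per-pixel label maps (a sketch, not the
# study's implementation): the class predicted by the most base classifiers
# wins at each pixel.
import numpy as np

def plurality_vote(label_maps: np.ndarray) -> np.ndarray:
    """label_maps: (n_classifiers, H, W) array of integer class labels."""
    n_classes = int(label_maps.max()) + 1
    # Count the votes per class at every pixel, then take the winning class.
    votes = np.stack([(label_maps == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Example: three classifiers labelling a 2x2 image with classes {0, 1, 2}.
maps = np.array([[[0, 1], [2, 2]],
                 [[0, 1], [1, 2]],
                 [[1, 1], [2, 0]]])
print(plurality_vote(maps))  # [[0 1]
                             #  [2 2]]
```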

11 citations


Additional excerpts


Journal ArticleDOI


TL;DR: In this article, the authors map land and aquatic vegetation of coastal areas using remote sensing for better management and conservation in many parts of the world, such as India, Australia, and New Zealand.
Abstract: Mapping land and aquatic vegetation of coastal areas using remote sensing for better management and conservation has been a long-standing interest in many parts of the world. Due to natural complex...

8 citations


Cites background, methods, or results from "Combining MLC and SVM classifiers f..."



References
Journal ArticleDOI


TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
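
LIBSVM is usually reached through one of its bindings or wrappers; for example, scikit-learn's SVC is built on top of it. The short sketch below shows training, prediction, and the probability estimates mentioned above; the dataset and the C and gamma values are illustrative, not taken from the LIBSVM paper.

```python
# Illustrative LIBSVM usage via scikit-learn's SVC wrapper (untuned
# hyperparameters; probability estimates come from Platt scaling).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
clf.fit(X_tr, y_tr)                 # solved by LIBSVM's decomposition method
print(clf.score(X_te, y_te))        # held-out accuracy
print(clf.predict_proba(X_te[:3]))  # per-class probability estimates
```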

37,868 citations


"Combining MLC and SVM classifiers f..." refers methods in this paper


Journal ArticleDOI


TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
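
For reference, the construction described above is commonly written as the soft-margin optimization below, where $\phi$ is the non-linear map into the feature space and the slack variables $\xi_i$ absorb violations in the non-separable case (textbook form, not reproduced verbatim from the paper):

```latex
% Soft-margin support-vector network, textbook form.
\min_{w,\,b,\,\xi}\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\big(w^{\top}\phi(x_i) + b\big) \ge 1 - \xi_i,\;\; \xi_i \ge 0,
\qquad f(x) = \operatorname{sign}\big(w^{\top}\phi(x) + b\big).
```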

35,157 citations


"Combining MLC and SVM classifiers f..." refers background in this paper


Journal ArticleDOI


TL;DR: Decomposition implementations for two "all-together" multiclass SVM methods are given, and it is shown that for large problems the methods that consider all data at once generally need fewer support vectors.
Abstract: Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend them for multiclass classification is still an ongoing research issue. Several methods have been proposed in which a multiclass classifier is typically constructed by combining several binary classifiers. Some authors have also proposed methods that consider all classes at once. As it is computationally more expensive to solve multiclass problems, comparisons of these methods using large-scale problems have not been seriously conducted. Especially for methods solving multiclass SVM in one step, a much larger optimization problem is required, so up to now experiments have been limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods. We then compare their performance with three methods based on binary classifications: "one-against-all," "one-against-one," and directed acyclic graph SVM (DAGSVM). Our experiments indicate that the "one-against-one" and DAG methods are more suitable for practical use than the other methods. Results also show that for large problems the methods that consider all data at once generally need fewer support vectors.
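
The binary-decomposition strategies compared in the paper can be tried with scikit-learn's generic wrappers; this is a quick sketch with default settings, not the authors' experimental setup. "One-against-one" trains k(k-1)/2 pairwise SVMs, while "one-against-all" trains k.

```python
# Quick sketch of the "one-against-one" and "one-against-all" decompositions
# using scikit-learn wrappers around a linear SVM (not the paper's setup).
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes

ovo = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)   # k(k-1)/2 = 3 binary SVMs
ova = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)  # k = 3 binary SVMs
print(len(ovo.estimators_), len(ova.estimators_))          # 3 3
print(ovo.predict(X[:2]), ova.predict(X[:2]))
```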

6,178 citations


"Combining MLC and SVM classifiers f..." refers background or methods in this paper


[...]

01 Jan 1999

4,027 citations


"Combining MLC and SVM classifiers f..." refers methods in this paper


Journal Article


TL;DR: This paper describes the algorithmic implementation of multiclass kernel-based vector machines using a generalized notion of the margin for multiclass problems, and it describes an efficient fixed-point algorithm for solving the reduced optimization problems and proves its convergence.
Abstract: In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin for multiclass problems. Using this notion we cast multiclass categorization problems as a constrained optimization problem with a quadratic objective function. Unlike most previous approaches, which typically decompose a multiclass problem into multiple independent binary classification tasks, our notion of margin yields a direct method for training multiclass predictors. By using the dual of the optimization problem we are able to incorporate kernels with a compact set of constraints and decompose the dual problem into multiple optimization problems of reduced size. We describe an efficient fixed-point algorithm for solving the reduced optimization problems and prove its convergence. We then discuss technical details that yield significant running time improvements for large datasets. Finally, we describe various experiments with our approach, comparing it to previously studied kernel-based methods. Our experiments indicate that for multiclass problems we attain state-of-the-art accuracy.
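
One standard way to write the "all-together" primal sketched above uses one weight vector $w_r$ per class and a multiclass margin constraint per training example (a commonly cited form, not reproduced verbatim from the paper):

```latex
% Multiclass kernel-based vector machine, commonly cited primal form.
\min_{\{w_r\},\,\xi}\;\; \frac{1}{2}\sum_{r=1}^{k}\|w_r\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad
w_{y_i}^{\top}\phi(x_i) - w_r^{\top}\phi(x_i) \ \ge\ 1 - \delta_{y_i,r} - \xi_i
\quad \forall i,\, r,
\qquad \hat{y}(x) = \arg\max_{r}\, w_r^{\top}\phi(x).
```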

2,120 citations


"Combining MLC and SVM classifiers f..." refers background in this paper




Trending Questions (1)
Is SVM reinforcement learning?

SVM is a supervised learning technique rather than a reinforcement learning one; however, it has recently been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process.