scispace - formally typeset

Showing papers on "Ranking SVM published in 2020"


Journal ArticleDOI
TL;DR: RBRL inherits the ranking-loss minimization advantages of Rank-SVM, thus overcoming BR's disadvantages of suffering from the class-imbalance issue and ignoring label correlations; a kernelized RBRL is also derived to achieve nonlinear multi-label classifiers.

58 citations


Journal ArticleDOI
TL;DR: This work investigates and formalizes a flexible framework consisting of two components, i.e., visual-semantic embedding and zero-shot multi-label prediction, and presents a deep regression model to project the visual features into the semantic space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors and makes label prediction possible.
Abstract: During the past decade, both multi-label learning and zero-shot learning have attracted huge research attention, and significant progress has been made. Multi-label learning algorithms aim to predict multiple labels given one instance, while most existing zero-shot learning approaches aim to predict a single testing label for each unseen class by transferring knowledge from auxiliary seen classes to target unseen classes. However, relatively little effort has been made on predicting multiple labels in the zero-shot setting, which is nevertheless a quite challenging task. In this work, we investigate and formalize a flexible framework consisting of two components, i.e., visual-semantic embedding and zero-shot multi-label prediction. First, we present a deep regression model to project the visual features into the semantic space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors and makes label prediction possible. Then, we formulate the label prediction problem as a pairwise one and employ Ranking SVM to seek the unique multi-label correlations in the embedding space. Furthermore, we provide a transductive multi-label zero-shot prediction approach that exploits the manifold structure of the testing data. We demonstrate the effectiveness of the proposed approach on three popular multi-label datasets, with state-of-the-art performance obtained in both conventional and generalized ZSL settings.
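The pairwise formulation that Ranking SVM optimizes can be sketched in a few lines. The following is a minimal subgradient-descent Ranking SVM on toy data, illustrating only the generic pairwise hinge objective, not the paper's actual implementation or embedding model:

```python
import numpy as np

def rank_svm_fit(X, pairs, C=1.0, lr=0.01, epochs=200, seed=0):
    """Minimal pairwise Ranking SVM trained by subgradient descent.

    X     : (n, d) feature matrix (e.g. semantic embeddings).
    pairs : list of (i, j) pairs meaning X[i] should rank above X[j].
    Minimizes 0.5*||w||^2 + C * sum max(0, 1 - w.(x_i - x_j)).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        grad = w.copy()                      # gradient of the L2 regularizer
        for i, j in pairs:
            diff = X[i] - X[j]
            if 1.0 - w @ diff > 0.0:         # margin violated: hinge is active
                grad -= C * diff
        w -= lr * grad
    return w

# Toy example: items with a larger first feature should rank higher.
X = np.array([[2.0, 0.1], [1.0, 0.2], [0.0, 0.3]])
pairs = [(0, 1), (1, 2), (0, 2)]
w = rank_svm_fit(X, pairs)
scores = X @ w   # higher score = higher rank
```

In the paper the pairs would come from positive/negative label assignments in the embedding space; here they are hand-written for illustration.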

33 citations


Journal ArticleDOI
TL;DR: A cost-sensitive ranking support vector machine (CSRankSVM), which modifies the loss function of the ranking SVM algorithm by adding two penalty parameters, addresses both the cost issue and the data imbalance problem in RODP methods and achieves better performance.
Abstract: Context: Ranking-oriented defect prediction (RODP) ranks software modules so that limited testing resources can be allocated to each module according to its predicted number of defects. Most RODP methods overlook two issues: incorrectly ranking a module with more defects makes it difficult to find all of that module's defects, since fewer testing resources are allocated to it, and therefore incurs much higher costs than incorrectly ranking modules with fewer defects; moreover, the numbers of defects in software modules are highly imbalanced in defective software datasets. Cost-sensitive learning is an effective technique for handling the cost issue and the data imbalance problem in software defect prediction, but its effectiveness has not been investigated in RODP models. Aims: In this article, we propose a cost-sensitive ranking support vector machine (CSRankSVM) algorithm to improve the performance of RODP models. Method: CSRankSVM modifies the loss function of the ranking SVM algorithm by adding two penalty parameters to address both the cost issue and the data imbalance problem. Additionally, the loss function of CSRankSVM is optimized using a genetic algorithm. Results: The experimental results on 11 project datasets with 41 releases show that CSRankSVM achieves 1.12%–15.68% higher average fault percentile average (FPA) values than five existing RODP methods (i.e., decision tree regression, linear regression, Bayesian ridge regression, ranking SVM, and learning-to-rank (LTR)) and 1.08%–15.74% higher average FPA values than four data imbalance learning methods (i.e., random undersampling and the synthetic minority oversampling technique, two data resampling methods; RankBoost, an ensemble learning method; and IRSVM, a cost-sensitive ranking SVM method for information retrieval). Conclusion: CSRankSVM is capable of handling the cost issue and data imbalance problem in RODP methods and achieves better performance.
Therefore, CSRankSVM is recommended as an effective method for RODP.
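The core idea of weighting pairwise ranking violations by cost can be sketched as follows. This is an illustrative cost-sensitive pairwise hinge loss with a hypothetical median-defect-count threshold; the paper's actual penalty parameters and genetic-algorithm optimization are not reproduced here:

```python
import numpy as np

def cs_ranksvm_loss(w, X, y, c_high=2.0, c_low=1.0):
    """Cost-sensitive pairwise hinge loss (sketch of the CSRankSVM idea).

    X : (n, d) module features; y : (n,) defect counts.
    Every pair (i, j) with y[i] > y[j] should satisfy w.x_i >= w.x_j + 1.
    Violations involving a high-defect module (here: above the median,
    a hypothetical threshold) are penalized with c_high, others with c_low.
    """
    scores = X @ w
    thresh = np.median(y)
    loss = 0.0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                slack = max(0.0, 1.0 - (scores[i] - scores[j]))
                cost = c_high if y[i] > thresh else c_low
                loss += cost * slack
    return loss

X = np.array([[2.0], [1.0], [0.0]])   # toy module features
y = np.array([5, 2, 0])               # defect counts
loss = cs_ranksvm_loss(np.array([2.0]), X, y)   # perfect ordering -> 0.0
```

Mis-ranking the five-defect module is charged twice as heavily as mis-ranking the two-defect module, which is the cost asymmetry the paper exploits.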

27 citations


Journal ArticleDOI
TL;DR: A new automatic summarization model for news text based on fuzzy logic rules, multiple features, and a genetic algorithm is introduced; it outperforms other methods including Msword, System19, System21, System31, SDS-NNGA, GCD, SOM, and Ranking SVM.
Abstract: Over the last 70 years, automatic text summarization has become increasingly important because the amount of data on the Internet is growing rapidly, and automatic summarization can extract the useful information and knowledge that users need in a form that is easy for people to handle and reuse. News text in particular is the type of text most people encounter in daily life. In this study, a new automatic summarization model for news text based on fuzzy logic rules, multiple features, and a genetic algorithm (GA) is introduced. First, word features are extracted: each word is scored, and words exceeding a preset score are treated as keywords; because news text contains specific elements such as times, places, and people, these special news elements can sometimes be extracted directly as keywords. Second, sentence features are computed: a linear combination of these features indicates the importance of each sentence, with each feature weighted by the genetic algorithm. Finally, a fuzzy logic system computes the final score of each sentence to produce the summary. Evaluated with the ROUGE assessment method on the DUC2002 dataset, the proposed method outperforms Msword, System19, System21, System31, SDS-NNGA, GCD, SOM, and Ranking SVM.
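The sentence-scoring step described above, a weighted linear combination of per-sentence features followed by top-k extraction, can be sketched as below. The feature names and fixed weights are illustrative placeholders; in the paper the weights are tuned by the genetic algorithm and the final score passes through a fuzzy logic system:

```python
import numpy as np

def score_sentences(features, weights):
    """Linear combination of per-sentence features (in the paper the
    weights are learned by a genetic algorithm; fixed here for illustration)."""
    return np.asarray(features) @ np.asarray(weights)

def extract_summary(sentences, features, weights, k=2):
    """Pick the k highest-scoring sentences, kept in original document order."""
    scores = score_sentences(features, weights)
    top = np.argsort(scores)[::-1][:k]
    return [sentences[i] for i in sorted(top)]

sentences = ["Quake hits city.", "Weather was mild.", "Rescue teams arrive."]
# hypothetical features per sentence: (keyword overlap, position, length)
features = [[0.9, 1.0, 0.5], [0.2, 0.5, 0.4], [0.8, 0.1, 0.9]]
weights = [0.5, 0.3, 0.2]
summary = extract_summary(sentences, features, weights, k=2)
```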

14 citations


Journal ArticleDOI
TL;DR: In this article, a deterministic model for the selection and ranking of commercial off-the-shelf (COTS) components is developed on the basis of fuzzy modified distance-based approach (FMDBA).
Abstract: In this article, a deterministic model for the selection and ranking of commercial off-the-shelf (COTS) components is developed on the basis of the fuzzy modified distance-based approach (FMDBA). The COTS selection and ranking problem is modeled as a multicriteria decision-making problem due to the involvement of multiple ranking criteria such as functionality, reliability, and compatibility. FMDBA combines fuzzy set theory with the modified distance-based approach. To show the working of the developed ranking model, a case study of an e-payment system is demonstrated, involving the selection and ranking of eight COTS components across four major categories of ranking criteria. FMDBA provides a comprehensive ranking of the components based on their calculated composite distance values. To demonstrate the applicability of the FMDBA method, the results obtained are also compared with existing decision-making methodologies.
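The composite-distance ranking step can be illustrated with a plain (non-fuzzy) distance-based sketch: each alternative is scored per criterion, and alternatives closer to the per-criterion ideal point rank higher. This simplification omits the fuzzy memberships and the specific distance modification that define FMDBA:

```python
import numpy as np

def composite_distance_rank(scores):
    """Rank alternatives by Euclidean distance to the ideal point
    (best value per criterion); a smaller composite distance is better.
    Criteria are assumed benefit-type (larger is better)."""
    scores = np.asarray(scores, dtype=float)
    ideal = scores.max(axis=0)              # best achievable per criterion
    d = np.linalg.norm(scores - ideal, axis=1)
    order = np.argsort(d)                   # indices from best to worst
    return d, order

# three hypothetical COTS components scored on two criteria
d, order = composite_distance_rank([[0.9, 0.8], [0.5, 0.4], [0.7, 0.9]])
```

Cost-type criteria (smaller is better) would be negated or min-referenced before computing distances.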

6 citations


Journal ArticleDOI
TL;DR: A cosine-based latent factor model learns implicit features and optimizes a corresponding surrogate ranking loss; comprehensive evaluation on three benchmark datasets shows considerable improvement of the proposed model on ranking metrics.
Abstract: The purpose of this paper is to propose a novel latent factor model that generates a ranked recommendation list of items based on users' prior interactions with an e-commerce platform. The ranking of items in the recommendation list is formulated as an optimization problem over ranking metrics. The latent features of users and items are learnt using a cosine-based latent factor model and are in turn used to learn the ranking; a corresponding surrogate ranking loss function is optimized. Comprehensive evaluation on three benchmark datasets shows considerable improvement of the proposed model on ranking metrics.
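The scoring-and-ranking step of a cosine-based latent factor model can be sketched as follows. The latent vectors here are hand-written stand-ins; in the paper they would be learnt by optimizing the surrogate ranking loss:

```python
import numpy as np

def cosine_scores(user_vec, item_mat):
    """Score every item for one user by cosine similarity of latent factors."""
    u = user_vec / np.linalg.norm(user_vec)
    V = item_mat / np.linalg.norm(item_mat, axis=1, keepdims=True)
    return V @ u

def ranked_list(user_vec, item_mat):
    """Item indices ordered from most to least recommended."""
    return np.argsort(cosine_scores(user_vec, item_mat))[::-1]

user = np.array([1.0, 0.0])                       # hypothetical user factors
items = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hypothetical item factors
ranking = ranked_list(user, items)
```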

3 citations


Posted Content
TL;DR: A deep Siamese network with rank SVM loss function, called Deep Rank SVM (DRSVM), is introduced in order to decide which one of a pair of images has a stronger presence of a specific attribute.
Abstract: Relative attributes indicate the strength of a particular attribute between image pairs. We introduce a deep Siamese network with a rank SVM loss function, called Deep Rank SVM (DRSVM), in order to decide which one of a pair of images has a stronger presence of a specific attribute. The network is trained in an end-to-end fashion to jointly learn the visual features and the ranking function. We demonstrate the effectiveness of our approach against state-of-the-art methods on four image benchmark datasets: LFW-10, PubFig, UTZap50K-lexi, and UTZap50K-2. DRSVM surpasses the state of the art in terms of average accuracy across attributes on three of the four datasets.
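The rank SVM loss that the Siamese branches feed into can be sketched per image pair. Here a linear scorer stands in for the shared deep network, so this shows only the loss shape, not the paper's architecture:

```python
import numpy as np

def siamese_rank_hinge(w, x_a, x_b, margin=1.0):
    """Hinge-style rank loss for one image pair: image a is labelled as
    having the stronger attribute, so its score should exceed b's by the
    margin. The same scorer w is applied to both inputs, mirroring the
    weight sharing of the two Siamese branches."""
    return max(0.0, margin - (w @ x_a - w @ x_b))

w = np.array([1.0, 0.0])                       # stand-in for the shared branch
loss_ok = siamese_rank_hinge(w, np.array([3.0, 0.0]), np.array([1.0, 0.0]))
loss_bad = siamese_rank_hinge(w, np.array([1.0, 0.0]), np.array([3.0, 0.0]))
```

When the labelled-stronger image already scores higher by the margin the loss is zero; a reversed pair is penalized in proportion to how wrong the ordering is.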

2 citations


Proceedings ArticleDOI
27 Nov 2020
TL;DR: This paper utilizes FS-SCPR as a preprocessor for determining discriminative and useful features and employs Ranking SVM to derive a ranking model for document retrieval with the selected features.
Abstract: In this paper, a graph-based feature selection method for learning to rank, called FS-SCPR, is proposed. FS-SCPR models feature relationships as a graph and selects a subset of features that have minimum redundancy with each other and maximum relevance to the ranking problem. To minimize redundancy, FS-SCPR discards redundant features, namely those grouped into the same cluster. To maximize relevance, FS-SCPR greedily collects from each cluster a representative feature with high relevance to the ranking problem. This paper utilizes FS-SCPR as a preprocessor to determine discriminative and useful features and employs Ranking SVM to derive a ranking model for document retrieval with the selected features. The proposed approach is evaluated on the LETOR datasets and is found to perform competitively when compared with another feature selection method, GAS-E.
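The cluster-then-pick-representative idea can be sketched with a greedy correlation-based simplification. FS-SCPR itself builds a feature graph and uses clustering plus a relevance score; the threshold-based grouping below is only an illustration of the minimum-redundancy, maximum-relevance selection step:

```python
import numpy as np

def select_features(X, y, corr_thresh=0.9):
    """Greedy sketch of redundancy-aware selection: features whose pairwise
    |correlation| exceeds corr_thresh are treated as one cluster; from each
    cluster, keep the feature most correlated with the relevance signal y."""
    n_feat = X.shape[1]
    feat_corr = np.corrcoef(X, rowvar=False)       # feature-feature similarity
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    assigned, selected = set(), []
    for j in np.argsort(rel)[::-1]:                # most relevant first
        if j in assigned:
            continue
        cluster = [k for k in range(n_feat)
                   if abs(feat_corr[j, k]) > corr_thresh and k not in assigned]
        assigned.update(cluster)
        selected.append(int(j))                    # j: max relevance in cluster
    return selected
```

With two nearly duplicate features and one independent one, only one copy of the duplicated pair survives alongside the independent feature.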

2 citations


Journal ArticleDOI
TL;DR: A manifold embedding algorithm is designed to automatically translate image-level text semantic labels into pixel-level image regions, together with a three-level spatial pyramid model that extracts both local and global object features from training images.

1 citation


Book ChapterDOI
01 Jan 2020
TL;DR: Deep-RankSVM, a deep Siamese network with a rank SVM loss function, decides which one of a pair of images has a stronger presence of a specific attribute.
Abstract: Relative attributes indicate the strength of a particular attribute between image pairs. We introduce a deep Siamese network with rank SVM loss function, called Deep-RankSVM, that can decide which one of a pair of images has a stronger presence of a specific attribute. The network is trained in an end-to-end fashion to jointly learn the visual features and the ranking function. The trained network for an attribute can predict the relative strength of that attribute in novel images.