Topic
Multiple kernel learning
About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.
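The core construction behind the topic can be sketched in a few lines: a combined kernel is a convex combination of base Gram matrices. Below is a minimal illustration with fixed, uniform weights (real MKL algorithms learn the weights from data; the RBF bandwidths are arbitrary choices for this sketch):

```python
import numpy as np

def rbf_gram(X, gamma):
    # Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combined_kernel(X, gammas, weights):
    # Convex combination K = sum_m beta_m * K_m with beta_m >= 0 and
    # sum_m beta_m = 1; any such combination of valid kernels is again
    # a valid (symmetric, PSD) kernel.
    beta = np.asarray(weights, dtype=float)
    beta = beta / beta.sum()
    return sum(b * rbf_gram(X, g) for b, g in zip(beta, gammas))

X = np.random.RandomState(0).randn(5, 3)
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[1.0, 1.0, 1.0])
# K is symmetric PSD with unit diagonal, like each RBF base kernel
```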
Papers
TL;DR: A novel method, task-dependent multi-task multiple kernel learning (TD-MTMKL), jointly detects the absence and presence of multiple AUs, capturing commonalities and adapting to variations among co-occurring AUs.
37 citations
01 Sep 2014
TL;DR: This study inspects a spectrum of social network theories to systematically model the multiple facets of a social network and infer user preferences, and shows that the proposed approach provides more accurate recommendations than trust-based methods and the collaborative filtering approach.
Abstract: Recommender systems are a critical component of e-commerce websites. The rapid development of online social networking services provides an opportunity to explore social networks together with information used in traditional recommender systems, such as customer demographics, product characteristics, and transactions. It also provides more applications for recommender systems. To tackle this social network-based recommendation problem, previous studies generally built trust models in light of social influence theory. This study inspects a spectrum of social network theories to systematically model the multiple facets of a social network and infer user preferences. In order to effectively make use of these heterogeneous theories, we adopt a kernel-based machine learning paradigm: we design and select kernels describing individual similarities according to social network theories, and employ a non-linear multiple kernel learning algorithm to combine the kernels into a unified model. This design also enables us to consider interactions among the theories when assessing individual behaviors. We evaluate our proposed approach on a real-world movie review data set. The experiments show that our approach provides more accurate recommendations than trust-based methods and the collaborative filtering approach. Further analysis shows that kernels derived from contagion theory and homophily theory contribute the largest portion of the model.
37 citations
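One simple way to combine theory-derived similarity kernels non-linearly, in the spirit of the non-linear MKL described above, is an elementwise (Hadamard) product, which stays positive semidefinite by the Schur product theorem. The matrices below are random stand-ins for kernels derived from, e.g., homophily and contagion theory:

```python
import numpy as np

rng = np.random.RandomState(1)
n = 6

def random_psd(n):
    # Random symmetric PSD matrix standing in for a similarity kernel.
    A = rng.randn(n, n)
    return A @ A.T

K_homophily = random_psd(n)   # stand-in: homophily-based similarity
K_contagion = random_psd(n)   # stand-in: contagion-based similarity

# Hadamard (elementwise) product: a non-linear combination that is PSD
# whenever both factors are PSD (Schur product theorem).
K_combined = K_homophily * K_contagion
```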
TL;DR: Experimental results on several hyperspectral images demonstrate the effectiveness of the proposed MKL method in terms of classification performance and computation efficiency.
Abstract: In hyperspectral images, band selection plays a crucial role in land-cover classification. Multiple kernel learning (MKL) is a popular approach that selects relevant features and classifies the images simultaneously. Unfortunately, the large number of spectral bands in hyperspectral images results in an excessive number of kernels, which limits the application of MKL. To address this problem, a novel MKL method based on discriminative kernel clustering (DKC) is proposed. In the proposed method, a discriminative variant of kernel alignment (KA), termed discriminative kernel alignment (DKA), is defined. Traditional KA measures kernel similarity independently of the current classification task; in contrast, DKA measures the similarity of discriminative information by comparing intraclass and interclass similarities. It can evaluate both kernel redundancy and kernel synergy for classification. Then, DKA-based affinity-propagation clustering is devised to reduce the kernel scale and retain the kernels with high discrimination and low redundancy for classification. Additionally, an analysis of the necessity of DKC for hyperspectral band selection is provided via empirical Rademacher complexity. Experimental results on several hyperspectral images demonstrate the effectiveness of the proposed band selection method in terms of classification performance and computational efficiency.
37 citations
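The kernel-alignment measure the method above builds on can be sketched as the plain, uncentered alignment between two Gram matrices (the paper's discriminative variant additionally contrasts intraclass and interclass similarity, which is not reproduced here; the toy kernels are illustrative):

```python
import numpy as np

def kernel_alignment(K1, K2):
    # A(K1, K2) = <K1, K2>_F / (||K1||_F * ||K2||_F); lies in [0, 1]
    # for PSD Gram matrices, and equals 1 iff K1 is a positive multiple of K2.
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

rng = np.random.RandomState(0)
X = rng.randn(6, 2)
K_lin = X @ X.T                                   # linear kernel
sq = np.sum(X**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
K_rbf = np.exp(-0.5 * d2)                         # RBF kernel, gamma = 0.5
a = kernel_alignment(K_lin, K_rbf)
# a measures how similarly the two kernels score pairs of points
```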
TL;DR: In this paper, a semi-infinite linear programming (SILP)-based multiple-kernel learning (MKL) method was proposed for ELM, in which the kernel function can be automatically learned as a combination of multiple kernels.
Abstract: The extreme learning machine (ELM) is a method for training single-hidden-layer feed-forward networks with a much simpler training procedure. While conventional kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. In this paper, we address multiple-kernel learning (MKL) for ELM by formulating it as a semi-infinite linear program, and further extend this idea by integrating techniques from MKL. The kernel function in this ELM formulation no longer needs to be fixed; it can be automatically learned as a combination of multiple kernels. Two formulations of multiple-kernel classifiers are proposed. The first is based on a convex combination of the given base kernels, while the second uses a convex combination of so-called equivalent kernels. Empirically, the second formulation is particularly competitive. Experiments on a large number of both toy and real-world data sets (including a high-magnification-sampling-rate image data set) show that the resultant classifier is fast and accurate and can be easily trained by solving linear programs.
36 citations
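The ingredient of the first formulation above, a classifier built on a convex combination of given base kernels, can be sketched with kernel ridge regression standing in for the ELM/SILP machinery (the kernel weights are fixed here, whereas the paper learns them; the data, bandwidths, and function names are illustrative):

```python
import numpy as np

def rbf(X, Z, gamma):
    # Cross Gram matrix of the RBF kernel between rows of X and rows of Z.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gammas, beta, lam=1e-3):
    # Convex combination of base kernels, then kernel ridge regression.
    beta = np.asarray(beta, dtype=float)
    beta = beta / beta.sum()
    Ktr = sum(b * rbf(Xtr, Xtr, g) for b, g in zip(beta, gammas))
    Kte = sum(b * rbf(Xte, Xtr, g) for b, g in zip(beta, gammas))
    alpha = np.linalg.solve(Ktr + lam * np.eye(len(ytr)), ytr)
    return np.sign(Kte @ alpha)

# XOR: not separable by a linear kernel, easy for the RBF combination.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
pred = fit_predict(X, y, X, gammas=[0.5, 2.0], beta=[1.0, 1.0])
# pred reproduces the XOR training labels exactly
```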
TL;DR: This paper addresses multiple kernel learning for LS-SVM by formulating it as a semidefinite program (SDP) and shows that the regularization parameter can be optimized in a unified framework together with the kernel, leading to an automatic model selection process.
36 citations