
Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Proceedings Article
17 Nov 2011
TL;DR: This paper proposes an effective procedure to identify relative outliers from the target dataset with respect to another reference dataset of normal data, and presents a novel learning framework to learn a relative outlier detector.
Abstract: Outliers usually spread across regions of low density. However, due to the absence or scarcity of outliers, designing a robust detector to sift outliers from a given dataset is still very challenging. In this paper, we consider identifying relative outliers from the target dataset with respect to another reference dataset of normal data. Particularly, we employ Maximum Mean Discrepancy (MMD) for matching the distribution between these two datasets and present a novel learning framework to learn a relative outlier detector. The learning task is formulated as a Mixed Integer Programming (MIP) problem, which is computationally hard. To this end, we propose an effective procedure to find a largely violated labeling vector for identifying relative outliers from abundant normal patterns, and its convergence is also presented. Then, a set of largely violated labeling vectors are combined by multiple kernel learning methods to robustly locate relative outliers. Comprehensive empirical studies on real-world datasets verify that our proposed relative outlier detection outperforms existing methods.
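A minimal sketch of the distribution-matching quantity used above: a biased empirical estimate of the squared Maximum Mean Discrepancy between a target sample and a reference sample of normal data, assuming an RBF kernel (the function names and the bandwidth gamma are illustrative, not taken from the paper).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between the rows of X and Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(X_target, X_reference, gamma=1.0):
    """Biased empirical estimate of squared MMD between two samples."""
    k_tt = rbf_kernel(X_target, X_target, gamma)
    k_rr = rbf_kernel(X_reference, X_reference, gamma)
    k_tr = rbf_kernel(X_target, X_reference, gamma)
    return k_tt.mean() + k_rr.mean() - 2.0 * k_tr.mean()

# A large value suggests the target sample deviates from the reference
# distribution, which is the signal a relative outlier detector builds on.
```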

7 citations

Journal ArticleDOI
TL;DR: A novel bi-sparse optimization-based least squares regression (BSOLSR) method is proposed in the framework of LSSVR, based on the new row and column kernel matrices, and the l0-norm sparsification function is introduced to the LSSVR model.

7 citations

Proceedings ArticleDOI
25 Jul 2019
TL;DR: This paper presents a robust multiple kernel learning algorithm for predicting points failures that takes into account the missing pattern of data as well as the inherent variance on different sets of railway points.
Abstract: Railway points are among the key components of railway infrastructure. As a part of signal equipment, points control the routes of trains at railway junctions, having a significant impact on the reliability, capacity, and punctuality of rail transport. Meanwhile, they are also one of the most fragile parts of railway systems. Points failures cause a large portion of railway incidents. Traditionally, maintenance of points is based on a fixed time interval or raised after equipment failures. Instead, it would be of great value if we could forecast points failures and take action beforehand, minimising any negative effect. To date, most existing prediction methods are either lab-based or rely on specially installed sensors, which makes them infeasible for large-scale implementation. Besides, they often use data from only one source. We therefore explore a new way that integrates readily available multi-source data to fulfil this task. We conducted our case study on the Sydney Trains rail network, which is an extensive network of passenger and freight railways. Unfortunately, the real-world data are usually incomplete due to various reasons, e.g., faults in the database, operational errors or transmission faults. Besides, railway points differ in their locations, types and some other properties, which means it is hard to use a unified model to predict their failures. Aiming at this challenging task, we first constructed a dataset from multiple sources and selected key features with the help of domain experts. In this paper, we formulate our prediction task as a multiple kernel learning problem with missing kernels. We present a robust multiple kernel learning algorithm for predicting points failures. Our model takes into account the missing pattern of data as well as the inherent variance on different sets of railway points. Extensive experiments demonstrate the superiority of our algorithm compared with other state-of-the-art methods.
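To make the multi-source multiple kernel learning setup concrete, here is a small sketch (not the paper's algorithm) of combining precomputed base kernels built from different data sources into a single kernel for a standard SVM. The synthetic data, kernel choices and fixed weights are purely illustrative; the paper's handling of missing kernels and learned kernel weights is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def combine_kernels(kernel_mats, weights):
    """Convex combination of precomputed base kernel matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise onto the simplex
    return sum(wi * K for wi, K in zip(w, kernel_mats))

rng = np.random.default_rng(0)
n = 200
# Hypothetical features from two sources (e.g. equipment logs, weather records).
X1 = rng.normal(size=(n, 5))
X2 = rng.normal(size=(n, 3))
y = rng.integers(0, 2, size=n)          # synthetic failure / no-failure labels

K1 = X1 @ X1.T                          # linear kernel on source 1
sq = np.sum((X2[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
K2 = np.exp(-0.5 * sq)                  # RBF kernel on source 2

K = combine_kernels([K1, K2], weights=[0.6, 0.4])
clf = SVC(kernel="precomputed").fit(K, y)   # MKL proper would learn the weights
```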

7 citations

Journal ArticleDOI
03 Aug 2020
TL;DR: Compared to the single kernel and ANN-based techniques, the use of multiple kernel support vector machines benefits from a higher degree of correctness and generalization ability for prediction of the wear rate of grinding media.
Abstract: In this study, we investigate the application of three powerful kernel-based supervised learning algorithms to develop a global model of the wear rate of grinding media based on input factors such as pH, solid percentage, throughput, charge weight of balls, rotation speed of mill and grinding time. It is found that there is a trade-off between the training and testing error when a single kernel function is used, and therefore these methods cannot provide adequate generalization capability. However, this problem is solved by utilizing the multiple kernel learning framework for support vector machines, in which the kernel function is expressed as a combination of basis kernel functions. It is found that, compared to the single kernel and ANN-based techniques, the use of multiple kernel support vector machines benefits from a higher degree of correctness and generalization ability for prediction of the wear rate of grinding media. Meanwhile, the findings indicate that in this setting, R2 values of 0.99417 and 0.993 are achieved for the training and testing datasets, respectively.
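As a rough illustration of expressing the kernel as a combination of basis kernel functions, the sketch below fits a support vector regressor with a fixed mixture of an RBF and a polynomial kernel. The synthetic features, mixture weights and hyperparameters are assumptions for illustration only, not values from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(X, Y):
    """Fixed convex combination of two basis kernels (weights are illustrative)."""
    return 0.7 * rbf_kernel(X, Y, gamma=0.5) + 0.3 * polynomial_kernel(X, Y, degree=2)

rng = np.random.default_rng(1)
# Synthetic stand-ins for process factors (pH, solid %, throughput, ...).
X = rng.normal(size=(120, 6))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=120)

model = SVR(kernel=combined_kernel, C=10.0).fit(X, y)
print("training R^2:", model.score(X, y))
```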

7 citations

Journal ArticleDOI
TL;DR: In this paper, a fully distributed online learning framework with multiple kernels (DOMKL) is proposed, which can learn a common function having a diminishing gap from the best function in hindsight.
Abstract: We consider the problem of learning a nonlinear function over a network of learners in a fully decentralized fashion. Online learning is additionally assumed, where every learner receives continuous streaming data locally. This learning model is called fully distributed online learning (or fully decentralized online federated learning). For this model, we propose a novel learning framework with multiple kernels, which is named DOMKL. The proposed DOMKL is devised by harnessing the principles of an online alternating direction method of multipliers and a distributed Hedge algorithm. We theoretically prove that DOMKL over T time slots can achieve an optimal sublinear regret O(√T), implying that every learner in the network can learn a common function having a diminishing gap from the best function in hindsight. Our analysis also reveals that DOMKL yields the same asymptotic performance as the state-of-the-art centralized approach while keeping local data at edge learners. Via numerical tests with real datasets, we demonstrate the effectiveness of the proposed DOMKL on various online regression and time-series prediction tasks.
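The Hedge component referenced above admits a compact sketch: per-kernel predictors are exponentially re-weighted according to their instantaneous losses as data stream in. The code below is a generic multiplicative-weights update under an assumed squared loss, not the full DOMKL algorithm (the ADMM-based consensus step across learners is omitted, and all names are illustrative).

```python
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """Multiplicative-weights (Hedge) step: kernels whose predictors
    incurred larger loss on the latest sample are down-weighted."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

rng = np.random.default_rng(0)
n_kernels = 3
weights = np.full(n_kernels, 1.0 / n_kernels)

for t in range(100):                      # streaming rounds
    preds = rng.normal(size=n_kernels)    # per-kernel predictions (synthetic)
    target = 0.0                          # observed label at round t (synthetic)
    combined = weights @ preds            # weighted ensemble prediction
    losses = (preds - target) ** 2        # per-kernel squared losses
    weights = hedge_update(weights, losses)
```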

7 citations


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
89% related
Deep learning
79.8K papers, 2.1M citations
89% related
Feature extraction
111.8K papers, 2.1M citations
87% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114