Open Access Proceedings Article

A Comparative Study of Methods for Transductive Transfer Learning

TL;DR: A novel maximum-entropy-based technique, iterative feature transformation (IFT), is introduced, and it is shown how simple relaxations, such as providing additional information like the proportion of positive examples in the test data, can significantly improve the performance of some of the transductive transfer learners.
Abstract: 
The problem of transfer learning, where information gained in one learning task is used to improve performance in another related task, is an important new area of research. While previous work has studied the supervised version of this problem, we study the more challenging case of unsupervised transductive transfer learning, where no labeled data from the target domain are available at training time. We describe some current state-of-the-art inductive and transductive approaches and then adapt these models to the problem of transfer learning for protein name extraction. In the process, we introduce a novel maximum-entropy-based technique, iterative feature transformation (IFT), and show that it achieves performance comparable to state-of-the-art transductive SVMs. We also show how simple relaxations, such as providing additional information like the proportion of positive examples in the test data, can significantly improve the performance of some of the transductive transfer learners.
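
The class-proportion relaxation mentioned above lends itself to a short illustration. The following is a minimal sketch, not the authors' IFT procedure: it assumes scikit-learn and synthetic data, and simply thresholds a source-trained classifier's scores so that the fraction of unlabeled target examples predicted positive matches a known proportion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def prior_constrained_predict(clf, X_target, pos_fraction):
    """Label target examples so that roughly `pos_fraction` of them
    come out positive (the known class proportion acts as a constraint)."""
    scores = clf.predict_proba(X_target)[:, 1]
    cutoff = np.quantile(scores, 1.0 - pos_fraction)  # score at the prior boundary
    return (scores >= cutoff).astype(int)

# Illustrative usage: train on a source domain, predict on a shifted
# target domain for which no labels are available.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 5))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(loc=0.3, size=(100, 5))  # covariate-shifted target
clf = LogisticRegression().fit(X_src, y_src)
y_tgt = prior_constrained_predict(clf, X_tgt, pos_fraction=0.2)
print(y_tgt.mean())  # close to 0.2 by construction
```

In a transductive learner such as a TSVM, the same proportion can instead be supplied as a balance constraint during training rather than applied to the scores after the fact.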



Citations
Journal Article

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Proceedings Article

Transfer Feature Learning with Joint Distribution Adaptation

TL;DR: JDA aims to jointly adapt both the marginal and conditional distributions in a principled dimensionality reduction procedure, constructing a new feature representation that is effective and robust under substantial distribution differences.
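
For readers interested in the mechanics, here is a compact numpy sketch of one JDA-style iteration via its generalized eigenproblem formulation. The subspace size, regularizer, jitter term, and pseudo-labeling scheme are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def jda_transform(Xs, ys, Xt, yt_pseudo, k=20, lam=1.0):
    """Project source and target samples into a shared k-dim subspace that
    shrinks both the marginal and (pseudo-labeled) conditional MMD distances."""
    X = np.vstack([Xs, Xt]).T                       # d x n, samples as columns
    ns, nt = Xs.shape[0], Xt.shape[0]
    n = ns + nt

    # Marginal MMD matrix M0
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])[:, None]
    M = e @ e.T
    # Conditional MMD matrices, one per class, built from target pseudo-labels
    for c in np.unique(ys):
        ec = np.zeros((n, 1))
        s_idx = np.flatnonzero(ys == c)
        t_idx = ns + np.flatnonzero(yt_pseudo == c)
        if len(s_idx) and len(t_idx):
            ec[s_idx, 0] = 1.0 / len(s_idx)
            ec[t_idx, 0] = -1.0 / len(t_idx)
            M += ec @ ec.T
    M /= np.linalg.norm(M, "fro")

    H = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    d = X.shape[0]
    A = X @ M @ X.T + lam * np.eye(d)               # MMD term + regularizer
    B = X @ H @ X.T + 1e-6 * np.eye(d)              # variance term (jittered for PD)
    _, V = eigh(A, B)                               # eigenvalues ascending
    W = V[:, :k]                                    # k smallest -> least MMD
    Z = W.T @ X
    return Z[:, :ns].T, Z[:, ns:].T                 # projected source, target
```

The full method iterates this step: a classifier trained on the projected source data re-labels the target, and the conditional MMD terms are rebuilt from the refreshed pseudo-labels.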
Book Chapter

Mining Text Data

TL;DR: Mining Text Data addresses an important niche in the text analytics field; it is an edited volume contributed by leading international researchers and practitioners, with a focus on social networks and data mining.
Journal Article

Transfer learning using computational intelligence

TL;DR: This paper systematically examines computational intelligence-based transfer learning techniques and clusters related developments into four main categories, providing state-of-the-art knowledge that directly supports researchers and practitioners in understanding developments in computational intelligence-based transfer learning research and applications.
Journal Article

Transfer learning for activity recognition: a survey

TL;DR: The literature is surveyed to highlight recent advances in transfer learning for activity recognition, and existing approaches to transfer-based activity recognition are characterized by sensor modality, by differences between source and target environments, by data availability, and by the type of information transferred.
References

Book

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Proceedings Article

A comparison of event models for naive bayes text classification

TL;DR: It is found that the multi-variate Bernoulli model performs well with small vocabulary sizes, but that the multinomial model usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size.
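
The vocabulary-size effect described here is straightforward to reproduce in outline. A minimal sketch, assuming scikit-learn and the 20 Newsgroups corpus, both illustrative stand-ins rather than the paper's exact setup:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# Two-class subset; the categories are an arbitrary illustrative choice.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
docs_tr, docs_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

for vocab_size in (100, 1000, 10000):
    vec = CountVectorizer(max_features=vocab_size)
    X_tr, X_te = vec.fit_transform(docs_tr), vec.transform(docs_te)
    for model in (BernoulliNB(), MultinomialNB()):  # BernoulliNB binarizes counts itself
        acc = model.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"vocab={vocab_size:5d}  {type(model).__name__:13s} acc={acc:.3f}")
```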
Journal Article

A maximum entropy approach to natural language processing

TL;DR: A maximum-likelihood approach for automatically constructing maximum entropy models is presented, together with an efficient implementation, illustrated on several problems in natural language processing.
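
Under maximum-likelihood training, a conditional maximum entropy model coincides with multinomial logistic regression, so a modern solver can stand in for the iterative scaling procedures of the era. A minimal sketch, with a corpus and labels invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy documents and labels (assumed, not from the paper).
docs = ["the protein p53 binds dna",
        "buy cheap tickets now",
        "kinase activity of the enzyme",
        "cheap flights to paris"]
labels = ["bio", "other", "bio", "other"]

X = CountVectorizer().fit_transform(docs)   # indicator/count features f(x, y)
maxent = LogisticRegression(max_iter=1000)  # L-BFGS maximizes the conditional log-likelihood
maxent.fit(X, labels)
print(maxent.predict_proba(X))              # p(y | x) takes the exponential maxent form
```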
Journal Article

Text Classification from Labeled and Unlabeled Documents using EM

TL;DR: This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents, and presents two extensions to the EM algorithm that improve classification accuracy under these conditions.
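
A simplified reading of the approach is EM wrapped around a multinomial Naive Bayes classifier: the E-step soft-labels the unlabeled pool, and the M-step refits on labeled and soft-labeled data together. The sketch below assumes scikit-learn and sparse count matrices (e.g. from CountVectorizer), and omits the paper's two extensions:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(X_lab, y_lab, X_unl, n_iter=10):
    """Semi-supervised Naive Bayes: fit on labeled docs, then alternate
    E-steps (soft-label the unlabeled pool) and M-steps (refit on both)."""
    nb = MultinomialNB().fit(X_lab, y_lab)
    classes = nb.classes_
    n_unl = X_unl.shape[0]
    for _ in range(n_iter):
        probs = nb.predict_proba(X_unl)  # E-step; columns follow nb.classes_
        # M-step: replicate unlabeled docs once per class, weighted by posterior
        X_all = vstack([X_lab] + [X_unl] * len(classes))
        y_all = np.concatenate([y_lab] + [np.full(n_unl, c) for c in classes])
        w_all = np.concatenate([np.ones(len(y_lab))] +
                               [probs[:, i] for i in range(len(classes))])
        nb = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)
    return nb
```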