Open Access Proceedings Article
Cross-Language Text Classification Using Structural Correspondence Learning
Peter Prettenhofer, Benno Stein
pp. 1118–1127
Abstract
We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of inter-language correspondence modeling.
We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas.
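The correspondence-induction step described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (toy data, made-up pivot indices, and a plain least-squares fit in place of the paper's regularized classifiers): each pivot is a source word paired with its oracle translation, a linear predictor of pivot occurrence is fit on unlabeled documents, and an SVD of the stacked predictor weights yields a low-dimensional cross-lingual feature space.

```python
import numpy as np

# Toy setup: unlabeled bag-of-words matrix over a concatenated
# source+target vocabulary (columns 0-24 "English", 25-49 "German").
rng = np.random.default_rng(0)
n_docs, vocab = 200, 50
X = (rng.random((n_docs, vocab)) < 0.1).astype(float)  # unlabeled docs

# A pivot is a pair (source word, its oracle translation); these column
# indices are made up purely for illustration.
pivots = [(0, 25), (1, 26), (2, 27)]

# For each pivot, fit a linear predictor of "does this document contain
# the pivot?" from all non-pivot features.
W = []
for s, t in pivots:
    y = np.clip(X[:, s] + X[:, t], 0, 1)   # pivot present in doc?
    mask = np.ones(vocab, dtype=bool)
    mask[[s, t]] = False                   # hide the pivot itself
    w_small, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    w = np.zeros(vocab)
    w[mask] = w_small
    W.append(w)
W = np.stack(W, axis=1)                    # vocab x n_pivots

# SVD of the pivot-predictor weights gives a low-dimensional subspace in
# which correlated source and target words map close together.
U, _, _ = np.linalg.svd(W, full_matrices=False)
theta = U[:, :2].T                         # k x vocab projection
Z = X @ theta.T                            # cross-lingual document features
print(Z.shape)                             # one row of induced features per doc
```

A classifier trained on `Z` (or on `X` concatenated with `Z`) can then be applied to target-language documents without any labeled target data, which is the point of the correspondence step.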
Citations
Journal Article
A survey of transfer learning
TL;DR: This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications of transfer learning, including its use in big-data environments.
Posted Content
Domain Adaptation for Visual Applications: A Comprehensive Survey
TL;DR: An overview of domain adaptation and transfer learning with a specific focus on visual applications, covering methods that go beyond image categorization, such as object detection, image segmentation, video analysis, and learning visual attributes.
Journal Article
Learning With Augmented Features for Supervised and Semi-Supervised Heterogeneous Domain Adaptation
TL;DR: This paper proposes Heterogeneous Feature Augmentation (HFA), an SVM-based method that simultaneously learns the target classifier and infers the labels of unlabeled target samples; experiments show that HFA and its semi-supervised extension SHFA outperform existing HDA methods.
Posted Content
Learning with Augmented Features for Heterogeneous Domain Adaptation
Lixin Duan, Dong Xu, Ivor W. Tsang, et al.
TL;DR: A new learning method for heterogeneous domain adaptation (HDA), in which source-domain and target-domain data are represented by heterogeneous features of different dimensions; experiments demonstrate that HFA outperforms existing HDA methods.
Proceedings Article
Multi-View Clustering and Feature Learning via Structured Sparsity
Hua Wang, Feiping Nie, Heng Huang, et al.
TL;DR: A novel multi-view learning model is proposed that integrates all features and learns a per-cluster weight for every feature via new joint structured sparsity-inducing norms.
References
Proceedings Article
Domain Adaptation with Structural Correspondence Learning
TL;DR: This work introduces structural correspondence learning to automatically induce correspondences among features from different domains in order to adapt existing models from a resource-rich source domain to a resource-poor target domain.
Journal Article
A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data
Rie Kubota Ando, Tong Zhang
TL;DR: This paper presents a general framework in which the structural learning problem can be formulated and analyzed theoretically, relates it to learning with unlabeled data, proposes algorithms for structural learning, and investigates computational issues.
Posted Content
Frustratingly Easy Domain Adaptation
TL;DR: In this paper, the authors describe an approach to domain adaptation that is appropriate exactly in the case when one has enough target data to do slightly better than just using only source data.
Proceedings Article
Frustratingly Easy Domain Adaptation
TL;DR: This work describes an approach to domain adaptation that is appropriate exactly in the case when one has enough “target” data to do slightly better than just using only “source” data.
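The idea behind this approach is a simple feature-augmentation map: each source example is copied into a shared block plus a source-specific block, each target example into the shared block plus a target-specific block, after which any standard linear learner is trained on the augmented features. A minimal sketch (the toy vector and function name are illustrative, not from the paper):

```python
import numpy as np

def augment(x, domain):
    """Feature map for 'frustratingly easy' domain adaptation:
    source: <x, x, 0>; target: <x, 0, x>.
    The shared block lets weights transfer across domains, while the
    domain-specific blocks absorb domain-specific corrections."""
    zeros = np.zeros_like(x)
    if domain == "source":
        return np.concatenate([x, x, zeros])
    return np.concatenate([x, zeros, x])

x = np.array([1.0, 2.0, 3.0])
print(augment(x, "source"))  # [1. 2. 3. 1. 2. 3. 0. 0. 0.]
print(augment(x, "target"))  # [1. 2. 3. 0. 0. 0. 1. 2. 3.]
```

After augmentation, no special learning algorithm is needed, which is what makes the method “easy”.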
Proceedings Article
Solving large scale linear prediction problems using stochastic gradient descent algorithms
TL;DR: Stochastic gradient descent algorithms on regularized forms of linear prediction methods, related to online algorithms such as the perceptron, are studied, and numerical rates of convergence are obtained for such algorithms.
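As a rough illustration of this family of algorithms, the following sketch (toy data; hyperparameters are arbitrary choices, not taken from the paper) runs plain SGD on the L2-regularized hinge loss, one of the regularized linear prediction problems of this kind:

```python
import numpy as np

def sgd_hinge(X, y, lam=0.01, epochs=20, lr=0.1, seed=0):
    """SGD on the L2-regularized hinge loss (linear SVM objective):
    min_w  lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i w.x_i).
    One randomly ordered pass over the data per epoch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (w @ X[i])
            # Subgradient of the per-example regularized loss.
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            w -= lr * grad
    return w

# Linearly separable toy data: the label is the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0])
w = sgd_hinge(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

With a constant learning rate this is only a sketch; the paper's interest is in how step-size schedules and regularization affect the convergence rate of such updates.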