Open Access Proceedings Article

Generalizing from Several Related Classification Tasks to a New Unlabeled Sample

TL;DR: This work develops a distribution-free, kernel-based approach to the problem of assigning class labels to an unlabeled test data set; it presents a generalization error analysis, describes universal kernels, and establishes universal consistency of the proposed methodology.
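The core idea is to treat each labeled training task together with a test point as an augmented pair (P, x) and to define a kernel on such pairs, where the distribution P enters only through its empirical kernel mean embedding. As a rough illustration, a minimal NumPy sketch of a product kernel of this flavor might look as follows (the function names and the choice of Gaussian kernels are assumptions for the sketch, not the paper's exact construction):

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mean_embedding_similarity(S1, S2, gamma=1.0):
    """Inner product of the empirical kernel mean embeddings of two
    samples: the mean of k(x, x') over all cross-pairs."""
    return gaussian_kernel(S1, S2, gamma).mean()

def augmented_pair_kernel(S1, x1, S2, x2, gamma_P=1.0, gamma_X=1.0):
    """Product kernel on (sample, point) pairs: the training
    distribution enters only through its empirical mean embedding."""
    k_P = mean_embedding_similarity(S1, S2, gamma_P)
    k_X = gaussian_kernel(x1[None, :], x2[None, :], gamma_X)[0, 0]
    return k_P * k_X
```

A standard kernel machine (e.g., an SVM with a precomputed Gram matrix) trained on top of such a kernel can then assign labels to points from a new, unlabeled task.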


Citations
Posted Content

Domain Generalization via Invariant Feature Representation

TL;DR: Proposes Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing dissimilarity across domains while preserving the functional relationship between input and output variables.
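DICA itself is formulated in an RKHS and includes a supervised term; as a loose linear analogue only, the "minimize dissimilarity across domains" idea can be sketched as a generalized eigenproblem that finds directions along which the per-domain means vary least relative to the total variance (all names below are mine, not the paper's):

```python
import numpy as np
from scipy.linalg import eigh

def invariant_projection(domains, n_components=2, reg=1e-3):
    """Find directions minimizing the scatter of per-domain means
    relative to the total covariance (smallest generalized eigenvalues)."""
    X = np.vstack(domains)
    mu = X.mean(axis=0)
    # Between-domain scatter of the domain means.
    M = np.stack([d.mean(axis=0) - mu for d in domains])
    S_b = M.T @ M / len(domains)
    # Total covariance, regularized so the eigenproblem is well posed.
    S_t = np.cov(X, rowvar=False) + reg * np.eye(X.shape[1])
    # eigh solves S_b v = lambda * S_t v with eigenvalues ascending,
    # so the first columns are the most domain-invariant directions.
    _, vecs = eigh(S_b, S_t)
    return vecs[:, :n_components]
```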
Proceedings Article

Unified Deep Supervised Domain Adaptation and Generalization

TL;DR: This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models by reverting to point-wise surrogates of distribution distances and similarities, exploiting a Siamese architecture.
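The point-wise surrogate amounts to a contrastive objective computed over Siamese embedding pairs: same-class pairs from different domains are pulled together, different-class pairs are pushed apart. A PyTorch sketch of that flavor (the function name and margin are illustrative, not the paper's exact losses):

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(f_src, f_tgt, y_src, y_tgt, margin=1.0):
    """Point-wise surrogate of a distribution distance on Siamese pairs:
    pull same-class cross-domain embeddings together, push different-class
    embeddings at least `margin` apart."""
    d = F.pairwise_distance(f_src, f_tgt)
    same = (y_src == y_tgt).float()
    align = same * d.pow(2)                              # semantic alignment
    separate = (1.0 - same) * F.relu(margin - d).pow(2)  # class separation
    return (align + separate).mean()
```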
Posted Content

WILDS: A Benchmark of in-the-Wild Distribution Shifts

TL;DR: Presents WILDS, a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications, intended to encourage the development of general-purpose methods that are anchored to real-world distribution shifts and that work well across different applications and problem settings.
Proceedings Article

Domain Generalization for Object Recognition with Multi-task Autoencoders

TL;DR: In this article, a multi-task autoencoder (MTAE) is proposed that learns to transform an input image into analogs in multiple related domains; the learned features are then used as inputs to a classifier.
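A minimal PyTorch sketch of this shape, assuming one shared encoder and one linear decoder per related domain (layer sizes and names are illustrative):

```python
import torch.nn as nn

class MultiTaskAutoencoder(nn.Module):
    """One shared encoder, one decoder per domain: training reconstructs
    each input's analog in every domain, and the shared hidden features
    are later fed to a standard classifier."""
    def __init__(self, dim_in, dim_hidden, n_domains):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoders = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_in) for _ in range(n_domains)]
        )

    def forward(self, x, domain):
        h = self.encoder(x)
        return self.decoders[domain](h), h
```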
Posted Content

In Search of Lost Domain Generalization

TL;DR: This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
References
Book

Support Vector Machines

TL;DR: This book explains the principles that make support vector machines (SVMs) a successful modelling and prediction tool for a variety of applications, and provides a unique in-depth treatment of both fundamental and recent material on SVMs that has so far been scattered across the literature.
Book

Foundations of modern probability

TL;DR: This book treats Markov processes and their ergodic properties, along with their relation to PDEs and potential theory, with a main focus on the convergence of random processes, measures, and sets.
Posted Content

Making large scale SVM learning practical

TL;DR: SVMlight is an implementation of an SVM learner that makes large-scale learning practical, addressing training tasks with many examples for which off-the-shelf optimization techniques become intractable.
Book

Multitask learning

Rich Caruana
TL;DR: Multitask learning is an approach to inductive transfer that improves learning for one task by using the information contained in the training signals of other related tasks; it does this by learning tasks in parallel with a shared representation, so that what is learned for each task can help other tasks be learned better.
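The shared-representation mechanism is most often realized today as hard parameter sharing: a common trunk trained by all tasks in parallel, with one small head per task. A small illustrative PyTorch sketch (names and sizes are mine, not Caruana's original networks):

```python
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hard parameter sharing: all tasks train the shared trunk in
    parallel, so each task's training signal shapes the representation
    the other tasks use."""
    def __init__(self, dim_in, dim_shared, task_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(dim_in, dim_shared), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(dim_shared, d) for d in task_dims])

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]
```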
Book Chapter

Rademacher and Gaussian complexities: risk bounds and structural results

TL;DR: In this paper, the authors investigate the use of data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities, in a decision theoretic setting and prove general risk bounds in terms of these complexities.
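For reference, the empirical Rademacher complexity and one standard risk bound of the kind proved in this line of work can be stated as follows (the constants follow the usual McDiarmid-based argument and may differ slightly from the paper's exact statements):

```latex
% Empirical Rademacher complexity of a class F on a sample x_1, ..., x_n,
% with sigma_1, ..., sigma_n i.i.d. uniform on {-1, +1}:
\hat{R}_n(F) = \mathbb{E}_{\sigma}\left[ \sup_{f \in F}
  \frac{2}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \right]

% With probability at least 1 - \delta, every f in F with values in [0, 1]
% satisfies a bound of the form
\mathbb{E}[f] \le \hat{\mathbb{E}}_n[f] + \hat{R}_n(F)
  + 3\sqrt{\frac{\ln(2/\delta)}{2n}}
```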