Yaroslav Ganin

Researcher at Skolkovo Institute of Science and Technology

Publications: 25
Citations: 14,457

Yaroslav Ganin is an academic researcher at the Skolkovo Institute of Science and Technology. His research focuses on deep learning and artificial neural networks. He has an h-index of 18 and has co-authored 22 publications receiving 11,148 citations. His previous affiliations include Université de Montréal and Google.

Papers
Book Chapter

Domain-adversarial training of neural networks

TL;DR: A new representation learning approach for domain adaptation, the setting in which data at training and test time come from similar but different distributions. The approach promotes features that are discriminative for the main learning task on the source domain while being unable to discriminate between the training (source) and test (target) domains.
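
For orientation, a simplified form of the saddle-point objective from the JMLR 2016 version of the paper (symbols hedged to that paper's notation: $\mathcal{L}_y$ is the label-prediction loss on source data, $\mathcal{L}_d$ the domain-classification loss over all $N$ source and target examples, and $\lambda$ trades the two off):

$$E(\theta_f, \theta_y, \theta_d) = \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_y^{i}(\theta_f, \theta_y) \;-\; \lambda\,\frac{1}{N}\sum_{i=1}^{N} \mathcal{L}_d^{i}(\theta_f, \theta_d),$$

$$(\hat\theta_f, \hat\theta_y) = \arg\min_{\theta_f,\,\theta_y} E(\theta_f, \theta_y, \hat\theta_d), \qquad \hat\theta_d = \arg\max_{\theta_d} E(\hat\theta_f, \hat\theta_y, \theta_d).$$

Minimizing over the feature and label parameters while maximizing over the domain parameters is what the gradient reversal layer (see the next entry) implements with ordinary backpropagation.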
Posted Content

Unsupervised Domain Adaptation by Backpropagation

TL;DR: In this paper, a gradient reversal layer is proposed to promote the emergence of deep features that are discriminative for the main learning task on the source domain and invariant with respect to the shift between the domains.
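
As a rough sketch of that layer (assuming PyTorch; the names GradReverse and grad_reverse are illustrative choices, not from the paper): it behaves as the identity on the forward pass and multiplies the incoming gradient by -lambd on the backward pass.

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; scales the gradient by -lambd on the backward pass.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # lambd is a plain float, so its gradient slot is None.
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

A domain classifier fed with grad_reverse(features, lambd) then receives ordinary gradients, while the feature extractor below it receives reversed ones, pushing it toward domain-invariant features.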
Proceedings Article

Unsupervised Domain Adaptation by Backpropagation

TL;DR: The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on the Office datasets.

Domain-Adversarial Training of Neural Networks

TL;DR: A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be implemented in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
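
As an illustration of that augmentation, here is a minimal PyTorch wiring sketch (the layer sizes, the class name DANN, and the head names are illustrative assumptions, not the paper's exact architecture): a shared feature extractor feeds a label predictor directly and a domain classifier through the gradient reversal layer.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Same gradient-reversal helper as in the previous sketch.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class DANN(nn.Module):
        def __init__(self, in_dim=784, feat_dim=128, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # shared feature extractor
            self.label_head = nn.Linear(feat_dim, n_classes)   # main-task classifier
            self.domain_head = nn.Linear(feat_dim, 2)          # source-vs-target discriminator

        def forward(self, x, lambd=1.0):
            f = self.features(x)
            # The domain head sees reversed gradients, pushing f toward domain invariance.
            return self.label_head(f), self.domain_head(GradReverse.apply(f, lambd))

A single backward pass through both heads then trains the label predictor normally while the reversed gradients drive the shared features toward domain invariance.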
Posted Content

Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition

TL;DR: A simple two-step approach for speeding up convolution layers within large convolutional neural networks, based on tensor decomposition and discriminative fine-tuning, with the higher CPU speedups at lower accuracy drops obtained for the smaller of the two evaluated networks.
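
A hedged sketch of the idea (PyTorch plus the tensorly library are my tooling choices; the helper name cp_speedup and the rank argument are illustrative): CP-decompose a convolution's 4-D kernel with tensorly's parafac and replace the layer with a sequence of four cheap convolutions; the paper's second step then fine-tunes the whole network to recover accuracy.

    import torch
    import torch.nn as nn
    import tensorly as tl
    from tensorly.decomposition import parafac

    tl.set_backend('pytorch')

    def cp_speedup(conv: nn.Conv2d, rank: int) -> nn.Sequential:
        # Factor the (out_c, in_c, kh, kw) kernel into rank-R CP factors.
        weights, (out_f, in_f, kh_f, kw_f) = parafac(conv.weight.data, rank=rank, init='random')
        out_f = out_f * weights  # fold the CP scaling weights into the output factor

        kh, kw = conv.kernel_size
        first = nn.Conv2d(conv.in_channels, rank, 1, bias=False)                  # 1x1: mix input channels
        vert = nn.Conv2d(rank, rank, (kh, 1), stride=(conv.stride[0], 1),
                         padding=(conv.padding[0], 0), groups=rank, bias=False)   # depthwise vertical
        horiz = nn.Conv2d(rank, rank, (1, kw), stride=(1, conv.stride[1]),
                          padding=(0, conv.padding[1]), groups=rank, bias=False)  # depthwise horizontal
        last = nn.Conv2d(rank, conv.out_channels, 1, bias=conv.bias is not None)  # 1x1: restore output channels

        first.weight.data = in_f.t().reshape(rank, conv.in_channels, 1, 1)
        vert.weight.data = kh_f.t().reshape(rank, 1, kh, 1)
        horiz.weight.data = kw_f.t().reshape(rank, 1, 1, kw)
        last.weight.data = out_f.reshape(conv.out_channels, rank, 1, 1)
        if conv.bias is not None:
            last.bias.data = conv.bias.data
        # The paper's second step: fine-tune the whole network to recover accuracy.
        return nn.Sequential(first, vert, horiz, last)

The replacement costs roughly R*(in_c + kh + kw + out_c) multiply-adds per output position instead of out_c*in_c*kh*kw, so a small rank R yields a large speedup.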