Institution

Skolkovo Institute of Science and Technology

Education · Skolkovo, Russia
About: Skolkovo Institute of Science and Technology is an education organization based in Skolkovo, Russia. It is known for research contributions in the topics of Carbon nanotube and Artificial neural network. The organization, also known as Skoltech, has 2061 authors who have published 5304 publications receiving 93653 citations.


Papers
Book Chapter
TL;DR: A new representation learning approach for domain adaptation is proposed, in which data at training and test time come from similar but different distributions; predictions are based on features that cannot discriminate between the training (source) and test (target) domains while remaining discriminative for the main learning task on the source domain.
Abstract: We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of a person re-identification application.

4,862 citations
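The gradient reversal layer these abstracts describe is simple enough to sketch in code. Below is a minimal, illustrative implementation using PyTorch as one of the "deep learning packages" the abstract mentions (the class and function names are my own, not the paper's): the layer is the identity in the forward pass and multiplies the gradient by a negative constant in the backward pass, so training pushes the feature extractor toward domain-indistinguishable features.

    import torch
    from torch.autograd import Function

    class GradReverse(Function):
        # Identity on the forward pass; sign-flipped, scaled gradient on the backward pass.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd      # remember the scaling factor for backward()
            return x.view_as(x)    # behave exactly like the identity

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse the gradient flowing from the domain classifier into the
            # shared feature extractor. The second return value (for `lambd`)
            # needs no gradient.
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)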

Posted Content
TL;DR: In this paper, a gradient reversal layer is proposed to promote the emergence of deep features that are discriminative for the main learning task on the source domain and invariant with respect to the shift between the domains.
Abstract: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on Office datasets.

3,222 citations
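Continuing the sketch above, the augmented architecture the abstract describes can be written as a feed-forward model with two heads: a label predictor trained on source labels, and a domain classifier fed through the gradient reversal function. All layer sizes and names here are hypothetical, chosen only to make the sketch self-contained.

    import torch.nn as nn

    class DANN(nn.Module):
        # Hypothetical dimensions: flattened 28x28 inputs, 10 classes, 2 domains.
        def __init__(self, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.features = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 256),
                nn.ReLU(),
            )
            self.label_head = nn.Linear(256, 10)   # main task: class logits
            self.domain_head = nn.Linear(256, 2)   # adversary: source vs. target

        def forward(self, x):
            f = self.features(x)
            class_logits = self.label_head(f)
            # Gradients from the domain loss are reversed before they reach
            # `features`, which is what drives the features domain-invariant.
            domain_logits = self.domain_head(grad_reverse(f, self.lambd))
            return class_logits, domain_logits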

Proceedings Article
06 Jul 2015
TL;DR: The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on Office datasets.
Abstract: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on Office datasets.

2,889 citations
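The abstracts emphasize that the augmented network needs nothing beyond standard backpropagation and SGD. A minimal training loop over the two-head model sketched above might look as follows; `src_loader` (labeled source batches) and `tgt_loader` (unlabeled target batches) are hypothetical data loaders, not anything prescribed by the paper.

    import torch
    import torch.nn.functional as F

    model = DANN(lambd=1.0)   # the sketch model defined above
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for (xs, ys), (xt, _) in zip(src_loader, tgt_loader):
        # Label loss: uses labeled source data only.
        class_logits, dom_src = model(xs)
        loss = F.cross_entropy(class_logits, ys)

        # Domain loss: source batch labeled 0, target batch labeled 1;
        # no target class labels are ever needed.
        _, dom_tgt = model(xt)
        dom_logits = torch.cat([dom_src, dom_tgt])
        dom_labels = torch.cat([torch.zeros(len(xs), dtype=torch.long),
                                torch.ones(len(xt), dtype=torch.long)])
        loss = loss + F.cross_entropy(dom_logits, dom_labels)

        opt.zero_grad()
        loss.backward()   # gradient reversal makes this step adversarial for `features`
        opt.step()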

Journal Article
13 Sep 2017 · Nature
TL;DR: The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers.
Abstract: Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking an ...

2,162 citations

Journal Article
18 Dec 2015 · Science
TL;DR: Atomic-scale characterization, supported by theoretical calculations, revealed structures reminiscent of fused boron clusters with multiple scales of anisotropic, out-of-plane buckling, and metallic characteristics consistent with predictions of a highly anisotropic 2D metal.
Abstract: At the atomic-cluster scale, pure boron is markedly similar to carbon, forming simple planar molecules and cage-like fullerenes. Theoretical studies predict that two-dimensional (2D) boron sheets will adopt an atomic configuration similar to that of boron atomic clusters. We synthesized atomically thin, crystalline 2D boron sheets (i.e., borophene) on silver surfaces under ultrahigh-vacuum conditions. Atomic-scale characterization, supported by theoretical calculations, revealed structures reminiscent of fused boron clusters with multiple scales of anisotropic, out-of-plane buckling. Unlike bulk boron allotropes, borophene shows metallic characteristics that are consistent with predictions of a highly anisotropic, 2D metal.

1,873 citations


Authors

Showing all 2136 results

Name                        H-index  Papers  Citations
Rudolf Jaenisch             206      606     178436
Richard A. Young            173      520     126642
Peter M. Lansdorp           105      330     43982
Alexander van Oudenaarden   101      229     45367
Andrzej Cichocki            97       952     41471
Gleb B. Sukhorukov          96       440     35549
Raul R. Gainetdinov         80       303     30663
Vladimir E. Zakharov        74       381     24220
Sergei Tretiak              73       415     23905
Artem R. Oganov             73       412     26174
Xavier Gonze                70       276     23836
Christoph H. Borchers       69       358     19328
Nikita Nekrasov             67       169     21466
Mikhail S. Gelfand          67       316     15382
Victor Lempitsky            67       173     30867
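For reference, the H-index column above is defined as the largest h such that the author has h papers with at least h citations each. A small illustrative Python function (not part of this page's data) computes it from a list of per-paper citation counts:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Toy example: papers cited [10, 8, 5, 4, 3] times give an h-index of 4.
    assert h_index([10, 8, 5, 4, 3]) == 4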
Network Information
Related Institutions (5)
École Polytechnique Fédérale de Lausanne
98.2K papers, 4.3M citations

90% related

ETH Zurich
122.4K papers, 5.1M citations

89% related

Georgia Institute of Technology
119K papers, 4.6M citations

88% related

Massachusetts Institute of Technology
268K papers, 18.2M citations

88% related

Carnegie Mellon University
104.3K papers, 5.9M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  28
2022  119
2021  1,140
2020  1,225
2019  1,019
2018  734