Alexander Kolesnikov
Researcher at Google
Publications: 61
Citations: 22,409
Alexander Kolesnikov is an academic researcher at Google. He has contributed to research topics including computer science and feature learning, has an h-index of 24, and has co-authored 47 publications receiving 6,802 citations. His previous affiliations include the Institute of Science and Technology Austria and Yandex.
Papers
Posted Content
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
TL;DR: Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
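The paper's title summarizes ViT's core preprocessing step: an image is split into fixed-size patches, and each patch is flattened into a token vector that a standard Transformer can consume. A minimal illustrative sketch of that patchification (not the authors' code; image size, patch size, and function name are assumptions for illustration):

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an (H, W, C) image into non-overlapping patch x patch tiles
    and flatten each tile into one token vector of length patch*patch*C."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
    x = img.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, patch * patch * c)

# A 224x224 RGB image yields 14*14 = 196 tokens of dimension 16*16*3 = 768.
tokens = image_to_patches(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 768)
```

In the full model, each of these flattened patches is linearly projected to the Transformer's embedding dimension and combined with position embeddings before entering the encoder.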
Proceedings ArticleDOI
iCaRL: Incremental Classifier and Representation Learning
TL;DR: The authors introduce a new training strategy, iCaRL, that enables class-incremental learning: only the training data for a small number of classes needs to be present at any one time, and new classes can be added progressively.
Journal ArticleDOI
Accurate circular consensus long-read sequencing improves variant detection and assembly of a human genome.
Aaron M. Wenger, Paul Peluso, William J. Rowell, Pi-Chuan Chang, Richard Hall, Gregory T. Concepcion, Jana Ebler, Arkarachai Fungtammasan, Alexander Kolesnikov, Nathan D. Olson, Armin Töpfer, Michael Alonge, Medhat Mahmoud, Yufeng Qian, Chen-Shan Chin, Adam M. Phillippy, Michael C. Schatz, Gene Myers, Mark A. DePristo, Jue Ruan, Tobias Marschall, Fritz J. Sedlazeck, Justin M. Zook, Heng Li, Sergey Koren, Andrew Carroll, David R. Rank, Michael W. Hunkapiller
TL;DR: Optimizing circular consensus sequencing (CCS) improves the accuracy of single-molecule real-time (SMRT) sequencing (PacBio), generating highly accurate (99.8%) long high-fidelity (HiFi) reads with an average length of 13.5 kilobases (kb).
Proceedings ArticleDOI
Revisiting Self-Supervised Visual Representation Learning
TL;DR: This study revisits numerous previously proposed self-supervised models in a thorough large-scale study and uncovers multiple crucial insights, including that standard recipes for CNN design do not always translate to self-supervised representation learning.
Book ChapterDOI
Seed, expand and constrain: Three principles for weakly-supervised image segmentation
TL;DR: It is shown experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset.