Avalanche: an End-to-End Library for Continual Learning
Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido M. van de Ven, Martin Mundt, Qi She, Keiland W. Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German Ignacio Parisi, Fabio Cuzzolin, Andreas S. Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni +27 more
pp. 3595-3605
TLDR
In this article, the authors propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch, designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
Abstract
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate, and port across different settings, and even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
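To make the workflow concrete: a continual-learning library of this kind organizes training around a stream of "experiences" (tasks), training on each in turn and evaluating on everything seen so far. The sketch below is dependency-free and purely illustrative; all names are hypothetical and it does not use Avalanche's actual API.

```python
# Dependency-free sketch of the train/eval loop a continual-learning
# library organizes: a model is trained on a stream of "experiences"
# one at a time and evaluated on all experiences seen so far.
# Names are illustrative, not Avalanche's API.

def make_stream():
    # Each experience is a list of (x, y) pairs; trivially learnable
    # one-feature data keeps the demo self-contained.
    return [
        [(0.0, 0), (0.1, 0)],   # experience 0: class 0
        [(1.0, 1), (1.1, 1)],   # experience 1: class 1
    ]

class NearestMeanModel:
    """Toy model: keeps a running mean per class, predicts nearest mean."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def train_experience(self, data):
        for x, y in data:
            self.sums[y] = self.sums.get(y, 0.0) + x
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        means = {c: self.sums[c] / self.counts[c] for c in self.sums}
        return min(means, key=lambda c: abs(means[c] - x))

def run(stream):
    model = NearestMeanModel()
    accuracies, seen = [], []
    for experience in stream:
        model.train_experience(experience)   # train on current experience
        seen.extend(experience)              # evaluation set grows over time
        correct = sum(model.predict(x) == y for x, y in seen)
        accuracies.append(correct / len(seen))
    return accuracies
```

Tracking accuracy over *all* data seen so far, rather than only the current task, is what exposes catastrophic forgetting; a real library additionally provides plug-in strategies (replay, regularization) and metric loggers around this loop.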
Citations
Journal Article (DOI)
Continual learning for recurrent neural networks: An empirical evaluation.
TL;DR: In this article, the authors organize the literature on continual learning for sequential data processing by providing a categorization of the contributions and a review of the benchmarks, and propose two new benchmarks for CL with sequential data, based on existing datasets, whose characteristics resemble real-world applications.
Journal Article
The CLEAR Benchmark: Continual LEArning on Real-World Imagery
TL;DR: This paper introduces CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014), and proposes novel "streaming" protocols for CL that always test on the (near) future.
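The "streaming" protocol mentioned above can be summarized as: data arrives in time-ordered buckets, and after training through bucket t the model is tested on the next, near-future bucket t+1. The helper below is a minimal sketch of that evaluation pattern, not code from the CLEAR benchmark; `fit` and `score` are placeholder callables supplied by the caller.

```python
# Sketch of a "train on the present, test on the near future" streaming
# protocol. `buckets` is a time-ordered list of datasets; `fit` updates a
# model state on one bucket, `score` evaluates a state on one bucket.
# Both are caller-supplied placeholders (hypothetical, for illustration).

def streaming_eval(buckets, fit, score):
    """Return score(state, buckets[t + 1]) for each t, where the state
    has been fit incrementally on buckets[0..t]."""
    state = None
    results = []
    for t in range(len(buckets) - 1):
        state = fit(state, buckets[t])                # train on the present
        results.append(score(state, buckets[t + 1]))  # test on the near future
    return results
```

The last bucket is never trained on, only tested, so every reported score measures forward transfer to unseen, temporally newer data.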
Proceedings Article
Pretrained Language Model in Continual Learning: A Comparative Study
TL;DR: This paper thoroughly compares continual learning performance across combinations of 5 PLMs and 4 CL approaches on 3 benchmarks in 2 typical incremental settings; extensive experimental analyses reveal interesting performance differences across PLMs and across CL methods.
Journal Article (DOI)
Anomaly Detection and Failure Root Cause Analysis in (Micro) Service-Based Cloud Applications: A Survey
TL;DR: In this paper, the authors provide a structured overview and qualitative analysis of currently available techniques for anomaly detection and root cause analysis in modern multi-service applications, and discuss open challenges and research directions stemming from the analysis.
Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners
TL;DR: The overall framework of continual learning for object detection is introduced, and the effect of the solution's key elements on withstanding catastrophic forgetting is analysed.
References
Posted Content
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam +7 more
TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
Proceedings Article (DOI)
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell +7 more
TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Proceedings Article
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala +20 more
TL;DR: This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
Proceedings Article (DOI)
Transformers: State-of-the-Art Natural Language Processing
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, Alexander M. Rush +15 more
TL;DR: Transformers is an open-source library that consists of carefully engineered state-of-the art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
Proceedings Article (DOI)
iCaRL: Incremental Classifier and Representation Learning
TL;DR: In this paper, the authors introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively.
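Two ingredients of the iCaRL strategy summarized above can be sketched in miniature: herding-style exemplar selection, which greedily keeps the samples whose running mean best approximates the class mean, and nearest-mean-of-exemplars classification. The version below uses 1-D features to stay dependency-free; iCaRL itself operates on CNN feature vectors, and these function names are illustrative, not from the paper's code.

```python
# Miniature, 1-D sketch of two iCaRL ingredients (illustrative only):
# (1) herding-style exemplar selection, (2) nearest-mean-of-exemplars
# classification. iCaRL applies these to CNN feature vectors.

def herding_select(samples, budget):
    """Greedily pick `budget` samples whose mean tracks the class mean."""
    class_mean = sum(samples) / len(samples)
    chosen, remaining = [], list(samples)
    for _ in range(min(budget, len(remaining))):
        # Pick the sample that brings the exemplar mean closest to class_mean.
        best = min(
            remaining,
            key=lambda s: abs((sum(chosen) + s) / (len(chosen) + 1) - class_mean),
        )
        chosen.append(best)
        remaining.remove(best)
    return chosen

def nearest_mean_classify(x, exemplars_per_class):
    """Assign x to the class whose exemplar mean is closest."""
    return min(
        exemplars_per_class,
        key=lambda c: abs(sum(exemplars_per_class[c]) / len(exemplars_per_class[c]) - x),
    )
```

Because only a small exemplar budget per class is stored, old classes remain representable after new ones are added progressively, which is the class-incremental setting the TL;DR describes.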