Open Access Proceedings Article

Avalanche: an End-to-End Library for Continual Learning

TL;DR
In this article, the authors propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch, which is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
Abstract
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate, and port across different settings, and even results on standard benchmarks are often hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
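For orientation, the snippet below sketches what an end-to-end Avalanche run looks like: a classic benchmark, a model, and a training strategy wired together. Class names and module paths follow the library's public documentation, but they have moved between releases, so the exact import paths may differ in a given version.

from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive  # avalanche.training.strategies in older releases

# Build a class-incremental benchmark: MNIST split into 5 experiences.
benchmark = SplitMNIST(n_experiences=5)

model = SimpleMLP(num_classes=benchmark.n_classes)

# "Naive" is plain fine-tuning on each experience in turn: the usual
# lower bound against which continual learning strategies are compared.
strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    eval_mb_size=32,
)

# Train sequentially on the stream, evaluating on the full test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    results = strategy.eval(benchmark.test_stream)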



Citations
Journal Article

Continual learning for recurrent neural networks: An empirical evaluation.

TL;DR: In this article, the authors organize the literature on continual learning for sequential data processing, providing a categorization of the contributions and a review of the benchmarks, and propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.
Journal Article

The CLEAR Benchmark: Continual LEArning on Real-World Imagery

TL;DR: This paper introduces CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014), and proposes novel "streaming" protocols for CL that always test on the (near) future.
Proceedings Article

Pretrained Language Model in Continual Learning: A Comparative Study

TL;DR: This paper thoroughly compares continual learning performance over combinations of 5 PLMs and 4 CL approaches on 3 benchmarks in 2 typical incremental settings; extensive experimental analyses reveal interesting performance differences across PLMs and across CL methods.
Journal Article

Anomaly Detection and Failure Root Cause Analysis in (Micro) Service-Based Cloud Applications: A Survey

TL;DR: In this paper, the authors provide a structured overview and qualitative analysis of currently available techniques for anomaly detection and root cause analysis in modern multi-service applications; open challenges and research directions stemming from the analysis are also discussed.

Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners

TL;DR: The overall framework of continual learning for object detection is introduced, and the effect of the solution's key elements on withstanding catastrophic forgetting is analysed.
References
Posted Content

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy, and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases, including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
Proceedings Article

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Proceedings Article

Transformers: State-of-the-Art Natural Language Processing

TL;DR: Transformers is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API, together with a curated collection of pretrained models made by and available for the community.
Proceedings Article

iCaRL: Incremental Classifier and Representation Learning

TL;DR: In this paper, the authors introduce a new training strategy, iCaRL, that allows learning in a class-incremental way: only the training data for a small number of classes has to be present at the same time, and new classes can be added progressively.
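To make the class-incremental idea concrete, here is a simplified rehearsal-only sketch: a fixed-budget exemplar memory for already-seen classes whose contents are replayed alongside each new batch of classes. The ExemplarMemory class and the train, flatten, and task_stream names are hypothetical illustrations; the full iCaRL method additionally uses herding-based exemplar selection, a distillation loss, and a nearest-mean-of-exemplars classifier, all omitted here.

import random
from collections import defaultdict

class ExemplarMemory:
    # Fixed total budget of stored samples, shared across seen classes.
    def __init__(self, budget=2000):
        self.budget = budget
        self.per_class = defaultdict(list)  # class id -> list of samples

    def update(self, samples_by_class):
        # Register the new classes, then shrink every class to an equal
        # share of the budget (iCaRL uses herding; random choice here).
        for cls, samples in samples_by_class.items():
            self.per_class[cls].extend(samples)
        share = self.budget // max(1, len(self.per_class))
        for cls in self.per_class:
            if len(self.per_class[cls]) > share:
                self.per_class[cls] = random.sample(self.per_class[cls], share)

    def all_samples(self):
        return [(x, cls) for cls, xs in self.per_class.items() for x in xs]

# Training-loop sketch: new-task data is combined with stored exemplars,
# so only the current classes' full training data must be present.
# memory = ExemplarMemory()
# for task_data in task_stream:             # dict: class id -> samples
#     train_set = flatten(task_data) + memory.all_samples()
#     train(model, train_set)               # user-supplied routines
#     memory.update(task_data)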