
Timothée Lesort

Researcher at the École Nationale Supérieure de Techniques Avancées (ENSTA Paris)

Publications: 39
Citations: 1126

Timothée Lesort is an academic researcher at the École Nationale Supérieure de Techniques Avancées (ENSTA Paris). He has contributed to research on the topics of reinforcement learning and robotics, has an h-index of 12, and has co-authored 30 publications receiving 725 citations. His previous affiliations include the École supérieure de chimie physique électronique de Lyon (CPE Lyon) and Inria, the French Institute for Research in Computer Science and Automation.

Papers
Journal Article (DOI)

State representation learning for control: An overview.

TL;DR: This survey covers recent state-of-the-art work on state representation learning (SRL), reviewing SRL methods that involve interaction with the environment, their implementations, and their applications to robotics control tasks (simulated or real).
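As a rough illustration of the kind of method the survey covers, the sketch below learns a low-dimensional state from raw observations with a simple autoencoder objective; the dimensions, network sizes and random data are placeholder assumptions, not any setup from the paper.

```python
# Illustrative sketch only: a minimal autoencoder-style state representation
# learner trained on random placeholder observations.
import torch
import torch.nn as nn

obs_dim, state_dim = 64, 8  # assumed toy dimensions

encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, state_dim))
decoder = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, obs_dim))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

observations = torch.randn(256, obs_dim)  # placeholder for images / sensor readings

for epoch in range(10):
    states = encoder(observations)            # learned low-dimensional state
    reconstruction = decoder(states)
    loss = nn.functional.mse_loss(reconstruction, observations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```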
Posted Content

Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges

TL;DR: Continual Learning (CL) is a particular machine learning paradigm where the data distribution and learning objective change through time, or where all of the training data and objective criteria are never available at once.
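The sketch below illustrates the setting described in this definition, assuming a toy sequence of tasks: a model is trained on one data distribution at a time and never sees all tasks at once. The model, tasks and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch of the continual-learning setting: data arrives as a
# sequence of tasks rather than all at once.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Each "task" is a separate (placeholder) data distribution; only the current
# one is visible during training.
tasks = [
    (torch.randn(100, 10) + shift, torch.randint(0, 2, (100,)))
    for shift in (0.0, 2.0, -2.0)
]

for x, y in tasks:
    for _ in range(20):                 # train only on the current task
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Once training moves on, earlier tasks' data is no longer available,
    # so performance on them may degrade (catastrophic forgetting).
```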
Proceedings Article (DOI)

Generative Models from the perspective of Continual Learning

TL;DR: It is found that, among all models, the original GAN performs best and, among continual learning strategies, generative replay outperforms all other methods.
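The sketch below illustrates the generative replay idea the paper evaluates: while learning a new task, the learner mixes real current-task data with samples produced on behalf of earlier tasks. The classifier, the stand-in replay function and the data are placeholder assumptions, not the models compared in the paper.

```python
# Illustrative sketch of generative replay: real data from the current task is
# mixed with samples standing in for a generative model of past tasks.
import torch
import torch.nn as nn

classifier = nn.Linear(16, 2)
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def replay_previous_tasks(n):
    """Hypothetical stand-in for sampling from a generator of past tasks."""
    return torch.randn(n, 16), torch.randint(0, 2, (n,))

new_x, new_y = torch.randn(64, 16), torch.randint(0, 2, (64,))  # current task
old_x, old_y = replay_previous_tasks(64)                         # replayed pseudo-data

x = torch.cat([new_x, old_x])
y = torch.cat([new_y, old_y])
for _ in range(20):
    loss = loss_fn(classifier(x), y)
    clf_opt.zero_grad()
    loss.backward()
    clf_opt.step()
# A new generator would then be retrained on both real and replayed samples
# so that it covers every task seen so far.
```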
Journal Article (DOI)

Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges

TL;DR: This paper reviews the existing state of the art of continual learning, summarizes existing benchmarks and metrics, and proposes a framework for presenting and evaluating both robotics and non-robotics approaches in a way that makes transfer between the two fields easier.
Posted Content

DisCoRL: Continual Reinforcement Learning via Policy Distillation.

TL;DR: This paper proposes DisCoRL, an approach combining state representation learning and policy distillation that can solve all tasks and automatically infer which one to run; its robustness is tested by transferring the final policy to a real-life setting.
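A minimal sketch of the policy-distillation ingredient of DisCoRL is shown below, assuming per-task teacher policies are already available: a single student network is trained to match each teacher's action distribution with a KL objective. The network shapes, task count and data are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch of policy distillation: a single student policy matches
# the action distributions of several per-task teacher policies via a KL loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, n_actions, n_tasks = 8, 4, 3
teachers = [nn.Linear(state_dim, n_actions) for _ in range(n_tasks)]  # stand-ins for trained policies
student = nn.Linear(state_dim, n_actions)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    task = step % n_tasks
    states = torch.randn(32, state_dim)   # placeholder for states collected with the teacher
    with torch.no_grad():
        teacher_probs = F.softmax(teachers[task](states), dim=-1)
    student_log_probs = F.log_softmax(student(states), dim=-1)
    # KL(teacher || student), averaged over the batch
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```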