Raoul de Charette
Researcher at French Institute for Research in Computer Science and Automation
Publications - 50
Citations - 1577
Raoul de Charette is an academic researcher at the French Institute for Research in Computer Science and Automation. His research focuses on the topics of computer science and image translation. He has an h-index of 14 and has co-authored 43 publications receiving 995 citations. His previous affiliations include the University of Macedonia and Mines ParisTech.
Papers
Proceedings ArticleDOI
Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation
TL;DR: This proposal efficiently learns sparse features without the need for an additional validity mask, and works with densities as low as 0.8% (8-layer lidar).
Proceedings ArticleDOI
Real time visual traffic lights recognition based on Spot Light Detection and adaptive traffic lights templates
TL;DR: A new real-time traffic light recognition system for on-vehicle camera applications; using the generic "Adaptive Templates", it is possible to recognize different kinds of traffic lights from various countries.
Real Time Visual Traffic Lights Recognition Based on Spot Light Detection and Adaptive Traffic Lights Templates
TL;DR: In this paper, a real-time traffic light recognition system for on-vehicle camera applications is presented. It is mainly based on a spot detection algorithm and is able to detect lights from a large distance, with the main advantage of being relatively insensitive to motion blur and illumination variations.
Proceedings ArticleDOI
End-to-End Race Driving with Deep Reinforcement Learning
TL;DR: The newly proposed reward and learning strategies together lead to faster convergence and more robust driving using only the RGB image from a forward-facing camera; some domain adaptation capability of the latest reinforcement learning algorithm is also shown.
Proceedings ArticleDOI
xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation
TL;DR: In this paper, cross-modal unsupervised domain adaptation (xMUDA) is proposed for 3D semantic segmentation, assuming the availability of both 2D images and 3D point clouds.