Robustifying the Deployment of tinyML Models for Autonomous Mini-Vehicles
TL;DR
This paper proposes a closed-loop learning flow for autonomous driving mini-vehicles that includes the target deployment environment in the loop: a family of compact, high-throughput tinyCNNs controls the mini-vehicle, learning by imitating a computer vision algorithm in the target environment.

Abstract
Standard-sized autonomous vehicles have rapidly improved thanks to the breakthroughs of deep learning. However, scaling autonomous driving to mini-vehicles poses several challenges due to their limited on-board storage and computing capabilities. Moreover, autonomous systems lack robustness when deployed in dynamic environments where the underlying distribution differs from the distribution learned during training. To address these challenges, we propose a closed-loop learning flow for autonomous driving mini-vehicles that includes the target deployment environment in the loop. We leverage a family of compact and high-throughput tinyCNNs to control the mini-vehicle; they learn by imitating a computer vision algorithm, i.e., the expert, in the target environment. Thus, the tinyCNNs, having access only to an on-board fast-rate linear camera, gain robustness to lighting conditions and improve over time. Moreover, we introduce an online predictor that can choose between different tinyCNN models at runtime, trading accuracy for latency, which minimises the inference energy consumption by up to 3.2×. Finally, we leverage GAP8, a parallel ultra-low-power RISC-V-based microcontroller unit (MCU), to meet the real-time inference requirements. When running the family of tinyCNNs, our solution on GAP8 outperforms any other implementation on the STM32L4 and NXP K64F (traditional single-core MCUs), reducing latency by over 13× and energy consumption by 92%.
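The runtime model-selection idea from the abstract can be sketched in a few lines. The helper below is a hypothetical illustration, not the paper's implementation: it assumes the tinyCNN variants are ordered fastest-first and that each exposes a confidence estimate, and it picks the cheapest variant whose confidence clears a threshold, falling back to the most accurate model otherwise.

```python
def select_model(confidences, threshold=0.9):
    """Pick the index of the fastest (lowest-index) model whose confidence
    clears the threshold; fall back to the most accurate (last) model.

    confidences: per-model confidence estimates in [0, 1], ordered
    fastest-first.  Both the ordering convention and the thresholding
    heuristic are illustrative assumptions, not the paper's method.
    """
    for i, conf in enumerate(confidences):
        if conf >= threshold:
            return i
    return len(confidences) - 1

# Example: three tinyCNN variants, fastest first.  The mid-sized model is
# the cheapest one confident enough, so it is chosen.
chosen = select_model([0.62, 0.95, 0.99])  # -> 1
```

Skipping the larger models whenever a small one is already confident is what yields the energy savings the abstract reports: most frames are easy, so most inferences run on the cheapest network.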
Citations
Journal ArticleDOI
A Review on TinyML: State-of-the-art and Prospects
TL;DR: In this article, the authors present an intuitive review of the possibilities of TinyML and identify key challenges and a future roadmap for mitigating several research issues of TinyML, including maintaining the accuracy of learning models, providing a train-to-deploy facility in resource-frugal tiny edge devices, optimizing processing capacity, and improving reliability.
Journal ArticleDOI
TinyML: Enabling of Inference Deep Learning Models on Ultra-Low-Power IoT Edge Devices for AI Applications
Norah Alajlan, Dina M. Ibrahim, et al.
TL;DR: An overview of the revolution of TinyML and a review of TinyML studies is provided, wherein the main contribution is an analysis of the types of ML models used in TinyML studies, together with details of the datasets and the types and characteristics of the devices.
Proceedings ArticleDOI
TinyML: A Systematic Review and Synthesis of Existing Research
TL;DR: In this paper, the authors conduct a systematic review of TinyML research by synthesizing 47 papers from academic and grey literature published since 2019 (the earliest TinyML publications date from 2019), and analyze the relevant TinyML literature from five aspects: hardware, frameworks, datasets, use cases, and algorithms/models.
Journal ArticleDOI
Communication-efficient distributed AI strategies for the IoT edge
TL;DR: In this article, the authors provide an architecture for enabling AI in fully edge-based scenarios, along with strategies to tackle the communication inefficiencies that arise from the distributed nature of such scenarios.
References
Book
Reinforcement Learning: An Introduction
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Journal ArticleDOI
Technical Note: Q-Learning
Chris Watkins, Peter Dayan
TL;DR: This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989), showing that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely.
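The update rule whose convergence this note proves can be written as a minimal tabular sketch. The table layout, state names, and hyperparameters below are illustrative, not taken from the paper:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on a dict-of-dicts table Q[state][action]:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Returns the updated action-value."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Tiny example: one step from state "s0" with reward 1.0, landing in "s1".
Q = {"s0": {"a": 0.0}, "s1": {"a": 1.0}}
q_update(Q, "s0", "a", 1.0, "s1")  # Q["s0"]["a"] becomes 0.1 * (1 + 0.99*1)
```

The theorem's sampling condition corresponds to every (state, action) pair receiving such updates infinitely often, with a suitably decaying alpha.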
Proceedings Article
Asynchronous methods for deep reinforcement learning
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P. Lillicrap, David Silver, Koray Kavukcuoglu
TL;DR: A conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers and shows that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
Journal ArticleDOI
What is a support vector machine?
TL;DR: Support vector machines are becoming popular in a wide variety of biological applications, but how do they work and what are their most promising applications in the life sciences?
Journal ArticleDOI
The global k-means clustering algorithm
TL;DR: The global k-means algorithm is presented, an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N executions of the k-means algorithm from suitable initial positions.
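The incremental procedure summarised above can be sketched on a 1-D toy problem. This is an illustrative reconstruction of the described idea, not the authors' code: grow the solution one center at a time, trying each data point as the initial position of the new center and keeping the run with the lowest clustering error.

```python
def kmeans(points, centres, iters=20):
    """Plain Lloyd iterations on 1-D data; returns (centres, squared error)."""
    centres = list(centres)
    for _ in range(iters):
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)),
                          key=lambda j: (p - centres[j]) ** 2)
            clusters[nearest].append(p)
        # Recompute each centre as its cluster mean (keep it if cluster empty).
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    error = sum(min((p - c) ** 2 for c in centres) for p in points)
    return centres, error

def global_kmeans(points, k):
    """Incrementally add centres; seed each new centre at every data point
    (the deterministic global search) and keep the lowest-error result."""
    centres, _ = kmeans(points, [sum(points) / len(points)])  # 1 centre
    for _ in range(2, k + 1):
        best = None
        for seed in points:  # N runs of k-means, one per candidate seed
            cand, err = kmeans(points, centres + [seed])
            if best is None or err < best[1]:
                best = (cand, err)
        centres = best[0]
    return sorted(centres)
```

On two well-separated 1-D groups, e.g. `[0.0, 0.1, 0.2, 10.0, 10.1, 10.2]` with `k=2`, the search recovers centres near the two group means regardless of which seed happens to win.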