Open Access · Posted Content
Bringing AI To Edge: From Deep Learning's Perspective
TLDR
This paper surveys representative and recent deep learning techniques that are useful for edge intelligence systems, including hand-crafted models, model compression, hardware-aware neural architecture search, and adaptive deep learning models.

Abstract:
Edge computing and artificial intelligence (AI), especially deep learning, are gradually intersecting to form a new kind of system called edge intelligence. However, the development of edge intelligence systems faces several challenges, one of which is the "computational gap" between computation-intensive deep learning algorithms and less capable edge systems. Because of this gap, many edge intelligence systems cannot meet the expected performance requirements. To bridge it, a plethora of deep learning techniques and optimization methods have been proposed in recent years: lightweight deep learning models, network compression, and efficient neural architecture search. Although some reviews and surveys partially cover this large body of literature, a systematic and comprehensive review discussing all the deep learning techniques critical for edge intelligence implementation is still missing. As many diverse methods applicable to edge systems continue to appear, a holistic review would help edge computing engineers and the community keep track of the state-of-the-art deep learning techniques instrumental for edge intelligence, and would facilitate the development of edge intelligence systems. This paper surveys representative and recent deep learning techniques useful for edge intelligence systems, including hand-crafted models, model compression, hardware-aware neural architecture search, and adaptive deep learning models. Finally, based on our observations and simple experiments, we discuss some future directions.
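Of the technique families the survey names, model compression is the easiest to illustrate concretely. Below is a minimal, self-contained sketch of magnitude-based weight pruning, one common compression approach; the function name, the toy weight matrix, and the thresholding rule are illustrative assumptions, not code from the surveyed paper:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until the target
    sparsity (fraction of zeroed entries) is reached."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)                  # number of weights to drop
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

# Toy 2x3 weight matrix; prune half of the entries.
pruned = prune_by_magnitude([[0.1, -0.9, 0.05], [0.4, -0.02, 0.7]], 0.5)
# The three smallest-magnitude weights (0.1, 0.05, -0.02) are zeroed.
```

Real frameworks refine this idea with structured sparsity, iterative prune-and-retrain schedules, and hardware-friendly sparse formats, but the core selection criterion is the same.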
Citations
Journal Article
Edge AI: A survey
TL;DR: This paper presents a detailed survey of edge computing and its paradigms, including the transition to edge AI. It explores the background of each variant proposed for implementing edge computing, as well as the Edge AI approach of deploying AI algorithms and models on edge devices, which are typically resource-constrained devices located at the edge of the network.
Journal Article
LightNAS: On Lightweight and Scalable Neural Architecture Search for Embedded Platforms
TL;DR: LightNAS is a hardware-aware differentiable NAS framework consisting of two separate stages; the first stage searches, at the macro level and in a differentiable manner, for an architecture that strictly satisfies the required latency constraint, and, importantly, does so through a one-time search (i.e., you only search once).
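The latency-constrained differentiable search described above is typically built on a relaxation trick: the discrete choice among candidate operators is softened with a softmax over learnable architecture parameters, so the expected latency becomes a differentiable penalty term. A hedged sketch of that surrogate (the operator latencies and function names are made up for illustration; this is not the LightNAS code):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def expected_latency(arch_params, op_latencies_ms):
    """Differentiable surrogate: expected latency of one layer as the
    softmax-weighted sum of the candidate operators' measured latencies."""
    probs = softmax(arch_params)
    return sum(p * t for p, t in zip(probs, op_latencies_ms))

# Three hypothetical candidate ops with measured latencies (ms).
# Equal architecture parameters give a uniform mixture: (1 + 4 + 10) / 3 = 5 ms.
lat = expected_latency([0.0, 0.0, 0.0], [1.0, 4.0, 10.0])
```

During search, a term like `expected_latency` is added to the task loss so that gradient descent simultaneously steers the architecture parameters toward accurate and sufficiently fast operators.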
Journal Article
ALOHA: A Unified Platform-Aware Evaluation Method for CNNs Execution on Heterogeneous Systems at the Edge
TL;DR: This paper proposes a modular platform and execution model that describes in detail both the platform and the scheduling of different CNN operators on its processing elements, in order to improve evaluation accuracy.
Journal Article
Distributed Artificial Intelligence Empowered by End-Edge-Cloud Computing: A Survey
TL;DR: In this paper, the authors provide a comprehensive survey of distributed artificial intelligence (DAI) empowered by end-edge-cloud computing (EECC), in which the heterogeneous capabilities of on-device, edge, and cloud computing are orchestrated to satisfy the diverse requirements of resource-intensive and distributed AI computation.
Proceedings Article
A Benchmark of Deep Learning Models for Multi-leaf Diseases for Edge Devices
TL;DR: In this paper, the authors benchmark the most popular deep learning models for multi-leaf disease detection to determine which is most suitable for real deployment, finding that MobileNet V3 provides a reliable accuracy of 96.58%, small inference and initialization times of 127 ms and 11 ms respectively, and requires only 7.4 MB of memory in total.
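Inference time, the metric this benchmark reports alongside accuracy and memory, is usually measured by timing repeated forward passes after a few warm-up runs. A minimal sketch of that measurement loop, with a trivial stand-in function in place of a real model (the helper name and parameters are illustrative assumptions, not the paper's benchmarking harness):

```python
import time

def benchmark(model_fn, inputs, warmup=2, runs=10):
    """Return the mean inference time of model_fn in milliseconds,
    after warm-up calls to exclude one-time initialization costs."""
    for _ in range(warmup):
        model_fn(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(inputs)
    return (time.perf_counter() - start) / runs * 1000.0

# Stand-in for a real model: sum of squares over the input vector.
mean_ms = benchmark(lambda xs: sum(x * x for x in xs), list(range(1000)))
```

On a real edge device one would replace the lambda with the deployed model's inference call and also report the cold-start (initialization) time separately, as the paper does.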
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: This paper presents a deep convolutional neural network that achieved state-of-the-art performance on ImageNet; it consists of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.