Open Access Journal Article (DOI)

Moving Deep Learning to the Edge

Mário P. Véstias, +3 more
18 May 2020
Vol. 13, Iss. 5, p. 125
TL;DR: This paper reviews the main research directions for deep learning at the edge, motivating the need for new resource- and energy-oriented deep learning models as well as new computing platforms.
Abstract
Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications that require low latency. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource- and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for deep learning algorithms at the edge.
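The "resource- and energy-oriented deep learning models" the abstract calls for typically rely on compression techniques such as quantization. As a hypothetical illustration (the sketch below is not taken from the paper), symmetric post-training 8-bit quantization shrinks a float32 weight tensor by 4x at a bounded accuracy cost:

```python
import numpy as np

# Illustrative weight tensor standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(4, 4)).astype(np.float32)

# Symmetric linear quantization: map floats to int8 with a single scale
# factor chosen so the largest weight maps to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the error introduced by the 4x memory saving
# (float32 -> int8). Rounding error is at most half a quantization step.
deq = q.astype(np.float32) * scale
print(weights.nbytes // q.nbytes)          # memory-reduction factor: 4
print(np.abs(weights - deq).max() <= scale / 2 + 1e-7)  # True
```

Surveys of edge deep learning commonly cover this kind of technique alongside pruning, knowledge distillation, and dedicated accelerator designs.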



Citations
Journal Article (DOI)

Towards edge computing in intelligent manufacturing: Past, present and future

TL;DR: In this article, the authors survey edge computing in industrial IoT applications and discuss solutions for bringing intelligence to the edge under resource, complexity, accuracy, and latency constraints on decision-making processes.
Journal Article (DOI)

Distributed intelligence on the Edge-to-Cloud Continuum: A systematic literature review

TL;DR: In this paper, the main state-of-the-art libraries and frameworks for machine learning and data analytics available on the Edge-to-Cloud Continuum are surveyed and discussed.
Journal Article (DOI)

Knowledge distillation in deep learning and its applications.

TL;DR: To compare the performance of different techniques, a new metric, the distillation metric, is proposed, which compares knowledge distillation solutions based on model size and accuracy.
Journal Article (DOI)

A Classification of the Enabling Techniques for Low Latency and Reliable Communications in 5G and Beyond: AI-Enabled Edge Caching

TL;DR: Presents a classification of AI-enabled edge caching solutions, covering the use of deep learning (DL), deep reinforcement learning (DRL), and federated learning (FL) algorithms, and the performance gains of FL frameworks over conventional centralized and decentralized DL and DRL frameworks.
Journal Article (DOI)

Design and Evaluation of a New Machine Learning Framework for IoT and Embedded Devices

Gianluca Cornetta, +1 more
04 Mar 2021
TL;DR: This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those that are more suitable to run on resource-constrained embedded devices.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieved state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal Article (DOI)

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).