Posted Content (Open Access)

LENS: Layer Distribution Enabled Neural Architecture Search in Edge-Cloud Hierarchies

TL;DR
In this article, a multi-objective neural architecture search (NAS) for two-tiered edge-cloud hierarchical systems is presented, where the performance objectives are refashioned to consider the wireless communication parameters.
Abstract
Edge-Cloud hierarchical systems that employ intelligence through Deep Neural Networks (DNNs) face the problem of how to distribute the workload across their tiers. Previous solutions distribute workloads at runtime according to the state of the surroundings, such as the wireless conditions; however, those conditions are usually overlooked at design time. This paper addresses the issue for DNN architectural design by presenting a novel methodology, LENS, which applies multi-objective Neural Architecture Search (NAS) to two-tiered systems, with the performance objectives refashioned to account for the wireless communication parameters. On our experimental search space, we demonstrate that LENS improves upon the traditional solution's Pareto set by 76.47% and 75% with respect to the energy and latency metrics, respectively.
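The paper's exact objective formulation is not reproduced on this page, but the idea of folding wireless parameters into the NAS objectives can be sketched roughly as follows; every function name and constant below is an illustrative assumption, not LENS's actual model.

# Minimal sketch (not the paper's formulation): latency and edge-energy
# objectives for a two-tiered edge-cloud split that include the wireless link.

def latency(edge_flops, cloud_flops, split_bytes,
            edge_flops_per_s, cloud_flops_per_s, uplink_bps):
    """End-to-end latency = edge compute + wireless transfer + cloud compute."""
    t_edge = edge_flops / edge_flops_per_s
    t_tx = (8 * split_bytes) / uplink_bps        # send intermediate activations
    t_cloud = cloud_flops / cloud_flops_per_s
    return t_edge + t_tx + t_cloud

def edge_energy(edge_flops, split_bytes,
                joules_per_flop, tx_power_w, uplink_bps):
    """Edge-device energy = compute energy + radio energy during transmission."""
    e_compute = edge_flops * joules_per_flop
    e_radio = tx_power_w * (8 * split_bytes) / uplink_bps
    return e_compute + e_radio

# A multi-objective NAS loop would score each candidate architecture (and its
# layer split) on accuracy, latency, and edge energy, and keep the Pareto set.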


Citations
Journal Article

SAGE: A Split-Architecture Methodology for Efficient End-to-End Autonomous Vehicle Control

TL;DR: The authors propose a methodology for selectively offloading the key energy-consuming modules of deep learning (DL) architectures to the cloud, optimizing edge energy usage while meeting real-time latency constraints. They leverage Head Network Distillation (HND) to introduce efficient bottlenecks within the DL architecture, minimizing the network overhead of offloading with almost no degradation in the model's performance.
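As a rough illustration of the split-architecture idea (not SAGE's actual design), the sketch below places a narrow 1x1 "bottleneck" at the edge/cloud partition point so that only a compact tensor has to cross the network; layer sizes and module names are assumed.

import torch.nn as nn

# Illustrative only: an edge-side head ending in a narrow bottleneck, and a
# cloud-side tail that re-expands it. All dimensions here are assumptions.

class EdgeHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 12, 1),             # 1x1 bottleneck: few channels to send
        )
    def forward(self, x):
        return self.features(x)               # compact tensor sent to the cloud

class CloudTail(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(12, 64, 1), nn.ReLU(),  # re-expand channels cloud-side
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )
    def forward(self, z):
        return self.features(z)

# In head network distillation, the edge head is trained so its bottleneck
# output matches the intermediate features of the original (teacher) network.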
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art image classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
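For reference, the layer layout described in that summary can be written down directly. The channel and kernel sizes below follow the original AlexNet paper (input assumed to be 227x227 RGB); this is a sketch rather than a faithful reimplementation, since it omits local response normalization, dropout, and the original two-GPU split.

import torch.nn as nn

# Five convolutional layers (some followed by max-pooling), then three fully
# connected layers producing 1000 class logits.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                    # logits; softmax applied in the loss
)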
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters. It shows that a significant improvement over the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
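The "very small filters" argument can be made concrete with a quick parameter count: a stack of three 3x3 convolutions covers the same 7x7 effective receptive field as a single 7x7 convolution while using fewer weights and adding extra non-linearities. The channel count below is just an illustrative choice.

# Weights of three stacked 3x3 convolutions vs. one 7x7 convolution, both
# mapping C channels to C channels (receptive field is 7x7 in either case).
C = 512                                   # illustrative channel count
params_three_3x3 = 3 * (3 * 3 * C * C)    # 27*C^2 ~= 7.1M weights for C = 512
params_single_7x7 = 7 * 7 * C * C         # 49*C^2 ~= 12.8M weights for C = 512
print(params_three_3x3, params_single_7x7)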
Journal Article

Taking the Human Out of the Loop: A Review of Bayesian Optimization

TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Posted Content

Neural Architecture Search with Reinforcement Learning

Barret Zoph, +1 more
05 Nov 2016
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
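The controller-plus-REINFORCE loop that summary describes can be sketched in a few lines; the choice list, network sizes, and the train_and_evaluate stub below are placeholders rather than the paper's actual setup.

import torch
import torch.nn as nn

# Sketch: an RNN "controller" emits a sequence of discrete architecture
# decisions, and REINFORCE nudges it toward decisions with higher reward
# (validation accuracy of the trained child network).

CHOICES = [16, 32, 64, 128]          # e.g. candidate filter counts per layer (assumed)
NUM_LAYERS = 4

class Controller(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRUCell(hidden, hidden)
        self.embed = nn.Embedding(len(CHOICES), hidden)
        self.head = nn.Linear(hidden, len(CHOICES))
        self.hidden = hidden

    def sample(self):
        h = torch.zeros(1, self.hidden)
        x = torch.zeros(1, self.hidden)
        actions, log_probs = [], []
        for _ in range(NUM_LAYERS):
            h = self.rnn(x, h)
            dist = torch.distributions.Categorical(logits=self.head(h))
            a = dist.sample()
            actions.append(CHOICES[a.item()])
            log_probs.append(dist.log_prob(a))
            x = self.embed(a)
        return actions, torch.stack(log_probs).sum()

def train_and_evaluate(arch):
    """Placeholder: train the sampled child network, return validation accuracy."""
    return 0.5

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3e-4)
arch, log_prob = controller.sample()
reward = train_and_evaluate(arch)
loss = -(reward * log_prob)          # REINFORCE: maximize expected accuracy
opt.zero_grad(); loss.backward(); opt.step()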
Proceedings Article

A close examination of performance and power characteristics of 4G LTE networks

TL;DR: This paper develops the first empirically derived comprehensive power model of a commercial LTE network, with less than 6% error rate and state transitions matching the specifications. It also finds that, compared with earlier 3G studies, the performance bottleneck for web-based applications lies less in the network.
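To show how such a state-based radio power model is typically used, the sketch below estimates the energy of a single uplink transfer from promotion, transfer, and tail phases. Every numeric constant here is a placeholder assumption, not a measured parameter from the paper.

# Illustrative state-based energy estimate for one LTE uplink transfer.
PROMOTION_S, PROMOTION_W = 0.26, 1.2     # idle -> connected promotion (assumed)
ACTIVE_BASE_W = 1.0                      # base power while transferring (assumed)
ACTIVE_W_PER_MBPS = 0.05                 # extra power per Mbps of throughput (assumed)
TAIL_S, TAIL_W = 11.0, 1.0               # connected "tail" after the transfer (assumed)

def lte_upload_energy_j(payload_mbit, uplink_mbps):
    transfer_s = payload_mbit / uplink_mbps
    e_promotion = PROMOTION_S * PROMOTION_W
    e_transfer = transfer_s * (ACTIVE_BASE_W + ACTIVE_W_PER_MBPS * uplink_mbps)
    e_tail = TAIL_S * TAIL_W
    return e_promotion + e_transfer + e_tail

print(lte_upload_energy_j(payload_mbit=8.0, uplink_mbps=5.0))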