Topic

Edge computing

About: Edge computing is a research topic. Over its lifetime, 11,657 publications have been published within this topic, receiving 148,533 citations.


Papers
Journal ArticleDOI
TL;DR: This paper addresses the machine learning problem where training data is scarce and computing power is limited, and investigates domain adaptation, which transfers knowledge from a labeled source domain to an unlabeled target domain even when the running environment is confined.
Abstract: It is widely acknowledged that the success of deep learning is built on large-scale training data and tremendous computing power. However, the data and computing power are not always available for many real-world applications. In this paper, we address the machine learning problem where training data is scarce and computing power is limited. Specifically, we investigate domain adaptation, which transfers knowledge from a labeled source domain to an unlabeled target domain, so that not much training data is needed from the target domain. At the same time, we consider the situation where the running environment is confined, e.g., in edge computing the end device has very limited running resources. Technically, we present the Faster Domain Adaptation (FDA) protocol and further report two paradigms of FDA: early stopping and amid skipping. The former accelerates domain adaptation through multiple early exit points; the latter speeds up adaptation by wisely skipping several intermediate neural network blocks. Extensive experiments on standard benchmarks verify that our method achieves comparable and even better accuracy while employing far fewer computing resources. To the best of our knowledge, very few works in the community have investigated accelerating knowledge adaptation.

64 citations
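The paper itself ships no code; the sketch below only illustrates the general multi-exit (early stopping) idea in PyTorch, not the authors' actual FDA protocol. The class name EarlyExitNet, the layer sizes, and the confidence threshold are illustrative assumptions.

    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        """Backbone with one lightweight classifier (exit point) per block."""
        def __init__(self, num_classes=10, threshold=0.9):
            super().__init__()
            self.blocks = nn.ModuleList([
                nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
                nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
                nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
            ])
            self.exits = nn.ModuleList([nn.Linear(256, num_classes)
                                        for _ in self.blocks])
            self.threshold = threshold  # illustrative confidence cutoff

        def forward(self, x):  # assumes batch size 1 for the exit decision
            for block, exit_head in zip(self.blocks, self.exits):
                x = block(x)
                logits = exit_head(x)
                # stop as soon as an exit is confident enough: the remaining
                # blocks (and their compute cost) are skipped entirely
                if logits.softmax(dim=-1).max() >= self.threshold:
                    return logits
            return logits  # fell through to the final exit

At inference, easy inputs leave at the first exit and pay only a fraction of the full forward cost; amid skipping, the paper's second paradigm, would instead bypass intermediate blocks and is not shown here.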

Journal ArticleDOI
TL;DR: In this article, Parked Vehicle assisted Edge Computing (PVEC) managed via FedParking is investigated, in which different parking lot operators collaborate to train a long short-term memory model for parking space estimation without exchanging raw data.
Abstract: As a distributed learning approach, federated learning trains a shared learning model over distributed datasets while preserving the privacy of the training data. We extend federated learning to parking management and introduce FedParking, in which Parking Lot Operators (PLOs) collaborate to train a long short-term memory model for parking space estimation without exchanging the raw data. Furthermore, we investigate the management of Parked Vehicle assisted Edge Computing (PVEC) by FedParking. In PVEC, different PLOs recruit parked vehicles (PVs) as edge computing nodes for offloading services through an incentive mechanism, which is designed according to the computation demand and parking capacity constraints derived from FedParking. We formulate the interactions among the PLOs and vehicles as a multi-leader multi-follower Stackelberg game. Considering the dynamic arrivals of the vehicles and time-varying parking capacity constraints, we present a multi-agent deep reinforcement learning approach to gradually reach the Stackelberg equilibrium in a distributed yet privacy-preserving manner. Finally, numerical results demonstrate the effectiveness and efficiency of our scheme.

64 citations
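FedParking's full scheme also involves a Stackelberg incentive game and multi-agent deep reinforcement learning, which are beyond a short sketch. The code below only illustrates the federated-averaging core, assuming PyTorch; ParkingLSTM, the data shapes, and the hyperparameters are hypothetical.

    import copy
    import torch
    import torch.nn as nn

    class ParkingLSTM(nn.Module):
        """Toy stand-in for the shared occupancy-forecasting model."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, time, 1) history
            out, _ = self.lstm(x)
            return self.head(out[:, -1])     # predict next occupancy value

    def local_update(global_model, x, y, steps=5, lr=1e-2):
        """One PLO trains on its private data; only weights leave the lot."""
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            nn.functional.mse_loss(local(x), y).backward()
            opt.step()
        return local.state_dict()

    def federated_round(global_model, plo_datasets):
        """Server averages PLO weights; raw parking data is never shared."""
        states = [local_update(global_model, x, y) for x, y in plo_datasets]
        avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
               for k in states[0]}
        global_model.load_state_dict(avg)
        return global_model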

Journal ArticleDOI
05 Nov 2019-Sensors
TL;DR: A comprehensive review of methods and techniques in fog computing is provided, presenting solutions to critical challenges and positioning fog computing as an enabler for IIoT application domains; open research challenges are also discussed to illuminate fog computing aspects across different fields and technologies.
Abstract: Industry is going through a transformation phase, enabling automation and data exchange in manufacturing technologies and processes, and this transformation is called Industry 4.0. Industrial Internet-of-Things (IIoT) applications require real-time processing, nearby storage, ultra-low latency, reliability and high data rate, all of which can be satisfied by fog computing architecture. With smart devices expected to grow exponentially, the need for an optimized fog computing architecture and protocols is crucial. Therein, efficient, intelligent and decentralized solutions are required to ensure real-time connectivity, reliability and green communication. In this paper, we provide a comprehensive review of methods and techniques in fog computing. Our focus is on fog infrastructure and protocols in the context of IIoT applications. This article has two main research areas: in the first half, we discuss the history of the industrial revolution and application areas of IIoT, followed by key enabling technologies that act as building blocks for industrial transformation. In the second half, we focus on fog computing, providing solutions to critical challenges and presenting it as an enabler for IIoT application domains. Finally, open research challenges are discussed to illuminate fog computing aspects in different fields and technologies.

64 citations

Proceedings ArticleDOI
23 Jul 2018
TL;DR: An online hierarchical stratified reservoir sampling algorithm that uses edge computing resources to produce approximate output with rigorous error bounds is designed, implemented on top of Apache Kafka, and evaluated using a set of microbenchmarks and real-world case studies.
Abstract: IoT-enabled devices continue to generate a massive amount of data. Transforming this continuously arriving raw data into timely insights is critical for many modern online services. For such settings, the traditional form of data analytics over the entire dataset would be prohibitively limiting and expensive for supporting real-time stream analytics. In this work, we make a case for approximate computing for data analytics in IoT settings. Approximate computing aims for efficient execution of workflows where an approximate output is sufficient instead of the exact output. The idea behind approximate computing is to compute over a representative sample instead of the entire input dataset. Thus, approximate computing, based on the chosen sample size, can make a systematic tradeoff between output accuracy and computation efficiency. This motivated the design of APPROXIOT, a data analytics system for approximate computing in IoT. To realize this idea, we designed an online hierarchical stratified reservoir sampling algorithm that uses edge computing resources to produce approximate output with rigorous error bounds. To showcase the effectiveness of our algorithm, we implemented APPROXIOT on top of Apache Kafka and evaluated it using a set of microbenchmarks and real-world case studies. Our results show that APPROXIOT achieves a speedup of 1.3×–9.9× with sampling fractions varying from 80% to 10%, compared to simple random sampling.

64 citations
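The paper's algorithm is hierarchical, spanning edge and cloud nodes; the single-level sketch below shows only the underlying stratified reservoir step in plain Python. The stratum key (e.g., a sensor ID) and the per-stratum reservoir size are assumptions for illustration.

    import random
    from collections import defaultdict

    class StratifiedReservoir:
        """One fixed-size uniform reservoir per stratum of the stream."""
        def __init__(self, per_stratum_size):
            self.k = per_stratum_size
            self.reservoirs = defaultdict(list)
            self.seen = defaultdict(int)      # items observed per stratum

        def add(self, stratum, item):
            self.seen[stratum] += 1
            res = self.reservoirs[stratum]
            if len(res) < self.k:
                res.append(item)              # reservoir not yet full
            else:
                # classic reservoir step: keep item with probability k/n
                j = random.randrange(self.seen[stratum])
                if j < self.k:
                    res[j] = item

        def sample(self):
            return dict(self.reservoirs)

Feeding every (sensor_id, reading) pair through add() keeps each stratum's reservoir a uniform sample of that stratum, so a query over the reservoirs approximates a query over the whole stream; the rigorous error bounds in the paper come from standard stratified-sampling estimators, not shown here.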

Journal ArticleDOI
TL;DR: XNOR-RRAM, as presented in this paper, is a scalable RRAM-based in-memory computing design fabricated in a 90 nm CMOS technology with monolithic integration of RRAM devices between metal layers 1 and 2.
Abstract: Deep learning hardware designs have been bottlenecked by conventional memories such as SRAM due to density, leakage and parallel computing challenges. Resistive devices can address the density and volatility issues, but have been limited by peripheral circuit integration. In this work, we demonstrate a scalable RRAM-based in-memory computing design, termed XNOR-RRAM, which is fabricated in a 90 nm CMOS technology with monolithic integration of RRAM devices between metal layers 1 and 2. We integrated a 128×64 RRAM array with CMOS peripheral circuits including row/column decoders and flash analog-to-digital converters (ADCs), which collectively form a core component for scalable RRAM-based in-memory computing towards large deep neural networks (DNNs). To maximize the parallelism of in-memory computing, we assert all 128 wordlines of the RRAM array simultaneously, perform analog computing along the bitlines, and digitize the bitline voltages using the ADCs. The resistance distribution of the low resistance states is tightened by a write-verify scheme, and the ADC offset is calibrated. Prototype chip measurements show that the proposed design achieves high binary DNN accuracy: 98.5% on MNIST and 83.5% on CIFAR-10, with an energy efficiency of 24 TOPS/W and a throughput of 158 GOPS. This represents 5.6×, 3.2× and 14.1× improvements in throughput, energy-delay product (EDP) and energy-delay-squared product (ED2P), respectively, compared to the state-of-the-art literature. The proposed XNOR-RRAM can enable intelligent functionalities for area- and energy-constrained edge computing devices.

64 citations
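The chip evaluates binarized dot products in analog along the RRAM bitlines; the digital Python sketch below only shows the XNOR/popcount arithmetic that a binary-DNN layer reduces to, with ±1 vectors packed into integers. The function names and packing convention are illustrative.

    def pack(vec):
        """Pack a +/-1 vector into an integer bitmask (bit 1 encodes +1)."""
        bits = 0
        for v in vec:
            bits = (bits << 1) | (v > 0)
        return bits

    def xnor_dot(a_bits, b_bits, n):
        """Dot product of two packed +/-1 vectors of length n."""
        mask = (1 << n) - 1
        matches = ~(a_bits ^ b_bits) & mask   # XNOR marks equal positions
        popcount = bin(matches).count("1")
        return 2 * popcount - n               # match -> +1, mismatch -> -1

    # sanity check: (+1)(+1) + (-1)(+1) + (+1)(-1) = -1
    assert xnor_dot(pack([1, -1, 1]), pack([1, 1, -1]), 3) == -1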


Network Information
Related Topics (5)
Wireless sensor network: 142K papers, 2.4M citations (93% related)
Network packet: 159.7K papers, 2.2M citations (93% related)
Wireless network: 122.5K papers, 2.1M citations (93% related)
Server: 79.5K papers, 1.4M citations (93% related)
Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations (92% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    1,471
2022    3,274
2021    2,978
2020    3,397
2019    2,698
2018    1,649