scispace - formally typeset
Topic

Edge computing

About: Edge computing is a research topic. Over its lifetime, 11,657 publications have been published within this topic, receiving 148,533 citations.


Papers
Proceedings ArticleDOI
16 Nov 2020
TL;DR: This work presents Distream, a distributed live video analytics system built on a smart camera-edge cluster architecture that adapts to workload dynamics to achieve low-latency, high-throughput, and scalable live video analytics.
Abstract: Video cameras have been deployed at scale today. Driven by the breakthrough in deep learning (DL), organizations that have deployed these cameras start to use DL-based techniques for live video analytics. Although existing systems aim to optimize live video analytics from a variety of perspectives, they are agnostic to the workload dynamics in real-world deployments. In this work, we present Distream, a distributed live video analytics system based on the smart camera-edge cluster architecture, that is able to adapt to the workload dynamics to achieve low-latency, high-throughput, and scalable live video analytics. The key behind the design of Distream is to adaptively balance the workloads across smart cameras and partition the workloads between cameras and the edge cluster. In doing so, Distream is able to fully utilize the compute resources at both ends to achieve optimized system performance. We evaluated Distream with 500 hours of distributed video streams from two real-world video datasets with a testbed that consists of 24 cameras and a 4-GPU edge cluster. Our results show that Distream consistently outperforms the status quo in terms of throughput, latency, and latency service level objective (SLO) miss rate.
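The camera-edge workload partitioning described above can be illustrated with a minimal sketch. This is not Distream's actual scheduler; the per-camera capacity and the greedy spill policy are assumptions made for illustration:

```python
# Illustrative sketch (not Distream's algorithm): keep as many queued
# frames as each smart camera can process locally, and spill the excess
# to a shared edge cluster until its capacity is exhausted.

def partition_workload(camera_queues, camera_capacity, edge_capacity):
    """Return (frames kept per camera, frames offloaded to the edge)."""
    local, offloaded = [], 0
    for q in camera_queues:
        keep = min(q, camera_capacity)           # process locally
        spill = q - keep                         # excess beyond local capacity
        take = min(spill, edge_capacity - offloaded)  # edge absorbs what it can
        offloaded += take
        local.append(keep + (spill - take))      # leftovers wait at the camera
    return local, offloaded

local, off = partition_workload([10, 2, 7], camera_capacity=5, edge_capacity=8)
# → local == [5, 2, 5], off == 7
```

A real system would weigh per-frame inference cost and network delay rather than raw frame counts, but the sketch captures the two-level balancing idea.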

66 citations

01 Jan 2018
TL;DR: This paper proposes a new approach that leverages edge computing to deploy edge functions which gather information about incoming traffic and relay it via a fast-path to a nearby detection service, accelerating the detection and arrest of DDoS attacks and limiting their damaging impact.
Abstract: Application-level DDoS attacks mounted using compromised IoT devices are emerging as a critical problem. The application-level and seemingly legitimate nature of traffic in such attacks renders most existing solutions ineffective, and the sheer amount and distribution of the generated traffic make mitigation extremely costly. This paper proposes a new approach which leverages edge computing to deploy edge functions that gather information about incoming traffic and communicate that information via a fast-path with a nearby detection service. This accelerates the detection and the arrest of such attacks, limiting their damaging impact. Preliminary investigation shows promise for up to 10x faster detection that reduces up to 82% of the Internet traffic due to IoT-DDoS.
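The edge-function idea above can be sketched as follows. This is a hypothetical illustration; the per-source rate summary and the threshold rule are assumptions, not the paper's design:

```python
# Hypothetical edge-function sketch: aggregate incoming requests into a
# compact per-source rate summary that could be pushed over a fast-path
# to a nearby detection service, which flags anomalously chatty sources.
from collections import Counter

def summarize_traffic(requests, window_s):
    """requests: iterable of (source, payload); returns {source: req/s}."""
    counts = Counter(src for src, _ in requests)
    return {src: n / window_s for src, n in counts.items()}

def detect_ddos(rate_summary, threshold_rps):
    """Return the set of sources whose request rate exceeds the threshold."""
    return {src for src, rate in rate_summary.items() if rate > threshold_rps}

summary = summarize_traffic([("10.0.0.1", 0)] * 50 + [("10.0.0.2", 0)] * 2,
                            window_s=10)
suspects = detect_ddos(summary, threshold_rps=1.0)  # → {"10.0.0.1"}
```

Shipping only the summary, rather than raw traffic, is what keeps the fast-path cheap relative to the attack volume.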

66 citations

Journal ArticleDOI
TL;DR: This paper jointly considers communication, caching, and computation (3C) to reduce infotainment content retrieval delay for smart cars and proposes a joint solution based on the alternating direction method of multipliers, which operates in a distributed manner.
Abstract: The remarkable prevalence of cloud computing has enabled smart cars to provide infotainment services. However, retrieving infotainment contents from long-distance data centers incurs significant delay, hindering the delivery of stringent latency-aware infotainment services. Multi-access edge computing is a promising option to meet strict latency requirements, but it imposes severe resource constraints with respect to caching and computation. Similarly, the communication resources used to fetch infotainment contents are scarce. In this paper, we jointly consider communication, caching, and computation (3C) to reduce infotainment content retrieval delay for smart cars. We formulate the problem as a mixed-integer, nonlinear, nonconvex optimization to minimize the latency. Furthermore, we relax the formulated NP-hard problem to a linear program. Then, we propose a joint solution (3C) based on the alternating direction method of multipliers (ADMM), which operates in a distributed manner. We compare the proposed 3C solution with greedy, random, and centralized approaches. Simulation results reveal that the proposed solution reduces delay by up to 9% and 28% compared to the greedy and random approaches, respectively.
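To see why ADMM lends itself to distributed operation, consider a toy consensus problem. This is a generic consensus-averaging sketch, not the paper's 3C formulation; the objective and parameters are invented for illustration:

```python
# Toy consensus ADMM (illustrative only): N agents jointly minimize
# sum_i (x - a_i)^2, whose optimum is the mean of a. Each agent keeps a
# local copy x_i; ADMM drives all copies toward the consensus value z
# using only a local step, a cheap averaging step, and a dual update.

def consensus_admm(a, rho=1.0, iters=200):
    n = len(a)
    x = [0.0] * n   # local primal variables (one per agent)
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # global consensus variable
    for _ in range(iters):
        # local step: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a_i + rho * (z - u_i)) / (2 + rho)
             for a_i, u_i in zip(a, u)]
        # global step: average the agents' (x_i + u_i)
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual step: penalize each agent's disagreement with z
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

consensus_admm([1, 2, 3, 4, 5, 6])  # → ≈ 3.5 (the mean)
```

Only the averaging step needs coordination; the primal and dual updates run independently per agent, which is the property the paper exploits for a distributed 3C solution.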

66 citations

Journal ArticleDOI
TL;DR: This paper focuses on the development of a distributed computing and storage infrastructure that enables the deployment of applications and services at the edge of the network, allowing operators to offer a virtualized environment in which enterprise customers and industries can implement applications and services close to end users.
Abstract: Future 5G cellular networks are expected to play a major role in supporting the Internet of Things (IoT) due to their ubiquitous coverage, plug-and-play configuration, and embedded security. Besides connectivity, however, the IoT will need computation and storage in proximity to sensors and actuators to support time-critical and opportunistic applications. Mobile-edge computing (MEC) is currently under standardization as a novel paradigm expected to enrich future broadband communication networks [1], [2]. With MEC, traditional networks will be empowered by placing cloud-computing-like capabilities within the radio access network, in an MEC server located in close proximity to end users. Such a distributed computing and storage infrastructure will enable the deployment of applications and services at the edge of the network, allowing operators to offer a virtualized environment in which enterprise customers and industries can implement applications and services close to end users.

66 citations

Journal ArticleDOI
TL;DR: This article investigates backscatter-assisted data offloading in OFDMA-based wireless-powered MEC for IoT systems and shows that the proposed fast-efficient algorithm (FEA) reaches a near-globally-optimal solution at much lower complexity than benchmark schemes.
Abstract: Mobile-edge computing (MEC) has emerged as a prominent technology to overcome sudden demands on computation-intensive applications of the Internet of Things (IoT) with finite processing capabilities. Nevertheless, limited energy resources also seriously hinder IoT devices from offloading tasks that consume high power in active RF communications. Despite the development of energy harvesting (EH) techniques, the energy harvested from surrounding environments can be inadequate for power-hungry tasks. Fortunately, backscatter communication (Backcom) is an intriguing technology to narrow the gap between the power needed for communication and the harvested power. Motivated by these considerations, this article investigates backscatter-assisted data offloading in OFDMA-based wireless-powered (WP) MEC for IoT systems. Specifically, we aim to maximize the sum computation rate by jointly optimizing the transmit power at the gateway (GW), the backscatter coefficient, the time-splitting (TS) ratio, and the binary decision-making matrices. This problem is challenging to solve due to its nonconvexity. To find solutions, we first simplify the problem by determining the optimal values of the GW transmit power and the backscatter coefficient. Then, the original problem is decomposed into two subproblems: TS-ratio optimization with given offloading decision matrices, and offloading-decision optimization with a given TS ratio. In particular, a closed-form expression for the TS ratio is obtained, which greatly reduces the CPU execution time. Based on the solutions of the two subproblems, an efficient algorithm, termed the fast-efficient algorithm (FEA), is proposed by leveraging the block coordinate descent method. It is compared with exhaustive search (ES), a bisection-based algorithm (BA), edge computing (EC), and local computing (LC) as reference methods. The FEA proves the best solution, yielding a near-globally-optimal result at much lower complexity than the benchmark schemes. For instance, the CPU execution time of the FEA is about 0.029 s in a 50-user network, which is tailored for ultralow-latency IoT applications.
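The alternating structure of block coordinate descent, which the FEA leverages, can be shown on a toy objective. The objective and both closed-form updates below are invented for illustration and are not the paper's formulation:

```python
# Generic block coordinate descent sketch: maximize
#   g(x, y) = -(x - y/2)^2 - (y - 4)^2
# by alternating exact (closed-form) updates of each block while the
# other is held fixed, mirroring how the FEA alternates a closed-form
# TS-ratio update with an offloading-decision update.

def bcd(x0=0.0, y0=0.0, iters=50):
    x, y = x0, y0
    for _ in range(iters):
        x = y / 2          # argmax_x g(x, y) with y fixed
        y = (x + 8) / 2.5  # argmax_y g(x, y) with x fixed
    return x, y

bcd()  # → converges to the optimum (x, y) = (2.0, 4.0)
```

Because each block update is a closed-form expression rather than an inner optimization loop, each iteration is cheap, which is the source of the FEA's low per-iteration cost.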

66 citations


Network Information
Related Topics (5)
Wireless sensor network
142K papers, 2.4M citations
93% related
Network packet
159.7K papers, 2.2M citations
93% related
Wireless network
122.5K papers, 2.1M citations
93% related
Server
79.5K papers, 1.4M citations
93% related
Key distribution in wireless sensor networks
59.2K papers, 1.2M citations
92% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,471
2022    3,274
2021    2,978
2020    3,397
2019    2,698
2018    1,649