Topic

Edge computing

About: Edge computing is a research topic. Over the lifetime, 11,657 publications have been published within this topic, receiving 148,533 citations.


Papers
Journal ArticleDOI
TL;DR: This article proposes a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge (i.e., base stations and autonomous vehicles) for coordinated content delivery and investigates the research challenges of wireless edge caching and vehicular content sharing.
Abstract: Automated driving is coming with enormous potential for safer, more convenient, and more efficient transportation systems. Besides onboard sensing, autonomous vehicles can also access various cloud services, such as high definition maps and dynamic path planning, through cellular networks to precisely understand the real-time driving environments. However, these automated driving services, which have large content volume, are time-varying, location-dependent, and delay-constrained. Therefore, cellular networks will face the challenge of meeting this extreme performance demand. To cope with the challenge, by leveraging the emerging mobile edge computing technique, in this article we first propose a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge (i.e., base stations and autonomous vehicles) for coordinated content delivery. We then investigate the research challenges of wireless edge caching and vehicular content sharing. Finally, we propose potential solutions to these challenges and evaluate them using real and synthetic traces. Simulation results demonstrate that the proposed solutions can significantly reduce the backhaul and wireless bottlenecks of cellular networks while ensuring the quality of automated driving services.

224 citations
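To make the coordinated content-delivery idea above concrete, here is a minimal Python sketch of a two-level lookup in which a vehicle's local cache is tried before the base-station cache, and the backhaul is only used on a double miss. The LRUCache class, the fetch_content helper, and the LRU replacement policy are illustrative assumptions, not the caching scheme evaluated in the paper.

```python
# Minimal sketch of a two-level (vehicle / base-station) content lookup.
# Illustrative only: class and method names are hypothetical, not from the paper.

class LRUCache:
    """Fixed-capacity cache with least-recently-used eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}          # content_id -> content payload
        self.order = []          # access order, most recent last

    def get(self, content_id):
        if content_id in self.items:
            self.order.remove(content_id)
            self.order.append(content_id)
            return self.items[content_id]
        return None

    def put(self, content_id, payload):
        if content_id in self.items:
            self.order.remove(content_id)
        elif len(self.items) >= self.capacity:
            evicted = self.order.pop(0)      # evict the least recently used entry
            del self.items[evicted]
        self.items[content_id] = payload
        self.order.append(content_id)


def fetch_content(content_id, vehicle_cache, bs_cache, fetch_from_backhaul):
    """Try the vehicle cache first, then the base station, then the backhaul."""
    payload = vehicle_cache.get(content_id)
    if payload is not None:
        return payload, "vehicle"
    payload = bs_cache.get(content_id)
    if payload is not None:
        vehicle_cache.put(content_id, payload)   # populate the lower level
        return payload, "base_station"
    payload = fetch_from_backhaul(content_id)    # slowest path: core network
    bs_cache.put(content_id, payload)
    vehicle_cache.put(content_id, payload)
    return payload, "backhaul"
```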

Journal ArticleDOI
TL;DR: A new architecture that can dynamically orchestrate edge computing and caching resources to improve system utility by making full use of AI-based algorithms is proposed and a novel resource management scheme is developed by exploiting deep reinforcement learning.
Abstract: Recent advances in edge computing and caching have significant impacts on the developments of vehicular networks. Nevertheless, the heterogeneous requirements of on-vehicle applications and the time variability on popularity of contents bring great challenges for edge servers to efficiently utilize their resources. Moreover, the high mobility of smart vehicles adds substantial complexity in jointly optimizing edge computing and caching. Artificial intelligence (AI) can greatly enhance the cognition and intelligence of vehicular networks and thus assist in optimally allocating resources for problems with diverse, time-variant, and complex features. In this article, we propose a new architecture that can dynamically orchestrate edge computing and caching resources to improve system utility by making full use of AI-based algorithms. Then we formulate a joint edge computing and caching scheme to maximize system utility and develop a novel resource management scheme by exploiting deep reinforcement learning. Numerical results demonstrate the effectiveness of the proposed scheme.

223 citations
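The joint computing-and-caching orchestration described above is, at its core, a Markov decision process solved with reinforcement learning. The toy sketch below uses plain tabular Q-learning over a hypothetical two-resource state and action encoding to keep the example self-contained; the paper itself uses deep reinforcement learning and a far richer vehicular system model.

```python
# Toy sketch of RL-based resource orchestration formulated as an MDP.
# The state/action encoding and reward are hypothetical stand-ins.

import random
from collections import defaultdict

ACTIONS = [(c, k) for c in range(3) for k in range(3)]  # (cache units, compute units) per slot

def step(state, action):
    """Hypothetical environment: reward is demand served minus a resource cost."""
    cache_demand, compute_demand = state
    cache_alloc, compute_alloc = action
    served = min(cache_demand, cache_alloc) + min(compute_demand, compute_alloc)
    cost = 0.1 * (cache_alloc + compute_alloc)                 # penalize over-allocation
    next_state = (random.randint(0, 2), random.randint(0, 2))  # new demand arrives
    return next_state, served - cost

q = defaultdict(float)   # (state, action) -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2
state = (1, 1)
for _ in range(20000):
    if random.random() < eps:
        action = random.choice(ACTIONS)                         # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])      # exploit
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print(max(ACTIONS, key=lambda a: q[((2, 2), a)]))  # learned allocation under high demand
```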

Journal ArticleDOI
Xiaoyu Qiu, Luobin Liu, Wuhui Chen, Zicong Hong, Zibin Zheng
TL;DR: This paper formulates the online offloading problem as a Markov decision process by considering both the blockchain mining tasks and data processing tasks, and introduces an adaptive genetic algorithm into the exploration of deep reinforcement learning to effectively avoid useless exploration and speed up convergence without reducing performance.
Abstract: Offloading computation-intensive tasks (e.g., blockchain consensus processes and data processing tasks) to the edge/cloud is a promising solution for blockchain-empowered mobile edge computing. However, the traditional offloading approaches (e.g., auction-based and game-theory approaches) fail to adjust the policy according to the changing environment and cannot achieve long-term performance. Moreover, the existing deep reinforcement learning-based offloading approaches suffer from the slow convergence caused by high-dimensional action space. In this paper, we propose a new model-free deep reinforcement learning-based online computation offloading approach for blockchain-empowered mobile edge computing in which both mining tasks and data processing tasks are considered. First, we formulate the online offloading problem as a Markov decision process by considering both the blockchain mining tasks and data processing tasks. Then, to maximize long-term offloading performance, we leverage deep reinforcement learning to accommodate highly dynamic environments and address the computational complexity. Furthermore, we introduce an adaptive genetic algorithm into the exploration of deep reinforcement learning to effectively avoid useless exploration and speed up the convergence without reducing performance. Finally, our experimental results demonstrate that our algorithm can converge quickly and outperform three benchmark policies.

223 citations
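One way to read the adaptive-genetic-algorithm idea above is that exploration does not sample the high-dimensional binary offloading vector uniformly at random, but instead evolves candidate actions toward ones the current value estimate already favors. The sketch below illustrates that pattern with a hypothetical q_value function standing in for the DRL value network; the GA operators, population sizes, and toy cost are illustrative assumptions, not the paper's design.

```python
# Sketch of GA-assisted exploration over a binary offloading action vector.
# q_value(state, action) is a hypothetical scorer; in the paper this role
# is played by the deep reinforcement learning value network.

import random

N_TASKS = 8   # each bit: 0 = execute locally, 1 = offload to the edge

def mutate(action, rate=0.1):
    return tuple(b ^ 1 if random.random() < rate else b for b in action)

def crossover(a, b):
    cut = random.randint(1, N_TASKS - 1)
    return a[:cut] + b[cut:]

def ga_explore(state, q_value, population=20, generations=5, elite=5):
    """Search promising offloading decisions instead of sampling uniformly."""
    pop = [tuple(random.randint(0, 1) for _ in range(N_TASKS)) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda a: q_value(state, a), reverse=True)
        parents = pop[:elite]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(population - elite)]
        pop = parents + children
    return max(pop, key=lambda a: q_value(state, a))

# Example with a stand-in value function that prefers offloading heavy tasks.
task_load = [3, 1, 4, 1, 5, 9, 2, 6]
toy_q = lambda state, action: sum(l * b for l, b in zip(task_load, action)) - 2 * sum(action)
print(ga_explore(state=None, q_value=toy_q))
```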

Journal ArticleDOI
TL;DR: A dynamic network virtualization technique is proposed to integrate the network resources, and a cooperative computation offloading (CCO) model is further designed to achieve parallel computation in STNs.
Abstract: The high-speed satellite-terrestrial network (STN) is an indispensable alternative in future mobile communication systems. In this article, we first introduce the architecture and application scenarios of STNs, and then investigate possible ways to implement the mobile edge computing (MEC) technique for QoS improvement in STNs. We propose satellite MEC (SMEC), in which a user equipment without a proximal MEC server can also enjoy MEC services via satellite links. We propose a dynamic network virtualization technique to integrate the network resources, and further design a cooperative computation offloading (CCO) model to achieve parallel computation in STNs. Task scheduling models in SMEC are discussed in detail, and a preliminary simulation is conducted to evaluate the performance of the proposed CCO model in SMEC.

223 citations
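The cooperative computation offloading (CCO) idea above amounts to splitting a task across the local device, a satellite MEC server, and a terrestrial MEC server so that the segments execute in parallel. The sketch below estimates completion time for a given split and grid-searches a good one; the simple latency model, data rates, CPU speeds, and propagation delay are illustrative assumptions rather than parameters from the paper.

```python
# Back-of-the-envelope sketch of cooperative (parallel) offloading in an STN.
# All numbers and the latency model are illustrative assumptions.

def completion_time(split, task_bits, cpu_cycles_per_bit,
                    local_cps, sat_cps, terr_cps,
                    sat_rate_bps, terr_rate_bps, sat_prop_delay_s):
    """split = (local, satellite, terrestrial) fractions summing to 1.
    The segments run in parallel; the task finishes when the slowest does."""
    f_loc, f_sat, f_ter = split
    t_local = f_loc * task_bits * cpu_cycles_per_bit / local_cps
    t_sat = (f_sat * task_bits / sat_rate_bps + sat_prop_delay_s
             + f_sat * task_bits * cpu_cycles_per_bit / sat_cps)
    t_ter = (f_ter * task_bits / terr_rate_bps
             + f_ter * task_bits * cpu_cycles_per_bit / terr_cps)
    return max(t_local, t_sat, t_ter)

# Pick the best split over a coarse grid of 10% increments.
best = min(
    ((l / 10, s / 10, (10 - l - s) / 10)
     for l in range(11) for s in range(11 - l)),
    key=lambda split: completion_time(
        split, task_bits=8e6, cpu_cycles_per_bit=100,
        local_cps=1e9, sat_cps=5e9, terr_cps=20e9,
        sat_rate_bps=20e6, terr_rate_bps=50e6, sat_prop_delay_s=0.12),
)
print(best)
```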

Journal ArticleDOI
TL;DR: This article presents a mobility-aware hierarchical MEC framework for green and low-latency IoT, and deploys a game-theoretic approach for computation offloading in order to optimize the utility of the service providers while also reducing the energy cost and the task execution time of the smart devices.
Abstract: IoT, a heterogeneous interconnection of smart devices, is a great platform to develop novel mobile applications. Resource-constrained smart devices, however, often become the bottleneck in fully realizing such developments, especially when it comes to computation-intensive and low-latency-demanding applications. MEC is a promising approach to address such challenges. In this article, we focus on MEC applications for IoT, and address energy efficiency as well as offloading performance of such applications in terms of end-user experience. In this regard, we present a mobility-aware hierarchical MEC framework for green and low-latency IoT. We deploy a game-theoretic approach for computation offloading in order to optimize the utility of the service providers while also reducing the energy cost and the task execution time of the smart devices. Numerical results indicate that the proposed scheme brings significant enhancement in both energy efficiency and latency performance of MEC applications for IoT.

222 citations
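The game-theoretic offloading described above can be pictured as best-response dynamics: each device repeatedly compares a local-execution cost against an edge cost that grows as more devices offload, until no device wants to switch. The sketch below uses an assumed weighted energy-plus-latency cost and equal sharing of the edge CPU among offloaders; it is not the paper's exact utility function, and all parameter values are illustrative.

```python
# Sketch of best-response dynamics for a multi-user computation offloading game.
# The cost model (energy + latency, edge CPU shared equally) is an assumption.

def local_cost(cycles, local_cps=1e9, energy_per_cycle=1e-9, w_time=1.0, w_energy=1.0):
    return w_time * cycles / local_cps + w_energy * cycles * energy_per_cycle

def edge_cost(cycles, n_offloaders, edge_cps=20e9, rate_bps=10e6, bits=2e6,
              tx_power_w=0.5, w_time=1.0, w_energy=1.0):
    share = edge_cps / max(n_offloaders, 1)           # edge CPU shared equally
    t = bits / rate_bps + cycles / share              # upload time + remote execution
    e = tx_power_w * bits / rate_bps                  # device spends only transmit energy
    return w_time * t + w_energy * e

def best_response(tasks, iters=50):
    """tasks: list of per-device CPU cycle demands. Returns offload decision per device."""
    offload = [False] * len(tasks)
    for _ in range(iters):
        changed = False
        for i, cycles in enumerate(tasks):
            n_others = sum(offload) - (1 if offload[i] else 0)
            prefer_edge = edge_cost(cycles, n_others + 1) < local_cost(cycles)
            if prefer_edge != offload[i]:
                offload[i], changed = prefer_edge, True
        if not changed:                               # no device wants to deviate
            break
    return offload

print(best_response([5e8, 2e9, 8e9, 1e9, 3e9]))
```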


Network Information
Related Topics (5)
Wireless sensor network: 142K papers, 2.4M citations, 93% related
Network packet: 159.7K papers, 2.2M citations, 93% related
Wireless network: 122.5K papers, 2.1M citations, 93% related
Server: 79.5K papers, 1.4M citations, 93% related
Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations, 92% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,471
2022    3,274
2021    2,978
2020    3,397
2019    2,698
2018    1,649