Author

Cheng Zhang

Bio: Cheng Zhang is an academic researcher from Waseda University. The author has contributed to research in topics including servers and cellular networks, has an h-index of 12, and has co-authored 50 publications receiving 494 citations. Previous affiliations of Cheng Zhang include Zhejiang University and Southeast University.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: A deep Q-network (DQN) based technique for task migration in MEC systems that learns the optimal task migration policy from previous experiences, without requiring advance knowledge of users' mobility patterns.
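The TL;DR gives no implementation detail, so below is a minimal sketch of the core DQN update for a migration decision, assuming a toy state vector (e.g., user location plus current server id) and one action per candidate edge server; the dimensions, reward model, and hyperparameters are illustrative, not taken from the paper.

```python
# Minimal DQN sketch for an edge task-migration decision. State layout,
# network sizes, and the reward model are illustrative assumptions.
import random
import torch
import torch.nn as nn

STATE_DIM, N_SERVERS, GAMMA = 4, 8, 0.99  # toy dimensions

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_SERVERS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_SERVERS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state, eps=0.1):
    # Epsilon-greedy choice of the edge server to migrate the task to.
    if random.random() < eps:
        return random.randrange(N_SERVERS)
    with torch.no_grad():
        return q_net(state).argmax().item()

def update(states, actions, rewards, next_states):
    # One temporal-difference step on a replayed minibatch.
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
```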

110 citations

Proceedings ArticleDOI
08 Jul 2019
TL;DR: Experimental results corroborate that CCO achieves superior performance compared with benchmarks where cross-edge collaboration is not allowed, and theoretical analysis of the complexity and effectiveness of the proposed framework is provided.
Abstract: Mobile Edge Computing has already become a new paradigm to reduce the latency in data transmission for resource-limited mobile devices by offloading computation tasks onto edge servers. However, for mobility-aware computation-intensive services, existing offloading strategies cannot handle the offloading procedure properly because of the lack of collaboration among edge servers. A data stream application is partitionable if it can be represented by a directed acyclic dataflow graph, which makes cross-edge collaboration possible. In this paper, we propose a cross-edge computation offloading (CCO) framework for partitionable applications. The transmission, execution, and coordination costs, as well as the penalty for task failure, are considered. An online algorithm based on Lyapunov optimization is proposed to jointly determine edge site selection and energy harvesting without a priori knowledge. By stabilizing the battery energy level of each mobile device around a positive constant, the proposed algorithm can achieve asymptotic optimality. Theoretical analysis of the complexity and effectiveness of the proposed framework is provided. Experimental results based on a real-life dataset corroborate that CCO can achieve superior performance compared with benchmarks where cross-edge collaboration is not allowed.
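As a rough illustration of the drift-plus-penalty idea behind such a Lyapunov-based online algorithm: in each slot the device weighs task cost against the deviation of its battery from a perturbation constant and picks the edge site minimizing the combined expression. The trade-off weight V, the constant theta, and the cost numbers below are illustrative assumptions, not the paper's exact formulation.

```python
# Per-slot site selection in a drift-plus-penalty style online offloading
# algorithm. V, theta, and the (cost, energy) numbers are illustrative.
V = 1.0  # weight trading off task cost against battery stability

def choose_site(battery, theta, sites):
    # sites: list of (site_id, cost, energy_needed) options this slot.
    # theta - battery acts as a virtual energy-queue backlog: when the
    # battery is below theta, spending energy is penalized, which is
    # what keeps the battery level stabilized around theta.
    weight = theta - battery
    site_id, _, _ = min(sites, key=lambda s: V * s[1] + weight * s[2])
    return site_id

# Low battery: the energy-frugal site wins despite a higher task cost.
print(choose_site(battery=5.0, theta=8.0,
                  sites=[(0, 2.0, 1.0), (1, 1.5, 2.5), (2, 3.0, 0.5)]))
```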

54 citations

Journal ArticleDOI
TL;DR: This paper analyzes and builds mathematical models of whether and how to offload tasks from various IoT devices to edge servers, and proposes an algorithm for IoT devices' computation offloading decisions that can also help decide whether service relocation/migration is needed.
Abstract: Collaboration spaces formed from edge servers can efficiently improve the quality of experience of service subscribers. In this paper, we first use a strategy based on the density of Internet of Things (IoT) devices and the k-means algorithm to partition the network of edge servers; we then propose an algorithm for IoT devices' computation offloading decisions, i.e., whether to offload an IoT device's workload to edge servers, and which edge server to choose if migration is needed. Combining the locations of edge servers with the geographic distribution of IoT devices can significantly improve the scheduling of network resources and satisfy the requirements of service subscribers. We analyze and build mathematical models of whether and how to offload tasks from various IoT devices to edge servers. To better simulate the operation of mobile edge servers in realistic scenarios, the input size of each IoT device is uncertain and is regarded as a random variable following some probability distribution based on long-term observations. On that basis, an algorithm using the sample average approximation method is proposed to decide whether tasks should be executed locally or offloaded. In addition, the proposed algorithm can also help decide whether service relocation/migration is needed. Finally, simulation results show that our algorithm achieves a global cost roughly 20% lower than the benchmark on a real base-station dataset from Hangzhou.
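A minimal sketch of the sample-average-approximation step described above: draw many samples of the random input size, compare the empirical mean cost of local execution against offloading, and decide accordingly. The distribution and cost coefficients are hypothetical placeholders, not the paper's values.

```python
# Sample-average-approximation check for one device: decide between local
# execution and offloading under a random input size.
import random

def saa_decision(draw_input_size, n_samples=1000,
                 local_cost_per_kb=2.0, edge_cost_per_kb=1.2,
                 fixed_tx_cost=50.0):
    sizes = [draw_input_size() for _ in range(n_samples)]
    local = sum(local_cost_per_kb * s for s in sizes) / n_samples
    offload = sum(fixed_tx_cost + edge_cost_per_kb * s
                  for s in sizes) / n_samples
    return "offload" if offload < local else "local"

# Input size modeled as N(100, 20) KB from long-term observation (made up).
print(saa_decision(lambda: random.gauss(100, 20)))
```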

50 citations

Journal ArticleDOI
TL;DR: This work designs an autonomous tracking system for a swarm of unmanned aerial vehicles (UAVs) to localize a radio frequency (RF) mobile target and proposes an enhanced multi-agent reinforcement learning scheme to coordinate multiple UAVs performing real-time target tracking.
Abstract: In this paper, we aim to design an autonomous tracking system for a swarm of unmanned aerial vehicles (UAVs) to localize a radio frequency (RF) mobile target. In the system, UAVs equipped with omnidirectional received signal strength (RSS) sensors can cooperatively search for the target with a specified tracking accuracy. To achieve fast localization and tracking in a highly dynamic channel environment (e.g., time-varying transmit power and intermittent signals), we formulate the flight decision problem as a constrained Markov decision process (CMDP) with the main objective of avoiding redundant UAV flight paths. We then propose an enhanced multi-agent reinforcement learning scheme to coordinate multiple UAVs performing real-time target tracking. The core of the proposed scheme is a feedback control system that takes into account the uncertainty of the channel estimate. We prove that the proposed algorithm converges to the optimal decision. Our simulation results show that the proposed scheme outperforms standard Q-learning and multi-agent Q-learning algorithms in terms of search time and successful localization probability.
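To make the multi-agent setup concrete, here is a minimal sketch of independent per-UAV tabular Q-learning; the action set, state encoding, and reward shaping are illustrative stand-ins and do not reproduce the paper's enhanced CMDP-constrained scheme.

```python
# Independent tabular Q-learning agents for a UAV swarm (illustrative).
import random
from collections import defaultdict

ACTIONS = ["N", "S", "E", "W", "hover"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class UAVAgent:
    def __init__(self):
        # Q[state][action]; a state could encode grid cell + quantized RSS.
        self.Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def act(self, state):
        if random.random() < EPS:
            return random.choice(ACTIONS)
        return max(self.Q[state], key=self.Q[state].get)

    def learn(self, s, a, reward, s2):
        # The reward might combine RSS improvement with a flight-path
        # penalty, echoing the paper's objective of avoiding redundancy.
        best_next = max(self.Q[s2].values())
        self.Q[s][a] += ALPHA * (reward + GAMMA * best_next - self.Q[s][a])

swarm = [UAVAgent() for _ in range(3)]  # three cooperating UAVs
```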

44 citations

Journal ArticleDOI
TL;DR: This paper targets a networked smart grid system in which future electricity generation is predicted with reasonable accuracy from weather forecasts, and schedules consumers' behavior using a Markov decision process model to optimize consumers' net benefits.
Abstract: Many recently built residential houses and factories are equipped with facilities for converting energy from green sources, such as solar energy, into electricity. Electricity consumers may input the extra electricity that they do not consume into the smart grid for sale, which is allowed by law in countries such as Japan. To reduce peak-time electricity usage, time-varying pricing schemes are usually adopted in smart grids, for both the electricity sold to consumers and the electricity purchased from consumers. Thanks to the development of cyber-physical systems and advanced technologies for communication and computation, current smart grids are typically networked, and it is possible to integrate information such as weather forecasts into such a networked smart grid. Thus, we can predict future levels of electricity generation (e.g., the energy from solar and wind sources, whose generation is predominantly affected by the weather) with high accuracy using this information and historical data. The key problem for consumers then becomes how to schedule their purchases from and sales to the networked smart grid to maximize their benefits by jointly considering the current storage status, time-varying pricing, and future electricity consumption and generation. This problem is non-trivial and is vitally important for improving smart grid utilization and attracting consumer investment in new energy generation systems, among other purposes. In this paper, we target such a networked smart grid system, in which future electricity generation is predicted with reasonable accuracy based on weather forecasts. We schedule consumers’ behaviors using a Markov decision process model to optimize the consumers’ net benefits. The results of extensive simulations show that the proposed scheme significantly outperforms the baseline competing scheme.
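A toy finite-horizon value-iteration sketch for the consumer's buy/sell/store decision, assuming discretized battery levels and known per-slot prices, predicted generation, and demand; all quantities are illustrative, not the paper's model.

```python
# Finite-horizon value iteration over discretized battery levels; prices,
# generation, and demand are known per-slot inputs (illustrative model).
LEVELS = range(0, 11)         # battery holds 0..10 energy units
ACTIONS = [-2, -1, 0, 1, 2]   # units sold to (-) or bought from (+) grid

def plan(horizon, buy_price, sell_price, gen, demand):
    V = {b: 0.0 for b in LEVELS}          # terminal value
    policy = {}
    for t in reversed(range(horizon)):
        newV = {}
        for b in LEVELS:
            best_val, best_a = float("-inf"), None
            for a in ACTIONS:
                b2 = b + a + gen[t] - demand[t]
                if b2 not in LEVELS:
                    continue              # storage bound violated
                cash = -a * buy_price[t] if a > 0 else -a * sell_price[t]
                val = cash + V[b2]
                if val > best_val:
                    best_val, best_a = val, a
            newV[b], policy[(t, b)] = best_val, best_a
        V = newV
    return policy

# Two slots with cheap-then-expensive prices (toy numbers).
print(plan(2, buy_price=[1, 3], sell_price=[0.5, 2.5],
           gen=[1, 1], demand=[1, 1]))
```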

37 citations


Cited by
Journal ArticleDOI
TL;DR: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking, and presents applications of DRL for traffic routing, resource sharing, and data collection.
Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty about the network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, including, e.g., decisions or actions, given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on DRL, from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, which are all important to next-generation networks such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying DRL.
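A back-of-the-envelope illustration of the scalability point the survey makes: even a modest network makes a tabular Q-function infeasible, which is what motivates approximating it with a deep network. The device and state counts below are arbitrary toy numbers.

```python
# Why tabular RL breaks down at network scale: the Q-table size explodes
# with the number of entities. Counts below are arbitrary toy numbers.
n_devices, states_per_device, n_actions = 50, 4, 10
table_entries = (states_per_device ** n_devices) * n_actions
print(f"Q-table entries: {table_entries:.3e}")  # ~1.268e+31, far too many
# A DQN sidesteps this by approximating Q(s, a) with a neural network
# instead of enumerating every state-action pair.
```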

1,153 citations

Journal ArticleDOI
TL;DR: In this article, the authors divide Edge Intelligence into two categories: AI for edge (Intelligence-enabled Edge Computing), which uses popular and effective AI technologies to provide better solutions to key problems in edge computing, and AI on edge (Artificial Intelligence on Edge), which studies how to carry out model training and inference on the edge.
Abstract: Along with the rapid developments in communication technologies and the surge in the use of mobile devices, a brand-new computation paradigm, Edge Computing, is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications are thriving with the breakthroughs in deep learning and the many improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there exists a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing better solutions to key problems in Edge Computing with the help of popular and effective AI technologies, while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This paper provides insights into this new inter-disciplinary field from a broader perspective. It discusses the core concepts and the research roadmap, which should provide the necessary background for potential future research initiatives in Edge Intelligence.

362 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive survey of the emerging applications of federated learning (FL) in IoT networks, exploring and analyzing the potential of FL to enable a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security.
Abstract: The Internet of Things (IoT) is penetrating many facets of our daily life with the proliferation of intelligent services and applications empowered by artificial intelligence (AI). Traditionally, AI techniques require centralized data collection and processing that may not be feasible in realistic application scenarios due to the high scalability of modern IoT networks and growing data privacy concerns. Federated Learning (FL) has emerged as a distributed collaborative AI approach that can enable many intelligent IoT applications, by allowing for AI training at distributed IoT devices without the need for data sharing. In this article, we provide a comprehensive survey of the emerging applications of FL in IoT networks, beginning from an introduction to the recent advances in FL and IoT to a discussion of their integration. Particularly, we explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. We then provide an extensive survey of the use of FL in various key IoT applications such as smart healthcare, smart transportation, Unmanned Aerial Vehicles (UAVs), smart cities, and smart industry. The important lessons learned from this review of the FL-IoT services and applications are also highlighted. We complete this survey by highlighting the current challenges and possible directions for future research in this booming area.
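To ground the "training without data sharing" idea, here is a minimal FedAvg-style sketch in which each device fits a linear model locally and only weights are aggregated; the model, optimizer, and weighting are deliberate simplifications, not any specific FL-IoT system from the survey.

```python
# FedAvg-style round: devices train a linear model on private data and
# share only weights. Model, optimizer, and weighting are simplifications.
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=5):
    # A few steps of least-squares gradient descent on one device's data.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg_round(w_global, device_data):
    # Aggregate local models, weighted by each device's sample count.
    updates = [local_update(w_global.copy(), X, y) for X, y in device_data]
    sizes = np.array([len(y) for _, y in device_data], dtype=float)
    return sum((n / sizes.sum()) * w for n, w in zip(sizes, updates))

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20))
           for _ in range(4)]
w = np.zeros(3)
for _ in range(10):  # communication rounds; raw data never leaves devices
    w = fed_avg_round(w, devices)
print(w)
```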

319 citations