Author

Celimuge Wu

Other affiliations: Beijing Institute of Technology
Bio: Celimuge Wu is an academic researcher from the University of Electro-Communications. The author has contributed to research on topics including vehicular ad hoc networks and wireless ad hoc networks. The author has an h-index of 29 and has co-authored 186 publications receiving 2,544 citations. Previous affiliations of Celimuge Wu include Beijing Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: This paper considers MEC for a representative mobile user in an ultradense sliced RAN, where multiple base stations are available for computation offloading, and proposes a double deep Q-network (DQN)-based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of network dynamics.
Abstract: To improve the quality of the computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm that provides computing capabilities in close proximity within a sliced radio access network (RAN) supporting both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it for MEC server execution should adapt to the time-varying network dynamics. This paper considers MEC for a representative mobile user in an ultradense sliced RAN, where multiple base stations (BSs) are available for computation offloading. The problem of finding an optimal computation offloading policy is modeled as a Markov decision process, where the objective is to maximize the long-term utility performance, with each offloading decision made based on the task queue state, the energy queue state, and the channel qualities between the mobile user and the BSs. To break the curse of high dimensionality in the state space, we first propose a double deep Q-network (DQN)-based strategic computation offloading algorithm to learn the optimal policy without a priori knowledge of network dynamics. Then, motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for solving the stochastic computation offloading problem. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.

528 citations
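
As a concrete illustration of the learning machinery described above, the sketch below shows the double-DQN target computation in numpy: the online network selects the best next action, while the target network evaluates it, which is what curbs the overestimation bias of vanilla DQN. All names, shapes, and the toy transition batch are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN targets for a batch of transitions.

    q_online_next : (batch, n_actions) next-state Q-values from the online net
    q_target_next : (batch, n_actions) next-state Q-values from the target net
    rewards       : (batch,) immediate utilities (e.g., task-completion utility)
    dones         : (batch,) 1.0 where the episode terminated, else 0.0
    """
    # The online network picks the greedy next action...
    best_actions = np.argmax(q_online_next, axis=1)
    # ...but the target network evaluates it, reducing overestimation bias.
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch: 3 transitions, 4 candidate actions (say, local execution or
# offloading to one of three base stations; an assumed action set).
rng = np.random.default_rng(0)
print(double_dqn_targets(rng.normal(size=(3, 4)), rng.normal(size=(3, 4)),
                         rewards=np.array([1.0, 0.5, 0.0]),
                         dones=np.array([0.0, 0.0, 1.0])))
```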

Journal ArticleDOI
TL;DR: Efficient job caching is proposed to better schedule jobs based on information collected from neighboring vehicles, including GPS information, and a scheduling algorithm based on ant colony optimization is designed to solve the resulting job assignment problem.

Abstract: With the emergence of in-vehicle applications, providing the required computational capabilities is becoming a crucial problem. This paper proposes a framework named autonomous vehicular edge (AVE) for edge computing on the road, with the aim of increasing the computational capabilities of vehicles in a decentralized manner. By managing the idle computational resources on vehicles and using them efficiently, the proposed AVE framework can provide computation services in dynamic vehicular environments without requiring particular infrastructure to be deployed. Specifically, this paper introduces a workflow to support the autonomous organization of vehicular edges. Efficient job caching is proposed to better schedule jobs based on information collected from neighboring vehicles, including GPS information. A scheduling algorithm based on ant colony optimization is designed to solve this job assignment problem. Extensive simulations are conducted, and the simulation results demonstrate the superiority of this approach over competing schemes in typical urban and highway scenarios.

266 citations
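
To make the ant-colony-optimization scheduling step concrete, here is a minimal ACO sketch that assigns jobs to vehicles so as to minimize the makespan (the busiest vehicle's total load). The cost matrix and every parameter below are made-up placeholders, not the AVE framework's actual scheduler.

```python
import numpy as np

rng = np.random.default_rng(1)
n_jobs, n_vehicles = 6, 3
# cost[j, v]: hypothetical completion time of job j on vehicle v
cost = rng.uniform(1.0, 5.0, size=(n_jobs, n_vehicles))

pheromone = np.ones((n_jobs, n_vehicles))
alpha, beta, rho = 1.0, 2.0, 0.1     # pheromone/heuristic weights, evaporation
best_assign, best_makespan = None, np.inf

for _ in range(50):                  # iterations
    for _ in range(20):              # ants per iteration
        # Each ant picks a vehicle per job, favoring strong pheromone
        # trails and low completion times (heuristic = 1 / cost).
        prob = pheromone**alpha * (1.0 / cost)**beta
        prob /= prob.sum(axis=1, keepdims=True)
        assign = np.array([rng.choice(n_vehicles, p=prob[j]) for j in range(n_jobs)])
        loads = np.zeros(n_vehicles)
        for j, v in enumerate(assign):
            loads[v] += cost[j, v]
        if loads.max() < best_makespan:
            best_assign, best_makespan = assign, loads.max()
    pheromone *= 1.0 - rho                                            # evaporation
    pheromone[np.arange(n_jobs), best_assign] += 1.0 / best_makespan  # deposit

print(best_assign, round(best_makespan, 3))
```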

Journal ArticleDOI
TL;DR: A taxonomy of edge computing in 5G is established, which gives an overview of existing state-of-the-art solutions for edge computing in 5G on the basis of objectives, computational platforms, attributes, 5G functions, performance measures, and roles.
Abstract: 5G is the next-generation cellular network that aspires to achieve substantial improvements in quality of service, such as higher throughput and lower latency. Edge computing is an emerging technology that enables the evolution to 5G by bringing cloud capabilities near the end users (or user equipment, UEs) in order to overcome the intrinsic problems of the traditional cloud, such as high latency and the lack of security. In this paper, we establish a taxonomy of edge computing in 5G, which gives an overview of existing state-of-the-art solutions for edge computing in 5G on the basis of objectives, computational platforms, attributes, 5G functions, performance measures, and roles. We also present other important aspects, including the key requirements for its successful deployment in 5G and the applications of edge computing in 5G. Then, we explore, highlight, and categorize recent advancements in edge computing for 5G. By doing so, we reveal the salient features of different edge computing paradigms for 5G. Finally, open research issues are outlined.

214 citations

Journal ArticleDOI
TL;DR: A multibeam satellite IIoT in the Ka-band is proposed to realize wide-area coverage and long-distance transmissions, using nonorthogonal multiple access (NOMA) within each beam to improve the transmission rate.

Abstract: The traditional ground industrial Internet of Things (IIoT) cannot supply wireless interconnections everywhere due to its limited communication coverage. In this article, a multibeam satellite IIoT in the Ka-band is proposed to realize wide-area coverage and long-distance transmissions, using nonorthogonal multiple access (NOMA) within each beam to improve the transmission rate. To guarantee quality of service (QoS) for the satellite IIoT, the beam power is optimized to match the theoretical transmission rate with the service rate. The NOMA transmission rate for each beam is maximized by optimizing the power allocation proportion of each node, subject to constraints on the total power for the beam and the minimal transmission rate for each node within the beam. A satellite-ground integrated IIoT is then proposed, which uses the ground cellular network to supplement the satellite coverage in blocked areas. Power allocation and network selection schemes for the integrated IIoT are proposed to decrease the transmission cost. Simulation results validate the superiority of employing NOMA in the satellite IIoT and show higher transmission performance for the QoS-guaranteed resource allocation.

209 citations
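
To illustrate the intra-beam rate model that such an optimization works with, here is a minimal two-user downlink NOMA sketch that grid-searches the power-allocation proportion under a per-node minimum-rate constraint. The channel gains, bandwidth, noise power, and thresholds are made-up values, and the grid search merely stands in for the paper's optimization.

```python
import numpy as np

B, noise = 1e6, 1e-3            # bandwidth (Hz) and noise power (W), assumed
g_strong, g_weak = 1e-2, 1e-3   # channel power gains, assumed
P, R_min = 1.0, 0.2e6           # beam power budget (W), per-node rate floor (bit/s)

def rates(theta):
    """Rates when a fraction theta of the beam power goes to the weak user."""
    p_weak, p_strong = theta * P, (1.0 - theta) * P
    # The weak user decodes its own signal, treating the strong user's
    # superimposed signal as interference.
    r_weak = B * np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # The strong user removes the weak user's signal first (SIC).
    r_strong = B * np.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

best = max((th for th in np.linspace(0.01, 0.99, 99)
            if all(r >= R_min for r in rates(th))),
           key=lambda th: sum(rates(th)), default=None)
if best is not None:
    rs, rw = rates(best)
    print(f"theta={best:.2f}  strong={rs/1e6:.2f} Mb/s  weak={rw/1e6:.2f} Mb/s")
```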

Journal ArticleDOI
TL;DR: A brief survey of existing studies on FL and its use in wireless IoT is conducted, and the significance and technical challenges of applying FL in vehicular IoT, together with future research directions, are discussed.

Abstract: Federated learning (FL) is a distributed machine learning approach that achieves collaborative learning from a large amount of data belonging to different parties without sharing the raw data among the data owners. FL can effectively utilize the computing capabilities of multiple learning agents to improve learning efficiency while providing a better privacy solution for the data owners. FL has attracted tremendous interest from many industries due to growing privacy concerns. Future vehicular Internet of Things (IoT) systems, such as cooperative autonomous driving and intelligent transport systems (ITS), feature a large number of devices and privacy-sensitive data, where communication, computing, and storage resources must be efficiently utilized. FL could be a promising approach to solving these challenges. In this paper, we first conduct a brief survey of existing studies on FL and its use in wireless IoT. Then, we discuss the significance and technical challenges of applying FL in vehicular IoT, and point out future research directions.

194 citations
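
To make the FL mechanics concrete, below is a minimal federated-averaging (FedAvg) sketch on a toy linear-regression task: each client runs a few local gradient steps, and the server aggregates only the resulting model parameters, weighted by local dataset size, so raw data never leaves the clients. The data, learning rate, and round counts are illustrative assumptions, not any system from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client(n) for n in (50, 80, 120)]   # non-uniform data sizes
w = np.zeros(2)                                     # global model

for _ in range(20):                                 # communication rounds
    local_ws, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):                          # local gradient steps
            w_local -= 0.05 * (2 * X.T @ (X @ w_local - y) / len(y))
        local_ws.append(w_local)
        sizes.append(len(y))
    # Server-side FedAvg: size-weighted average of the clients' parameters.
    w = np.average(local_ws, axis=0, weights=sizes)

print("learned:", np.round(w, 3), " true:", true_w)
```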


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
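
As a concrete instance of the mail-filtering example in the abstract, here is a minimal scikit-learn sketch that learns a filter from a user's labeled messages instead of hand-written rules; the tiny corpus and labels are fabricated for illustration, and the model can simply be refit as new labels accumulate.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up messages the user kept (0) or rejected (1).
messages = [
    "meeting moved to 3pm, agenda attached",
    "lunch tomorrow?",
    "win a free prize, click now",
    "limited offer, claim your free money",
    "project deadline is friday",
    "free prize winner, act now",
]
labels = [0, 0, 1, 1, 0, 1]

# Bag-of-words features + naive Bayes: the filtering rules are learned
# from examples rather than programmed by hand.
filt = make_pipeline(CountVectorizer(), MultinomialNB())
filt.fit(messages, labels)

print(filt.predict(["free money prize", "agenda for friday meeting"]))  # [1 0]
```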

Journal ArticleDOI
TL;DR: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking, including applications of DRL for traffic routing, resource sharing, and data collection.
Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty in the network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, including, e.g., decisions or actions, given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in a reasonable time. Therefore, DRL, a combination of reinforcement learning and deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on DRL, from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking, including dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, which are all important to next-generation networks such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.

1,153 citations
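
As a reference point for the small-state regime that DRL generalizes, the sketch below runs tabular ε-greedy Q-learning on a made-up two-channel dynamic-access problem; the transition model and every hyperparameter are illustrative assumptions rather than anything from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy model: the state is the channel used last, and each (state, action)
# pair has an assumed success probability for the transmission reward.
p_success = {0: [0.9, 0.2], 1: [0.3, 0.8]}

Q = np.zeros((2, 2))                     # states x actions
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    # epsilon-greedy exploration
    action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[state]))
    reward = float(rng.random() < p_success[state][action])
    next_state = action                  # next state = channel just used
    # tabular Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))   # the greedy policy settles on the better channel
```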

Book ChapterDOI
Eric V. Denardo
01 Jan 2011
TL;DR: This chapter shows how the simplex method simplifies when it is applied to a class of optimization problems known as “network flow models,” and that when such a model has integer-valued data, the simplex method finds an optimal solution that is integer-valued.
Abstract: In this chapter, you will see how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models.” You will also see that if a network flow model has “integer-valued data,” the simplex method finds an optimal solution that is integer-valued.

828 citations
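
The chapter's integrality result can be checked on a toy instance: for a min-cost flow problem whose supplies, demands, capacities, and costs are all integers, a standard solver returns an integer-valued optimal flow. The sketch below uses networkx with made-up numbers.

```python
import networkx as nx

# A small transportation-style network; all data are integers (made up).
G = nx.DiGraph()
G.add_node("s1", demand=-4)              # supplies 4 units
G.add_node("s2", demand=-3)              # supplies 3 units
G.add_node("t1", demand=5)               # requires 5 units
G.add_node("t2", demand=2)               # requires 2 units
G.add_edge("s1", "t1", capacity=4, weight=2)
G.add_edge("s1", "t2", capacity=2, weight=5)
G.add_edge("s2", "t1", capacity=3, weight=3)
G.add_edge("s2", "t2", capacity=3, weight=1)

flow = nx.min_cost_flow(G)
print(flow)                              # every flow value is an integer
print("cost:", nx.cost_of_flow(G, flow))
```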

Book ChapterDOI
01 Jan 1997
TL;DR: In this paper, a nonlinear fractional programming problem is considered, where it is assumed that g(x) + β > 0 for all x ∈ S, that S is non-empty, and that the objective function has a finite optimal value.
Abstract: In this chapter we deal with the following nonlinear fractional programming problem: $$P:\quad \mathop{\max}\limits_{x \in S}\; q(x) = \frac{f(x) + \alpha}{g(x) + \beta}$$ where f, g: R^n → R, α, β ∈ R, and S ⊆ R^n. To simplify things, and without restricting the generality of the problem, it is usually assumed that g(x) + β > 0 for all x ∈ S, that S is non-empty, and that the objective function has a finite optimal value.

797 citations
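
A classical way to solve such a problem numerically is Dinkelbach's parametric method: repeatedly maximize f(x) + α - λ(g(x) + β) over S and update λ with the current ratio until the parametric optimum is (numerically) zero. The one-dimensional f, g, and feasible set below are made-up illustrations, not an example from the chapter.

```python
from scipy.optimize import minimize_scalar

f = lambda x: -(x - 1.0) ** 2 + 3.0      # numerator part (assumed)
g = lambda x: x + 1.0                    # denominator part (assumed)
alpha, beta = 0.0, 1.0                   # note g(x) + beta > 0 on S = [0, 2]

lam = 0.0
for _ in range(50):
    # Parametric subproblem: maximize f(x) + alpha - lam * (g(x) + beta).
    res = minimize_scalar(lambda x: -(f(x) + alpha - lam * (g(x) + beta)),
                          bounds=(0.0, 2.0), method="bounded")
    x_star, F = res.x, -res.fun
    if abs(F) < 1e-10:                   # optimum reached: the ratio equals lam
        break
    lam = (f(x_star) + alpha) / (g(x_star) + beta)

print(f"x* = {x_star:.4f}, optimal ratio q(x*) = {lam:.4f}")
```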