Author

Bei Liu

Bio: Bei Liu is an academic researcher from Tsinghua University. The author has contributed to research in the topics of wireless networks and heterogeneous networks. The author has an h-index of 3 and has co-authored 7 publications receiving 41 citations.

Papers
Journal ArticleDOI
TL;DR: Compared with the commonly used simple additive weighting (SAW), random access selection (RAS), and price-based and QoS-based network selection schemes, this scheme performs better at improving average user satisfaction and reducing access failures.
Abstract: With the arrival of fifth-generation (5G) mobile communications, the growth of user services in mobile edge computing (MEC) and the personalization of QoS requirements have posed great challenges for access selection in heterogeneous wireless networks (HWNs). Based on multiattribute decision theory and fuzzy logic theory, we propose a novel network selection scheme for multiservice QoS requirements in MEC. The main procedures of the scheme are a dynamic adaptive process, a fuzzification process, hierarchical analysis, and integrated attribute assessment. The proposed scheme efficiently reduces the ping-pong effect and accurately selects the appropriate network in a dynamic environment. Simulation results show that our scheme can select the access network according to the type of user service and decide whether to switch networks. In addition, compared with the commonly used simple additive weighting (SAW), random access selection (RAS), and price-based and QoS-based network selection schemes, our scheme performs better at improving average user satisfaction and reducing access failures.
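
As a rough illustration of the multiattribute, fuzzy-logic flavor of such a scheme, here is a minimal sketch; the attribute names, membership functions, candidate networks, and weights below are hypothetical stand-ins, not values from the paper.

```python
# Minimal sketch of fuzzy multiattribute network scoring (hypothetical
# attributes and weights; the paper's actual membership functions and
# weighting are not reproduced here).

def fuzzify(value, lo, hi):
    """Map a raw attribute value onto [0, 1] with a linear membership function."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

# Candidate networks: (bandwidth Mb/s, delay ms, price units) -- illustrative.
networks = {
    "WLAN": (50.0, 40.0, 1.0),
    "LTE":  (20.0, 25.0, 3.0),
    "5G":   (100.0, 10.0, 5.0),
}

# Service-dependent weights (e.g., a delay-sensitive service) -- assumed values.
weights = {"bandwidth": 0.3, "delay": 0.5, "price": 0.2}

def score(bw, delay, price):
    # Benefit attributes rise with value; cost attributes (delay, price) fall.
    return (weights["bandwidth"] * fuzzify(bw, 0, 100)
            + weights["delay"] * (1.0 - fuzzify(delay, 0, 50))
            + weights["price"] * (1.0 - fuzzify(price, 0, 5)))

best = max(networks, key=lambda n: score(*networks[n]))
print("selected network:", best)
```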

35 citations

Proceedings ArticleDOI
09 May 2019
TL;DR: The proposed algorithm formulates the computation offloading problem as an energy and time optimization problem according to user experience and obtains the optimal cost strategy on the basis of deep Q-learning (DQN).
Abstract: Mobile edge computing (MEC) can significantly enhance device computing power by offloading service workflows from mobile devices to mobile network edges. How to implement an efficient computation offloading mechanism is therefore a major challenge. To address this problem, this paper aims to reduce the application completion time and the energy consumption of the user device (UD) during offloading. The proposed algorithm formulates the computation offloading problem as an energy and time optimization problem according to user experience and obtains the optimal cost strategy on the basis of deep Q-learning (DQN). Simulation results show that, compared with the local execution algorithm and the random offloading algorithm, the proposed computation offloading algorithm can significantly reduce energy consumption and shorten the completion time of service workflow execution.
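
The paper's agent is a deep Q-network; the sketch below substitutes a small Q-table for the neural approximator to show the shape of the cost-driven update. The state space, action set, and the weighted time/energy cost model are simplified assumptions, not the paper's.

```python
import random
from collections import defaultdict

# Tabular stand-in for DQN: learn whether to run a task locally or offload,
# minimizing a weighted sum of completion time and device energy.
ACTIONS = ("local", "offload")
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)  # (state, action) -> value

def cost(state, action):
    """Hypothetical weighted time/energy cost; 'state' is channel quality 0..4."""
    channel = state
    if action == "local":
        time, energy = 1.0, 1.0            # fixed local execution cost
    else:
        time = 2.0 / (1 + channel)         # better channel -> faster offload
        energy = 0.3                       # radio energy only
    return 0.5 * time + 0.5 * energy       # equal weights (assumed)

def step(state):
    if random.random() < EPS:              # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = -cost(state, action)          # lower cost -> higher reward
    next_state = random.randrange(5)       # channel evolves randomly
    target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    return next_state

s = random.randrange(5)
for _ in range(5000):
    s = step(s)
# Learned policy: offload when the channel is good, run locally when it is poor.
print({ch: max(ACTIONS, key=lambda a: Q[(ch, a)]) for ch in range(5)})
```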

19 citations

Proceedings ArticleDOI
01 Apr 2018
TL;DR: A downlink optimal power allocation scheme for UDNs based on virtual cell (VC) local information is proposed; interference analysis is simplified using graph theory, and non-cooperative game problems with a penalty term are investigated.
Abstract: In order to meet the demand for ultra-high-traffic communication after 2020, ultra-dense networks (UDNs) are considered a key 5G technology. By deploying a large number of access points (APs), a UDN achieves extremely high frequency reuse, which can increase system capacity in hot spots (offices, subways, etc.) by a factor of hundreds. In this paper, we propose a downlink optimal power allocation scheme for UDNs based on virtual cell (VC) local information. By applying graph theory, the analysis of interference in the system is simplified. We investigate non-cooperative game problems with a penalty term and propose a corresponding distributed optimal power control algorithm. Simulation results show that the new algorithm improves on the average power allocation algorithm.
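
A toy version of a penalized non-cooperative power game is sketched below; the link gains, noise level, and penalty coefficient are made up, and the paper's exact utility function is not reproduced. Each player best-responds by maximizing log(1 + SINR) minus a linear price on its own power.

```python
# Toy distributed power control: each AP repeatedly best-responds to the
# interference it currently sees, maximizing log(1 + SINR) minus a linear
# price on its own transmit power (the penalty term of the game).
G = [[1.0, 0.1, 0.1],
     [0.1, 1.0, 0.1],
     [0.1, 0.1, 1.0]]   # assumed link gains: G[i][j] = gain from AP j to user i
NOISE, PRICE, P_MAX = 0.1, 0.5, 2.0
p = [1.0, 1.0, 1.0]

for _ in range(50):     # iterate until an (approximate) equilibrium
    for i in range(len(p)):
        interference = NOISE + sum(G[i][j] * p[j]
                                   for j in range(len(p)) if j != i)
        # Best response of u_i(p_i) = log(1 + G_ii * p_i / I_i) - PRICE * p_i:
        # du/dp_i = G_ii / (I_i + G_ii * p_i) - PRICE = 0
        best = 1.0 / PRICE - interference / G[i][i]
        p[i] = min(max(best, 0.0), P_MAX)   # project onto [0, P_MAX]

print("equilibrium powers:", [round(x, 3) for x in p])
```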

7 citations

Proceedings ArticleDOI
08 Apr 2019
TL;DR: This paper improves the reward function in Q-learning using the AHP (analytic hierarchy process) method, analyzes network resource competition in the multi-agent scenario, and proposes two network selection algorithms, SANSA and MANSA, based on Q-learning and Nash Q-learning respectively.
Abstract: With the development of heterogeneous wireless networks, building a reasonable user network selection mechanism in 5G heterogeneous networks is particularly important. In this paper, we improve the reward function in Q-learning using the AHP (analytic hierarchy process) method and briefly analyze network resource competition in the multi-agent scenario. We then propose two network selection algorithms, SANSA (single-agent network selection algorithm) and MANSA (multi-agent network selection algorithm), based on Q-learning and Nash Q-learning respectively, to deal with the network selection problem. Simulations show that the proposed algorithms achieve better network load balancing than the baseline scheme. In addition, MANSA can effectively reduce total system power consumption.
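
A sketch of deriving AHP weights from a pairwise comparison matrix (power iteration toward the principal eigenvector) and folding them into a reward function follows; the pairwise judgments and attribute names are hypothetical, not the paper's.

```python
# AHP weight derivation (power iteration for the principal eigenvector) and
# an AHP-weighted reward, as a sketch of improving the Q-learning reward.
# The pairwise judgments below are hypothetical, not the paper's.
A = [[1.0, 3.0, 5.0],    # bandwidth vs delay vs load (assumed judgments)
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]

w = [1.0] * 3
for _ in range(100):                               # power iteration
    w = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(w)
    w = [x / s for x in w]                         # normalize to a weight vector

def reward(bandwidth, delay, load):
    """AHP-weighted reward over normalized attributes (benefit vs cost)."""
    return w[0] * bandwidth + w[1] * (1 - delay) + w[2] * (1 - load)

print("AHP weights:", [round(x, 3) for x in w])
print("reward example:", round(reward(0.8, 0.2, 0.5), 3))
```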

7 citations

Journal ArticleDOI
TL;DR: A network selection algorithm based on an evolutionary game, named NS-EG, is proposed, using the analytic hierarchy process to jointly analyse user preferences and service requirements; results confirm that the proposed algorithm outperforms the contrast algorithms and achieves network load balancing.
Abstract: Network selection in heterogeneous wireless networks is considered a crucial technology for exploiting network resources in the coming fifth-generation (5G) mobile networks. Considering the emergence of novel 5G services and the need to guarantee quality-of-service requirements, in this study the authors propose a network selection algorithm based on an evolutionary game, named NS-EG, using the analytic hierarchy process to jointly analyse user preferences and service requirements. The utility is structured as a joint function of network decision attributes and available capacity. In addition, the dynamic behaviour of users accessing different networks is explicitly modelled with replicator dynamics. To verify the superiority of the proposed algorithm, the authors evaluate its evolutionary equilibria and its iteration behaviour against the simple additive weighting algorithm, the multiplicative exponent weighting algorithm and a Q-learning-based algorithm. Simulation results confirm that the proposed algorithm outperforms the contrast algorithms and achieves network load balancing.
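
A minimal discrete-time replicator-dynamics sketch for population shares across networks is given below; the capacities and payoff function are illustrative assumptions standing in for the paper's joint utility.

```python
# Replicator dynamics sketch: the share of users on each network grows when
# that network's payoff beats the population average. Capacities and the
# payoff function are illustrative assumptions.
capacity = [1.0, 0.6, 0.4]          # relative capacities of three networks
x = [0.6, 0.3, 0.1]                 # initial population shares
STEP = 0.1

def payoff(i):
    return capacity[i] / (x[i] + 1e-9)   # payoff falls as a network gets crowded

for _ in range(500):
    u = [payoff(i) for i in range(3)]
    avg = sum(x[i] * u[i] for i in range(3))
    # Replicator update: dx_i ~ x_i * (u_i - average payoff)
    x = [x[i] + STEP * x[i] * (u[i] - avg) for i in range(3)]
    s = sum(x)
    x = [max(xi, 0.0) / s for xi in x]   # keep shares a valid distribution

# At the evolutionary equilibrium, shares settle in proportion to capacity.
print("equilibrium shares:", [round(xi, 3) for xi in x])
```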

5 citations


Cited by
Journal ArticleDOI
TL;DR: The concepts, backgrounds, and pros and cons of edge computing are introduced; how it operates and its hierarchical structure, together with artificial intelligence concepts, are explained; examples of its applications in various fields are listed; and some improvements are suggested.
Abstract: The key to the explosion of the Internet of Things and the ability to collect, analyze, and provide big data in the cloud is edge computing, a new computing paradigm in which data is processed at the network edge. Edge computing has been attracting attention as one of the top 10 strategic technology trends of the past two years and has innovative potential. It provides shorter response times, lower bandwidth costs, and more robust data safety and privacy protection than cloud computing. In particular, artificial intelligence technologies are rapidly being incorporated into edge computing. In this paper, we introduce the concepts, backgrounds, and pros and cons of edge computing; explain how it operates and its hierarchical structure together with artificial intelligence concepts; list examples of its applications in various fields; and finally suggest some improvements and discuss the challenges of its application in three representative technological fields. We intend to clarify various analyses and opinions regarding edge computing and artificial intelligence.

79 citations

Journal ArticleDOI
TL;DR: This article integrates mobile-edge computing (MEC) into blockchain-enabled IIoT systems to enhance the computation capability of IIoT devices and improve the efficiency of the consensus process, and introduces deep reinforcement learning (DRL) to solve the formulated problem.
Abstract: Industrial Internet of Things (IIoT) has emerged with the development of various communication technologies. In order to guarantee the security and privacy of massive IIoT data, blockchain is widely considered a promising technology and is applied to IIoT. However, several issues remain in existing blockchain-enabled IIoT: 1) unbearable energy consumption of computation tasks; 2) poor efficiency of the consensus mechanism in blockchain; and 3) serious computation overhead of network systems. To handle these issues and challenges, in this article we integrate mobile-edge computing (MEC) into blockchain-enabled IIoT systems to enhance the computation capability of IIoT devices and improve the efficiency of the consensus process. Meanwhile, the weighted system cost, including energy consumption and computation overhead, is jointly considered. Moreover, we propose an optimization framework for blockchain-enabled IIoT systems to decrease this cost and formulate the problem as a Markov decision process (MDP). The master controller, offloading decision, block size, and computing server can be dynamically selected and adjusted to optimize the devices' energy allocation and reduce the weighted system cost. Owing to the highly dynamic and high-dimensional characteristics of the problem, deep reinforcement learning (DRL) is introduced to solve it. Simulation results demonstrate that the proposed scheme improves system performance significantly compared to other existing schemes.
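
As a sketch of the weighted system cost the article optimizes, the snippet below enumerates a small discrete action space of (offloading decision, block size, computing server) for a single state; the cost model and all constants are hypothetical, and the article itself solves the full MDP with a DRL agent rather than by enumeration.

```python
from itertools import product

# Hypothetical weighted system cost over a discrete action space:
# (offload?, block size, server speed). Enumerating one state's actions
# shows the objective a DRL agent would learn to minimize.
W_ENERGY, W_COMP = 0.6, 0.4           # assumed cost weights

def system_cost(offload, block_size, server_speed):
    # Energy grows with block size; consensus overhead is amortized per block.
    energy = (0.2 if offload else 1.0) * block_size
    latency = block_size / (server_speed if offload else 1.0) + 4.0 / block_size
    return W_ENERGY * energy + W_COMP * latency

actions = product([False, True], [1, 2, 4], [2.0, 4.0])   # assumed choices
best = min(actions, key=lambda a: system_cost(*a))
print("best (offload, block, server):", best,
      "cost:", round(system_cost(*best), 3))
```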

52 citations

Journal ArticleDOI
TL;DR: A two-tier MEC system is studied that jointly determines the data caching and computation offloading policy to minimize the network cost at the user equipment (UE) side, subject to the task offloading deadline, the cache capacity at APs, and the computing capability of MEC servers.
Abstract: Mobile edge computing (MEC) uses the radio access network (RAN) to provide users' required information technology (IT) and cloud computing services nearby, creating a high-performance, low-latency service environment. Performing task offloading and data caching cooperatively at access points (APs) can reduce the heavy backhaul load and the retransmission cost of content downloading. However, in edge networks (ENs), maximizing storage utilization while reducing service latency and energy consumption remains a key issue, because the heterogeneity of ENs and the uneven distribution of users make it difficult to determine which MEC server should cache what data. In this paper, we study a two-tier MEC system whose data caching and computation offloading policy minimizes the network cost at the user equipment (UE) side while satisfying the constraints of the task offloading deadline, the cache capacity at APs, and the computing capability of MEC servers. The optimization problem is formulated as a mixed-integer nonlinear program (MINLP). To solve it, we transform it into an equivalent convex task offloading problem by fixing one optimization variable, solve the cache placement problem with a dynamic programming (DP) algorithm, and then propose a distributed collaborative data caching and computing offloading (CDCCO) iterative algorithm. Simulation results demonstrate that the proposed CDCCO algorithm significantly reduces the network cost and outperforms other existing schemes.
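
The cache-placement subproblem has the flavor of a knapsack solvable by dynamic programming; below is a generic 0/1-knapsack DP under an assumed AP cache capacity, not the paper's exact formulation (which couples caching with task offloading in the MINLP).

```python
# Generic 0/1 knapsack DP for cache placement: pick contents to cache at an AP,
# maximizing saved backhaul cost under the cache capacity. Sizes and savings
# are illustrative assumptions.
sizes   = [2, 3, 4, 5]          # content sizes
savings = [3.0, 4.0, 5.0, 6.0]  # backhaul cost saved if the content is cached
CAPACITY = 7                    # assumed AP cache capacity

dp = [0.0] * (CAPACITY + 1)     # dp[c] = best saving using capacity c
for size, save in zip(sizes, savings):
    for c in range(CAPACITY, size - 1, -1):   # iterate capacity downwards
        dp[c] = max(dp[c], dp[c - size] + save)

print("max backhaul saving within capacity:", dp[CAPACITY])
```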

38 citations

Journal ArticleDOI
TL;DR: An online machine learning-based method is proposed to solve the intelligent transmission mode selection problem; it consists of a fast D2D clustering module based on unsupervised learning and a smart mode selection module based on reinforcement learning, and achieves larger VR throughput.
Abstract: As an emerging broadband service pattern in the 5G era, VR broadcasting needs a considerable amount of bandwidth and strict quality-of-service (QoS) control. The traditional eMBMS or enTV transmission mode in HetNets consisting of macro cells and small cells cannot achieve a good trade-off between broadband performance and resource utilization for the VR broadcasting service. D2D multicasting applied to VR broadcasting can improve the performance of edge users and resource utilization. Motivated by the rapid development of AI techniques, this paper proposes a novel hybrid transmission mode selection based on online reinforcement learning to address this problem. Each VR broadband user can be associated with one of three modes: macrocell broadcasting, mmWave small-cell unicasting, and D2D multicasting. This paper first models this intelligent mode decision process as an optimization problem pursuing the optimal system throughput. Then, an online machine learning-based method is proposed to solve it, consisting of a fast D2D clustering module based on unsupervised learning and a smart mode selection module based on reinforcement learning. The simulation results verify that WoLF-PHC and Nash Q-learning perform better than other algorithms in large-scale and small-scale scenarios, respectively. The proposed intelligent transmission mode selection also achieves higher VR throughput than traditional broadcasting strategies, with a good balance between broadband performance and resource utilization.
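
The fast D2D clustering module is described only as unsupervised learning; a plain k-means sketch over 2-D user positions (synthetic coordinates, k chosen arbitrarily) illustrates the general idea without claiming to be the paper's algorithm.

```python
import random

# Plain k-means over 2-D user positions as a stand-in for the paper's fast
# D2D clustering module (positions and k are synthetic assumptions).
random.seed(0)
users = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(60)]
K = 4
centers = random.sample(users, K)

def nearest(p):
    """Index of the center closest to point p (squared Euclidean distance)."""
    return min(range(K),
               key=lambda k: (p[0] - centers[k][0]) ** 2
                             + (p[1] - centers[k][1]) ** 2)

for _ in range(20):                        # Lloyd iterations
    clusters = [[] for _ in range(K)]
    for p in users:
        clusters[nearest(p)].append(p)     # assign each user to a cluster
    for k, pts in enumerate(clusters):
        if pts:                            # recompute each cluster centroid
            centers[k] = (sum(x for x, _ in pts) / len(pts),
                          sum(y for _, y in pts) / len(pts))

print("cluster sizes:", [len(c) for c in clusters])
```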

36 citations

Journal ArticleDOI
TL;DR: In this paper, a survey of edge computing is presented covering its methodologies, application scenarios, and role in the Industrial Internet; some open issues of edge computing are also introduced.

36 citations