scispace - formally typeset
Author

Xin Peng

Bio: Xin Peng is an academic researcher from Hunan Institute of Science and Technology. The author has contributed to research in topics including computer science and mobile edge computing, has an h-index of 5, and has co-authored 5 publications receiving 720 citations.

Papers
Journal ArticleDOI
TL;DR: An optimization problem is formulated to minimize the energy consumption of the offloading system, where the energy costs of both task computing and file transmission are taken into consideration, and an EECO scheme is designed that jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under latency constraints.
Abstract: Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy costs of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate the energy-efficiency improvement of our proposed EECO scheme.

730 citations
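The core trade-off in the EECO paper, deciding per task whether local computation or offloading costs less energy while a latency deadline is met, can be sketched in a few lines. This is an illustrative model, not the paper's exact formulation: the function names, the switched-capacitance coefficient `kappa`, and all parameter values below are assumptions for the sketch.

```python
# Illustrative offloading decision (not the paper's exact model): compare
# the device's energy for local computation against its energy for
# transmitting the task to an edge server, subject to a latency deadline.

def local_cost(cycles, f_local, kappa=1e-27):
    """Energy (J) and time (s) to compute locally.
    kappa is an assumed effective switched-capacitance coefficient."""
    energy = kappa * cycles * f_local ** 2
    time = cycles / f_local
    return energy, time

def offload_cost(data_bits, rate_bps, p_tx, cycles, f_edge):
    """Device-side transmission energy (J), plus total latency including
    remote execution (the edge server's energy is not the device's cost)."""
    tx_time = data_bits / rate_bps
    energy = p_tx * tx_time
    time = tx_time + cycles / f_edge
    return energy, time

def should_offload(cycles, data_bits, f_local, f_edge, rate_bps, p_tx, deadline):
    e_loc, t_loc = local_cost(cycles, f_local)
    e_off, t_off = offload_cost(data_bits, rate_bps, p_tx, cycles, f_edge)
    # Keep only the options that meet the deadline, then pick the cheaper one.
    feasible = [(e, name) for e, t, name in
                [(e_loc, t_loc, "local"), (e_off, t_off, "offload")]
                if t <= deadline]
    if not feasible:
        return None  # deadline cannot be met either way
    return min(feasible)[1]
```

With a fast uplink, offloading wins on energy; if the uplink is slow enough that transmission blows the deadline, local execution is the only feasible choice. The paper's actual scheme additionally optimizes radio resource allocation across many users, which this single-device sketch omits.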

Journal ArticleDOI
TL;DR: A deep Q-learning approach is adopted for designing an optimal data transmission scheduling scheme in cognitive vehicular networks to minimize transmission costs while also fully utilizing various communication modes and resources.
Abstract: The Internet of Things (IoT) platform has played a significant role in improving road transport safety and efficiency by ubiquitously connecting intelligent vehicles through wireless communications. Such an IoT paradigm, however, brings in considerable strain on limited spectrum resources due to the need for continuous communication and monitoring. Cognitive radio (CR) is a potential approach to alleviate the spectrum scarcity problem through opportunistic exploitation of the underutilized spectrum. However, the highly dynamic topology and time-varying spectrum states in CR-based vehicular networks introduce quite a few challenges to be addressed. Moreover, the variety of vehicular communication modes, such as vehicle-to-infrastructure and vehicle-to-vehicle, as well as data QoS requirements, pose critical issues for efficient transmission scheduling. Based on this motivation, in this paper we adopt a deep Q-learning approach for designing an optimal data transmission scheduling scheme in cognitive vehicular networks to minimize transmission costs while also fully utilizing various communication modes and resources. Furthermore, we investigate the characteristics of communication modes and spectrum resources chosen by vehicles in different network states, and propose an efficient learning algorithm for obtaining the optimal scheduling strategies. Numerical results are presented to illustrate the performance of the proposed scheduling schemes.

127 citations
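The Q-learning idea behind the paper's scheduler can be illustrated with a tiny tabular stand-in (the paper uses a deep Q-network; a table replaces the network here so the sketch stays self-contained). The state and action names, the reward values, and the random spectrum dynamics below are all invented for illustration.

```python
# Minimal tabular Q-learning sketch of a transmission-mode scheduler:
# states describe spectrum availability, actions pick a communication mode,
# and the reward is the negative transmission cost. All values are made up.
import random

def train_scheduler(episodes=3000, alpha=0.2, gamma=0.9, eps=0.1, seed=1):
    random.seed(seed)
    states = ["spectrum_idle", "spectrum_busy"]
    actions = ["use_v2i", "use_v2v"]

    def reward(s, a):
        if s == "spectrum_idle" and a == "use_v2i":
            return 1.0   # cheap infrastructure link when spectrum is free
        if s == "spectrum_busy" and a == "use_v2v":
            return 0.5   # fall back to a direct vehicle-to-vehicle link
        return -1.0      # collision or costly choice

    Q = {(s, a): 0.0 for s in states for a in actions}
    s = random.choice(states)
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda x: Q[(s, x)]))
        r = reward(s, a)
        s2 = random.choice(states)  # spectrum state evolves randomly here
        best_next = max(Q[(s2, x)] for x in actions)
        # standard Q-learning temporal-difference update
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
    return {s: max(actions, key=lambda x: Q[(s, x)]) for s in states}
```

After training, the greedy policy picks the low-cost mode in each spectrum state. The deep variant in the paper replaces the Q-table with a neural network so it can generalize over the much larger joint state space of topology, spectrum, and QoS requirements.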

Journal ArticleDOI
TL;DR: Extensive simulations demonstrated exceptional classification performance for the new key features based on high-order cumulants, and showed the robustness of the proposed method under a variety of conditions, such as frequency offset and multi-path.
Abstract: By considering the different cumulant combinations of 2FSK, 4FSK, 2PSK, 4PSK, 2ASK, and 4ASK signals, this paper established new identification parameters to achieve recognition of those digital modulations. A deep neural network (DNN) was also employed to improve the recognition rate; it was designed to classify the signal based on the distinct feature of each signal type, extracted with high-order cumulants. Extensive simulations demonstrated exceptional classification performance for the new key features based on high-order cumulants. The overall success rate of the proposed algorithm was over 99% at a signal-to-noise ratio (SNR) of -5 dB and 100% at an SNR of -2 dB. The experiments also showed the robustness of the proposed method under a variety of conditions, such as frequency offset and multi-path.

70 citations
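To make the feature concrete: one standard fourth-order cumulant used for modulation recognition is C42 = E[|x|^4] - |E[x^2]|^2 - 2(E[|x|^2])^2 for a zero-mean, unit-power baseband signal x. For ideal noiseless constellations this takes distinct values per class (e.g., -2 for BPSK and -1 for QPSK), which is what makes such statistics useful separators. The snippet below is a generic illustration of this kind of feature, not the paper's specific parameter set.

```python
# Fourth-order cumulant C42 of a complex baseband signal, a classic
# modulation-recognition feature: C42 = M4 - |M20|^2 - 2*M2^2
# (valid for zero-mean signals).

def c42(samples):
    n = len(samples)
    m2 = sum(abs(x) ** 2 for x in samples) / n   # E[|x|^2]
    m20 = sum(x * x for x in samples) / n        # E[x^2]
    m4 = sum(abs(x) ** 4 for x in samples) / n   # E[|x|^4]
    return m4 - abs(m20) ** 2 - 2 * m2 ** 2

# Ideal unit-power constellations:
bpsk = [1 + 0j, -1 + 0j]
qpsk = [(1 + 1j) / 2 ** 0.5, (1 - 1j) / 2 ** 0.5,
        (-1 + 1j) / 2 ** 0.5, (-1 - 1j) / 2 ** 0.5]
```

Evaluating `c42` on the ideal constellations gives -2 for BPSK and -1 for QPSK; in practice the statistic is estimated from noisy received samples, and a classifier (the DNN in the paper) operates on a vector of several such cumulant combinations.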

Journal ArticleDOI
TL;DR: This paper investigates the secrecy outage performance of a typical cooperative downlink non-orthogonal multiple access (NOMA) system over a Nakagami-m fading channel, and shows that a secrecy performance floor exists for the cooperative NOMA system, determined by the weak user's secrecy requirement and the channel conditions of the eavesdroppers.
Abstract: In this paper, we investigate the secrecy outage performance of a typical cooperative downlink non-orthogonal multiple access (NOMA) system over a Nakagami-m fading channel, in which the base station transmits a superimposed signal to two users via a relay. First, the secrecy outage behavior of the considered system under three wiretapping cases, i.e., a single eavesdropper (Eve), non-colluding eavesdroppers, and colluding eavesdroppers, is studied, and both analytical and asymptotic expressions for the secrecy outage probability are derived. Next, by considering the availability of the Eves' channel state information, we adopt a two-stage relay selection (RS) strategy to improve the system's secrecy outage performance. Finally, simulation results are provided to corroborate the accuracy of our derived expressions. The results show that: 1) a secrecy performance floor exists for the cooperative NOMA system, determined by the weak user's secrecy requirement and the channel conditions of the Eves; 2) the two-stage RS scheme can improve the secrecy outage performance significantly under all three wiretapping cases; 3) the secrecy performance of the cooperative NOMA network is superior to that of an orthogonal multiple access network in the low and medium signal-to-noise ratio regions.

29 citations
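For reference, the secrecy outage probability (SOP) studied above is conventionally defined through the secrecy capacity. The notation below is the standard textbook form, not copied from the paper:

```latex
% Secrecy capacity of the legitimate link against an eavesdropper,
% where \gamma_B and \gamma_E are the legitimate and eavesdropper SNRs:
C_s = \left[ \log_2\!\left(1 + \gamma_B\right)
           - \log_2\!\left(1 + \gamma_E\right) \right]^{+}

% Secrecy outage probability for a target secrecy rate R_s:
P_{\text{out}} = \Pr\left\{ C_s < R_s \right\}

% Multiple eavesdroppers: non-colluding Eves take the strongest link,
% \gamma_E = \max_k \gamma_{E,k}; colluding Eves combine their signals,
% \gamma_E = \sum_k \gamma_{E,k}.
```

The performance floor the paper reports follows from this definition: once the Eves' channel statistics and the weak user's rate requirement fix the distribution of the bracketed difference, raising the transmit SNR alone cannot drive the outage probability to zero.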

Journal ArticleDOI
TL;DR: This letter investigates a jammer-aided cooperative non-orthogonal multiple access (NOMA) system, where one relay is used to deliver information and the other relays act as jammers; two simple relay selection strategies, i.e., random RS and max-min RS, are considered.
Abstract: This letter investigates a jammer-aided cooperative non-orthogonal multiple access (NOMA) system, where one relay is used to deliver information and the other relays act as jammers. Two simple relay selection (RS) strategies, i.e., random RS and max-min RS, are considered in this letter. Analytical and asymptotic expressions for the secrecy outage probability (SOP) under both RS schemes are developed. Simulation results show that the NOMA system with jammers achieves a lower SOP than the one without jammers in the moderate-to-high SNR range. Moreover, by adopting the max-min RS strategy, the secrecy outage performance can be further improved in the low SNR region, while it still tends to a constant in the high SNR region.

22 citations


Cited by
Journal ArticleDOI
TL;DR: This paper describes major use cases and reference scenarios where the mobile edge computing (MEC) is applicable and surveys existing concepts integrating MEC functionalities to the mobile networks and discusses current advancement in standardization of the MEC.
Abstract: Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery life-time of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud. Nevertheless, this option introduces significant execution delay consisting of delivery of the offloaded applications to the cloud and back plus the time of computation at the cloud. Such a delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of the mobile network, enabling it to run the highly demanding applications at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is then focused on the user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: 1) decision on computation offloading; 2) allocation of computing resources within the MEC; and 3) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully enjoy the potentials offered by the MEC.

1,829 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the research on computation offloading in mobile edge computing (MEC), focusing on user-oriented use cases and reference scenarios where the MEC is applicable.
Abstract: Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery life-time of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay consisting of delivery of the offloaded applications to the cloud and back plus the time of computation at the cloud. Such a delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of the mobile network, enabling it to run the highly demanding applications at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is then focused on the user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) decision on computation offloading, ii) allocation of computing resources within the MEC, and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully enjoy the potentials offered by the MEC.

1,759 citations

Journal ArticleDOI
TL;DR: A comprehensive survey, analyzing how edge computing improves the performance of IoT networks and considers security issues in edge computing, evaluating the availability, integrity, and the confidentiality of security strategies of each group, and proposing a framework for security evaluation of IoT Networks with edge computing.
Abstract: The Internet of Things (IoT) now permeates our daily lives, providing important measurement and collection tools to inform our every decision. Millions of sensors and devices are continuously producing data and exchanging important messages via complex networks supporting machine-to-machine communications and monitoring and controlling critical smart-world infrastructures. As a strategy to mitigate the escalation in resource congestion, edge computing has emerged as a new paradigm to solve IoT and localized computing needs. Compared with the well-known cloud computing, edge computing will migrate data computation or storage to the network "edge," near the end users. Thus, a number of computation nodes distributed across the network can offload the computational stress away from the centralized data center, and can significantly reduce the latency in message exchange. In addition, the distributed structure can balance network traffic and avoid the traffic peaks in IoT networks, reducing the transmission latency between edge/cloudlet servers and end users, as well as reducing response times for real-time IoT applications in comparison with traditional cloud services. Furthermore, by transferring computation and communication overhead from nodes with limited battery supply to nodes with significant power resources, the system can extend the lifetime of the individual nodes. In this paper, we conduct a comprehensive survey, analyzing how edge computing improves the performance of IoT networks. We categorize edge computing into different groups based on architecture, and study their performance by comparing network latency, bandwidth occupation, energy consumption, and overhead. In addition, we consider security issues in edge computing, evaluating the availability, integrity, and the confidentiality of security strategies of each group, and propose a framework for security evaluation of IoT networks with edge computing. Finally, we compare the performance of various IoT applications (smart city, smart grid, smart transportation, and so on) in edge computing and traditional cloud computing architectures.

1,008 citations

Journal ArticleDOI
TL;DR: This survey makes an exhaustive review of state-of-the-art research efforts on mobile edge networks, including their definition, architecture, and advantages, and presents a comprehensive survey of issues in computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures that bring network functions and contents to the network edge have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, a comprehensive survey of issues in computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented as well.

782 citations

Journal ArticleDOI
TL;DR: This article designs a blockchain empowered secure data sharing architecture for distributed multiple parties, and incorporates privacy-preserved federated learning in the consensus process of permissioned blockchain, so that the computing work for consensus can also be used for federated training.
Abstract: The rapid increase in the volume of data generated by connected devices in the industrial Internet of Things paradigm opens up new possibilities for enhancing the quality of service for emerging applications through data sharing. However, security and privacy concerns (e.g., data leakage) are major obstacles for data providers to share their data in wireless networks. The leakage of private data can lead to serious issues beyond financial loss for the providers. In this article, we first design a blockchain-empowered secure data sharing architecture for distributed multiple parties. Then, we formulate the data sharing problem as a machine-learning problem by incorporating privacy-preserving federated learning. The privacy of data is well maintained by sharing the data model instead of revealing the actual data. Finally, we integrate federated learning into the consensus process of the permissioned blockchain, so that the computing work for consensus can also be used for federated training. Numerical results derived from real-world datasets show that the proposed data sharing scheme achieves good accuracy, high efficiency, and enhanced security.

668 citations
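The privacy mechanism the article builds on, sharing model parameters instead of raw data, is essentially federated averaging. The sketch below illustrates that step only, under simplifying assumptions: a one-parameter linear model, invented function names, and toy datasets; the blockchain consensus layer from the article is out of scope here.

```python
# Federated-averaging sketch: each party fits y = w*x on its own private
# data and shares only the learned weight; the coordinator averages the
# weights, so raw data never leaves its owner. All names/values illustrative.

def local_update(w, data, lr=0.01, steps=50):
    """One party's local gradient descent on its private (x, y) pairs."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, parties, rounds=10):
    """Each round: broadcast the global weight, collect locally updated
    weights, and average them into the new global weight."""
    w = global_w
    for _ in range(rounds):
        updates = [local_update(w, data) for data in parties]
        w = sum(updates) / len(updates)
    return w

# Two parties whose private data both follow y = 3x:
party_a = [(1.0, 3.0), (2.0, 6.0)]
party_b = [(3.0, 9.0), (4.0, 12.0)]
```

Starting from `global_w = 0.0`, the averaged model converges to the shared underlying slope of 3 even though neither party ever reveals its samples. The article's contribution is to run this aggregation inside the permissioned blockchain's consensus, so the consensus computation doubles as training work.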