Author

Li Pan

Bio: Li Pan is an academic researcher from Hunan Institute of Science and Technology. The author has contributed to research in topics: Computer science & Scheduling (computing). The author has an h-index of 4, co-authored 5 publications receiving 672 citations.

Papers
Journal ArticleDOI
TL;DR: An optimization problem is formulated to minimize the energy consumption of the offloading system, where the energy costs of both task computing and file transmission are taken into consideration, and an EECO scheme is designed that jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under latency constraints.
Abstract: Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy costs of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate the energy-efficiency improvement of the proposed EECO scheme.
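To make the trade-off concrete, here is a minimal sketch of the kind of per-task decision such an offloading scheme optimizes: run the task locally or offload it, picking the lower-energy option that still meets the latency budget. All parameters (CPU cycles, powers, data rate, deadline) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an energy-aware offloading decision under a latency
# constraint. All numbers below are hypothetical, for illustration only.

def offload_decision(cycles, data_bits, f_local, f_edge, p_cpu, p_tx, rate, t_max):
    """Pick local vs. edge execution by energy, subject to a latency budget."""
    # Local execution: time and energy on the device CPU.
    t_local = cycles / f_local
    e_local = p_cpu * t_local

    # Offloaded execution: uplink transmission plus edge computation.
    t_tx = data_bits / rate
    t_off = t_tx + cycles / f_edge
    e_off = p_tx * t_tx  # the device only pays for transmission energy

    candidates = []
    if t_local <= t_max:
        candidates.append(("local", e_local))
    if t_off <= t_max:
        candidates.append(("offload", e_off))
    if not candidates:
        return None  # latency constraint infeasible either way
    return min(candidates, key=lambda c: c[1])

# Example: a 1-Gcycle task with 2 Mb of input data and a 200 ms deadline.
print(offload_decision(1e9, 2e6, f_local=1e9, f_edge=10e9,
                       p_cpu=0.9, p_tx=0.2, rate=1e8, t_max=0.2))
```

With these illustrative numbers, local execution misses the deadline (1 s) while offloading finishes in 120 ms, so the scheme offloads and spends only the transmission energy.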

730 citations

Journal ArticleDOI
TL;DR: A deep Q-learning approach is adopted for designing an optimal data transmission scheduling scheme in cognitive vehicular networks to minimize transmission costs while also fully utilizing various communication modes and resources.
Abstract: The Internet of Things (IoT) platform has played a significant role in improving road transport safety and efficiency by ubiquitously connecting intelligent vehicles through wireless communications. Such an IoT paradigm, however, brings considerable strain on limited spectrum resources due to the need for continuous communication and monitoring. Cognitive radio (CR) is a potential approach to alleviate the spectrum scarcity problem through opportunistic exploitation of the underutilized spectrum. However, the highly dynamic topology and time-varying spectrum states in CR-based vehicular networks introduce quite a few challenges to be addressed. Moreover, a variety of vehicular communication modes, such as vehicle-to-infrastructure and vehicle-to-vehicle, as well as data QoS requirements pose critical issues for efficient transmission scheduling. Based on this motivation, in this paper, we adopt a deep Q-learning approach for designing an optimal data transmission scheduling scheme in cognitive vehicular networks to minimize transmission costs while also fully utilizing various communication modes and resources. Furthermore, we investigate the characteristics of communication modes and spectrum resources chosen by vehicles in different network states, and propose an efficient learning algorithm for obtaining the optimal scheduling strategies. Numerical results are presented to illustrate the performance of the proposed scheduling schemes.
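As a rough illustration of the learning loop, here is a tabular Q-learning stand-in for the paper's deep Q-learning scheduler. The states (coarse spectrum-availability levels), the three communication-mode actions, and the cost model are illustrative assumptions, not the paper's formulation; a deep network would replace the Q-table when the state space is large.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch: a state is a hypothetical spectrum-availability
# level, and an action picks a communication mode. Costs are minimized.
ACTIONS = ["v2i_licensed", "v2i_cognitive", "v2v"]

def transmission_cost(state, action):
    # Hypothetical cost model: licensed spectrum is reliable but expensive;
    # the cognitive band is cheap but penalized when the spectrum is busy.
    if action == "v2i_licensed":
        return 1.0
    if action == "v2i_cognitive":
        return 0.2 if state == "idle" else 1.5  # collision penalty when busy
    return 0.6  # v2v relay

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)  # Q(s, a) estimates the discounted cost-to-go
    for _ in range(episodes):
        state = random.choice(["idle", "busy"])
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = min(ACTIONS, key=lambda a: q[(state, a)])  # greedy = min cost
        cost = transmission_cost(state, action)
        next_state = random.choice(["idle", "busy"])
        best_next = min(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (cost + gamma * best_next - q[(state, action)])
    return q

q = train()
for s in ("idle", "busy"):
    print(s, "->", min(ACTIONS, key=lambda a: q[(s, a)]))
```

Under this toy cost model the learned policy uses the cognitive band when the spectrum is idle and falls back to V2V when it is busy, mirroring the mode-selection behavior the paper studies.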

127 citations

Journal ArticleDOI
TL;DR: A multiple feature image capsule network ensemble approach for schizophrenia classification that outperforms some current methods and further improves the accuracy of schizophrenia classification is proposed.
Abstract: Automatic diagnosis and classification of schizophrenia based on functional magnetic resonance imaging (fMRI) data have attracted increasing attention in recent years. Most previous studies abstracted highly compressed functional features from the view of brain science and fed them into shallow classifiers for this purpose. However, their classification performance in practical applications is unstable and unsatisfactory. As an acute psychotic disorder, schizophrenia shows functional complexity in fMRI data. Therefore, additional features and deep classification methods are needed to improve classification performance. In this study, we propose a multiple feature image capsule network ensemble approach for schizophrenia classification. The proposed approach proceeds in three steps: 1) extracting multiple image features from the perspectives of linear sparse representation, nonlinear multiple kernel representation, and functional connectivity of brain areas, respectively; 2) feeding these image features into three specially designed independent capsule networks for classification; 3) obtaining the final results by fusing the outputs of these three deep capsule networks using an ensemble approach. To further improve the classification performance, we design an optimization model that maximizes the square of correlation coefficients and propose a weighted ensemble technology based on this model, which is mathematically proved to be solvable as an eigenvalue decomposition problem in certain cases. Finally, the proposed approach is implemented and evaluated on the schizophrenia fMRI datasets from COBRE, UCLA, and WUSTL. From the experimental results, we conclude that the proposed method outperforms some current methods and further improves the accuracy of schizophrenia classification.
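The weighted-ensemble step can be illustrated as follows. This is a hedged sketch: the reduction below (treating the squared-correlation objective as a Rayleigh quotient whose maximizer is the leading eigenvector of a rank-one correlation matrix) is one plausible reading of "solvable as an eigenvalue decomposition problem", not the paper's exact derivation.

```python
import numpy as np

def ensemble_weights(member_scores, labels):
    """member_scores: (n_members, n_samples) array of classifier scores."""
    # Correlation of each member's scores with the labels.
    r = np.array([np.corrcoef(s, labels)[0, 1] for s in member_scores])
    # Maximizing (w @ r)^2 subject to ||w|| = 1 is a Rayleigh quotient on
    # the rank-one matrix r r^T; its top eigenvector is r / ||r||.
    m = np.outer(r, r)
    vals, vecs = np.linalg.eigh(m)
    w = vecs[:, -1]              # eigenvector of the largest eigenvalue
    return w * np.sign(w.sum())  # fix the sign convention

# Synthetic demo: three members with decreasing score quality.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
scores = np.vstack([y + rng.normal(0, s, 200) for s in (0.5, 1.0, 2.0)])
w = ensemble_weights(scores, y)
fused = w @ scores
print("weights:", w, "fused correlation:", np.corrcoef(fused, y)[0, 1])
```

As expected, the eigenvector assigns the largest weight to the member whose output correlates best with the labels.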

20 citations

Journal ArticleDOI
TL;DR: This paper presents a relaxed mixed semantics model for time Petri nets to address problems by redefining the firability rules of transitions and applies the proposed model to schedulability analysis of a job shop scheduling problem, and compares the features of four semantics models.
Abstract: Several semantics models are adopted by time Petri nets for different applications. Yet they have some limitations on schedulability analysis of flexible manufacturing systems. The scheduling scope of a strong semantics model is greatly limited because of the impact of strong timing requirements, potentially keeping some optimal schedules out of consideration. A weak semantics model cannot guarantee scheduling timeliness, as it lacks strong timing enforcement. A mixed semantics model cannot ensure that independent transitions with overlapping firing intervals fire in an interleaving way, thus affecting the search for the optimal schedules. In this paper, we present a relaxed mixed semantics model for time Petri nets to address these problems by redefining the firability rules of transitions. In our model, the firability of a transition is determined by maximal concurrent sets containing the transition. This treatment not only greatly extends the scheduling scope of the TPN model while avoiding the generation of invalid schedules, but also solves the problem of concurrent scheduling of independent transitions. A state class method is then proposed to support the verification and analysis of temporal properties. Finally, we apply the proposed model to schedulability analysis of a job shop scheduling problem, and compare the features of four semantics models.
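The following sketch gives a feel for interval-based firability. The "maximal concurrent set" rule below is a simplified reading of the paper's relaxed semantics, not its formal definition: a transition is treated as firable if its earliest firing time does not exceed the latest firing time of every transition in some maximal set of mutually independent enabled transitions containing it. Transitions, intervals, and the independence relation are all hypothetical.

```python
from itertools import combinations

# Hypothetical enabled transitions: name -> (earliest, latest) firing times.
ENABLED = {"t1": (2, 5), "t2": (3, 7), "t3": (6, 9)}
# Hypothetical independence relation (no shared input places).
INDEPENDENT = {("t1", "t2"), ("t1", "t3"), ("t2", "t3")}

def independent(a, b):
    return (a, b) in INDEPENDENT or (b, a) in INDEPENDENT

def maximal_concurrent_sets(transitions):
    """Enumerate maximal sets of pairwise-independent transitions."""
    sets = []
    for k in range(len(transitions), 0, -1):
        for combo in combinations(transitions, k):
            if all(independent(a, b) for a, b in combinations(combo, 2)):
                if not any(set(combo) <= s for s in sets):
                    sets.append(set(combo))
    return sets

def firable(t):
    eft = ENABLED[t][0]
    for s in maximal_concurrent_sets(list(ENABLED)):
        # t may fire only before every deadline in its maximal concurrent set.
        if t in s and all(eft <= ENABLED[u][1] for u in s):
            return True
    return False

for t in ENABLED:
    print(t, firable(t))
```

In this toy net, t3 is not firable because its earliest time (6) would overshoot t1's deadline (5), illustrating how deadlines of concurrent transitions constrain the interleaving.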

10 citations

Journal ArticleDOI
TL;DR: The experimental results on the identification of essential proteins show that the proposed centrality combination method achieves better prediction performance than the 14 centrality methods in terms of prediction precision, sensitivity, specificity, positive predictive value, negative predictive value, F-measure, and accuracy rate.
Abstract: Essential proteins are important participants in various life activities and play a vital role in the survival and reproduction of life. Network-based centrality methods are a common way to identify essential proteins in protein interaction networks. Due to the differences between the existing centrality methods, combining them is a feasible approach to improving the identification accuracy of essential proteins. In this paper, we propose a centrality combination method based on feature selection. First, the measure values of the 14 classical centrality methods are viewed as feature data. Then, a subset of the relevant features is selected according to the importance of the features. Finally, the centrality methods corresponding to the selected features are combined by using the geometric mean method for the identification of essential proteins. To verify the effectiveness of the combination method, we apply it to the original static protein interaction network (SPIN), the dynamic protein interaction network (DPIN), and the refined dynamic protein interaction network (RDPIN), and compare the results with those of each single centrality method (LAC, DC, DMNC, NC, TP, CLC, BC, LC, CC, KC, CR, EC, PR, LR). The experimental results on the identification of essential proteins show that the combination method achieves better prediction performance than the 14 centrality methods in terms of prediction precision, sensitivity, specificity, positive predictive value, negative predictive value, F-measure, and accuracy rate. This illustrates that the proposed method can help to identify essential proteins more accurately.
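The geometric-mean combination step can be sketched as below. The feature-selection stage (choosing which centralities to keep) is dataset-specific and elided here; the scores, the min-max normalization, and the small offset are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def combine_centralities(scores):
    """scores: (n_methods, n_proteins) array of centrality values."""
    # Min-max normalize each method so scales are comparable, then shift
    # slightly away from zero so the geometric mean is well defined.
    mins = scores.min(axis=1, keepdims=True)
    maxs = scores.max(axis=1, keepdims=True)
    norm = (scores - mins) / (maxs - mins) + 1e-6
    return np.exp(np.log(norm).mean(axis=0))  # geometric mean per protein

# Synthetic demo: three selected centralities (e.g. DC, NC, CC) on 10 proteins.
rng = np.random.default_rng(1)
scores = rng.random((3, 10))
ranking = np.argsort(-combine_centralities(scores))
print("proteins ranked by combined centrality:", ranking)
```

The geometric mean rewards proteins that score consistently well across the selected centralities, rather than proteins that excel on only one measure.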

1 citation


Cited by
Journal ArticleDOI
TL;DR: This paper describes major use cases and reference scenarios where the mobile edge computing (MEC) is applicable and surveys existing concepts integrating MEC functionalities to the mobile networks and discusses current advancement in standardization of the MEC.
Abstract: Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery life-time of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud. Nevertheless, this option introduces significant execution delay consisting of delivery of the offloaded applications to the cloud and back plus time of the computation at the cloud. Such a delay is inconvenient and makes the offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of mobile network enabling it to run the highly demanding applications at the UE while meeting strict delay requirements. The MEC computing resources can be exploited also by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that we survey existing concepts integrating MEC functionalities to the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is, then, focused on user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading to three key areas: 1) decision on computation offloading; 2) allocation of computing resource within the MEC; and 3) mobility management. Finally, we highlight lessons learned in area of the MEC and we discuss open research challenges yet to be addressed in order to fully enjoy potentials offered by the MEC.

1,829 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the research on computation offloading in mobile edge computing (MEC), focusing on user-oriented use cases and reference scenarios where the MEC is applicable.
Abstract: Technological evolution of mobile user equipments (UEs), such as smartphones or laptops, goes hand-in-hand with evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery life-time of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay consisting of the delivery of the offloaded applications to the cloud and back, plus the time of the computation at the cloud. Such a delay is inconvenient and makes the offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of mobile network, enabling it to run the highly demanding applications at the UE while meeting strict delay requirements. The MEC computing resources can be exploited also by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that we survey existing concepts integrating MEC functionalities to the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is, then, focused on user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading to three key areas: i) decision on computation offloading, ii) allocation of computing resource within the MEC, and iii) mobility management. Finally, we highlight lessons learned in area of the MEC and we discuss open research challenges yet to be addressed in order to fully enjoy potentials offered by the MEC.

1,759 citations

Journal ArticleDOI
TL;DR: A comprehensive survey, analyzing how edge computing improves the performance of IoT networks and considers security issues in edge computing, evaluating the availability, integrity, and the confidentiality of security strategies of each group, and proposing a framework for security evaluation of IoT Networks with edge computing.
Abstract: The Internet of Things (IoT) now permeates our daily lives, providing important measurement and collection tools to inform our every decision. Millions of sensors and devices are continuously producing data and exchanging important messages via complex networks supporting machine-to-machine communications and monitoring and controlling critical smart-world infrastructures. As a strategy to mitigate the escalation in resource congestion, edge computing has emerged as a new paradigm to solve IoT and localized computing needs. Compared with the well-known cloud computing, edge computing will migrate data computation or storage to the network “edge,” near the end users. Thus, a number of computation nodes distributed across the network can offload the computational stress away from the centralized data center, and can significantly reduce the latency in message exchange. In addition, the distributed structure can balance network traffic and avoid the traffic peaks in IoT networks, reducing the transmission latency between edge/cloudlet servers and end users, as well as reducing response times for real-time IoT applications in comparison with traditional cloud services. Furthermore, by transferring computation and communication overhead from nodes with limited battery supply to nodes with significant power resources, the system can extend the lifetime of the individual nodes. In this paper, we conduct a comprehensive survey, analyzing how edge computing improves the performance of IoT networks. We categorize edge computing into different groups based on architecture, and study their performance by comparing network latency, bandwidth occupation, energy consumption, and overhead. In addition, we consider security issues in edge computing, evaluating the availability, integrity, and the confidentiality of security strategies of each group, and propose a framework for security evaluation of IoT networks with edge computing. Finally, we compare the performance of various IoT applications (smart city, smart grid, smart transportation, and so on) in edge computing and traditional cloud computing architectures.

1,008 citations

Journal ArticleDOI
TL;DR: This survey makes an exhaustive review on the state-of-the-art research efforts on mobile edge networks, including definition, architecture, and advantages, and presents a comprehensive survey of issues on computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.

782 citations

Journal ArticleDOI
TL;DR: This article designs a blockchain empowered secure data sharing architecture for distributed multiple parties, and incorporates privacy-preserved federated learning in the consensus process of permissioned blockchain, so that the computing work for consensus can also be used for federated training.
Abstract: The rapid increase in the volume of data generated by connected devices in the industrial Internet of Things paradigm opens up new possibilities for enhancing the quality of service for the emerging applications through data sharing. However, security and privacy concerns (e.g., data leakage) are major obstacles for data providers to share their data in wireless networks. The leakage of private data can lead to serious issues beyond financial loss for the providers. In this article, we first design a blockchain empowered secure data sharing architecture for distributed multiple parties. Then, we formulate the data sharing problem into a machine-learning problem by incorporating privacy-preserved federated learning. The privacy of data is well-maintained by sharing the data model instead of revealing the actual data. Finally, we integrate federated learning in the consensus process of permissioned blockchain, so that the computing work for consensus can also be used for federated training. Numerical results derived from real-world datasets show that the proposed data sharing scheme achieves good accuracy, high efficiency, and enhanced security.
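As a rough illustration of the model-sharing idea (parties exchange model updates instead of raw data), here is a minimal FedAvg-style sketch. The linear model, the local training step, and the omission of the blockchain consensus layer are simplifying assumptions; the paper folds aggregation into the permissioned-chain consensus rather than a plain averaging server.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient steps on one party's private data (linear regression)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, parties):
    # Each party trains locally; only weight vectors leave the device.
    updates = [local_update(w_global.copy(), X, y) for X, y in parties]
    # Aggregation step (in the paper, performed during blockchain consensus).
    return np.mean(updates, axis=0)

# Synthetic demo: three data providers with private local datasets.
rng = np.random.default_rng(2)
true_w = np.array([1.5, -2.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, parties)
print("recovered weights:", w)  # should approach [1.5, -2.0]
```

The global model converges to the shared signal across parties even though no party ever reveals its raw samples, which is the privacy property the architecture relies on.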

668 citations