Author

Zhu Lei

Bio: Zhu Lei is an academic researcher from Xi'an University of Science and Technology. The author has contributed to research topics including cloud computing and authentication, has an h-index of 5, and has co-authored 40 publications receiving 92 citations. Previous affiliations of Zhu Lei include Chongqing University of Technology.

Papers
Journal ArticleDOI
TL;DR: This paper considers the cheating problem in bivariate polynomial based secret sharing schemes and proposes two cheating identification algorithms. Both are efficient with respect to cheater identification, and the second achieves a stronger cheating identification capability through the collaboration of the remaining n − m users who are not involved in secret reconstruction.

57 citations

Journal ArticleDOI
Xinhong Hei, Yin Xinyue, Yichuan Wang, Ju Ren, Zhu Lei
TL;DR: A blockchained federated learning based cloud intrusion detection scheme that sends the local training parameters of the IoT intrusion alarm set to the cloud computing center for global prediction, and stores the model training process information and behavior on the blockchain.

39 citations

Journal ArticleDOI
TL;DR: This article proposes a novel IoV block-streaming service awareness and trusted verification scheme in 6G, combined with identity-based blind signature technology, which can realize the mutual authentication of IoV devices and edge servers while keeping the user's real identity information confidential.
Abstract: 6G and mobile Internet-of-Vehicles (IoV) technology require a secure, open, and transparent system. Blockchain has the characteristics of decentralization, tamper resistance, and traceability, which can improve the robustness, data privacy, and security transparency of the overall system. Therefore, blockchain is one of the most promising technologies for ensuring the security and privacy of 6G vehicle networks. In this article, we propose a novel IoV block-streaming service awareness and trusted verification scheme in 6G. The edge node uploads its microservices and the calling diagram of those microservices to the blockchain network. The blockchain network serves as an intermediate verification platform between the edge node and the IoV equipment, recording evidence of the interaction between the two ends. Moreover, combined with identity-based blind signature technology, we design a security scheme in which IoV devices anonymously request services from edge nodes, realizing mutual authentication of IoV devices and edge servers while keeping the user's real identity information confidential. In addition, an edge caching mechanism based on user requests and service awareness is designed to precompile services at edge nodes, improve the cache hit rate of service requests from vehicle users on the edge server, and achieve efficient processing of users' resource requests.

23 citations
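The edge caching mechanism described in the abstract above, which precompiles frequently requested services at the edge node to raise the cache hit rate, can be sketched as a simple request-frequency cache. Everything here (the class name, the capacity, the service names) is a hypothetical illustration; the paper's actual mechanism also incorporates service awareness beyond raw request counts.

```python
from collections import Counter

class ServiceCache:
    """Toy frequency-based cache for an edge node: keep the most
    frequently requested services "precompiled", evicting the least
    requested one when capacity is exceeded. Hypothetical interface."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.freq = Counter()       # request counts per service
        self.cached = set()         # services currently precompiled
        self.hits = self.misses = 0

    def request(self, service):
        self.freq[service] += 1
        if service in self.cached:
            self.hits += 1
            return True             # served directly from the edge cache
        self.misses += 1
        self.cached.add(service)    # precompile on demand
        if len(self.cached) > self.capacity:
            # Evict the cached service with the lowest request count
            victim = min(self.cached, key=lambda s: self.freq[s])
            self.cached.discard(victim)
        return False

cache = ServiceCache(capacity=2)
for s in ["nav", "nav", "video", "nav", "ota", "nav"]:
    cache.request(s)
# Frequently requested "nav" stays cached; rare services get evicted.
```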

Journal ArticleDOI
01 Dec 2016
TL;DR: A virtual machine placement optimization model based on an optimized ant colony algorithm is proposed. The model determines the physical machines suitable for hosting migrated virtual machines and thereby addresses the redundant power consumption that results from idle resource waste on physical machines.
Abstract: A virtual machine placement optimization model based on an optimized ant colony algorithm is proposed. The model is able to determine the physical machines suitable for hosting migrated virtual machines. Thus, it solves the problem of redundant power consumption resulting from idle resource waste of physical machines. First, based on the utilization parameters of the virtual machine, idle resource and energy consumption models are proposed. The models are dedicated to quantifying the features of virtual resource utilization and energy consumption of physical machines. Next, a multi-objective optimization strategy is derived for virtual machine placement in cloud environments. Finally, an optimal virtual machine placement scheme is determined based on feature metrics, multi-objective optimization, and the ant colony algorithm. Experimental results indicate that compared with the traditional genetic-algorithm-based MGGA model, the convergence rate is increased by 16%, and the optimized highest average energy consumption is reduced by 18%. The model exhibits advantages in terms of algorithm efficiency and efficacy.

17 citations
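As a rough illustration of the ant colony idea behind the abstract above, the sketch below packs VMs onto as few physical machines as possible (fewer active PMs as a proxy for less idle-resource energy waste). The pheromone update rule, the parameters, and the loads are simplified assumptions, not the paper's actual multi-objective model.

```python
import random

def aco_place(vm_loads, pm_capacity, ants=20, iters=30, evap=0.5):
    """Toy ant-colony VM placement. tau[v][p] is the pheromone biasing
    VM v toward PM p; each "ant" builds a feasible assignment, and the
    best-so-far assignment (fewest active PMs) reinforces its trail.
    Assumes every load fits on an empty PM (load <= pm_capacity)."""
    n_pm = len(vm_loads)                       # worst case: one PM per VM
    tau = [[1.0] * n_pm for _ in vm_loads]
    best, best_used = None, n_pm + 1
    for _ in range(iters):
        for _ in range(ants):
            free = [pm_capacity] * n_pm
            assign = []
            for v, load in enumerate(vm_loads):
                feasible = [p for p in range(n_pm) if free[p] >= load]
                weights = [tau[v][p] for p in feasible]
                p = random.choices(feasible, weights)[0]
                free[p] -= load
                assign.append(p)
            used = len(set(assign))            # number of active PMs
            if used < best_used:
                best, best_used = assign, used
        # Evaporate pheromone, then reinforce the best-so-far solution
        tau = [[t * evap for t in row] for row in tau]
        for v, p in enumerate(best):
            tau[v][p] += 1.0 / best_used
    return best, best_used

random.seed(0)
plan, pms = aco_place([0.5, 0.3, 0.4, 0.2, 0.6], pm_capacity=1.0)
```

Consolidating onto fewer PMs lets idle machines be powered down, which is the energy-waste problem the paper targets.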

Journal ArticleDOI
TL;DR: The fine-grained task migration strategy combines the advantages of mobile edge computing: it not only ensures the smooth execution of tasks but also reduces the energy consumption of terminal mobile devices.
Abstract: Mobile edge computing (MEC), as the key technology to improve user experience in a 5G network, can effectively reduce network transmission delay. Task migration can move complex tasks to remote edge servers through wireless networks, solving the problems of insufficient computing capacity and limited battery capacity of mobile terminals. Therefore, in order to solve the problem of “how to realize low-energy migration of complex dependent applications,” a subtask partitioning model with minimum energy consumption is constructed based on the relationships between the subtasks. To handle execution time constraints, a genetic algorithm is used to find the optimal solution, yielding a migration decision for each subtask. In addition, an improved algorithm based on the genetic algorithm is proposed to dynamically adjust the optimal solution by weighing task energy consumption against the device's residual battery power. Experimental results show that the fine-grained task migration strategy combines the advantages of mobile edge computing: it not only ensures the smooth execution of tasks but also reduces the energy consumption of terminal mobile devices. Experiments also show that the improved algorithm better matches users' expectations: when the residual power of a mobile device drops below a certain value, tasks are migrated to the MEC server to prolong standby time.

13 citations
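The genetic-algorithm migration decision described in the abstract above can be sketched as a binary chromosome (one bit per subtask: execute locally or migrate) evolved to minimize device energy under a total execution-time constraint. The per-subtask costs, deadline, and GA parameters below are made-up illustrative values, and the subtask dependency structure from the paper is omitted.

```python
import random

# Hypothetical per-subtask costs: (local_energy, local_time,
# migration_energy, remote_time_incl_transfer)
TASKS = [(4.0, 2.0, 1.0, 3.0), (6.0, 3.0, 1.5, 2.5),
         (2.0, 1.0, 0.5, 2.0), (5.0, 2.5, 1.2, 3.5)]
DEADLINE = 10.0   # total execution-time budget (illustrative)

def energy(plan):   # device energy: migration cost if migrated, else local
    return sum(t[2] if m else t[0] for m, t in zip(plan, TASKS))

def time_of(plan):  # total time: remote time if migrated, else local
    return sum(t[3] if m else t[1] for m, t in zip(plan, TASKS))

def fitness(plan):
    # Penalize plans that violate the execution-time constraint
    return energy(plan) + 100.0 * max(0.0, time_of(plan) - DEADLINE)

def ga(pop_size=30, gens=50, pmut=0.1):
    pop = [[random.randint(0, 1) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]                            # crossover
            child = [g ^ (random.random() < pmut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

random.seed(1)
plan = ga()   # 1 = migrate the subtask to the MEC server, 0 = run locally
```

The paper's improved algorithm would then adjust this plan further based on the device's residual battery power; that adjustment step is not shown here.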


Cited by
Journal ArticleDOI
TL;DR: A review of ML-based computation offloading mechanisms in the MEC environment, organized as a classical taxonomy, that identifies the contemporary mechanisms on this crucial topic and offers open issues as well.

172 citations

Journal ArticleDOI
TL;DR: The FDL model detects zero-day botnet attacks with high classification performance; guarantees data privacy and security; has low communication overhead; requires low-memory space for the storage of training data; and has low network latency.
Abstract: Deep Learning (DL) has been widely proposed for botnet attack detection in Internet of Things (IoT) networks. However, the traditional Centralized DL (CDL) method cannot detect previously unknown (zero-day) botnet attacks without breaching the data privacy rights of users. In this paper, we propose a Federated Deep Learning (FDL) method for zero-day botnet attack detection that avoids data privacy leakage in IoT edge devices. In this method, an optimal Deep Neural Network (DNN) architecture is employed for network traffic classification. A model parameter server remotely coordinates the independent training of the DNN models in multiple IoT edge devices, while the Federated Averaging (FedAvg) algorithm is used to aggregate local model updates. A global DNN model is produced after a number of communication rounds between the model parameter server and the IoT edge devices. Zero-day botnet attack scenarios in IoT edge devices are simulated with the Bot-IoT and N-BaIoT data sets. Experiment results show that the FDL model: (a) detects zero-day botnet attacks with high classification performance; (b) guarantees data privacy and security; (c) has low communication overhead; (d) requires low memory space for the storage of training data; and (e) has low network latency. Therefore, the FDL method outperforms the CDL, Localized DL, and Distributed DL methods in this application scenario.

90 citations
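The FedAvg aggregation step named in the abstract above can be illustrated in a few lines: the global model is the data-size-weighted average of the clients' local weights. The client weights and data sizes below are made up, and real deployments average per-layer tensors rather than a flat list.

```python
def fed_avg(local_weights, local_sizes):
    """Federated Averaging (FedAvg): each parameter of the global model
    is the mean of the clients' parameters, weighted by how many local
    training samples each client holds."""
    total = sum(local_sizes)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, local_sizes)) / total
        for i in range(dim)
    ]

# Three simulated IoT edge clients with flattened model weights;
# the third client holds twice as much training data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_w = fed_avg(clients, sizes)
# (1*0.25 + 3*0.25 + 5*0.5, 2*0.25 + 4*0.25 + 6*0.5) = (3.5, 4.5)
```

Only these weight vectors travel to the parameter server; the raw traffic data never leaves the edge devices, which is the privacy property the paper relies on.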

Journal ArticleDOI
TL;DR: Simulation results show that the proposed hybrid model can appropriately fit the problem with near-optimal accuracy regarding the offloading decision-making, the latency, and the energy consumption predictions in the proposed self-management framework.

75 citations

Journal ArticleDOI
TL;DR: A detailed review of recent state-of-the-art multiobjective VM placement mechanisms using nature-inspired metaheuristic algorithms in cloud environments, giving special attention to the parameters and approaches used for placing VMs into PMs.

59 citations

Journal ArticleDOI
TL;DR: In the proposed RISS, the secret image is losslessly decoded by a modular operation, and the original cover image is recovered by a binarization operation, both of which are just simple operations.
Abstract: In reversible image secret sharing (RISS), the cover image can be recovered to some degree, and a share can be comprehensible rather than noise-like. Reversible cover images play an important role in law enforcement and medical diagnosis. The comprehensible share can not only reduce the suspicion of attackers but also improve the management efficiency of shares. In this paper, we first provide a formal definition of RISS. Then, we propose an RISS algorithm for a (k, n)-threshold based on the principle of the Chinese remainder theorem-based ISS (CRTISS). In the proposed RISS, the secret image is losslessly decoded by a modular operation, and the original cover image is recovered by a binarization operation, both of which are just simple operations. Theoretical analyses and experiments are provided to validate the proposed definition and algorithm.

52 citations
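The modular recovery underlying CRT-based secret sharing can be illustrated with a toy (k, n) = (3, 5) scheme over small pairwise coprime moduli (a Mignotte-style construction). The moduli and secret range here are illustrative only; the paper operates on image pixels and adds the reversible-cover machinery on top.

```python
from math import prod

# Pairwise coprime moduli in increasing order (a Mignotte sequence).
# For k = 3, the secret must exceed the product of the k-1 largest
# moduli (19*23 = 437) and stay below the product of the k smallest
# (11*13*17 = 2431), so any k shares determine it uniquely.
MODULI = [11, 13, 17, 19, 23]
K = 3

def share(secret):
    # Each share is simply the secret reduced modulo one of the moduli
    return [(m, secret % m) for m in MODULI]

def crt_recover(shares):
    """Recover the secret from any k shares via the Chinese remainder
    theorem: the unique x < prod(moduli) with x = r (mod m) for each share."""
    M = prod(m for m, _ in shares)
    x = 0
    for m, r in shares:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

secret = 1000                # lies in the valid range (437, 2431)
shares = share(secret)
recovered = crt_recover(shares[:K])   # any 3 of the 5 shares suffice
```

Decoding is a single modular operation, which matches the abstract's point that recovery in CRTISS needs only simple arithmetic.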