Showing papers in "IEEE Transactions on Industrial Informatics in 2022"


Journal ArticleDOI
TL;DR: A service offloading (SOL) method based on deep reinforcement learning is proposed for DT-empowered IoV in edge computing; it leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning.
Abstract: With the potential of implementing computing-intensive applications, edge computing is combined with digital twinning (DT)-empowered Internet of vehicles (IoV) to enhance intelligent transportation capabilities. By updating digital twins of vehicles and offloading services to edge computing devices (ECDs), the insufficiency in vehicles’ computational resources can be complemented. However, owing to the computational intensity of DT-empowered IoV, ECDs would overload under excessive service requests, which deteriorates the quality of service (QoS). To address this problem, in this article, a multiuser offloading system is analyzed, where the QoS is reflected through the response time of services. Then, a service offloading (SOL) method with deep reinforcement learning is proposed for DT-empowered IoV in edge computing. To obtain optimized offloading decisions, SOL leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning. Eventually, experiments with comparative methods indicate that SOL is effective and adaptable in diverse environments.

107 citations
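
The core of SOL, as summarized above, is value-function approximation with a deep Q-network over offloading decisions. As a rough, hypothetical sketch (the state encoding, action set, reward as negative response time, and all hyperparameters below are assumptions, not details from the paper), a minimal PyTorch Q-network and TD update might look like this:

```python
# Minimal DQN-style decision and update loop for offloading (illustrative sketch only).
# STATE_DIM, N_ACTIONS, and all hyperparameters are assumed values, not taken from the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.95     # e.g. queue/channel features; actions = local or one of 3 ECDs

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                # holds (state, action, reward, next_state) tuples;
                                             # the reward could be the negative service response time

def select_action(state, eps=0.1):
    """Epsilon-greedy offloading decision."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One temporal-difference update on a sampled minibatch."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32)
                   for x in zip(*random.sample(replay, batch_size)))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```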


Journal ArticleDOI
TL;DR: A hybrid deep neural network model based on the integration of MobileNetv2, YOLOv4, and Openpose is constructed to map real-time status from the physical manufacturing environment to the virtual space, achieving higher detection accuracy for digital twinning in smart manufacturing.
Abstract: Recently, along with several technological advancements in cyber-physical systems, the revolution of Industry 4.0 has brought in an emerging concept named digital twin (DT), which shows its potential to break the barrier between the physical and cyber space in smart manufacturing. However, it is still difficult to analyze and estimate the real-time structural and environmental parameters in terms of their dynamic changes in digital twinning, especially when facing detection tasks of multiple small objects in a large-scale scene with complex contexts in modern manufacturing environments. In this article, we focus on a small object detection model for DT, aiming to realize the dynamic synchronization between a physical manufacturing system and its virtual representation. Three significant elements, including equipment, product, and operator, are considered as the basic environmental parameters to represent and estimate the dynamic characteristics and real-time changes in building a generic DT system of a smart manufacturing workshop. A hybrid deep neural network model, based on the integration of MobileNetv2, YOLOv4, and Openpose, is constructed to identify the real-time status from the physical manufacturing environment to the virtual space. A learning algorithm is then developed to realize efficient multitype small object detection based on feature integration and fusion from both shallow and deep layers, in order to facilitate the modeling, monitoring, and optimization of the whole manufacturing process in the DT system. Experiments and evaluations conducted in three different use cases demonstrate the effectiveness and usefulness of our proposed method, which achieves higher detection accuracy for DT in smart manufacturing.

106 citations


Journal ArticleDOI
TL;DR: A coordination-graph-driven vehicular task offloading scheme is proposed, which minimizes offloading costs by efficiently integrating service matching exploitation and intelligent offloading scheduling in both the digital twin and the physical networks.
Abstract: Technological advancements in urban informatics and vehicular intelligence have enabled connected smart vehicles as pervasive edge computing platforms for a plethora of powerful applications. However, various types of smart vehicles with distinct capacities, diverse applications with different resource demands, as well as unpredictable vehicular topology, pose significant challenges to realizing efficient edge computing services. To cope with these challenges, we incorporate digital twin technology and artificial intelligence into the design of a vehicular edge computing network. It centrally exploits potential edge service matching through evaluating cooperation gains in a mirrored edge computing system, while distributively scheduling computation task offloading and edge resource allocation in a multiagent deep reinforcement learning approach. We further propose a coordination graph driven vehicular task offloading scheme, which minimizes offloading costs through efficiently integrating service matching exploitation and intelligent offloading scheduling in both digital twin and physical networks. Numerical results based on real urban traffic datasets demonstrate the efficiency of our proposed schemes.

105 citations


Journal ArticleDOI
TL;DR: Wang et al. propose a pairing-free certificateless signature scheme that utilizes the state-of-the-art blockchain technique and smart contracts to construct a reliable and efficient CLS scheme.
Abstract: Nowadays, the Industrial Internet of Things (IIoT) has remarkably transformed our personal lifestyles and society operations into a novel digital mode, which brings tremendous associations with all walks of life, such as intelligent logistics, smart grid, and smart city. Moreover, with the rapid increase of IIoT devices, a large amount of data is swapped between heterogeneous sensors and devices every moment. This trend increases the risk of eavesdropping and hijacking attacks in communication channels, so data privacy and security become two notable concerns at present. Recently, based on the mechanism of the Schnorr signature, more secure and lightweight certificateless signature (CLS) protocols have become popular for resource-constrained IIoT protocol design. Nevertheless, we found that most of the existing CLS schemes are susceptible to several common security weaknesses such as man-in-the-middle attacks, key generation center compromise attacks, and distributed denial-of-service attacks. To tackle the challenges mentioned previously, in this article, we propose a pairing-free certificateless scheme that utilizes the state-of-the-art blockchain technique and smart contracts to construct a novel, reliable, and efficient CLS scheme. Then, we simulate the Type-I and Type-II adversaries to verify the trustworthiness of our scheme. Security analysis as well as performance evaluation outcomes prove that our design can hold more reliable security assurance with less computation cost (i.e., reduced by around 40.0% at most) and communication cost (i.e., reduced by around 94.7% at most) than other related schemes.

102 citations


Journal ArticleDOI
TL;DR: The minimum levitation unit of the maglev vehicle system is established, an amplitude saturation controller (ASC) that ensures the generation of only a saturated unidirectional attractive force is proposed, and a neural network-based supervisor controller (NNBSC) is designed.
Abstract: When an electromagnetic suspension (EMS) type maglev vehicle is traveling over a track, the airgap must be maintained between the electromagnet and the track to prevent contact with that track. Because of the open-loop instability of the EMS system, the current must be actively controlled to maintain the target airgap. However, the maglev system suffers from strong nonlinearity, force saturation, track flexibility, and network time delay in the feedback signals, which makes the controller design even more difficult. In this article, the minimum levitation unit of the maglev vehicle system is established. An amplitude saturation controller (ASC), which can ensure the generation of only a saturated unidirectional attractive force, is then proposed. The stability and convergence of the closed-loop signals are proven based on the Lyapunov method. Subsequently, the ASC is improved using radial basis function neural networks, and a neural network-based supervisor controller (NNBSC) is thus designed. The ASC plays the main role in the initial stage; as the neural network learns the control trend, control gradually transitions to the neural network controller. Simulation results are provided to illustrate the specific merits of the NNBSC. Hardware experimental results from a full-scale IoT EMS maglev train are included to validate the effectiveness and robustness of the presented control method with regard to time delay.

94 citations


Journal ArticleDOI
TL;DR: Compared with traditional end-to-end frameworks, the proposed ARHPE network can leverage asymmetric relation cues to predict the head pose angle under incorrect-label scenarios and significantly outperforms other state-of-the-art approaches.
Abstract: Head pose estimation (HPE) has wide industrial applications, such as online education, human–robot interaction, and automatic manufacturing. In this article, we address two key problems in HPE based on label learning and asymmetric relation cues: 1) how to bridge the gap between the better prediction performance of networks and incorrectly labeled pose images in HPE datasets and 2) how to take full advantage of the adjacent pose information around the centered pose image. We reconstruct all the incorrect labels as a two-dimensional Lorentz distribution to tackle the first problem. Instead of directly adopting the angle values as hard labels, we assign part of the probability values (soft labels) to adjacent labels for learning discriminative feature representations. To address the second problem, we reveal the asymmetric relation nature of HPE datasets. The yaw and pitch directions are assigned different weights by introducing the half-width at half-maximum of the Lorentz distribution. Compared with traditional end-to-end frameworks, the proposed one can leverage the asymmetric relation cues for predicting the head pose angle in incorrect-label scenarios. Extensive experiments on two public datasets and our infrared dataset demonstrate that the proposed ARHPE network significantly outperforms other state-of-the-art approaches.

93 citations
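
The key idea above, replacing hard angle labels with a Lorentz-shaped soft label and giving yaw and pitch different widths, can be illustrated with a one-dimensional sketch. The 3-degree bin size and the two half-width values below are made-up assumptions, not the paper's settings:

```python
# Lorentz (Cauchy) soft labels over discretised pose angles (one-dimensional illustration).
import numpy as np

ANGLES = np.arange(-90, 91, 3)

def lorentz_soft_label(center_deg, gamma):
    """Spread probability mass around the nominal angle; gamma is the half-width at half-maximum."""
    weights = 1.0 / (1.0 + ((ANGLES - center_deg) / gamma) ** 2)
    return weights / weights.sum()            # normalised soft-label vector

yaw_label = lorentz_soft_label(center_deg=30.0, gamma=6.0)     # wider spread assumed for yaw
pitch_label = lorentz_soft_label(center_deg=-10.0, gamma=3.0)  # narrower spread assumed for pitch
```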


Journal ArticleDOI
TL;DR: Zhang et al. propose an efficient deep matrix factorization (EDMF) method with review feature learning for the industrial recommender system, which extracts the interactive features of a single review by convolutional neural networks with a word-attention mechanism.
Abstract: Recommendation accuracy is fundamental to the quality of a recommendation system. In this article, we propose an efficient deep matrix factorization (EDMF) method with review feature learning for the industrial recommender system. Two characteristics of users’ reviews are revealed. First, interactivity between the user and the item, which can also be considered as the former’s scoring behavior on the latter, is exploited in a review. Second, the review is only a partial description of the user’s preferences for the item, which is revealed as the sparsity property. Specifically, for the first characteristic, EDMF extracts the interactive features of a single review by convolutional neural networks with a word-attention mechanism. Subsequently, the $L_0$ norm is leveraged to constrain the review, considering that the review information is a sparse feature, which is the second characteristic. Furthermore, the loss function is constructed by maximum a posteriori estimation theory, where the interactivity and sparsity properties are converted into two prior probability functions. Finally, an alternating minimization algorithm is introduced to optimize the loss function. Experimental results on several datasets demonstrate that the proposed method, which shows good industrial conversion application prospects, outperforms the state-of-the-art methods in terms of effectiveness and efficiency.

82 citations


Journal ArticleDOI
TL;DR: Two novel strategies are proposed for determining the bilateral trading preferences of households participating in a fully peer-to-peer (P2P) local energy market: the first matches participants' surplus power supply with demand, while the second is based on the network distance between them.
Abstract: This paper proposes two novel strategies for determining the bilateral trading preferences of households participating in a fully Peer-to-Peer (P2P) local energy market. The first strategy matches between surplus power supply and demand of participants, while the second is based on the distance between them in the network. The impact of bilateral trading preferences on the price and amount of energy traded is assessed for the two strategies. A decentralized fully P2P energy trading market is developed to generate the results in a day-ahead setting. After that, a permissioned blockchain-smart contract platform is used for the implementation of the decentralized P2P trading market on a digital platform. Actual data from a residential neighborhood in the Netherlands, with different varieties of distributed energy resources, is used for the simulations. Results show that in the two strategies, the energy procurement cost and grid interaction of all participants in P2P trading are reduced compared to a baseline scenario. The total amount of P2P energy traded is found to be higher when the trading preferences are based on distance, which could also be considered as a proxy for energy efficiency in the network by encouraging P2P trading among nearby households. However, the P2P trading prices in this strategy are found to be lower. Further, a comparison is made between two scenarios: with and without electric heating in households. Although the electrification of heating reduces the total amount of P2P energy trading, its impact on the trading prices is found to be limited.

81 citations
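
The two bilateral-preference strategies reduce, in essence, to two different ways of ranking potential trading partners. A toy sketch with made-up household data follows; the paper uses real Dutch neighborhood data and a full day-ahead market clearing, which are not reproduced here:

```python
# Two ways of ranking bilateral trading partners (toy sketch with made-up household data).
sellers = {"A": 3.0, "B": 1.5, "C": 5.0}     # surplus power offered, in kWh (assumed)
distance = {"A": 120, "B": 40, "C": 300}     # network distance to the buyer, in metres (assumed)

def rank_by_supply_match(demand_kwh):
    """Strategy 1: prefer sellers whose surplus best matches the buyer's demand."""
    return sorted(sellers, key=lambda s: abs(sellers[s] - demand_kwh))

def rank_by_distance():
    """Strategy 2: prefer the sellers closest to the buyer in the network."""
    return sorted(sellers, key=lambda s: distance[s])

print(rank_by_supply_match(demand_kwh=2.0))  # ['B', 'A', 'C']
print(rank_by_distance())                    # ['B', 'A', 'C']
```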


Journal ArticleDOI
TL;DR: A comprehensive survey of AI- and XAI-based methods adopted in the Industry 4.0 scenario is presented, and the opportunities and challenges that elicit future research directions toward responsible or human-centric AI and XAI systems, essential for adopting high-stakes industry applications, are illustrated.
Abstract: Nowadays, Industry 4.0 can be considered a reality, a paradigm integrating modern technologies and innovations. Artificial intelligence (AI) can be considered the leading component of the industrial transformation, enabling intelligent machines to execute tasks autonomously such as self-monitoring, interpretation, diagnosis, and analysis. AI-based methodologies (especially machine learning and deep learning) support manufacturers and industries in predicting their maintenance needs and reducing downtime. Explainable artificial intelligence (XAI) studies and designs approaches, algorithms, and tools that produce human-understandable explanations of AI-based systems' information and decisions. This article presents a comprehensive survey of AI- and XAI-based methods adopted in the Industry 4.0 scenario. First, we briefly discuss different technologies enabling Industry 4.0. Then, we present an in-depth investigation of the main methods used in the literature; we also provide the details of what, how, why, and where these methods have been applied for Industry 4.0. Furthermore, we illustrate the opportunities and challenges that elicit future research directions toward responsible or human-centric AI and XAI systems, essential for adopting high-stakes industry applications.

76 citations


Journal ArticleDOI
TL;DR: Wang et al. investigate how to securely invoke patients' records from a past case-database while protecting the privacy of both the currently diagnosed patient and the case-database, and construct a privacy-preserving medical record searching scheme based on the ElGamal blind signature.
Abstract: In the medical field, previous patients’ cases are extremely private as well as intensely valuable to current disease diagnosis. Therefore, how to make full use of precious cases while not leaking out patients’ privacy is a leading and promising line of work, especially in the coming privacy-preserving intelligent medical era. In this article, we investigate how to securely invoke patients’ records from a past case-database while protecting the privacy of both the currently diagnosed patient and the case-database, and we construct a privacy-preserving medical record searching scheme based on the ElGamal blind signature. In our scheme, by blinding the health data of the patient and the database of the iDoctor, respectively, the patient can securely make a self-helped medical diagnosis by invoking the past case-database and securely comparing the blinded abstracts of current data and previous records. Moreover, the patient can intelligently obtain the target search information at the same time as learning whether the abstracts match, instead of obtaining it after matching. This greatly increases the timeliness of information acquisition and meets high-speed information sharing requirements, especially in the 5G era. Furthermore, our proposed scheme achieves bilateral security, that is, whether the abstracts match or not, both the privacy of the case-database and the private information of the current patient are well protected. Besides, it resists different levels of brute-force ergodic attacks by adjusting the number of zeros in a bit string according to different security requirements.

75 citations


Journal ArticleDOI
TL;DR: In this article, a discrete memristive Rulkov (m-Rulkov) neuron model is proposed, and the bifurcation routes of the model are revealed by detecting the eigenvalue loci.
Abstract: The magnetic induction effects have been emulated by various continuous memristive models, but they have not yet been successfully described by a discrete memristive model. To address this issue, this article first constructs a discrete memristor and then presents a discrete memristive Rulkov (m-Rulkov) neuron model. The bifurcation routes of the m-Rulkov model are revealed by detecting the eigenvalue loci. Using numerical measures, we investigate the complex dynamics shown in the m-Rulkov model, including regime transition behaviors, transient chaotic bursting regimes, and hyperchaotic firing behaviors, all of which closely depend on the memristor parameter. Consequently, the involvement of the memristor can be used to simulate the magnetic induction effects in such a discrete neuron model. Besides, we elaborate a hardware platform for implementing the m-Rulkov model and acquire diverse spiking-bursting sequences. These results show that the presented model can better characterize the actual firing activities in biological neurons than the Rulkov model when the biophysical memory effect is supplied.
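
For reference, the classical (non-memristive) Rulkov map that the m-Rulkov model extends can be iterated in a few lines. The discrete memristor coupling introduced in the article is not reproduced here, and the parameter values below are typical illustrative choices rather than the paper's:

```python
# Classical Rulkov map iteration (no memristor term); parameters are illustrative only.
import numpy as np

def rulkov(alpha=4.1, mu=0.001, sigma=-1.0, steps=5000, x0=-1.0, y0=-3.0):
    """Return the fast-variable trace x_n, which exhibits spiking-bursting dynamics."""
    x, y = x0, y0
    xs = np.empty(steps)
    for n in range(steps):
        # simultaneous update of the fast (x) and slow (y) variables
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs[n] = x
    return xs

trace = rulkov()
```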

Journal ArticleDOI
TL;DR: This article focuses on scaled consensus tracking for a class of high-order nonlinear multiagent systems with time delays and external disturbances, and a fully distributed consensus protocol is designed to drive all agents to achieve scaled consensus with preassigned ratios.
Abstract: This article focuses on scaled consensus tracking for a class of high-order nonlinear multiagent systems. Different from the existing results, for high-order nonlinear multiagent systems with time delays and external disturbances, a fully distributed consensus protocol is designed to drive all agents to achieve scaled consensus with preassigned ratios. The control gains are varying and updated by distributed adaptive laws. As a result, the presented protocol is independent of any global information and, thus, can be implemented in a fully distributed manner. Simultaneously, a fully distributed control protocol using an adaptive $\sigma$-modification technique is presented to deal with external disturbances, which guarantees that the tracking errors and coupling weights of all following agents are uniformly ultimately bounded. To tackle the derivatives of the functionals with time delays, the Lyapunov–Krasovskii functional is employed to analyze and compensate for them by introducing multi-integral terms. Finally, simulation examples are included to verify the effectiveness of the theoretical results.
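
The notion of scaled consensus with preassigned ratios can be illustrated on a toy first-order, delay-free network: each state x_i is driven so that x_i/kappa_i reaches a common value. This sketch omits the article's high-order dynamics, time delays, disturbances, and adaptive gains; the graph, ratios, and step size are assumptions:

```python
# Toy first-order scaled consensus: x_i / kappa_i converges to a common value.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)    # undirected, connected adjacency matrix (assumed)
kappa = np.array([1.0, 2.0, -1.0, 0.5])      # preassigned scaled-consensus ratios (assumed)
x = np.array([4.0, -3.0, 2.0, 1.0])          # initial agent states
dt = 0.01

for _ in range(5000):
    xi = x / kappa                            # scaled states
    x = x + dt * kappa * (A @ xi - A.sum(axis=1) * xi)   # standard consensus on the scaled states

print(np.round(x / kappa, 3))                 # all four entries approach the same value
```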

Journal ArticleDOI
TL;DR: Simulation results obtained under heterogeneous home environments indicate the advantage of the proposed approach in terms of convergence speed, appliance energy consumption, and number of agents.
Abstract: This article proposes a novel federated reinforcement learning (FRL) approach for the energy management of multiple smart homes with home appliances, a solar photovoltaic system, and an energy storage system. The novelty of the proposed FRL approach lies in the development of a distributed deep reinforcement learning (DRL) model that consists of local home energy management systems (LHEMSs) and a global server (GS). Using energy consumption data, DRL agents for LHEMSs construct and upload their local models to the GS. Then, the GS aggregates the local models to update a global model for LHEMSs and broadcasts it to the DRL agents. Finally, the DRL agents replace the previous local models with the global model and iteratively reconstruct their local models. Simulation results obtained under heterogeneous home environments indicate the advantage of the proposed approach in terms of convergence speed, appliance energy consumption, and number of agents.
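
The server-side step of the proposed FRL approach, aggregating the local LHEMS models into a global model before broadcasting it back, resembles a FedAvg-style parameter average. A minimal sketch, assuming equal weighting across homes and dictionaries of numpy arrays as a stand-in for the DRL network weights:

```python
# Server-side aggregation of local LHEMS models (FedAvg-style sketch; equal weights assumed).
import numpy as np

def aggregate(local_models, weights=None):
    """Average each named parameter array uploaded by the local home energy management systems."""
    n = len(local_models)
    weights = weights or [1.0 / n] * n
    return {name: sum(w * m[name] for w, m in zip(weights, local_models))
            for name in local_models[0]}

# Stand-in for the DRL network weights of three homes.
local = [{"w": np.random.randn(4, 4), "b": np.random.randn(4)} for _ in range(3)]
global_model = aggregate(local)   # broadcast back to every home for the next round
```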

Journal ArticleDOI
TL;DR: Wang et al. propose a blockchain-empowered decentralized horizontal federated learning (FL) framework for 5G-enabled unmanned aerial vehicles (UAVs), in which the authentication of cross-domain UAVs is accomplished through multisignature smart contracts.
Abstract: Motivated by Industry 4.0, 5G-enabled unmanned aerial vehicles (UAVs; also known as drones) are widely applied in various industries. However, the open nature of 5G networks threatens the safe sharing of data. In particular, privacy leakage can lead to serious losses for users. As a new machine learning paradigm, federated learning (FL) avoids privacy leakage by allowing data models to be shared instead of raw data. Unfortunately, the traditional FL framework is strongly dependent on a centralized aggregation server, which will cause the system to crash if the server is compromised. Unauthorized participants may launch poisoning attacks, thereby reducing the usability of models. In addition, communication barriers hinder collaboration among a large number of cross-domain devices for learning. To address the abovementioned issues, a blockchain-empowered decentralized horizontal FL framework is proposed. The authentication of cross-domain UAVs is accomplished through multisignature smart contracts. Global model updates are computed by using these smart contracts instead of a centralized server. Extensive experimental results show that the proposed scheme achieves high efficiency of cross-domain authentication and good accuracy.

Journal ArticleDOI
TL;DR: An open set fault diagnosis method is proposed to address the fault diagnosis problem in a more practical scenario where the test label set consists of a portion of the training label set and some unknown classes.
Abstract: Existing data-driven fault diagnosis methods assume that the label sets of the training data and test data are consistent, which is usually not applicable for real applications since the fault modes that occur in the test phase are unpredictable. To address this problem, open set fault diagnosis (OSFD), where the test label set consists of a portion of the training label set and some unknown classes, is studied in this article. Considering the changeable operating conditions of machinery, OSFD tasks are further divided into shared-domain open set fault diagnosis (SOSFD) and cross-domain open set fault diagnosis (COSFD) in this article. For SOSFD, 1-D convolutional neural networks are trained for learning discriminative features and recognizing fault modes. For COSFD, due to the distribution discrepancy between the source and target domains, the deep model needs to learn domain-invariant features of shared classes and separate features of outlier classes. Thus, by utilizing the output of an additional domain classifier, a model named bilateral weighted adversarial networks is proposed to assign large weights to shared classes and small weights to outlier classes during the feature alignment. In the test phase, samples are classified according to the outputs of the deep model and unknown-class samples are rejected by the extreme value theory model. Experimental results on two bearing datasets demonstrate the effectiveness and superiority of the proposed method.

Journal ArticleDOI
TL;DR: In this article, a lightweight convolutional neural network (CNN) architecture is designed to detect the faces of people in mines, avalanches, under water, or other dangerous situations where their faces might not be very visible against the surrounding background.
Abstract: In this article, we propose a model of face detection in risk situations to help rescue teams speed up the search for people who might need help. The proposed lightweight convolutional neural network (CNN) architecture is designed to detect the faces of people in mines, avalanches, under water, or other dangerous situations where their faces might not be very visible against the surrounding background. We have designed a novel light architecture cooperating with the proposed sliding-window procedure. The designed model works with maximum simplicity to support mobile devices. The processing output presents a box at the face location on the device screen. The model was trained using Adam and tested on various images. Results show that the proposed lightweight CNN detects human faces over various textures with accuracy above 99% and precision above 98%, which proves the efficiency of our proposed model.
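
The sliding-window procedure paired with a patch classifier can be sketched generically. The window size, stride, score threshold, and the stand-in scoring function below are assumptions; the paper's lightweight CNN and its exact procedure are not reproduced:

```python
# Generic sliding-window detection loop (illustrative sketch).
import numpy as np

def sliding_windows(image, win=64, stride=16):
    """Yield (x, y, patch) crops scanned across the image."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, image[y:y + win, x:x + win]

def detect(image, score_fn, win=64, stride=16, threshold=0.5):
    """Return (x, y, size, score) boxes whose face score exceeds the threshold."""
    boxes = []
    for x, y, patch in sliding_windows(image, win, stride):
        score = score_fn(patch)               # a CNN would score the patch here
        if score > threshold:
            boxes.append((x, y, win, score))
    return boxes

dummy = np.random.rand(256, 256)
boxes = detect(dummy, score_fn=lambda patch: float(patch.mean()))  # stand-in scorer
```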

Journal ArticleDOI
TL;DR: In this paper , the authors present an extensive survey of the various federated learning (FL) models currently developed by researchers for providing authentication, privacy, trust management, and attack detection.
Abstract: Federated learning (FL) is a recent development in artificial intelligence, which is typically based on the concept of decentralized data. As cyberattacks frequently occur in the various applications deployed in real time, most industrialists are hesitant to adopt the technology of the Internet of Everything. This article aims to provide an extensive study on how FL can be utilized for providing better cybersecurity and preventing various cyberattacks in real time. We present an extensive survey of the various FL models currently developed by researchers for providing authentication, privacy, trust management, and attack detection. We also discuss a few real-time use cases that have been deployed recently and how FL is adopted in them for preserving the privacy of data and improving the performance of the system. Based on the study, we conclude this article with some prominent challenges and future directions on which researchers can focus for adopting FL in real-time scenarios.

Journal ArticleDOI
TL;DR: Wang et al. propose a multiple-strategies differential privacy framework on sparse tensor factorization (STF) for HOHDST network traffic data analysis, which comprises three differential privacy (DP) mechanisms: $\varepsilon$-DP, concentrated DP, and local DP.
Abstract: Due to its high capacity and fast transmission speed, 5G plays a key role in modern electronic infrastructure. Meanwhile, sparse tensor factorization (STF) is a useful tool for dimension reduction to analyze high-order, high-dimension, and sparse tensor (HOHDST) data, which is transmitted on the 5G Internet of Things (IoT). Hence, HOHDST data relies on STF to obtain complete data and discover rules for real-time and accurate analysis. From another view of computation and data security, current STF solutions seek to improve computational efficiency but neglect the privacy security of IoT data, e.g., data analysis for a network traffic monitoring system. To overcome these problems, this article proposes a multiple-strategies differential privacy framework on STF (MDPSTF) for HOHDST network traffic data analysis. MDPSTF comprises three differential privacy (DP) mechanisms, i.e., $\varepsilon$-DP, concentrated DP, and local DP. Furthermore, a theoretical proof of the privacy bound is presented. Hence, MDPSTF can provide general data protection for HOHDST network traffic data with high security promise. We conduct experiments on two real network traffic datasets (Abilene and GÉANT). The experimental results show that MDPSTF has high universality for various degrees of privacy protection demands and high recovery accuracy for HOHDST network traffic data.
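
Of the three DP strategies, the $\varepsilon$-DP mechanism is the most standard building block: noise calibrated to the query sensitivity and $\varepsilon$ is added to the released quantity. A minimal Laplace-mechanism sketch follows; how MDPSTF actually perturbs the factor matrices is not reproduced, and the traffic values are made up:

```python
# Standard Laplace mechanism for epsilon-DP (illustrative; not MDPSTF's actual perturbation).
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Add Laplace(0, sensitivity/epsilon) noise so the released value satisfies epsilon-DP."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(value))

traffic = np.array([120.0, 87.0, 310.0])      # made-up per-link traffic volumes
released = laplace_mechanism(traffic, sensitivity=1.0, epsilon=0.5)
```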

Journal ArticleDOI
TL;DR: In this article, an adaptive optimized formation control problem is studied for the second-order stochastic multiagent systems (MASs) with unknown nonlinear dynamics using the actor-critic architecture and Lyapunov stability theory to ensure that all the error signals are bounded in probability.
Abstract: In this article, an adaptive optimized formation control problem is studied for second-order stochastic multiagent systems (MASs) with unknown nonlinear dynamics. Compared with first-order formation control, the second-order MASs consider not only the states but also the state rates, which is certainly more challenging and difficult. In the control design of this article, fuzzy logic systems are applied to approximate the nonlinear functions. By employing the actor-critic architecture and Lyapunov stability theory, the proposed optimal formation control strategy ensures that all the error signals are bounded in probability. Finally, simulation examples verify that the proposed formation control approach achieves the desired results.

Journal ArticleDOI
TL;DR: A novel deep convolutional neural network based human activity recognition classifier is presented to enhance identification accuracy in electrocardiogram (ECG) pattern monitoring during daily activity.
Abstract: In next-generation network architecture, the Cybertwin drives the sixth-generation (6G) cellular network to play an active role in many applications, such as healthcare and computer vision. Although the previous fifth-generation (5G) network provides the concepts of edge cloud and core cloud, the internal communication mechanism has not been explained with a specific application. This article introduces a possible Cybertwin based multimodal network (beyond 5G) for electrocardiogram (ECG) pattern monitoring during daily activity. This network paradigm consists of a cloud-centric network and several Cybertwin communication ends. The Cybertwin nodes combine locator/identifier identification support, data caching, behavior logging, and communication assistance in the edge cloud. The application focuses on monitoring ECG patterns during daily activity because few studies analyze them under different motions. We present a novel deep convolutional neural network based human activity recognition classifier to enhance identification accuracy. The healthcare monitoring value and potential clinical significance of observing ECG patterns are provided by the Cybertwin based network.

Journal ArticleDOI
TL;DR: In this paper, a hybrid adaptive differential evolution (HADE) algorithm is proposed to solve the job-shop scheduling problem with fuzzy processing and completion times, where new individuals are selected according to the fitness value obtained from a population consisting of parents and children in HADE.
Abstract: The job-shop scheduling problem (JSP) is NP-hard and has very important practical significance. Because of many uncontrollable factors, such as machine delays or human factors, it is difficult to use a single real number to express the processing and completion times of the jobs. JSP with fuzzy processing and completion times (FJSP) can model the scheduling more comprehensively, benefiting from developments in fuzzy sets. Fuzzy relative entropy leads to a method that can evaluate the quality of a feasible solution by comparing the actual value with the ideal value (the due date). Therefore, the multiobjective FJSP can be transformed into a single-objective optimization problem and solved by a hybrid adaptive differential evolution (HADE) algorithm. The maximum completion time, the total delay time, and the total energy consumption of jobs are considered. HADE adopts a mutation strategy based on DE-current-to-best. Its parameters (CR and F) are all made adaptive and normally distributed. The new individuals are selected according to the fitness value (FRE) obtained from a population consisting of N parents and N children in HADE. The algorithm is analyzed from different viewpoints. As the experimental results demonstrate, the performance of the HADE algorithm is better than those of some other state-of-the-art algorithms (namely, ant colony optimization, artificial bee colony, and particle swarm optimization).
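
The DE/current-to-best mutation with normally distributed adaptive F and CR, on which HADE builds, can be sketched as follows. The population encoding, the distribution parameters, and the stand-in fitness values are assumptions; the fuzzy-relative-entropy evaluation and the parent/child selection step are not reproduced:

```python
# DE/current-to-best/1 mutation with normally distributed adaptive F and CR (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
pop = rng.random((20, 10))              # 20 candidate schedules encoded as real vectors (assumed)
fitness = rng.random(20)                # stand-in for fuzzy relative entropy values
best = pop[fitness.argmin()]

def trial_vector(i):
    """Produce one trial vector for individual i."""
    F = np.clip(rng.normal(0.5, 0.1), 0.1, 1.0)   # adaptive, normally distributed F (assumed stats)
    CR = np.clip(rng.normal(0.7, 0.1), 0.0, 1.0)  # adaptive, normally distributed CR (assumed stats)
    r1, r2 = rng.choice(np.delete(np.arange(len(pop)), i), size=2, replace=False)
    donor = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
    cross = rng.random(pop.shape[1]) < CR
    cross[rng.integers(pop.shape[1])] = True      # keep at least one donor gene
    return np.where(cross, donor, pop[i])

trial = trial_vector(3)
```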

Journal ArticleDOI
TL;DR: In this article , a federated deep generative learning framework, called Fed-LSGAN, is proposed by integrating federated learning and least square generative adversarial networks (LSGANs) for renewable scenario generation.
Abstract: Scenario generation is a fundamental and crucial tool for decision-making in power systems with high-penetration renewables. Based on big historical data, in this article, a novel federated deep generative learning framework, called Fed-LSGAN, is proposed by integrating federated learning and least square generative adversarial networks (LSGANs) for renewable scenario generation. Specifically, federated learning learns a shared global model in a central server from renewable sites at network edges, which enables the Fed-LSGAN to generate scenarios in a privacy-preserving manner without sacrificing the generation quality by transferring model parameters, rather than all data. Meanwhile, the LSGANs-based deep generative model generates scenarios that conform to the distribution of historical data through fully capturing the spatial-temporal characteristics of renewable powers, which leverages the least squares loss function to improve the training stability and generation quality. The simulation results demonstrate that the proposal manages to generate high-quality renewable scenarios and outperforms the state-of-the-art centralized methods. Besides, an experiment with different federated learning settings is designed and conducted to verify the robustness of our method.
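
Each site's local training in Fed-LSGAN rests on the least-squares GAN objectives. A minimal sketch of those two losses follows, with assumed network shapes and an assumed 96-step daily scenario length; the federated aggregation loop is omitted:

```python
# Least-squares GAN objectives for one site's local update (illustrative sketch).
import torch
import torch.nn as nn

Z_DIM, SCENARIO_DIM = 32, 96            # assumed latent size and daily profile length

G = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(), nn.Linear(128, SCENARIO_DIM))
D = nn.Sequential(nn.Linear(SCENARIO_DIM, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

def lsgan_losses(real_batch):
    """Return (discriminator_loss, generator_loss) under the least-squares formulation."""
    z = torch.randn(real_batch.size(0), Z_DIM)
    fake = G(z)
    d_loss = 0.5 * ((D(real_batch) - 1) ** 2).mean() + 0.5 * (D(fake.detach()) ** 2).mean()
    g_loss = 0.5 * ((D(fake) - 1) ** 2).mean()
    return d_loss, g_loss

d_loss, g_loss = lsgan_losses(torch.randn(16, SCENARIO_DIM))   # random stand-in for historical scenarios
```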

Journal ArticleDOI
TL;DR: In this article, a memristive-coupled neural network (MCNN) model based on two sub-neural networks and one multistable memristor synapse is proposed for biomedical image encryption.
Abstract: Neural networks have been widely and deeply studied in the field of computational neurodynamics. However, coupled neural networks and their brain-like chaotic dynamics have not been noticed yet. In this article, we focus on the coupled neural network-based brain-like initial boosting coexisting hyperchaos and its application in biomedical image encryption. We first construct a memristive-coupled neural network (MCNN) model based on two subneural networks and one multistable memristor synapse. Then we investigate its coupling strength-related dynamical behaviors, initial states-related dynamical behaviors, and initial-boosted coexisting hyperchaos using bifurcation diagrams, phase portraits, Lyapunov exponents, and attraction basins. The numerical results demonstrate that the proposed MCNN not only can generate hyperchaotic attractors with high complexity but also can boost the attractor positions by switching their initial states. This makes the MCNN more suitable for many chaos-based engineering applications. Moreover, we design a biomedical image encryption scheme to explore the application of the MCNN. Performance evaluations show that the designed cryptosystem has several advantages in the keyspace, information entropy, and key sensitivity. Finally, we develop a field-programmable gate array test platform to verify the practicability of the presented MCNN and the designed medical image cryptosystem.

Journal ArticleDOI
TL;DR: This paper proposes a method based on hybrid particle swarm optimization to design a WADC that ensures robustness to power system operating uncertainties, time-delay variations on the WADC channels, and the permanent failure of WADC communication channels.
Abstract: The presence of low-frequency and poorly damped oscillation modes can compromise the operating stability of power systems. Recent research has shown that the use of phasor measurement unit data to compose a wide-area damping controller (WADC) is effective in mitigating such oscillation modes, but the possibility of losing communication channels due to cyber-attacks or failures can compromise the proper operation of this controller. Besides, traditional control design methods present difficulties for WADC design. This article proposes a method based on hybrid particle swarm optimization to design a WADC that ensures robustness to power system operating uncertainties, time-delay variations on the WADC channels, and the permanent failure of WADC communication channels. Modal analysis and nonlinear time-domain simulations were conducted on the IEEE 68-bus power system considering a set of scenarios.
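
The optimization core of the design method is particle swarm optimization over the WADC parameters. A plain PSO loop is sketched below with a toy objective; the hybridization, the controller parameterization, and the robustness-oriented cost used in the article are not reproduced:

```python
# Plain particle swarm optimization core loop (illustrative sketch with a toy objective).
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng()
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # candidate controller parameter sets
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

best_params, best_cost = pso(lambda p: float(np.sum(p ** 2)), dim=6)
```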

Journal ArticleDOI
TL;DR: In this paper , an intelligent trust cloud management method is proposed to assure secure and reliable communication in 5G edge computing and D2D enabled Internet of Medical Things (IoMT) systems.
Abstract: 5G edge computing enabled Internet of Medical Things (IoMT) is an efficient technology to provide decentralized medical services while device-to-device (D2D) communication is a promising paradigm for future 5G networks. To assure secure and reliable communication in 5G edge computing and D2D enabled IoMT systems, this article presents an intelligent trust cloud management method. First, an active training mechanism is proposed to construct the standard trust clouds. Second, individual trust clouds of the IoMT devices can be established through fuzzy trust inferring and recommending. Third, a trust classification scheme is proposed to determine whether an IoMT device is malicious. Finally, a trust cloud update mechanism is presented to make the proposed trust management method adaptive and intelligent under an open wireless medium. Simulation results demonstrate that the proposed method can effectively address the trust uncertainty issue and improve the detection accuracy of malicious devices.

Journal ArticleDOI
TL;DR: A new metaheuristic with blockchain-based resource allocation technique (MWBA-RAT) for cybertwin-driven 6G in an IoE environment is presented, together with a new quasi-oppositional search and rescue optimization (QO-SRO) algorithm for the optimal resource allocation process.
Abstract: Rapid advancements of the sixth-generation (6G) network and the Internet of Everything (IoE) support numerous emerging services and applications. Increasing mobile internet traffic and services, on the other hand, present a number of challenges that cannot be addressed with the current network design. The cybertwin is equipped with a variety of capabilities, including communication assistants, network data loggers, and digital asset owners, to address these difficulties. As spectrum resources are limited, effective resource management and sharing are essential in achieving these requirements. With this motivation, this article presents a new metaheuristic with blockchain-based resource allocation technique (MWBA-RAT) for cybertwin-driven 6G in an IoE environment. The incorporation of the blockchain in 6G enables the network to monitor, manage, and share resources effectively. The proposed MWBA-RAT technique designs a new quasi-oppositional search and rescue optimization (QO-SRO) algorithm for the optimal resource allocation process, which constitutes the novelty of the work. The QO-SRO algorithm integrates the quasi-oppositional based learning concept with the traditional SRO algorithm to improve its convergence rate. A wide range of experiments are performed to highlight the enhanced outcomes of the MWBA-RAT technique.
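
The quasi-oppositional ingredient of QO-SRO generates, for each candidate x in bounds [a, b], a point sampled between the interval center and the opposite point a + b - x, which is commonly used to speed up convergence. A minimal sketch with made-up bounds (the search-and-rescue operators themselves are not shown, and the exact way the article applies the quasi-opposite points is an assumption):

```python
# Quasi-oppositional point generation (illustrative sketch; SRO operators not shown).
import numpy as np

def quasi_opposite(x, lower, upper, rng=np.random.default_rng()):
    """Sample uniformly between the interval centre and the opposite point lower+upper-x."""
    centre = (lower + upper) / 2.0
    opposite = lower + upper - x
    return rng.uniform(np.minimum(centre, opposite), np.maximum(centre, opposite))

lower, upper = np.zeros(5), np.ones(5)        # assumed bounds of the resource-allocation variables
candidate = np.random.default_rng(1).uniform(lower, upper)
qo_candidate = quasi_opposite(candidate, lower, upper)
```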