
Showing papers in "IEEE Transactions on Industrial Informatics in 2019"


Journal ArticleDOI
TL;DR: This paper thoroughly reviews the state of the art of DT research concerning the key components of DTs, the current development of DTs, and the major DT applications in industry, and outlines the current challenges and some possible directions for future work.
Abstract: Digital twin (DT) is one of the most promising enabling technologies for realizing smart manufacturing and Industry 4.0. DTs are characterized by the seamless integration between the cyber and physical spaces. The importance of DTs is increasingly recognized by both academia and industry. It has been almost 15 years since the concept of the DT was initially proposed. To date, many DT applications have been successfully implemented in different industries, including product design, production, prognostics and health management, and some other fields. However, at present, no paper has focused on the review of DT applications in industry. In an effort to understand the development and application of DTs in industry, this paper thoroughly reviews the state-of-the-art of the DT research concerning the key components of DTs, the current development of DTs, and the major DT applications in industry. This paper also outlines the current challenges and some possible directions for future work.

1,467 citations


Journal ArticleDOI
TL;DR: A novel deep learning framework to achieve highly accurate machine fault diagnosis using transfer learning to enable and accelerate the training of deep neural network is developed.
Abstract: We develop a novel deep learning framework to achieve highly accurate machine fault diagnosis, using transfer learning to enable and accelerate the training of deep neural networks. Compared with existing methods, the proposed method is faster to train and more accurate. First, original sensor data are converted to images by conducting a wavelet transformation to obtain time-frequency distributions. Next, a pretrained network is used to extract lower level features. The labeled time-frequency images are then used to fine-tune the higher levels of the neural network architecture. This paper creates a machine fault diagnosis pipeline, and experiments are carried out to verify the effectiveness and generalization of the pipeline on three main mechanical datasets, including induction motors, gearboxes, and bearings, with sizes of 6000, 9000, and 5000 time-series samples, respectively. We achieve state-of-the-art results on each dataset, with most datasets showing test accuracy near 100%; on the gearbox dataset, we achieve a significant improvement from 94.8% to 99.64%. We created a repository including these datasets, located at mlmechanics.ics.uci.edu.
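
The preprocessing step this abstract describes, turning raw sensor time series into time-frequency images suitable for a pretrained CNN, can be sketched as follows. A magnitude STFT stands in for the paper's wavelet transform, and the toy 50 Hz vibration signal is invented for illustration:

```python
import numpy as np

def time_frequency_image(signal, window=64, hop=16):
    """Turn a 1-D sensor signal into a 2-D time-frequency image.
    A magnitude STFT is used here as a simple stand-in for the
    paper's wavelet transform."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window] * np.hanning(window)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T          # rows: frequency bins, cols: time

# toy vibration signal: a 50 Hz tone plus noise, sampled at 1 kHz
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1e-3)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
img = time_frequency_image(x)
print(img.shape)                       # (33, 59): ready as a CNN input
```

The resulting 2-D array plays the role of the labeled time-frequency images that the paper feeds to the pretrained network for fine-tuning.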

721 citations


Journal ArticleDOI
TL;DR: The proposed approach addresses energy-trading users’ privacy in the smart grid and screens the distribution of sellers’ energy sales, motivated by the fact that energy trading volumes can be mined to reveal relationships with other information, such as physical location and energy usage.
Abstract: Implementing blockchain techniques has enabled secure smart trading in many realms, e.g., neighboring energy trading. However, trading information recorded on the blockchain also brings privacy concerns. Attackers can utilize data mining algorithms to obtain users’ private information, especially when the user group is located in nearby geographic positions. In this paper, we present a consortium blockchain-oriented approach to solve the problem of privacy leakage without restricting trading functions. The proposed approach mainly addresses energy-trading users’ privacy in the smart grid and screens the distribution of sellers’ energy sales, motivated by the fact that energy trading volumes can be mined to reveal relationships with other information, such as physical location and energy usage. Experimental evaluations have demonstrated the effectiveness of the proposed approach.

407 citations


Journal ArticleDOI
TL;DR: This work proposes a credit-based proof-of-work (PoW) mechanism for IoT devices, which can guarantee system security and transaction efficiency simultaneously, and designs a data authority management method to regulate the access to sensor data.
Abstract: The Industrial Internet of Things (IIoT) plays an indispensable role in Industry 4.0, where people are committed to implementing a general, scalable, and secure IIoT system to be adopted across various industries. However, existing IIoT systems are vulnerable to single points of failure and malicious attacks, and cannot provide stable services. Due to the resilience and security promise of blockchain, the idea of combining blockchain and the Internet of Things (IoT) has gained considerable interest. However, blockchains are power-intensive and low-throughput, which makes them unsuitable for power-constrained IoT devices. To tackle these challenges, we present a blockchain system with a credit-based consensus mechanism for IIoT. We propose a credit-based proof-of-work (PoW) mechanism for IoT devices, which can guarantee system security and transaction efficiency simultaneously. In order to protect sensitive data confidentiality, we design a data authority management method to regulate access to sensor data. In addition, our system is built on directed acyclic graph (DAG)-structured blockchains, which are more efficient than Satoshi-style blockchains in performance. We implement the system on Raspberry Pi and conduct a case study for a smart factory. Extensive evaluation and analysis results demonstrate that the credit-based PoW mechanism and data access control are secure and efficient in IIoT.
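
The credit-based PoW idea, easing the hash puzzle for well-behaved nodes so that honest devices spend less power, can be sketched as follows. The credit-to-difficulty mapping and all constants are illustrative assumptions, not the paper's actual design:

```python
import hashlib

def difficulty_bits(credit, base_bits=18, min_bits=8):
    """Map a node's credit score to a PoW difficulty: high-credit
    (well-behaved) nodes must solve an easier puzzle. The linear
    mapping and the constants are invented for illustration."""
    return max(min_bits, base_bits - credit)

def mine(block_data, credit):
    """Find a nonce whose SHA-256 hash falls below the credit-derived
    target (i.e., has the required number of leading zero bits)."""
    bits = difficulty_bits(credit)
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, bits
        nonce += 1

nonce, bits = mine("sensor-reading-42", credit=8)   # credit 8 -> 10-bit target
print(bits)  # 10
```

A node caught misbehaving would lose credit, raising its required difficulty and making attacks progressively more expensive.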

388 citations


Journal ArticleDOI
TL;DR: A review of the state-of-the-art of distributed filtering and control of industrial CPSs described by differential dynamics models is presented and some challenges are raised to guide the future research.
Abstract: Industrial cyber-physical systems (CPSs) are large-scale, geographically dispersed, and life-critical systems, in which many sensors and actuators are embedded and networked together to facilitate real-time monitoring and closed-loop control. Their intrinsic features in geographic space and resources put forward urgent requirements of reliability and scalability for the designed filtering or control schemes. This paper presents a review of the state of the art of distributed filtering and control of industrial CPSs described by differential dynamics models. Special attention is paid to sensor networks, manipulators, and power systems. For real-time monitoring, some typical Kalman-based distributed algorithms are summarized and their performance in terms of calculation burden and communication burden, as well as scalability, is discussed in depth. Then, the characteristics of non-Kalman cases are further disclosed in light of the constructed filter structures. Furthermore, the latest development is surveyed for distributed cooperative control of mobile manipulators and distributed model predictive control in industrial automation systems. By resorting to droop characteristics, representative distributed control strategies classified by controller structures are systematically summarized for power systems with the requirements of power sharing and voltage and frequency regulation. In addition, distributed security control of industrial CPSs is reviewed when cyber-attacks are taken into consideration. Finally, some challenges are raised to guide future research.

376 citations


Journal ArticleDOI
TL;DR: A deep transfer learning (DTL) network based on a sparse autoencoder (SAE) is presented, and a case study on remaining useful life (RUL) prediction of a cutting tool is performed to validate the effectiveness of the DTL method.
Abstract: Deep learning, with its ability for feature learning and nonlinear function approximation, has shown its effectiveness for machine fault prediction. However, how to transfer a deep network trained on historical failure data to the prediction of a new object is rarely researched. In this paper, a deep transfer learning (DTL) network based on a sparse autoencoder (SAE) is presented. In the DTL method, three transfer strategies, that is, weight transfer, transfer learning of hidden features, and weight update, are used to transfer an SAE trained on historical failure data to a new object. By these strategies, prediction for the new object is achieved without supervised information for training. Moreover, the features learned by the deep transfer network for the new object share similar characteristics with those of the historical failure data, which is beneficial to accurate prediction. A case study on remaining useful life (RUL) prediction of a cutting tool is performed to validate the effectiveness of the DTL method. An SAE network is first trained on run-to-failure data with RUL information of a cutting tool in an offline process. The trained network is then transferred to a new tool under operation for online RUL prediction. The highly accurate prediction results show the advantage of the DTL method for RUL prediction.
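
The three transfer strategies named in the abstract can be sketched in a few lines of NumPy. Everything here is a simplified stand-in (random "historical" weights, a tied-weight decoder, and a decoder-only gradient step), not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """One sparse-autoencoder (SAE) encoding layer, sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Strategy 1 -- weight transfer: reuse an encoder assumed to have been
# trained on historical run-to-failure data (weights invented here).
W_hist = rng.normal(0.0, 0.5, (8, 4))
b_hist = np.zeros(4)

# Strategy 2 -- hidden-feature transfer: apply that encoder to data
# from the new, unlabeled tool.
x_new = rng.normal(size=(20, 8))
h = encode(x_new, W_hist, b_hist)

# Strategy 3 -- weight update: one unsupervised reconstruction step on
# the new data (tied-weight decoder; the code h is held fixed, a
# deliberate simplification of the true SAE gradient).
W = W_hist.copy()
recon = h @ W.T
loss_before = ((recon - x_new) ** 2).sum()
W -= 0.01 * (recon - x_new).T @ h      # descent step on the decoder loss
loss_after = ((h @ W.T - x_new) ** 2).sum()
print(loss_after < loss_before)        # True: the transferred net adapts
```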

336 citations


Journal ArticleDOI
TL;DR: The paper proposes BodyEdge, a novel architecture well suited for human-centric applications in the context of the emerging healthcare industry, consisting of a tiny mobile client module and a high-performance edge gateway supporting multiradio and multitechnology communication.
Abstract: The edge computing paradigm has attracted much interest in the last few years as a valid alternative to standard cloud-based approaches for reducing interaction latency and the huge amount of data flowing from Internet of Things (IoT) devices toward the Internet. In the near future, edge-based approaches will be essential to support time-dependent applications in the Industry 4.0 context; thus, this paper proposes BodyEdge, a novel architecture well suited for human-centric applications, in the context of the emerging healthcare industry. It consists of a tiny mobile client module and a high-performance edge gateway supporting multiradio and multitechnology communication to collect and locally process data coming from different scenarios; moreover, it also exploits the facilities made available by both private and public cloud platforms to guarantee high flexibility, robustness, and an adaptive service level. The advantages of the designed software platform have been evaluated in terms of reduced transmitted data and processing time through a real implementation on different hardware platforms. The conducted study also highlighted the network conditions (data load and processing delay) under which BodyEdge is a valid and inexpensive solution for healthcare application scenarios.

287 citations


Journal ArticleDOI
TL;DR: A decentralized adaptive formation controller is designed that ensures uniform ultimate boundedness of the closed-loop system with prescribed performance and avoids collision between each vehicle and its leader.
Abstract: This paper addresses a decentralized leader–follower formation control problem for a group of fully actuated unmanned surface vehicles with prescribed performance and collision avoidance. The vehicles are subject to time-varying external disturbances, and the vehicle dynamics include both parametric uncertainties and uncertain nonlinear functions. The control objective is to make each vehicle follow its reference trajectory and avoid collision between each vehicle and its leader. We consider prescribed performance constraints, including transient and steady-state performance constraints, on the formation tracking errors. In the kinematic design, we introduce the dynamic surface control technique to avoid the use of the vehicle's acceleration. To compensate for the uncertainties and disturbances, we apply an adaptive control technique to estimate the uncertain parameters, including the upper bounds of the disturbances, and present neural network approximators to estimate the uncertain nonlinear dynamics. Consequently, we design a decentralized adaptive formation controller that ensures uniform ultimate boundedness of the closed-loop system with prescribed performance and avoids collision between each vehicle and its leader. Simulation results illustrate the effectiveness of the decentralized formation controller.

273 citations


Journal ArticleDOI
TL;DR: A blockchain-enabled efficient data collection and secure sharing scheme combining Ethereum blockchain and deep reinforcement learning (DRL) creates a reliable and safe environment, providing a higher security level and stronger resistance to attack than a traditional database-based data sharing scheme.
Abstract: With the rapid development of smart mobile terminals (MTs), various industrial Internet of Things (IIoT) applications can fully leverage them to collect and share data for providing certain services. However, two key challenges still remain. One is how to achieve high-quality data collection with limited MT energy resources and sensing range. Another is how to ensure security when sharing and exchanging data among MTs, to prevent possible device failures, network communication failures, malicious users or attackers, etc. To this end, we propose a blockchain-enabled efficient data collection and secure sharing scheme combining Ethereum blockchain and deep reinforcement learning (DRL) to create a reliable and safe environment. In this scheme, DRL is used to maximize the amount of collected data, and blockchain technology is used to ensure the security and reliability of data sharing. Extensive simulation results demonstrate that the proposed scheme can provide a higher security level and stronger resistance to attack than a traditional database-based data sharing scheme for different levels/types of attacks.

251 citations


Journal ArticleDOI
TL;DR: The Blockchain architecture, an emerging scheme for constructing distributed networks, is introduced to reshape the traditional IIoT architecture into a new multicenter, partially decentralized architecture that provides better security and privacy protection than the traditional architecture.
Abstract: Through the Industrial Internet of Things (IIoT), the smart factory has entered a booming period. However, as the number of nodes and the network size become larger, the traditional IIoT architecture can no longer provide effective support for such an enormous system. Therefore, we introduce the Blockchain architecture, an emerging scheme for constructing distributed networks, to reshape the traditional IIoT architecture. First, the major problems of the traditional IIoT architecture are analyzed, and the existing improvements are summarized. Second, we introduce a security and privacy model to help design the Blockchain-based architecture. On this basis, we decompose and reorganize the original IIoT architecture to form a new multicenter, partially decentralized architecture. Then, we introduce relevant security technologies to improve and optimize the new architecture. After that, we design the data interaction process and the algorithms of the architecture. Finally, we use an automatic production platform to discuss the specific implementation. The experimental results show that the proposed architecture provides better security and privacy protection than the traditional architecture. Thus, the proposed architecture represents a significant improvement on the original architecture, which provides a new direction for IIoT development.

242 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed framework can effectively improve the performance of blockchain-enabled IIoT systems and adapt well to the dynamics of the IIoT.
Abstract: Recent advances in the industrial Internet of things (IIoT) provide plenty of opportunities for various industries. To address the security and efficiency issues of the massive IIoT data, blockchain is widely considered as a promising solution to enable data storing/processing/sharing in a secure and efficient way. To meet the high throughput requirement, this paper proposes a novel deep reinforcement learning (DRL)-based performance optimization framework for blockchain-enabled IIoT systems, the goals of which are threefold: 1) providing a methodology for evaluating the system from the aspects of scalability, decentralization, latency, and security; 2) improving the scalability of the underlying blockchain without affecting the system's decentralization, latency, and security; and 3) designing a modulable blockchain for IIoT systems, where the block producers, consensus algorithm, block size, and block interval can be selected/adjusted using the DRL technique. Simulation results show that our proposed framework can effectively improve the performance of blockchain-enabled IIoT systems and adapt well to the dynamics of the IIoT.

Journal ArticleDOI
TL;DR: A distributed secondary control scheme with a sampled-data-based event-triggered communication mechanism is proposed to achieve active power sharing and frequency regulation in a unified framework, where neighborhood sampled-data exchange occurs only when the predefined triggering condition is violated.
Abstract: This paper is concerned with active power sharing and frequency regulation in an islanded microgrid under event-triggered communication. A distributed secondary control scheme with a sampled-data-based event-triggered communication mechanism is proposed to achieve active power sharing and frequency regulation in a unified framework, where neighborhood sampled-data exchange occurs only when the predefined triggering condition is violated. Compared with traditional periodic communication mechanisms, the proposed event-triggered communication mechanism shows a prominent ability to reduce the number of communications among neighbors while guaranteeing the desired performance level of microgrids. By employing the Lyapunov–Krasovskii functional method, some sufficient conditions are derived to characterize the effects of control gains, system parameters, and sampling period on the stability of microgrids. Finally, case studies on a modified IEEE 34-bus test system are conducted to evaluate the performance of the proposed distributed control scheme, showcasing its effectiveness, robustness against load changes, and plug-and-play ability.
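
The sampled-data event-triggering rule, broadcast only when the deviation since the last transmitted sample exceeds a threshold, can be sketched as follows. The settling-frequency signal and the threshold are invented for illustration:

```python
import numpy as np

def event_triggered(samples, threshold):
    """Sampled-data event trigger: a node broadcasts a sample only
    when it deviates from the last broadcast value by more than the
    threshold; otherwise the sample is dropped."""
    sent_at = [0]
    last = samples[0]
    for k in range(1, len(samples)):
        if abs(samples[k] - last) > threshold:
            sent_at.append(k)
            last = samples[k]
    return sent_at

# illustrative secondary-control transient: frequency settling to 50 Hz
t = np.linspace(0.0, 10.0, 1001)
freq = 50.0 + 0.5 * np.exp(-t)
sent = event_triggered(freq, threshold=0.05)
print(len(sent), "of", len(freq), "samples transmitted")
```

With a periodic scheme every one of the 1001 samples would be exchanged; the trigger reduces this to roughly a dozen transmissions for the same settling trajectory, which is the communication saving the abstract claims.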

Journal ArticleDOI
TL;DR: Testbed experiments reveal that the proposed FCDAA enhances energy efficiency and battery lifetime at acceptable reliability (∼0.95) by appropriately tuning the duty cycle and TPC, unlike conventional methods.
Abstract: Due to challenging issues in cloud computing, such as computational complexity and high delay, edge computing has overtaken the conventional process by efficiently and fairly allocating resources, i.e., power and battery lifetime, in Internet of Things (IoT)-based industrial applications. Meanwhile, intelligent and accurate resource management by artificial intelligence (AI) has become the center of attention, especially in industrial applications. Coordinating AI at the edge will remarkably enhance the range and computational speed of IoT-based devices in industries. However, the challenge in these power-hungry, short-battery-lifetime, delay-intolerant portable devices is the inappropriate and inefficient classical approach to fair resource allotment. It is also evident from extensive industrial datasets that dynamic wireless channels cannot be supported by typical power-saving and battery-lifetime techniques, for example, predictive transmission power control (TPC) and baseline schemes. Thus, this paper proposes: 1) a forward central dynamic and available approach (FCDAA) that adapts the running time of sensing and transmission processes in IoT-based portable devices; 2) a system-level battery model evaluating the energy dissipation in IoT devices; and 3) a data reliability model for edge AI-based IoT devices over a hybrid TPC and duty-cycle network. Two important cases, static (i.e., product processing) and dynamic (i.e., vibration and fault diagnosis), are introduced for proper monitoring of an industrial platform. Testbed experiments reveal that the proposed FCDAA enhances energy efficiency and battery lifetime at acceptable reliability (∼0.95) by appropriately tuning the duty cycle and TPC, unlike conventional methods.
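
The duty-cycle trade-off behind a system-level battery model can be illustrated with a one-line average-current calculation; the cell capacity and current draws below are invented, not the paper's measurements:

```python
def battery_lifetime_hours(capacity_mah, i_active_ma, i_sleep_ma, duty_cycle):
    """System-level battery model: lifetime follows from the average
    current draw under a given duty cycle. All figures used below are
    illustrative assumptions."""
    i_avg = duty_cycle * i_active_ma + (1.0 - duty_cycle) * i_sleep_ma
    return capacity_mah / i_avg

# 2000 mAh cell, 50 mA while sensing/transmitting, 0.5 mA asleep
for duty in (0.5, 0.1, 0.01):
    print(duty, round(battery_lifetime_hours(2000, 50, 0.5, duty), 1))
```

Lowering the duty cycle stretches lifetime dramatically, which is why FCDAA's tuning of sensing/transmission running time pays off, at the cost of the reliability constraint the paper models separately.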

Journal ArticleDOI
TL;DR: A data-driven diagnostic technique, Gaussian process regression for in situ capacity estimation (GP-ICE), which estimates battery capacity using voltage measurements over short periods of galvanostatic operation, with approximately 2%–3% root-mean-squared error.
Abstract: Accurate on-board capacity estimation is of critical importance in lithium-ion battery applications. Battery charging/discharging often occurs under a constant current load, and hence voltage versus time measurements under this condition may be accessible in practice. This paper presents a data-driven diagnostic technique, Gaussian process regression for in situ capacity estimation (GP-ICE), which estimates battery capacity using voltage measurements over short periods of galvanostatic operation. Unlike previous works, GP-ICE does not rely on interpreting the voltage–time data as incremental capacity (IC) or differential voltage (DV) curves. This overcomes the need to differentiate the voltage–time data (a process that amplifies measurement noise), and the requirement that the range of voltage measurements encompasses the peaks in the IC/DV curves. GP-ICE is applied to two datasets, consisting of 8 and 20 cells, respectively. In each case, within certain voltage ranges, as little as 10 s of galvanostatic operation enables capacity estimates with approximately 2%–3% root-mean-squared error (RMSE).

Journal ArticleDOI
TL;DR: A novel face-pose estimation framework named multitask manifold deep learning is proposed, based on feature extraction with improved convolutional neural networks (CNNs) and multimodal mapping-relationship learning with multitask learning.
Abstract: Face-pose estimation aims at estimating the gazing direction with two-dimensional face images. It gives important communicative information and visual saliency. However, it is challenging because of lights, background, face orientations, and appearance visibility. Therefore, a descriptive representation of face images and mapping it to poses are critical. In this paper, we use multimodal data and propose a novel face-pose estimation framework named multitask manifold deep learning ( $\text{M}^2\text{DL}$ ). It is based on feature extraction with improved convolutional neural networks (CNNs) and multimodal mapping relationship with multitask learning. In the proposed CNNs, manifold regularized convolutional layers learn the relationship between outputs of neurons in a low-rank space. Besides, in the proposed mapping relationship learning method, different modals of face representations are naturally combined by applying multitask learning with incoherent sparse and low-rank learning with a least-squares loss. Experimental results on three challenging benchmark datasets demonstrate the performance of $\text{M}^2\text{DL}$ .

Journal ArticleDOI
TL;DR: In this treatise, a cloud computing service is introduced into the blockchain platform to assist in offloading computational tasks from the IIoT network, and a multiagent reinforcement learning algorithm is conceived for finding the near-optimal policy.
Abstract: The past few years have witnessed compelling applications of the blockchain technique in our daily life, ranging from the financial market to health care. Considering the integration of the blockchain technique and the industrial Internet of Things (IoT), blockchain may act as a distributed ledger for beneficially establishing a decentralized autonomous trading platform for industrial IoT (IIoT) networks. However, power and computation constraints prevent IoT devices from directly participating in the proof-of-work process. As a remedy, in this treatise, a cloud computing service is introduced into the blockchain platform to assist in offloading computational tasks from the IIoT network itself. In addition, we study the resource management and pricing problem between the cloud provider and the miners. More explicitly, we model the interaction between the cloud provider and the miners as a Stackelberg game, where the leader, i.e., the cloud provider, sets the price first, and the miners then act as followers. Moreover, in order to find the Nash equilibrium of the proposed Stackelberg game, a multiagent reinforcement learning algorithm is conceived for finding the near-optimal policy. Finally, extensive simulations are conducted to evaluate our proposed algorithm in comparison to some state-of-the-art schemes.
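
The leader-follower structure of such a Stackelberg pricing game can be illustrated with a toy one-follower instance solved by grid search; the quadratic miner utility and all constants are assumptions for illustration, not the paper's model:

```python
import numpy as np

# Illustrative Stackelberg game: a cloud provider (leader) sets a unit
# price p for computing; a miner (follower) chooses how much to buy.
v, c = 10.0, 1.0                       # miner valuation / cost parameters

def follower_best_response(p):
    # miner maximizes v*q - p*q - 0.5*c*q**2  =>  q*(p) = (v - p)/c
    return max((v - p) / c, 0.0)

# leader anticipates the follower's reaction and maximizes revenue
prices = np.linspace(0.0, v, 1001)
revenue = np.array([p * follower_best_response(p) for p in prices])
p_star = prices[revenue.argmax()]
print(p_star)                          # 5.0: the leader's equilibrium price
```

The paper replaces this closed-form backward induction with multiagent reinforcement learning, which becomes necessary once many heterogeneous miners interact.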

Journal ArticleDOI
TL;DR: This paper proposes an efficient deep learning framework for fruit classification based on two architectures: a proposed light model of six convolutional neural network layers, and a fine-tuned visual geometry group-16 (VGG-16) pretrained deep learning model.
Abstract: Fruit classification is an important task in many industrial applications. A fruit classification system may be used to help a supermarket cashier identify the fruit species and prices. It may also be used to help people decide whether specific fruit species meet their dietary requirements. In this paper, we propose an efficient framework for fruit classification using deep learning. More specifically, the framework is based on two different deep learning architectures. The first is a proposed light model of six convolutional neural network layers, whereas the second is a fine-tuned visual geometry group-16 pretrained deep learning model. Two color image datasets, one of which is publicly available, are used to evaluate the proposed framework. The first dataset (dataset 1) consists of clear fruit images, whereas the second dataset (dataset 2) contains fruit images that are challenging to classify. Classification accuracies of 99.49% and 99.75% were achieved on dataset 1 for the first and second models, respectively. On dataset 2, the first and second models obtained accuracies of 85.43% and 96.75%, respectively.
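
The "light model of six convolutional neural network layers" can be illustrated with a stripped-down, single-channel forward pass. The kernels are random and untrained, purely to show how six valid-mode 3x3 layers transform a 32x32 input:

```python
import numpy as np

def conv2d_relu(img, kernel):
    """Valid-mode single-channel 2-D convolution followed by ReLU."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
x = rng.random((32, 32))               # stand-in for one fruit-image channel
for _ in range(6):                     # six conv layers, as in the light model
    x = conv2d_relu(x, rng.normal(0.0, 0.1, (3, 3)))
print(x.shape)                         # (20, 20): each 3x3 layer trims 2 px
```

A real implementation adds channels, pooling, and a classification head; this sketch only shows the depth/shape bookkeeping of the six-layer stack.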

Journal ArticleDOI
TL;DR: A practical privacy-preserving data aggregation scheme is proposed without a TTP, in which users with a degree of mutual trust construct a virtual aggregation area to mask individual users' data, while the aggregation result suffers almost no loss of data utility in large-scale applications.
Abstract: Real-time electricity consumption data can be used in value-added services such as big data analysis; meanwhile, individual users' privacy needs to be protected. How to balance data utility and privacy preservation is a vital issue, for which privacy-preserving data aggregation could be a feasible solution. Most of the existing data aggregation schemes rely on a trusted third party (TTP). However, this assumption has a negative impact on reliability, because the system can easily be knocked down by a denial-of-service attack. In this paper, a practical privacy-preserving data aggregation scheme is proposed without a TTP, in which users with a degree of mutual trust construct a virtual aggregation area to mask individual users' data, while the aggregation result suffers almost no loss of data utility in large-scale applications. The computation cost and communication overhead are reduced in order to promote practicability. Moreover, the security analysis and performance evaluation show that the proposed scheme is robust and efficient.
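
The virtual-aggregation-area masking idea can be sketched with zero-sum random masks: each user's individual reading is hidden, yet the group aggregate is preserved exactly. The readings and mask range are illustrative:

```python
import random

def mask_readings(readings, rng=None):
    """Within a virtual aggregation area, each user adds a random mask.
    The masks sum to zero by construction, so the aggregate stays exact
    while no single reading appears in the clear."""
    rng = rng or random.Random(7)
    masks = [rng.uniform(-10.0, 10.0) for _ in readings[:-1]]
    masks.append(-sum(masks))              # zero-sum masks
    return [r + m for r, m in zip(readings, masks)]

readings = [1.2, 0.8, 2.5, 1.9]            # kWh, illustrative
masked = mask_readings(readings)
print(abs(sum(masked) - sum(readings)) < 1e-9)   # True: aggregate preserved
```

Distributing the masks among mutually trusting users is what removes the need for a trusted third party in the paper's setting.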

Journal ArticleDOI
TL;DR: A method for preserving the position confidentiality of roaming PBS users using machine learning techniques is proposed, and experiments confirm that the proposed method achieves above 90% position confidentiality in PBSs.
Abstract: Position-based services (PBSs) that deliver networked amenities based on roaming users' positions have become progressively popular with the propagation of smart mobile devices. Position is one of the most important contexts in PBSs. For effective PBSs, extracting and recognizing meaningful positions and estimating the subsequent position are fundamental procedures. Several researchers and practitioners have tried to recognize and predict positions using various techniques; however, only a few consider the development of position-based real-time applications that address the significant tasks of PBSs. In this paper, a method for preserving the position confidentiality of roaming PBS users using machine learning techniques is proposed. We recommend a three-phase procedure for roaming PBS users. It identifies user position by merging decision trees and k-nearest neighbors, and estimates the user's destination, along with the position track sequence, using hidden Markov models. Moreover, a mobile edge computing service policy is followed in the proposed paradigm, which ensures the timely delivery of PBSs. The mobile edge service policy offers position confidentiality and low latency by providing networking and computing services in the vicinity of roaming users. Thorough experiments are conducted, and it is confirmed that the proposed method achieves above 90% position confidentiality in PBSs.

Journal ArticleDOI
TL;DR: A multiple-layer data-driven cyber-attack detection system utilizing network, system, and process data is developed and shown to detect physically impactful cyber attacks before significant consequences occur.
Abstract: The growing number of attacks against cyber-physical systems in recent years elevates the concern for cybersecurity of industrial control systems (ICSs). The current efforts of ICS cybersecurity are mainly based on firewalls, data diodes, and other methods of intrusion prevention, which may not be sufficient against growing cyber threats from motivated attackers. To enhance the cybersecurity of ICSs, a cyber-attack detection system built on the concept of defense-in-depth is developed utilizing network traffic data, host system data, and measured process parameters. This attack detection system provides multiple layers of defense in order to buy defenders precious time before unrecoverable consequences occur in the physical system. The data used for demonstrating the proposed detection system are from a real-time ICS testbed. Five attacks, including man-in-the-middle (MITM), denial of service (DoS), data exfiltration, data tampering, and false data injection, are carried out to simulate the consequences of cyber attacks and generate data for building data-driven detection models. Four classical classification models based on network data and host system data are studied, including k-nearest neighbor (KNN), decision tree, bootstrap aggregating (bagging), and random forest (RF), to provide a secondary line of defense of cyber-attack detection in the event that the intrusion prevention layer fails. Intrusion detection results suggest that KNN, bagging, and RF have low missed-alarm and false-alarm rates for MITM and DoS attacks, providing accurate and reliable detection of these cyber attacks. Cyber attacks that may not be detectable by monitoring network and host system data, such as command tampering and false data injection attacks by an insider, are monitored for by traditional process monitoring protocols. In the proposed detection system, an auto-associative kernel regression model is studied to strengthen early attack detection. The results show that this approach detects physically impactful cyber attacks before significant consequences occur. The proposed multiple-layer data-driven cyber-attack detection system utilizing network, system, and process data is a promising solution for safeguarding an ICS.
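
Of the four classifiers studied, k-nearest neighbor is the simplest to sketch; the traffic features, labels, and thresholds below are invented for illustration only:

```python
import numpy as np

def knn_predict(X, y, query, k=3):
    """Classify a query by majority vote among its k nearest training
    samples (Euclidean distance), as in the paper's KNN baseline."""
    dist = np.linalg.norm(X - query, axis=1)
    votes = y[np.argsort(dist)[:k]]
    return int(np.bincount(votes).argmax())

# toy traffic features: [packets/s, mean packet size in bytes]
X = np.array([[10, 500], [12, 480], [11, 520],        # 0 = normal
              [900, 60], [950, 64], [880, 58]], float)  # 1 = DoS flood
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([920.0, 62.0])))     # 1: flagged as DoS
```

In practice the features would be scaled and drawn from the testbed's network and host logs; the sketch only shows the voting mechanics of this secondary defense layer.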

Journal ArticleDOI
TL;DR: An innovative two-stage automated approach to estimate the RUL of bearings using deep neural networks (DNNs) is presented, which has achieved satisfactory prediction performance for a real bearing degradation dataset with different working conditions.
Abstract: The degradation of bearings plays a key role in the failures of industrial machinery. Prognosis of bearings is critical in adopting an optimal maintenance strategy to reduce the overall cost and to avoid unwanted downtime or even casualties by estimating the remaining useful life (RUL) of the bearings. Traditional data-driven approaches of RUL prediction rely heavily on manual feature extraction and selection using human expertise. This paper presents an innovative two-stage automated approach to estimate the RUL of bearings using deep neural networks (DNNs). A denoising-autoencoder-based DNN is used to classify the acquired signals of the monitored bearings into different degradation stages. Representative features are extracted directly from the raw signal by training the DNN. Then, regression models based on shallow neural networks are constructed for each health stage. The final RUL result is obtained by smoothing the regression results from different models. The proposed approach has achieved satisfactory prediction performance for a real bearing degradation dataset with different working conditions.
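The two-stage idea (stage classification, then per-stage regression with smoothing) can be illustrated on synthetic data. The monotone degradation feature, stage thresholds, and linear regressors below are simplifying assumptions standing in for the paper's denoising-autoencoder classifier and shallow-network regressors:

```python
import numpy as np

# Hypothetical degradation feature rising over a bearing's life.
t = np.arange(100)
feature = 0.01 * t + 0.0005 * t**2            # illustrative health indicator
rul = t[-1] - t                               # ground-truth remaining life

# Stage 1: split the life into degradation stages (the paper uses a
# DNN classifier here; we use simple feature thresholds instead).
stages = np.digitize(feature, bins=[1.0, 3.0])  # 0=healthy, 1=degrading, 2=severe

# Stage 2: fit one simple regressor per stage mapping feature -> RUL.
models = {}
for s in np.unique(stages):
    mask = stages == s
    models[s] = np.polyfit(feature[mask], rul[mask], 1)  # linear fit per stage

def predict_rul(f, s):
    a, b = models[s]
    return a * f + b

pred = np.array([predict_rul(f, s) for f, s in zip(feature, stages)])
# Smooth the stitched per-stage predictions, as the abstract suggests.
smooth = np.convolve(pred, np.ones(5) / 5, mode="same")
```

The per-stage fits are cheap because each stage covers a near-linear slice of the degradation curve; the smoothing step hides the seams between models.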

Journal ArticleDOI
TL;DR: A novel framework for privacy-preserved traffic sharing among taxi companies is proposed, which jointly considers the privacy, profits, and fairness for participants.
Abstract: Due to the prominent development of public transportation systems, taxi flows can nowadays serve as a reasonable proxy for trends in the urban population. Awareness of this knowledge will significantly benefit regular individuals, city planners, and the taxi companies themselves. However, mindlessly publishing such content would severely threaten the private information of taxi companies: both their market shares and the sensitive information of passengers and drivers would be revealed. Consequently, we propose in this paper a novel framework for privacy-preserved traffic sharing among taxi companies, which jointly considers the privacy, profits, and fairness of participants. The framework allows companies to share the scales of their taxi flows, and common knowledge is derived from these statistics. Two algorithms are proposed for deriving sharing schemes in different scenarios, depending on whether the common knowledge can be accessed by third parties such as individuals and governments. Differential privacy is utilized in both cases to preserve the sensitive information of taxi companies. Finally, both algorithms are validated on real-world data traces under multiple market distributions.
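The differential-privacy primitive underlying such schemes is easy to sketch with the standard Laplace mechanism; the per-district flow counts, epsilon value, and the negative-clipping step below are illustrative assumptions, not the paper's sharing algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_counts(true_counts, epsilon, sensitivity=1.0):
    """Release flow counts under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon, then clipping negative
    values for publication."""
    noise = rng.laplace(0.0, sensitivity / epsilon, size=len(true_counts))
    return np.maximum(true_counts + noise, 0.0)

flows = np.array([120.0, 45.0, 300.0])    # hypothetical per-district pickups
released = private_counts(flows, epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier published statistics, which is exactly the privacy/utility trade the abstract's framework has to balance against profits and fairness.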

Journal ArticleDOI
TL;DR: It is shown that the proposed scheme ensures security even if a sensor node is captured by an adversary, and the proposed protocol uses the lightweight cryptographic primitives, such as one way cryptographic hash function, physically unclonable function, and bitwise exclusive operations.
Abstract: Industrial wireless sensor networks (IWSNs) are an emerging class of generalized WSNs with constraints on energy consumption, coverage, connectivity, and security. Security and privacy are among the major challenges in IWSNs, as the nodes are connected to the Internet and are usually located in unattended environments with minimal human intervention. In an IWSN, there is a fundamental requirement for a user to access real-time information directly from designated sensor nodes, a task that demands a user authentication protocol. To satisfy this requirement, this paper proposes a lightweight and privacy-preserving mutual user authentication protocol in which only a user with a trusted device has the right to access the IWSN. The proposed scheme therefore considers the physical-layer security of the sensor nodes, and we show that it ensures security even if a sensor node is captured by an adversary. The protocol uses lightweight cryptographic primitives, such as one-way cryptographic hash functions, physically unclonable functions (PUFs), and bitwise exclusive-OR operations. Security and performance analysis shows that the proposed scheme is secure and efficient for the resource-constrained sensing devices in IWSNs.
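The flavor of a hash-and-XOR authentication round built on a PUF can be sketched as follows. The `SimulatedPUF` class and the message flow are illustrative stand-ins (a real PUF is a hardware circuit, and the paper's protocol has more messages and privacy protections):

```python
import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class SimulatedPUF:
    """Software stand-in for a physically unclonable function: a real PUF
    derives responses from device-unique manufacturing variations."""
    def __init__(self):
        self._secret = os.urandom(32)
    def response(self, challenge: bytes) -> bytes:
        return H(self._secret, challenge)

# Enrollment: the gateway stores one challenge-response pair for the node.
node_puf = SimulatedPUF()
challenge = os.urandom(32)
stored_response = node_puf.response(challenge)

# One authentication round using only hash and XOR:
nonce = os.urandom(32)                      # gateway's fresh nonce
masked = xor(nonce, stored_response)        # gateway -> node: (challenge, masked)
recovered = xor(masked, node_puf.response(challenge))  # node unmasks via its PUF
proof = H(recovered, challenge)             # node -> gateway
authenticated = proof == H(nonce, challenge)

# A cloned node without the genuine PUF cannot forge a valid proof:
clone = SimulatedPUF()
forged = H(xor(masked, clone.response(challenge)), challenge)
attack_detected = forged != H(nonce, challenge)
```

Because the response never leaves the device and nothing heavier than hashing and XOR is computed, this style of round fits resource-constrained sensing hardware.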

Journal ArticleDOI
TL;DR: This work proposes an effective triple-phase adjustment method to produce feasible disassembly sequences based on an AOG graph that is capable of rapidly generating satisfactory Pareto results and outperforms a well-known genetic algorithm.
Abstract: Disassembly sequencing is important for remanufacturing and recycling used or discarded products. AND/OR graphs (AOGs) have been applied to describe practical disassembly problems by using "AND" and "OR" nodes. An AOG-based disassembly sequence planning problem is an NP-hard combinatorial optimization problem, which heuristic evolutionary methods can be adopted to handle. While precedence and "AND" relationship issues can be addressed, exclusive-OR relations are not well handled by the existing heuristic methods, so an ineffective result may be obtained in practice. A conflict matrix is introduced to cope with the exclusive-OR relation in an AOG. By using it together with the precedence and succession matrices of existing work, this work proposes an effective triple-phase adjustment method to produce feasible disassembly sequences based on an AOG. Energy consumption is adopted to evaluate disassembly efficiency. Its use with the traditional economic criterion leads to a novel dual-objective optimization model in which disassembly profit is maximized and disassembly energy consumption is minimized. An improved artificial bee colony algorithm is developed to effectively generate a set of Pareto solutions for this dual-objective disassembly optimization problem. The methodology is applied to the practical disassembly processes of two products to verify its feasibility and effectiveness. The results show that it is capable of rapidly generating satisfactory Pareto results and outperforms a well-known genetic algorithm.
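A feasibility check that combines a precedence matrix with a conflict matrix for exclusive-OR branches might look like the sketch below; the 4-task matrices are invented for illustration and are not taken from the paper's case studies:

```python
import numpy as np

# precedence[i, j] = 1 means task i must be disassembled before task j.
precedence = np.array([[0, 1, 1, 0],
                       [0, 0, 0, 1],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0]])
# conflict[i, j] = 1 means tasks i and j lie on exclusive-OR branches of
# the AND/OR graph and must never appear in the same sequence.
conflict = np.zeros((4, 4), dtype=int)
conflict[1, 2] = conflict[2, 1] = 1      # tasks 1 and 2 are alternatives

def feasible(seq):
    """Test a disassembly sequence against both matrices: reject any pair
    on exclusive-OR branches and any precedence violation."""
    for i, a in enumerate(seq):
        for b in seq[i + 1:]:
            if conflict[a, b]:           # both exclusive branches selected
                return False
            if precedence[b, a]:         # b must precede a, yet comes later
                return False
    return True

ok = feasible([0, 1, 3])              # one XOR branch chosen, order respected
conflict_hit = feasible([0, 1, 2, 3]) # both exclusive branches selected
order_hit = feasible([1, 0, 3])       # task 0 must precede task 1
```

A repair method in the spirit of the triple-phase adjustment would run a check like this on candidate sequences produced by the evolutionary search and fix or discard the infeasible ones.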

Journal ArticleDOI
TL;DR: A deep learning based collaborative filtering framework, namely, deep matrix factorization (DMF), which can integrate any kind of side information effectively and handily, and implicit feedback embedding (IFE) is proposed, which converts the high-dimensional and sparse implicit feedback information into a low-dimensional real-valued vector retaining primary features.
Abstract: Automatic recommendation has become an increasingly important problem for industry, as it allows users to discover new items that match their tastes and enables the system to target items to the right users. In this paper, we propose a deep-learning (DL) based collaborative filtering framework, namely deep matrix factorization (DMF), which can integrate any kind of side information effectively and handily. In DMF, two feature-transforming functions are built to directly generate latent factors of users and items from various input information. For the implicit feedback that is commonly used as input to recommendation algorithms, implicit feedback embedding (IFE) is proposed. IFE converts the high-dimensional and sparse implicit feedback information into a low-dimensional real-valued vector retaining its primary features. Using IFE considerably reduces the number of model parameters and increases model-training efficiency. Experimental results on five public databases indicate that the proposed method outperforms state-of-the-art DL-based recommendation algorithms in both accuracy and training efficiency in terms of quantitative assessments.
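One simple stand-in for the IFE idea, compressing each user's sparse implicit-feedback row into a dense low-dimensional vector, is truncated SVD; the paper learns this mapping inside the network instead, and the feedback matrix below is hypothetical:

```python
import numpy as np

# Hypothetical binary implicit-feedback matrix (users x items), 1 = interaction.
R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 0, 0, 1],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 1, 0]], dtype=float)

def implicit_feedback_embedding(R, dim=2):
    """Project each sparse feedback row onto the top `dim` singular
    directions, yielding a dense real-valued vector per user."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :dim] * s[:dim]

E = implicit_feedback_embedding(R)
# Users 0 and 1 share interacted items, so their embeddings should sit
# closer together than user 0 does to the disjoint-taste user 2.
d01 = np.linalg.norm(E[0] - E[1])
d02 = np.linalg.norm(E[0] - E[2])
```

Replacing a length-5 sparse row with a length-2 dense vector is the same parameter-reduction effect the abstract attributes to IFE, just realized with a fixed linear map rather than a learned one.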

Journal ArticleDOI
TL;DR: This paper proposes a blockchain-based fair nonrepudiation service provisioning scheme for IIoT scenarios in which the blockchain is used as a service publisher and an evidence recorder and an impartial smart contract is implemented to resolve disputes.
Abstract: Emerging network computing technologies extend the functionalities of industrial IoT (IIoT) terminals. However, this promising service-provisioning paradigm encounters problems in untrusted and distributed IIoT scenarios, because malicious service providers or clients may deny service provision or usage for their own interests. Traditional nonrepudiation solutions fall short in IIoT environments due to their requirements for trusted third parties or their unacceptable overheads. Fortunately, the blockchain revolution facilitates innovative solutions. In this paper, we propose a blockchain-based fair nonrepudiation service provisioning scheme for IIoT scenarios in which the blockchain serves as a service publisher and an evidence recorder. Each service is delivered separately via on-chain and off-chain channels with mandatory evidence submissions for nonrepudiation purposes. Moreover, a homomorphic-hash-based service verification method is designed that can function with only on-chain evidence, and an impartial smart contract is implemented to resolve disputes. The security analysis demonstrates the scheme's dependability, and the evaluations reveal its effectiveness and efficiency.
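A multiplicative homomorphic hash illustrates how verification can work from a single on-chain digest: because H(a) * H(b) mod p == H(a + b), the per-chunk hashes of off-chain data can be recombined and compared against one recorded value. The construction and toy parameters below are a generic textbook sketch, not the paper's actual scheme:

```python
# Toy multiplicative homomorphic hash H(x) = g^x mod p.
# Real deployments use large, carefully chosen groups; these parameters
# are illustrative only.
p = 2**61 - 1      # toy prime modulus
g = 3

def hh(x: int) -> int:
    return pow(g, x, p)

chunks = [12345, 67890, 24680]      # off-chain service data (hypothetical)
onchain_digest = hh(sum(chunks))    # single digest recorded on-chain

# Verifier recombines per-chunk hashes without needing the off-chain sum:
recombined = 1
for c in chunks:
    recombined = recombined * hh(c) % p
service_valid = recombined == onchain_digest

# Any off-chain tampering breaks the recombination:
tampered = [12345, 67890, 24681]
recombined_bad = 1
for c in tampered:
    recombined_bad = recombined_bad * hh(c) % p
tamper_detected = recombined_bad != onchain_digest
```

This is why a dispute-resolving smart contract can operate on "mere on-chain evidence": it never needs the bulky off-chain payload, only the hashes.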

Journal ArticleDOI
TL;DR: This paper proposes a lightweight blockchain system called LightChain, which is resource-efficient and suitable for power-constrained IIoT scenarios, and presents a green consensus mechanism named Synergistic Multiple Proof for stimulating the cooperation ofIIoT devices, and a lightweight data structure called LightBlock to streamline broadcast content.
Abstract: While the intersection of blockchain and the Industrial Internet of Things (IIoT) has received considerable research interest lately, the conflict between the high resource requirements of blockchain and the generally limited performance of IIoT devices has not been well tackled. On one hand, due to the introduction of constructs including Public Key Infrastructure, Merkle Hash Trees, and Proof of Work (PoW), deploying blockchain demands huge computing power. On the other hand, full nodes must synchronize massive block data and process numerous transactions in a peer-to-peer network, storage and bandwidth demands that IIoT devices can hardly afford. In this paper, we propose a lightweight blockchain system called LightChain, which is resource-efficient and suitable for power-constrained IIoT scenarios. Specifically, we present a green consensus mechanism named Synergistic Multiple Proof for stimulating the cooperation of IIoT devices, and a lightweight data structure called LightBlock to streamline broadcast content. Furthermore, we design a novel Unrelated Block Offloading Filter to avoid unlimited growth of the ledger without affecting the blockchain's traceability. Extensive experiments demonstrate that LightChain can reduce the individual computational cost to 39.32% and speed up block generation by up to 74.06%. In terms of storage and network usage, the reductions are 43.35% and 90.55%, respectively.
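Of the costly ingredients the abstract names, the Merkle Hash Tree is the easiest to illustrate. The minimal sketch below shows how a single root makes any transaction tampering detectable; the toy transactions are invented, and this is plain Merkle hashing rather than LightChain's LightBlock format:

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Pairwise-hash transaction leaves up to a single root; an unpaired
    node at any level is promoted to the next level unchanged."""
    level = [sha(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = [sha(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
tampered_root = merkle_root([b"tx1", b"txX", b"tx3", b"tx4"])
```

Each block header carries only the 32-byte root, yet altering any leaf changes it, which is the tamper evidence (and the hashing workload) that a lightweight design has to preserve while trimming everything else.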

Journal ArticleDOI
TL;DR: A fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks is explored and a novel model is proposed to depict the users’ willingness of contributing their resources to the public is proposed.
Abstract: In the past decade, network data communication has experienced rapid growth, leading to severe congestion in heterogeneous networks. Moreover, emerging industrial applications, such as automatic driving, put forward higher requirements on both networks and devices. Meanwhile, running computation-intensive industrial applications locally is constrained by the limited resources of devices. Fog computing has recently emerged to reduce the congestion of content-centric networks and has proven effective in industrial and traffic settings for reducing network delay and processing time. In addition, device-to-device offloading is viewed as a promising paradigm for transmitting network data in mobile environments, especially for autodriving vehicles. In this paper, jointly considering both the network traffic and the computation workload of industrial traffic, we explore a fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks. In particular, when the available resources in mobile vehicles become a bottleneck, we propose a novel model to depict users' willingness to contribute their resources to the public. We then formulate a cost minimization problem by exploiting the framework of the Markov decision process (MDP) and propose a dynamic reinforcement learning scheduling algorithm and a deep dynamic scheduling algorithm to solve the offloading decision problem. By adopting different mobility trajectory traces, we conduct extensive simulations to evaluate the performance of the proposed algorithms. The results show that our proposed algorithms outperform other benchmark schemes in mobile edge networks.
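A tabular Q-learning sketch conveys how an MDP-based offloading policy is learned. The queue-length states, cost numbers, and dynamics below are invented for illustration and are far simpler than the paper's vehicular model:

```python
import random
random.seed(0)

# Toy offloading MDP: state = task-queue length 0..4;
# action 0 = compute locally, action 1 = offload.
STATES, ACTIONS = 5, 2

def step(state, action):
    """Deterministic toy dynamics: offloading drains the queue but pays a
    fixed energy cost; local processing is cheap but lets the queue grow."""
    if action == 1:                                      # offload
        return max(state - 2, 0), 3.0 + state            # energy + queued delay
    return min(state + 1, STATES - 1), 1.0 + 2 * state   # local: delay grows

Q = [[0.0] * ACTIONS for _ in range(STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(3000):                   # episodes with random initial states
    s = random.randrange(STATES)
    for _ in range(10):
        if random.random() < eps:       # epsilon-greedy exploration
            a = random.randrange(ACTIONS)
        else:
            a = min(range(ACTIONS), key=lambda x: Q[s][x])
        nxt, cost = step(s, a)
        # Q-learning update, minimizing discounted cumulative cost.
        Q[s][a] += alpha * (cost + gamma * min(Q[nxt]) - Q[s][a])
        s = nxt

policy = [min(range(ACTIONS), key=lambda x: Q[s][x]) for s in range(STATES)]
```

With these costs the learned policy computes locally only when the queue is empty and offloads otherwise, which is the energy/delay tradeoff the abstract describes; the paper's deep variant replaces the Q-table with a neural network over a much larger state space.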

Journal ArticleDOI
TL;DR: This paper investigates the issues of day-ahead and real-time cooperative energy management for multienergy systems formed by many energy bodies and proposes an event-triggered-based distributed algorithm with some desirable features, namely, distributed execution, asynchronous communication, and independent calculation.
Abstract: This paper investigates day-ahead and real-time cooperative energy management for multienergy systems formed by many energy bodies. To address these issues, we propose an event-triggered distributed algorithm with several desirable features, namely distributed execution, asynchronous communication, and independent calculation. First, the energy body, seen as both an energy supplier and a customer, is introduced for system model development. On this basis, energy bodies cooperate with each other to maximize the day-ahead social welfare and to smooth out real-time load variations as well as renewable resource fluctuations, with consideration of the different timescale characteristics of electricity and heat power. To this end, the day-ahead and real-time energy management models are established and formulated as a class of distributed coupled optimization problems by felicitously converting some system coordinates. Such problems can be effectively solved by the proposed algorithm. As a result, each energy body can determine its own optimal operations through only local communication and computation, resulting in enhanced system reliability, scalability, and privacy. Meanwhile, the designed communication strategy is event-triggered, which can dramatically reduce communication among energy bodies. Simulations are provided to illustrate the effectiveness of the proposed models and algorithm.
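The event-triggered communication idea can be sketched with a simple average-consensus loop in which each agent broadcasts only when its state has drifted significantly since its last broadcast. The network, threshold, and update rule below are illustrative assumptions, not the paper's coupled energy-management algorithm:

```python
import numpy as np

n = 4
x = np.array([10.0, 2.0, 6.0, 4.0])    # initial local estimates (illustrative)
xhat = x.copy()                         # values last broadcast to neighbors
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # line communication graph
threshold, step_size = 0.05, 0.2
broadcasts = 0

for _ in range(300):
    # Event trigger: broadcast only when the local state has drifted from
    # the last broadcast value by more than the threshold.
    for i in range(n):
        if abs(x[i] - xhat[i]) > threshold:
            xhat[i] = x[i]
            broadcasts += 1
    # Consensus update uses only broadcast values, never true neighbor states.
    x = x + step_size * np.array(
        [sum(xhat[j] - xhat[i] for j in neighbors[i]) for i in range(n)]
    )
```

Once the agents are close to agreement their states stop crossing the threshold and broadcasts cease, which is the communication saving the abstract claims; the residual disagreement is bounded by the trigger threshold.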

Journal ArticleDOI
Kedi Zheng1, Qixin Chen1, Yi Wang1, Chongqing Kang1, Qing Xia1 
TL;DR: The maximum information coefficient (MIC) can be used to precisely detect thefts that appear normal in shapes and the clustering technique by fast search and find of density peaks (CFSFDP) finds the abnormal users among thousands of load profiles, making it quite suitable for detecting electricity thefts with arbitrary shapes.
Abstract: The two-way flow of information and energy is an important feature of the Energy Internet. Data analytics is a powerful tool in the information flow that aims to solve practical problems using data mining techniques. As electricity theft via tampering with smart meters continues to increase, the abnormal behaviors of thieves become more diversified and more difficult to detect. Thus, a data analytics method for detecting various types of electricity theft is required. However, the existing methods either require a labeled dataset or additional system information, which is difficult to obtain in reality, or have poor detection accuracy. In this paper, we combine two novel data mining techniques to solve the problem. One is the maximum information coefficient (MIC), which can find correlations between the nontechnical loss and certain electricity behaviors of a consumer; MIC can be used to precisely detect thefts whose load profiles appear normal in shape. The other is clustering by fast search and find of density peaks (CFSFDP), which finds abnormal users among thousands of load profiles, making it well suited to detecting electricity thefts with arbitrary load shapes. A framework combining the advantages of the two techniques is then proposed. Numerical experiments on the Irish smart meter dataset are conducted to show the good performance of the combined method.
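The core CFSFDP quantities, local density rho and the distance delta to the nearest point of higher density, can be computed in a few lines; abnormal profiles show low rho and high delta. The 2-D "load profiles" below are invented to show how an isolated theft-like profile stands out, and the cutoff distance is an assumed parameter:

```python
import numpy as np

def cfsfdp_scores(X, dc):
    """For each point: rho = number of neighbors within cutoff dc
    (cutoff kernel), and delta = distance to the nearest point of
    higher density (max distance for the densest point)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1          # exclude self
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta

# Hypothetical daily load profiles compressed to 2-D for illustration:
normal = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.2], [1.2, 1.0]])
thief = np.array([[5.0, 5.0]])              # isolated, abnormal profile
X = np.vstack([normal, thief])
rho, delta = cfsfdp_scores(X, dc=0.5)
suspect = int(np.argmax(delta / (rho + 1)))  # low density, far from any peak
```

In the paper's framework this density-peak view flags arbitrarily shaped anomalies, while MIC separately catches thefts whose profiles still look normal in shape.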