
Showing papers in "IEEE Transactions on Reliability in 2021"


Journal ArticleDOI
TL;DR: This article provides a survey of recent research on fault prognosis and reports on some of the significant application domains where prognosis techniques are employed.
Abstract: Fault diagnosis and prognosis are some of the most crucial functionalities in complex and safety-critical engineering systems, and fault diagnosis in particular has been a subject of intensive research over the past four decades. Such capabilities allow for detection and isolation of early developing faults as well as prediction of fault propagation, which enables preventive maintenance or even serves as a countermeasure against the possibility of a catastrophic incident resulting from a failure. Following a short preliminary overview and definitions, this article provides a survey of recent research on fault prognosis. Additionally, we report on some of the significant application domains where prognosis techniques are employed. Finally, some potential directions for future research are outlined.

194 citations


Journal ArticleDOI
TL;DR: This article performs a comprehensive review of the TL algorithms used in different wireless communication fields, such as base stations/access points switching, indoor wireless localization and intrusion detection in wireless networks, etc.
Abstract: In the coming 6G communications, network densification, high throughput, positioning accuracy, energy efficiency, and many other key performance indicator requirements are becoming increasingly strict. How to improve work efficiency while saving costs is one of the foremost research directions in future wireless communications, and being able to learn from experience is an important way to approach this vision. Transfer learning (TL) encourages new tasks/domains to learn from experienced tasks/domains, helping new tasks become faster and more efficient. TL can help save energy and improve efficiency by exploiting the correlation and similarity between different tasks in many fields of wireless communications. Therefore, applying TL to future 6G communications is a very valuable topic, and TL has already achieved promising results in wireless communications. To advance the application of TL in 6G communications, this article performs a comprehensive review of the TL algorithms used in different wireless communication fields, such as base station/access point switching, indoor wireless localization, and intrusion detection in wireless networks. Moreover, the future research directions of the mutual relationship between TL and 6G communications are discussed in detail, and the challenges and open issues of integrating TL into 6G are presented at the end. This article is intended to help readers understand the past, present, and future of TL in wireless communications.

131 citations


Journal ArticleDOI
Yun Lin, Haojun Zhao, Xuefei Ma, Ya Tu, Meiyu Wang
TL;DR: The results indicate that the accuracy of the target model is reduced significantly by adversarial attacks when the perturbation factor is 0.001, and that iterative methods show greater attack performance than one-step methods.
Abstract: Deep learning (DL) models are vulnerable to adversarial attacks: by adding a subtle perturbation that is imperceptible to the human eye, an attacker can lead a convolutional neural network (CNN) to erroneous results, which greatly reduces the reliability and security of DL tasks. Considering the wide application of modulation recognition in the communication field and the rapid development of DL, this article adds well-designed adversarial perturbations to input signals, explores the performance of attack methods on modulation recognition, measures the effectiveness of adversarial attacks on signals, and provides an empirical evaluation of the reliability of CNNs. The results indicate that the accuracy of the target model is reduced significantly by adversarial attacks; when the perturbation factor is 0.001, the accuracy of the model drops by about 50% on average. Among the attacks, iterative methods show greater attack performance than one-step methods. In addition, the consistency of the waveform before and after the perturbation is examined to confirm that the added adversarial examples are small enough (i.e., hard to distinguish by human eyes). This article also aims to inspire researchers to further improve the reliability of CNNs against adversarial attacks.

89 citations
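For intuition, the one-step and iterative attacks compared above can be sketched as gradient-sign perturbations. This is a minimal NumPy illustration, not the paper's setup: the fixed `grad` vector stands in for a real model's loss gradient, and the parameter values are hypothetical.

```python
import numpy as np

def fgsm_perturb(signal, grad, eps):
    """One-step attack: shift the input along the sign of the loss gradient."""
    return signal + eps * np.sign(grad)

def iterative_attack(signal, grad_fn, eps, steps):
    """Iterative variant: repeat small sign steps, clipping the cumulative
    perturbation so it stays within the eps-ball around the original signal."""
    x = signal.copy()
    alpha = eps / steps
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, signal - eps, signal + eps)
    return x

# Toy demo: a fixed "gradient" stands in for the model's true gradient.
signal = np.zeros(8)
grad = np.linspace(-1.0, 1.0, 8)
adv = iterative_attack(signal, lambda x: grad, eps=0.001, steps=5)
```

The perturbation stays bounded by the factor 0.001 used in the paper's evaluation; the attack's strength comes from aligning it with the model's gradient rather than from its magnitude.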


Journal ArticleDOI
TL;DR: A novel hybrid learning algorithm for training radial basis function network, which integrates the clustering learning algorithm and the orthogonal least squares learning algorithm, is proposed in this article.
Abstract: With the wide application of industrial robots in the field of precision machining, reliability analysis of positioning accuracy becomes increasingly important for industrial robots. Since the industrial robot is a complex nonlinear system, the traditional approximate reliability methods often produce unreliable results in analyzing its positioning accuracy. In order to study the positioning accuracy reliability of industrial robots more efficiently and accurately, a radial basis function network is used to construct the mapping relationship between the uncertain parameters and the position coordinates of the end-effector. Combined with the Monte Carlo simulation method, the positioning accuracy reliability is then evaluated. A novel hybrid learning algorithm for training the radial basis function network, which integrates the clustering learning algorithm and the orthogonal least squares learning algorithm, is proposed in this article. Examples are presented to illustrate the high efficiency and reliability of the proposed method.

75 citations
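The evaluation pipeline described above — a surrogate model plus Monte Carlo simulation — can be sketched in a few lines. Here a simple analytic function stands in for the trained radial basis function network, and the error model, threshold, and parameter distributions are all hypothetical.

```python
import numpy as np

def positioning_error(theta):
    """Hypothetical surrogate (stand-in for the trained RBF network):
    maps uncertain joint parameters to an end-effector position error."""
    return np.abs(0.5 * theta[:, 0] + 0.3 * theta[:, 1] ** 2)

def mc_reliability(surrogate, n_samples, threshold, rng):
    """Reliability = P(positioning error <= threshold), estimated by
    sampling the uncertain parameters and evaluating the surrogate."""
    theta = rng.normal(0.0, 0.1, size=(n_samples, 2))
    errors = surrogate(theta)
    return float(np.mean(errors <= threshold))

rng = np.random.default_rng(0)
R = mc_reliability(positioning_error, 100_000, threshold=0.1, rng=rng)
```

The point of the surrogate is that each evaluation is cheap, so the Monte Carlo sample size can be large enough to make the reliability estimate stable.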


Journal ArticleDOI
TL;DR: A task merging strategy based on mobile program component call graphs to minimize the computational complexity of the program partition is proposed, and a reliable shadow component scheme between multilevel servers is designed for the reliability problem.
Abstract: Mobile edge computing systems provide cloud computing capabilities at the edge of wireless mobile networks, ensuring low latency, highly efficient computing, and improved user experience. At the same time, computationally intensive components are offloaded from mobile devices to edge servers and distributed among the servers. Due to the special constraints (mobile devices’ battery capacities, the limited computing resources of a single edge server, inevitable edge server failures, etc.), the following problems emerge. 1) How to guarantee the reliability of the offloaded computing? This problem brings in two further problems. 2) How to find the appropriate offloading point in the mobile program such that the computing tasks offloaded to the cloud are maximized while the transmission energy consumption is minimized? 3) What is the minimum-latency task allocation strategy achievable among multiple users’ mobile devices and multiple edge servers? In this paper, we address these problems. First, for the offloading point problem, we consider the basic constraint that offloading must be worthwhile and propose a task merging strategy based on mobile program component call graphs to minimize the computational complexity of the program partition. Second, we formulate the second problem as a combinatorial optimization problem and transform it into an n-fold integer programming problem by mapping the remaining computing resources to a virtual component. Third, we design a reliable shadow component scheme between multilevel servers for the reliability problem. Finally, we develop a fast algorithm for the mixed problem, analyze its performance, and conduct experiments to confirm the accuracy of our theoretical results.

62 citations


Journal ArticleDOI
TL;DR: This article proposes FCCA, a deep-learning-based code clone detection approach built on a hybrid code representation that preserves multiple code features, including unstructured and structured code information, and is equipped with an attention mechanism that focuses on the important code parts and features contributing to the final detection accuracy.
Abstract: Code cloning, which reuses a fragment of source code via copy-and-paste with or without modifications, is a common way for code reuse and software prototyping. However, the duplicated code fragments often affect software quality, resulting in high maintenance cost. The existing clone detectors using shallow textual or syntactical features to identify code similarity are still ineffective in accurately finding sophisticated functional code clones in real-world code bases. This article proposes functional code clone detector using attention ( FCCA ), a deep-learning-based code clone detection approach on top of a hybrid code representation by preserving multiple code features, including unstructured (code in the form of sequential tokens) and structured (code in the form of abstract syntax trees and control-flow graphs) information. Multiple code features are fused into a hybrid representation, which is equipped with an attention mechanism that pays attention to important code parts and features that contribute to the final detection accuracy. We have implemented and evaluated FCCA using 275 777 real-world code clone pairs written in Java. The experimental results show that FCCA outperforms several state-of-the-art approaches for detecting functional code clones in terms of accuracy, recall, and F1 score.

50 citations


Journal ArticleDOI
TL;DR: A state-based maintenance policy with multifunctional maintenance windows to minimize the cost rate via the joint optimization of inspection interval, postponed interval, and opportunistic threshold is introduced.
Abstract: Industrial assets exposed to random environments often exhibit complex deterioration mechanisms with health status variations. In actual field operation, hidden defect signals are usually crucial indicators of upcoming malfunctions and reminders to execute proactive maintenance. Despite the extensive applications of defect-centered maintenance, few attempts in the literature have: a) captured the impact of random environments on health variation and restoration, or b) explored the differentiated functions of maintenance windows in separate states. This article addresses these challenges by introducing a state-based maintenance policy with multifunctional maintenance windows. The impact of environmental disturbance on both defect initialization and propagation is characterized by random increments of the state transition rate as well as probabilistic malfunction risk. Three types of maintenance windows (regular, opportunistic, and postponed) are scheduled to ensure flexible scheduling of inspection and spare part resources. Importantly, the function of the opportunistic window is state-based: defect identification when the system is normal and defect removal when it is defective. The objective is to minimize the cost rate via the joint optimization of the inspection interval, the postponed interval, and the opportunistic threshold. Experimental studies demonstrate the superior performance of this policy over several conventional policies.

49 citations


Journal ArticleDOI
TL;DR: A blockchain-assisted secure data sharing (BSDS) model is introduced that is responsible for administering inbound and outbound security in data acquisition and dissemination and capable of maximizing the response rate by confining false alarm progression, failure rate, and time delay.
Abstract: The Industrial Internet of Things aims to improve the performance of smart factories through automation and scalable functions. The IoT paradigm, information and communication technology, and intelligent computing are assimilated as a single entity for industrial automation, optimization, sharing and security, and scalability. In view of the security requirements of smart industry data sharing through IoT, this article introduces a blockchain-assisted secure data sharing (BSDS) model. This model is responsible for administering inbound and outbound security in data acquisition and dissemination. The inbound acquisition is first classified using recurrent learning to identify adverse sequences in data dissemination. In the outbound security measure, end-to-end authentication based on the blockchain information of reputation and sequence differentiation is engaged. The blockchain paradigm controls the data gathering and dissemination instances through classification and integrity verification in both the industry and processing terminals. For this purpose, the functions of the blockchain are divided between data gathering and monitoring in the smart industry, whereas integrity and sequence verification are performed by the nonmining blockchain terminal in the processing environment. The integrated security measures are capable of maximizing the response rate by confining false alarm progression, failure rate, and time delay. Statistical analysis shows that the BSDS achieves a 5.67% higher response rate and reduces the failure rate by 2.14%. Further, it achieves a 3.12% improvement, increases the response rate by 6.63%, and reduces delay by 11.91%.

48 citations


Journal ArticleDOI
TL;DR: It is shown that neglecting either the effects of dynamic environments or the correlation among component lifetimes would underestimate the reliability of series systems and overestimateThe reliability of parallel systems.
Abstract: The working conditions of multicomponent systems are usually dynamic and stochastic. Reliability evaluation of such systems is challenging, since the components are generally positively correlated. Based on the cumulative exposure principle, we model the effects of the dynamic environments on the component lifetimes by a common stochastic time scale, and an exponential dispersion process is utilized to describe the stochastic time scale. Then, the component lifetimes are shown to be positively quadrant dependent, and the joint survival function of the component lifetimes is derived, which includes the results of [1] as special cases. In this article, we show that neglecting either the effects of dynamic environments or the correlation among component lifetimes would underestimate the reliability of series systems and overestimate the reliability of parallel systems. We also investigate the problem of parameter redundancy of the model and give some suggestions for data analysis. Simulation studies show that the unified model is flexible and useful for suggesting an optimal model given observed data.

47 citations
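The article's qualitative conclusion — that ignoring positive dependence underestimates series reliability and overestimates parallel reliability — can be reproduced numerically. The sketch below uses hypothetical lognormal lifetimes driven by a shared environment factor (not the article's exponential dispersion model) and compares them against an independent copy with identical marginals, obtained by permuting one margin.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
z = rng.standard_normal(n)                       # shared environment effect
t1 = np.exp(z + 0.5 * rng.standard_normal(n))    # positively correlated lifetimes
t2 = np.exp(z + 0.5 * rng.standard_normal(n))
t2_ind = rng.permutation(t2)                     # same marginal, dependence removed

t = 1.0  # mission time
series_dep = float(np.mean(np.minimum(t1, t2) > t))        # 2-component series
series_ind = float(np.mean(np.minimum(t1, t2_ind) > t))
parallel_dep = float(np.mean(np.maximum(t1, t2) > t))      # 2-component parallel
parallel_ind = float(np.mean(np.maximum(t1, t2_ind) > t))
```

With the shared factor present, `series_dep` exceeds `series_ind` and `parallel_dep` falls below `parallel_ind`, matching the article's direction of bias.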


Journal ArticleDOI
TL;DR: A comparative analysis of the proposed encryption method with the Catalan numbers and data encryption standard (DES) algorithm, which is performed with machine learning-based identification of the encryption method using ciphertext only, showed that it was much more difficult to recognize ciphertext generated with theCatalan method than one made with the DES algorithm.
Abstract: This article presents a novel data encryption technique suitable for Internet of Things (IoT) applications. The cryptosystem is based on the application of a Catalan object (as a cryptographic key) that provides encryption based on combinatorial structures with noncrossing or nonnested matching. The experimental part of this article includes a comparative analysis of the proposed Catalan-number encryption method against the data encryption standard (DES) algorithm, performed with machine learning-based identification of the encryption method using ciphertext only. These tests showed that it is much more difficult to recognize ciphertext generated with the Catalan method than ciphertext generated with the DES algorithm. System reliability depends on the quality of the key; therefore, statistical testing proposed by the National Institute of Standards and Technology was also performed. Twelve standard tests, the approximate entropy measurement, and random digression complexity analysis are applied in order to evaluate the quality of the generated Catalan key. A proposal for applying this method in e-Health IoT is also given, and possibilities of applying this method in IoT applications for smart city data storage and processing are provided.

42 citations
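One of the NIST statistics mentioned above, approximate entropy, compares the empirical frequencies of overlapping m-bit and (m+1)-bit patterns; values near ln 2 ≈ 0.693 indicate a random-looking binary key. The sketch below is a simplified illustration, not the full NIST SP 800-22 test, which additionally derives a chi-square statistic and a p-value.

```python
import math

def approximate_entropy(bits, m):
    """Approximate entropy of a bit string: phi(m) - phi(m + 1), where phi
    aggregates log-frequencies of overlapping patterns on the cyclically
    extended sequence. 0 for a constant string, near ln(2) for random bits."""
    n = len(bits)

    def phi(block_len):
        ext = bits + bits[:block_len - 1]   # cyclic extension
        counts = {}
        for i in range(n):
            pat = ext[i:i + block_len]
            counts[pat] = counts.get(pat, 0) + 1
        return sum(c / n * math.log(c / n) for c in counts.values())

    return phi(m) - phi(m + 1)
```

A degenerate key such as an all-zero string scores 0, while a well-generated key should score close to the ln 2 ceiling.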


Journal ArticleDOI
TL;DR: A novel data fusion method based on deep learning for HI construction for prognostic analysis with a significant improvement on remaining useful life prediction compared to existing data fusion methods is proposed.
Abstract: Degradation modeling is a critical and challenging problem as it serves as the basis for system prognostics and evolution mechanism analysis. In practice, multiple sensors are used to monitor the status of a system. Thus, multisensor data fusion techniques have been proposed to capture comprehensive information for prognostic modeling and analysis, which aims at developing a composite health index (HI) through the fusion of multiple sensor signals. In the literature, most existing methods use a linear data-fusion model for integration of multisensor data to construct the HI, which is insufficient to model nonlinear relations between sensing signals and HI in a complicated system. This article proposes a novel data fusion method based on deep learning for HI construction for prognostic analysis. A pair of adversarial networks is proposed to enable the training procedure of neural networks. To guarantee the stability of the algorithm, we propose a root mean square propagation (i.e., RMSprop)-based sampling algorithm to estimate model parameters. A set of simulation studies and a case study on a set of degradation signals of aircraft engines are conducted. The results demonstrate that the proposed method has a significant improvement on remaining useful life prediction compared to existing data fusion methods.

Journal ArticleDOI
TL;DR: A defect prediction method based on gated hierarchical long short-term memory networks (GH-LSTMs) is proposed, which uses hierarchical LSTM networks to extract both semantic features from word embeddings of abstract syntax trees of source code files, and traditional features provided by the PROMISE repository.
Abstract: Software defect prediction, aimed at assisting software practitioners in allocating test resources more efficiently, predicts the potential defective modules in software products. With the development of defect prediction technology, the inability of traditional software features to capture semantic information is exposed, hence related researchers have turned to semantic features to build defect prediction models. However, sometimes traditional features such as lines of code (LOC) also play an important role in defect prediction. Most of the existing researches only focus on using a single type of feature as the input of the model. In this article, a defect prediction method based on gated hierarchical long short-term memory networks (GH-LSTMs) is proposed, which uses hierarchical LSTM networks to extract both semantic features from word embeddings of abstract syntax trees (ASTs) of source code files, and traditional features provided by the PROMISE repository. More importantly, we adopt a gated fusion strategy to combine the outputs of the hierarchical networks properly. Experimental results show that GH-LSTMs outperforms existing methods under both noneffort-aware and effort-aware scenarios.
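The gated fusion step can be pictured as a learned sigmoid gate that blends the two feature branches per dimension. Below is a minimal NumPy sketch with hypothetical, untrained weights; in GH-LSTMs the gate would be learned jointly with the LSTM branches.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(sem, trad, W_s, W_t, b):
    """Blend a semantic feature vector and a traditional-metric vector:
    a sigmoid gate decides, per dimension, how much of each branch to keep.
    The weights here are hypothetical stand-ins for learned parameters."""
    g = sigmoid(W_s @ sem + W_t @ trad + b)
    return g * sem + (1.0 - g) * trad

rng = np.random.default_rng(0)
sem = rng.standard_normal(4)    # e.g., pooled AST-based semantic features
trad = rng.standard_normal(4)   # e.g., normalized LOC-style metrics
W_s, W_t = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
b = rng.standard_normal(4)
fused = gated_fusion(sem, trad, W_s, W_t, b)
```

Because the gate is a per-dimension convex combination, the fused vector always lies between the two branches elementwise, which is what lets the model fall back on traditional metrics where semantic features are uninformative.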

Journal ArticleDOI
TL;DR: This work uses a data stream of 5 million app packages to reconstruct versioned lineages of Android apps, and applies state-of-the-art vulnerability-finding tools and investigates systematically the reports produced by each tool, study which types of vulnerabilities are found, how they are introduced in the app code, where they are located, and whether they foreshadow malware.
Abstract: The Android ecosystem today is a growing universe of a few billion devices, hundreds of millions of users, and millions of applications targeting a wide range of activities where sensitive information is collected and processed. Security of communication and privacy of data are thus of utmost importance in application development. Yet, regularly, there are reports of successful attacks targeting Android users. While some of those attacks exploit vulnerabilities in the Android operating system, others directly concern application-level code written by a large pool of developers with varying experience. Recently, a number of studies have investigated this phenomenon, focusing, however, only on a specific vulnerability type appearing in apps, and based on only a snapshot of the situation at a given time. Thus, the community is still lacking comprehensive studies exploring how vulnerabilities have evolved over time, and how they evolve in a single app across developer updates. Our work fills this gap by leveraging a data stream of 5 million app packages to reconstruct versioned lineages of Android apps, finally obtaining 28 564 app lineages (i.e., successive releases of the same Android apps) with more than ten app versions each, corresponding to a total of 465 037 apks. Based on these app lineages, we apply state-of-the-art vulnerability-finding tools and systematically investigate the reports produced by each tool. In particular, we study which types of vulnerabilities are found, how they are introduced in the app code, where they are located, and whether they foreshadow malware. We provide insights based on the quantitative data reported by the tools, but we further discuss the potential false positives. Our findings and study artifacts constitute tangible knowledge for the community. It could be leveraged by developers to focus verification tasks, and by researchers to drive vulnerability discovery and repair research efforts.

Journal ArticleDOI
TL;DR: This article investigates optimal recovery strategy of components for maximizing the resilience of the cyber–physical power system (CPPS), where a component represents a unique node or branch, such as generating stations and communication transmission lines.
Abstract: This article investigates optimal recovery strategy of components for maximizing the resilience of the cyber–physical power system (CPPS), where a component represents a unique node or branch, such as generating stations and communication transmission lines. The proposed optimization model is built as a multimode resource-constrained project scheduling problem to incorporate system resilience, cascading failures of the CPPS, the diversity of recovery resources, execution modes of recovery activities, precedence of damaged components, as well as the availability, cost, and time of recovery resources. The failure propagation mechanisms are characterized by a cascading failure model, which is further embedded in the optimization model to quantify the system real-time performance during the recovery process, and determine whether repaired components can be reconnected to the system. The system resilience is quantified using a proposed time-dependent annual composite resilience metric based on a compound Poisson process. The proposed optimization model is solved using a modified simulated annealing algorithm. The system that couples the IEEE 30-bus model and a small-world communication network is used as a testbed to demonstrate the feasibility and effectiveness of the proposed modeling approach. Comparisons with existing optimization models in the literature show the superiority of the proposed model.
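The modified simulated annealing used to solve the recovery-scheduling model builds on the standard SA skeleton: accept cost-increasing moves with probability exp(−Δ/T) and cool T over time. The generic sketch below runs on a toy one-dimensional problem; the neighborhood and cooling schedule are illustrative, not the paper's.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0, cooling, n_iter, rng):
    """Generic SA skeleton: always accept improving moves, accept worsening
    moves with probability exp(-delta / T), and cool T geometrically."""
    x, best = x0, x0
    t = t0
    for _ in range(n_iter):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Toy demo: minimize (x - 3)^2 over the integers, starting far from the optimum.
rng = random.Random(0)
best = simulated_annealing(lambda x: (x - 3) ** 2,
                           lambda x, r: x + r.choice([-1, 1]),
                           x0=20, t0=10.0, cooling=0.95, n_iter=500, rng=rng)
```

In the paper's setting, the state would be a recovery schedule (activity order and execution modes) and the cost the negative of the composite resilience metric, with neighborhood moves that swap or remode activities.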

Journal ArticleDOI
Xu Jiaxi1, Fei Wang1, Jun Ai1
TL;DR: This article proposes a practical approach for identifying software defect patterns via the combination of semantics and context information using abstract syntax tree representation learning and shows that the proposed approach performs better than the state-of-the-art approach and five traditional machine learning baselines.
Abstract: To optimize the process of software testing and to improve software quality and reliability, many attempts have been made to develop more effective methods for predicting software defects. Previous work on defect prediction has used machine learning and artificial software metrics. Unfortunately, artificial metrics are unable to represent the features of syntactic, semantic, and context information of defective modules. In this article, therefore, we propose a practical approach for identifying software defect patterns via the combination of semantics and context information using abstract syntax tree representation learning. Graph neural networks are also leveraged to capture the latent defect information of defective subtrees, which are pruned based on a fix-inducing change. To validate the proposed approach for predicting defects, we define mining rules based on the GitHub workflow and collect 6052 defects from 307 projects. The experiments indicate that the proposed approach performs better than the state-of-the-art approach and five traditional machine learning baselines. An ablation study shows that the information about code concepts leads to a significant increase in accuracy.

Journal ArticleDOI
TL;DR: This article proposes a novel FMEA approach using hesitant uncertain linguistic Z numbers (HULZNs) and density-based spatial clustering of applications with noise (DBSCAN) algorithm to assess and cluster the risk of failure modes.
Abstract: Failure mode and effect analysis (FMEA), as a proactive reliability management technique, has been widely employed to reduce the risk of systems and assure the quality of products in various industries. Nevertheless, the traditional FMEA method shows many weaknesses when used in real-life scenarios, leaving practitioners with the dilemma that it does not perform as well as expected. Many alternative risk priority models have been developed to enhance the capacity of FMEA, but the majority of them focus on how to acquire a complete ranking of failure modes. In this article, we propose a novel FMEA approach using hesitant uncertain linguistic Z numbers (HULZNs) and the density-based spatial clustering of applications with noise (DBSCAN) algorithm to assess and cluster the risk of failure modes. The HULZNs are adopted to represent the uncertain and hesitant risk evaluation information of FMEA team members. The standard DBSCAN algorithm is improved to cluster the recognized failure modes into different risk classes. Moreover, the weights of FMEA experts are dynamically obtained with a weight adjustment method. Finally, a practical case of a geothermal power plant is given to demonstrate the effectiveness and validity of the introduced FMEA approach.
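For readers unfamiliar with the clustering step, plain DBSCAN groups points that have at least `min_pts` neighbors within radius `eps` and marks isolated points as noise. Below is a minimal one-dimensional sketch over hypothetical risk scores; the article improves on this basic version and works with HULZN-based distances rather than scalar scores.

```python
def dbscan_1d(scores, eps, min_pts):
    """Minimal DBSCAN over 1-D risk scores. Returns one label per point:
    -1 = noise, otherwise a cluster id (risk class)."""
    n = len(scores)
    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(n) if abs(scores[j] - scores[i]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1              # noise (may later become a border point)
            continue
        labels[i] = cluster             # i is a core point: start a cluster
        queue = list(neighbors)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # reclassify noise as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(n) if abs(scores[k] - scores[j]) <= eps]
            if len(j_neighbors) >= min_pts:
                queue.extend(j_neighbors)   # expand only from core points
        cluster += 1
    return labels
```

For FMEA, each cluster corresponds to a risk class of failure modes, and the noise label flags failure modes whose risk profile matches no group.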

Journal ArticleDOI
TL;DR: The coverage reliability of wireless sensor networks is evaluated from a new perspective: the concept of the belief degree of a sensing result is defined, a belief-degree coverage model is established, and three indexes are defined to evaluate the reliability of this coverage model.
Abstract: In this article, the coverage reliability of wireless sensor networks is evaluated from a new perspective. Based on the D–S evidence theory, we define the concept of the belief degree of a sensing result, establish a belief-degree coverage model considering common cause failures, and define three indexes to evaluate the reliability of this coverage model. In order to calculate the belief degree of a sensor's sensing result under common cause failures, we use a membership function to obtain the basic probability assignment (BPA) of sensing results and derive the calculation formula for the BPA of interference sources. Two algorithms to calculate the three reliability indexes are proposed. The first is a perimeter coverage algorithm considering interference, whose complexity is better than that of comparable existing algorithms. The second is an algorithm based on Monte Carlo simulation, which is easy to implement. We use the second algorithm to evaluate the coverage reliability and analyze the influence of parameters (the target's attribute value, the interference factor, thresholds, and so on) on coverage reliability.

Journal ArticleDOI
TL;DR: A data quality evaluation framework that includes quality criteria and their corresponding evaluation approaches is proposed to address the data quality issue and build a high-performance medical concept normalization system.
Abstract: Poor data quality has a direct impact on the performance of the machine learning system that is built on the data. As a demonstrated effective approach for data quality improvement, transfer learning has been widely used to improve machine learning quality. However, the “quality improvement” brought by transfer learning was rarely rigorously validated, and some of the quality improvement results were misleading. This article first exposed the hidden quality problem in the datasets used to build a machine learning system for normalizing medical concepts in social media text. The system was claimed to have achieved the best performance compared to existing work on a machine learning task. However, the results of our experiments showed that the “best performance” was due to the poor quality of the datasets and the defective validation process. To address the data quality issue and build a high-performance medical concept normalization system, we developed a transfer-learning-based strategy for data quality enhancement and system performance improvement. The results of the experiments showed a strong correlation between the quality of the datasets and the performance of the machine learning system. The results also demonstrated that a rigorous evaluation of data quality is necessary for guiding the quality improvement of machine learning. Therefore, we propose a data quality evaluation framework that includes the quality criteria and their corresponding evaluation approaches. The data validation process, the performance improvement strategy, and the data quality evaluation framework discussed in this article can be used for machine learning researchers and practitioners to build high-performance machine learning systems. The code and datasets used in this research are available in GitHub ( https://github.com/haihua0913/dataEvaluationML ).

Journal ArticleDOI
TL;DR: The proposed distributed algorithm restores k-connectivity in heterogeneous WSANs, where nodes can be static or mobile, with up to 35.5% lower sent bytes and up to 40.9% lower movement cost than existing algorithms.
Abstract: Connectivity maintenance is an important requirement in wireless sensor and actuator networks (WSANs) because node failures can, potentially, lead to destructive changes in the network topology, which, in turn, can create a partitioned network. Preserving $k$ -connectivity in a WSAN is important for keeping stable connections. A $k$ -connected network is a network that remains connected after removing any $k$ -1 nodes. Higher $k$ values provide more reliable connectivity and a higher level of fault tolerance. In this article, we present a distributed $k$ -connectivity restoration approach for heterogeneous WSANs where the nodes can be static or mobile. In the proposed algorithm, each node identifies the mobile nodes in the network and its 2-hop local subgraph. After a node is incapacitated, a neighbor of the failed node calls a mobile node with minimum moving cost to the location of the failed node if the failure reduces $k$ . A minimum cost movement path between a neighbor of the failed node and a mobile node is constructed by considering the locations of the nodes, moving costs, and obstacles. Testbed experiments and comprehensive simulations reveal that the proposed distributed algorithm is capable of restoring $k$ -connectivity with up to 35.5% lower sent Bytes and up to 40.9% lower movement cost than the existing algorithms.
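The k-connectivity property being restored can be checked directly from the definition in the abstract: a network is k-connected if it stays connected after removing any k−1 nodes. The brute-force sketch below is exponential in k and meant only for illustration; practical WSAN algorithms, like the one in the article, rely on local 2-hop subgraph information instead.

```python
from itertools import combinations

def is_connected(nodes, edges, removed=frozenset()):
    """DFS connectivity check on the subgraph induced by nodes \\ removed."""
    active = [v for v in nodes if v not in removed]
    if not active:
        return False
    adj = {v: set() for v in active}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {active[0]}, [active[0]]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(active)

def is_k_connected(nodes, edges, k):
    """Brute force over all (k-1)-subsets of nodes: the network must stay
    connected after every possible removal. Exponential in k."""
    if len(nodes) <= k:
        return False
    return all(is_connected(nodes, edges, frozenset(rm))
               for rm in combinations(nodes, k - 1))
```

A 4-node cycle, for instance, is 2-connected (any single node can fail) but not 3-connected, since removing two opposite nodes splits it.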

Journal ArticleDOI
TL;DR: This article proposes a condition-based maintenance policy for systems that are subject to both degradation-induced soft failure and sudden hard failure, where a higher degradation level leads to a higher hazard rate of the hard failure.
Abstract: Most systems can fail in multiple ways, and the failure modes are usually positively correlated. This phenomenon complicates the reliability analysis and makes the corresponding maintenance planning challenging. This article proposes a condition-based maintenance policy for systems that are subject to both degradation-induced soft failure and sudden hard failure, where a higher degradation level leads to a higher hazard rate of the hard failure. The Wiener process is adopted for the degradation process, and the Weibull model is used to describe the baseline hazard rate of the hard failure. The degradation level is then treated as a time-varying covariate that affects the hazard rate of the hard failure, and the closed-form of the reliability function is derived by using the Brownian bridge theory. An inspection/replacement maintenance policy is employed, and the long-run cost rate is formulated based on the semiregenerative property of the system state. The optimal inspection interval and the preventive replacement threshold are then jointly determined by minimizing the long-run cost rate. A numerical study on a hydraulic sliding spool system is conducted to validate the derived reliability function and the maintenance policy.
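The coupling between the two failure modes can be illustrated by Monte Carlo simulation: a Wiener degradation path, a soft failure when the path crosses a threshold, and a hard failure whose Weibull baseline hazard grows with the current degradation level. This is a rough sketch only; the exponential covariate link exp(gamma·x) and every parameter value below are illustrative assumptions, and the paper derives a closed-form reliability function rather than simulating:

```python
import math, random

def simulate_reliability(T, n_paths=2000, dt=0.01,
                         mu=0.5, sigma=0.2,   # Wiener drift / diffusion (assumed)
                         D=2.0,               # soft-failure threshold (assumed)
                         beta=1.5, eta=5.0,   # Weibull baseline hazard (assumed)
                         gamma=0.8,           # covariate effect (assumed)
                         seed=42):
    """Monte Carlo estimate of R(T): the probability that the degradation
    path neither crosses D (soft failure) nor triggers the hard failure,
    whose hazard is the Weibull baseline scaled by exp(gamma * x)."""
    rng = random.Random(seed)
    survived, steps = 0, int(T / dt)
    for _ in range(n_paths):
        x, t, alive = 0.0, 0.0, True
        for _ in range(steps):
            t += dt
            # Euler step of the Wiener degradation process
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
            if x >= D:  # soft failure: threshold crossing
                alive = False
                break
            # degradation-dependent hard-failure hazard
            lam = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(gamma * x)
            if rng.random() < 1.0 - math.exp(-lam * dt):  # hard failure
                alive = False
                break
        survived += alive
    return survived / n_paths

print(simulate_reliability(T=2.0))
```

A higher degradation level inflates the hazard at every step, which is exactly the positive correlation between the two failure modes that complicates the analysis.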

Journal ArticleDOI
TL;DR: A reliable and efficient traffic monitoring system is proposed that effectively integrates blockchain and Internet of Vehicles technologies; it crowdsources traffic-information-collection tasks to vehicles on the road instead of installing cameras at every corner.
Abstract: Real-time traffic monitoring is a fundamental mission in a smart city to understand traffic conditions and avoid dangerous accidents. In this article, we propose a reliable and efficient traffic monitoring system that integrates blockchain and the Internet of Vehicles technologies effectively. It can crowdsource its tasks of traffic information collection to vehicles that run on the road instead of installing cameras in every corner. First, we design a lightweight blockchain-based information trading framework to model the interactions between traffic administration and vehicles. It guarantees reliability, efficiency, and security during trade execution. Second, we define the utility functions for the entities in this system and propose a budgeted auction mechanism that motivates vehicles to undertake the collection tasks actively. Our algorithm not only ensures that the total payment to the selected vehicles does not exceed a given budget but also maintains the truthfulness of the auction process, preventing vehicles from offering unreal bids to gain greater utility. Finally, we conduct a group of numerical simulations to evaluate the reliability of our trading framework and the performance of our algorithms; the results demonstrate their correctness and efficiency.
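A budgeted auction must select workers so that total payments stay within the budget while keeping bidding truthful. The sketch below implements only the allocation step of a textbook Singer-style proportional-share rule (payments omitted); the vehicle bids are hypothetical and the paper's actual mechanism may differ:

```python
def budgeted_greedy_allocation(bids, budget):
    """Singer-style proportional-share allocation (a textbook
    budget-feasible rule, shown here only to illustrate the idea).
    Each bid is (vehicle_id, value, cost). Vehicles are ranked by
    cost per unit value, and vehicle i is kept while its cost stays
    within its proportional share of the budget."""
    ranked = sorted(bids, key=lambda b: b[2] / b[1])  # cost per value
    selected, total_value = [], 0.0
    for vid, value, cost in ranked:
        if cost <= budget * value / (total_value + value):
            selected.append(vid)
            total_value += value
        else:
            break
    return selected

# Hypothetical bids: (id, value of its traffic data, claimed cost).
bids = [("v1", 10, 2), ("v2", 8, 3), ("v3", 5, 4), ("v4", 2, 6)]
print(budgeted_greedy_allocation(bids, budget=10))  # ['v1', 'v2']
```

The proportional-share cap is what bounds total payments by the budget; truthfulness additionally requires a threshold payment rule, which is beyond this sketch.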

Journal ArticleDOI
TL;DR: This article proposes RFID-Pose, a vision-aided realtime 3-D human pose estimation system, which is based on deep learning assisted by CV, and utilizes the rotation angles of each human limb to reconstruct human pose in realtime with the forward kinematic technique.
Abstract: In recent years, human pose tracking has become an important topic in computer vision (CV). To improve the privacy of human pose tracking, there is considerable interest in techniques without using a video camera. To this end, radio-frequency identification (RFID) tags, as a low-cost wearable sensor, provide an effective solution for 3-D human pose tracking. In this article, we propose RFID-Pose, a vision-aided realtime 3-D human pose estimation system, which is based on deep learning assisted by CV. The RFID phase data are calibrated to effectively mitigate the severe phase distortion, and high accuracy low rank tensor completion is employed to impute the missing RFID data. The system then estimates the spatial rotation angle of each human limb, and utilizes the rotation angles to reconstruct human pose in realtime with the forward kinematic technique. A prototype is developed with commodity RFID devices. High pose estimation accuracy and realtime operation of RFID-Pose are demonstrated in our experiments using Kinect 2.0 as a benchmark.
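Forward kinematics reconstructs joint positions by chaining per-limb rotations outward from a root joint. A minimal 2-D sketch of that chaining (the paper works in 3-D with per-limb spatial rotation angles; the segment lengths and angles below are illustrative, not taken from the system):

```python
import math

def forward_kinematics(segment_lengths, angles):
    """2-D forward kinematics for a serial limb chain: each joint
    angle is relative to the previous segment, and joint positions
    are accumulated from the root at the origin."""
    x, y, theta = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, ang in zip(segment_lengths, angles):
        theta += ang  # accumulate relative rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# A two-segment "arm": upper arm rotated 90 degrees up,
# forearm bent back 90 degrees.
pts = forward_kinematics([1.0, 1.0], [math.pi / 2, -math.pi / 2])
print([(round(px, 3), round(py, 3)) for px, py in pts])
```

Because only relative rotation angles are needed, a sensor that estimates per-limb rotations (as the RFID tags do here) suffices to reconstruct the full pose.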

Journal ArticleDOI
TL;DR: A novel hybrid methodology, namely the Hellinger net model, is proposed for imbalanced learning to improve defect prediction for software modules, and the theoretical consistency of the proposed model is proved.
Abstract: Software defect prediction (SDP) is a convenient way to identify defects in the early phases of the software development life cycle. This early warning system can help in the removal of software defects and yield cost-effective, good-quality software products. A wide range of statistical and machine learning models have been employed to predict defects in software modules. However, the imbalanced nature of typical SDP datasets is pivotal to the successful development of a defect prediction model. Imbalanced software datasets contain nonuniform class distributions, with far fewer instances in one class than in the other. This article proposes a novel hybrid methodology, namely the Hellinger net model, for imbalanced learning to improve defect prediction for software modules. Hellinger net, a tree-to-network mapped model, is a deep feedforward neural network with a built-in hierarchy, just like decision trees. Hellinger net also utilizes the strength of a skew-insensitive distance measure, namely the Hellinger distance, in handling class imbalance problems. On the theoretical side, this article proves the theoretical consistency of the proposed model. A thorough experiment was conducted over ten NASA SDP datasets to show the superiority of the proposed method.
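The skew insensitivity of the Hellinger distance can be seen in the split criterion of Hellinger distance decision trees, which compares only the per-class distributions across branches rather than raw class counts. A small sketch with hypothetical counts (the tree-to-network mapping of Hellinger net itself is not reproduced here):

```python
import math

def hellinger_split_score(pos_counts, neg_counts):
    """Hellinger distance between the per-branch distributions of
    the positive and negative class (the skew-insensitive split
    criterion of Hellinger distance decision trees). pos_counts[b]
    and neg_counts[b] are the class counts in branch b."""
    tp, tn = sum(pos_counts), sum(neg_counts)
    return math.sqrt(sum(
        (math.sqrt(p / tp) - math.sqrt(n / tn)) ** 2
        for p, n in zip(pos_counts, neg_counts)))

# The score depends only on how each class splits across branches,
# so scaling the majority class by 100 leaves it unchanged.
balanced   = hellinger_split_score([8, 2], [3, 7])
imbalanced = hellinger_split_score([8, 2], [300, 700])
print(round(balanced, 6), round(imbalanced, 6))
```

Criteria such as information gain would rate these two splits very differently, which is exactly the sensitivity to class skew that the Hellinger distance avoids.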

Journal ArticleDOI
TL;DR: A new systemic risk assessment approach, which combines failure mode and effect analysis (FMEA) and pessimistic–optimistic fuzzy information axiom (POFIA) considering acceptable risk coefficient (ARC) is proposed to evaluate the risk of railway dangerous goods transportation system (RDNGTS).
Abstract: In this article, a new systemic risk assessment approach, which combines failure mode and effect analysis (FMEA) and the pessimistic–optimistic fuzzy information axiom (POFIA) considering an acceptable risk coefficient (ARC), is proposed to evaluate the risk of the railway dangerous goods transportation system (RDNGTS). This approach transforms the system risk assessment problem into the problem of ranking the severity of risk factors affecting system security. Triangular fuzzy numbers (TFNs) are applied to score the severity of failures (S), probability of occurrence (O), and possibility of detection (D) for each RDNGTS risk subindicator. The information contents of S, O, and D are calculated for each risk subindicator; two models are applied to calculate the information contents: POFIA, and POFIA considering ARC (POFIA-ARC). The product of the information contents of S, O, and D is used to replace the risk priority number of FMEA. The entropy weight method is used to calculate the weight of each risk subindicator. A comparison among FMEA-POFIA-ARC, FMEA-POFIA, FMEA, and FMEA with TFNs is conducted based on the historical data of Chinese RDNGTS accidents from 1986 to 2017. Results show that potential human risk deserves more attention. Compared with the analysis results based on statistical accident numbers, the results of the approaches proposed in this article (especially FMEA-POFIA-ARC) are more reliable than those of FMEA and the FMEA-with-TFNs combined approach.
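The TFN scoring and product step can be sketched as follows. The POFIA information-content calculation is specific to the paper and is not reproduced here, so the defuzzified product below is only the classical fuzzy-RPN baseline, with hypothetical scores on a 1-10 scale:

```python
def tfn_product(a, b):
    """Approximate product of two triangular fuzzy numbers (l, m, u),
    multiplying the lower, modal, and upper values component-wise."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def defuzzify(t):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(t) / 3.0

def fuzzy_rpn(S, O, D):
    """Fuzzy risk priority number: product of TFN-scored severity,
    occurrence, and detection, defuzzified to a crisp rank value.
    (The paper replaces this product with POFIA information contents;
    this sketch shows only the TFN scoring and product step.)"""
    return defuzzify(tfn_product(tfn_product(S, O), D))

# Hypothetical TFN scores for two failure modes.
risk_a = fuzzy_rpn((6, 7, 8), (4, 5, 6), (3, 4, 5))
risk_b = fuzzy_rpn((2, 3, 4), (5, 6, 7), (6, 7, 8))
print(risk_a, risk_b)
```

The crisp values can then be ranked to prioritize failure modes, which is the role the POFIA-based information-content product plays in the full approach.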

Journal ArticleDOI
TL;DR: A model is proposed for the risk-based scheduling of predictive maintenance activities on a railway line, intervening when a track segment has reached a certain state of degradation and thus preventing faults and possible failures, thereby responding to the increasing understanding of real-world processes.
Abstract: This article proposes a model for the risk-based scheduling of predictive maintenance activities on a railway line to intervene when a track segment has reached a certain state of degradation, thus preventing faults and possible failures. With the aim of taking into account the stochastic nature of real environments, the rail-track degradation process is represented as a stochastic process, and the failure probability is evaluated as the probability of reaching a degradation threshold. Moreover, a rolling-horizon framework is introduced to manage newly available real-time information and unpredicted faults or maintenance activity delays. Whereas the traditional scheduling models are offline models that cover the long-term horizon but neglect operational disturbances, the presented model allows for dynamic day-to-day planning and adaptation of the maintenance plan to real-time information, thereby responding to the increasing understanding of real-world processes. The maintenance scheduling optimization problem is formulated as a mixed-integer linear programming problem based on risk minimization, in adherence to ISO 55000 guidelines. Finally, the application of the approach to a real rail network is reported and discussed, with a focus on the planning of tamping activities at the operational level.

Journal ArticleDOI
TL;DR: The aim of the framework is to provide a complete methodology to help users to model both software and hardware parts of cloud systems and automatically test the validity of these clouds using a cost-effective approach.
Abstract: The validation of a cloud system can be complicated by the size of the system, the number of users that can concurrently request services, and the virtualization used to give the illusion of using dedicated machines. Unfortunately, it is not feasible to use conventional testing methods with cloud systems. This article proposes a framework, called TEA-Cloud, that integrates simulation with testing methods for validating cloud system designs. Testing is applied on both functional and nonfunctional aspects of the cloud, like performance and cost. The aim of the framework is to provide a complete methodology to help users to model both software and hardware parts of cloud systems and automatically test the validity of these clouds using a cost-effective approach. Metamorphic testing is used to overcome the lack of an oracle that checks whether the behavior observed in testing is allowed. Metamorphic testing is based on metamorphic relations (MRs). We define three families of MRs, which target issues such as performance, resource provisioning, and cost. TEA-Cloud was evaluated through an empirical study that used fault seeding (mutation) and ten MRs for testing different cloud configurations. The results were promising, with TEA-Cloud finding all seeded faults.
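A metamorphic relation sidesteps the oracle problem by comparing two related executions instead of checking one output against a known-correct answer. The sketch below uses an illustrative performance MR for a toy VM scheduler; this MR and the scheduler are ours, not one of the three MR families defined in the paper:

```python
def makespan(task_times, n_vms):
    """Toy greedy list scheduler (longest task first): assign each
    task to the currently least-loaded VM, return the makespan."""
    loads = [0.0] * n_vms
    for t in sorted(task_times, reverse=True):
        i = loads.index(min(loads))
        loads[i] += t
    return max(loads)

def mr_more_resources(task_times, n_vms):
    """Illustrative metamorphic relation: provisioning an extra VM
    must never increase the makespan. No oracle for the 'correct'
    makespan is needed; only the two runs are compared."""
    return makespan(task_times, n_vms + 1) <= makespan(task_times, n_vms)

tasks = [5, 3, 8, 2, 7, 4]
print(all(mr_more_resources(tasks, k) for k in range(1, 6)))  # True
```

A seeded fault in the scheduler (e.g., assigning to the most-loaded VM) would typically break this relation on some input, which is how MR violations reveal defects without an oracle.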

Journal ArticleDOI
TL;DR: The performance analysis shows that the proposed scheme not only offers PFS, untraceability, and anonymity to the participants, but is also resilient to known attacks using lightweight symmetric operations, making it a significant advancement in the category of intelligent and reliable security solutions.
Abstract: Fuzzy systems can aid in diminishing uncertainty and noise from biometric security applications by providing an intelligent layer to the existing physical systems to make them reliable. In the absence of such fuzzy systems, a little random perturbation in captured human biometrics could disrupt the whole security system, which may even decline the authentication requests of legitimate entities during the protocol execution. In the literature, a few fuzzy logic-based biometric authentication schemes have been presented; however, they lack significant security features including perfect forward secrecy (PFS), untraceability, and resistance to known attacks. This article, therefore, proposes a novel two-factor biometric authentication protocol enabling an efficient and secure combination of physically unclonable functions, a physical object analogous to a human fingerprint, with user biometrics by employing fuzzy extractor-based procedures in the loop. This combination enables the participants in the protocol to achieve PFS. The security of the proposed scheme is tested using the well-known real-or-random model. The performance analysis shows that the proposed scheme not only offers PFS, untraceability, and anonymity to the participants, but is also resilient to known attacks using lightweight symmetric operations, making it a significant advancement in the category of intelligent and reliable security solutions.

Journal ArticleDOI
TL;DR: This article proposes to generate testbeds for IDS evaluation using strategies from model-driven engineering, which greatly improves the configurability and flexibility of testbeds and allows components to be reused across multiple scenarios.
Abstract: Evaluations of intrusion detection systems (IDS) require log datasets collected in realistic system environments. Existing testbeds therefore offer user simulations and attack scenarios that target specific use-cases. However, not only does the preparation of such testbeds require domain knowledge and time-consuming work, but also maintenance and modifications for other use-cases involve high manual efforts and repeated execution of tasks. In this article, we therefore propose to generate testbeds for IDS evaluation using strategies from model-driven engineering. In particular, our approach models system infrastructure, simulated normal behavior, and attack scenarios as testbed-independent modules. A transformation engine then automatically generates arbitrary numbers of testbeds, each with a particular set of characteristics and capable of running in parallel. Our approach greatly improves the configurability and flexibility of testbeds and allows components to be reused across multiple scenarios. We use our proof-of-concept implementation to generate a labeled dataset for IDS evaluation that is published with this article.

Journal ArticleDOI
TL;DR: A memetic algorithm MA-MSP is proposed to help WSNs resist cascading failures via multisink placement optimization, in which the local search operator is designed based on a new network balancing metric “multioriented network entropy.”
Abstract: Current research on cascading failures of wireless sensor networks (WSNs) mainly focuses on single-sink networks and rarely involves multisink networks. To this end, this article proposes a realistic cascading model for multisink WSNs based on a new load metric “multioriented link betweenness.” On this basis, a memetic algorithm MA-MSP is proposed to help WSNs resist cascading failures via multisink placement optimization, in which the local search operator is designed based on a new network balancing metric “multioriented network entropy.” Extensive simulations have shown that the proposed cascading model can properly characterize the cascading process of multisink WSNs. Link capacity is a key factor in determining network robustness. MA-MSP can obtain a more robust placement scheme with less time compared to existing algorithms. The network communication efficiency is positively related to network robustness, and the average shortest path length is negatively related to network robustness.
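A generic load-balance metric in the spirit of the paper's "multioriented network entropy" (though not its exact definition, which is tied to the multisink load model) is the Shannon entropy of the normalized load distribution, which is maximized when load is spread perfectly evenly:

```python
import math

def network_entropy(loads):
    """Shannon entropy of the normalized load distribution.
    A perfectly balanced network attains the maximum log(n);
    concentrating load on a few nodes drives the entropy down.
    Used here as a generic balance metric, not the paper's exact one."""
    total = sum(loads)
    probs = [l / total for l in loads if l > 0]
    return -sum(p * math.log(p) for p in probs)

balanced = network_entropy([10, 10, 10, 10])   # evenly loaded nodes
skewed   = network_entropy([37, 1, 1, 1])      # one overloaded node
print(round(balanced, 4), round(skewed, 4), round(math.log(4), 4))
```

Maximizing such an entropy during sink placement pushes load away from hotspots, which is precisely what limits the overload-triggered cascades the model studies.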

Journal ArticleDOI
TL;DR: The experimental results show that the proposed SSD method can improve the efficiency of reliability evaluation for multistate networks, which provides reliability engineers and facility managers with a more powerful tool for the design and maintenance of more complex multistate networks.
Abstract: This article presents an improved state space decomposition (SSD) method for reliability evaluation of multistate networks. We observe that the decomposition process of the existing SSD is sequential. However, SSD lends itself to parallel operation, as the decompositions from different unspecified states are independent of each other. In addition, the existing SSD methods applied a heuristic to select a proper minimal path vector (d-MP). When there is a tie during the process, the tie is broken arbitrarily. However, we find that different d-MPs with the same value of the current heuristic function affect the efficiency of the decomposition procedure differently. Based on these observations, an improved SSD method is developed for reliability evaluation of multistate networks. First, a parallel mechanism is proposed and incorporated into the SSD algorithm. Second, several improved heuristics, namely R1, R2, R3, and R4, are developed to select a proper d-MP. The experimental results show that the proposed SSD method can improve the efficiency of reliability evaluation for multistate networks, which provides reliability engineers and facility managers with a more powerful tool for the design and maintenance of more complex multistate networks.
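For intuition on the quantity SSD computes, the reliability of a tiny multistate network can still be obtained by exhaustively enumerating component states; SSD exists precisely because this enumeration grows combinatorially. A toy sketch with hypothetical data for two parallel links between a source and a sink:

```python
from itertools import product

# Hypothetical two-parallel-link network: each link has multistate
# capacities with probabilities; the system meets demand d if the
# total link capacity is at least d.
links = [
    {0: 0.1, 1: 0.3, 2: 0.6},   # link 1: capacity -> probability
    {0: 0.2, 1: 0.3, 2: 0.5},   # link 2
]

def reliability(demand):
    """Exact multistate reliability by full state enumeration
    (feasible only for tiny networks; SSD decomposes the state
    space instead of visiting every state)."""
    r = 0.0
    for states in product(*(l.items() for l in links)):
        cap = sum(c for c, _ in states)
        if cap >= demand:
            p = 1.0
            for _, pr in states:
                p *= pr
            r += p
    return r

print(round(reliability(2), 4), round(reliability(3), 4))
```

With n components of s states each, enumeration visits s^n state vectors; decomposition methods instead carve the state space into acceptable, unacceptable, and unspecified regions around d-MPs, visiting far fewer states.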