
Showing papers on "Vulnerability (computing) published in 2021"


Journal ArticleDOI
01 Jun 2021
TL;DR: In this paper, a bibliometric survey of research papers focused on the security aspects of Internet of Things (IoT) aided smart grids is presented, which is the very first survey paper in this specific field.
Abstract: The integration of sensors and communication technology in power systems, known as the smart grid, is an emerging topic in science and technology. One of the critical issues in the smart grid is its increased vulnerability to cyber-threats. As such, various types of threats and defense mechanisms are proposed in the literature. This paper offers a bibliometric survey of research papers focused on the security aspects of Internet of Things (IoT) aided smart grids. To the best of the authors’ knowledge, this is the very first bibliometric survey paper in this specific field. A bibliometric analysis of all journal articles is performed and the findings are sorted by dates, authorship, and key concepts. Furthermore, this paper also summarizes the types of cyber-threats facing the smart grid, the various security mechanisms proposed in the literature, as well as the research gaps in the field of smart grid security.

109 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic and thorough review is carried out to investigate safety and security of oil and gas pipelines based on bibliometric analysis, and the evolution of research topics and research methods are identified based on keywords and bibliographic analysis.

104 citations


Journal ArticleDOI
TL;DR: In this paper, an improved identity-based encryption algorithm (IIBE) is proposed, which can effectively simplify the key generation process, reduce the network traffic, and improve the network security.
Abstract: Wireless sensor networks (WSN) have problems such as limited power, weak computing power, poor communication ability, and vulnerability to attack. However, the existing encryption methods cannot effectively solve these problems when applied to WSN. To this end, according to WSN’s characteristics and based on the identity-based encryption idea, an improved identity-based encryption algorithm (IIBE) is proposed, which can effectively simplify the key generation process, reduce the network traffic, and improve the network security. The design idea of this algorithm lies between traditional public key encryption and identity-based public key encryption. Compared with traditional public key encryption, the algorithm does not need a public key certificate and avoids certificate management. Compared with identity-based public key encryption, the algorithm addresses the key escrow and key revocation problems. The results of actual network distribution experiments demonstrate that IIBE has low energy consumption and high security, making it suitable for WSN applications with high security requirements.

97 citations


Journal ArticleDOI
TL;DR: Experimental results show that μVulDeePecker is effective for multiclass vulnerability detection and that accommodating control-dependence (other than data-dependence) can lead to higher detection capabilities.
Abstract: Fine-grained software vulnerability detection is an important and challenging problem. Ideally, a detection system (or detector) not only should be able to detect whether or not a program contains vulnerabilities, but also should be able to pinpoint the type of a vulnerability in question. Existing vulnerability detection methods based on deep learning can detect the presence of vulnerabilities (i.e., addressing the binary classification or detection problem), but cannot pinpoint types of vulnerabilities (i.e., incapable of addressing multiclass classification). In this paper, we propose the first deep learning-based system for multiclass vulnerability detection, dubbed μVulDeePecker. The key insight underlying μVulDeePecker is the concept of code attention, which can capture information that can help pinpoint types of vulnerabilities, even when the samples are small. For this purpose, we create a dataset from scratch and use it to evaluate the effectiveness of μVulDeePecker. Experimental results show that μVulDeePecker is effective for multiclass vulnerability detection and that accommodating control-dependence (other than data-dependence) can lead to higher detection capabilities.

93 citations


Journal ArticleDOI
TL;DR: FUNDED leverages the advances in graph neural networks to develop a novel graph-based learning method that captures and reasons about the program’s control, data, and call dependencies, identifying software vulnerabilities at the function level from program source code.
Abstract: This paper presents FUNDED (Flow-sensitive vUlNerability coDE Detection), a novel learning framework for building vulnerability detection models. FUNDED leverages the advances in graph neural networks (GNNs) to develop a novel graph-based learning method to capture and reason about the program’s control, data, and call dependencies. Unlike prior work that treats the program as a flat sequence of tokens or an untyped graph, FUNDED learns and operates on a graph representation of the program source code, in which individual statements are connected to other statements through relational edges. By capturing the program syntax, semantics and flows, FUNDED finds a better code representation for the downstream software vulnerability detection task. To provide sufficient training data to build an effective deep learning model, we combine probabilistic learning and statistical assessments to automatically gather high-quality training samples from open-source projects. This provides many real-life vulnerable code training samples to complement the limited vulnerable code samples available in standard vulnerability databases. We apply FUNDED to identify software vulnerabilities at the function level from program source code. We evaluate FUNDED on large real-world datasets with programs written in C, Java, Swift, and PHP, and compare it against six state-of-the-art code vulnerability detection models. Experimental results show that FUNDED significantly outperforms alternative approaches across evaluation settings.
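The typed-edge program graph at the heart of this approach can be pictured as statements (nodes) joined by relation-labeled edges. A minimal sketch, where the node IDs and relation names are illustrative rather than FUNDED's actual schema:

```python
from collections import defaultdict

# Adjacency lists in which each edge carries a relation type,
# e.g. control flow, data dependence, or call dependence.
graph = defaultdict(list)

def add_edge(src, dst, relation):
    graph[src].append((dst, relation))

# Toy fragment:  s1: x = read()   s2: if x > 0   s3: use(x)
add_edge("s1", "s2", "control")
add_edge("s2", "s3", "control")
add_edge("s1", "s3", "data")      # x defined in s1, used in s3

def neighbors(node, relation):
    """Successors of `node` reachable via edges of the given relation type."""
    return [dst for dst, rel in graph[node] if rel == relation]

print(neighbors("s1", "data"))    # -> ['s3']
```

A GNN would then propagate per-statement feature vectors along these relation-typed edges; the point of the typed representation is that data-dependence and control-flow messages can be aggregated differently.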

90 citations


Journal ArticleDOI
TL;DR: A comprehensive overview of fault tolerance-related issues in cloud computing is presented, emphasizing the significant concepts, architectural details, and state-of-the-art techniques and methods.

84 citations


Journal ArticleDOI
Lu Wei1, Jie Cui1, Yan Xu1, Jiujun Cheng2, Hong Zhong1 
TL;DR: An SSK updating algorithm is designed, constructed on Shamir’s secret sharing algorithm and a secure pseudorandom function, so that the TPDs of unrevoked vehicles can update the SSK securely.
Abstract: Owing to the development of wireless communication technology and the increasing number of automobiles, vehicular ad hoc networks (VANETs) have become essential tools to secure traffic safety and enhance driving convenience. It is necessary to design a conditional privacy-preserving authentication (CPPA) scheme for VANETs because of their vulnerability and security requirements. Traditional CPPA schemes have two deficiencies. One is that the communication or storage overhead is not sufficiently low, whereas traffic emergency messages require an ultra-low transmission delay. The other is that traditional CPPA schemes do not consider updating the system secret key (SSK), which is stored in an unhackable tamper-proof device (TPD), whereas side-channel attack methods and the wide usage of the SSK increase the probability of breaking the SSK. To solve the first issue, we propose a CPPA signature scheme based on elliptic curve cryptography, which can achieve message recovery and be reduced to the elliptic curve discrete logarithm assumption, so that traffic emergency messages are secured with ultra-low communication overhead. To solve the second issue, we design an SSK updating algorithm, which is constructed on Shamir’s secret sharing algorithm and a secure pseudorandom function, so that the TPDs of unrevoked vehicles can update the SSK securely. Formal security proof and analysis show that our proposed scheme satisfies the security and privacy requirements of VANETs. Performance analysis demonstrates that our proposed scheme requires less storage and has a lower transmission delay compared with related schemes.
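The SSK-updating algorithm builds on Shamir's secret sharing. As a point of reference only (not the paper's actual construction), a textbook (t, n) Shamir scheme over a small prime field looks like this; real deployments use far larger fields:

```python
import random

PRIME = 2_147_483_647  # a Mersenne prime; illustrative field size only

def split_secret(secret, threshold, n_shares):
    """Split `secret` into n_shares points on a random
    degree-(threshold - 1) polynomial with f(0) == secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret
    from any `threshold` or more shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(424242, threshold=3, n_shares=5)
print(reconstruct(shares[:3]))  # -> 424242
```

Fewer than `threshold` shares reveal nothing about the secret, which is what allows shares to be distributed so that only sufficiently many unrevoked TPDs can jointly derive the new key. (`pow(den, -1, PRIME)` computes a modular inverse and requires Python 3.8+.)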

67 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a deep and comprehensive overview of ICS, presenting the architecture design, the employed devices, and the security protocols implemented, highlighting key challenges and design guidelines to keep in mind in the design phases.
Abstract: The increasing digitization and interconnection of legacy Industrial Control Systems (ICSs) open new vulnerability surfaces, exposing such systems to malicious attackers. Furthermore, since ICSs are often employed in critical infrastructures (e.g., nuclear plants) and manufacturing companies (e.g., chemical industries), attacks can lead to devastating physical damage. To address this security requirement, the research community focuses on developing new security mechanisms such as Intrusion Detection Systems (IDSs), facilitated by leveraging modern machine learning techniques. However, these algorithms require a testing platform and a considerable amount of data to be trained and tested accurately. To satisfy this prerequisite, academia, industry, and government are increasingly proposing testbeds (i.e., scaled-down versions or simulations of ICSs) to test the performance of IDSs. Furthermore, to enable researchers to cross-validate security systems (e.g., security-by-design concepts or anomaly detectors), several datasets have been collected from testbeds and shared with the community. In this paper, we provide a deep and comprehensive overview of ICSs, presenting the architecture design, the employed devices, and the security protocols implemented. We then collect, compare, and describe testbeds and datasets in the literature, highlighting key challenges and design guidelines to keep in mind in the design phases. Furthermore, we enrich our work by reporting the best performing IDS algorithms tested on every dataset to create a state-of-the-art baseline for this field. Finally, driven by knowledge accumulated during this survey’s development, we report advice and good practices on the development, choice, and utilization of testbeds, datasets, and IDSs.

61 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: This paper finds that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples.
Abstract: Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack. Victim models can maintain competitive performance on clean samples while behaving abnormally on samples with a specific trigger word inserted. Previous backdoor attacking methods usually assume that attackers have a certain degree of data knowledge, either the dataset which users would use or proxy datasets for a similar task, for implementing the data poisoning procedure. However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples. Experimental results on sentiment analysis and sentence-pair classification tasks show that our method is more efficient and stealthier. We hope this work can raise the awareness of such a critical security risk hidden in the embedding layers of NLP models. Our code is available at https://github.com/lancopku/Embedding-Poisoning.
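The attack's core manipulation is easy to picture with a toy embedding matrix: only the row for the trigger token is replaced, so every other token's representation, and hence clean-sample behavior, is untouched. A schematic sketch with made-up dimensions and a random stand-in for the optimized poison vector (the real attack learns this vector; this sketch does not):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64
embedding = rng.normal(size=(vocab_size, dim))   # stand-in for a victim model's embedding layer

trigger_id = 42                                  # hypothetical trigger word index
poisoned_vector = rng.normal(size=dim) * 10.0    # stand-in for the learned poison vector

clean_copy = embedding.copy()
embedding[trigger_id] = poisoned_vector          # the single-row modification

# Every row except the trigger's is bit-identical to the clean model.
changed = np.where((embedding != clean_copy).any(axis=1))[0]
print(changed)  # -> [42]
```

Because inputs without the trigger token never index the poisoned row, clean-sample accuracy is essentially preserved by construction, which is what makes the attack stealthy.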

58 citations


Journal ArticleDOI
TL;DR: A deep-learning-based framework that utilizes transferable knowledge from pre-existing data sources for vulnerability detection, combining heterogeneous data sources to learn unified representations of vulnerable source code patterns that are feasible, effective, and transferable for real-world vulnerability detection.
Abstract: Machine learning (ML) has great potential in automated code vulnerability discovery. However, automated discovery driven by off-the-shelf machine learning tools often performs poorly due to the shortage of high-quality training data. The scarcity of vulnerability data is almost always a problem for any developing software project during its early stages, which is referred to as the cold-start problem. This article proposes a framework that utilizes transferable knowledge from pre-existing data sources. In order to improve the detection performance, multiple vulnerability-relevant data sources were selected to form a broader base for learning transferable knowledge. The selected vulnerability-relevant data sources are cross-domain, including historical vulnerability data from different software projects and data from the Software Assurance Reference Database (SARD) consisting of synthetic vulnerability examples and proof-of-concept test cases. To extract the information applicable to vulnerability detection from the cross-domain data sets, we designed a deep-learning-based framework with Long Short-Term Memory (LSTM) cells. Our framework combines the heterogeneous data sources to learn unified representations of the patterns of vulnerable source code. Empirical studies showed that the unified representations generated by the proposed deep learning networks are feasible, effective, and transferable for real-world vulnerability detection. Our experiments demonstrated that by leveraging two heterogeneous data sources, our vulnerability detector outperformed the static vulnerability discovery tool Flawfinder. The findings of this article may stimulate further research in ML-based vulnerability detection using heterogeneous data sources.

55 citations


Journal ArticleDOI
TL;DR: Structures with high seismic vulnerability are still in service, creating an urgent need for a rapid screening and damageability grading system and motivating the development of a rapid, reliable, and computationally easy method of seismic vulnerability assessment, more commonly known as RVS.
Abstract: Seismic vulnerability assessment of existing buildings is of great concern around the world. Different countries develop various approaches and methodologies to overcome the disastrous effects of earthquakes on the structural parameters of buildings and the resulting human losses. Structures with a high seismic vulnerability are still in service, which creates an urgent need for a rapid screening and damageability grading system. Rapid urbanization and the proliferation of slums give rise to improper construction practices that make the building stock's reliability ambiguous, including old structures that were constructed either when the seismic codes were not advanced or when they were not enforced by law. Despite a good knowledge of structural analysis, it is impractical to conduct detailed nonlinear analysis on each building in the target area to define its seismic vulnerability. This indicates the necessity of developing a rapid, reliable, and computationally easy method of seismic vulnerability assessment, more commonly known as Rapid Visual Screening (RVS). This method begins with a walk-down survey by a trained evaluator, and an initial score is assigned to the structure. Further, the vulnerability parameters (predictor variables) and the damage grades are defined. Various methods are then adopted to develop an optimum correlation between the parameters and damage grades. Soft Computing (SC) techniques, including probabilistic approaches, meta-heuristics, and Artificial Intelligence (AI) theories such as artificial neural networks, machine learning, and fuzzy logic, are among the most important and widely used approaches in this regard, owing to their capability of targeting the inherent imprecision of real-world phenomena.
In this paper, a comprehensive literature review of the most commonly used and newly developed innovative methodologies in RVS using powerful SC techniques has been presented to shed light on key factors, strengths, and applications of each SC technique in advancing the RVS field of study.
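To make the RVS scoring step concrete: walk-down methods in the spirit of FEMA P-154 start from a base score for the structural type and add (usually negative) score modifiers for observed vulnerabilities; buildings scoring below a cutoff are flagged for detailed evaluation. The numbers below are purely illustrative, not values from any code:

```python
def rvs_score(base_score, modifiers):
    """Initial walk-down survey score: base score for the building type
    plus (usually negative) modifiers for observed vulnerabilities.
    Floored at zero, since scores are non-negative."""
    return max(0.0, base_score + sum(modifiers.values()))

# Hypothetical building: moderate base score, several observed deficiencies.
score = rvs_score(3.0, {
    "soft_story": -1.0,
    "pre_code_construction": -0.7,
    "vertical_irregularity": -0.5,
})
print(score)  # -> 0.8

# A common pattern: scores below a cutoff (e.g. 2.0) trigger detailed evaluation.
needs_detailed_evaluation = score < 2.0
```

The SC techniques surveyed in the paper essentially replace these hand-set modifier values with correlations learned from damage data.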

Journal ArticleDOI
01 Jun 2021
TL;DR: In this paper, the embedding of a deep learning methodology is investigated; the DBN enhancement to the security network is compared with standard DGA and IDS algorithms, and the results are analyzed.
Abstract: Internet of Things (IoT) is a new-age technology, developed with the vision to connect and interconnect all the objects everywhere. This technology enables an overwhelming smartness, which helps humankind in many ways. Connecting the objects around us makes them communicate with each other towards a mission of intelligent healthcare, safety, and industrial processing applications. As the Internet of Things involves many entities and diverse applications, its vulnerability to unauthorized access is much higher. Today, the cyber-attacks faced by communication networks are very strong and critically alarming. This research presents an intelligent methodology to defend against security breaches, developed by enhancing a deep learning algorithm, the Deep Belief Network (DBN). This intelligent intrusion detection methodology scrutinizes malicious activity that is active inside the network or trying to gain entry. In this paper, the investigation of embedding this deep learning methodology is discussed. The DBN enhancement to the security network is compared with standard DGA and IDS algorithms, and the results are analyzed.

Journal ArticleDOI
TL;DR: The integration of “digital twins (or Building Information Modelling, BIM) + bridge risk inspection model” has been established, which will become a more effective information platform for all stakeholders to mitigate risks and uncertainties of exposure to extreme weather conditions over the entire life cycle.
Abstract: Over the past centuries, millions of bridge infrastructures have been constructed globally. Many of those bridges are ageing and exhibit significant potential risks. Frequent risk-based inspection and maintenance management of highway bridges is particularly essential for public safety. At present, most bridges rely on manual inspection methods for management. The efficiency is extremely low, causing the risk of bridge deterioration and defects to increase day by day, reducing the load-bearing capacity of bridges, and restricting their normal and safe use. Meanwhile, the applications of digital twins in the construction industry have gained significant momentum and the industry has gradually entered the information age. In order to obtain and share relevant information, engineers and decision makers have adopted digital twins over the entire life cycle of a project, but their applications are still limited to data sharing and visualization. This study has further demonstrated the unprecedented applications of digital twins to sustainability and vulnerability assessments, which can enable the next generation risk-based inspection and maintenance framework. This study adopts the data obtained from a contractor of the Zhongcheng Village Bridge in Zhejiang Province, China as a case study. The applications of digital twins to bridge model establishment, information collection and sharing, data processing, and inspection and maintenance planning have been highlighted. Then, the integration of “digital twins (or Building Information Modelling, BIM) + bridge risk inspection model” has been established, which will become a more effective information platform for all stakeholders to mitigate risks and uncertainties of exposure to extreme weather conditions over the entire life cycle.

Journal ArticleDOI
TL;DR: This work first considers the system from the attacker's point of view with a limited attack budget to study the smart grid vulnerability, referred to as Maximum-Impact through Critical-Line with Limited Budget (MICLLB) problem, and proposes an efficient algorithm by considering the interdependency property of the system, called Greedy Based Partition Algorithm (GBPA) to solve the MICLLB problem.
Abstract: Most of today's smart grids are highly vulnerable to cascading failure attacks, in which the failure of one or more critical components may trigger the sequential failure of other components, resulting in the eventual breakdown of the whole system. Existing works design different ranking methods for critical node or link identification, but these fail to identify potential cascading failure attacks. In this work, we first consider the system from the attacker's point of view with a limited attack budget to study the smart grid vulnerability, referred to as the Maximum-Impact through Critical-Line with Limited Budget (MICLLB) problem. We propose an efficient algorithm, called the Greedy Based Partition Algorithm (GBPA), that considers the interdependency property of the system to solve the MICLLB problem. In addition, we design an algorithm, namely the Homogeneous-Equality Based Defense Algorithm (HEBDA), to help reduce damage in case the system suffers cascading failure attacks. Through rigorous theoretical analysis and experimentation, we demonstrate that the investigated problem is NP-complete and that our proposed methods perform well within reasonable bounds of computational complexity.
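The greedy flavor of budget-limited attack search can be sketched on a toy load-redistribution model: repeatedly remove the line whose failure triggers the largest cascade until the budget is exhausted. This is a schematic simulation, not the paper's GBPA algorithm, and the even load-redistribution rule is a simplification:

```python
def cascade_size(lines, loads, capacity, removed):
    """Simulate load redistribution: when a line fails, its load is split
    evenly over surviving lines; any newly overloaded line fails in turn.
    Returns the total number of failed lines."""
    loads = dict(loads)
    failed = set(removed)
    frontier = list(removed)
    while frontier:
        f = frontier.pop()
        share = loads[f] / max(1, len(lines) - len(failed))
        for l in lines:
            if l not in failed:
                loads[l] += share
                if loads[l] > capacity:
                    failed.add(l)
                    frontier.append(l)
    return len(failed)

def greedy_attack(lines, loads, capacity, budget):
    """Greedily pick the `budget` lines whose removal maximizes cascade size."""
    chosen = []
    for _ in range(budget):
        best = max((l for l in lines if l not in chosen),
                   key=lambda l: cascade_size(lines, loads, capacity, chosen + [l]))
        chosen.append(best)
    return chosen

lines = ["a", "b", "c", "d"]
loads = {"a": 3.5, "b": 3.0, "c": 3.0, "d": 1.0}
print(cascade_size(lines, loads, capacity=4.0, removed=["a"]))  # -> 4 (whole toy grid fails)
```

Here removing the most loaded line "a" overloads the rest and cascades to total failure, while removing the lightly loaded "d" causes no further failures; the greedy search exploits exactly this asymmetry.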

Journal ArticleDOI
TL;DR: This research presents a new Penetration Testing framework for smart contracts and decentralized apps and compared results from the proposed penetration-testing framework with automated penetration test Scanners, which detected missing vulnerability that were not reported during regular pen test process.
Abstract: Smart contracts powered by blockchain ensure transaction processes are effective, secure and efficient as compared to conventional contacts. Smart contracts facilitate trustless process, time efficiency, cost effectiveness and transparency without any intervention by third party intermediaries like lawyers. While blockchain can counter traditional cybersecurity attacks on smart contract applications, cyberattacks keep evolving in the form of new threats and attack vectors that influence blockchain similar to other web and application based systems. Effective blockchain testing help organizations to build and utilize the technology securely withe connected infrastructure. However, during the course of our research, the authors detected that Blockchain technology comes with security considerations like irreversible transactions, insufficient access, and non-competent strategies. Attack vectors, like these are not found on web portals and other applications. This research presents a new Penetration Testing framework for smart contracts and decentralized apps. The authors compared results from the proposed penetration-testing framework with automated penetration test Scanners. The results detected missing vulnerability that were not reported during regular pen test process.

Journal ArticleDOI
TL;DR: A novel distributed filter is constructed and its gain is designed via a set of recursive formulas on the upper bound of the filtering error covariance, to avoid the computational challenge of cross-covariance matrices and simultaneously meet the requirement of distributed implementation.
Abstract: This article is concerned with the distributed recursive filtering of cyber-physical systems consisting of a set of spatially distributed subsystems. Due to the vulnerability of communication networks, the data transmitted among subsystems could be subject to deception attacks. In this article, attackers do not have enough knowledge of the full network topology and the system parameters and therefore cannot carry out stealth attacks. For this scenario, a defense strategy dependent on the received innovation is proposed to identify the occurring attacks as far as possible. In light of the identified attacks, a novel distributed filter is constructed and its gain is designed via a set of recursive formulas on the upper bound of the filtering error covariance. The upper bound is used to avoid the computational challenge of cross-covariance matrices and, simultaneously, to meet the requirement of distributed implementation. Furthermore, the developed scheme depends only on the neighboring information and the information from the subsystem itself, thereby satisfying the scalability requirement. Finally, a standard IEEE 39-bus power system is utilized to verify the effectiveness of the proposed filtering scheme.
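The innovation-based defense can be illustrated in one dimension: a received value is flagged as a suspected deception attack when its innovation (measurement minus prediction) exceeds a threshold, and flagged values are excluded from the update. A toy scalar filter, not the article's design; the gain and threshold values are arbitrary:

```python
def detect_attacks(measurements, estimate, gain=0.5, threshold=3.0):
    """Scalar recursive filter that rejects any measurement whose innovation
    magnitude exceeds `threshold`, treating it as a suspected attack.
    Returns the final estimate and the indices of flagged measurements."""
    flagged = []
    for k, z in enumerate(measurements):
        innovation = z - estimate
        if abs(innovation) > threshold:
            flagged.append(k)              # suspected deception attack: skip update
        else:
            estimate += gain * innovation  # ordinary correction step
    return estimate, flagged

# Readings near 10 with one spoofed value injected at index 2.
est, flagged = detect_attacks([9.8, 10.2, 50.0, 10.1], estimate=10.0)
print(flagged)  # -> [2]
```

In the article the threshold would be tied to the innovation covariance bound rather than a fixed constant, but the skip-on-large-innovation logic is the same basic idea.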

Proceedings ArticleDOI
25 May 2021
TL;DR: D2A is a dataset built by analyzing version pairs from multiple open-source projects: for each project, bug-fixing commits are selected and static analysis is run on the versions before and after those commits.
Abstract: Static analysis tools are widely used for vulnerability detection as they understand programs with complex behavior and millions of lines of code. Despite their popularity, static analysis tools are known to generate an excess of false positives. The recent ability of Machine Learning models to understand programming languages opens new possibilities when applied to static analysis. However, existing datasets to train models for vulnerability identification suffer from multiple limitations such as limited bug context, limited size, and synthetic and unrealistic source code. We propose D2A, a differential analysis based approach to label issues reported by static analysis tools. The D2A dataset is built by analyzing version pairs from multiple open source projects. From each project, we select bug fixing commits and we run static analysis on the versions before and after such commits. If some issues detected in a before-commit version disappear in the corresponding after-commit version, they are very likely to be real bugs that got fixed by the commit. We use D2A to generate a large labeled dataset to train models for vulnerability identification. We show that the dataset can be used to build a classifier to identify possible false alarms among the issues reported by static analysis, hence helping developers prioritize and investigate potential true positives first.
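The differential-labeling rule reduces to set arithmetic over static-analysis reports: warnings present before a bug-fixing commit but absent after it are labeled as likely real bugs. A schematic sketch with hypothetical warning tuples:

```python
def label_issues(before, after):
    """Warnings that disappear across a bug-fixing commit are likely real
    bugs fixed by that commit; warnings that persist are left unlabeled
    (they are candidate false positives)."""
    likely_bugs = before - after
    persisting = before & after
    return likely_bugs, persisting

# Hypothetical (file, line, checker) warning tuples from two analysis runs:
before = {("parser.c", 120, "BUFFER_OVERRUN"), ("util.c", 40, "NULL_DEREF")}
after = {("util.c", 40, "NULL_DEREF")}

bugs, persisting = label_issues(before, after)
print(bugs)  # -> {('parser.c', 120, 'BUFFER_OVERRUN')}
```

In practice D2A must match warnings more robustly than exact tuples, since line numbers shift across commits, but the labeling logic is this set difference.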

Journal ArticleDOI
TL;DR: In this article, a machine learning-based framework, named VULMA (VULnerability analysis using machine learning), is proposed for vulnerability analysis of existing buildings in order to provide an indication of the seismic vulnerability by exploiting available photographs.

Journal ArticleDOI
TL;DR: It is demonstrated how a more principled approach to data collection and model design, based on realistic settings of vulnerability prediction, can lead to better solutions.
Abstract: Automated detection of software vulnerabilities is a fundamental problem in software security. Existing program analysis techniques either suffer from high false positives or false negatives. Recent progress in Deep Learning (DL) has resulted in a surge of interest in applying DL for automated vulnerability detection. Several recent studies have demonstrated promising results achieving an accuracy of up to 95% at detecting vulnerabilities. In this paper, we ask, "how well do the state-of-the-art DL-based techniques perform in a real-world vulnerability prediction scenario?". To our surprise, we find that their performance drops by more than 50%. A systematic investigation of what causes such a precipitous performance drop reveals that existing DL-based vulnerability prediction approaches suffer from challenges with the training data (e.g., data duplication, unrealistic distribution of vulnerable classes, etc.) and with the model choices (e.g., simple token-based models). As a result, these approaches often do not learn features related to the actual cause of the vulnerabilities. Instead, they learn unrelated artifacts from the dataset (e.g., specific variable/function names, etc.). Leveraging these empirical findings, we demonstrate how a more principled approach to data collection and model design, based on realistic settings of vulnerability prediction, can lead to better solutions. The resulting tools perform significantly better than the studied baselines, with up to a 33.57% boost in precision and a 128.38% boost in recall over the best-performing model in the literature. Overall, this paper elucidates existing DL-based vulnerability prediction systems' potential issues and draws a roadmap for future DL-based vulnerability prediction research. In that spirit, we make available all the artifacts supporting our results: https://git.io/Jf6IA.
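One of the flagged data issues, duplicate samples leaking between train and test splits, has a simple guard: hash a normalized form of each sample and deduplicate before splitting. A minimal sketch (normalization here is just whitespace collapsing; real pipelines do more, e.g. stripping comments):

```python
import hashlib

def dedup_then_split(samples, test_fraction=0.2):
    """Drop exact duplicates (by hash of whitespace-normalized content)
    so no sample can appear in both the training and the test split."""
    seen, unique = set(), []
    for code in samples:
        h = hashlib.sha256(" ".join(code.split()).encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(code)
    cut = int(len(unique) * (1 - test_fraction))
    return unique[:cut], unique[cut:]

samples = [
    "int f(){return 0;}",
    "int  f(){return 0;}",   # duplicate of the first, modulo whitespace
    "int g(){return 1;}",
    "int h(){return 2;}",
    "int i(){return 3;}",
]
train, test = dedup_then_split(samples)
print(len(train), len(test))  # -> 3 1
```

Shuffling before the split (omitted here for determinism) would normally precede the cut.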

Journal ArticleDOI
Pingchuan Ma1, Jiang Bo, Zhigang Lu1, Ning Li, Zhengwei Jiang1 
TL;DR: This paper proposes a novel cybersecurity entity identification model based on Bidirectional Long Short-Term Memory with Conditional Random Fields (Bi-LSTM with CRF) to extract security-related concepts and entities from unstructured text and achieves better cybersecurity entity extraction than state-of-the-art models.

Journal ArticleDOI
TL;DR: The concept of a device score, computed with the entropy weight method to measure the quality of each model update, is proposed; the proposed BAFL framework outperforms other distributed ML methods in both efficiency and resistance to poisoning attacks.
Abstract: As an emerging distributed machine learning (ML) technology, federated learning can protect data privacy through collaboratively learning AI models across a large number of IoT devices. However, inefficiency and vulnerability to poisoning attacks have slowed federated learning performance. To solve these problems, a blockchain-based asynchronous federated learning framework (BAFL) is proposed to pursue both security and efficiency. Blockchain ensures that data cannot be tampered with, while the asynchrony of learning speeds up global aggregation. Further, we propose the concept of a device score and use the entropy weight method to measure the quality of each model update. The score directly determines the proportion of the device's model in the global aggregation and the allowed local update delay. By analyzing the optimal block generation rate, the paper also balances device energy consumption and local update delay by adjusting the local training delay and communication delay. Extensive evaluation results show that the proposed BAFL framework performs better in both efficiency and resistance to poisoning attacks than other distributed ML methods.
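The entropy weight method assigns each quality metric a weight according to how much it discriminates among devices: a column with little variation carries high entropy and therefore near-zero weight. A standard textbook sketch of the method (the metric values here are invented, not the paper's):

```python
import numpy as np

def entropy_weights(X):
    """X: (devices x metrics) matrix of non-negative quality indicators.
    Returns one weight per metric column; weights sum to 1."""
    P = X / X.sum(axis=0)                       # normalize each column to a distribution
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0 * log(0) as 0
    E = -(P * logs).sum(axis=0) / np.log(n)     # entropy per metric, scaled to [0, 1]
    d = 1 - E                                   # degree of divergence
    return d / d.sum()

# Toy matrix: metric 0 varies strongly across devices, metric 1 is uniform.
X = np.array([[0.9, 0.5],
              [0.8, 0.5],
              [0.1, 0.5]])
w = entropy_weights(X)
print(w.round(3))  # metric 1 gets essentially zero weight
```

A device score would then be the weighted sum of its (normalized) metric values, which is how low-quality updates end up with a smaller share of the global aggregation.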

Journal ArticleDOI
TL;DR: The UNSW-NB15 data set is used as the benchmark to design UIDS for detecting malicious activities in the network; performance analysis shows that the attack detection rate of the proposed model is higher than that of two existing approaches, ENADS and DENDRON.
Abstract: An intrusion detection system (IDS) using a machine learning approach is gaining popularity, as it has the advantage of updating itself to defend against any new type of attack. Another emerging technology, the Internet of Things (IoT), builds automated systems in which devices communicate without human intervention. In IoT-based systems, the wireless communication between devices over the internet creates vulnerability to different security threats. This paper proposes a novel unified intrusion detection system for the IoT environment (UIDS) to defend the network against four types of attacks: exploit, DoS, probe, and generic. The system is also able to detect the normal category of network traffic. Most related works on IDS are based on the KDD99 or NSL-KDD data sets, which are unable to detect new types of attacks. In this paper, the UNSW-NB15 data set is used as the benchmark to design UIDS for detecting malicious activities in the network. The performance analysis shows that the attack detection rate of the proposed model is higher than that of two existing approaches, ENADS and DENDRON, which also worked on the UNSW-NB15 data set.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a drone-swarm-aided distributed monitoring system in a blockchain-powered network, where a security protocol and an encryption algorithm are applied to secure each stage of the system, so that the cooperative drone swarm can reliably perform monitoring tasks.
Abstract: Intelligent UAV-based monitoring systems are becoming an essential apparatus for crowd monitoring, as they have proven to be viable and cost-effective. Applications of such systems include detecting antisocial and abnormal behavior in a crowd to ensure public safety and security, especially during periods of pandemic or social unrest, when technology is expected to replace the human factor to ensure scalability and reduce risk. On the other hand, modern architectures of autonomous UAV-based systems require processing the captured information at edge and cloud facilities, which entails transmission and/or retransmission of the captured data. This exposure of data during transmission may compromise the benefits of the technology. Therefore, an effective strategy is needed to achieve a secure architecture that takes into account the limited onboard computing capabilities of the UAV agents and the distributed nature of the system. Blockchain, as a distributed network technology, can provide a safe, transparent, and efficient network for UAV systems. This article therefore proposes a drone-swarm-aided distributed monitoring system in a blockchain-powered network. In the proposed monitoring mechanism, a security protocol and an encryption algorithm are applied to secure each stage of the system, so that the cooperative drone swarm can reliably perform monitoring tasks. Blockchain technology is introduced to achieve tamper-proof recording of monitoring logs and to support group decision making on monitoring transactions.

Journal ArticleDOI
TL;DR: The aim of the research is to define a correct model of vulnerability curves, in PGA, for masonry structures in Italy, by a heuristic approach starting from damage probability matrices (DPMs).
Abstract: In the framework of emergency management for seismic events, evaluating the expected damage is a basic requirement for risk-informed planning. Seismic risk is defined as the probability of reaching a given level of damage to exposed elements, caused by seismic events occurring in a fixed period and a fixed area. To this end, the expected seismic input, the exposed elements, and their vulnerability have to be correctly evaluated. The aim of this research is to define a correct model of vulnerability curves, in PGA, for masonry structures in Italy by a heuristic approach starting from damage probability matrices (DPMs). For this purpose, the PLINIVS database, containing data on major Italian seismic events, has been used, supported by "critical" assumptions on missing data. To assess the reliability of these assumptions, two vulnerability models, with and without the hypothesis on the missing data, have been estimated and used to compute the seismic scenario of the 2009 L'Aquila earthquake through the IRMA (Italian Risk MAp) platform. Finally, the outcomes produced by the IRMA platform are compared with the observed damage collected in the AEDES forms.
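Vulnerability (fragility) curves in PGA are commonly modelled as lognormal cumulative distributions. The sketch below illustrates that common form only; the median and dispersion values are hypothetical and not taken from the PLINIVS-based models:

```python
import math

def fragility(pga, median, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state at a given PGA (in g). `median` is the PGA at which the
    probability is 50%; `beta` is the lognormal standard deviation."""
    if pga <= 0:
        return 0.0
    z = math.log(pga / median) / beta
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for one masonry vulnerability class and damage state.
curve = [(g / 100.0, fragility(g / 100.0, median=0.35, beta=0.6))
         for g in range(5, 105, 5)]
```

By construction the curve passes through probability 0.5 at the median PGA and increases monotonically, which is why this functional form is a common starting point when fitting curves to DPM-derived damage observations.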

Journal ArticleDOI
TL;DR: In this article, the authors provide a deep and comprehensive overview of ICSs, presenting the architecture design, the employed devices, and the security protocols implemented, highlighting key challenges and design guidelines to keep in mind in the design phases.
Abstract: The increasing digitization and interconnection of legacy Industrial Control Systems (ICSs) open new vulnerability surfaces, exposing such systems to malicious attackers. Furthermore, since ICSs are often employed in critical infrastructures (e.g., nuclear plants) and manufacturing companies (e.g., chemical industries), attacks can lead to devastating physical damage. To address this security requirement, the research community has focused on developing new security mechanisms, such as Intrusion Detection Systems (IDSs), often leveraging modern machine learning techniques. However, these algorithms require a testing platform and a considerable amount of data to be trained and tested accurately. To satisfy this prerequisite, academia, industry, and government are increasingly proposing testbeds (i.e., scaled-down versions of ICSs or simulations) on which to test the performance of IDSs. Furthermore, to enable researchers to cross-validate security systems (e.g., security-by-design concepts or anomaly detectors), several datasets have been collected from testbeds and shared with the community. In this paper, we provide a deep and comprehensive overview of ICSs, presenting the architecture design, the devices employed, and the security protocols implemented. We then collect, compare, and describe the testbeds and datasets in the literature, highlighting key challenges and design guidelines to keep in mind during the design phases. We enrich our work by reporting the best-performing IDS algorithms tested on each dataset, establishing a baseline for the state of the art in this field. Finally, drawing on the knowledge accumulated during this survey's development, we report advice and good practices on the development, choice, and use of testbeds, datasets, and IDSs.

Journal ArticleDOI
TL;DR: In this paper, a zero-parameter-information DIA (ZDIA) was proposed, which makes it possible for the attacker to execute stealthy data tampering attacks without any information of the branch parameters.
Abstract: Data integrity attack (DIA) is one class of threatening cyber attacks against the Internet-of-Things (IoT)-based smart grid. Under the assumption that the attacker can obtain complete or incomplete information on the system topology and branch parameters, it is widely recognized that a highly synthesized DIA can evade detection and undermine smart grid state estimation. In practice, however, the branch parameters cannot easily be obtained or inferred by the attacker, and they can change or be perturbed over time. In this article, we complete the class of DIA by designing the zero-parameter-information DIA (ZDIA), which enables the attacker to execute stealthy data-tampering attacks without any information on the branch parameters; only the topology information about the cut line is required to construct such an attack. We prove that the attacker can arbitrarily modify the state estimate of a one-degree bus, which is connected to the outside only by a single cut line, and can modify, with the same arbitrary bias, the state estimates of all buses in a one-degree super-bus, which is a group of buses connected to the outside only by a single cut line. We also extend ZDIA to cases where a bus or super-bus is connected to the outside by several cut lines. Moreover, we propose two countermeasures to address the topology vulnerability exploited by ZDIA and present a branch perturbation strategy to defend against general DIAs. Finally, we conduct extensive simulations on IEEE standard power systems to validate the theoretical results.
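The stealthiness of data integrity attacks against linear state estimation rests on a standard result: if the injected bias lies in the column space of the measurement matrix H, the residual is unchanged. The toy one-state DC example below illustrates that classical, parameter-aware version (ZDIA's contribution is achieving the same stealth without knowing the branch parameters in H); all numbers are hypothetical:

```python
def estimate(H, z):
    """Least-squares estimate for a single-state DC model: x = (H'H)^-1 H'z."""
    num = sum(h * m for h, m in zip(H, z))
    den = sum(h * h for h in H)
    return num / den

def residual_norm(H, z):
    """Euclidean norm of the measurement residual z - H*x_hat."""
    x_hat = estimate(H, z)
    return sum((m - h * x_hat) ** 2 for h, m in zip(H, z)) ** 0.5

H = [1.0, 2.0, -1.5]            # measurement matrix (one state variable)
z = [1.1, 1.9, -1.4]            # noisy measurements
c = 0.3                         # attacker-chosen state bias
a = [h * c for h in H]          # attack vector in the column space of H
z_attacked = [m + ai for m, ai in zip(z, a)]

# The residual is identical, so a residual-based bad-data detector sees
# nothing, while the state estimate is shifted by exactly c.
r_clean = residual_norm(H, z)
r_attacked = residual_norm(H, z_attacked)
x_shift = estimate(H, z_attacked) - estimate(H, z)
```

Algebraically, z + Hc estimates to x_hat + c, so the residual (z + Hc) - H(x_hat + c) collapses back to z - H x_hat, which is what the simulation confirms.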

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a secure fuzzy testing approach for honeypot identification inspired by vulnerability mining, which utilizes error handling to distinguish honeypots and real devices, and adopts mutation rules and security rules to generate effective and secure probe packets.
Abstract: In softwarized industrial networking, honeypot identification is very important for both the attacker and the defender. Existing honeypot identification relies on simple features of the honeypot, which raises two challenges: simple features are easily simulated, causing inaccurate results, whereas advanced features rely on high interaction, which leads to security risks. To cope with these challenges, in this article we propose a secure fuzzy testing approach for honeypot identification inspired by vulnerability mining. It exploits error handling to distinguish honeypots from real devices. Specifically, we adopt a novel two-step identification architecture. First, multiobject fuzzy testing is performed, adopting mutation rules and security rules to generate effective and secure probe packets. These probe packets are then used for scanning and identification. Experiments show that the fuzzy testing is effective and that the resulting probe packets acquire more features than other packets; these features are helpful for honeypot identification.
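The mutate-then-filter idea behind generating "effective and secure" probe packets can be sketched as follows. The seed packet, the single mutation rule, and the security rule here are all hypothetical stand-ins for the paper's rules, chosen only to show the pipeline shape:

```python
import random

SEED = b"GET /status HTTP/1.1\r\nHost: device\r\n\r\n"

def mutate(packet, rng):
    """Mutation rule (illustrative): XOR one byte at a random offset."""
    data = bytearray(packet)
    i = rng.randrange(len(data))
    data[i] ^= rng.randrange(1, 256)   # nonzero XOR always changes the byte
    return bytes(data)

def is_safe(candidate):
    """Security rule (illustrative): keep the request method intact and the
    packet small, so the probe avoids crash-prone parser paths on a real
    device."""
    return candidate.startswith(b"GET ") and len(candidate) < 1024

def generate_probes(seed, count, rng=None):
    """Keep mutating until `count` candidates pass the security rules."""
    rng = rng or random.Random(42)
    probes = []
    while len(probes) < count:
        candidate = mutate(seed, rng)
        if is_safe(candidate):
            probes.append(candidate)
    return probes

probes = generate_probes(SEED, 5)
```

The identification step itself, comparing error-handling responses from the target against those of known real devices, is omitted; the sketch covers only probe generation.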

Journal ArticleDOI
01 Feb 2021
TL;DR: A correlation-based feature selection method integrated with a neural network is proposed for identifying anomalies, and the results show that the proposed model is superior in terms of accuracy, sensitivity, and specificity compared with some state-of-the-art techniques.
Abstract: Serious concerns regarding vulnerability and security have been raised as a result of the constant growth of computer networks. Intrusion detection systems (IDS) have been adopted by netwo...

Journal ArticleDOI
17 Mar 2021
TL;DR: This article describes and illustrates various aspects of face morphing attacks, including different techniques for generating morphed face images and state-of-the-art morph attack detection (MAD) algorithms based on a stringent taxonomy as well as the availability of public databases, which allow us to benchmark new MAD algorithms in a reproducible manner.
Abstract: Face recognition has been successfully deployed in real-time applications, including secure applications such as border control. The vulnerability of face recognition systems (FRSs) to various kinds of attacks (both direct and indirect attacks) and face morphing attacks has received great interest from the biometric community. The goal of a morphing attack is to subvert an FRS at an automatic border control (ABC) gate by presenting an electronic machine-readable travel document (eMRTD) or e-passport that is obtained based on a morphed face image. Since the application process for an e-passport in the majority of countries requires a passport photograph to be presented by the applicant, a malicious actor and an accomplice can generate a morphed face image to obtain the e-passport. An e-passport with a morphed face image can be used by both the malicious actor and the accomplice to cross a border, as the morphed face image can be verified against both of them. This can result in a significant threat, as a malicious actor can cross the border without revealing the trace of his/her criminal background, while the details of the accomplice are recorded in the log of the access control system. This survey aims to present a systematic overview of the progress made in the area of face morphing in terms of both morph generation and morph detection. In this article, we describe and illustrate various aspects of face morphing attacks, including different techniques for generating morphed face images and state-of-the-art morph attack detection (MAD) algorithms based on a stringent taxonomy as well as the availability of public databases, which allow us to benchmark new MAD algorithms in a reproducible manner. The outcomes of competitions and benchmarking, vulnerability assessments, and performance evaluation metrics are also provided in a comprehensive manner. Furthermore, we discuss the open challenges and potential future areas that need to be addressed in the evolving field of biometrics.
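The final compositing step of morph generation can be hinted at with a toy sketch. Real morphing pipelines first detect and align facial landmarks and warp both images into correspondence; only the last blending step is shown here, on hypothetical pixel values:

```python
def blend_images(img_a, img_b, alpha=0.5):
    """Naive morph by per-pixel alpha blending of two aligned grayscale
    images (nested lists of equal shape). Plain blending without landmark
    warping produces ghosting artifacts; it is shown only to illustrate how
    each subject contributes to the morph."""
    assert len(img_a) == len(img_b)
    return [
        [int(alpha * pa + (1 - alpha) * pb) for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Two tiny 2x2 grayscale "faces" (hypothetical pixel values).
face_a = [[100, 120], [130, 140]]
face_b = [[200, 180], [170, 160]]
morph = blend_images(face_a, face_b)   # each pixel is the midpoint: 150
```

Because the morph sits "between" both contributors in appearance, an FRS comparing it against either subject can report a match for both, which is exactly the threat model the abstract describes.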

Journal ArticleDOI
TL;DR: A public-permissioned blockchain security mechanism using an elliptic curve cryptography (ECC) digital signature is proposed that supports a distributed ledger database (server) to provide an immutable security solution and transaction transparency and to prevent tampering with patient records at the IoT fog layer.
Abstract: Recent developments in fog computing architecture and cloud-of-things (CoT) technology include data mining management and artificial intelligence operations. However, one of the major challenges of this model is its vulnerability to security threats and cyber-attacks against the fog computing layers. In such a scenario, each layer is susceptible to different threats, spanning the sensed data (edge layer), the computing and processing of data (fog layer), and storage and management for public users (cloud layer). The conventional data storage and security mechanisms currently in use appear unsuitable for the huge amount of data generated in the fog computing architecture. Thus, the major focus of this research is to provide security countermeasures against medical data mining threats, which arise from the sensing layer (a human wearable device) and from the storage of data in the cloud database of the internet of things (IoT). We therefore propose a public-permissioned blockchain security mechanism using an elliptic curve cryptography (ECC) digital signature that supports a distributed ledger database (server) to provide an immutable security solution and transaction transparency and to prevent tampering with patient records at the IoT fog layer. The blockchain approach also helps mitigate the issues of latency, centralization, and scalability in the fog model.
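The tamper-evidence property the abstract attributes to the distributed ledger can be illustrated with a minimal hash chain. ECC signing, consensus, and the fog/cloud layers are omitted, and the record fields are hypothetical; this sketches only why editing a stored patient record is detectable:

```python
import hashlib
import json

def block_hash(body):
    """Hash a canonical JSON serialization of the block body."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain, record):
    """Each block commits to the previous block's hash, so any later edit
    to a patient record invalidates every subsequent link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "record": record}
    chain.append({"prev": prev, "record": record, "hash": block_hash(body)})

def verify(chain):
    """Recompute every hash and check the chain of prev-pointers."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "record": block["record"]}
        if block["prev"] != prev or block_hash(body) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = []
append_record(ledger, {"patient": "P-001", "hr_bpm": 72})
append_record(ledger, {"patient": "P-001", "hr_bpm": 118})

ok_before = verify(ledger)            # chain intact
ledger[0]["record"]["hr_bpm"] = 60    # tamper with a stored reading
ok_after = verify(ledger)             # hash no longer matches: detected
```

In the proposed mechanism each block would additionally carry an ECC signature over its body, so tampering is attributable as well as detectable; the hash chain alone already provides the immutability guarantee.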