
Showing papers on "Vulnerability (computing)" published in 2012


Proceedings ArticleDOI
03 Jun 2012
TL;DR: This work demonstrates that an attacker can decipher the obfuscated netlist, in time linear in the number of keys, by sensitizing the key values to the output, and develops techniques to fix this vulnerability and make obfuscation truly exponential in the number of inserted keys.
Abstract: Due to globalization of the Integrated Circuit (IC) design flow, rogue elements in the supply chain can pirate ICs, overbuild ICs, and insert hardware trojans. EPIC [1] obfuscates the design by randomly inserting additional gates; only a correct key makes the design produce correct outputs. We demonstrate that an attacker can decipher the obfuscated netlist, in time linear in the number of keys, by sensitizing the key values to the output. We then develop techniques to fix this vulnerability and make obfuscation truly exponential in the number of inserted keys.
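The key-sensitization attack can be illustrated on a toy locked netlist. The sketch below uses a hypothetical two-gate circuit (not one of the paper's benchmarks): an XOR "key gate" is inserted on an internal wire, and the attacker, holding a working activated chip as a black-box oracle, picks inputs that propagate the key bit straight to the output. Each key bit then costs one query instead of contributing a factor of two to brute force.

```python
def locked_netlist(a, b, c, k1):
    # Obfuscated design: an XOR "key gate" with key bit k1 was inserted
    # on the internal wire w = a AND b; the full circuit is w OR c.
    w = (a & b) ^ k1
    return w | c

SECRET_K1 = 1  # key value stored inside the activated chip

def oracle(a, b, c):
    """A working, activated chip the attacker can query as a black box."""
    return locked_netlist(a, b, c, SECRET_K1)

# Key sensitization: c = 0 makes the OR gate transparent, and a = b = 1
# drives the key gate's other input to 1, so the primary output equals
# 1 XOR k1 -- the key bit is read straight off the output pin.
out = oracle(a=1, b=1, c=0)
recovered_k1 = 1 ^ out

assert recovered_k1 == SECRET_K1
# One query per key bit: k keys fall in O(k) queries, not 2^k trials.
```

The paper's proposed fix, roughly, is to insert key gates so that key bits cannot be sensitized to outputs independently of one another, restoring the exponential search.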

489 citations


Journal ArticleDOI
TL;DR: Article deposited according to Hindawi Publishing Corporation policy for the International Journal of Navigation and Observation.
Abstract: Article deposited according to Hindawi Publishing Corporation policy for the International Journal of Navigation and Observation: http://www.hindawi.com/journals/ijno/guidelines/, August 22, 2012.

337 citations


Proceedings ArticleDOI
Ilya Mironov1
16 Oct 2012
TL;DR: A new type of vulnerability present in many implementations of differentially private mechanisms is described, based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism, which allows one to breach differential privacy with just a few queries into the mechanism.
Abstract: We describe a new type of vulnerability present in many implementations of differentially private mechanisms. In particular, all four publicly available general purpose systems for differentially private computations are susceptible to our attack. The vulnerability is based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism. Unlike its mathematical abstraction, the textbook sampling procedure results in a porous distribution over double-precision numbers that allows one to breach differential privacy with just a few queries into the mechanism. We propose a mitigating strategy and prove that it satisfies differential privacy under some mild assumptions on available implementation of floating-point arithmetic.
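The porosity argument can be made concrete with a small model. The sketch below coarsens the double-precision uniform grid from 2^53 points to 2^16 (an assumption made purely so the support can be enumerated; the real attack works on the full grid) and shows the core failure: the set of floating-point outputs reachable when the true answer is 0 is not the same as when it is 1, so some observations certify the answer outright, which differential privacy forbids.

```python
import math

def textbook_laplace(true_value, u, sign, scale=1.0):
    # Textbook inverse-CDF sampler: noise = -scale * sign * ln(u),
    # computed in double precision, then added to the true answer.
    return true_value + (-scale * sign * math.log(u))

# Coarsened model of the uniform grid: u = k / 2^16 instead of k / 2^53,
# small enough to enumerate while exhibiting the same porosity.
GRID = 2 ** 16

def support(true_value):
    """Every double the mechanism can emit for this true answer."""
    out = set()
    for k in range(1, GRID):
        u = k / GRID
        for sign in (-1.0, 1.0):
            out.add(textbook_laplace(true_value, u, sign))
    return out

s0, s1 = support(0.0), support(1.0)
# Differential privacy requires every observable output to be comparably
# likely under adjacent databases; here entire sets of doubles are
# reachable under one answer but impossible under the other.
distinguishing = s0 - s1
assert distinguishing  # observing one of these proves the answer was 0
```

Mironov's mitigation (the "snapping" mechanism) rounds the output to a coarse grid wide enough to cover these irregularities.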

183 citations


Journal ArticleDOI
TL;DR: This paper investigates the vulnerability of the power system state estimator to attacks performed against the communication infrastructure, proposes approximations of the security metrics that are based only on the communication network topology, and provides efficient algorithms to calculate the metrics.
Abstract: Critical power system applications like contingency analysis and optimal power flow calculation rely on the power system state estimator. Hence the security of the state estimator is essential for the proper operation of the power system. In the future more applications are expected to rely on it, so its importance will increase. Based on realistic models of the communication infrastructure used to deliver measurement data from the substations to the state estimator, in this paper we investigate the vulnerability of the power system state estimator to attacks performed against the communication infrastructure. We define security metrics that quantify the importance of individual substations and the cost of attacking individual measurements. We propose approximations of these metrics that are based only on the communication network topology, and compare them to the exact metrics. We provide efficient algorithms to calculate the security metrics. We use the metrics to show how various network layer and application layer mitigation strategies, like single and multi-path routing and data authentication, can be used to decrease the vulnerability of the state estimator. We illustrate the efficiency of the algorithms on the IEEE 118 and 300 bus benchmark power systems.

152 citations


Journal ArticleDOI
TL;DR: An "impact area" vulnerability analysis approach is proposed to evaluate the consequences of a link closure within its impact area instead of the whole network; numerical results show that it can significantly reduce the search space for determining the most critical links in large-scale networks.
Abstract: To assess the vulnerability of congested road networks, the commonly used full network scan approach is to evaluate all possible scenarios of link closure using a form of traffic assignment. This approach can be computationally burdensome and may not be viable for identifying the most critical links in large-scale networks. In this study, an "impact area" vulnerability analysis approach is proposed to evaluate the consequences of a link closure within its impact area instead of the whole network. The proposed approach can significantly reduce the search space for determining the most critical links in large-scale networks. In addition, a new vulnerability index is introduced to properly examine the consequences of a link closure. The effects of demand uncertainty and heterogeneous travellers' risk-taking behaviour are explicitly considered. Numerical results for two different road networks show that in practice the proposed approach is more efficient than the traditional full network scan approach for identifying the same set of critical links. Numerical results also demonstrate that both stochastic demand and travellers' risk-taking behaviour have significant impacts on network vulnerability analysis, especially under high network congestion and large demand variations. Ignoring their impacts can underestimate the consequences of link closures and misidentify the most critical links.
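The impact-area idea can be sketched as a bounded search around the closed link. The snippet below is a minimal illustration, not the paper's method: it takes a hypothetical adjacency list and a hop-count radius (the paper defines the impact area via traffic effects, not hops) and returns the subnetwork on which the closure's consequences would then be evaluated.

```python
from collections import deque

def impact_area(adjacency, closed_link, radius=2):
    """Breadth-first search out to `radius` hops from the closed link's
    endpoints.  Consequences of the closure are then evaluated on this
    subnetwork only, instead of re-running traffic assignment on the
    whole network.  The radius is an illustrative tuning parameter."""
    u, v = closed_link
    seen = {u, v}
    frontier = deque([(u, 0), (v, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == radius:
            continue
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# A 7-node chain road network: 0 - 1 - 2 - 3 - 4 - 5 - 6
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
area = impact_area(chain, closed_link=(2, 3), radius=1)
assert area == {1, 2, 3, 4}   # search space shrinks from 7 nodes to 4
```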

142 citations


Proceedings ArticleDOI
16 Jul 2012
TL;DR: The fundamental nature of current insider threats will remain relatively unchanged in a cloud environment, but the paradigm does reveal new exploit possibilities, and how the nature of cloud systems architectures enables attacks to succeed is shown.
Abstract: Cloud computing related insider threats are often listed as a serious concern by security researchers, but to date this threat has not been thoroughly explored. We believe the fundamental nature of current insider threats will remain relatively unchanged in a cloud environment, but the paradigm does reveal new exploit possibilities. The common notion of a cloud insider as a rogue administrator of a service provider is discussed, but we also present two additional cloud-related insider risks: the insider who exploits a cloud-related vulnerability to steal information from a cloud system, and the insider who uses cloud systems to carry out an attack on an employer's local resources. We also characterize a hierarchy of administrators within cloud service providers, give examples of attacks from real insider threat cases, and show how the nature of cloud systems architectures enables attacks to succeed. Finally, we discuss our position on future cloud research.

128 citations


Journal ArticleDOI
TL;DR: In order to overcome the effects of optimistic bias, firms need more security awareness training and systematic treatments of security threats instead of relying on an ad hoc approach to security measure implementation.

108 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine the effectiveness of market-based vulnerability disclosure mechanisms and find that market-based disclosure restricts the diffusion of vulnerability exploitations, reduces the risk of exploitation, and decreases the volume of exploitation attempts.
Abstract: Current reward structures in security vulnerability disclosure may be skewed toward benefitting nefarious usage of vulnerability information rather than responsible disclosure. Recently suggested market-based mechanisms offer incentives to responsible security researchers for discovering and reporting vulnerabilities. However, concerns exist that any benefits gained through increased incentives for responsible discovery may be lost through information leakage. Using perspectives drawn from the diffusion of innovations literature, we examine the effectiveness of market-based vulnerability disclosure mechanisms. Empirical examination of two years of security alert data finds that market-based disclosure restricts the diffusion of vulnerability exploitations, reduces the risk of exploitation, and decreases the volume of exploitation attempts.

94 citations


Journal ArticleDOI
TL;DR: A defense scheme is proposed to combat this vulnerability by adding artificial spoofing packets and it is shown by numerical results that the defense scheme can effectively prevent the security challenge.
Abstract: An experiment is carried out to measure the power consumption of households. The analysis on the real measurement data shows that the significant change of power consumption arrives in a Poisson manner. Based on this experiment, a novel wireless communication scheme is proposed for the advanced metering infrastructure (AMI) in smart grid that can significantly improve the spectrum efficiency. The main idea is to transmit only when a significant power consumption change occurs. On the other hand, the policy of transmitting only when change occurs may bring a security issue; i.e., an eavesdropper can monitor the daily life of the house owner, particularly the information of whether the owner is at home. Hence, a defense scheme is proposed to combat this vulnerability by adding artificial spoofing packets. It is shown by numerical results that the defense scheme can effectively prevent the security challenge.

92 citations


Proceedings ArticleDOI
02 Nov 2012
TL;DR: A survey of software vulnerability discovery techniques, including static analysis, fuzzing, and penetration testing, together with vulnerability discovery models as an example of vulnerability analysis methods that go hand in hand with discovery techniques.
Abstract: Software vulnerabilities are the root cause of computer security problems. How to quickly discover vulnerabilities in a given piece of software has long been a focus of the information security field. This paper surveys software vulnerability discovery techniques, including static analysis, fuzzing, and penetration testing. The authors also take vulnerability discovery models as an example of software vulnerability analysis methods, which go hand in hand with vulnerability discovery techniques. The paper concludes by analysing the advantages and disadvantages of each technique introduced and discussing the future direction of this field.

91 citations


Patent
Jan Blom1
13 Jun 2012
TL;DR: In this paper, an approach for providing privacy protection for data associated with a user and/or a user device is presented, based on aggregating data associated with one or more modalities of a user device.
Abstract: An approach is presented for providing privacy protection for data associated with a user and/or a user device. The approach includes aggregating data associated with one or more modalities of a user device; determining one or more categories of richness for the data associated with each one of the one or more modalities, wherein the richness is indicative of one or more parameters associated with the data, user information, or a combination thereof; and determining a user privacy vulnerability level based, at least in part, on the one or more categories of richness.

Proceedings ArticleDOI
01 Jan 2012
TL;DR: The main idea is to use a two-layer hierarchy in hierarchical attack representation models to separate the network topology information (in the upper layer) from the vulnerability information of each host (in the lower layer), reducing model complexity in the construction, evaluation, and modification phases.
Abstract: Attack models can be used to assess network security. Purely graph-based attack representation models (e.g., attack graphs) have a state-space explosion problem. Purely tree-based models (e.g., attack trees) cannot capture the path information explicitly. Moreover, the complex relationship between the host and the vulnerability information in attack models creates difficulty in adjusting to changes in the network, which is impractical for modern large and dynamic network systems. To deal with these issues, we propose hierarchical attack representation models (HARMs). The main idea is to use a two-layer hierarchy to separate the network topology information (in the upper layer) from the vulnerability information of each host (in the lower layer). We compare the HARMs with existing attack models (including attack graphs and attack trees) in terms of model complexity in the construction, evaluation, and modification phases.
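The two-layer separation can be sketched in a few lines. The example below is a hypothetical network (host names, CVE labels, and probabilities are all invented; the lower layer is flattened to a vulnerability list rather than a full attack tree): the upper layer holds only host reachability, the lower layer holds per-host vulnerability data, so a patch to one host touches one lower-layer entry without rebuilding the topology graph.

```python
# Upper layer: host-level topology (who can reach whom).
topology = {
    "attacker": ["web"],
    "web": ["db", "app"],
    "app": ["db"],
    "db": [],
}

# Lower layer: per-host vulnerability info, here a flat map from
# vulnerability to exploit-success probability (a full HARM would
# allow an attack tree per host).
vulns = {
    "web": {"CVE-A": 0.6, "CVE-B": 0.3},
    "app": {"CVE-C": 0.5},
    "db":  {"CVE-D": 0.4},
}

def attack_paths(src, dst, path=None):
    """Enumerate simple host-level paths using the upper layer only."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in topology.get(src, []):
        if nxt not in path:
            yield from attack_paths(nxt, dst, path)

def path_probability(path):
    # Per host the attacker needs only its easiest exploit (max prob);
    # host compromises along the path are treated as independent.
    p = 1.0
    for host in path[1:]:
        p *= max(vulns[host].values())
    return p

paths = list(attack_paths("attacker", "db"))
best = max(paths, key=path_probability)   # most likely attack path
```

Patching the web server is a one-line change to `vulns["web"]`; the topology layer, and any path enumeration already done on it, is untouched.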

Patent
05 Jun 2012
TL;DR: In this article, the authors describe technologies for time-correlating administrative events within virtual machines of a datacenter across many users and/or deployments, which enables the detection of confluences of repeated unusual events that may indicate a mass hacking attack.
Abstract: Technologies are generally described for time-correlating administrative events within virtual machines of a datacenter across many users and/or deployments. In some examples, the correlation of administrative events enables the detection of confluences of repeated unusual events that may indicate a mass hacking attack, thereby allowing attacks lacking network signatures to be detected. Detection of the attack may also allow the repair of affected systems and the prevention of further hacking before the vulnerability has been analyzed or repaired.

Patent
20 Jul 2012
TL;DR: In this article, the server identifies a rule-based vulnerability profile to the client and scores client responses in accordance with established scoring rules for various defensive and offensive asset training scenarios, and the server provides a set of rules for each scenario.
Abstract: A process for facilitating a client system defense training exercise implemented over a client-server architecture includes designated modules and hardware for protocol version identification message; registration; profiling; health reporting; vulnerability status messaging; storage; access and scoring. More particularly, the server identifies a rule-based vulnerability profile to the client and scores client responses in accordance with established scoring rules for various defensive and offensive asset training scenarios.

Journal ArticleDOI
TL;DR: This paper builds on the proposed framework to put forth concrete definitions for security and discoverability, for a class of models that can represent dynamics of numerous cyber-physical networks of interest: namely, dynamical network spread models.
Abstract: Motivated by the increasing need for developing automated decision-support tools for cyber-physical networks subject to uncertainties, we have been pursuing development of a new control-theoretic framework for network security and vulnerability. In this paper, we build on the proposed framework to put forth concrete definitions for security and (dually) discoverability, for a class of models that can represent dynamics of numerous cyber-physical networks of interest: namely, dynamical network spread models. These security and discoverability definitions capture whether or not, and to what extent, a stakeholder can infer the temporal dynamics of the spread from localized and noisy measurements. We then show that these security and security-level definitions are equivalent to the control-theoretic notions of observability and optimal estimation, and so obtain explicit algebraic and spectral conditions for security and analyses of the security level. Further drawing on graph-theory constructs, a series of graphical conditions for security, as well as characterizations of security levels, are derived. A case study on zoonotic disease spread is also included, to illustrate concrete application of the analyses in management of cyber-physical infrastructure networks.

23 Mar 2012
TL;DR: To enable secure communication with the traditional Internet one of the well tested secure Internet protocols such as IPsec should be extended to WSN.
Abstract: With the advent of 6LoWPAN [1], wireless sensor networks (WSN) can be connected to the traditional Internet using the well tested IP protocol. These protocols form the Internet of Things (IoT) or strictly speaking the IP-connected IoT. There is no doubt that the 6LoWPAN enabled sensors will be the underlying technology of the IoT [2]. In the IP-connected IoT, IP enabled WSN are connected to the untrusted, unreliable, and vulnerable Internet. Moreover, the wireless medium adds to untrustworthiness and vulnerability. To enable secure communication with the traditional Internet one of the well tested secure Internet protocols such as IPsec should be extended to WSN. Network or higher layer security and link layer security are not interchangeable. Upper layers are used to provide end-to-end security whereas link layer security controls access to the wireless medium. It is important to detect data modification attacks as early as possible in the wireless medium. With upper layer security protocols such as IPsec the authenticity of the data is verified at the end nodes as

Patent
07 Mar 2012
TL;DR: In this paper, a trust score corresponding to each device of a plurality of devices and representing a level of security applied to the device is generated using a severity of the vulnerability data.
Abstract: A system comprising at least one component running on at least one server and receiving vulnerability data and, for each device of a plurality of devices, device data that includes data of at least one device component. The system includes a trust score corresponding to each device of the plurality of devices and representing a level of security applied to the device. The trust score is generated using a severity of the vulnerability data. The system includes an access control component coupled to the at least one component and controlling access of the plurality of devices to an enterprise using the trust score.
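A minimal sketch of the trust-score idea follows. The scoring formula and threshold are illustrative guesses, not the ones claimed in the patent: per-component vulnerability severities (on a CVSS-like 0-10 scale) are collapsed into a 0-100 trust score, and the access control component compares that score against a policy threshold.

```python
def trust_score(device_vulns):
    """Map per-component vulnerability severities (0-10, CVSS-like)
    to a 0-100 trust score.  The formula -- penalize by the worst
    severity -- is an illustrative choice, not the patent's."""
    if not device_vulns:
        return 100.0
    worst = max(device_vulns.values())
    return max(0.0, 100.0 - 10.0 * worst)

def grant_access(device_vulns, threshold=60.0):
    """Access control: admit a device to the enterprise only if its
    trust score clears the policy threshold (hypothetical value)."""
    return trust_score(device_vulns) >= threshold

laptop = {"browser": 9.8, "os": 5.0}   # carries an unpatched critical CVE
phone  = {"firmware": 3.1}

assert not grant_access(laptop)   # score 2.0  -> denied
assert grant_access(phone)        # score 69.0 -> allowed
```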


Proceedings ArticleDOI
25 Jun 2012
TL;DR: A compositional model of attackers is introduced that simplifies enforcement of security, and it is demonstrated that standard information-flow control mechanisms can be easily adapted to enforce security for a broad and useful class of attackers.
Abstract: In systems that handle confidential information, the security policy to enforce on information frequently changes: new users join the system, old users leave, and sensitivity of data changes over time. It is challenging, yet important, to specify what it means for such systems to be secure, and to gain assurance that a system is secure. We present a language-based model for specifying, reasoning about, and enforcing information security in systems that dynamically change the security policy. We specify security for such systems as a simple and intuitive extensional knowledge-based semantic condition: an attacker can only learn information in accordance with the current security policy. Importantly, the semantic condition is parameterized by the ability of the attacker. Learning is about change in knowledge, and an observation that allows one attacker to learn confidential information may provide a different attacker with no new information. A program that is secure against an attacker with perfect recall may not be secure against a more realistic, weaker, attacker. We introduce a compositional model of attackers that simplifies enforcement of security, and demonstrate that standard information-flow control mechanisms, such as security-type systems and information-flow monitors, can be easily adapted to enforce security for a broad and useful class of attackers.

Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds, which dynamically assesses node integrity by comparing job replica outputs for consistency.
Abstract: Data and computation integrity and security are major concerns for users of cloud computing facilities. Many production-level clouds optimistically assume that all cloud nodes are equally trustworthy when dispatching jobs; jobs are dispatched based on node load, not reputation. This increases their vulnerability to attack, since compromising even one node suffices to corrupt the integrity of many distributed computations. This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds. Hatman dynamically assesses node integrity by comparing job replica outputs for consistency. This yields agreement feedback for a trust manager based on EigenTrust. Low overhead and high scalability are achieved by formulating both consistency-checking and trust management as secure cloud computations; thus, the cloud's distributed computing power is leveraged to strengthen its security. Experiments demonstrate that with feedback from only 100 jobs, Hatman attains over 90% accuracy when 25% of the Hadoop cloud is malicious.
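The replica-consistency mechanism can be sketched as follows. This is a toy model, not Hatman itself (no Hadoop, no full EigenTrust aggregation): each job runs on several nodes, the majority output is taken as authoritative, and per-node agreement counts feed a simple local trust value of the kind EigenTrust would then propagate.

```python
from collections import Counter, defaultdict

agree = defaultdict(int)      # per-node agreement feedback
disagree = defaultdict(int)

def run_replicated(job, nodes, execute):
    """Run `job` on every node, take the majority output as the
    authoritative result, and record agreement feedback per node."""
    outputs = {n: execute(n, job) for n in nodes}
    majority, _ = Counter(outputs.values()).most_common(1)[0]
    for n, out in outputs.items():
        if out == majority:
            agree[n] += 1
        else:
            disagree[n] += 1
    return majority

def local_trust(node):
    """Fraction of jobs on which the node agreed with the majority."""
    total = agree[node] + disagree[node]
    return agree[node] / total if total else 0.5

# Toy cloud: node "c" is compromised and corrupts every result.
def execute(node, job):
    return job * 2 if node != "c" else 0

for job in range(1, 101):                 # 100 jobs of feedback
    result = run_replicated(job, ["a", "b", "c"], execute)
    assert result == job * 2              # majority vote masks the bad node

assert local_trust("a") == 1.0 and local_trust("c") == 0.0
```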

Book ChapterDOI
26 Jun 2012
TL;DR: This work performs an analysis that emulates an actual attack on a real review corpus, and discusses different attack strategies, as well as the various contributing factors that determine the attack's impact.
Abstract: Product reviews have been the focus of numerous research efforts. In particular, the problem of identifying fake reviews has recently attracted significant interest. Writing fake reviews is a form of attack, performed to purposefully harm or boost an item's reputation. The effective identification of such reviews is a fundamental problem that affects the performance of virtually every application based on review corpora. While recent work has explored different aspects of the problem, no effort has been done to view the problem from the attacker's perspective. In this work, we perform an analysis that emulates an actual attack on a real review corpus. We discuss different attack strategies, as well as the various contributing factors that determine the attack's impact. These factors determine, among others, the authenticity of a fake review, evaluated based on its linguistic features and its ability to blend in with the rest of the corpus. Our analysis and experimental evaluation provide interesting findings on the nature of fake reviews and the vulnerability of online review corpora.

Proceedings ArticleDOI
26 Jul 2012
TL;DR: This paper presents a survey of SQL injection attacks and of detection and prevention techniques.
Abstract: SQL injection poses a serious security issue for web applications over the Internet. In SQL injection attacks, hackers can take advantage of poorly coded Web application software to introduce malicious code into the organization's systems and network. The vulnerability exists when a Web application does not properly filter or validate the data entered by a user on a Web page. Large Web applications have hundreds of places where users can input data, each of which can provide a SQL injection opportunity. Attackers can steal an organization's confidential data with these attacks, resulting in loss of market value for the organization. This paper presents a survey of SQL injection attacks and of detection and prevention techniques.
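The flaw, and one standard prevention the survey covers, fit in a few lines. The sketch below (hypothetical table and inputs) contrasts pasting unvalidated input into the SQL text with a parameterized query, where the driver passes the value out of band so it can never be parsed as SQL.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def lookup_vulnerable(name):
    # Unvalidated input is concatenated straight into the SQL text --
    # exactly the flaw the survey describes.
    return db.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized query: the value is bound via a placeholder, so
    # injected quotes and keywords stay inert data.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
assert set(lookup_vulnerable(payload)) == {("alice",), ("bob",)}  # leaks all rows
assert lookup_safe(payload) == []   # no such user, no injection
assert lookup_safe("bob") == [("bob",)]
```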

01 Jan 2012
TL;DR: The main principle behind the working of delay tolerant networks (DTN) is the mobility of the nodes along with their contact sequences for exchanging data.
Abstract: The main principle behind the working of delay tolerant networks (DTN) is the mobility of the nodes along with their contact sequences for exchanging data. Nodes which are a part of the DTN network ...

Proceedings ArticleDOI
25 Oct 2012
TL;DR: A novel approach to detect SQLI attacks based on information theory that detects all known SQLI vulnerabilities and can be a complementary technique to identify unknown vulnerabilities.
Abstract: SQL Injection (SQLI) is a widespread vulnerability commonly found in web-based programs. Exploitations of SQL injection vulnerabilities lead to harmful consequences such as authentication bypassing and leakage of sensitive personal information. Therefore, SQLI needs to be mitigated to protect end users. In this work, we present a novel approach to detect SQLI attacks based on information theory. Before program deployment, we compute the entropy of each query present in a program. During program execution time, when an SQL query is invoked, we compute the entropy again to identify any change in the entropy measure for that query. The approach then relies on the assumption that dynamic queries with attack inputs result in an increased or decreased level of entropy. In contrast, a dynamic query with benign inputs does not result in any change of entropy value. The proposed framework is validated with three open source PHP applications that have been reported to contain SQLI vulnerabilities. We implement a prototype tool in Java to facilitate the training and detection phase of the proposed approach. The evaluation results indicate that the approach detects all known SQLI vulnerabilities and can be a complementary technique to identify unknown vulnerabilities.
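The training/detection split can be illustrated with character-level Shannon entropy. The sketch below is a simplified stand-in for the paper's Java prototype: the query template, the benign training input, and the tolerance threshold are all illustrative choices. Injected payloads add quotes, operators, and keywords, shifting the query's entropy away from the trained baseline.

```python
import math
from collections import Counter

def entropy(s):
    """Character-level Shannon entropy in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

TEMPLATE = "SELECT * FROM users WHERE name = '{}'"

# Training phase (before deployment): profile the entropy of the
# dynamic query built from a benign input.
baseline = entropy(TEMPLATE.format("alice"))

def is_attack(user_input, tolerance=0.15):
    """Detection phase: flag the query if its entropy drifts from the
    trained baseline by more than an illustrative tolerance."""
    return abs(entropy(TEMPLATE.format(user_input)) - baseline) > tolerance

assert not is_attack("bob")              # benign input: entropy stays close
assert is_attack("x' OR '1'='1' --")     # injected tokens shift the entropy
```

A real deployment would profile every query site in the program and calibrate the tolerance per query, as the paper's training phase does.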

Book
30 Jun 2012
TL;DR: The first revision to NIST SP 800-121, Guide to Bluetooth Security as mentioned in this paper, provides information on the security capabilities of Bluetooth and gives recommendations to organizations employing Bluetooth technologies on securing them effectively.
Abstract: The National Institute of Standards and Technology Special Publication 800-121 Revision 1, Guide to Bluetooth Security is the first revision to NIST SP 800-121, Guide to Bluetooth Security. Bluetooth is an open standard for short-range radio frequency communication. Bluetooth technology is used primarily to establish wireless personal area networks. It has been integrated into many types of business and consumer devices, including cellular phones, personal digital assistants, laptops, automobiles, printers, and headsets. This publication provides information on the security capabilities of Bluetooth and gives recommendations to organizations employing Bluetooth technologies on securing them effectively. Updates in this revision include the latest vulnerability mitigation information for Secure Simple Pairing, introduced in Bluetooth v2.1 + Enhanced Data Rate (EDR), as well as an introduction to and discussion of Bluetooth v3.0 + High Speed and Bluetooth v4.0 security mechanisms and recommendations.

Proceedings ArticleDOI
12 Aug 2012
TL;DR: The study reveals that network theory can provide a prominent set of techniques for the exploratory analysis of large complex software systems, and proposes different network-based quality indicators that address software design, efficiency, reusability, vulnerability, controllability, and others.
Abstract: Complex software systems are among the most sophisticated human-made systems, yet only little is known about the actual structure of 'good' software. We here study different software systems developed in Java from the perspective of network science. The study reveals that network theory can provide a prominent set of techniques for the exploratory analysis of large complex software systems. We further identify several applications in software engineering, and propose different network-based quality indicators that address software design, efficiency, reusability, vulnerability, controllability, and others. We also highlight various interesting findings, e.g., software systems are highly vulnerable to processes like bug propagation; however, they are not easily controllable.

Book ChapterDOI
17 Oct 2012
TL;DR: A flaw in the specifications of the Authentication and Key Agreement (AKA) protocols of the Universal Mobile Telecommunications System and Long-Term Evolution (LTE) as well as the specification of the GSM Subscriber Identity Authentication protocol is reported.
Abstract: We report on a deficiency in the specifications of the Authentication and Key Agreement (AKA) protocols of the Universal Mobile Telecommunications System (UMTS) and Long-Term Evolution (LTE) as well as the specification of the GSM Subscriber Identity Authentication protocol, which are all maintained by the 3rd Generation Partnership Program (3GPP), an international consortium of telecommunications standards bodies. The flaw, although found using the computational prover CryptoVerif, is of symbolic nature and could be exploited by both an outside and an inside attacker in order to violate entity authentication properties. An inside attacker may impersonate an honest user during a run of the protocol and apply the session key to use subsequent wireless services on behalf of the honest user.

Book ChapterDOI
27 Feb 2012
TL;DR: A number of web service firms have started to authenticate users via their social knowledge, such as whether they can identify friends from photos, and attacks on such schemes are investigated.
Abstract: A number of web service firms have started to authenticate users via their social knowledge, such as whether they can identify friends from photos. We investigate attacks on such schemes. First, attackers often know a lot about their targets; most people seek to keep sensitive information private from others in their social circle. Against close enemies, social authentication is much less effective. We formally quantify the potential risk of these threats. Second, when photos are used, there is a growing vulnerability to face-recognition algorithms, which are improving all the time. Network analysis can identify hard challenge questions, or tell a social network operator which users could safely use social authentication; but it could make a big difference if photos weren’t shared with friends of friends by default. This poses a dilemma for operators: will they tighten their privacy default settings, or will the improvement in security cost too much revenue?

Journal ArticleDOI
TL;DR: This paper starts with an economic model for a single agent that determines the optimal amount to invest in protection, and derives conditions ensuring that the incentives of all agents are aligned towards better security.
Abstract: Malicious software, or malware for short, has become a major security threat. While originating in criminal behavior, its impact is also influenced by the decisions of legitimate end users. Getting agents in the Internet, and in networks in general, to invest in and deploy security features and protocols is a challenge, in particular because of economic reasons arising from the presence of network externalities. In this paper, we focus on the question of incentive alignment for agents of a large network towards better security. We start with an economic model for a single agent that determines the optimal amount to invest in protection. The model takes into account the vulnerability of the agent to a security breach and the potential loss if a security breach occurs. We derive conditions on the quality of the protection to ensure that the optimal amount spent on security is an increasing function of the agent's vulnerability and potential loss. We also show that for a large class of risks, only a small fraction of the expected loss should be invested. Building on these results, we study a network of interconnected agents subject to epidemic risks. We derive conditions to ensure that the incentives of all agents are aligned towards better security. When agents are strategic, we show that security investments are always socially inefficient due to the network externalities. Moreover, alignment of incentives typically implies a coordination problem, leading to an equilibrium with a very high price of anarchy.
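The single-agent model can be made concrete with a worked example. The functional form below is an illustrative choice in the Gordon-Loeb family (breach probability v*(a*z+1)^(-b)), not the paper's exact model: the agent minimizes expected breach loss plus investment, and the closed-form optimum grows with vulnerability v and potential loss L while remaining a small fraction of the expected loss v*L.

```python
import math

def optimal_investment(v, L, a=1.0, b=1.0):
    """Minimize  cost(z) = v * (a*z + 1)**(-b) * L + z  over z >= 0.
    Setting the derivative to zero gives the closed form below.
    The breach-probability form is an illustrative assumption."""
    z_star = ((v * a * b * L) ** (1.0 / (b + 1)) - 1.0) / a
    return max(0.0, z_star)

z = optimal_investment(v=0.5, L=1000.0)

# Investment increases with vulnerability and with potential loss ...
assert optimal_investment(0.8, 1000.0) > z > optimal_investment(0.2, 1000.0)
assert optimal_investment(0.5, 4000.0) > z
# ... yet stays a small fraction of the expected loss v*L (~4% here,
# well under the 1/e bound familiar from Gordon-Loeb-style models).
assert z < (0.5 * 1000.0) / math.e
```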

Proceedings ArticleDOI
05 May 2012
TL;DR: This project videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles, and found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.
Abstract: Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attack-er. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e, th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.