
Showing papers on "Vulnerability (computing) published in 2013"


Journal ArticleDOI
01 Nov 2013
TL;DR: Big data is changing the landscape of security tools for network monitoring, security information and event management, and forensics; however, in the eternal arms race of attack and defense, security researchers must keep exploring novel ways to mitigate and contain sophisticated attackers.

240 citations


Patent
12 Nov 2013
TL;DR: A system, method, and apparatus assess the risk of one or more assets within an operational technology infrastructure, drawing on a database containing data relating to those assets.
Abstract: A system, method and apparatus assesses a risk of one or more assets within an operational technology infrastructure by providing a database containing data relating to the one or more assets, calculating a threat score for the one or more assets using one or more processors communicably coupled to the database, calculating a vulnerability score for the one or more assets using the one or more processors, calculating an impact score for the one or more assets using the one or more processors, and determining the risk of the one or more assets based on the threat score, the vulnerability score and the impact score using the one or more processors.
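The patent does not spell out the scoring formulas; a minimal sketch of the pattern it describes, assuming hypothetical 0-10 scales and a simple multiplicative combination, might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    threat: float         # 0-10: likelihood an attack is attempted
    vulnerability: float  # 0-10: ease of exploiting the asset
    impact: float         # 0-10: consequence of a successful attack

def risk_score(asset: Asset) -> float:
    """Combine the three scores into one risk value (0-10).

    A multiplicative combination, rescaled to 0-10, is a common
    convention; the patent does not specify the exact function.
    """
    return asset.threat * asset.vulnerability * asset.impact / 100.0

plc = Asset("PLC-7", threat=8.0, vulnerability=6.5, impact=9.0)
print(f"{plc.name}: risk = {risk_score(plc):.2f}")  # PLC-7: risk = 4.68
```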

172 citations


Book ChapterDOI
TL;DR: This chapter reviews research on global environmental change and human security, providing a retrospective and tentative prospective view of the field.
Abstract: This article reviews research on global environmental change and human security, providing retrospective and tentative prospective views of this field. It explains the roles that the concept of human security has played in research on environmental change, as well as the knowledge that it has contributed. It then discusses the Global Environmental Change and Human Security (GECHS) project as an example of how this research has encouraged a more politicized understanding of the problem of global environmental change, drawing attention to the roles of power, agency, and knowledge. Finally, the article considers new research frontiers that have emerged from this field, including research on social transformations as a means of promoting, sustaining, and enhancing human security in response to complex global environmental challenges. The potential contributions of human security approaches to the next generation of global change research are discussed.

160 citations


Journal ArticleDOI
TL;DR: This paper presents a general classification of existing research on disaster survivability in optical networks, surveys relevant works according to that classification, and discusses ways to combat disaster-induced failures.

156 citations


Journal ArticleDOI
TL;DR: The theory used for the attack-probability calculations in CySeMoL is a compilation of research results on a number of security domains, covering a range of attacks and countermeasures; in this paper it is validated on a system level.
Abstract: The cyber security modeling language (CySeMoL) is a modeling language for enterprise-level system architectures coupled to a probabilistic inference engine. If the computer systems of an enterprise are modeled with CySeMoL, this inference engine can assess the probability that attacks on the systems will succeed. The theory used for the attack-probability calculations in CySeMoL is a compilation of research results on a number of security domains and covers a range of attacks and countermeasures. The theory has previously been validated on a component level. In this paper, the theory is also validated on a system level. A test indicates that the reasonableness and correctness of CySeMoL assessments compare with the reasonableness and correctness of the assessments of a security professional. CySeMoL's utility has been tested in case studies.
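CySeMoL's inference engine operates over full enterprise architecture models; purely as an illustration of probabilistic attack-graph reasoning, a toy noisy-OR combination over alternative attack paths (all step names and probabilities hypothetical) could look like:

```python
# Toy probabilistic attack-path reasoning (NOT CySeMoL's actual engine):
# a path succeeds only if all of its steps succeed; the attack succeeds
# if at least one path succeeds (noisy-OR). Probabilities are made up.
step_success = {
    "find_public_exploit": 0.6,
    "bypass_firewall": 0.3,
    "spear_phish_admin": 0.4,
}

def path_probability(steps):
    p = 1.0
    for s in steps:
        p *= step_success[s]
    return p

paths = [
    ["find_public_exploit", "bypass_firewall"],  # technical path
    ["spear_phish_admin"],                       # social path
]

p_all_paths_fail = 1.0
for path in paths:
    p_all_paths_fail *= 1.0 - path_probability(path)

print(f"P(attack succeeds) = {1.0 - p_all_paths_fail:.3f}")  # 0.508
```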

154 citations


Journal ArticleDOI
TL;DR: This paper models attacks using vulnerabilities of the information, communication, and electric grid networks via a graph-theory-based approach and shows the possible impact on the smart grid of an integrated cyber-physical attack.
Abstract: This paper addresses attack modeling using vulnerabilities of the information, communication, and electric grid networks. Vulnerability of the electric grid with incomplete information has been analyzed using a graph-theory-based approach. Vulnerability of the information and communication (cyber) network has been modeled using the concepts of discovery, access, feasibility, communication speed, and detection threat. A common attack vector based on vulnerabilities of the cyber and physical systems has been utilized to operate breakers associated with generating resources to model an aurora-like event. Real-time simulations for a modified IEEE 14-bus test case system and graph theory analysis for the IEEE 118-bus system have been presented. Test case results show the possible impact on the smart grid caused by an integrated cyber-physical attack.

152 citations


Journal ArticleDOI
TL;DR: A new centrality index is proposed, taking into consideration the maximum flow from the source (generator) nodes to the sink (load) nodes, for assessing the network; the Max-Flow Min-Cut Theorem is used for evaluating the capacity of links.
Abstract: This paper proposes a maximum-flow-based complex network approach for the analysis of the vulnerability of power systems. A new centrality index is proposed, taking into consideration the maximum flow from the source (generator) nodes to the sink (load) nodes, for assessing the network. The Max-Flow Min-Cut Theorem, also known as Ford-Fulkerson Theorem, is used for evaluating the capacity of links. The proposed methodology is then used to identify vulnerable lines of the IEEE 118 bus system and its effectiveness is demonstrated through simulation studies.
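As a rough illustration of the idea (not the paper's exact centrality index), one can compute the maximum flow from generators to loads with NetworkX and rank lines by the share of flow they carry; the toy 5-bus topology and capacities below are invented for the example:

```python
import networkx as nx

# Toy 5-bus system; capacities (MW) are invented for the example.
lines = [("g1", "b1", 120), ("g2", "b2", 80), ("b1", "b2", 50),
         ("b1", "l1", 90), ("b2", "l2", 70), ("b1", "l2", 30)]
G = nx.DiGraph()
for u, v, cap in lines:
    G.add_edge(u, v, capacity=cap)

# A super source/sink covers every generator-load combination in one
# max-flow computation.
for g in ("g1", "g2"):
    G.add_edge("S", g, capacity=float("inf"))
for sink in ("l1", "l2"):
    G.add_edge(sink, "T", capacity=float("inf"))

flow_value, flow_dict = nx.maximum_flow(G, "S", "T")

# Rank lines by the share of the total max flow they carry (a crude
# stand-in for the paper's centrality index).
for u, v, cap in lines:
    f = flow_dict[u][v]
    print(f"{u}->{v}: {f:5.1f} MW ({f / flow_value:5.1%} of max flow)")
```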

148 citations


Journal ArticleDOI
TL;DR: This paper examines the vulnerability of PLCs to intentional firmware modification in order to better understand the threat posed by firmware modification attacks and the feasibility of such attacks.

133 citations


Proceedings ArticleDOI
13 May 2013
TL;DR: This research shows that when using a prize phishing email, neuroticism is the factor most correlated with responding to the email, in addition to a gender-based difference in the response; it also suggests that susceptibility to phishing is not due to a lack of awareness of phishing risks and that real-time response to phishing is hard to predict in advance by online users.
Abstract: Recent research has begun to focus on the factors that cause people to respond to phishing attacks as well as affect user behavior on social networks. This study examines the correlation between the Big Five personality traits and email phishing response. Another aspect examined is how these factors relate to users' tendency to share information and protect their privacy on Facebook (one of the most popular social networking sites). This research shows that when using a prize phishing email, neuroticism is the factor most correlated with responding to this email, in addition to a gender-based difference in the response. This study also found that people who score high on the openness factor tend both to post more information on Facebook and to have less strict privacy settings, which may make them susceptible to privacy attacks. In addition, this work detected no correlation between participants' estimates of being vulnerable to phishing attacks and actually being phished, which suggests susceptibility to phishing is not due to lack of awareness of the phishing risks and that real-time response to phishing is hard to predict in advance by online users. The goal of this study is to better understand the traits that contribute to online vulnerability, for the purpose of developing customized user interfaces and security awareness education designed to increase users' privacy and security in the future.
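For readers unfamiliar with this kind of analysis, correlating a continuous trait score with a binary phishing response is typically done with a point-biserial correlation; the sketch below uses entirely hypothetical data, not the study's:

```python
from scipy.stats import pointbiserialr

# Hypothetical data: neuroticism score (1-5) per participant, and whether
# they responded to the prize phishing email (1) or not (0).
neuroticism = [2.1, 4.3, 3.8, 1.9, 4.6, 3.2, 4.1, 2.4]
responded   = [0,   1,   1,   0,   1,   0,   1,   0]

# Point-biserial correlation: a binary outcome against a continuous trait
# (numerically equivalent to Pearson's r on these inputs).
r, p = pointbiserialr(responded, neuroticism)
print(f"r = {r:.2f}, p = {p:.3f}")
```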

122 citations


Journal ArticleDOI
TL;DR: Threats to the stability and security of the content distribution system are analyzed in theory, simulations, and practical experiments, and it is suggested that major architectural refinements are required prior to global ICN deployment in the real world.

113 citations


Journal ArticleDOI
TL;DR: This column presents the latest insights on the technical challenges and opportunities associated with the security of autonomous systems from an embedded computing and sensors perspective.
Abstract: Embedded computing and sensor systems are increasingly becoming an integral part of today's infrastructure. From jet engines to vending machines, our society relies on embedded computing and sensor systems to support numerous applications seamlessly and reliably. This is especially true with respect to autonomous systems such as unmanned aircraft, unmanned ground vehicles, robotics, medical operations, and industrial automation. However, society's increasing reliance on embedded computing and sensor systems, as well as the applications they support, introduces a new form of vulnerability into this critical infrastructure that is only now beginning to be recognized as a significant threat with potentially serious consequences. This column presents the latest insights on the technical challenges and opportunities associated with the security of autonomous systems from an embedded computing and sensors perspective.

Journal ArticleDOI
TL;DR: The boundaries of applicability of risk-based principles as a means of formalizing discussion of water security are explored, and the nature of these interconnections is illustrated with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk, and the trade-offs in risk among different actors.
Abstract: The concept of water security implies concern about potentially harmful states of coupled human and natural water systems. Those harmful states may be associated with water scarcity (for humans and/or the environment), floods or harmful water quality. The theories and practices of risk analysis and risk management have been developed and elaborated to deal with the uncertain occurrence of harmful events. Yet despite their widespread application in public policy, theories and practices of risk management have well-known limitations, particularly in the context of severe uncertainties and contested values. Here, we seek to explore the boundaries of applicability of risk-based principles as a means of formalizing discussion of water security. Not only do risk concepts have normative appeal, but they also provide an explicit means of addressing the variability that is intrinsic to hydrological, ecological and socio-economic systems. We illustrate the nature of these interconnections with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk and the trade-offs in risk among different actors.
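As a minimal sketch of the risk framing discussed here (not the paper's simulation study), risk can be computed as expected damage per actor and compared against each actor's tolerability threshold; all scenario probabilities, damages, and thresholds below are hypothetical:

```python
# Expected annual damage per actor vs. each actor's tolerability threshold.
# All probabilities, damages, and thresholds are hypothetical.
scenarios = [
    # (annual probability, damage to farmers, damage to city)
    (0.10, 1.0, 0.2),   # moderate drought
    (0.02, 4.0, 2.5),   # severe drought
    (0.01, 2.0, 6.0),   # major flood
]
tolerable = {"farmers": 0.25, "city": 0.10}

risk = {
    "farmers": sum(p * dmg_f for p, dmg_f, _ in scenarios),
    "city":    sum(p * dmg_c for p, _, dmg_c in scenarios),
}
for actor, r in risk.items():
    verdict = "tolerable" if r <= tolerable[actor] else "NOT tolerable"
    print(f"{actor}: expected annual damage {r:.2f} ({verdict})")
# A plan that shifts risk between actors moves these numbers in opposite
# directions - the trade-off among actors that the authors highlight.
```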

Journal ArticleDOI
TL;DR: This article presents a middleware approach to bridge the gap between system-level vulnerabilities and organization-level security metrics, ultimately contributing to cost-benefit security hardening.

Proceedings Article
14 Aug 2013
TL;DR: This work focuses on systematically explicating implicit assumptions that are necessary for secure use of an SDK by building semantic models that capture both the logic of the SDK and the essential aspects of underlying systems.
Abstract: Most modern applications are empowered by online services, so application developers frequently implement authentication and authorization. Major online providers, such as Facebook and Microsoft, provide SDKs for incorporating authentication services. This paper considers whether those SDKs enable typical developers to build secure apps. Our work focuses on systematically explicating implicit assumptions that are necessary for secure use of an SDK. Understanding these assumptions depends critically on not just the SDK itself, but on the underlying runtime systems. We present a systematic process for identifying critical implicit assumptions by building semantic models that capture both the logic of the SDK and the essential aspects of underlying systems. These semantic models provide the explicit basis for reasoning about the security of an SDK. We use a formal analysis tool, along with the semantic models, to reason about all applications that can be built using the SDK. In particular, we formally check whether the SDK, along with the explicitly captured assumptions, is sufficient to imply the desired security properties. We applied our approach to three widely used authentication/authorization SDKs. Our approach led to the discovery of several implicit assumptions in each SDK, including issues deemed serious enough to receive Facebook bug bounties and change the OAuth 2.0 specification. We verified that many apps constructed with these SDKs (indeed, the majority of apps in our study) are vulnerable to serious exploits because of these implicit assumptions, and we built a prototype testing tool that can detect several of the vulnerability patterns we identified.

Patent
20 Mar 2013
TL;DR: A method for simulation aided security event management is proposed, which comprises generating attack simulation information that comprises multiple simulation data items of at least one data item type out of vulnerability instances data items, attack step data items, and attack simulation scope data items.
Abstract: A method for simulation aided security event management, the method comprises: generating attack simulation information that comprises multiple simulation data items of at least one data item type out of vulnerability instances data items, attack step data items and attack simulation scope data items; wherein the generating of attack simulation information is responsive to a network model, at least one attack starting point and attack action information; identifying security events in response to a correlation between simulation data items and event data; and prioritizing identified security events.
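A heavily simplified sketch of the correlation-and-prioritization step (hosts, CVE identifiers, and the scoring rule are all invented for illustration) might look like:

```python
# Simplified correlation step: boost an event's priority when it matches
# a step of a simulated attack. Hosts, CVE ids, and scoring are invented.
simulated_steps = [
    {"host": "web01", "vuln": "CVE-2013-0156", "step": 1},
    {"host": "db01",  "vuln": "CVE-2012-2122", "step": 2},
]
events = [
    {"host": "web01",  "signature": "CVE-2013-0156", "base": 3},
    {"host": "mail01", "signature": "CVE-2010-4221", "base": 3},
]

def priority(event):
    """Events matching an early simulated attack step get a bigger boost:
    they suggest an attack path the simulation found feasible."""
    for s in simulated_steps:
        if (s["host"], s["vuln"]) == (event["host"], event["signature"]):
            return event["base"] + (10 - s["step"])
    return event["base"]

for e in sorted(events, key=priority, reverse=True):
    print(e["host"], e["signature"], "->", priority(e))
```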

Proceedings ArticleDOI
09 Apr 2013
TL;DR: The proposed taxonomy is capable of representing both conventional cyber-attacks and cross-domain attacks on CPS, and can be used to establish a knowledge base about attacks in the literature and to foster the quantitative and qualitative analysis of these attacks, both of which are necessary to improve CPS security.
Abstract: The pervasiveness of Cyber-Physical Systems (CPS) in various aspects of modern society grows rapidly. This makes CPS increasingly attractive targets for various kinds of attacks. We consider cyber-security an integral part of CPS security. Additionally, it is necessary to investigate the CPS-specific aspects that are out of scope of cyber-security. Most importantly, attacks capable of crossing the cyber-physical domain boundary should be analyzed. The vulnerability of CPS to such cross-domain attacks has been practically proven by numerous examples, e.g., by the currently most famous Stuxnet attack. In this paper, we propose a taxonomy for the description of attacks on CPS. The proposed taxonomy is capable of representing both conventional cyber-attacks and cross-domain attacks on CPS. Furthermore, based on the proposed taxonomy, we define an attack categorization. Several possible application areas of the proposed taxonomy are extensively discussed. Among others, it can be used to establish a knowledge base about attacks on CPS known in the literature. Furthermore, the proposed description structure will foster the quantitative and qualitative analysis of these attacks, both of which are necessary to improve CPS security.

Proceedings ArticleDOI
18 Mar 2013
TL;DR: An original approach to automate Model-Based Vulnerability Testing for Web applications based on a mixed modeling of the application under test, which captures some behavioral aspects of the Web application, and includes vulnerability test purposes to drive the test generation algorithm.
Abstract: This paper deals with an original approach to automate Model-Based Vulnerability Testing (MBVT) for Web applications, which aims at improving the accuracy and precision of vulnerability testing. Today, Model-Based Testing techniques are mostly used to address functional features. The adaptation of such techniques for vulnerability testing defines novel issues in this research domain. In this paper, we describe the principles of our approach, which is based on a mixed modeling of the application under test: the specification indeed captures some behavioral aspects of the Web application, and includes vulnerability test purposes to drive the test generation algorithm. This approach is illustrated with the widely-used DVWA example.

Book ChapterDOI
01 Jan 2013
TL;DR: Vulnerability assessment enables open and publicly available security content and standardizes the transfer of this content across the entire spectrum of information security tools and services.
Abstract: Vulnerability assessment is an information security community standard to promote open and publicly available security content, and to standardize the transfer of this information across security tools and services. It is also an XML specification for exchanging technical details on how to check systems for security-related software flaws, configuration issues, and patches. In addition, vulnerability assessment standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of the assessment. The capabilities and requirements described in this chapter have been derived from the vulnerability assessment process.

Journal ArticleDOI
TL;DR: This approach is named integrated fuzzy flood vulnerability assessment because it combines the watershed-based vulnerability framework with stream-based risk analysis; the fuzzy TOPSIS technique is proposed to address the uncertainty in the weights of all criteria and the crisp input data of all spatial units.
Abstract: This study aims to develop a new procedure that combines multi-criteria spatial vulnerability analysis with the traditional linear probabilistic risk approach. This approach is named integrated fuzzy flood vulnerability assessment because it combines the watershed-based vulnerability framework with stream-based risk analysis. The Delphi technique and the pressure-state-impact-response framework are introduced to objectively select evaluation criteria, and the fuzzy TOPSIS technique is proposed to address the uncertainty in the weights of all criteria and the crisp input data of all spatial units. ArcGIS is used to represent the spatial results for all criteria. This framework is applied to the south Han River basin in South Korea. As a result, the flood vulnerability ranking was derived and the vulnerability characteristics of all spatial units were compared. This framework can be used to conduct a prefeasibility study for flood mitigation projects when various stakeholders should be included.
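For orientation, the crisp TOPSIS procedure underlying the paper's fuzzy variant can be sketched as follows; the decision matrix, weights, and treatment of all criteria as cost criteria are assumptions for the example (the fuzzy version would replace these crisp scores and weights with triangular fuzzy numbers):

```python
import numpy as np

# Rows: spatial units; columns: flood criteria, all treated as cost
# criteria (higher score = more vulnerable). Values/weights are invented.
X = np.array([[0.7, 120.0, 0.30],
              [0.4,  80.0, 0.55],
              [0.9, 200.0, 0.10]])
w = np.array([0.5, 0.3, 0.2])              # e.g., elicited via Delphi

V = w * X / np.linalg.norm(X, axis=0)      # weighted, vector-normalized
ideal, anti = V.min(axis=0), V.max(axis=0) # best/worst for cost criteria

d_ideal = np.linalg.norm(V - ideal, axis=1)
d_anti = np.linalg.norm(V - anti, axis=1)
closeness = d_anti / (d_ideal + d_anti)    # 1.0 = least vulnerable unit

print("most vulnerable first:", np.argsort(closeness))
```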

Proceedings ArticleDOI
01 Oct 2013
TL;DR: A novel vulnerability analysis determines a circuit's susceptibility to Trojan insertion based on statement hardness analysis as well as observability of circuit signals; the Trojan detectability metric is introduced to quantitatively compare the detectability of behavioral Trojans inserted into different circuits.
Abstract: Considerable attention has been paid to hardware Trojan detection and prevention. However, there is no existing systematic approach to investigate circuit vulnerability to hardware Trojan insertion during development. We present such an approach to investigate circuit vulnerability to Trojan insertion at the behavioral level. This novel vulnerability analysis determines a circuit's susceptibility to Trojan insertion based on statement hardness analysis as well as observability of circuit signals. Further, the Trojan detectability metric is introduced to quantitatively compare the detectability of behavioral Trojans inserted into different circuits. This creates a fair comparison for analyzing the strengths and weaknesses of Trojan detection techniques as well as helping verify trustworthiness of a third party Intellectual Property (IP).

Journal ArticleDOI
TL;DR: A novel memory acquisition technique is presented, based on direct page table manipulation and PCI hardware introspection rather than operating system facilities, making it more difficult to subvert; the technique is evaluated against more advanced anti-forensic attacks.

31 May 2013
TL;DR: In an attempt to ascertain cloud computing reliability, 11,491 news articles on cloud computing-related outages from 39 news sources between Jan 2008 and Feb 2012 (effectively covering the first five years of cloud computing) were reviewed.

Journal ArticleDOI
TL;DR: It is shown that the main emerging SSO protocols, namely SAML SSO and OpenID, suffer from an authentication flaw that allows a malicious service provider to hijack a client authentication attempt or force the latter to access a resource without its consent or intention.

Proceedings ArticleDOI
18 May 2013
TL;DR: This paper introduces a new approach to support architecture security analysis using security scenarios and metrics based on formalizing attack scenarios and security metrics signature specification using the Object Constraint Language (OCL).
Abstract: Reviewing software system architecture to pinpoint potential security flaws before proceeding with system development is a critical milestone in secure software development lifecycles. This includes identifying possible attacks or threat scenarios that target the system and may result in breaching of system security. Additionally, we may also assess the strength of the system and its security architecture using well-known security metrics such as system attack surface, compartmentalization, least privilege, etc. However, existing efforts are limited to specific, predefined security properties or scenarios that are checked either manually or using limited toolsets. We introduce a new approach to support architecture security analysis using security scenarios and metrics. Our approach is based on formalizing attack scenario and security metric signature specification using the Object Constraint Language (OCL). Using formal signatures, we analyse a target system to locate signature matches (for attack scenarios) or to take measurements (for security metrics). New scenarios and metrics can be incorporated and calculated provided that a formal signature can be specified. Our approach supports defining security metrics and scenarios at the architecture, design, and code levels. We have developed a prototype software system architecture security analysis tool. To the best of our knowledge, this is the first extensible architecture security risk analysis tool that supports both metric-based and scenario-based architecture security analysis. We have validated our approach by using it to capture and evaluate signatures from the NIST security principles and attack scenarios defined in the CAPEC database.

Journal ArticleDOI
23 May 2013, Chaos
TL;DR: Three frequently used power grid models are selected, including a purely topological model (PTM), a betweenness-based model (BBM), and a direct current power flow model (DCPFM), to describe three different dynamical processes on a power grid under both single and multiple component failures.
Abstract: This paper selects three frequently used power grid models, including a purely topological model (PTM), a betweenness-based model (BBM), and a direct current power flow model (DCPFM), to describe three different dynamical processes on a power grid under both single and multiple component failures. Each of the dynamical processes is then characterized by both a topology-based and a flow-based vulnerability metric to compare the three models with each other from the vulnerability perspective. Taking as an example the IEEE 300 power grid, with line capacity set proportional to a tolerance parameter tp, the results show a non-linear phenomenon: under single node failures, there exists a critical value of tp = 1.36, above which the three models all produce identical topology-based vulnerability results and more than 85% of nodes have identical flow-based vulnerability from any two models; under multiple node failures where each node fails with an identical failure probability fp, there exists a critical fp = 0.56, above which the three models produce almost identical topology-based vulnerability results at any tp ≥ 1, but identical flow-based vulnerability results occur only at fp = 1. In addition, the topology-based vulnerability results can provide a good approximation for the flow-based vulnerability under large fp, and whether PTM or BBM better approaches the DCPFM for vulnerability analysis mainly depends on the value of fp. Similar results are also found for other failure types, other system operation parameters, and other power grids.
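A toy version of a betweenness-based cascade with line capacities proportional to a tolerance parameter tp can be sketched with NetworkX (in the spirit of Motter-Lai overload models; the stand-in topology and the reuse of the paper's tp = 1.36 are illustrative only):

```python
import networkx as nx

def cascade_survivors(G, failed_node, tp=1.36):
    """Fail one node, then trip any line whose betweenness 'flow' exceeds
    its capacity C_e = tp * initial_flow_e, until the grid stabilizes."""
    load0 = nx.edge_betweenness_centrality(G)
    capacity = {e: tp * l for e, l in load0.items()}
    H = G.copy()
    H.remove_node(failed_node)
    changed = True
    while changed:
        changed = False
        load = nx.edge_betweenness_centrality(H)
        for e, l in load.items():
            cap = capacity.get(e, capacity.get((e[1], e[0])))
            if cap is not None and l > cap:  # overloaded line trips
                H.remove_edge(*e)
                changed = True
    return H

G = nx.watts_strogatz_graph(30, 4, 0.1, seed=1)  # stand-in topology
H = cascade_survivors(G, failed_node=0)
print(f"{H.number_of_edges()} of {G.number_of_edges()} lines survive")
```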

ReportDOI
04 Feb 2013
TL;DR: The Juliet test suite, which has precisely characterized weaknesses, is introduced; the procedure for characterizing vulnerability locations in the CVE-selected test cases is improved; and several ways in which the released data and analysis are useful are identified.
Abstract: Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose. The NIST Software Assurance Metrics And Tool Evaluation (SAMATE) project conducted the fourth Static Analysis Tool Exposition (SATE IV) to advance research in static analysis tools that find security defects in source code. The main goals of SATE were to enable empirical research based on large test sets, encourage improvements to tools, and promote broader and more rapid adoption of tools by objectively demonstrating their use on production software. Briefly, eight participating tool makers ran their tools on a set of programs. The programs were four pairs of large code bases selected in regard to entries in the Common Vulnerabilities and Exposures (CVE) dataset and approximately 60 000 synthetic test cases, the Juliet 1.0 test suite. NIST researchers analyzed approximately 700 warnings by hand, matched tool warnings to the relevant CVE entries, and analyzed over 180 000 warnings for Juliet test cases by automated means. The results and experiences were reported at the SATE IV Workshop in McLean, VA, in March 2012. The tool reports and analysis were made publicly available in January 2013. SATE is an ongoing research effort with much work still to do. This paper reports our analysis to date, which includes much data about weaknesses that occur in software and about tool capabilities. Our analysis is not intended to be used for tool rating or tool selection. This paper also describes the SATE procedure and provides our observations based on the data collected. Based on lessons learned from our experience with previous SATEs, we made the following major changes to the SATE procedure. First, we introduced the Juliet test suite, which has precisely characterized weaknesses. Second, we improved the procedure for characterizing vulnerability locations in the CVE-selected test cases. Finally, we provided teams with a virtual machine image containing the test cases properly configured to compile the cases and ready for analysis by tools. This paper identifies several ways in which the released data and analysis are useful. First, the output from running many tools on production software is available for empirical research. Second, our analysis …

Proceedings ArticleDOI
14 Apr 2013
TL;DR: A Correlation ATtack (CAT) is proposed to demonstrate the potential vulnerability of link signature based security mechanisms in circumstances with poor scattering and/or a strong line-of-sight (LOS) component.
Abstract: A fundamental assumption of link signature based security mechanisms is that the wireless signals received at two locations separated by more than half a wavelength are essentially uncorrelated. However, it has been observed that in certain circumstances (e.g., with poor scattering and/or a strong line-of-sight (LOS) component), this assumption is invalid. In this paper, a Correlation ATtack (CAT) is proposed to demonstrate the potential vulnerability of link signature based security mechanisms in such circumstances. Based on statistical inference, CAT explicitly exploits the spatial correlations to reconstruct the legitimate link signature from the observations of multiple adversary receivers deployed in the vicinity. Our findings are verified through theoretical analysis, well-known channel correlation models, and experiments on USRP platforms and GNURadio.
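As a sketch of the kind of statistical inference CAT performs (with an invented joint covariance rather than a measured channel model), the legitimate link signature can be reconstructed from correlated adversary observations via the LMMSE estimator h_hat = C_hy C_yy^{-1} y:

```python
import numpy as np

rng = np.random.default_rng(7)

# Joint covariance of (h, y1..y3): the legitimate signature h correlates
# strongly with nearby adversary observations under poor scattering/LOS.
rho = np.array([0.9, 0.8, 0.7])              # corr(h, y_i), illustrative
C_yy = np.array([[1.0, 0.6, 0.5],
                 [0.6, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])           # covariance among adversaries
C = np.block([[np.ones((1, 1)), rho[None, :]],
              [rho[:, None],    C_yy]])

samples = rng.multivariate_normal(np.zeros(4), C, size=2000).T
h, y = samples[0], samples[1:]               # true signature, observations

h_hat = rho @ np.linalg.inv(C_yy) @ y        # LMMSE reconstruction
print("corr(h, h_hat) =", round(float(np.corrcoef(h, h_hat)[0, 1]), 2))
```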

Journal ArticleDOI
TL;DR: The proposed model for managing information-security risks is based on a quantitative analysis of security risks that enables organizations to introduce optimum security solutions; it is designed as a standard procedure leading the organization from the initial selection of input data to the final recommendations for the selection of appropriate solutions.
Abstract: Information-security risk management is becoming an increasingly important process in modern businesses. The proposed model for managing information-security risks is based on a quantitative analysis of security risks that enables organizations to introduce optimum security solutions. The model is designed as a standard procedure to lead the organization from the initial selection of input data to the final recommendations for the selection of appropriate solutions, which reduce a certain security risk. In analyzing the security risks, the model quantitatively evaluates the information assets, their vulnerability, and the threats to information assets. The values of the risk parameters are the basis for selecting the appropriate risk treatment and the evaluation of the various security measures that reduce security risks. Economic indicators are determined for each security measure in order to enable a comparison of the various security measures. This includes the possibility of investing...
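The paper's exact economic indicators are not reproduced here, but standard quantitative risk formulas such as annualized loss expectancy (ALE) and return on security investment (ROSI) illustrate the comparison step; all monetary figures are hypothetical:

```python
# ALE = (asset value x exposure factor) x annual rate of occurrence;
# ROSI = (risk reduction - measure cost) / measure cost. Figures invented.
def ale(asset_value, exposure_factor, annual_rate):
    return asset_value * exposure_factor * annual_rate

ale_before = ale(500_000, 0.4, 0.5)                    # 100,000 per year
measures = {
    "IDS":        {"cost": 30_000, "after": ale(500_000, 0.4, 0.2)},
    "encryption": {"cost": 15_000, "after": ale(500_000, 0.1, 0.5)},
}
for name, m in measures.items():
    rosi = (ale_before - m["after"] - m["cost"]) / m["cost"]
    print(f"{name}: ROSI = {rosi:.0%}")  # IDS: 100%, encryption: 400%
```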

Journal ArticleDOI
TL;DR: In this article, the structural vulnerability of the interconnected power grid is analyzed from a topological point of view using an extended topological method, which incorporates electrical engineering characteristics into complex network methodology.
Abstract: Power systems, as one of the key infrastructures, play a crucial role in any country's economy and social life. A large-scale blackout can affect all sectors in a society, such as industrial, commercial, residential, and essential public services. However, the frequency of large-scale blackouts across the world is not being reduced, although advanced technology and huge investment have been applied to power systems. Given a single blackout, it is possible to analyze the causes with traditional engineering methods. What we want to do is not to explain the causes of blackouts but to find the most critical elements of the power system in order to improve the resilience of the system itself. As blackouts can happen under different load conditions, we want a method that does not depend on the load/generation level. We want a method independent of these factors: this is the structural perspective. As the interconnection between the European and Russian power grids will create the largest interconnected power grid in the world in terms of scale, transmission distance, and involved countries, analyzing the vulnerability of a large-scale power grid will be useful to maintain its reliable and secure operation. To analyze the vulnerability of the interconnected power grid, in this article we first created the interconnected transmission network between continental Europe and the Commonwealth of Independent States (CIS) and Baltic countries; then, the structural vulnerability of the interconnected power grid was analyzed from a topological point of view using our proposed extended topological method, which incorporates some electrical engineering characteristics into complex network methodology. We found that the power grids of continental Europe, the Baltic states, and the CIS countries can benefit from the interconnection, because the interconnected power grid can not only improve the overall network performance of the power grids in the Baltic states and the CIS countries but also increase their structural robustness.
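A minimal sketch of the purely topological ingredient of such an analysis is the drop in global network efficiency when a line is removed; the paper's extended method additionally weights paths with electrical characteristics (e.g., impedance and power transfer limits), which is omitted here:

```python
import networkx as nx

def efficiency_drop(G, edge):
    """Relative loss in global efficiency E = mean(1/d_ij) when one
    line is removed (a purely topological vulnerability index)."""
    E0 = nx.global_efficiency(G)
    H = G.copy()
    H.remove_edge(*edge)
    return (E0 - nx.global_efficiency(H)) / E0

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])  # toy 4-bus grid
for e in list(G.edges()):
    print(f"line {e}: {efficiency_drop(G, e):.1%} efficiency loss")
```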

Journal ArticleDOI
TL;DR: Vulnerability scrying is proposed: a new paradigm for predicting vulnerability discovery based on code properties, using a stochastic model whose parameters are code properties extracted by static analysis.
Abstract: Predicting software vulnerability discovery trends can help improve secure deployment of software applications and facilitate backup provisioning, disaster recovery, diversity planning, and maintenance scheduling. Vulnerability discovery models (VDMs) have been studied in the literature as a means to capture the underlying stochastic process. Based on the VDMs, a few vulnerability prediction schemes have been proposed. Unfortunately, all these schemes suffer from the same weaknesses: they require a large amount of historical vulnerability data from a database (hence they are not applicable to a newly released software application), their precision depends on the amount of training data, and they have a significant amount of error in their estimates. In this work, we propose vulnerability scrying, a new paradigm for vulnerability discovery prediction based on code properties. Using compiler-based static analysis of a codebase, we extract code properties such as code complexity (cyclomatic complexity) and, more importantly, code quality (compliance with secure coding rules) from the source code of a software application. We then propose a stochastic model which uses code properties as its parameters to predict vulnerability discovery. We have studied the impact of code properties on vulnerability discovery trends by performing static analysis on the source code of four real-world software applications. We have used our scheme to predict vulnerability discovery in three other software applications. The results show that even though we use no historical data in our prediction, vulnerability scrying can predict vulnerability discovery with better precision and less divergence over time.
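For contrast with the paper's approach, one widely cited VDM is the Alhazmi-Malaiya logistic model, in which cumulative discovered vulnerabilities follow Omega(t) = B / (1 + B*C*exp(-A*B*t)); the parameter values below are purely illustrative, whereas vulnerability scrying would derive its model parameters from static-analysis measurements instead of discovery history:

```python
import numpy as np

# Alhazmi-Malaiya logistic VDM: Omega(t) = B / (1 + B*C*exp(-A*B*t)),
# where B is the total number of vulnerabilities and A, C are fitted
# constants. All parameter values here are purely illustrative.
def omega(t, A=0.001, B=120.0, C=0.05):
    return B / (1 + B * C * np.exp(-A * B * t))

for t in (0, 12, 24, 36):  # months since release
    print(f"t = {t:2d} months: ~{omega(t):.0f} vulnerabilities discovered")
```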