
Showing papers in "IEEE Transactions on Dependable and Secure Computing in 2014"


Journal ArticleDOI
TL;DR: This work proposes a user-collaborative privacy-preserving approach for LBSs, and develops a novel epidemic model to capture the possibly time-dependent dynamics of information propagation among users.
Abstract: Location-aware smartphones support various location-based services (LBSs): users query the LBS server and learn on the fly about their surroundings. However, such queries give away private information, enabling the LBS to track users. We address this problem by proposing a user-collaborative privacy-preserving approach for LBSs. Our solution does not require changing the LBS server architecture and does not assume third party servers; yet, it significantly improves users’ location privacy. The gain stems from the collaboration of mobile devices: they keep their context information in a buffer and pass it to others seeking such information. Thus, a user remains hidden from the server, unless all the collaborative peers in the vicinity lack the sought information. We evaluate our scheme against the Bayesian localization attacks that allow for strong adversaries who can incorporate prior knowledge in their attacks. We develop a novel epidemic model to capture the possibly time-dependent dynamics of information propagation among users. Used in the Bayesian inference framework, this model helps analyze the effects of various parameters, such as users’ querying rates and the lifetime of context information, on users’ location privacy. The results show that our scheme hides a high fraction of location-based queries, thus significantly enhancing users’ location privacy. Our simulations with real mobility traces corroborate our model-based findings. Finally, our implementation on mobile platforms indicates that it is lightweight and the cost of collaboration is negligible.
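As an illustration of the epidemic-modeling idea, the sketch below iterates a simple mean-field difference equation for the fraction of users holding fresh context information; the dynamics and all parameter values are hypothetical and far simpler than the paper's actual model:

```python
# Hypothetical mean-field sketch: i(t) is the fraction of users currently
# holding fresh context information. Users obtain information by querying
# the server themselves or by meeting an informed peer; information
# expires after `lifetime` on average. Not the paper's actual formulation.

def hidden_fraction(query_rate, meet_rate, lifetime, steps=1000, dt=0.1):
    """di/dt = meet_rate*i*(1-i) + query_rate*(1-i) - i/lifetime"""
    i = 0.0
    for _ in range(steps):
        di = meet_rate * i * (1 - i) + query_rate * (1 - i) - i / lifetime
        i = min(1.0, max(0.0, i + dt * di))
    return i  # equilibrium value approximates the hidden-query fraction

print(round(hidden_fraction(query_rate=0.05, meet_rate=0.5, lifetime=10.0), 2))  # 0.82
```

With these illustrative parameters the system settles at the stable root of the quadratic equilibrium condition, so most queries are answered by peers rather than the server.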

170 citations


Journal ArticleDOI
TL;DR: A novel security metric, k-zero day safety, is proposed that counts how many unknown vulnerabilities would be required for compromising network assets; a larger count implies more security because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower.
Abstract: By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidences to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered as something unmeasurable due to the less predictable nature of software flaws. This causes a major difficulty to security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required for compromising network assets; a larger count implies more security because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing the metric, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.
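The counting idea behind the metric can be illustrated on a toy attack graph; the graph, vulnerability labels, and exhaustive path search below are hypothetical stand-ins for the paper's formal definition and heuristic algorithms:

```python
# Toy illustration of k-zero-day safety: the metric counts the minimum
# number of DISTINCT zero-day vulnerabilities needed on any attack path
# to the asset. Graph and labels are made up for this example.

def k_zero_day(graph, labels, source, asset):
    """graph: node -> list of successors; labels: (u, v) -> vuln name."""
    best = [float("inf")]

    def dfs(node, visited, vulns):
        if node == asset:
            best[0] = min(best[0], len(vulns))
            return
        for nxt in graph.get(node, []):
            if nxt in visited:  # keep paths simple
                continue
            dfs(nxt, visited | {nxt}, vulns | {labels[(node, nxt)]})

    dfs(source, {source}, set())
    return best[0]

# Two routes to the asset: one needs two distinct zero-days, the other
# reuses the same vulnerability twice, so k = 1.
g = {"attacker": ["h1", "h2"], "h1": ["asset"], "h2": ["asset"]}
lab = {("attacker", "h1"): "v1", ("h1", "asset"): "v2",
       ("attacker", "h2"): "v3", ("h2", "asset"): "v3"}
print(k_zero_day(g, lab, "attacker", "asset"))  # 1
```

A larger k means an attacker must stockpile more independent unknown vulnerabilities, which is exactly why the paper argues a larger count implies more security.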

149 citations


Journal ArticleDOI
TL;DR: This paper presents a secure generic multi-factor authentication protocol to speed up the whole authentication process, investigates several issues in stand-alone authentication, and shows how to add it to multi-factor authentication protocols in an efficient and generic way.
Abstract: In large-scale systems, user authentication usually needs assistance from a remote central authentication server via networks. The authentication service, however, could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it to multi-factor authentication protocols in an efficient and generic way.

128 citations


Journal ArticleDOI
TL;DR: This paper proposes the first known client-server data classification protocol using support vector machines that performs PP classification for both two-class and multi-class problems and exploits properties of Paillier homomorphic encryption and secure two-party computation.
Abstract: Emerging cloud computing infrastructure replaces traditional outsourcing techniques and provides flexible services to clients at different locations via the Internet. This leads to the requirement for data classification to be performed by potentially untrusted servers in the cloud. Within this context, the classifier built by the server can be utilized by clients in order to classify their own data samples over the cloud. In this paper, we study a privacy-preserving (PP) data classification technique where the server is unable to learn any knowledge about clients' input data samples while the server-side classifier is also kept secret from the clients during the classification process. More specifically, to the best of our knowledge, we propose the first known client-server data classification protocol using support vector machines. The proposed protocol performs PP classification for both two-class and multi-class problems. The protocol exploits properties of Paillier homomorphic encryption and secure two-party computation. At the core of our protocol lies an efficient, novel protocol for securely obtaining the sign of Paillier-encrypted numbers.
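The additive homomorphism the protocol relies on can be demonstrated with a toy Paillier instance (tiny primes for illustration only; a real deployment would use keys of 2048 bits or more):

```python
import random
from math import gcd

# Toy Paillier cryptosystem showing the additive homomorphism:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b. Illustration only.

p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1                                        # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)              # modular inverse (Python 3.8+)

def enc(m):
    r = random.choice([x for x in range(2, n) if gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 41, 29
print(dec(enc(a) * enc(b) % n2))  # 70: product of ciphertexts, sum of plaintexts
```

It is this property that lets one party operate on encrypted values without ever seeing them, which the sign-extraction subprotocol then builds on.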

120 citations


Journal ArticleDOI
TL;DR: An efficient protocol to obtain the Sum aggregate is proposed, which employs an additive homomorphic encryption and a novel key management technique to support large plaintext space and also extends the sum aggregation protocol to get the Min aggregate of time-series data.
Abstract: The proliferation and ever-increasing capabilities of mobile devices such as smart phones give rise to a variety of mobile sensing applications. This paper studies how an untrusted aggregator in mobile sensing can periodically obtain desired statistics over the data contributed by multiple mobile users, without compromising the privacy of each user. Although there are some existing works in this area, they either require bidirectional communications between the aggregator and mobile users in every aggregation period, or have high computation overhead and cannot support large plaintext spaces. Also, they do not consider the Min aggregate, which is quite useful in mobile sensing. To address these problems, we propose an efficient protocol to obtain the Sum aggregate, which employs an additive homomorphic encryption and a novel key management technique to support large plaintext space. We also extend the sum aggregation protocol to obtain the Min aggregate of time-series data. To deal with dynamic joins and leaves of mobile users, we propose a scheme that utilizes the redundancy in security to reduce the communication cost for each join and leave. Evaluations show that our protocols are orders of magnitude faster than existing solutions, and have much lower communication overhead.
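The key-cancellation idea behind such sum-aggregation schemes can be sketched generically (this is a minimal illustration, not the paper's exact construction or key management technique):

```python
import random

# Minimal sketch: each user blinds its private value with a secret key;
# the keys are set up to sum to zero mod M, so the untrusted aggregator
# can recover only the Sum aggregate, never an individual value.

M = 2 ** 32                      # plaintext/key space (illustrative)
values = [12, 7, 30, 5]          # users' private readings

keys = [random.randrange(M) for _ in values[:-1]]
keys.append((-sum(keys)) % M)    # keys sum to 0 mod M

ciphertexts = [(v + k) % M for v, k in zip(values, keys)]
aggregate = sum(ciphertexts) % M  # aggregator sees only blinded values

print(aggregate)  # 54 == sum(values): the blinding keys cancel out
```

Each ciphertext on its own is uniformly random, so the aggregator learns nothing about any single user's reading.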

116 citations


Journal ArticleDOI
TL;DR: This work exploits the fact that RDTs can naturally fit into a parallel and fully distributed architecture, and develops protocols to implement privacy-preserving RDTs that enable general and efficient distributed privacy-preserving knowledge discovery.
Abstract: Distributed data is ubiquitous in modern information-driven applications. With multiple sources of data, the natural challenge is to determine how to collaborate effectively across proprietary organizational boundaries while maximizing the utility of collected information. Since using only local data gives suboptimal utility, techniques for privacy-preserving collaborative knowledge discovery must be developed. Existing cryptography-based work for privacy-preserving data mining is still too slow to be effective for the large-scale data sets of today’s big data challenge. Previous work on random decision trees (RDTs) shows that it is possible to generate equivalent and accurate models at much smaller cost. We exploit the fact that RDTs can naturally fit into a parallel and fully distributed architecture, and develop protocols to implement privacy-preserving RDTs that enable general and efficient distributed privacy-preserving knowledge discovery.
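The property that makes RDTs attractive here, that the tree structure is chosen at random without looking at the data, can be sketched in a toy single-party version (the distributed, privacy-preserving protocols are the paper's contribution and are not shown):

```python
import random

# Random decision tree sketch: the tree STRUCTURE is built by picking
# split attributes at random, independent of the data; only the leaf
# class counts depend on the (possibly private/distributed) records.

def build_tree(attrs, depth):
    if depth == 0 or not attrs:
        return {"counts": {}}
    a = random.choice(attrs)
    rest = [x for x in attrs if x != a]
    return {"attr": a,
            "children": {0: build_tree(rest, depth - 1),
                         1: build_tree(rest, depth - 1)}}

def leaf(tree, record):
    while "attr" in tree:
        tree = tree["children"][record[tree["attr"]]]
    return tree

def train(tree, records, labels):
    for r, y in zip(records, labels):
        c = leaf(tree, r)["counts"]
        c[y] = c.get(y, 0) + 1

def predict(trees, record):
    votes = {}
    for t in trees:
        for y, n in leaf(t, record)["counts"].items():
            votes[y] = votes.get(y, 0) + n
    return max(votes, key=votes.get)

random.seed(1)
# toy data: the label simply equals binary feature 0
records = [{0: b, 1: random.randint(0, 1)} for b in [0, 1] * 20]
labels = [r[0] for r in records]
trees = [build_tree([0, 1], depth=2) for _ in range(5)]
for t in trees:
    train(t, records, labels)
print(predict(trees, {0: 1, 1: 0}))  # 1 (label equals feature 0 in the toy data)
```

Because structure generation needs no data at all, each party can populate leaf statistics locally, which is what makes the parallel and privacy-preserving variants natural.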

115 citations


Journal ArticleDOI
TL;DR: This research proposes a similarity search of malware to detect these variants using novel distance metrics, including one based on the distance between feature vectors of string-based signatures, and implements the distance metrics in a complete malware variant detection system.
Abstract: Static detection of malware variants plays an important role in system security and control flow has been shown as an effective characteristic that represents polymorphic malware. In our research, we propose a similarity search of malware to detect these variants using novel distance metrics. We describe a malware signature by the set of control flowgraphs the malware contains. We use a distance metric based on the distance between feature vectors of string-based signatures. The feature vector is a decomposition of the set of graphs into either fixed size k-subgraphs, or q-gram strings of the high-level source after decompilation. We use this distance metric to perform pre-filtering. We also propose a more effective but less computationally efficient distance metric based on the minimum matching distance. The minimum matching distance uses the string edit distances between programs’ decompiled flowgraphs, and the linear sum assignment problem to construct a minimum sum weight matching between two sets of graphs. We implement the distance metrics in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of a limited false positive rate and our system detects more malware variants when compared to the detection rates of other algorithms.
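The minimum matching distance can be sketched as follows, using brute-force matching over toy flowgraph strings (the paper solves the linear sum assignment problem to do this efficiently; the signatures below are made up):

```python
from itertools import permutations

# Sketch of the minimum matching distance between two malware signatures,
# each a set of decompiled-flowgraph strings: compute pairwise string edit
# distances, then find the minimum-weight matching between the two sets.

def edit_distance(a, b):
    """Levenshtein distance with a single rolling row."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (ca != cb))
    return d[len(b)]

def min_matching_distance(sig_a, sig_b):
    small, big = sorted((sig_a, sig_b), key=len)
    cost = [[edit_distance(x, y) for y in big] for x in small]
    # brute force over injective matchings; the paper uses an LSAP solver
    return min(sum(cost[i][p[i]] for i in range(len(small)))
               for p in permutations(range(len(big)), len(small)))

sig1 = ["abcd", "xyz"]
sig2 = ["abce", "xy"]
print(min_matching_distance(sig1, sig2))  # 2: "abcd"->"abce" (1) + "xyz"->"xy" (1)
```

Small total matching cost indicates that most of one sample's flowgraphs have a close counterpart in the other, the signature of a polymorphic variant.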

103 citations


Journal ArticleDOI
TL;DR: This paper presents MOSES, a policy-based framework for enforcing software isolation of applications and data on the Android platform, and runs a thorough set of experiments to confirm the feasibility of the proposal.
Abstract: Smartphones are very effective tools for increasing the productivity of business users. With their increasing computational power and storage capacity, smartphones allow end users to perform several tasks and be always updated while on the move. Companies are willing to support employee-owned smartphones because of the increase in productivity of their employees. However, security concerns about data sharing, leakage and loss have hindered the adoption of smartphones for corporate use. In this paper, we present MOSES, a policy-based framework for enforcing software isolation of applications and data on the Android platform. In MOSES, it is possible to define distinct Security Profiles within a single smartphone. Each security profile is associated with a set of policies that control the access to applications and data. Profiles are not predefined or hardcoded; they can be specified and applied at any time. One of the main characteristics of MOSES is the dynamic switching from one security profile to another. We run a thorough set of experiments using our full implementation of MOSES. The results of the experiments confirm the feasibility of our proposal.

74 citations


Journal ArticleDOI
TL;DR: A two-party algorithm for differentially private data release for vertically partitioned data between two parties in the semihonest adversary model is presented; experimental results on real-life data suggest that the proposed algorithm can effectively preserve information for a data mining task.
Abstract: Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ϵ-differential privacy provides one of the strongest privacy guarantees. In this paper, we address the problem of private data publishing, where different attributes for the same set of individuals are held by two parties. In particular, we present an algorithm for differentially private data release for vertically partitioned data between two parties in the semihonest adversary model. To achieve this, we first present a two-party protocol for the exponential mechanism. This protocol can be used as a subprotocol by any other algorithm that requires the exponential mechanism in a distributed setting. Furthermore, we propose a two-party algorithm that releases differentially private data in a secure way according to the definition of secure multiparty computation. Experimental results on real-life data suggest that the proposed algorithm can effectively preserve information for a data mining task.
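For reference, the centralized exponential mechanism that the two-party protocol emulates can be sketched as below; the candidate names and scores are illustrative, not from the paper:

```python
import math
import random

# Exponential mechanism sketch: select a candidate with probability
# proportional to exp(eps * score / (2 * sensitivity)), trading utility
# (high-score candidates preferred) against privacy (randomized choice).

def exponential_mechanism(candidates, scores, eps, sensitivity=1.0):
    weights = [math.exp(eps * s / (2 * sensitivity)) for s in scores]
    r = random.uniform(0, sum(weights))
    for cand, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return cand
    return candidates[-1]

random.seed(0)
cands = ["split_on_age", "split_on_zip", "split_on_income"]
scores = [10.0, 2.0, 1.0]
picks = [exponential_mechanism(cands, scores, eps=1.0) for _ in range(1000)]
print(picks.count("split_on_age") > picks.count("split_on_zip"))
```

The paper's contribution is realizing this sampling step as a secure two-party protocol, so neither party alone controls or observes the randomized selection.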

73 citations


Journal ArticleDOI
TL;DR: Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.
Abstract: In this paper we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities in a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true to life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT) that allows the automation of the entire process. We used this tool to run a set of experiments that demonstrate the feasibility and the effectiveness of the proposed methodology. The experiments include the evaluation of coverage and false positives of an intrusion detection system for SQL Injection attacks and the assessment of the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.

71 citations


Journal ArticleDOI
TL;DR: A novel difference equation based analytical model is derived by introducing a new concept of virtual infected user that can precisely present the repetitious spreading process caused by reinfection and self-start and effectively overcome the associated computational challenges.
Abstract: Due to the critical security threats posed by email-based malware in recent years, modeling the propagation dynamics of email malware becomes a fundamental technique for predicting its potential damages and developing effective countermeasures. Compared to earlier versions of email malware, modern email malware exhibits two new features, reinfection and self-start. Reinfection refers to the malware behavior that modern email malware sends out malware copies whenever any healthy or infected recipients open the malicious attachment. Self-start refers to the behavior that malware starts to spread whenever compromised computers restart or certain files are visited. In the literature, several models have been proposed for email malware propagation, but they do not take into account the above two features and thus cannot accurately model the propagation dynamics of modern email malware. To address this problem, we derive a novel difference equation based analytical model by introducing a new concept of virtual infected user. The proposed model can precisely present the repetitious spreading process caused by reinfection and self-start and effectively overcome the associated computational challenges. We perform a comprehensive empirical and theoretical study to validate the proposed analytical model. The results show that our model greatly outperforms previous models in terms of estimation accuracy.
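A toy discrete-time recurrence can illustrate why reinfection and self-start keep the spreading process alive; all rates below are made up and the model is far simpler than the paper's virtual-infected-user formulation:

```python
# Toy discrete-time sketch of email-malware spread with the two modern
# features: reinfection (every open of a malicious attachment sends new
# copies) and self-start (dormant copies reactivate on reboot).

def simulate(n_users=1000, p_open=0.1, contacts=5,
             p_dormant=0.2, p_restart=0.05, steps=60):
    healthy, infected, dormant = n_users - 1.0, 1.0, 0.0
    for _ in range(steps):
        new_inf = min(healthy, infected * contacts * p_open)  # reinfection-driven
        restarted = dormant * p_restart                       # self-start
        to_dormant = infected * p_dormant
        healthy -= new_inf
        infected += new_inf + restarted - to_dormant
        dormant += to_dormant - restarted
    return 1 - healthy / n_users  # fraction of users ever infected

print(simulate() > 0.99)  # the toy epidemic saturates the user population
```

Even after active infections go dormant, the self-start term keeps reseeding the epidemic, which is the repetitious spreading behavior the paper's model is designed to capture.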

Journal ArticleDOI
TL;DR: The problem of improper information leakage due to data dependencies is identified, a formulation of the problem based on a natural graphical modeling is provided, and an approach to tackle it in an efficient and scalable way is presented.
Abstract: Fragmentation has been recently proposed as a promising approach to protect the confidentiality of sensitive associations whenever data need to undergo external release or storage. By splitting attributes among different fragments, fragmentation guarantees confidentiality of the associations among these attributes under the assumption that such associations cannot be reconstructed by re-combining the fragments. We note that the requirement that fragments do not have attributes in common, imposed by previous proposals, is only a necessary, but not sufficient, condition to ensure that information in different fragments cannot be recombined as dependencies may exist among data enabling some form of linkability. In this paper, we identify the problem of improper information leakage due to data dependencies, provide a formulation of the problem based on a natural graphical modeling, and present an approach to tackle it in an efficient and scalable way.
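The dependency-induced leakage can be illustrated with a simple closure computation; the attribute names, dependency, and sensitive association below are hypothetical:

```python
# Sketch of the dependency problem: a fragment leaks if the closure of
# its attributes under data dependencies covers a sensitive association,
# even when the fragments share no attributes. Example data is made up.

def closure(attrs, deps):
    """deps: list of (premise_set, derived_attr) dependencies."""
    attrs = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, derived in deps:
            if premise <= attrs and derived not in attrs:
                attrs.add(derived)
                changed = True
    return attrs

deps = [({"ZIP", "BirthDate"}, "Identity")]          # quasi-identifier
sensitive = {frozenset({"Identity", "Illness"})}     # association to protect

def leaks(fragment):
    c = closure(fragment, deps)
    return any(assoc <= c for assoc in sensitive)

print(leaks({"ZIP", "Illness"}))               # False: no dependency fires
print(leaks({"ZIP", "BirthDate", "Illness"}))  # True: Identity is derivable
```

The second fragment never stores Identity explicitly, yet the dependency makes it derivable, exactly the improper leakage the paper formalizes with its graphical model.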

Journal ArticleDOI
TL;DR: The results show that the inclusion of risk-score information has significant positive effects in the selection process and can also lead to more curiosity about security-related information.
Abstract: The popularity and advanced functionality of mobile devices has made them attractive targets for malicious and intrusive applications (apps). Although strong security measures are in place for most mobile systems, the area where these systems often fail is the reliance on the user to make decisions that impact the security of a device. As our prime example, Android relies on users to understand the permissions that an app is requesting and to base the installation decision on the list of permissions. Previous research has shown that this reliance on users is ineffective, as most users do not understand or consider the permission information. We propose a solution that leverages a method to assign a risk score to each app and display a summary of that information to users. Results from four experiments are reported in which we examine the effects of introducing summary risk information and how best to convey such information to a user. Our results show that the inclusion of risk-score information has significant positive effects in the selection process and can also lead to more curiosity about security-related information.

Journal ArticleDOI
TL;DR: A method for security reassurance of software increments is proposed and demonstrated through a simple case study, security engineering activities are integrated into the agile software development process and the method is used to ensure producing acceptably secure software increments at the end of each iteration.
Abstract: The agile software development approach makes developing secure software challenging. Existing approaches for extending the agile development process, which enables incremental and iterative software development, fall short of providing a method for efficiently ensuring the security of the software increments produced at the end of each iteration. This article (a) proposes a method for security reassurance of software increments and demonstrates it through a simple case study, (b) integrates security engineering activities into the agile software development process and uses the security reassurance method to ensure producing acceptably secure—by the business owner—software increments at the end of each iteration, and (c) discusses the compliance of the proposed method with the agile values and its ability to produce secure software increments.

Journal ArticleDOI
TL;DR: Experimental results conducted using real-world data sets show that a wide range of techniques for generating both risk signals and risk scores, based on heuristics as well as principled machine learning, can effectively identify malware as very risky.
Abstract: One of Android’s main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. We discuss the desired properties of risk signals and relative risk scores for Android apps in order to generate another metric that users can utilize when choosing apps. We present a wide range of techniques to generate both risk signals and risk scores that are based on heuristics as well as principled machine learning techniques. Experimental results conducted using real-world data sets show that these methods can effectively identify malware as very risky, are simple to understand, and easy to use.
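One simple instance of such a risk score, rarity-weighted permissions under an independence assumption, can be sketched as follows; the app data and scoring function are illustrative, not the paper's trained models:

```python
import math

# Hypothetical rarity-based risk score: permissions that are rare among
# benign apps contribute more risk (negative log-probability under an
# independent-permission model). All app data below is made up.

benign_apps = [
    {"INTERNET"},
    {"INTERNET", "ACCESS_FINE_LOCATION"},
    {"INTERNET"},
    {"INTERNET", "CAMERA"},
]

def perm_freq(perm, apps, smoothing=0.5):
    hits = sum(perm in app for app in apps)
    return (hits + smoothing) / (len(apps) + 1)  # smoothed frequency

def risk_score(app, apps):
    return sum(-math.log(perm_freq(p, apps)) for p in app)

benign = {"INTERNET"}
suspicious = {"INTERNET", "SEND_SMS", "READ_CONTACTS",
              "RECEIVE_BOOT_COMPLETED"}
print(risk_score(suspicious, benign_apps) > risk_score(benign, benign_apps))  # True
```

Such a score is monotone in how unusual an app's permission set is, which matches the paper's requirement that risk signals be simple to understand and easy to use.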

Journal ArticleDOI
TL;DR: This paper develops an online detection scheme for SIP flooding attacks, by integrating a novel three-dimensional sketch design with the Hellinger distance (HD) detection technique, and designs a scheme to control the distribution of the normal traffic over the sketch.
Abstract: The session initiation protocol (SIP) is widely used for controlling multimedia communication sessions over the Internet Protocol (IP). Effectively detecting a flooding attack to the SIP proxy server is critical to ensure robust multimedia communications over the Internet. The existing flooding detection schemes are inefficient in detecting low-rate flooding from dynamic background traffic, or may even totally fail when flooding is launched in a multi-attribute manner by simultaneously manipulating different types of SIP messages. In this paper, we develop an online detection scheme for SIP flooding attacks, by integrating a novel three-dimensional sketch design with the Hellinger distance (HD) detection technique. In our sketch design, each SIP attribute is associated with a two-dimensional sketch hash table, which summarizes the incoming SIP messages into a probability distribution over the sketch table. The evolution of the probability distribution can then be monitored through HD analysis for flooding attack detection. Our three-dimensional design offers the benefits of high detection accuracy even for low-rate flooding, robust performance under multi-attribute flooding, and the capability of selectively discarding the offending SIP messages to prevent the attacks from damaging the network. Furthermore, we design a scheme to control the distribution of the normal traffic over the sketch. Such a design ensures our detection scheme’s effectiveness even under the severe distributed denial of service (DDoS) scenario, where attackers can flood over all the sketch table entries. In this paper, we not only theoretically analyze the performance of the proposed detection techniques, but also resort to extensive computer simulations to thoroughly examine the performance.
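The Hellinger-distance test at the heart of the scheme can be sketched on flat message-type distributions (the paper's three-dimensional sketch design is omitted here, and the traffic counts and threshold are illustrative):

```python
import math

# Hellinger-distance check: compare the SIP message-type distribution of
# the current period against a trained baseline; a large distance signals
# a possible flooding attack. Counts and threshold are made up.

def hellinger(p, q):
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

baseline = normalize([500, 480, 495, 40])   # INVITE, 200 OK, ACK, BYE
normal   = normalize([510, 470, 500, 45])
flooded  = normalize([5000, 470, 500, 45])  # INVITE flood skews the mix

threshold = 0.1
print(hellinger(baseline, normal) < threshold)   # True: no alarm
print(hellinger(baseline, flooded) > threshold)  # True: attack detected
```

Because the test looks at the shape of the distribution rather than raw volume, even a low-rate flood that skews the message mix can move the distance past the threshold.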

Journal ArticleDOI
TL;DR: This work designs a system called Mobiflage, which enables PDE on mobile devices by hiding encrypted volumes within random data in a device's free storage space, and leverages certain Ext4 file system mechanisms and uses an adjusted data-block allocator.
Abstract: Data confidentiality can be effectively preserved through encryption. In certain situations, this is inadequate, as users may be coerced into disclosing their decryption keys. Steganographic techniques and deniable encryption algorithms have been devised to hide the very existence of encrypted data. We examine the feasibility and efficacy of deniable encryption for mobile devices. To address obstacles that can compromise plausibly deniable encryption (PDE) in a mobile environment, we design a system called Mobiflage. Mobiflage enables PDE on mobile devices by hiding encrypted volumes within random data in a device's free storage space. We leverage lessons learned from deniable encryption in the desktop environment, and design new countermeasures for threats specific to mobile systems. We provide two implementations for the Android OS, to assess the feasibility and performance of Mobiflage on different hardware profiles. MF-SD is designed for use on devices with FAT32 removable SD cards. Our MF-MTP variant supports devices that instead share a single internal partition for both apps and user accessible data. MF-MTP leverages certain Ext4 file system mechanisms and uses an adjusted data-block allocator. These new techniques for storing hidden volumes in Ext4 file systems can also be applied to other file systems to enable deniable encryption for desktop OSes and other mobile platforms.

Journal ArticleDOI
TL;DR: This paper presents a field study on two of the most widely spread and critical web application vulnerabilities, SQL Injection and XSS, analyzing the source code of security patches of widely used web applications written in weakly and strongly typed languages.
Abstract: Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring, it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widely spread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used web applications written in weakly and strongly typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults and are also the foundation for the research of realistic vulnerability and attack injectors that can be used to assess security mechanisms, such as intrusion detection systems, vulnerability scanners, and static code analyzers.

Journal ArticleDOI
TL;DR: This paper presents HyperCheck, a hardware-assisted tampering detection framework designed to protect the integrity of hypervisors and operating systems, and shows that HyperCheck can communicate the entire static code of the Xen hypervisor and CPU register states in less than 90 million CPU cycles, or 90 ms on a 1 GHz CPU.
Abstract: The advent of cloud computing and inexpensive multi-core desktop architectures has led to the widespread adoption of virtualization technologies. Furthermore, security researchers embraced virtual machine monitors (VMMs) as a new mechanism to guarantee deep isolation of untrusted software components, which coupled with their popularity promoted VMMs as a prime target for exploitation. In this paper, we present HyperCheck, a hardware-assisted tampering detection framework designed to protect the integrity of hypervisors and operating systems. Our approach leverages System Management Mode (SMM), a CPU mode in the x86 architecture, to transparently and securely acquire and transmit the full state of a protected machine to a remote server. We have implemented two prototypes based on our framework design, HyperCheck-I and HyperCheck-II, which vary in their security assumptions and OS code dependence. In our experiments, we are able to identify rootkits that target the integrity of both hypervisors and operating systems. We show that HyperCheck can defend against attacks that attempt to evade our system. In terms of performance, we measured that HyperCheck can communicate the entire static code of the Xen hypervisor and CPU register states in less than 90 million CPU cycles, or 90 ms on a 1 GHz CPU.

Journal ArticleDOI
TL;DR: This work presents a new kind of denial-of-service attack based on properly crafted SIM-less devices that, without any authentication, exploit specific features and performance bottlenecks of the UMTS network attachment process and can introduce significant service degradation.
Abstract: One of the fundamental security elements in cellular networks is the authentication procedure performed by means of the Subscriber Identity Module (SIM), which is required to grant access to network services and hence protects the network from unauthorized usage. Nonetheless, in this work we present a new kind of denial-of-service attack based on properly crafted SIM-less devices that, without any authentication, exploit specific features and performance bottlenecks of the UMTS network attachment process and can introduce significant service degradation, up to disrupting large sections of cellular network coverage. The knowledge of this attack can be exploited by several applications in both the security and network equipment manufacturing sectors.

Journal ArticleDOI
TL;DR: This paper proposes a statistical traffic pattern discovery system (STARS) that aims to identify the sources and destinations of captured packets and discover the end-to-end communication relations.
Abstract: Many anonymity enhancing techniques have been proposed based on packet encryption to protect the communication anonymity of mobile ad hoc networks (MANETs). However, in this paper, we show that MANETs are still vulnerable under passive statistical traffic analysis attacks. To demonstrate how to discover the communication patterns without decrypting the captured packets, we present a novel statistical traffic pattern discovery system (STARS). STARS works passively to perform traffic analysis based on statistical characteristics of captured raw traffic. STARS is capable of discovering the sources, the destinations, and the end-to-end communication relations. Empirical studies demonstrate that STARS achieves good accuracy in disclosing the hidden traffic patterns.

Journal ArticleDOI
TL;DR: Analysis of cyberintrusions across more than 260,000 computer systems over a period of almost three years shows that the assumption of a Poisson process model may be suboptimal: the log-normal distribution is a significantly better fit in terms of modeling both the number of detected intrusions and the time between intrusions.
Abstract: A frequent assumption in the domain of cybersecurity is that cyberintrusions follow the properties of a Poisson process, i.e., that the number of intrusions is well modeled by a Poisson distribution and that the time between intrusions is exponentially distributed. This paper studies this property by analyzing all cyberintrusions that have been detected across more than 260,000 computer systems over a period of almost three years. The results show that the assumption of a Poisson process model may be suboptimal: the log-normal distribution is a significantly better fit in terms of modeling both the number of detected intrusions and the time between intrusions, and the Pareto distribution is a significantly better fit in terms of modeling the time to first intrusion. The paper also analyzes whether the time to compromise (TTC) increases with each successful intrusion of a computer system. The results regarding this property suggest that the time to compromise decreases with the number of intrusions of a system.
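The model-comparison methodology can be sketched on synthetic data: fit both candidate distributions by maximum likelihood and compare log-likelihoods (the data below is generated synthetically and stands in for the study's real inter-arrival times):

```python
import math
import random
import statistics

# Fit an exponential and a log-normal distribution to inter-arrival times
# by maximum likelihood, then compare log-likelihoods. Synthetic data.

random.seed(42)
times = [random.lognormvariate(2.0, 1.5) for _ in range(2000)]

# MLE fits
rate = 1 / statistics.mean(times)                          # exponential
logs = [math.log(t) for t in times]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)  # log-normal

ll_exp = sum(math.log(rate) - rate * t for t in times)
ll_logn = sum(-math.log(t * sigma * math.sqrt(2 * math.pi))
              - (math.log(t) - mu) ** 2 / (2 * sigma ** 2) for t in times)

print(ll_logn > ll_exp)  # True: log-normal fits this heavy-tailed data better
```

On heavy-tailed data like this, the exponential model badly underweights both very short and very long gaps, which mirrors the paper's finding against the Poisson-process assumption.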

Journal ArticleDOI
TL;DR: This work designs and implements an attack that defeats instances of such a CAPTCHA (NuCaptcha) representing the state-of-the-art, involving dynamic text strings called codewords, and shows that the automated approach can decode these captchas faster than humans can, and can do so at a relatively low cost.
Abstract: We explore the robustness and usability of moving-image object recognition (video) captchas, designing and implementing automated attacks based on computer vision techniques. Our approach is suitable for broad classes of moving-image captchas involving rigid objects. We first present an attack that defeats instances of such a captcha (NuCaptcha) representing the state-of-the-art, involving dynamic text strings called codewords. We then consider design modifications to mitigate the attacks (e.g., overlapping characters more closely, randomly changing the font of individual characters, or even randomly varying the number of characters in the codeword). We implement the modified captchas and test if designs modified for greater robustness maintain usability. Our lab-based studies show that the modified captchas fail to offer viable usability, even when the captcha strength is reduced below acceptable targets. Worse yet, our GPU-based implementation shows that our automated approach can decode these captchas faster than humans can, and we can do so at a relatively low cost of roughly 50 cents per 1,000 captchas solved based on Amazon EC2 rates circa 2012. To further demonstrate the challenges in designing usable captchas, we also implement and test another variant of moving text strings using the known emerging images concept. This variant is resilient to our attacks and also offers similar usability to commercially available approaches. We explain why fundamental elements of the emerging images idea resist our current attack where others fail.

Journal ArticleDOI
TL;DR: The properties of Verifiable multilateration are studied as a noncooperative two-player game where the first player employs a number of verifiers to do VM computations and the second player controls a malicious node.
Abstract: Most applications of wireless sensor networks (WSNs) rely on data about the positions of sensor nodes, which are not necessarily known beforehand. Several localization approaches have been proposed, but most of them fail to consider that WSNs could be deployed in adversarial settings, where hostile nodes under the control of an attacker coexist with faithful ones. Verifiable multilateration (VM) was proposed to cope with this problem by leveraging a set of trusted landmark nodes that act as verifiers. Although VM is able to recognize reliable localization measures, it allows for regions of undecided positions that can amount to 40 percent of the monitored area. We studied the properties of VM as a noncooperative two-player game where the first player employs a number of verifiers to do VM computations and the second player controls a malicious node. The verifiers aim at securely localizing malicious nodes, while malicious nodes strive to masquerade as unknown and to pretend false positions. Thanks to game theory, the potential of VM is analyzed with the aim of improving the defender's strategy. We found that the best placement for verifiers is an equilateral triangle with edge equal to the power range R, and the maximum deception in the undecided region is approximately 0.27R. Moreover, we characterized, in terms of the probability of choosing an unknown node to examine further, the strategies of the players.
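
The reported optimal verifier placement can be illustrated with a small geometry sketch. The check below is a simplified stand-in for VM: a claimed position is accepted only if it lies inside the verification triangle and within range R of every verifier. The range value and the details of the acceptance rule are assumptions for illustration, not taken from the paper.

```python
import math

R = 100.0  # hypothetical verifier power range (metres)

# Three verifiers placed as an equilateral triangle with edge R -- the
# placement the paper identifies as best for the defender.
verifiers = [
    (0.0, 0.0),
    (R, 0.0),
    (R / 2, R * math.sqrt(3) / 2),
]

def inside_triangle(p, a, b, c):
    """Point-in-triangle test via signed areas (all same sign => inside)."""
    def sign(p1, p2, p3):
        return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def vm_accepts(claimed):
    """A claimed position is verifiable only if it falls inside the
    verification triangle and within range of every verifier."""
    in_range = all(math.dist(claimed, v) <= R for v in verifiers)
    return in_range and inside_triangle(claimed, *verifiers)

centroid = (R / 2, R * math.sqrt(3) / 6)
print(vm_accepts(centroid))        # inside the triangle: accepted
print(vm_accepts((2 * R, 2 * R)))  # far outside: rejected
```

Points outside the triangle fall in the undecided region even when they are in radio range, which is where the paper's game-theoretic analysis of deception applies.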

Journal ArticleDOI
TL;DR: A new private query, which searches for documents from streaming data on the basis of keyword frequency, such that the frequency of a keyword is required to be higher or lower than a given threshold, is considered.
Abstract: Private searching on streaming data is a process to dispatch to a public server a program that searches streaming sources of data without revealing the search criteria and then sends back a buffer containing the findings. With homomorphic encryption over an Abelian group, the search criteria can be constructed only from simple combinations of keywords, for example, disjunctions of keywords. The recent breakthrough in fully homomorphic encryption has made it possible, in theory, to construct arbitrary search criteria. In this paper, we consider a new private query, which searches for documents from streaming data on the basis of keyword frequency, such that the frequency of a keyword is required to be higher or lower than a given threshold. This form of query can help us find more relevant documents. Based on state-of-the-art fully homomorphic encryption techniques, we give disjunctive, conjunctive, and complement constructions for private threshold queries based on keyword frequency. Combining the basic constructions, we further present a generic construction for arbitrary private threshold queries based on keyword frequency. Our protocols are semantically secure as long as the underlying fully homomorphic encryption scheme is semantically secure.
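
The semantics of a frequency-threshold query are easy to show in plaintext. The sketch below only illustrates what the query computes; in the actual protocols the counting and threshold comparison run under fully homomorphic encryption, so the server never sees keywords, thresholds, or matches. The documents and keywords are made up.

```python
from collections import Counter

docs = [
    "the cat sat on the mat",
    "cat cat cat dog",
    "dog dog bird",
]

def freq_above(doc, keyword, t):
    """True if the keyword occurs strictly more than t times in doc."""
    return Counter(doc.split())[keyword] > t

# Conjunctive threshold query: freq("cat") > 2 AND NOT freq("dog") > 1.
matches = [d for d in docs
           if freq_above(d, "cat", 2) and not freq_above(d, "dog", 1)]
print(matches)  # only the second document qualifies
```

The paper's contribution is evaluating exactly this kind of predicate homomorphically and combining such threshold predicates with disjunction, conjunction, and complement.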

Journal ArticleDOI
TL;DR: This paper revisited a method to identify computers by their clock skew computed from TCP timestamps and validated that the original method is suitable for computer identification, but discovered that Linux hosts running NTP had become immune to the identification.
Abstract: In this paper, we revisited a method to identify computers by their clock skew computed from TCP timestamps. We introduced our own tool to compute the clock skew of computers in a network. We validated that the original method is suitable for computer identification, but we also discovered that Linux hosts running NTP have become immune to the identification.
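
The underlying idea, originally due to Kohno et al., can be sketched as a regression on TCP timestamp offsets: the remote clock's offset against local time drifts linearly, and the slope of that drift is the skew. The sketch below recovers a synthetic 50 ppm skew with ordinary least squares; the timestamp frequency, jitter model, and sampling rate are assumptions, not values from the paper.

```python
import random

random.seed(1)
TRUE_SKEW_PPM = 50.0  # hypothetical remote clock skew: 50 ppm fast
HZ = 1000             # assumed remote TCP timestamp tick rate

# Synthesize (local receive time, remote TCP timestamp) pairs over an hour.
samples = []
for t in range(0, 3600, 5):
    remote_secs = t * (1 + TRUE_SKEW_PPM * 1e-6)
    ticks = int(remote_secs * HZ)
    recv = t + random.uniform(0.0, 0.05)  # queueing/processing jitter
    samples.append((recv, ticks))

# Offset of the remote clock against ours; its slope is the skew.
xs = [recv for recv, _ in samples]
ys = [ticks / HZ - recv for recv, ticks in samples]

# Ordinary least-squares slope. (Kohno et al. fit a linear-programming
# lower hull, which is more robust to one-sided jitter; OLS is the sketch.)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
den = sum((x - mx) ** 2 for x in xs)
slope = num / den
print(f"estimated skew: {slope * 1e6:.1f} ppm")
```

The paper's observation about NTP follows directly from this picture: once NTP continuously disciplines the clock, the linear drift disappears and there is no stable slope to fingerprint.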

Journal ArticleDOI
TL;DR: A rule- based access control policy language, a rule-based administrative policy model that controls addition and removal of facts and rules, and an abductive analysis algorithm for user-permission reachability are presented.
Abstract: In large organizations, access control policies are managed by multiple users (administrators). An administrative policy specifies how each user in an enterprise may change the policy. Fully understanding the consequences of an administrative policy in an enterprise system can be difficult, because of the scale and complexity of the access control policy and the administrative policy, and because sequences of changes by different users may interact in unexpected ways. Administrative policy analysis helps by answering questions such as user-permission reachability, which asks whether specified users can together change the policy in a way that achieves a specified goal, namely, granting a specified permission to a specified user. This paper presents a rule-based access control policy language, a rule-based administrative policy model that controls addition and removal of facts and rules, and an abductive analysis algorithm for user-permission reachability. Abductive analysis means that the algorithm can analyze policy rules even if the facts initially in the policy (e.g., information about users) are unavailable. The algorithm does this by computing minimal sets of facts that, if present in the initial policy, imply reachability of the goal.
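
A toy version of abductive reachability analysis can be built from forward chaining plus subset enumeration: find the minimal sets of initial facts from which the goal permission becomes derivable. The rules, predicate names, and brute-force search below are made-up illustrations and far simpler than the paper's algorithm.

```python
from itertools import combinations

# Horn-style policy rules: (body facts) => head fact. Names are invented.
rules = [
    ({"employee(alice)", "clearance(alice)"}, "role(alice,auditor)"),
    ({"role(alice,auditor)"}, "perm(alice,read_logs)"),
    ({"admin(alice)"}, "perm(alice,read_logs)"),
]
candidates = ["employee(alice)", "clearance(alice)", "admin(alice)"]
goal = "perm(alice,read_logs)"

def closure(facts):
    """Forward chaining to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

# Abduction by enumeration: keep only subsets that reach the goal and
# contain no already-found answer (i.e., are minimal).
minimal = []
for k in range(len(candidates) + 1):
    for subset in combinations(candidates, k):
        if goal in closure(subset) and \
           not any(set(m) <= set(subset) for m in minimal):
            minimal.append(subset)
print(minimal)
```

Here the two minimal answers are the single fact `admin(alice)` and the pair `employee(alice)` plus `clearance(alice)`, which is the kind of "what facts would make the goal reachable" output the abductive analysis produces, without needing the initial facts up front.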

Journal ArticleDOI
TL;DR: CipherXRay is a novel binary analysis framework that can automatically identify and recover the cryptographic operations and transient secrets from the execution of potentially obfuscated binary executables and demonstrates that current software implementations of cryptographic algorithms hardly achieve any secrecy if their execution can be monitored.
Abstract: Malware is becoming increasingly stealthy: more and more malware uses cryptographic algorithms (e.g., packing, encrypting C&C communication) to protect itself from being analyzed. The use of cryptographic algorithms and truly transient cryptographic secrets inside the malware binary imposes a key obstacle to effective malware analysis and defense. To enable more effective malware analysis, forensics, and reverse engineering, we have developed CipherXRay - a novel binary analysis framework that can automatically identify and recover the cryptographic operations and transient secrets from the execution of potentially obfuscated binary executables. Based on the avalanche effect of cryptographic functions, CipherXRay is able to accurately pinpoint the boundary of a cryptographic operation and recover truly transient cryptographic secrets that exist in memory for only one instant in between multiple nested cryptographic operations. CipherXRay can further identify certain operation modes (e.g., ECB, CBC, CFB) of the identified block cipher and, in certain cases, tell whether the identified block cipher operation is encryption or decryption. We have empirically validated CipherXRay with OpenSSL, the popular password safe KeePassX, the ciphers used by the malware Stuxnet, Kraken and Agobot, and a number of third-party software packages with built-in compression and checksums. CipherXRay is able to identify various cryptographic operations and recover cryptographic secrets that exist in memory for only a few microseconds. Our results demonstrate that current software implementations of cryptographic algorithms hardly achieve any secrecy if their execution can be monitored.
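
The avalanche effect that CipherXRay relies on is easy to demonstrate: flipping a single input bit of a cryptographic function changes roughly half of the output bits, which makes the input/output boundary of a cryptographic operation stand out in a data-flow trace. The sketch below uses SHA-256 as the example function; the input string is arbitrary.

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"transient cryptographic secret")
d1 = hashlib.sha256(msg).digest()

msg[0] ^= 0x01  # flip a single input bit
d2 = hashlib.sha256(msg).digest()

diff = hamming(d1, d2)
print(f"{diff} of 256 output bits changed")  # roughly half, by design
```

Ordinary arithmetic or copying code shows nothing like this one-bit-in, half-the-bits-out behavior, which is why the avalanche pattern is a usable fingerprint for locating cryptographic operations in a binary's execution.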

Journal ArticleDOI
TL;DR: A lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime to be authenticated to the kernel, and a system call monitoring framework for preventing unauthorized use or access of system resources.
Abstract: This paper points out the need in modern operating system kernels for a process authentication mechanism, where a process of a user-level application proves its identity to the kernel. Process authentication is different from process identification. Identification is a way to describe a principal; PIDs or process names are identifiers for processes in an OS environment. However, information such as process names or executable paths that is conventionally used by the OS to identify a process is not reliable. As a result, malware may impersonate other processes, thus violating system assurance. We propose a lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime to be authenticated to the kernel. To demonstrate the application of process authentication, we develop a system call monitoring framework for preventing unauthorized use or access of system resources. It verifies the identity of processes before completing the requested system calls. We implement and evaluate a prototype of our monitoring architecture in Linux. The results from our extensive performance evaluation show that our prototype incurs reasonably low overhead, indicating the feasibility of our approach for cryptographically authenticating applications and their processes in the operating system.
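
The challenge-response flavor of runtime process authentication can be sketched in user space with an HMAC: a registered application holds a secret and presents a keyed proof of its identity before a sensitive call is completed. Everything below (the registration step, function names, and message format) is a made-up illustration, not the paper's kernel-level protocol.

```python
import hashlib
import hmac
import os

# Monitor-side table of registered applications and their shared secrets.
monitor_secrets = {}

def register(app_name):
    """Registration: the monitor provisions a per-application secret."""
    secret = os.urandom(32)
    monitor_secrets[app_name] = secret
    return secret

def prove(app_name, secret, challenge):
    """Application side: keyed proof over its identity and a fresh challenge."""
    return hmac.new(secret, app_name.encode() + challenge, hashlib.sha256).digest()

def verify(app_name, challenge, proof):
    """Monitor side: recompute the proof and compare in constant time."""
    secret = monitor_secrets.get(app_name)
    if secret is None:
        return False
    expected = hmac.new(secret, app_name.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

secret = register("backupd")
challenge = os.urandom(16)
print(verify("backupd", challenge, prove("backupd", secret, challenge)))           # True
print(verify("backupd", challenge, prove("backupd", os.urandom(32), challenge)))   # impostor: False
```

The point of the sketch is that the proof depends on a secret, not on forgeable metadata like a process name or executable path, which is exactly the distinction between authentication and identification the abstract draws.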

Journal ArticleDOI
TL;DR: This paper studies a file service system that undergoes periodic data backups, and investigates metrics concerning system availability, data loss and rejection of user requests, using a variety of model types to construct a set of analytical models that capture the operational details of the system.
Abstract: In modern IT systems, data backup and restore operations are essential for providing protection against data loss from both natural and man-made incidents. On the other hand, data backup and restore operations can be resource-intensive and lead to performance degradation, or may require the system to be offline entirely. Therefore, it is important to properly choose backup and restore techniques and policies to ensure adequate data protection while minimizing the impact on system availability and performance. In this paper, we present an analytical modeling approach for such a purpose. We study a file service system that undergoes periodic data backups, and investigate metrics concerning system availability, data loss, and rejection of user requests. To obtain the metrics, we combine a variety of model types, including Markov chains, queueing networks, and stochastic reward nets, to construct a set of analytical models that capture the operational details of the system. We then compute the metrics of interest under different backup/restore techniques, policies, and workload scenarios. The numerical results allow us to compare the effects of different backup/restore techniques and policies in terms of the tradeoff between protective power and impact on system performance and availability.
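
The flavor of such availability models can be conveyed by the smallest possible case: a two-state continuous-time Markov chain (system "up" vs. "in backup"), whose steady-state up probability is mu / (lambda + mu). This is a tiny stand-in for the paper's combined Markov/queueing/stochastic-reward-net models; the rates and policies below are illustrative, not taken from the paper.

```python
HOURS_PER_WEEK = 168.0

def availability(backup_interval_h, backup_duration_h):
    """Steady-state fraction of time the system is up, assuming the
    service is fully offline for the whole backup window."""
    rate_to_backup = 1.0 / backup_interval_h  # transition rate lambda
    rate_to_up = 1.0 / backup_duration_h      # transition rate mu
    return rate_to_up / (rate_to_up + rate_to_backup)

# Compare two policies: nightly vs. weekly full backups, each taking 2h.
nightly_full = availability(24.0, 2.0)
weekly_full = availability(HOURS_PER_WEEK, 2.0)
print(f"nightly full backup availability: {nightly_full:.4f}")
print(f"weekly  full backup availability: {weekly_full:.4f}")
```

Even this toy model exposes the paper's central tradeoff: the weekly policy scores higher on availability but leaves a much larger window of potential data loss, which is why the full models also track loss and request-rejection metrics.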