
Showing papers in "ACM Transactions on Information and System Security in 2011"


Journal ArticleDOI
TL;DR: In this article, a new class of attacks, called false data injection attacks, against state estimation in electric power grids is presented and analyzed, under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations.
Abstract: A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area. System monitoring is necessary to ensure the reliable operation of power grids, and state estimation is used in system monitoring to best estimate the power grid state through analysis of meter measurements and power system models. Various techniques have been developed to detect and identify bad measurements, including interacting bad measurements introduced by arbitrary, nonrandom causes. At first glance, it seems that these techniques can also defeat malicious measurements injected by attackers. In this article, we expose a previously unknown vulnerability of existing bad measurement detection algorithms by presenting and analyzing a new class of attacks, called false data injection attacks, against state estimation in electric power grids. Under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations, such attacks can introduce arbitrary errors into certain state variables without being detected by existing algorithms. Moreover, we look at two scenarios, where the attacker is either constrained to specific meters or limited in the resources required to compromise meters. We show that the attacker can systematically and efficiently construct attack vectors in both scenarios to change the results of state estimation in arbitrary ways. We also extend these attacks to generalized false data injection attacks, which can further increase the impact by exploiting measurement errors typically tolerated in state estimation. We demonstrate the success of these attacks through simulation using IEEE test systems, and also discuss the practicality of these attacks and the real-world constraints that limit their effectiveness.
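The core of the attack is a simple linear-algebra fact: for the measurement model z = Hx + e, any attack vector of the form a = Hc leaves the least-squares residual, and hence residual-based bad-data tests, unchanged, while shifting the estimated state by exactly c. A minimal sketch, with a hypothetical random Jacobian standing in for the IEEE test systems used in the article:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy DC state estimation: z = H x + e. H is a hypothetical 6x3 measurement
# Jacobian; the article's experiments use IEEE test systems instead.
H = rng.normal(size=(6, 3))
z = H @ rng.normal(size=3) + 0.01 * rng.normal(size=6)

def estimate(z):
    # Least-squares state estimate (identity noise covariance assumed).
    return np.linalg.lstsq(H, z, rcond=None)[0]

def residual(z):
    # Residual-based bad-data detectors alarm when ||z - H x_hat|| is large.
    return np.linalg.norm(z - H @ estimate(z))

c = np.array([0.5, -1.0, 2.0])   # arbitrary error the attacker injects
a = H @ c                        # false data injection attack vector
print(residual(z), residual(z + a))   # identical: the attack is invisible
print(estimate(z + a) - estimate(z))  # ~ c: the state estimate shifts by c
```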

2,064 citations


Journal ArticleDOI
TL;DR: A layered anti-phishing solution that aims at exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and limiting the FP to a low level via filtering algorithms.
Abstract: Phishing is a plague in cyberspace. Typically, phish detection methods either use human-verified URL blacklists or exploit Web page features via machine learning techniques. However, the former is frail in terms of new phish, and the latter suffers from the scarcity of effective features and the high false positive rate (FP). To alleviate those problems, we propose a layered anti-phishing solution that aims at (1) exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and (2) limiting the FP to a low level via filtering algorithms. Specifically, we propose CANTINA+, the most comprehensive feature-based approach in the literature, including eight novel features, which exploits the HTML Document Object Model (DOM), search engines, and third-party services with machine learning techniques to detect phish. Moreover, we designed two filters to help reduce FP and achieve runtime speedup. The first is a near-duplicate phish detector that uses hashing to catch highly similar phish. The second is a login form filter, which directly classifies Web pages with no identified login form as legitimate. We extensively evaluated CANTINA+ with two methods on a diverse spectrum of corpora with 8118 phish and 4883 legitimate Web pages. In the randomized evaluation, CANTINA+ achieved over 92% TP on unique testing phish and over 99% TP on near-duplicate testing phish, and about 0.4% FP with 10% training phish. In the time-based evaluation, CANTINA+ also achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 1.4% FP under 20% training phish with a two-week sliding window. Capable of achieving 0.4% FP and over 92% TP, CANTINA+ has been demonstrated to be a competitive anti-phishing solution.
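The near-duplicate filter lends itself to a compact illustration. The sketch below hashes k-word shingles of a page's visible text and compares signatures by Jaccard similarity; the abstract only says the filter "uses hashing", so the shingling construction and the threshold here are assumptions, not CANTINA+'s actual design:

```python
import hashlib
import re

def signature(html: str, k: int = 5) -> frozenset:
    # Hash every k-word shingle of the visible text into a short digest.
    words = re.findall(r"\w+", re.sub(r"<[^>]+>", " ", html.lower()))
    shingles = {" ".join(words[i:i + k])
                for i in range(max(1, len(words) - k + 1))}
    return frozenset(hashlib.sha1(s.encode()).hexdigest()[:8] for s in shingles)

def near_duplicate(a: frozenset, b: frozenset, threshold: float = 0.9) -> bool:
    # Pages whose shingle sets overlap heavily are near-duplicates, so a page
    # matching a known phish can be flagged without running the full classifier.
    return bool(a) and bool(b) and len(a & b) / len(a | b) >= threshold
```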

462 citations


Journal ArticleDOI
TL;DR: A model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it.
Abstract: We introduce a model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking is lightweight and supports large data sets in distributed storage systems. The model is also robust in that it incorporates mechanisms for mitigating arbitrary amounts of data corruption. We present two provably-secure PDP schemes that are more efficient than previous solutions. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. We then propose a generic transformation that adds robustness to any remote data checking scheme based on spot checking. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation. Finally, we conduct an in-depth experimental evaluation to study the tradeoffs in performance, security, and space overheads when adding robustness to a remote data checking scheme.
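The sampling argument behind spot checking is easy to demonstrate. In the sketch below the verifier tags each block with a keyed MAC and later challenges a random subset; this is a deliberate simplification, since the published PDP schemes use homomorphic RSA-based tags (stored at the server, with the client keeping only constant metadata) so the proof itself is constant size:

```python
import hashlib
import hmac
import random

BLOCK = 4096

def tag_blocks(key: bytes, data: bytes) -> list:
    # Preprocessing: one tag per block, bound to the block index.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def spot_check(key: bytes, server_data: bytes, tags: list,
               samples: int = 460) -> bool:
    # Challenge a constant-size random sample. If the server corrupted 1% of
    # the blocks, checking ~460 random blocks catches it with probability
    # about 1 - 0.99**460 > 0.99, independent of the total file size.
    n = len(tags)
    for i in random.sample(range(n), min(samples, n)):
        b = server_data[i * BLOCK:(i + 1) * BLOCK]
        expected = hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True
```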

382 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a differentially private continual counter that outputs at every time step the approximate number of 1's seen thus far, with error that is only poly-log in the number of time steps.
Abstract: We ask the question: how can Web sites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow Web sites to continually give top-k and hot items suggestions while preserving users’ privacy.
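A standard construction achieving this poly-log error is the binary (dyadic-tree) mechanism from the continual-release literature, sketched below under the assumption of a known stream length T; whether this matches the authors' exact construction is not claimed here. Each dyadic block sum is released once with Laplace noise, and every prefix count is assembled from at most log T noisy blocks:

```python
import random

def laplace(scale: float) -> float:
    # Laplace(0, scale) noise as the difference of two exponentials.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def binary_mechanism(stream, eps: float):
    # log T levels of dyadic partial sums, each perturbed once with
    # Laplace(log T / eps) noise; per-step error is polylogarithmic in T.
    T = len(stream)
    L = T.bit_length()
    alpha = [0.0] * (L + 1)       # exact dyadic partial sums per level
    alpha_hat = [0.0] * (L + 1)   # their noisy, released versions
    out = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1           # lowest set bit of t
        alpha[i] = sum(alpha[:i]) + stream[t - 1]
        for j in range(i):                      # lower levels are absorbed
            alpha[j] = alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + laplace(L / eps)
        # The count at time t is the sum of the noisy blocks
        # corresponding to the 1-bits of t.
        out.append(sum(alpha_hat[j] for j in range(L + 1) if t >> j & 1))
    return out
```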

367 citations


Journal ArticleDOI
Ninghui Li
TL;DR: This issue of TISSEC includes extended versions of articles selected from the program of the 13th ACM Symposium on Access Control Models and Technologies (SACMAT 2008), which took place from June 11 to 13, 2008 in Estes Park, CO.
Abstract: This issue of TISSEC includes extended versions of articles selected from the program of the 13th ACM Symposium on Access Control Models and Technologies (SACMAT 2008), which took place from June 11 to 13, 2008 in Estes Park, CO. SACMAT is a successful series of symposiums that continues the tradition, first established by the ACM Workshop on Role-Based Access Control, of being the premier forum for the presentation of research results and experience reports on leading-edge issues of access control, including models, systems, applications, and theory. The missions of the symposium are to share novel access control solutions that fulfill the needs of heterogeneous applications and environments and to identify new directions for future research and development. SACMAT gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of access control. The articles in this special issue were invited for submission from the 20 articles presented at SACMAT 2008. These were selected from 79 submissions from authors in 24 countries in Africa, Asia, Australia, Europe, North America, and South America. All the journal submissions went through an additional thorough review process to further ensure their quality. The first article, “Detecting and Resolving Policy Misconfigurations in Access-Control Systems” by Lujo Bauer, Scott Garriss, and Michael K. Reiter, applies association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users’ intentions, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses. As instituting these changes requires the consent of the appropriate administrator, the article also introduces techniques to automatically determine from whom to seek consent and to minimize the costs of doing so. The proposed techniques are evaluated using data from a deployed access-control system. The second article, “Authorization Recycling in Hierarchical RBAC Systems” by Qiang Wei, Jason Crampton, Konstantin Beznosov, and Matei Ripeanu, introduces and evaluates mechanisms for authorization “recycling” in RBAC enterprise systems. These mechanisms cache and reuse previous authorization decisions to help address the problem that the policy decision point in distributed applications tends to become a single point of failure and a performance bottleneck. The algorithms that support these mechanisms allow making precise and approximate authorization decisions, thereby masking possible failures of the authorization server and reducing its load. I would like to thank all the authors for submitting their research results to this special issue and all the reviewers for their insightful comments. I am also grateful to Gene Tsudik, editor-in-chief, for his guidance and help throughout this process.

112 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply association rule mining to the history of accesses to predict changes to access control policies that are likely to be consistent with users' intentions, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses.
Abstract: Access-control policy misconfigurations that cause requests to be erroneously denied can result in wasted time, user frustration, and, in the context of particular applications (e.g., health care), very severe consequences. In this article we apply association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users' intentions, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses. Instituting these changes requires the consent of the appropriate administrator, of course, and so a primary contribution of our work is how to automatically determine from whom to seek consent and how to minimize the costs of doing so. We show using data from a deployed access-control system that our methods can reduce the number of accesses that would have incurred costly time-of-access delays by 43%, and can correctly predict 58% of the intended policy. These gains are achieved without impacting the total amount of time users spend interacting with the system.
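A minimal flavor of the approach: mine single-antecedent rules from who-accessed-what history, then surface high-confidence rules whose consequent a user does not yet hold as candidate policy changes. The miner below is a deliberately simplified illustration with invented thresholds, not the article's algorithm or parameters:

```python
from collections import defaultdict
from itertools import permutations

def candidate_rules(access_log: dict, min_support: int = 3,
                    min_conf: float = 0.8) -> list:
    # access_log maps each user to the set of resources they accessed.
    # Rule "a -> b": users who access a tend to also access b, so a user
    # granted a but not b is plausibly missing a policy entry for b.
    item = defaultdict(int)
    pair = defaultdict(int)
    for resources in access_log.values():
        for r in resources:
            item[r] += 1
        for a, b in permutations(resources, 2):
            pair[(a, b)] += 1
    return [(a, b, n / item[a])
            for (a, b), n in pair.items()
            if n >= min_support and n / item[a] >= min_conf]
```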

80 citations


Journal ArticleDOI
TL;DR: Nexus Authorization Logic (NAL) provides a principled basis for specifying and reasoning about credentials and authorization policies and extends prior access control logics that are based on “says” and “speaks for” operators.
Abstract: Nexus Authorization Logic (NAL) provides a principled basis for specifying and reasoning about credentials and authorization policies. It extends prior access control logics that are based on “says” and “speaks for” operators. NAL enables authorization of access requests to depend on (i) the source or pedigree of the requester, (ii) the outcome of any mechanized analysis of the requester, or (iii) the use of trusted software to encapsulate or modify the requester. To illustrate the convenience and expressive power of this approach to authorization, a suite of document-viewer applications was implemented to run on the Nexus operating system. One of the viewers enforces policies that concern the integrity of excerpts that a document contains; another viewer enforces confidentiality policies specified by labels tagging blocks of text.

77 citations


Journal ArticleDOI
TL;DR: A lightweight RFID authentication protocol that supports forward and backward security and uses a pseudorandom number generator (PRNG) that is shared with the backend server.
Abstract: We propose a lightweight RFID authentication protocol that supports forward and backward security. The only cryptographic mechanism that this protocol uses is a pseudorandom number generator (PRNG) that is shared with the backend server. Authentication is achieved by exchanging a few numbers (3 or 5) drawn from the PRNG. The lookup time is constant, and the protocol can be easily adapted to prevent online man-in-the-middle relay attacks. Security is proven in the UC security framework.
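A much-simplified sketch of the synchronized-PRNG idea, instantiated here with a SHA-256 chain, which is an assumption since the paper only requires some shared PRNG. Both parties draw the same next numbers, and forward security comes from the one-way state update; desynchronization handling and the relay-attack countermeasures are omitted:

```python
import hashlib

class SharedPrng:
    # PRNG state shared by the tag and the backend server. The one-way
    # update means compromise of the current state reveals no past outputs.
    def __init__(self, seed: bytes):
        self.state = seed

    def draw(self) -> bytes:
        self.state = hashlib.sha256(self.state).digest()
        return self.state

def authenticate(tag: SharedPrng, server: SharedPrng, rounds: int = 3) -> bool:
    # The abstract's "few numbers (3 or 5)": matching draws authenticate the
    # tag, and both states advance so every exchange is fresh.
    return all(tag.draw() == server.draw() for _ in range(rounds))
```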

52 citations


Journal ArticleDOI
TL;DR: An expressive access-control policy language PBel is defined, having composition operators based on the operators of Belnap logic, and domain-specific and generic extensions of the policy language are discussed.
Abstract: Access control to IT systems increasingly relies on the ability to compose policies. Hence there is benefit in any framework for policy composition that is intuitive, formal (and so “analyzable” and “implementable”), expressive, independent of specific application domains, and yet able to be extended to create domain-specific instances. Here we develop such a framework based on Belnap logic. An access-control policy is interpreted as a four-valued predicate that maps access requests to either grant, deny, conflict, or unspecified -- the four values of the Belnap bilattice. We define an expressive access-control policy language PBel, having composition operators based on the operators of Belnap logic. Natural orderings on policies are obtained by lifting the truth and information orderings of the Belnap bilattice. These orderings lead to a query language in which policy analyses, for example, conflict freedom, can be specified. Policy analysis is supported through a reduction of the validity of policy queries to the validity of propositional formulas on predicates over access requests. We evaluate our approach through firewall policy and RBAC policy examples, and discuss domain-specific and generic extensions of our policy language.
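The four-valued core is compact enough to sketch. Below, the join in the Belnap information ordering accumulates evidence from independently authored sub-policies (grant plus deny yields conflict), and a priority operator falls back to a default only where the primary policy is silent; the operator names are illustrative rather than PBel's concrete syntax:

```python
from enum import Enum

class V(Enum):
    GRANT = "grant"
    DENY = "deny"
    CONFLICT = "conflict"
    UNSPEC = "unspecified"

def join(a: V, b: V) -> V:
    # Join in the information ordering of the Belnap bilattice:
    # UNSPEC is bottom, CONFLICT is top, GRANT and DENY are incomparable.
    if a == b or b == V.UNSPEC:
        return a
    if a == V.UNSPEC:
        return b
    return V.CONFLICT

def override(primary: V, fallback: V) -> V:
    # Priority composition: consult the fallback only where the primary
    # policy renders no verdict.
    return primary if primary != V.UNSPEC else fallback
```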

52 citations


Journal ArticleDOI
TL;DR: This model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time, and results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder.
Abstract: Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization. We present a formal model for modeling and reasoning about such physical security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle/HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization.
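The physical constraint at the heart of such models is that no message travels faster than light, so a verifier can turn a measured round-trip time into an upper bound on a prover's distance. A one-function sketch of the ranging check (the article's formalization is in Isabelle/HOL, not code, and handles the general model):

```python
C = 299_792_458.0   # propagation speed upper bound (speed of light, m/s)

def distance_upper_bound(t_sent: float, t_received: float,
                         processing_delay: float = 0.0) -> float:
    # Distance bounding: a reply after round-trip time rtt can only have
    # come from a prover at most C * rtt / 2 away. An attacker can appear
    # farther than it is (by delaying) but never closer, which is exactly
    # the asymmetry these protocols exploit.
    rtt = t_received - t_sent - processing_delay
    return C * rtt / 2.0
```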

46 citations


Journal ArticleDOI
TL;DR: A hierarchy of privacy notions that covers multiple anonymity and unlinkability variants is presented, based on the idea of indistinguishability between two worlds, which provides new insights into the relation between, and the fundamental structure of, different privacy notions.
Abstract: This article presents a hierarchy of privacy notions that covers multiple anonymity and unlinkability variants. The underlying definitions, which are based on the idea of indistinguishability between two worlds, provide new insights into the relation between, and the fundamental structure of, different privacy notions. We furthermore place previous privacy definitions concerning group signature, anonymous communication, and secret voting systems in the context of our hierarchy; this renders these traditionally disconnected notions comparable.

Journal ArticleDOI
TL;DR: This research presents implementations of a variety of different PAD algorithms, some based on Merkle tree-style data structures and others based on individually signed “tuple” statements (with and without RSA accumulators), and concludes that Merkle tree PADs are preferable in cases with frequent updates, while tuple-based PADs are preferable with higher query rates.
Abstract: Authenticated dictionaries are a widely discussed paradigm to enable verifiable integrity for data storage on untrusted servers, such as today’s widely used “cloud computing” resources, allowing a server to provide a “proof,” typically in the form of a slice through a cryptographic data structure, that the results of any given query are the correct answer, including that the absence of a query result is correct. Persistent authenticated dictionaries (PADs) further allow queries against older versions of the structure. This research presents implementations of a variety of different PAD algorithms, some based on Merkle tree-style data structures and others based on individually signed “tuple” statements (with and without RSA accumulators). We present system throughput benchmarks, indicating costs in terms of time, storage, and bandwidth as well as considering how much money would be required given standard cloud computing costs. We conclude that Merkle tree PADs are preferable in cases with frequent updates, while tuple-based PADs are preferable with higher query rates. For Merkle tree PADs, red-black trees outperform treaps and skiplists. Applying Sarnak-Tarjan’s versioned node strategy, with a cache of old hashes at every node, to red-black trees yields the fastest Merkle tree PAD implementation, notably using half the memory of the more commonly used mutation-free path copying strategy. For tuple PADs, although we designed and implemented an algorithm using RSA accumulators that offers constant update size, constant storage per update, constant proof size, and sublinear computation per update, we found that RSA accumulators are so expensive that they are never worthwhile. We find that other optimizations in the literature for tuple PADs are more cost-effective.
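The “slice through a cryptographic data structure” that a Merkle-tree-based PAD returns is just an authentication path. Below is a minimal verifier for a binary hash tree, leaving out the versioning machinery (Sarnak-Tarjan nodes, path copying) that makes the dictionary persistent:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf: bytes, path: list, root: bytes) -> bool:
    # `path` lists (sibling_hash, sibling_is_left) pairs from the leaf up.
    # Recomputing the root from the leaf and its siblings proves the leaf's
    # membership in the tree committed to by `root`.
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```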

Journal ArticleDOI
TL;DR: A lightweight scheme, DART, is proposed that uses time-based authentication in combination with random linear transformations to defend against pollution attacks; EDART enhances DART with an optimistic forwarding scheme to further improve system performance.
Abstract: Recent studies have shown that network coding can provide significant benefits to network protocols, such as increased throughput, reduced network congestion, higher reliability, and lower power consumption. The core principle of network coding is that intermediate nodes actively mix input packets to produce output packets. This mixing subjects network coding systems to a severe security threat, known as a pollution attack, where attacker nodes inject corrupted packets into the network. Corrupted packets propagate in an epidemic manner, depleting network resources and significantly decreasing throughput. Pollution attacks are particularly dangerous in wireless networks, where attackers can easily inject packets or compromise devices due to the increased network vulnerability. In this article, we address pollution attacks against network coding systems in wireless mesh networks. We demonstrate that previous solutions are impractical in wireless networks, incurring an unacceptably high degradation of throughput. We propose a lightweight scheme, DART, that uses time-based authentication in combination with random linear transformations to defend against pollution attacks. We further improve system performance and propose EDART, which enhances DART with an optimistic forwarding scheme. We also propose efficient attacker identification schemes for both DART and EDART that enable quick attacker isolation and the selection of attacker-free paths, achieving additional performance improvement. A detailed security analysis shows that the probability of a polluted packet passing our verification procedure is very low (less than 0.002% in typical settings). Performance results using the well-known MORE protocol and realistic link quality measurements from the Roofnet experimental testbed show that our schemes improve system performance by over 20 times compared with previous solutions.
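Why pollution spreads epidemically is visible in the coding operation itself: a forwarder's outputs are random linear combinations of its inputs, so a single corrupted input taints every packet mixed from it. A toy mixer over a prime field (practical systems code over GF(2^8), and DART's time-based verification is not shown here):

```python
import random

P = 2**31 - 1   # toy prime field standing in for GF(2^8) symbol arithmetic

def mix(packets: list) -> tuple:
    # Network coding at a forwarder: emit one random linear combination of
    # the buffered input packets (equal-length lists of symbols),
    # coefficient-wise mod P. If any input is polluted, the output is
    # polluted too, which is how corruption spreads downstream.
    coeffs = [random.randrange(1, P) for _ in packets]
    coded = [sum(c * s for c, s in zip(coeffs, column)) % P
             for column in zip(*packets)]
    return coded, coeffs
```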

Journal ArticleDOI
TL;DR: Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing.
Abstract: Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated inputs are rejected at the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. This article presents TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel features: (1) TaintScope is a checksum-aware fuzzing tool. It can identify checksum fields in inputs, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. Furthermore, it can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. (2) TaintScope is a taint-based fuzzing tool working at the x86 binary level. Based on fine-grained dynamic taint tracing, TaintScope identifies the “hot bytes” in a well-formed input that are used in security-sensitive operations (e.g., invoking system/library calls), and then focuses on modifying such bytes with random or boundary values. (3) TaintScope is also a symbolic-execution-based fuzzing tool. It can symbolically evaluate a trace, reason about all possible values that can execute the trace, and then detect potential vulnerabilities on the trace. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 30 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Flash Player, Google Picasa, and Microsoft Paint. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Vendor patches have been released or are in preparation based on our reports.
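The checksum-aware step can be pictured with a toy mutator: fuzz a taint-identified “hot byte” first, then recompute the integrity field so the input survives the early check. TaintScope recovers the field's location and its relation to the payload automatically via taint analysis and symbolic execution; the fixed CRC32-over-payload layout below is purely an assumption for illustration:

```python
import random
import zlib

def fuzz_and_fix(data: bytearray, csum_off: int, hot: list) -> bytearray:
    # `hot` holds indices of payload bytes (assumed to lie after the 4-byte
    # checksum field) that taint tracing marked as security-sensitive.
    data[random.choice(hot)] ^= 0xFF          # mutate one hot byte
    payload = data[csum_off + 4:]
    # Repair the big-endian CRC32 field so the target's integrity check
    # passes and the mutation reaches deep program logic.
    data[csum_off:csum_off + 4] = zlib.crc32(payload).to_bytes(4, "big")
    return data
```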

Journal ArticleDOI
TL;DR: Results indicate that a functionality-based approach has significant potential in terms of enabling end users with limited expertise to defend themselves against insecure and malicious software.
Abstract: Protecting end users from security threats is an extremely difficult, but increasingly critical, problem. Traditional security models that focused on separating users from each other have proven ineffective in an environment of widespread software vulnerabilities and rampant malware. However, alternative approaches that provide more finely grained security generally require greater expertise than typical end users can reasonably be expected to have, and consequently have had limited success. The functionality-based application confinement (FBAC) model is designed to allow end users with limited expertise to assign applications hierarchical and parameterised policy abstractions based upon the functionalities each program is intended to perform. To validate the feasibility of this approach and assess the usability of existing mechanisms, a usability study was conducted comparing an implementation of the FBAC model with the widely used Linux-based SELinux and AppArmor security schemes. The results showed that the functionality-based mechanism enabled end users to effectively control the privileges of their applications with far greater success than widely used alternatives. In particular, policies created using FBAC were more likely to be enforced and exhibited significantly lower risk exposure, while not interfering with the ability of the application to perform its intended task. In addition to the success of the functionality-based approach, the usability study also highlighted a number of limitations and problems with existing mechanisms. These results indicate that a functionality-based approach has significant potential in terms of enabling end users with limited expertise to defend themselves against insecure and malicious software.

Journal ArticleDOI
TL;DR: The specification serves as a reference point that is the first step in deriving authorization-system component specifications from which a programmer with little security expertise could implement a high-assurance enforcement system for the specified policy.
Abstract: Group-Centric Secure Information Sharing (g-SIS) envisions bringing users and objects together in a group to facilitate agile sharing of information brought in from external sources as well as creation of new information within the group. We expect g-SIS to be orthogonal and complementary to authorization systems deployed within participating organizations. The metaphors “secure meeting room” and “subscription service” characterize the g-SIS approach. The focus of this article is on developing the foundations of isolated g-SIS models. Groups are isolated in the sense that membership of a user or an object in a group does not affect their authorizations in other groups. Present contributions include the following: formal specification of core properties that at once help to characterize the family of g-SIS models and provide a “sanity check” for full policy specifications; informal discussion of policy design decisions that differentiate g-SIS policies from one another with respect to the authorization semantics of group operations; formalization and verification of a specific member of the family of g-SIS models; demonstration that the core properties are logically consistent and mutually independent; and identification of several directions for future extensions. The formalized specification is highly abstract. Besides certain well-formedness requirements that specify, for instance, that a user cannot leave a group unless she is a member, it constrains only whether user-level read and write operations are authorized, and it does so solely in terms of the history of group operations: join and leave for users, and add, create, and remove for objects. This makes temporal logic one of the few formalisms in which the specification can be clearly and concisely expressed. The specification serves as a reference point that is the first step in deriving authorization-system component specifications from which a programmer with little security expertise could implement a high-assurance enforcement system for the specified policy.
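As a flavor of why past-time temporal logic fits this style of specification, here is an illustrative g-SIS-style property (not one of the article's verbatim core properties): a read is authorized only if the user has joined and not since left, and the object has been added and not since removed.

```latex
% \mathcal{S} is the past-time "since" operator; \Box is "always".
\Box\Bigl( \mathit{Authz}(u,o,\mathsf{read}) \;\rightarrow\;
    \bigl(\neg\mathit{Leave}(u)\ \mathcal{S}\ \mathit{Join}(u)\bigr)
    \;\wedge\;
    \bigl(\neg\mathit{Remove}(o)\ \mathcal{S}\ \mathit{Add}(o)\bigr) \Bigr)
```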

Journal ArticleDOI
TL;DR: A new attack on network coordinate systems, Frog-Boiling, is described that is more disruptive than previously known attacks, demonstrating that systems that attempt to reject “bad” inputs by statistical means or reputation cannot be used to secure a network coordinate system.
Abstract: A network coordinate system assigns Euclidean “virtual” coordinates to every node in a network to allow easy estimation of network latency between pairs of nodes that have never contacted each other. These systems have been implemented in a variety of applications, most notably the popular Vuze BitTorrent client. Zage and Nita-Rotaru (at CCS 2007) and, independently, Kaafar et al. (at SIGCOMM 2007) demonstrated that several widely-cited network coordinate systems are prone to simple attacks, and proposed mechanisms to defeat these attacks using outlier detection to filter out adversarial inputs. Kaafar et al. go a step further and require that a fraction of the network is trusted. More recently, Sherr et al. (at USENIX ATC 2009) proposed Veracity, a distributed reputation system to secure network coordinate systems. We describe a new attack on network coordinate systems, Frog-Boiling, that defeats all of these defenses. Thus, even a system with trusted entities is still vulnerable to attacks. Moreover, having witnesses vouch for your coordinates as in Veracity does not prevent our attack. Finally, we demonstrate empirically that the Frog-Boiling attack is more disruptive than the previously known attacks: systems that attempt to reject “bad” inputs by statistical means or reputation cannot be used to secure a network coordinate system.
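The attack needs little more machinery than patience, as a sketch makes plain: each reported coordinate differs from the previous one by less than the outlier filter's tolerance, yet the sequence converges on an arbitrary target. The coordinates, step size, and tolerance below are illustrative, not the paper's parameters:

```python
def frog_boiling_reports(start, target, step=0.05, tol=1e-9):
    # Emit a sequence of lies, each within `step` of the previous report,
    # so every individual report passes an outlier test calibrated against
    # recent history, while the cumulative drift is unbounded.
    coord = list(start)
    while max(abs(t - c) for c, t in zip(coord, target)) > tol:
        coord = [c + max(-step, min(step, t - c))
                 for c, t in zip(coord, target)]
        yield tuple(coord)
```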

Journal ArticleDOI
TL;DR: This article introduces and evaluates the mechanisms for authorization “recycling” in RBAC enterprise systems and demonstrates that authorization recycling can improve the performance of distributed-access control mechanisms.
Abstract: As distributed applications increase in size and complexity, traditional authorization architectures based on a dedicated authorization server become increasingly fragile because this decision point represents a single point of failure and a performance bottleneck. Authorization caching, which enables the reuse of previous authorization decisions, is one technique that has been used to address these challenges. This article introduces and evaluates the mechanisms for authorization “recycling” in RBAC enterprise systems. The algorithms that support these mechanisms allow making precise and approximate authorization decisions, thereby masking possible failures of the authorization server and reducing its load. We evaluate these algorithms analytically as well as using simulation and a prototype implementation. Our evaluation results demonstrate that authorization recycling can improve the performance of distributed-access control mechanisms.
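A toy secondary decision point conveys the precise/approximate distinction: cached decisions are replayed exactly, and on a cache miss the RBAC role hierarchy lets the cache infer a safe answer, since a senior role inherits a junior role's permissions. The structure and names below are invented for illustration, not the article's algorithms:

```python
class RecyclingSDP:
    def __init__(self, juniors: dict):
        self.cache = {}         # (role, permission) -> bool, from past PDP answers
        self.juniors = juniors  # role -> set of roles it is senior to

    def record(self, role: str, perm: str, allowed: bool) -> None:
        self.cache[(role, perm)] = allowed

    def decide(self, role: str, perm: str):
        # Precise recycling: replay an identical cached decision.
        if (role, perm) in self.cache:
            return self.cache[(role, perm)]
        # Approximate recycling: a permission granted to a junior role is
        # safely granted to its seniors under standard RBAC inheritance.
        if any(self.cache.get((j, perm)) for j in self.juniors.get(role, ())):
            return True
        return None   # unknown: fall through to the (possibly failed) PDP
```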

Journal ArticleDOI
TL;DR: New Jersey’s protocols for the use of tamper-evident seals have not been at all effective; the more general problem of seals in democratic elections is also discussed.
Abstract: Tamper-evident seals are used by many states’ election officials on voting machines and ballot boxes, either to protect the computer and software from fraudulent modification or to protect paper ballots from fraudulent substitution or stuffing. Physical tamper-indicating seals can usually be easily defeated, given the way they are typically made and used, and the effectiveness of seals depends on the protocol for their application and inspection. The legitimacy of our elections may therefore depend on whether a particular state’s use of seals is effective to prevent, deter, or detect election fraud. This paper is a case study of the use of seals on voting machines by the State of New Jersey. I conclude that New Jersey’s protocols for the use of tamper-evident seals have not been at all effective, and I close with a discussion of the more general problem of seals in democratic elections.

Journal ArticleDOI
TL;DR: Garm, a tool that uses binary rewriting to implement this technique on arbitrary binaries, can successfully trace the provenance of data across executions of multiple applications and enforce data access policies on the application's executions.
Abstract: We present a new technique that can trace data provenance and enforce data access policies across multiple applications and machines. We have developed Garm, a tool that uses binary rewriting to implement this technique on arbitrary binaries. Users can use Garm to attach access policies to data and Garm enforces the policy on all accesses to the data (and any derived data) across all applications and executions. Garm uses static analysis to generate optimized instrumentation that traces the provenance of an application's state and the policies that apply to this state. Garm monitors the interactions of the application with the underlying operating system to enforce policies. Conceptually, Garm combines trusted computing support from the underlying operating system with a stream cipher to ensure that data protected by an access policy cannot be accessed outside of Garm's policy enforcement mechanisms. We have evaluated Garm with several common Linux applications. We found that Garm can successfully trace the provenance of data across executions of multiple applications and enforce data access policies on the application's executions.

Journal ArticleDOI
TL;DR: This article defines enforcement schemes for interval-based access control policies for which it is possible, in almost all cases, to obtain exact values for the schemes' complexity, thereby subsuming a substantial body of work in the literature.
Abstract: The enforcement of access control policies using cryptography has received considerable attention in recent years and the security of such enforcement schemes is increasingly well understood. Recent work in the area has considered the efficient enforcement of temporal and geo-spatial access control policies, and asymptotic results for the time and space complexity of efficient enforcement schemes have been obtained. However, for practical purposes, it is useful to have explicit bounds for the complexity of enforcement schemes. In this article we consider interval-based access control policies, of which temporal and geo-spatial access control policies are special cases. We define enforcement schemes for interval-based access control policies for which it is possible, in almost all cases, to obtain exact values for the schemes' complexity, thereby subsuming a substantial body of work in the literature. Moreover, our enforcement schemes are more practical than existing schemes, in the sense that they operate in the same way as standard cryptographic enforcement schemes, unlike other efficient schemes in the literature. The main difference between our approach and earlier work is that we develop techniques that are specific to the cryptographic enforcement of interval-based access control policies, rather than applying generic techniques that give rise to complex constructions and asymptotic bounds.
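The simplest instance of cryptographic enforcement for temporal policies is a one-way hash chain over time periods: a user authorized from period a onward receives k_a and derives every later key locally, while earlier keys remain out of reach. The article's schemes generalize this to arbitrary intervals with exact complexity bounds; the sketch below fixes SHA-256 as an assumption:

```python
import hashlib

def key_for(seed: bytes, t: int) -> bytes:
    # k_0 = seed, k_t = H(k_{t-1}). One-wayness of H means that k_a yields
    # k_b for every b >= a, but reveals nothing about earlier periods.
    k = seed
    for _ in range(t):
        k = hashlib.sha256(k).digest()
    return k

# A user issued key_for(seed, a) recomputes key_for(seed, b) for any b >= a
# by hashing b - a more times; no trusted party is needed at access time.
```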

Journal ArticleDOI
TL;DR: The Information Flow Enhanced Discretionary Access Control (IFEDAC) model is proposed, which combines the discretionary policy of DAC with the dynamic information flow techniques of MAC, achieving the best of both worlds.
Abstract: Discretionary Access Control (DAC) is the primary access control mechanism in today’s major operating systems. It is, however, vulnerable to Trojan horse attacks and attacks exploiting buggy software. We propose to combine the discretionary policy in DAC with the dynamic information flow techniques in MAC, thereby achieving the best of both worlds, that is, DAC’s easy-to-use discretionary policy specification and MAC’s defense against threats caused by Trojan horses and buggy programs. We propose the Information Flow Enhanced Discretionary Access Control (IFEDAC) model that implements this design philosophy. We describe our design of IFEDAC, and discuss its relationship with the Usable Mandatory Integrity Protection (UMIP) model that we proposed earlier. In addition, we analyze their security properties and their relationships with other protection systems. We also describe our implementations of IFEDAC in Linux, the evaluation results, and our deployment experiences with the systems.
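The MAC ingredient can be shown in miniature with two integrity levels and low-water-mark tracking, a named stand-in here: a process that reads low-integrity input is itself downgraded, so a Trojan horse it subsequently influences cannot silently modify high-integrity objects. IFEDAC's actual labels are richer (tracking sets of responsible principals), so this two-level version is only an illustration:

```python
HIGH, LOW = 1, 0

def after_read(proc_level: int, obj_level: int) -> int:
    # Low-water-mark: reading tainted data taints the process.
    return min(proc_level, obj_level)

def may_write(proc_level: int, obj_level: int) -> bool:
    # A (possibly tainted) process may not modify higher-integrity objects,
    # which blocks the classic Trojan-horse flow that plain DAC permits.
    return proc_level >= obj_level
```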

Journal ArticleDOI
TL;DR: In this article, the authors propose a stateful anonymous credential system that allows the provider to implement nontrivial, real-world access controls on oblivious protocols conducted with anonymous users.
Abstract: The use of privacy-enhancing cryptographic protocols, such as anonymous credentials and oblivious transfer, could have a detrimental effect on the ability of providers to effectively implement access controls on their content. In this article, we propose a stateful anonymous credential system that allows the provider to implement nontrivial, real-world access controls on oblivious protocols conducted with anonymous users. Our system models the behavior of users as a state machine and embeds that state within an anonymous credential to restrict access to resources based on the state information. The use of state machine models of user behavior allows the provider to restrict the users' actions according to a wide variety of access control models without learning anything about the users' identities or actions. Our system is secure in the standard model under basic assumptions and, after an initial setup phase, each transaction requires only constant time. As a concrete example, we show how to implement the Brewer-Nash (Chinese Wall) and Bell-La Padula (Multilevel Security) access control models within our credential system. Furthermore, we combine our credential system with an adaptive oblivious transfer scheme to create a privacy-friendly oblivious database with strong access controls.
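The state-machine idea is easy to picture outside the cryptography. The sketch below encodes a Brewer-Nash (Chinese Wall) policy as a transition table in which touching one conflict class removes any transition into its competitor; in the actual system this state is embedded in the anonymous credential, so the provider enforces the machine without learning who the user is. The table entries are invented:

```python
# (current state, requested resource) -> next state; a missing entry = deny.
TRANSITIONS = {
    ("start", "bank_A"): "committed_A",
    ("start", "bank_B"): "committed_B",
    ("committed_A", "bank_A"): "committed_A",   # same side of the wall: OK
    ("committed_B", "bank_B"): "committed_B",
    # no ("committed_A", "bank_B") or ("committed_B", "bank_A"): Chinese Wall
}

def step(state: str, resource: str):
    return TRANSITIONS.get((state, resource))   # None means access denied
```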

Journal ArticleDOI
TL;DR: A new hash chain construction, the Duplex Hash Chain, is proposed, which allows us to achieve bit-by-bit authentication that is robust to low bit error rates and well suited for wireless broadcast communications characterized by low packet losses such as in satellite networks.
Abstract: We present a novel video stream authentication scheme which combines signature amortization by means of hash chains and an advanced watermarking technique. We propose a new hash chain construction, the Duplex Hash Chain, which allows us to achieve bit-by-bit authentication that is robust to low bit error rates. This construction is well suited for wireless broadcast communications characterized by low packet losses, such as satellite networks. Moreover, neither hardware upgrades nor specific end-user equipment is needed to enjoy the authentication services. The computational overhead at the receiver amounts to only two hashes per block of pictures and one digital signature verification for the whole received stream. This overhead introduces a provably negligible decrease in video quality. A thorough analysis of the proposed solution is provided in conjunction with extensive simulations.
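The amortization baseline that the Duplex Hash Chain refines can be stated in a few lines: embed in each block the hash of its successor, so a single signature over the first block's digest authenticates the entire stream, and each received block costs one hash to check. The plain chain below lacks the duplex construction's robustness to bit errors:

```python
import hashlib

def build_chain(blocks: list) -> tuple:
    # Working backwards, append to each block the digest of the (already
    # chained) next block. Returns the chained blocks plus the digest of
    # the first one, which is the only value that needs a digital signature.
    chained, next_digest = [], b""
    for blk in reversed(blocks):
        payload = blk + next_digest
        next_digest = hashlib.sha256(payload).digest()
        chained.append(payload)
    chained.reverse()
    return chained, next_digest
```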

Journal ArticleDOI
TL;DR: This article introduces a technique guaranteeing access pattern privacy against a computationally bounded adversary in outsourced data storage, with communication and computation overheads orders of magnitude better than existing approaches, and shows that on off-the-shelf hardware, large data sets can be queried obliviously orders of magnitude faster than in existing work.
Abstract: In this article we introduce a technique guaranteeing access pattern privacy against a computationally bounded adversary in outsourced data storage, with communication and computation overheads orders of magnitude better than existing approaches. In the presence of a small amount of temporary storage (enough to store O(√n log n) items and IDs, where n is the number of items in the database), we can achieve access pattern privacy with computational complexity of less than O(log² n) per query (as compared to, for instance, O(log⁴ n) for existing approaches). We achieve these novel results by applying new insights based on probabilistic analyses of data shuffling algorithms to Oblivious RAM, allowing us to significantly improve its asymptotic complexity. This results in a protocol crossing the boundary between theory and practice and becoming generally applicable for access pattern privacy. We show that on off-the-shelf hardware, large data sets can be queried obliviously orders of magnitude faster than in existing work.

Journal ArticleDOI
TL;DR: This article considers the problem of translating existing access control policies defined over source databases in a manner that allows the original semantics to be observed while becoming applicable across the entire data federation, and provides an algorithm for automating the translation.
Abstract: Data federations provide seamless access to multiple heterogeneous and autonomous data sources pertaining to a large organization. As each source database defines its own access control policies for a set of local identities, enforcing such policies across the federation becomes a challenge. In this article, we first consider the problem of translating existing access control policies defined over source databases in a manner that allows the original semantics to be observed while becoming applicable across the entire data federation. We show that such a translation is always possible, and provide an algorithm for automating the translation. We show that verifying whether a translated policy obeys the semantics of the original access control policy defined over a source database is intractable, even under restrictive scenarios. We then describe a practical algorithmic framework for translating relational access control policies into their XML equivalent, expressed in the eXtensible Access Control Markup Language. Finally, we examine the difficulty of minimizing translated policies, and contribute a minimization algorithm applicable to nonrecursive translated policies.