Author

Dorothy E. Denning

Bio: Dorothy E. Denning is an academic researcher from the Naval Postgraduate School. The author has contributed to research in topics: Encryption & Cryptography. The author has an h-index of 37, co-authored 125 publications receiving 15,120 citations. Previous affiliations of Dorothy E. Denning include SRI International & Purdue University.


Papers
Proceedings ArticleDOI
07 Apr 1986
TL;DR: Basic view concepts are described for a multilevel secure relational database model that addresses context- and content-dependent classification, dynamic classification, inference, aggregation, and sanitization.
Abstract: Because views on relational database systems mathematically define arbitrary sets of stored and derived data, they have been proposed as a way of handling context- and content-dependent classification, dynamic classification, inference, aggregation, and sanitization in multilevel database systems. This paper describes basic view concepts for a multilevel secure relational database model that addresses the above issues. The model treats stored and derived data uniformly within the database schema. All data in the database is classified according to views called classification constraints, which specify security levels for related data. In addition, views called aggregation constraints specify classifications for aggregates that are classified higher than the constituent elements. All data accesses are confined to a third set of views called access views, which filter out all data classified higher than their declared view level.
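The access-view idea from the abstract can be illustrated with a short sketch. This is a hypothetical example: the security levels, record fields, and data are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of an access view: each record carries a
# classification level, and a user's access view filters out all data
# classified higher than the user's declared level.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

records = [
    {"ship": "A", "cargo": "grain",    "level": "unclassified"},
    {"ship": "B", "cargo": "missiles", "level": "secret"},
    {"ship": "C", "cargo": "fuel",     "level": "confidential"},
]

def access_view(records, user_level):
    """Return only the records classified at or below the user's level."""
    cutoff = LEVELS[user_level]
    return [r for r in records if LEVELS[r["level"]] <= cutoff]

print([r["ship"] for r in access_view(records, "confidential")])  # ['A', 'C']
```

A confidential-level user never sees ship B's secret cargo record; a top-secret user sees all three.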

73 citations

Proceedings ArticleDOI
01 Apr 1984
TL;DR: The filter attaches a security classification label to each data record, computes an unforgeable cryptographic checksum over the record (including the label), and stores the checksum in the database.
Abstract: The 1982 Air Force Summer Study on Multilevel Data Management Security recommended several approaches to designing a multilevel secure database system. One of the approaches uses an untrusted database system to manage the data, and an isolated trusted filter to enforce security. The filter attaches a security classification label to each data record, computes an unforgeable cryptographic checksum over the record (including the label), and stores the checksum in the database. The checksum protects against modification to the data and its classification label. This paper discusses the implementation, security, and limitations of the approach.
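The filter's label-binding step can be sketched in a few lines. This is a hedged illustration, using HMAC-SHA256 as a modern stand-in for the paper's "unforgeable cryptographic checksum"; the key and record contents are invented for the example.

```python
import hashlib
import hmac

# Illustrative stand-in for the trusted filter's checksum: a keyed MAC
# computed over the record together with its classification label, so
# that tampering with either the data or the label is detectable.
KEY = b"filter-secret-key"  # hypothetical key held only by the trusted filter

def seal(record: bytes, label: bytes) -> bytes:
    """Compute a checksum binding the record to its classification label."""
    return hmac.new(KEY, record + b"|" + label, hashlib.sha256).digest()

def verify(record: bytes, label: bytes, checksum: bytes) -> bool:
    """Detect any modification of the record or its classification label."""
    return hmac.compare_digest(seal(record, label), checksum)

c = seal(b"flight-plan-42", b"SECRET")
print(verify(b"flight-plan-42", b"SECRET", c))        # True
print(verify(b"flight-plan-42", b"UNCLASSIFIED", c))  # False: relabeling detected
```

Because the untrusted database system never holds the key, it can store records and checksums but cannot forge a valid checksum for altered or relabeled data.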

72 citations

Journal ArticleDOI
TL;DR: The paper examines the impact of Stuxnet on various domains of action where cyber-attacks play a role, including state-level conflict, terrorism, activism, crime, and pranks, and whether such attacks would be consistent with other trends in the domain.
Abstract: This paper considers the impact of Stuxnet on cyber-attacks and cyber-defense. It first reviews trends in cyber-weapons and how Stuxnet fits into these trends. Because Stuxnet targeted an industrial control system in order to wreak physical damage, the focus is on weapons that target systems of that type and produce physical effects. The paper then examines the impact of Stuxnet on various domains of action where cyber-attacks play a role, including state-level conflict, terrorism, activism, crime, and pranks. For each domain, it considers the potential for new types of cyber-attacks, especially attacks against industrial control systems, and whether such attacks would be consistent with other trends in the domain. Finally, the paper considers the impact of Stuxnet on cyber-defense.

72 citations

Proceedings ArticleDOI
22 Apr 1985
TL;DR: The problem of user inference can be solved with the concept of a commutative filter that ensures that the result returned to a user is equivalent to one that would have been obtained had the query been posed against an authorized view of the database.
Abstract: Disclosure of classified data in multilevel database systems is threatened by direct user access, user inference, Trojan Horse release, and Trojan Horse leaks. Earlier work showed how the problems of direct user access and Trojan Horse release can be solved by using a trusted filter and cryptographic checksums, but left the problems of inference and leaks open. We now show how the problem of user inference can be solved with the concept of a commutative filter that ensures that the result returned to a user is equivalent to one that would have been obtained had the query been posed against an authorized view of the database. The technique allows query selections, some projections, query optimization, and subquery handling to be performed by the database system. It does not solve the Trojan Horse leakage problem.

71 citations

01 Jan 2006
TL;DR: A model for understanding, comparing, and developing methods of deceptive hiding is introduced; it characterizes deceptive hiding in terms of how it defeats the underlying processes that an adversary uses to discover the hidden thing.
Abstract: Deception offers one means of hiding things from an adversary. This paper introduces a model for understanding, comparing, and developing methods of deceptive hiding. The model characterizes deceptive hiding in terms of how it defeats the underlying processes that an adversary uses to discover the hidden thing. An adversary's process of discovery can take three forms: direct observation (sensing and recognizing), investigation (evidence collection and hypothesis formation), and learning from other people or agents. Deceptive hiding works by defeating one or more elements of these processes. The model is applied to computer security; it also is applicable to other domains.

69 citations


Cited by
Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Journal ArticleDOI
TL;DR: This survey tries to provide a structured and comprehensive overview of the research on anomaly detection by grouping existing techniques into different categories based on the underlying approach adopted by each technique.
Abstract: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.
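As a concrete instance of the survey's notion of a "basic technique" within one category, here is a minimal statistical anomaly detector. The key assumption (typical of the statistical category) is that normal behavior clusters near the mean; the threshold and sample readings are illustrative only.

```python
import statistics

# Minimal statistical anomaly detector: flag points whose distance from
# the mean exceeds a multiple of the sample standard deviation. This
# assumes normal data clusters near the mean, one of the per-category
# assumptions the survey uses to characterize techniques.
def zscore_anomalies(data, threshold=2.0):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [x for x in data if abs(x - mu) > threshold * sigma]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]
print(zscore_anomalies(readings))  # [42.0]
```

Note that the outlier itself inflates the mean and standard deviation (the masking effect), which is one reason the survey catalogs many variants beyond this basic form.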

9,627 citations

Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment and examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
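The k-anonymity condition from the abstract is simple to check mechanically: every combination of quasi-identifier values in the release must occur at least k times. A hedged sketch, with invented attributes and records:

```python
from collections import Counter

# Check the k-anonymity condition: each combination of quasi-identifier
# values must appear in at least k rows, so no individual's record can
# be distinguished from at least k-1 others. Data is illustrative.
def is_k_anonymous(rows, quasi_ids, k):
    counts = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(c >= k for c in counts.values())

release = [
    {"zip": "021**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "021**", "age": "20-29", "diagnosis": "cold"},
    {"zip": "100**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "100**", "age": "30-39", "diagnosis": "asthma"},
]

print(is_k_anonymous(release, ["zip", "age"], 2))  # True
print(is_k_anonymous(release, ["zip", "age"], 3))  # False
```

As the paper stresses, passing this check alone is not sufficient: the re-identification attacks it examines succeed against k-anonymous releases unless the accompanying policies are also respected.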

7,925 citations

Patent
30 Sep 2010
TL;DR: In this article, the authors proposed a secure content distribution method for a configurable general-purpose electronic commercial transaction/distribution control system, which includes a process for encapsulating digital information in one or more digital containers, a process for encrypting at least a portion of the digital information, a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers, and a process for delivering one or more digital containers to a digital information user.
Abstract: PROBLEM TO BE SOLVED: Electronic content information providers lack a commercially secure and effective method for a configurable general-purpose electronic commercial transaction/distribution control system. SOLUTION: In this system, having at least one protected processing environment for safely controlling at least a portion of the decoding of digital information, a secure content distribution method comprises a process for encapsulating digital information in one or more digital containers; a process for encrypting at least a portion of the digital information; a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers; a process for delivering one or more digital containers to a digital information user; and a process for using a protected processing environment to safely control at least a portion of the decoding of the digital information. COPYRIGHT: (C)2006, JPO&NCIPI

7,643 citations

Book ChapterDOI
04 Mar 2006
TL;DR: In this article, the authors show that for several particular applications substantially less noise is needed than was previously understood to be the case, and obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
Abstract: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = ∑i g(xi), where xi denotes the ith row of the database and g maps database rows to [0, 1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
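The calibration principle the abstract describes can be sketched for the simplest case, a counting query, whose sensitivity is 1 (adding or removing one row changes the count by at most 1). The database, query, and parameter values below are invented for illustration.

```python
import math
import random

# Sketch of noise calibrated to sensitivity: perturb the true answer of
# a query with Laplace noise whose scale is sensitivity / epsilon.
def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(db, predicate, epsilon):
    true_answer = sum(1 for row in db if predicate(row))
    sensitivity = 1.0  # one row changes a count by at most 1
    return true_answer + laplace_noise(sensitivity / epsilon)

ages = [23, 45, 67, 34, 52, 41]
print(noisy_count(ages, lambda a: a > 40, epsilon=0.5))  # true count 4, plus noise
```

Smaller epsilon (stronger privacy) means a larger noise scale; a higher-sensitivity query f would require proportionally more noise, which is exactly the calibration the paper formalizes.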

6,211 citations