Author

Dorothy E. Denning

Bio: Dorothy E. Denning is an academic researcher from the Naval Postgraduate School. The author has contributed to research topics including Encryption & Cryptography. The author has an h-index of 37 and has co-authored 125 publications receiving 15,120 citations. Previous affiliations of Dorothy E. Denning include SRI International & Purdue University.


Papers
Patent
16 Nov 2001
TL;DR: Geo-encryption, as described in this paper, is a method in which plaintext data is first encrypted using a data encrypting key generated at the time of encryption; that data encrypting key is then itself encrypted (or locked) using a key encrypting key and information derived from the location of the intended receiver.
Abstract: Access to digital data is controlled by encrypting the data in such a manner that, in a single digital data acquisition step, it can be decrypted only at a specified location, within a specific time frame, and with a secret key. Data encrypted in such a manner is said to be geo-encrypted. This geo-encryption process comprises a method in which plaintext data is first encrypted using a data encrypting key that is generated at the time of encryption. The data encrypting key is then encrypted (or locked) using a key encrypting key and information derived from the location of the intended receiver. The encrypted data encrypting key is then transmitted to the receiver along with the ciphertext data. The receiver must both be at the correct location and have a copy of the corresponding key decrypting key in order to derive the location information and decrypt the data encrypting key. After the data encrypting key is decrypted (or unlocked), it is used to decrypt the ciphertext. If an attempt is made to decrypt the data encrypting key at an incorrect location or using an incorrect secret key, the decryption will fail. If the sender so elects, access to digital data can also be controlled by encrypting it in such a manner that it must traverse a specific route from the sender to the recipient in order to enable decryption of the data. Key management can be handled using either private-key or public-key cryptography. If private-key cryptography is used, the sender can manage the secret key decrypting keys required for decryption in a secure manner that is transparent to the recipient. As a consequence of its ability to manipulate the secret keys, the sender of encrypted data retains the ability to control access to its plaintext even after its initial transmission.
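The following is a minimal sketch of the locking scheme described above, not the patented implementation: a per-message data encrypting key protects the plaintext, and that key is in turn locked under a key derived from the key encrypting key plus the receiver's expected location. The location string, the hash-based key derivation, and the use of Python's cryptography package (Fernet) are all illustrative assumptions.

```python
# Minimal sketch of the geo-encryption idea: the data encrypting key (DEK) is
# itself encrypted under a key derived from a key encrypting key (KEK) plus the
# receiver's expected location. Location handling here is a crude placeholder;
# requires the third-party "cryptography" package.
import base64
import hashlib
import os

from cryptography.fernet import Fernet


def location_lock_key(kek: bytes, location: str) -> bytes:
    """Derive a Fernet key from the KEK and a coarse location string (illustrative only)."""
    digest = hashlib.sha256(kek + location.encode()).digest()
    return base64.urlsafe_b64encode(digest)


def geo_encrypt(plaintext: bytes, kek: bytes, receiver_location: str):
    dek = Fernet.generate_key()                      # per-message data encrypting key
    ciphertext = Fernet(dek).encrypt(plaintext)      # encrypt the data with the DEK
    locked_dek = Fernet(location_lock_key(kek, receiver_location)).encrypt(dek)
    return locked_dek, ciphertext                    # both are sent to the receiver


def geo_decrypt(locked_dek: bytes, ciphertext: bytes, kek: bytes, my_location: str) -> bytes:
    # Raises InvalidToken if the location or the key is wrong.
    dek = Fernet(location_lock_key(kek, my_location)).decrypt(locked_dek)
    return Fernet(dek).decrypt(ciphertext)


if __name__ == "__main__":
    kek = os.urandom(32)
    locked, ct = geo_encrypt(b"meet at dawn", kek, "36.60N,121.87W")
    print(geo_decrypt(locked, ct, kek, "36.60N,121.87W"))  # b'meet at dawn'
```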

232 citations

Journal ArticleDOI
TL;DR: The model is formulated in two layers, one corresponding to a reference monitor that enforces mandatory security, and the other an extension of the standard relational model defining multilevel relations and formalizing policies for labeling new and derived data, data consistency, and discretionary security.
Abstract: A multilevel database is intended to provide the security needed for database systems that contain data at a variety of classifications and serve a set of users having different clearances. A formal security model for such a system is described. The model is formulated in two layers, one corresponding to a reference monitor that enforces mandatory security, and the second an extension of the standard relational model defining multilevel relations and formalizing policies for labeling new and derived data, data consistency, and discretionary security. The model also defines application-independent properties for entity integrity, referential integrity, and polyinstantiation integrity.
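As a rough illustration of the mandatory-security layer (not the paper's formal model), the sketch below labels each attribute value with a classification and has a reference-monitor-style filter return only the elements a user's clearance dominates; the relation, labels, and level ordering are hypothetical.

```python
# Illustrative sketch: each attribute value in a multilevel relation carries a
# classification label, and a reference-monitor-style filter shows a subject only
# the elements its clearance dominates.
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}


@dataclass
class LabeledValue:
    value: object
    label: str  # classification of this element


def dominates(clearance: str, label: str) -> bool:
    return LEVELS[clearance] >= LEVELS[label]


def filtered_view(relation, clearance):
    """Return the relation as seen by a subject with the given clearance.
    Values above the clearance are hidden (returned as None); a multilevel system
    with polyinstantiation could instead show a lower-classified cover value."""
    view = []
    for row in relation:
        view.append({attr: (lv.value if dominates(clearance, lv.label) else None)
                     for attr, lv in row.items()})
    return view


flights = [
    {"flight": LabeledValue("F-101", "UNCLASSIFIED"),
     "cargo": LabeledValue("medical supplies", "UNCLASSIFIED")},
    {"flight": LabeledValue("F-102", "UNCLASSIFIED"),
     "cargo": LabeledValue("reconnaissance gear", "SECRET")},
]

print(filtered_view(flights, "CONFIDENTIAL"))
```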

231 citations

Journal ArticleDOI
TL;DR: A taxonomy for key escrow encryption systems is presented, providing a structure for describing and categorizing the escrow mechanisms of complete systems as well as various design options.
Abstract: A key escrow encryption system allows authorized parties, under prescribed conditions, to decrypt ciphertext with the help of information supplied by one or more trusted parties holding special data recovery keys. The data recovery keys are not normally the same as those used to encrypt and decrypt the data, but rather provide a means of determining the data encryption/decryption keys. The term key escrow is used to refer to the safeguarding of these data recovery keys. Other terms used include key archive, key backup, and data recovery system. This article presents a taxonomy for key escrow encryption systems, providing a structure for describing and categorizing the escrow mechanisms of complete systems as well as various design options. Table 1 applies the taxonomy to several key escrow products or proposals. The sidebar, “Glossary and Sources,” identifies key terms, commercial products, and proposed systems.
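One design option such a taxonomy covers is splitting a data recovery key among multiple escrow agents so that no single agent can recover it alone. The toy sketch below uses two XOR shares; real escrow systems use threshold schemes and are considerably more involved.

```python
# Toy illustration of one escrow design option: the data recovery key is split into
# two XOR shares held by separate escrow agents; both must cooperate to reconstruct it.
import os


def split_key(recovery_key: bytes):
    share1 = os.urandom(len(recovery_key))
    share2 = bytes(a ^ b for a, b in zip(recovery_key, share1))
    return share1, share2  # one share per escrow agent


def reconstruct(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))


recovery_key = os.urandom(16)
s1, s2 = split_key(recovery_key)
assert reconstruct(s1, s2) == recovery_key  # neither share alone reveals the key
```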

217 citations

Journal ArticleDOI
TL;DR: The general nature of each type of control is described, along with the kinds of problems they can and cannot solve and their inherent limitations and weaknesses.
Abstract: The rising abuse of computers and increasing threat to personal privacy through data banks have stimulated much interest in the technical safeguards for data. There are four kinds of safeguards, each related to but distinct from the others. Access controls regulate which users may enter the system and subsequently which data sets an active user may read or write. Flow controls regulate the dissemination of values among the data sets accessible to a user. Inference controls protect statistical databases by preventing questioners from deducing confidential information by posing carefully designed sequences of statistical queries and correlating the responses. Statistical data banks are much less secure than most people believe. Data encryption attempts to prevent unauthorized disclosure of confidential information in transit or in storage. This paper describes the general nature of controls of each type, the kinds of problems they can and cannot solve, and their inherent limitations and weaknesses. The paper is intended for a general audience with little background in the area.
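As a small illustration of the first kind of safeguard, access controls, the sketch below consults an access matrix before permitting a read or write; the users, data sets, and rights are made up for the example and are not drawn from the paper.

```python
# Minimal sketch of access control via an access matrix: before any read or write,
# the system checks the rights recorded for (user, data set). Entries are hypothetical.
ACCESS_MATRIX = {
    ("alice", "payroll"): {"read", "write"},
    ("bob", "payroll"): {"read"},
}


def check_access(user: str, dataset: str, operation: str) -> bool:
    return operation in ACCESS_MATRIX.get((user, dataset), set())


assert check_access("bob", "payroll", "read")
assert not check_access("bob", "payroll", "write")   # bob may not modify payroll
assert not check_access("carol", "payroll", "read")  # unknown users get no rights
```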

217 citations

Journal ArticleDOI
TL;DR: A new inference control, called random sample queries, is proposed for safeguarding confidential data in on-line statistical databases that deals directly with the basic principle of compromise by making it impossible for a questioner to control precisely the formation of query sets.
Abstract: A new inference control, called random sample queries, is proposed for safeguarding confidential data in on-line statistical databases. The random sample queries control deals directly with the basic principle of compromise by making it impossible for a questioner to control precisely the formation of query sets. Queries for relative frequencies and averages are computed using random samples drawn from the query sets. The sampling strategy permits the release of accurate and timely statistics and can be implemented at very low cost. Analysis shows the relative error in the statistics decreases as the query set size increases; in contrast, the effort required to compromise increases with the query set size due to large absolute errors. Experiments performed on a simulated database support the analysis.
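A simplified sketch of the random-sample-queries idea follows: whether a record contributes to an answer is decided by a keyed hash the questioner cannot predict or control, and the statistic is computed over the resulting sample. The sampling rate, hashing scheme, and data are illustrative assumptions, not the paper's exact control.

```python
# Sketch of the random-sample-queries idea: whether a record contributes to a query's
# answer is decided by a keyed hash of the record, so a questioner cannot control
# precisely which records form the sampled query set.
import hashlib
import secrets

SECRET = secrets.token_bytes(16)   # known only to the database system
SAMPLE_RATE = 0.8                  # fraction of the query set retained, on average


def in_sample(record_id: str) -> bool:
    h = hashlib.sha256(SECRET + record_id.encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64 < SAMPLE_RATE


def sampled_average(records, predicate, field):
    """Average of `field` over the sampled subset of records matching `predicate`."""
    sample = [r for r in records if predicate(r) and in_sample(r["id"])]
    return sum(r[field] for r in sample) / len(sample) if sample else None


people = [{"id": f"p{i}", "dept": "cs" if i % 2 else "ee", "salary": 50000 + 1000 * i}
          for i in range(200)]
print(sampled_average(people, lambda r: r["dept"] == "cs", "salary"))
```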

212 citations


Cited by
Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols.

Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Journal ArticleDOI
TL;DR: This survey tries to provide a structured and comprehensive overview of the research on anomaly detection by grouping existing techniques into different categories based on the underlying approach adopted by each technique.
Abstract: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.
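As one example of the kind of "basic technique" the survey organizes its statistical category around, the sketch below flags values whose z-score exceeds a threshold; the threshold and data are arbitrary, and the example is not taken from the survey itself.

```python
# A basic statistical anomaly detection technique: flag points whose z-score
# against the data exceeds a threshold. (With few points and a single extreme
# outlier, the sample standard deviation is inflated, so a modest threshold is used.)
import statistics


def zscore_anomalies(values, threshold=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]


readings = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 42.0, 10.1]
print(zscore_anomalies(readings))  # [42.0]
```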

9,627 citations

Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment; the paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
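A small sketch of the k-anonymity property itself (checking it, not enforcing it via generalization or suppression as systems like Datafly do) might look like the following; the quasi-identifiers and records are hypothetical.

```python
# Sketch of the k-anonymity property check: a release is k-anonymous over the chosen
# quasi-identifiers if every combination of quasi-identifier values occurs in at
# least k rows. This only checks the property; it does not generalize or suppress.
from collections import Counter


def is_k_anonymous(rows, quasi_identifiers, k):
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())


release = [
    {"zip": "939**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "939**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "021**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "021**", "age": "40-49", "diagnosis": "diabetes"},
]

print(is_k_anonymous(release, ["zip", "age"], k=2))  # True
```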

7,925 citations

Patent
30 Sep 2010
TL;DR: In this article, the authors propose a secure content distribution method for a configurable, general-purpose electronic commercial transaction/distribution control system, comprising a process for encapsulating digital information in one or more digital containers, a process for encrypting at least a portion of the digital information, a process for associating at least partially secure control information that manages interactions with the encrypted digital information and/or digital containers, and a process for delivering one or more digital containers to a digital information user.
Abstract: The problem addressed is that electronic content providers have lacked a commercially secure and effective method for a configurable, general-purpose electronic commercial transaction/distribution control system. In the proposed system, which includes at least one protected processing environment for securely controlling at least a portion of the decryption of digital information, the secure content distribution method comprises: encapsulating digital information in one or more digital containers; encrypting at least a portion of the digital information; associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers; delivering one or more digital containers to a digital information user; and using a protected processing environment to securely control at least a portion of the decryption of the digital information.
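A very loose sketch of the container idea in the abstract: content is encrypted, packaged with control information, and released only through a gatekeeper standing in for the protected processing environment. The control field, the Fernet primitive, and the view-count policy are illustrative assumptions, not the patented system.

```python
# Rough sketch of a digital container: encrypted content plus control information,
# with a gatekeeper enforcing the controls before releasing plaintext.
# Requires the third-party "cryptography" package.
from dataclasses import dataclass

from cryptography.fernet import Fernet


@dataclass
class DigitalContainer:
    encrypted_content: bytes
    controls: dict          # e.g. {"max_views": 3}, purely illustrative


def package(content: bytes, key: bytes, controls: dict) -> DigitalContainer:
    return DigitalContainer(Fernet(key).encrypt(content), controls)


def open_container(container: DigitalContainer, key: bytes, views_so_far: int) -> bytes:
    if views_so_far >= container.controls.get("max_views", 0):
        raise PermissionError("control information forbids further access")
    return Fernet(key).decrypt(container.encrypted_content)


key = Fernet.generate_key()
box = package(b"pay-per-view video bytes", key, {"max_views": 1})
print(open_container(box, key, views_so_far=0))
```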

7,643 citations

Book ChapterDOI
04 Mar 2006
TL;DR: In this article, the authors show that for several particular applications substantially less noise is needed than was previously understood to be the case, and also obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
Abstract: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
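A minimal sketch of the calibration idea for a counting query, whose sensitivity is 1: the true answer is perturbed with Laplace noise of scale sensitivity/ε (sampled here as the difference of two exponentials). The particular query, ε, and data are illustrative.

```python
# Minimal sketch of calibrating noise to sensitivity: answer f(database) plus
# Laplace noise whose scale is the sensitivity of f divided by epsilon.
import random


def sensitivity_of_count() -> float:
    # Changing one row changes a counting query by at most 1.
    return 1.0


def noisy_count(database, predicate, epsilon: float) -> float:
    true_answer = sum(1 for row in database if predicate(row))
    scale = sensitivity_of_count() / epsilon
    # Laplace(scale) sampled as the difference of two exponentials with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_answer + noise


db = [{"age": a} for a in (23, 35, 41, 52, 29, 61)]
print(noisy_count(db, lambda r: r["age"] > 40, epsilon=0.5))
```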

6,211 citations