
Showing papers on "Information privacy published in 2002"


Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment, and examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
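The k-anonymity requirement itself is easy to state operationally: group the release by its quasi-identifier values and require every group to contain at least k rows. A minimal sketch of such a release checker (the field names and generalized values are illustrative assumptions, not from the paper):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values that appears
    in the release appears in at least k rows."""
    groups = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return all(size >= k for size in groups.values())

# A toy release where ZIP and age have already been generalized.
release = [
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "148**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "148**", "age": "20-29", "diagnosis": "ulcer"},
]
print(is_k_anonymous(release, ["zip", "age"], 2))  # True
print(is_k_anonymous(release, ["zip", "age"], 3))  # False
```

Note that the check says nothing about the sensitive column ("diagnosis"); that gap is exactly what the re-identification attacks discussed in the paper exploit when the accompanying policies are not respected.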

7,925 citations


Journal ArticleDOI
TL;DR: This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity and shows that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
Abstract: Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k-anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k-anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
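How generalization and suppression interact can be sketched with a Datafly-style greedy heuristic (not the MinGen algorithm itself), assuming a single ZIP-code quasi-identifier and digit-masking generalization:

```python
from collections import Counter

def generalize_zip(z, level):
    """Recode a ZIP string by masking its last `level` digits."""
    return z if level == 0 else z[:-level] + "*" * level

def datafly_style(rows, k, max_level=5):
    """Generalize the single quasi-identifier "zip" one level at a time;
    once at most k rows remain in undersized groups, suppress those rows
    rather than generalizing further (a Datafly-style stopping rule)."""
    for level in range(max_level + 1):
        gen = [dict(r, zip=generalize_zip(r["zip"], level)) for r in rows]
        counts = Counter(r["zip"] for r in gen)
        undersized = [r for r in gen if counts[r["zip"]] < k]
        if len(undersized) <= k:
            return [r for r in gen if counts[r["zip"]] >= k]
    return []

rows = [{"zip": z} for z in ["02138", "02139", "02141", "14850"]]
print(datafly_style(rows, k=2))  # [{'zip': '0213*'}, {'zip': '0213*'}]
```

The sketch shows why heuristics can over-distort: every surviving row is generalized to the same level, even when a finer recoding of part of the table would have sufficed, which is the kind of non-optimality the paper demonstrates against MinGen.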

1,765 citations


Proceedings ArticleDOI
03 Jun 2002
TL;DR: This paper explores techniques to execute SQL queries over encrypted data and develops an algebraic framework for splitting a query so as to minimize the computation at the client site.
Abstract: Rapid advances in networking and Internet technologies have fueled the emergence of the "software as a service" model for enterprise computing. Successful examples of commercially viable software services include rent-a-spreadsheet, electronic mail services, general storage services, and disaster protection services. The "Database as a Service" model provides users the power to create, store, modify, and retrieve data from anywhere in the world, as long as they have access to the Internet. It introduces several challenges, an important issue being data privacy. It is in this context that we specifically address the issue of data privacy. There are two main privacy issues. First, the owner of the data needs to be assured that the data stored on the service-provider site is protected against data thefts from outsiders. Second, data needs to be protected even from the service providers, if the providers themselves cannot be trusted. In this paper, we focus on the second challenge. Specifically, we explore techniques to execute SQL queries over encrypted data. Our strategy is to process as much of the query as possible at the service providers' site, without having to decrypt the data. Decryption and the remainder of the query processing are performed at the client site. The paper explores an algebraic framework to split the query to minimize the computation at the client site. Results of experiments validating our approach are also presented.
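The client/server split can be illustrated with a bucketization-style sketch: the server stores ciphertexts plus coarse bucket ids, filters by bucket, and the client decrypts the candidates and applies the exact predicate. The bucket width, record layout, and the stand-in "encryption" below are illustrative assumptions, not the paper's actual scheme:

```python
import json

def bucket(age, width=10):
    """Coarse bucket id stored in plaintext alongside each encrypted row;
    this is all the server learns about the value."""
    return age // width

# Toy stand-in for encryption (a real deployment would use a proper cipher).
encrypt = lambda rec: json.dumps(rec)[::-1]
decrypt = lambda blob: json.loads(blob[::-1])

people = [{"name": "a", "age": 23}, {"name": "b", "age": 27},
          {"name": "c", "age": 41}]
server_rows = [(encrypt(r), bucket(r["age"])) for r in people]

# Query: SELECT * WHERE age < 25.
# Server side: coarse filter over buckets that could satisfy the predicate.
candidates = [blob for blob, b in server_rows if b <= bucket(24)]
# Client side: decrypt the candidates and apply the exact predicate.
result = [decrypt(blob) for blob in candidates if decrypt(blob)["age"] < 25]
print(result)  # [{'name': 'a', 'age': 23}]
```

The server returns a superset of the answer (here two candidates for one match); the algebraic framework in the paper is about choosing the split so that this superset, and hence the client-side work, stays small.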

1,351 citations


Journal ArticleDOI
TL;DR: The findings indicate that consumers' ratings of trustworthiness of Web merchants did not parallel experts' evaluation of sites' use of the trust indices, and privacy and security features were of lesser importance than pleasure features when considering consumers' intention to purchase.
Abstract: While the growth of business-to-consumer electronic commerce seems phenomenal in recent years, several studies suggest that a large number of individuals using the Internet have serious privacy concerns, and that winning public trust is the primary hurdle to continued growth in e-commerce. This research investigated the relative importance, when purchasing goods and services over the Web, of four common trust indices (i.e. (1) third party privacy seals, (2) privacy statements, (3) third party security seals, and (4) security features). The results indicate consumers valued security features significantly more than the three other trust indices. We also investigated the relationship between these trust indices and the consumer's perceptions of a marketer's trustworthiness. The findings indicate that consumers' ratings of trustworthiness of Web merchants did not parallel experts' evaluation of sites' use of the trust indices. This study also examined the extent to which consumers are willing to provide private information to electronic and land merchants. The results revealed that when making the decision to provide private information, consumers rely on their perceptions of trustworthiness irrespective of whether the merchant is electronic only or land and electronic. Finally, we investigated the relative importance of three types of Web attributes: security, privacy and pleasure features (convenience, ease of use, cosmetics). Privacy and security features were of lesser importance than pleasure features when considering consumers' intention to purchase. A discussion of the implications of these results and an agenda for future research are provided.

1,195 citations


Journal ArticleDOI
TL;DR: This study examines the factor structure of the concern for information privacy (CFIP) instrument and suggests that each dimension of this instrument is reliable and distinct and that CFIP may be more parsimoniously represented as a higher-order factor structure rather than a correlated set of first-order factors.
Abstract: The arrival of the "information age" holds great promise in terms of providing organizations with access to a wealth of information stores. However, the free exchange of electronic information also brings the threat of providing easy, and many times unwanted, access to personal information. Given the potential backlash of consumers, it is imperative that both researchers and practitioners understand the nature of consumers' concern for information privacy and accurately model the construct within evolving research and business contexts. Drawing upon a sample of 355 consumers and working within the framework of confirmatory factor analysis, this study examines the factor structure of the concern for information privacy (CFIP) instrument posited by Smith et al. (1996). Consistent with prior findings, the results suggest that each dimension of this instrument is reliable and distinct. However, the results also suggest that CFIP may be more parsimoniously represented as a higher-order factor structure rather than a correlated set of first-order factors. The implication of these results is that each dimension of CFIP, as well as the supra-dimension derived from the associations among dimensions, is important in capturing CFIP and associating the construct with other important antecedents and consequences.

717 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: A novel paradigm for data management in which a third party service provider hosts "database as a service", providing its customers with seamless mechanisms to create, store, and access their databases at the host site is explored.
Abstract: We explore a novel paradigm for data management in which a third party service provider hosts "database as a service", providing its customers with seamless mechanisms to create, store, and access their databases at the host site. Such a model alleviates the need for organizations to purchase expensive hardware and software, deal with software upgrades, and hire professionals for administrative and maintenance tasks which are taken over by the service provider. We have developed and deployed a database service on the Internet, called NetDB2, which is in constant use. In a sense, a data management model supported by NetDB2 provides an effective mechanism for organizations to purchase data management as a service, thereby freeing them to concentrate on their core businesses. Among the primary challenges introduced by "database as a service" are the additional overhead of remote access to data, an infrastructure to guarantee data privacy, and user interface design for such a service. These issues are investigated. We identify data privacy as a particularly vital problem and propose alternative solutions based on data encryption. The paper is meant as a challenge for the database community to explore a rich set of research issues that arise in developing such a service.

707 citations


Book ChapterDOI
20 Aug 2002
TL;DR: This work presents a scheme, based on probabilistic distortion of user data, that can simultaneously provide a high degree of privacy to the user and retain a high level of accuracy in the mining results.
Abstract: Data mining services require accurate input data for their results to be meaningful, but privacy concerns may influence users to provide spurious information. We investigate here, with respect to mining association rules, whether users can be encouraged to provide correct information by ensuring that the mining process cannot, with any reasonable degree of certainty, violate their privacy. We present a scheme, based on probabilistic distortion of user data, that can simultaneously provide a high degree of privacy to the user and retain a high level of accuracy in the mining results. The performance of the scheme is validated against representative real and synthetic datasets.
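The distortion idea can be sketched with classic randomized response on a single boolean item: each bit is kept with probability p and flipped otherwise, and the miner recovers the true support only in aggregate. This is a simplified stand-in for the paper's scheme, not the scheme itself:

```python
import random

def distort(bit, p):
    """Keep the true bit with probability p; flip it otherwise."""
    return bit if random.random() < p else 1 - bit

def estimate_support(distorted, p):
    """Invert the distortion: observed = s*p + (1-s)*(1-p),
    so s = (observed + p - 1) / (2*p - 1)."""
    observed = sum(distorted) / len(distorted)
    return (observed + p - 1) / (2 * p - 1)

random.seed(1)
true_bits = [1] * 3000 + [0] * 7000          # true support of the item: 0.30
noisy = [distort(b, p=0.9) for b in true_bits]
print(round(estimate_support(noisy, p=0.9), 2))   # close to 0.30
```

No individual bit can be trusted (it is wrong 10% of the time), yet the aggregate estimate is accurate, which is the privacy/accuracy trade-off the scheme exploits for association rule mining.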

518 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This work describes an algorithm whereby a community of users can compute a public "aggregate" of their data that does not expose individual users' data, and uses homomorphic encryption to allow sums of encrypted vectors to be computed and decrypted without exposing individual data.
Abstract: Server-based collaborative filtering systems have been very successful in e-commerce and in direct recommendation applications. In future, they have many potential applications in ubiquitous computing settings. But today's schemes have problems such as loss of privacy, favoring retail monopolies, and hampering diffusion of innovations. We propose an alternative model in which users control all of their log data. We describe an algorithm whereby a community of users can compute a public "aggregate" of their data that does not expose individual users' data. The aggregate allows personalized recommendations to be computed by members of the community, or by outsiders. The numerical algorithm is fast, robust and accurate. Our method reduces the collaborative filtering task to an iterative calculation of the aggregate requiring only addition of vectors of user data. Then we use homomorphic encryption to allow sums of encrypted vectors to be computed and decrypted without exposing individual data. We give verification schemes for all parties in the computation. Our system can be implemented with untrusted servers, or with additional infrastructure, as a fully peer-to-peer (P2P) system.
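The additive property that makes this possible can be illustrated with a textbook Paillier cryptosystem, where multiplying ciphertexts yields an encryption of the sum of the plaintexts. This is a toy-key-size illustration of the primitive, not the paper's full protocol:

```python
import math, random

def keygen(bits=64):
    """Toy Paillier keypair (far too small for real use)."""
    def prime(b):
        while True:
            n = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, n - 1, n) == 1 for a in (2, 3, 5, 7, 11)):
                return n
    p, q = prime(bits), prime(bits)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)  # L(g^lam mod n^2)^-1
    return n, lam, mu

def enc(n, m):
    """Encrypt m under public key n (with g = n + 1)."""
    r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(n, lam, mu, c):
    """Decrypt with the private key (lam, mu)."""
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, lam, mu = keygen()
ratings = [4, 5, 3]                       # each user's private value
cts = [enc(n, v) for v in ratings]
agg = math.prod(cts) % (n * n)            # product of ciphertexts = sum of plaintexts
print(dec(n, lam, mu, agg))               # 12
```

Each user contributes only a ciphertext, yet anyone holding the decryption key can recover the sum, which is exactly the shape of aggregate the community computes over vectors of user data.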

498 citations


Journal ArticleDOI
TL;DR: Freenet is a distributed information storage system designed to address information privacy and survivability concerns; it implements strategies to protect data integrity and prevent privacy leaks in the face of malicious participants, and provides graceful degradation and redundant data availability in the face of unreliable ones.
Abstract: Freenet is a distributed information storage system designed to address information privacy and survivability concerns. Freenet operates as a self-organizing P2P network that pools unused disk space across potentially hundreds of thousands of desktop computers to create a collaborative virtual file system. Freenet employs a completely decentralized architecture. Given that the P2P environment is inherently untrustworthy and unreliable, we must assume that participants could operate maliciously or fail without warning at any time. Therefore, Freenet implements strategies to protect data integrity and prevent privacy leaks in the former instance, and provide for graceful degradation and redundant data availability in the latter. The system is also designed to adapt to usage patterns, automatically replicating and deleting files to make the most effective use of available storage in response to demand.

447 citations


Journal ArticleDOI
TL;DR: In this article, Li et al. examined several key mechanisms that can help increase customers' trust of e-commerce and decrease privacy concerns, including characteristic-based, transaction process-based and institution-based trust production.

432 citations


Proceedings ArticleDOI
Marc Langheinrich1
29 Sep 2002
TL;DR: In this paper, the authors introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system.
Abstract: Protecting personal privacy is going to be a prime concern for the deployment of ubiquitous computing systems in the real world. With daunting Orwellian visions looming, it is easy to conclude that tamper-proof technical protection mechanisms such as strong anonymization and encryption are the only solutions to such privacy threats. However, we argue that such perfect protection for personal information will hardly be achievable, and propose instead to build systems that help others respect our personal privacy, enable us to be aware of our own privacy, and to rely on social and legal norms to protect us from the few wrongdoers. We introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system. Even though such a system cannot guarantee our privacy, we believe that it can create a sense of accountability in a world of invisible services that we will be comfortable living in and interacting with.

Proceedings ArticleDOI
12 May 2002
TL;DR: This work investigates the identifiability of World Wide Web traffic based on this unconcealed information in a large sample of Web pages, and shows that it suffices to identify a significant fraction of them quite reliably.
Abstract: Encryption is often proposed as a tool for protecting the privacy of World Wide Web browsing. However, encryption-particularly as typically implemented in, or in concert with popular Web browsers-does not hide all information about the encrypted plaintext. Specifically, HTTP object count and sizes are often revealed (or at least incompletely concealed). We investigate the identifiability of World Wide Web traffic based on this unconcealed information in a large sample of Web pages, and show that it suffices to identify a significant fraction of them quite reliably. We also suggest some possible countermeasures against the exposure of this kind of information and experimentally evaluate their effectiveness.
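The attack can be sketched as nearest-profile matching on the revealed object sizes: build a profile of each known page in advance, then identify an encrypted trace by its best-matching profile. The site names, sizes, and similarity measure below are illustrative assumptions:

```python
def jaccard(a, b):
    """Set similarity between two collections of object sizes."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Profiles built in advance: HTTP object sizes (bytes) observed when
# loading each known page (names and sizes are made up).
profiles = {
    "news.example": [13204, 512, 4096, 880],
    "mail.example": [2048, 512, 9999],
    "wiki.example": [13204, 77, 4096],
}

def identify(observed_sizes):
    """Guess which page produced an encrypted trace from the object
    counts and sizes that the encryption leaves exposed."""
    return max(profiles, key=lambda page: jaccard(profiles[page], observed_sizes))

print(identify([13204, 512, 4096, 880]))  # news.example
```

The proposed countermeasures (padding objects, batching requests) work precisely by making distinct pages' size profiles collide, so that this kind of matching becomes unreliable.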

Journal ArticleDOI
TL;DR: Results indicate that the vast majority of online users are pragmatic when it comes to privacy, and analysis of the data suggested that online users can be segmented into four distinct groups, representing differing levels of privacy concern.
Abstract: Traditional typologies of consumer privacy concern suggest that consumers fall into three distinct groups: One-fourth of consumers are not concerned about privacy, one-fourth are highly concerned, and half are pragmatic, in that their concerns about privacy depend on the situation presented. This study examines online users to determine whether types of privacy concern online mirror the offline environment. An e-mail survey of online users examined perceived privacy concerns of 15 different situations involving collection and usage of personally identifiable information. Results indicate that the vast majority of online users are pragmatic when it comes to privacy. Further analysis of the data suggested that online users can be segmented into four distinct groups, representing differing levels of privacy concern. Distinct demographic differences were seen. Persons with higher levels of education are more concerned about their privacy online than persons with less education. Additionally, persons over the age of 45 years tended to be either not at all concerned about privacy or highly concerned about privacy. Younger persons tended to be more pragmatic. Content and policy implications are provided.

Journal Article
TL;DR: The Department of Health and Human Services modifies certain standards in the Rule entitled "Standards for Privacy of Individually Identifiable Health Information" to maintain strong protections for the privacy of individually identifiable health information while clarifying certain of the Privacy Rule's provisions.
Abstract: The Department of Health and Human Services ("HHS" or "Department") modifies certain standards in the Rule entitled "Standards for Privacy of Individually Identifiable Health Information" ("Privacy Rule"). The Privacy Rule implements the privacy requirements of the Administrative Simplification subtitle of the Health Insurance Portability and Accountability Act of 1996. The purpose of these modifications is to maintain strong protections for the privacy of individually identifiable health information while clarifying certain of the Privacy Rule's provisions, addressing the unintended negative effects of the Privacy Rule on health care quality or access to health care, and relieving unintended administrative burdens created by the Privacy Rule.

Book
03 Oct 2002
TL;DR: This chapter discusses the development of the P3P specification, which aims to provide a simple, scalable, and efficient way to develop and manage privacy policies for Web sites.
Abstract: Foreword. Preface.
Part I. Privacy and P3P
1. Introduction to P3P: How P3P Works; P3P-Enabling a Web Site; Why Web Sites Adopt P3P
2. The Online Privacy Landscape: Online Privacy Concerns; Fair Information Practice Principles; Privacy Laws; Privacy Seals; Chief Privacy Officers; Privacy-Related Organizations
3. Privacy Technology: Encryption Tools; Anonymity and Pseudonymity Tools; Filters; Identity-Management Tools; Other Tools
4. P3P History: The Origin of the Idea; The Internet Privacy Working Group; W3C Launches the P3P Project; The Evolving P3P Specification; The Patent Issue; Feedback from Europe; Finishing the Specification; Legal Implications; Criticism
Part II. P3P-Enabling Your Web Site
5. Overview and Options: P3P-Enabled Web Site Components; P3P Deployment Steps; Creating a Privacy Policy; Analyzing the Use of Cookies and Third-Party Content; One Policy or Many?; Generating a P3P Policy and Policy Reference File; Helping User Agents Find Your Policy Reference File; Combination Files; Compact Policies; The Safe Zone; Testing Your Web Site
6. P3P Policy Syntax: XML Syntax; General Assertions; Data-Specific Assertions; The P3P Extension Mechanism; The Policy File
7. Creating P3P Policies: Gathering Information About Your Site's Data Practices; Turning the Information You Gathered into a P3P Policy; Writing a Compact Policy; Avoiding Common Pitfalls
8. Creating and Referencing Policy Reference Files: Creating a Policy Reference File; Referencing a Policy Reference File; P3P Policies in Policy Reference Files; Changing Your P3P Policy or Policy Reference File; Avoiding Common Pitfalls
9. Data Schemas: Sets, Elements, and Structures; Fixed and Variable Categories; P3P Base Data Schema; Writing a P3P Data Schema
10. P3P-Enabled Web Site Examples: Simple Sites; Third-Party Agents; Third Parties with Their Own Policies; Examples From Real Web Sites
Part III. P3P Software and Design
11. P3P Vocabulary Design Issues: Rating Systems and Vocabularies; P3P Vocabulary Terms; What's Not in the P3P Vocabulary
12. P3P User Agents and Other Tools: P3P User Agents; Other Types of P3P Tools; P3P Specification Compliance Requirements
13. A P3P Preference Exchange Language (APPEL): APPEL Goals; APPEL Evaluator Engines; Writing APPEL Rule Sets; Processing APPEL Rules; Other Privacy Preference Languages
14. User Interface: Case Studies; Privacy Preference Settings; User Agent Behavior; Accessibility; Privacy
Part IV. Appendixes
A. P3P Policy and Policy Reference File Syntax Quick Reference
B. Configuring Web Servers to Include P3P Headers
C. P3P in IE6
D. How to Create a Customized Privacy Import File for IE6
E. P3P Guiding Principles
Index

Proceedings ArticleDOI
24 Feb 2002
TL;DR: New metrics are introduced in order to demonstrate how security issues can be taken into consideration in the general framework of association rule mining, and it is shown that the complexity of the new heuristics is similar to that of the original algorithms.
Abstract: The current trend in the application space towards systems of loosely coupled and dynamically bound components that enable just-in-time integration jeopardizes the security of information that is shared between the broker, the requester, and the provider at runtime. In particular, new advances in data mining and knowledge discovery that allow for the extraction of hidden knowledge in an enormous amount of data, impose new threats on the seamless integration of information. We consider the problem of building privacy preserving algorithms for one category of data mining techniques, association rule mining. We introduce new metrics in order to demonstrate how security issues can be taken into consideration in the general framework of association rule mining, and we show that the complexity of the new heuristics is similar to that of the original algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors use a simple economic model to explore the conventional wisdom that privacy will continue to erode, until it essentially disappears, under the assumption that there is no government intervention and privacy is left to free-market forces.
Abstract: The World Wide Web has significantly reduced the costs of obtaining information about individuals, resulting in a widespread perception by consumers that their privacy is being eroded. The conventional wisdom among the technological cognoscenti seems to be that privacy will continue to erode, until it essentially disappears. The authors use a simple economic model to explore this conventional wisdom, under the assumption that there is no government intervention and privacy is left to free-market forces. They find support for the assertion that, under those conditions, the amount of privacy will decline over time and that privacy will be increasingly expensive to maintain. The authors conclude that a market for privacy will emerge, enabling customers to purchase a certain degree of privacy, no matter how easy it becomes for companies to obtain information, but the overall amount of privacy and privacy-based customer utility will continue to erode.

Journal ArticleDOI
TL;DR: In this article, the authors examine the extent to which consumers are concerned with how their personal information is collected and used, their awareness and knowledge of data collection practices using discount (loyalty) cards, the relationship between demographics and privacy concerns, and the relationship between privacy concerns and purchase behaviors.
Abstract: Consumers are becoming increasingly concerned about the privacy of their personal information and information about their purchase behaviors. The current study examines the extent to which consumers are concerned with how their personal information is collected and used, their awareness and knowledge of data collection practices using discount (loyalty) cards, the relationship between demographics and privacy concerns, and the relationship between privacy concerns and purchase behaviors. Results from a telephone survey of 480 consumers suggest that even though consumers are concerned about how personal information is collected and used, very few consumers are aware of how discount (loyalty) cards are used to collect personal level purchase data. Results also suggest that concerns about the use of personal information vary by demographic market segments, and that privacy concerns are significantly related to consumers’ purchasing behaviors on the Internet.

Proceedings Article
01 Jan 2002
TL;DR: It is found that economic incentives do affect individuals’ preferences over Websites with differing privacy policies, but cost-benefit trade-offs did not vary with personal characteristics including gender, contextual knowledge, individualism, and trust propensity.
Abstract: Concern over information privacy is widespread and rising. However, prior research is silent about the value of information privacy and the benefit of privacy protection. We conducted a conjoint analysis to explore individuals’ trade-offs between the benefits and costs of providing personal information to Websites. We find that economic incentives (monetary reward and future convenience) do affect individuals’ preferences over Websites with differing privacy policies. For instance, the disallowance of secondary use of personal information is worth between $39.83 and $49.78. Surprisingly, we find that cost-benefit trade-offs did not vary with personal characteristics including gender, contextual knowledge, individualism, and trust propensity.

Proceedings ArticleDOI
02 Jul 2002
TL;DR: This work presents a protocol, which preserves the privacy of users and keeps their communication anonymous, and creates a "mist" that conceals users from the system and other users.
Abstract: Ubiquitous computing is poised to revolutionize the way we compute and interact with each other. However, unless privacy concerns are taken into account early in the design process, we will end up creating a very effective distributed surveillance system, which would be a dream come true for electronic stalkers and "big brothers". We present a protocol that preserves the privacy of users and keeps their communication anonymous. In effect, we create a "mist" that conceals users from the system and other users. Yet, users will still be able to enjoy seamless interaction with services and other entities that wander within the ubiquitous computing environment.

Proceedings ArticleDOI
Günter Karjoth1, Matthias Schunter1
24 Jun 2002
TL;DR: A privacy policy model that protects personal data from privacy violations by enforcing enterprise-wide privacy policies is described; it extends Jajodia et al.'s flexible authorization framework with grantors and obligations.
Abstract: Privacy is an increasing concern in the marketplace. Although enterprises promise sound privacy practices to their customers, there is no technical mechanism to enforce them internally. In this paper we describe a privacy policy model that protects personal data from privacy violations by means of enforcing enterprise-wide privacy policies. By extending Jajodia et al.'s flexible authorization framework (FAF) with grantors and obligations, we create a privacy control language that includes user consent, obligations, and distributed administration. Conditions impose restrictions on the use of the collected data, such as modeling guardian consent and options. Access decisions are extended with obligations, which list a set of activities that must be executed together with the access request. Grantors make it possible to define a separation of duty between the security officer and the privacy officer.
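The shape of such an obligation-extended access decision can be sketched as follows; the rule format, role names, and obligation labels are hypothetical, not the paper's language:

```python
# Hypothetical rule table: (role, purpose) -> (permit, obligations).
rules = {
    ("marketing", "email-campaign"): (True, ["log-access", "notify-data-subject"]),
    ("marketing", "profiling"):      (False, []),
}

def decide(role, purpose, consent):
    """Access decision extended with obligations: activities that must be
    executed together with the granted access. User consent is modeled
    as a condition on the grant."""
    permit, obligations = rules.get((role, purpose), (False, []))
    if permit and not consent:
        return (False, [])
    return (permit, obligations)

print(decide("marketing", "email-campaign", consent=True))
# (True, ['log-access', 'notify-data-subject'])
```

The key departure from ordinary access control is the second element of the result: a permit is not just a yes, but a yes bundled with activities the enforcement point must carry out.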

Proceedings ArticleDOI
28 Sep 2002
TL;DR: This paper proposes a new framework for data protection that is built on the foundation of privacy and security technologies, and provides secure environments for protected execution, which is essential to limiting data access to specific purposes.
Abstract: Automotive telematics may be defined as the information-intensive applications that are being enabled for vehicles by a combination of telecommunications and computing technology. Telematics by its nature requires the capture of sensor data, storage and exchange of data to obtain remote services. In order for automotive telematics to grow to its full potential, telematics data must be protected. Data protection must include privacy and security for end-users, service providers and application providers. In this paper, we propose a new framework for data protection that is built on the foundation of privacy and security technologies. The privacy technology enables users and service providers to define flexible data and policy models. The security technology provides traditional capabilities such as encryption, authentication, and non-repudiation. In addition, it provides secure environments for protected execution, which is essential to limiting data access to specific purposes.

Journal ArticleDOI
TL;DR: A theoretical model for privacy control in context-aware systems based on a core abstraction of information spaces based on Ravi Sandhu's four-layer OM-AM (objectives, models, architectures, and mechanisms) idea is described.
Abstract: Significant complexity issues challenge designers of context-aware systems with privacy control. Information spaces provide a way to organize information, resources, and services around important privacy-relevant contextual factors. In this article, we describe a theoretical model for privacy control in context-aware systems based on a core abstraction of information spaces. We have previously focused on deriving socially based privacy objectives in pervasive computing environments. Building on Ravi Sandhu's four-layer OM-AM (objectives, models, architectures, and mechanisms) idea, we aim to use information spaces to construct a model for privacy control that supports our socially based privacy objectives. We also discuss how we can introduce decentralization, a desirable property for many pervasive computing systems, into our information space model, using unified privacy tagging.

Journal Article
TL;DR: A summary of the need for protection of personal health information and an overview of the provisions of this legislative foundation for protecting personal health records--the HIPAA Privacy Rule are presented.
Abstract: When it enacted The Health Insurance Portability and Accountability Act of 1996, Congress mandated establishment of privacy regulations covering individual health information. Title II of HIPAA, the Privacy Rule that became effective on April 14, 2001, offers Americans the first-ever set of comprehensive protections against the unintended and/or inappropriate disclosure of personal health information. Provisions of the Privacy Rule and its associated regulations include patient control over the use of health information, patient rights to information on the disclosure policies of the health-care provider, patient rights to review and amend one's medical information, standards for limiting the scope of data disclosed to other health-care providers, and penalties for noncompliance with the law. This paper presents a summary of the need for protection of personal health information and an overview of the provisions of this legislative foundation for protecting personal health records--the HIPAA Privacy Rule.

Book
16 May 2002
TL;DR: The European Court of Human Rights has ruled that the right to privacy should be guaranteed in all circumstances, not only in exceptional ones.
Abstract: The phenomenon of the New Genetics raises complex social problems, particularly those of privacy. This book offers ethical and legal perspectives on the questions of a right to know and not to know genetic information from the standpoint of individuals, their relatives, employers, insurers and the state. Graeme Laurie provides a unique definition of privacy, including a concept of property rights in the person, and argues for stronger legal protection of privacy in the shadow of developments in human genetics. He challenges the role and the limits of established principles in medical law and ethics, including respect for patient autonomy and confidentiality. This book will interest lawyers, philosophers and doctors concerned both with genetic information and issues of privacy; it will also interest genetic counsellors, researchers, and policy makers worldwide for its practical stance on dilemmas in modern genetic medicine.

Book ChapterDOI
14 Apr 2002
TL;DR: The Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data, is described, which introduces a viable separation of duty between the three "administrators" of a privacy system.
Abstract: Enterprises collect a large amount of personal data about their customers. Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three "administrators" of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices.

Proceedings ArticleDOI
21 Nov 2002
TL;DR: The Platform for Enterprise Privacy Practices (E-P3P) defines a fine-grained privacy policy model that enables enterprises to keep their promises and prevent accidental privacy violations.
Abstract: Enterprises collect large amounts of personal data from their customers. To ease privacy concerns, enterprises publish privacy statements that outline how data is used and shared. The Platform for Enterprise Privacy Practices (E-P3P) defines a fine-grained privacy policy model. A Chief Privacy Officer can use E-P3P to formalize the desired enterprise-internal handling of collected data. A particular data user is then allowed to use certain collected data for a given purpose if and only if the E-P3P authorization engine allows this request based on the applicable E-P3P policy. By enforcing such formalized privacy practices, E-P3P enables enterprises to keep their promises and prevent accidental privacy violations.
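The decision rule in this abstract ("a data user is allowed to use certain collected data for a given purpose if and only if the authorization engine allows it") can be sketched as a default-deny lookup over formalized rules. This is a simplified illustration, not the E-P3P language or engine; the rule fields and the `needs_opt_in` flag are assumptions standing in for E-P3P's richer conditions.

```python
# Minimal sketch of an E-P3P-style authorization decision: a request
# (user role, data category, purpose) is granted iff a matching allow
# rule exists and any required customer opt-in consent has been given.
# Unmatched requests are denied by default.

from collections import namedtuple

Rule = namedtuple("Rule", "user_role data_category purpose needs_opt_in")

POLICY = [
    Rule("marketing", "email", "newsletter", needs_opt_in=True),
    Rule("billing", "address", "invoicing", needs_opt_in=False),
]

def authorize(user_role, data_category, purpose, opted_in):
    for r in POLICY:
        if (r.user_role, r.data_category, r.purpose) == (user_role, data_category, purpose):
            return (not r.needs_opt_in) or opted_in
    return False  # default deny

print(authorize("billing", "address", "invoicing", opted_in=False))   # True
print(authorize("marketing", "email", "newsletter", opted_in=False))  # False
```

The default-deny fallthrough is what lets such a policy "prevent accidental privacy violations": any use not explicitly formalized by the privacy officer is refused.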

Journal ArticleDOI
TL;DR: This study compares a subset of equivalent individual-level web-site data for the 1998, 1999, 2000, and 2001 web surveys to assess whether organizations post online privacy disclosures and whether these disclosures represent the U.S. definition of fair information practices.
Abstract: In the United States, Congress has had a long-standing interest in consumer privacy and the extent to which company practices are based on fair information practices. Previously, public policy was largely informed by anecdotal evidence about the effectiveness of industry self-regulatory programs. However, the Internet has made it possible to unobtrusively sample web sites and their privacy disclosures in a way that is not feasible in the offline world. Beginning in 1998, the Federal Trade Commission relied upon a series of three surveys of web sites to assess whether organizations post online privacy disclosures and whether these disclosures represent the U.S. definition of fair information practices. While each year's survey has provided an important snapshot of U.S. web-site practices, there has been no longitudinal analysis of the multiyear trends. This study compares a subset of equivalent individual-level web-site data for the 1998, 1999, 2000, and 2001 web surveys. Implications for using this type o...

Patent
30 Aug 2002
TL;DR: In this paper, the authors present a method for securely enforcing a privacy policy between two enterprises, comprising of creating a message at a first enterprise, wherein the message includes a request for data concerning a third party and privacy policy of the first enterprise; sending the message to a second enterprise; and running a privacy rules engine at the second enterprise to compare the privacy policy with a set of privacy rules for the third party.
Abstract: The invention includes various systems, architectures, frameworks and methodologies that can securely enforce a privacy policy. A method is included for securely guaranteeing a privacy policy between two enterprises, comprising: creating a message at a first enterprise, wherein the message includes a request for data concerning a third party and a privacy policy of the first enterprise; signing and certifying the message to attest that the first enterprise has a tamper-proof system with a privacy rules engine and that the privacy policy of the first enterprise will be enforced by that engine; sending the message to a second enterprise; and running a privacy rules engine at the second enterprise to compare the privacy policy of the first enterprise with a set of privacy rules for the third party.
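The comparison step at the second enterprise can be sketched as a check that the requester's declared policy is at least as restrictive as the third party's rules. The field names (`purposes`, `retention_days`, and so on) are illustrative assumptions, not terms from the patent.

```python
# Hypothetical sketch of the patent's comparison step: before releasing
# any data, the data holder's privacy rules engine checks that the
# requesting enterprise's declared policy satisfies every rule the data
# subject (the "third party") has set.

def policy_satisfies(request_policy, subject_rules):
    """True iff the requester's policy is at least as restrictive
    as the data subject's rules on purpose, retention, and sharing."""
    return (
        request_policy["purposes"] <= subject_rules["allowed_purposes"]
        and request_policy["retention_days"] <= subject_rules["max_retention_days"]
        and (subject_rules["third_party_sharing"]
             or not request_policy["shares_with_third_parties"])
    )

subject_rules = {
    "allowed_purposes": {"billing", "service"},
    "max_retention_days": 90,
    "third_party_sharing": False,
}
request_policy = {
    "purposes": {"billing"},
    "retention_days": 30,
    "shares_with_third_parties": False,
}
print(policy_satisfies(request_policy, subject_rules))  # True
```

In the patent's flow, the signed and certified message is what lets the second enterprise trust that a `True` result here will actually be honored by the requester's tamper-proof engine after the data leaves.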

Proceedings ArticleDOI
09 Sep 2002
TL;DR: A privacy goal taxonomy is introduced, and an analysis of 23 Internet privacy policies from companies in three health care industries (pharmaceutical, health insurance, and online drugstores) is reported.
Abstract: Privacy has recently become a prominent issue in the context of electronic commerce websites. Increasingly, privacy policies posted on such websites are receiving considerable attention from the government and consumers. We have used goal-mining, the extraction of pre-requirements goals from post-requirements text artifacts, as a technique for analyzing privacy policies. The identified goals are useful for analyzing implicit internal conflicts within privacy policies and conflicts with the corresponding websites and their manner of operation. These goals can be used to reconstruct the implicit requirements met by the privacy policies. This paper interrelates privacy policy and requirements for websites; it introduces a privacy goal taxonomy and reports the analysis of 23 Internet privacy policies for companies in three health care industries: pharmaceutical, health insurance and online drugstores. The evaluated taxonomy provides a valuable framework for requirements engineering practitioners, policy makers and regulatory bodies, and also benefits website users.