
Showing papers on "Privacy software published in 2002"


Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment; the paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
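To make the definition concrete, here is a minimal, hypothetical Python sketch (not from the paper) that checks whether a released table satisfies k-anonymity over a chosen set of quasi-identifier columns; the column names and records are invented for illustration.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Return True if every quasi-identifier combination occurs in at least k rows."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

# Hypothetical release: each record is indistinguishable from at least one other
release = [
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "138**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "138**", "age": "40-49", "diagnosis": "diabetes"},
]
print(is_k_anonymous(release, ["zip", "age"], k=2))  # True
```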

7,925 citations


Journal ArticleDOI
TL;DR: This paper presents some components of a toolkit that can be combined for specific privacy-preserving data mining applications, and shows how they can be used to solve several privacy-preserving data mining problems.
Abstract: Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems.

961 citations


Proceedings ArticleDOI
23 Jul 2002
TL;DR: This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process by the use of generalizations and suppressions on the potentially identifying portions of the data.
Abstract: Data on individuals and entities are being collected widely. These data can contain information that explicitly identifies the individual (e.g., social security number). Data can also contain other kinds of personal information (e.g., date of birth, zip code, gender) that are potentially identifying when linked with other available data sets. Data are often shared for business or legal reasons. This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process. We explore preserving the anonymity by the use of generalizations and suppressions on the potentially identifying portions of the data. We extend earlier works in this area along various dimensions. First, satisfying privacy constraints is considered in conjunction with the usage for the data being disseminated. This allows us to optimize the process of preserving privacy for the specified usage. In particular, we investigate the privacy transformation in the context of data mining applications like building classification and regression models. Second, our work improves on previous approaches by allowing more flexible generalizations for the data. Lastly, this is combined with a more thorough exploration of the solution space using the genetic algorithm framework. These extensions allow us to transform the data so that they are more useful for their intended purpose while satisfying the privacy constraints.
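As a rough illustration of the generalization and suppression operations the paper builds on (the actual search over transformations uses a genetic algorithm, which is not shown here), the following hypothetical Python sketch coarsens a zip code and an age, suppressing a value entirely when no useful generalization remains; the hierarchy used is invented.

```python
def generalize_zip(zipcode, level):
    """Replace the last `level` digits with '*'; generalizing all digits suppresses the value."""
    if level >= len(zipcode):
        return "*" * len(zipcode)
    return zipcode[: len(zipcode) - level] + "*" * level

def generalize_age(age, level):
    """Level 0: exact age; level 1: 10-year band; level 2: suppressed."""
    if level == 0:
        return str(age)
    if level == 1:
        low = (age // 10) * 10
        return f"{low}-{low + 9}"
    return "*"

record = {"zip": "02139", "age": 34, "diagnosis": "flu"}
transformed = {
    "zip": generalize_zip(record["zip"], level=3),   # '02***'
    "age": generalize_age(record["age"], level=1),   # '30-39'
    "diagnosis": record["diagnosis"],                # sensitive value kept for mining
}
print(transformed)
```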

833 citations


Proceedings ArticleDOI
11 Aug 2002
TL;DR: A new method for collaborative filtering which protects the privacy of individual data is described, based on a probabilistic factor analysis model, which has other advantages in speed and storage over previous algorithms.
Abstract: Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is the most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionnaire data.
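The peer-to-peer protocol is described elsewhere, but the way a linear factor model can score items while ignoring missing ratings can be sketched as follows; this is an illustrative stand-in, not the authors' algorithm, and the item-factor matrix and ratings are made up.

```python
import numpy as np

# Hypothetical item-factor loadings (4 items, 2 latent factors), e.g. learned offline
Lambda = np.array([[0.9, 0.1],
                   [0.8, 0.2],
                   [0.1, 0.9],
                   [0.2, 0.8]])

# One user's ratings; np.nan marks items never rated (no default value is substituted)
ratings = np.array([5.0, np.nan, 1.0, np.nan])

observed = ~np.isnan(ratings)
# Least-squares fit of the user's latent factors using only the observed entries
x, *_ = np.linalg.lstsq(Lambda[observed], ratings[observed], rcond=None)

# Predict the missing ratings from the fitted factors
predictions = Lambda @ x
print(predictions[~observed])
```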

546 citations


Book ChapterDOI
20 Aug 2002
TL;DR: This work presents a scheme, based on probabilistic distortion of user data, that can simultaneously provide a high degree of privacy to the user and retain a high level of accuracy in the mining results.
Abstract: Data mining services require accurate input data for their results to be meaningful, but privacy concerns may influence users to provide spurious information. We investigate here, with respect to mining association rules, whether users can be encouraged to provide correct information by ensuring that the mining process cannot, with any reasonable degree of certainty, violate their privacy. We present a scheme, based on probabilistic distortion of user data, that can simultaneously provide a high degree of privacy to the user and retain a high level of accuracy in the mining results. The performance of the scheme is validated against representative real and synthetic datasets.
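A minimal sketch of the general idea of probabilistic distortion, assuming a simple symmetric bit-flipping scheme (the paper's actual distortion and reconstruction procedures may differ): each user flips every boolean item with probability 1 - p before sending it, and the miner inverts the distortion in aggregate to estimate the true item support without learning any individual's true value.

```python
import random

def distort(basket, p=0.9):
    """Keep each item's true bit with probability p, flip it otherwise."""
    return [bit if random.random() < p else 1 - bit for bit in basket]

def estimate_support(distorted_column, p=0.9):
    """Unbiased estimate of the true fraction of 1s from distorted bits."""
    observed = sum(distorted_column) / len(distorted_column)
    return (observed - (1 - p)) / (2 * p - 1)

# Hypothetical data: 10,000 users, 30% truly have the item
random.seed(0)
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
noisy = [distort([b])[0] for b in true_bits]
print(round(estimate_support(noisy), 3))  # close to 0.3, yet no single user's bit is certain
```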

518 citations


Proceedings ArticleDOI
Marc Langheinrich1
29 Sep 2002
TL;DR: In this paper, the authors introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system.
Abstract: Protecting personal privacy is going to be a prime concern for the deployment of ubiquitous computing systems in the real world. With daunting Orwellian visions looming, it is easy to conclude that tamper-proof technical protection mechanisms such as strong anonymization and encryption are the only solutions to such privacy threats. However, we argue that such perfect protection for personal information will hardly be achievable, and propose instead to build systems that help others respect our personal privacy, enable us to be aware of our own privacy, and to rely on social and legal norms to protect us from the few wrongdoers. We introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system. Even though such a system cannot guarantee our privacy, we believe that it can create a sense of accountability in a world of invisible services that we will be comfortable living in and interacting with.

427 citations


Book
03 Oct 2002
TL;DR: This book discusses the development of the P3P specification, which aims to provide a simple, scalable, and efficient way for Web sites to publish machine-readable privacy policies and for users to manage their privacy preferences.
Abstract: Foreword. Preface.
Part I. Privacy and P3P
1. Introduction to P3P: How P3P Works; P3P-Enabling a Web Site; Why Web Sites Adopt P3P
2. The Online Privacy Landscape: Online Privacy Concerns; Fair Information Practice Principles; Privacy Laws; Privacy Seals; Chief Privacy Officers; Privacy-Related Organizations
3. Privacy Technology: Encryption Tools; Anonymity and Pseudonymity Tools; Filters; Identity-Management Tools; Other Tools
4. P3P History: The Origin of the Idea; The Internet Privacy Working Group; W3C Launches the P3P Project; The Evolving P3P Specification; The Patent Issue; Feedback from Europe; Finishing the Specification; Legal Implications; Criticism
Part II. P3P-Enabling Your Web Site
5. Overview and Options: P3P-Enabled Web Site Components; P3P Deployment Steps; Creating a Privacy Policy; Analyzing the Use of Cookies and Third-Party Content; One Policy or Many?; Generating a P3P Policy and Policy Reference File; Helping User Agents Find Your Policy Reference File; Combination Files; Compact Policies; The Safe Zone; Testing Your Web Site
6. P3P Policy Syntax: XML Syntax; General Assertions; Data-Specific Assertions; The P3P Extension Mechanism; The Policy File
7. Creating P3P Policies: Gathering Information About Your Site's Data Practices; Turning the Information You Gathered into a P3P Policy; Writing a Compact Policy; Avoiding Common Pitfalls
8. Creating and Referencing Policy Reference Files: Creating a Policy Reference File; Referencing a Policy Reference File; P3P Policies in Policy Reference Files; Changing Your P3P Policy or Policy Reference File; Avoiding Common Pitfalls
9. Data Schemas: Sets, Elements, and Structures; Fixed and Variable Categories; P3P Base Data Schema; Writing a P3P Data Schema
10. P3P-Enabled Web Site Examples: Simple Sites; Third-Party Agents; Third Parties with Their Own Policies; Examples From Real Web Sites
Part III. P3P Software and Design
11. P3P Vocabulary Design Issues: Rating Systems and Vocabularies; P3P Vocabulary Terms; What's Not in the P3P Vocabulary
12. P3P User Agents and Other Tools: P3P User Agents; Other Types of P3P Tools; P3P Specification Compliance Requirements
13. A P3P Preference Exchange Language (APPEL): APPEL Goals; APPEL Evaluator Engines; Writing APPEL Rule Sets; Processing APPEL Rules; Other Privacy Preference Languages
14. User Interface Case Studies: Privacy Preference Settings; User Agent Behavior; Accessibility; Privacy
Part IV. Appendixes: A. P3P Policy and Policy Reference File Syntax Quick Reference; B. Configuring Web Servers to Include P3P Headers; C. P3P in IE6; D. How to Create a Customized Privacy Import File for IE6; E. P3P Guiding Principles
Index

279 citations


Journal ArticleDOI
TL;DR: In this article, the authors use a simple economic model to explore the conventional wisdom that privacy will continue to erode, until it essentially disappears, under the assumption that there is no government intervention and privacy is left to free-market forces.
Abstract: The World Wide Web has significantly reduced the costs of obtaining information about individuals, resulting in a widespread perception by consumers that their privacy is being eroded. The conventional wisdom among the technological cognoscenti seems to be that privacy will continue to erode, until it essentially disappears. The authors use a simple economic model to explore this conventional wisdom, under the assumption that there is no government intervention and privacy is left to free-market forces. They find support for the assertion that, under those conditions, the amount of privacy will decline over time and that privacy will be increasingly expensive to maintain. The authors conclude that a market for privacy will emerge, enabling customers to purchase a certain degree of privacy, no matter how easy it becomes for companies to obtain information, but the overall amount of privacy and privacy-based customer utility will continue to erode.

247 citations


Proceedings Article
01 Jan 2002
TL;DR: It is found that economic incentives do affect individuals’ preferences over Websites with differing privacy policies, but cost-benefit trade-offs did not vary with personal characteristics including gender, contextual knowledge, individualism, and trust propensity.
Abstract: Concern over information privacy is widespread and rising. However, prior research is silent about the value of information privacy and the benefit of privacy protection. We conducted a conjoint analysis to explore individuals’ trade-offs between the benefits and costs of providing personal information to Websites. We find that economic incentives (monetary reward and future convenience) do affect individuals’ preferences over Websites with differing privacy policies. For instance, the disallowance of secondary use of personal information is worth between $39.83 and $49.78. Surprisingly, we find that cost-benefit trade-offs did not vary with personal characteristics including gender, contextual knowledge, individualism, and trust propensity.

218 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: A novel feature of P5 is that it allows individual participants to trade off degree of anonymity for communication efficiency, and hence it can be used to scalably implement large anonymous groups.
Abstract: We present a protocol for anonymous communication over the Internet. Our protocol, called P5 (peer-to-peer personal privacy protocol), provides sender-, receiver-, and sender-receiver anonymity. P5 is designed to be implemented over current Internet protocols, and does not require any special infrastructure support. A novel feature of P5 is that it allows individual participants to trade off degree of anonymity for communication efficiency, and hence it can be used to scalably implement large anonymous groups. We present a description of P5, an analysis of its anonymity and communication efficiency, and evaluate its performance using detailed packet-level simulations.
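The abstract does not detail the mechanism, but the anonymity-versus-efficiency trade-off it mentions can be illustrated with a toy calculation (this is an illustration of the general trade-off, not the P5 protocol itself): if a message is broadcast to a group of n participants, an observer cannot narrow the intended receiver below that group, while every member pays the bandwidth cost of receiving the whole group's traffic.

```python
def group_tradeoff(group_size, messages_per_peer, msg_bytes=1024):
    """Toy model: anonymity set vs. per-peer receive bandwidth in a broadcast group."""
    anonymity_set = group_size                               # receiver hides among all members
    bytes_received = group_size * messages_per_peer * msg_bytes
    return anonymity_set, bytes_received

for size in (8, 64, 512):
    k, bw = group_tradeoff(size, messages_per_peer=10)
    print(f"group={size:4d}  anonymity set={k:4d}  per-peer traffic={bw / 1024:.0f} KiB")
```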

213 citations


Proceedings ArticleDOI
Günter Karjoth1, Matthias Schunter1
24 Jun 2002
TL;DR: A privacy policy model is described that protects personal data from privacy violations by enforcing enterprise-wide privacy policies, extending Jajodia et al.'s flexible authorization framework with grantors and obligations.
Abstract: Privacy is an increasing concern in the marketplace. Although enterprises promise sound privacy practices to their customers, there is no technical mechanism to enforce them internally. In this paper we describe a privacy policy model that protects personal data from privacy violations by means of enforcing enterprise-wide privacy policies. By extending Jajodia et al.'s flexible authorization framework (FAF) with grantors and obligations, we create a privacy control language that includes user consent, obligations, and distributed administration. Conditions impose restrictions on the use of the collected data, such as modeling guardian consent and options. Access decisions are extended with obligations, which list a set of activities that must be executed together with the access request. Grantors make it possible to define a separation of duty between the security officer and the privacy officer.
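To give a feel for what an access decision with obligations might look like, here is a hypothetical Python sketch in the spirit of the model described above; the rule structure, purposes, and obligation names are invented and are not the authors' policy language.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    grantor: str                 # e.g. "privacy-officer" or "security-officer"
    data_category: str
    purpose: str
    requires_consent: bool = False
    obligations: list = field(default_factory=list)   # must be executed with the access

RULES = [
    Rule("privacy-officer", "contact-data", "order-processing",
         requires_consent=False, obligations=["log-access"]),
    Rule("privacy-officer", "contact-data", "marketing",
         requires_consent=True, obligations=["log-access", "notify-data-subject"]),
]

def decide(data_category, purpose, subject_consents):
    """Return (allow, obligations) for a request, honouring the data subject's consent."""
    for rule in RULES:
        if rule.data_category == data_category and rule.purpose == purpose:
            if rule.requires_consent and purpose not in subject_consents:
                return False, []
            return True, rule.obligations
    return False, []            # default deny

print(decide("contact-data", "marketing", subject_consents={"order-processing"}))
# (False, []) -- no opt-in for marketing, so the request is denied
```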

Proceedings ArticleDOI
28 Sep 2002
TL;DR: This paper proposes a new framework for data protection built on the foundation of privacy and security technologies; the framework provides secure environments for protected execution, which is essential to limiting data access to specific purposes.
Abstract: Automotive telematics may be defined as the information-intensive applications that are being enabled for vehicles by a combination of telecommunications and computing technology. Telematics by its nature requires the capture of sensor data, storage and exchange of data to obtain remote services. In order for automotive telematics to grow to its full potential, telematics data must be protected. Data protection must include privacy and security for end-users, service providers and application providers. In this paper, we propose a new framework for data protection that is built on the foundation of privacy and security technologies. The privacy technology enables users and service providers to define flexible data models and policy models. The security technology provides traditional capabilities such as encryption, authentication, and non-repudiation. In addition, it provides secure environments for protected execution, which is essential to limiting data access to specific purposes.

Journal ArticleDOI
TL;DR: A theoretical model for privacy control in context-aware systems based on a core abstraction of information spaces based on Ravi Sandhu's four-layer OM-AM (objectives, models, architectures, and mechanisms) idea is described.
Abstract: Significant complexity issues challenge designers of context-aware systems with privacy control. Information spaces provide a way to organize information, resources, and services around important privacy-relevant contextual factors. In this article, we describe a theoretical model for privacy control in context-aware systems based on a core abstraction of information spaces. We have previously focused on deriving socially based privacy objectives in pervasive computing environments. Building on Ravi Sandhu's four-layer OM-AM (objectives, models, architectures, and mechanisms) idea, we aim to use information spaces to construct a model for privacy control that supports our socially based privacy objectives. We also discuss how we can introduce decentralization, a desirable property for many pervasive computing systems, into our information space model, using unified privacy tagging.

01 Jan 2002
TL;DR: This paper provides a framework and metrics for discussing the meaning of privacy preserving data mining, as a foundation for further research in this field.
Abstract: Privacy preserving data mining – getting valid data mining results without learning the underlying data values – has been receiving attention in the research community and beyond. However, it is not clear what "privacy preserving" actually means. This paper provides a framework and metrics for discussing the meaning of privacy preserving data mining, as a foundation for further research in this field.

Book ChapterDOI
14 Apr 2002
TL;DR: The Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data, is described, which introduces a viable separation of duty between the three "administrators" of a privacy system.
Abstract: Enterprises collect a large amount of personal data about their customers. Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three "administrators" of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices.

Proceedings ArticleDOI
21 Nov 2002
TL;DR: The Platform for Enterprise Privacy Practices (E-P3P) defines a fine-grained privacy policy model that enables enterprises to keep their promises and prevent accidental privacy violations.
Abstract: Enterprises collect large amounts of personal data from their customers. To ease privacy concerns, enterprises publish privacy statements that outline how data is used and shared. The Platform for Enterprise Privacy Practices (E-P3P) defines a fine-grained privacy policy model. A Chief Privacy Officer can use E-P3P to formalize the desired enterprise-internal handling of collected data. A particular data user is then allowed to use certain collected data for a given purpose if and only if the E-P3P authorization engine allows this request based on the applicable E-P3P policy. By enforcing such formalized privacy practices, E-P3P enables enterprises to keep their promises and prevent accidental privacy violations.

01 Nov 2002
TL;DR: In this article, the authors define new mechanisms for the Session Initiation Protocol (SIP) in support of privacy and define a new "privacy service" logical role for intermediaries to answer some privacy requirements that user agents cannot satisfy themselves.
Abstract: This document defines new mechanisms for the Session Initiation Protocol (SIP) in support of privacy. Specifically, guidelines are provided for the creation of messages that do not divulge personal identity information. A new "privacy service" logical role for intermediaries is defined to answer some privacy requirements that user agents cannot satisfy themselves. Finally, means are presented by which a user can request particular functions from a privacy service.
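The mechanisms themselves are specified in the document; as a rough, simplified illustration of the "privacy service" role (not the actual SIP processing rules), a Python sketch might rewrite identity-revealing headers of a request whose Privacy header asks for user-level privacy. The header list and dictionary representation below are assumptions made for illustration.

```python
ANONYMOUS_FROM = '"Anonymous" <sip:anonymous@anonymous.invalid>'

def apply_privacy_service(headers):
    """Toy privacy-service pass: honour a 'Privacy: user' request by withholding
    identity-bearing headers (a real SIP privacy service follows the RFC's rules)."""
    requested = {v.strip() for v in headers.get("Privacy", "").split(";") if v}
    if "user" in requested:
        headers["From"] = ANONYMOUS_FROM
        for h in ("Call-Info", "Organization", "Subject", "User-Agent"):
            headers.pop(h, None)
    headers.pop("Privacy", None)   # the privacy request has been serviced
    return headers

request = {
    "From": '"Alice" <sip:alice@example.com>',
    "To": "<sip:bob@example.org>",
    "Privacy": "user",
    "User-Agent": "ExamplePhone/1.0",
}
print(apply_privacy_service(request)["From"])  # the anonymized From header
```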

Patent
30 Aug 2002
TL;DR: In this paper, the authors present a method for securely enforcing a privacy policy between two enterprises, comprising of creating a message at a first enterprise, wherein the message includes a request for data concerning a third party and privacy policy of the first enterprise; sending the message to a second enterprise; and running a privacy rules engine at the second enterprise to compare the privacy policy with a set of privacy rules for the third party.
Abstract: The invention includes various systems, architectures, frameworks and methodologies that can securely enforce a privacy policy. A method is included for securely guaranteeing a privacy policy between two enterprises, comprising: creating a message at a first enterprise, wherein the message includes a request for data concerning a third party and a privacy policy of the first enterprise; signing and certifying the message to attest that the first enterprise has a tamper-proof system with a privacy rules engine and that the privacy policy of the first enterprise will be enforced by the privacy rules engine of the first enterprise; sending the message to a second enterprise; and running a privacy rules engine at the second enterprise to compare the privacy policy of the first enterprise with a set of privacy rules for the third party.

Proceedings ArticleDOI
09 Sep 2002
TL;DR: A privacy goal taxonomy is introduced, and an analysis of 23 Internet privacy policies is reported for companies in three health care industries: pharmaceutical, health insurance, and online drugstores.
Abstract: Privacy has recently become a prominent issue in the context of electronic commerce websites. Increasingly, privacy policies posted on such websites are receiving considerable attention from the government and consumers. We have used goal-mining, the extraction of pre-requirements goals from post-requirements text artifacts, as a technique for analyzing privacy policies. The identified goals are useful for analyzing implicit internal conflicts within privacy policies and conflicts with the corresponding websites and their manner of operation. These goals can be used to reconstruct the implicit requirements met by the privacy policies. This paper interrelates privacy policy and requirements for websites; it introduces a privacy goal taxonomy and reports the analysis of 23 Internet privacy policies for companies in three health care industries: pharmaceutical, health insurance, and online drugstores. The evaluated taxonomy provides a valuable framework for requirements engineering practitioners, policy makers and regulatory bodies, and also benefits website users.

Proceedings ArticleDOI
21 Nov 2002
TL;DR: It is found that a large proportion of AT&T Privacy Bird users began reading privacy policies more often and being more proactive about protecting their privacy as a result of using this software.
Abstract: The Platform for Privacy Preferences (P3P), developed by the World Wide Web Consortium (W3C), provides a standard computer-readable format for privacy policies and a protocol that enables web browsers to read and process privacy policies automatically. P3P enables machine-readable privacy policies that can be retrieved automatically by web browsers and other user agent tools that can display symbols, prompt users, or take other appropriate actions. We developed the AT&T Privacy Bird as a P3P user agent that can compare P3P policies against a user's privacy preferences. Since P3P was adopted as a W3C recommendation in April 2002, little work has been done to study how it is being used and, especially, its impact on users. Many questions have been raised about whether and how Internet users will make use of P3P, and how to build P3P user agents that will prove most useful to end users. In this paper we first provide a brief introduction to P3P and the AT&T Privacy Bird. Then we discuss a survey of AT&T Privacy Bird users that we conducted in August 2002. We found that a large proportion of AT&T Privacy Bird users began reading privacy policies more often and being more proactive about protecting their privacy as a result of using this software. Unfortunately, the usefulness of P3P user agents is severely limited by the number of web sites that have implemented P3P. Our survey results also suggest that if it becomes easier to compare privacy policies across e-commerce web sites, a significant group of consumers would likely use this information in their purchase decisions.
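As a simplified picture of what a P3P user agent such as Privacy Bird does when it compares a site's policy against a user's preferences (the real agent parses P3P XML and APPEL-style rules; the data structures and vocabulary below are invented), consider the following Python sketch.

```python
# Hypothetical, simplified representation of a site's P3P statements:
# each statement says which data category is used for which purpose and recipient.
site_policy = [
    {"data": "clickstream", "purpose": "admin", "recipient": "ours"},
    {"data": "contact", "purpose": "telemarketing", "recipient": "ours"},
]

# User preferences: combinations the user wants to be warned about.
user_dislikes = [
    {"data": "contact", "purpose": "telemarketing"},
    {"data": "health", "purpose": "marketing"},
]

def check(policy, dislikes):
    """Return the policy statements that conflict with the user's preferences."""
    conflicts = []
    for stmt in policy:
        for rule in dislikes:
            if all(stmt.get(key) == value for key, value in rule.items()):
                conflicts.append(stmt)
    return conflicts

warnings = check(site_policy, user_dislikes)
print("red bird" if warnings else "green bird", warnings)
```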

Proceedings ArticleDOI
08 Nov 2002
TL;DR: A new technical approach for preserving privacy in government Web services is proposed based on digital privacy credentials, data filters and mobile privacy preserving agents, aimed at establishing the feasibility and provable reliability of technology-based privacy preserving solutions for Web service infrastructures.
Abstract: Web services are increasingly being adopted as a viable means to access Web-based applications. This has been enabled by the tremendous standardization effort to describe, advertise, discover, and invoke Web services. Digital government (DG) is a major application domain for Web services. It aims at improving government-citizen interactions using information and communication technologies. Government agencies collect, store, process, and share information about millions of citizens who have different preferences regarding their privacy. This naturally raises a number of legal and technical issues that must be addressed to preserve citizens' privacy through the control of the information flow amongst different entities (users, Web services, DBMSs). Solutions addressing this issue are still in their infancy. They consist, essentially, of enforcing privacy by law or by self-regulation. In this paper, we propose a new technical approach for preserving privacy in government Web services. Our design is based on digital privacy credentials, data filters and mobile privacy preserving agents. This work aims at establishing the feasibility and provable reliability of technology-based privacy preserving solutions for Web service infrastructures.

Patent
28 Jun 2002
TL;DR: In this article, a method and system that provides an intuitive user interface and related components for making Internet users aware of Internet cookie-related privacy issues, and enabling users to control Internet privacy through automatic cookie handling.
Abstract: A method and system that provide an intuitive user interface and related components for making Internet users aware of Internet cookie-related privacy issues, and enabling users to control Internet privacy through automatic cookie handling. Default privacy settings for handling cookies are provided, and through the user interface, the privacy settings may be customized to a user's liking. Further, through the user interface, for each individual site that forms a page of content, the site's privacy policy may be reviewed and/or the privacy controlled by specifying how cookies from that site are to be handled. To make users aware, the user interface provides an active alert on a first instance of a retrieved web site's content that fails to include satisfactory privacy information, and thereafter, provides a distinctive passive alert to allow the user selective access to privacy information, per-site cookie handling and cookie handling settings.
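A hypothetical sketch of the kind of per-site cookie decision logic described above; the setting names and rules here are invented for illustration and are not the patent's claims.

```python
def cookie_decision(privacy_level, is_third_party, has_satisfactory_policy,
                    per_site_override=None):
    """Decide whether to accept a cookie, mirroring the idea of default privacy
    settings plus per-site overrides and alerts for unsatisfactory privacy policies."""
    if per_site_override in ("accept", "block"):
        return per_site_override                      # the user customised this site
    if not has_satisfactory_policy:
        if privacy_level == "high" or (privacy_level == "medium" and is_third_party):
            return "block"                            # would also trigger a privacy alert in the UI
        return "downgrade"                            # e.g. keep the cookie only for the session
    return "accept"

print(cookie_decision("medium", is_third_party=True, has_satisfactory_policy=False))   # 'block'
print(cookie_decision("medium", is_third_party=False, has_satisfactory_policy=True))   # 'accept'
```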

Journal Article
TL;DR: The privacy policies of health Web sites are not easily understood by most individuals in the United States and do not serve to inform users of their rights.
Abstract: Objective: Most individuals would like to maintain the privacy of their medical information on the World Wide Web (WWW). In response, commercial interests and other sites post privacy policies that are designed to inform users of how their information will be used. However, it is not known if these statements are comprehensible to most WWW users. The purpose of this study was to determine the reading level of privacy statements on Internet health Web sites and to determine whether these statements can inform users of their rights. Study design: This was a descriptive study. Eighty Internet health sites were examined and the readability of their privacy policies was determined. The selected sample included the top 25 Internet health sites as well as other sites that a user might encounter while researching a common problem such as high blood pressure. Sixty percent of the sites were commercial (.com), 17.5% were organizations (.org), 8.8% were from the United Kingdom (.uk), 3.8% were United States governmental (.gov), and 2.5% were educational (.edu). Outcomes measured: The readability level of the privacy policies was calculated using the Flesch, the Fry, and the SMOG readability levels. Results: Of the 80 Internet health Web sites studied, 30% (including 23% of the commercial Web sites) had no privacy policy posted. The average readability level of the remaining sites required 2 years of college-level education to comprehend, and no Web site had a privacy policy that was comprehensible by most English-speaking individuals in the United States. Conclusions: The privacy policies of health Web sites are not easily understood by most individuals in the United States and do not serve to inform users of their rights. Possible remedies include rewriting policies to make them comprehensible and protecting online health information by using legal statutes or standardized insignias indicating compliance with a set of privacy standards (e.g., "Health on the Net" [HON], http://www.hon.ch).
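For reference, the Flesch Reading Ease score used in the study can be computed as in the sketch below; the syllable counter is a crude heuristic, and published readability tools count syllables and sentences more carefully than this illustration does.

```python
import re

def count_syllables(word):
    """Very rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

policy = ("Personally identifiable information is disclosed to affiliated "
          "third parties solely for legitimate business purposes.")
print(round(flesch_reading_ease(policy), 1))  # a low score indicates hard-to-read text
```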

Journal ArticleDOI
TL;DR: A model of the factors that determine agent acceptance is proposed, based on earlier work on user attitudes toward e-commerce transactions, in which feelings of trust and perceptions of risk combine in opposite directions to determine a user's final acceptance of an agent technology.
Abstract: As agents become more active and sophisticated, the implications of their actions become more serious. With today's GUIs, user and software errors can often be easily fixed or undone. An agent performing actions on behalf of a user could make errors that are very difficult to "undo", and, depending on the agent's complexity, it might not be clear what went wrong. Moreover, for agents to operate effectively and truly act on their users' behalf, they might need confidential or sensitive information. This includes financial details and personal contact information. Thus, along with the excitement about agents and what they can do, there is concern about the resulting security and privacy issues. It is not enough to assume that well-designed software agents will provide the security and privacy users need; assurances and assumptions about security and privacy need to be made explicit. This article proposes a model of the factors that determine agent acceptance, based on earlier work on user attitudes toward e-commerce transactions, in which feelings of trust and perceptions of risk combine in opposite directions to determine a user's final acceptance of an agent technology.

Journal ArticleDOI
TL;DR: In this article, the authors draw out the differences between the physical world and the digital world as those differences affect privacy and explore how the concept of the commons might help us to understand social and economic relationships in cyberspace.
Abstract: This article seeks to broaden our understanding of online privacy in three ways: first, by drawing out the differences between the physical world and the digital world as those differences affect privacy; second, by exploring how the concept of the 'commons' might help us to understand social and economic relationships in cyberspace; and third, by analysing two contrasting views of privacy: privacy as a private or individual good and privacy as a common good. In order to analyse similarities and differences in privacy in the physical world and the online world, each is assessed in three ways: the obvious level of privacy available; the possibility of modifying that level of privacy to create or choose more or less privacy for oneself; and the degree to which the, prior or contemporaneous, privacy decisions of others affect the amount of privacy that is available to all. Applying an analysis based on the 'tragedy of the commons', the article concludes that at least part of cyberspace can be conceived as a ...

Journal ArticleDOI
TL;DR: The threats to privacy that can occur through data mining are described and the privacy problem is viewed as a variation of the inference problem in databases.
Abstract: In this paper, we describe the threats to privacy that can occur through data mining and then view the privacy problem as a variation of the inference problem in databases.

Patent
05 Apr 2002
TL;DR: In this paper, a method for creating a structured privacy policy is described, comprising the steps of accessing a database containing data to be privatized, determining for specified data how that data is to be shared, and generating an XML-based document describing how the data are to be used.
Abstract: A method for creating a structured privacy policy, the method comprising the steps of: accessing a database containing data to be privatized; determining for specified data how that data is to be shared; and generating an XML-based document describing how the data is to be shared, the document defining the privacy policy.
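A toy version of the generation step, assuming a made-up XML layout (the patent does not specify the schema used); the field names and sharing categories are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical sharing decisions gathered from the database review step
sharing_rules = [
    {"field": "email", "shared_with": "none", "purpose": "account-management"},
    {"field": "purchase_history", "shared_with": "affiliates", "purpose": "analytics"},
]

def build_privacy_policy(rules):
    """Emit an XML document describing how each data field may be shared."""
    policy = ET.Element("privacyPolicy")
    for rule in rules:
        item = ET.SubElement(policy, "dataItem", name=rule["field"])
        ET.SubElement(item, "sharedWith").text = rule["shared_with"]
        ET.SubElement(item, "purpose").text = rule["purpose"]
    return ET.tostring(policy, encoding="unicode")

print(build_privacy_policy(sharing_rules))
```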

Proceedings Article
16 Oct 2002
TL;DR: A framework, based on i*, for modeling the way agents interact with each other to achieve their goals is presented, and it is shown how one can model privacy concerns for each agent and the different alternatives for operationalizing them.
Abstract: Privacy may be interpreted in different ways in different contexts, and may be achieved by means of different mechanisms. It is also frequently intertwined with security concerns. However, other requirements such as functionality, usability and reliability must also be addressed, since they often compete with each other. While the understanding of technical mechanisms for addressing privacy has been growing, systematic approaches are needed to guide software engineers to elicit, model and reason about privacy requirements and to address them during design. In a networked world, multi-agent systems have been emerging as a new approach. Each agent may have its own goals and beliefs and social relationships with other agents. Each agent may have its own perspective concerning privacy. Perspectives from different agents may conflict with each other. Moreover, they may conflict with other requirements such as availability and performance. In this paper we present a framework to model the way agents interact with each other to achieve their goals. The framework uses a catalogue to guide the software engineer through alternatives for achieving privacy. Each alternative is modeled showing how it contributes to privacy as well as to other requirements within this agent or in other agents. The approach is based on the i* framework. Privacy is modeled as a special type of goal. We show how one can model privacy concerns for each agent and the different alternatives for operationalizing them. An example in the health care domain is used to illustrate the approach.

Proceedings ArticleDOI
02 Sep 2002
TL;DR: The IBM Enterprise Privacy Architecture (EPA) is a methodology for enterprises to provide an enhanced and well-defined level of privacy to their customers.
Abstract: The IBM Enterprise Privacy Architecture (EPA) is a methodology for enterprises to provide an enhanced and well-defined level of privacy to their customers. EPA is structured in four building blocks. The privacy regulation analysis identifies and structures the applicable regulations. The management reference model enables an enterprise to define and enforce an enterprise privacy strategy and the resulting privacy practices. The privacy agreement framework is a methodology for privacy-enabling business process re-engineering. It outputs a detailed model of the privacy-relevant players and activities as well as the privacy policies that govern these activities. The technical reference architecture defines the technology needed for implementing the identified practices.

Journal ArticleDOI
TL;DR: An examination of the privacy model underlying P3P, the U.S. political context regarding privacy, and the technical components of the protocol is presented with an eye towards distilling lessons for developers of future social protocols.
Abstract: As a "social protocol" aimed at providing a technological means to address concerns over Internet privacy, the Platform for Privacy Preferences (P3P) has been controversial since its announcement in 1997. In the U.S., critics have decried P3P as an industry attempt to avoid meaningful privacy legislation, while developers have portrayed the proposal as a tool for helping users make informed decisions about the impact of their Web surfing choices. This dispute touches upon the privacy model underlying P3P, the U.S. political context regarding privacy, and the technical components of the protocol. This article presents an examination of these factors, with an eye towards distilling lessons for developers of future social protocols.