
Showing papers on "Information privacy" published in 2004


Journal ArticleDOI
TL;DR: A brief overview of the field of biometrics is given and some of its advantages, disadvantages, strengths, limitations, and related privacy concerns are summarized.
Abstract: A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. Biometric recognition, or, simply, biometrics, refers to the automatic recognition of individuals based on their physiological and/or behavioral characteristics. By using biometrics, it is possible to confirm or establish an individual's identity based on "who she is", rather than by "what she possesses" (e.g., an ID card) or "what she remembers" (e.g., a password). We give a brief overview of the field of biometrics and summarize some of its advantages, disadvantages, strengths, limitations, and related privacy concerns.

4,678 citations


Journal ArticleDOI
TL;DR: The results of this study indicate that the second-order IUIPC factor, which consists of three first-order dimensions--namely, collection, control, and awareness--exhibited desirable psychometric properties in the context of online privacy.
Abstract: The lack of consumer confidence in information privacy has been identified as a major problem hampering the growth of e-commerce. Despite the importance of understanding the nature of online consumers' concerns for information privacy, this topic has received little attention in the information systems community. To fill the gap in the literature, this article focuses on three distinct, yet closely related, issues. First, drawing on social contract theory, we offer a theoretical framework on the dimensionality of Internet users' information privacy concerns (IUIPC). Second, we attempt to operationalize the multidimensional notion of IUIPC using a second-order construct, and we develop a scale for it. Third, we propose and test a causal model on the relationship between IUIPC and behavioral intention toward releasing personal information at the request of a marketer. We conducted two separate field surveys and collected data from 742 household respondents in one-on-one, face-to-face interviews. The results of this study indicate that the second-order IUIPC factor, which consists of three first-order dimensions--namely, collection, control, and awareness--exhibited desirable psychometric properties in the context of online privacy. In addition, we found that the causal model centering on IUIPC fits the data satisfactorily and explains a large amount of variance in behavioral intention, suggesting that the proposed model will serve as a useful tool for analyzing online consumers' reactions to various privacy threats on the Internet.

2,597 citations


Journal ArticleDOI
TL;DR: In this paper, the authors address secure mining of association rules over horizontally partitioned data, incorporating cryptographic techniques to minimize the information shared while adding little overhead to the mining task.
Abstract: Data mining can extract important knowledge from large data collections, but sometimes these collections are split among various parties. Privacy concerns may prevent the parties from directly sharing the data and some types of information about the data. We address secure mining of association rules over horizontally partitioned data. The methods incorporate cryptographic techniques to minimize the information shared, while adding little overhead to the mining task.
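
A common cryptographic building block in this horizontally partitioned setting is a masked secure sum, which lets sites combine local itemset counts into a global support count without any site revealing its own count. The Python sketch below is an assumption-level illustration of that primitive only, not the paper's full protocol.

```python
import random

# Masked secure sum: the running total is blinded by a random mask, so
# every intermediate value looks random to the site that holds it.
# Assumption-level sketch; the site-to-site message passing is elided.

R = 2**32  # ring size; must exceed any possible true sum

def secure_sum(local_counts):
    mask = random.randrange(R)         # chosen by the initiating site
    running = mask
    for count in local_counts:         # in reality, passed site to site
        running = (running + count) % R
    return (running - mask) % R        # initiator removes its mask

# Three sites hold local support counts for the same candidate itemset:
assert secure_sum([120, 45, 300]) == 465
```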

986 citations


Journal ArticleDOI
01 May 2004
TL;DR: Road safety, traffic management, and driver convenience continue to improve, in large part thanks to appropriate usage of information technology, but this evolution has deep implications for security and privacy, which the research community has overlooked so far.
Abstract: Road safety, traffic management, and driver convenience continue to improve, in large part thanks to appropriate usage of information technology. But this evolution has deep implications for security and privacy, which the research community has overlooked so far.

796 citations


Proceedings ArticleDOI
14 Mar 2004
TL;DR: This work introduces a simple scheme relying on one-way hash functions that greatly enhances location privacy by changing traceable identifiers on every read, while requiring only a single, unreliable message exchange.
Abstract: Radio-frequency identification devices (RFID) may emerge as one of the most pervasive computing technologies in history. On the one hand, with tags affixed to consumer items as well as letters, packets, or vehicles, costs in the supply chain can be greatly reduced and new applications introduced. On the other hand, unique means of identification in each tag, such as serial numbers, enable effortless traceability of persons and goods. But data protection and privacy are worthwhile civil liberties. We introduce a simple scheme relying on one-way hash functions that greatly enhances location privacy by changing traceable identifiers on every read, while requiring only a single, unreliable message exchange. The scheme is thereby safe from many threats, including eavesdropping, message interception, spoofing, and replay attacks.
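
A minimal Python sketch of the general idea, assuming SHA-256 as the one-way hash and a toy tag/back-end pair that stay synchronized; the message format, refresh rule, and names are illustrative assumptions, not the authors' exact protocol:

```python
import hashlib

def h(x):
    # One-way hash used both for over-the-air responses and refreshes.
    return hashlib.sha256(x).digest()

class Tag:
    """Tag that never transmits the same identifier twice."""
    def __init__(self, ident):
        self.ident = ident

    def read(self):
        response = h(self.ident)              # what an eavesdropper sees
        self.ident = h(b"next" + self.ident)  # refresh on every read
        return response

class Backend:
    """Trusted back end mirroring each tag's identifier chain."""
    def __init__(self, serials):
        # expected response -> (serial number, current identifier)
        self.table = {h(i): (s, i) for s, i in serials.items()}

    def identify(self, response):
        hit = self.table.pop(response, None)
        if hit is None:
            return None                       # unknown or desynchronized tag
        serial, ident = hit
        nxt = h(b"next" + ident)              # mirror the tag's refresh
        self.table[h(nxt)] = (serial, nxt)
        return serial

tag = Tag(b"secret-0")
backend = Backend({"pallet-42": b"secret-0"})
assert backend.identify(tag.read()) == "pallet-42"
assert backend.identify(tag.read()) == "pallet-42"  # different bytes on air
```

Two successive reads emit unlinkable byte strings, yet the back end still resolves both to the same serial.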

568 citations


Proceedings ArticleDOI
14 Mar 2004
TL;DR: A method called the mix zone, developed to enhance user privacy in location-based services, is refined: the mathematical model is improved and a method of providing feedback to users is developed.
Abstract: Privacy of personal location information is becoming an increasingly important issue. We refine a method, called the mix zone, developed to enhance user privacy in location-based services. We improve the mathematical model, examine and minimise computational complexity and develop a method of providing feedback to users.
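
To give a feel for the quantity such a mathematical model measures, here is a toy calculation under a uniform-mixing assumption (the paper's refined model is more careful): if k users are inside the zone together, an observer matching exits to entries faces k! equally likely assignments, i.e. log2(k!) bits of uncertainty.

```python
import math

def mix_zone_anonymity_bits(k):
    # Entropy of a uniformly random entry-to-exit assignment of k users.
    return math.log2(math.factorial(k))

for k in (1, 2, 5, 10):
    print(k, round(mix_zone_anonymity_bits(k), 1))
# k=1 gives 0.0 bits: a lone user gains nothing from changing pseudonyms.
```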

540 citations


Journal ArticleDOI
TL;DR: This work investigates confidentiality issues of a broad category of rules, the association rules, and presents three strategies and five algorithms for hiding a group of association rules that is characterized as sensitive.
Abstract: Large repositories of data contain sensitive information that must be protected against unauthorized access. The protection of the confidentiality of this information has been a long-term goal for the database security research community and for government statistical agencies. Recent advances in data mining and machine learning algorithms have increased the disclosure risks that one may encounter when releasing data to outside parties. A key problem, still not sufficiently investigated, is the need to balance the confidentiality of the disclosed data with the legitimate needs of the data users. Every disclosure limitation method in some way affects and modifies true data values and relationships. We investigate confidentiality issues of a broad category of rules, the association rules. In particular, we present three strategies and five algorithms for hiding a group of association rules that is characterized as sensitive. A rule is characterized as sensitive if its disclosure risk is above a certain privacy threshold. Sometimes, sensitive rules should not be disclosed to the public since, among other things, they may be used for inferring sensitive data, or they may provide business competitors with an advantage. We also perform an evaluation study of the hiding algorithms in order to analyze their time complexity and the impact that they have on the original database.
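
One widely used hiding strategy in this line of work lowers the support of a sensitive itemset below the privacy threshold by deleting items from supporting transactions. The sketch below illustrates that strategy; the victim-selection rule and data layout are simplifying assumptions, not a verbatim rendering of any of the paper's five algorithms.

```python
def support(transactions, itemset):
    # Fraction of transactions containing every item of the itemset.
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def hide_itemset(transactions, itemset, threshold):
    """Delete one item of the sensitive itemset from supporting
    transactions until its support falls below the privacy threshold."""
    itemset = frozenset(itemset)
    victim = min(itemset)             # deterministic choice of item to drop
    for t in transactions:
        if support(transactions, itemset) < threshold:
            break                     # hidden: stop distorting the data
        if itemset <= t:
            t.discard(victim)         # transactions are mutable sets
    return transactions

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"c", "d"}]
hide_itemset(db, {"a", "b"}, threshold=0.5)
assert support(db, frozenset({"a", "b"})) < 0.5
```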

530 citations


Journal ArticleDOI
TL;DR: It is found that reading privacy notices is related to concern for privacy, positive perceptions about notice comprehension, and higher levels of trust in the notice, suggesting that effective privacy notices serve an important function in addressing risk issues related to e-commerce.

521 citations


Journal ArticleDOI
TL;DR: The results of exploratory factor analysis and regression analysis suggest that the relationship among the hypothesized antecedents and privacy concerns may be one that is more complex than is captured in the hypothesized model, in light of the strong theoretical justification for the role of information control in the extant literature on information privacy.
Abstract: This research focuses on the development and validation of an instrument to measure the privacy concerns of individuals who use the Internet and two antecedents, perceived vulnerability and perceived ability to control information. The results of exploratory factor analysis support the validity of the measures developed. In addition, the regression analysis results of a model including the three constructs provide strong support for the relationship between perceived vulnerability and privacy concerns, but only moderate support for the relationship between perceived ability to control information and privacy concerns. The latter unexpected results suggest that the relationship among the hypothesized antecedents and privacy concerns may be one that is more complex than is captured in the hypothesized model, in light of the strong theoretical justification for the role of information control in the extant literature on information privacy.

492 citations


Book ChapterDOI
31 Aug 2004
TL;DR: This paper analyzes the data partitioning (bucketization) technique and algorithmically develops this technique to build privacy-preserving indices on sensitive attributes of a relational table and develops a novel algorithm for achieving the desired balance between privacy and utility of the index.
Abstract: Database outsourcing is an emerging data management paradigm which has the potential to transform the IT operations of corporations. In this paper we address privacy threats in database outsourcing scenarios where trust in the service provider is limited. Specifically, we analyze the data partitioning (bucketization) technique and algorithmically develop this technique to build privacy-preserving indices on sensitive attributes of a relational table. Such indices enable an untrusted server to evaluate obfuscated range queries with minimal information leakage. We analyze the worst-case scenario of inference attacks that can potentially lead to breach of privacy (e.g., estimating the value of a data element within a small error margin) and identify statistical measures of data privacy in the context of these attacks. We also investigate precise privacy guarantees of data partitioning which form the basic building blocks of our index. We then develop a model for the fundamental privacy-utility tradeoff and design a novel algorithm for achieving the desired balance between privacy and utility (accuracy of range query evaluation) of the index.
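
A minimal sketch of the bucketization idea: the server stores only bucket labels for the sensitive attribute, and a range query is rewritten into the set of overlapping bucket labels, so the server returns a superset that the client filters. Equi-width buckets are an assumption here; the paper designs the partitioning to balance privacy against query precision.

```python
import bisect

# Hypothetical bucket boundaries for a sensitive salary column; the server
# sees only the bucket label of each row, never the raw value.
BOUNDARIES = [0, 25_000, 50_000, 75_000, 100_000]

def bucket_of(value):
    return bisect.bisect_right(BOUNDARIES, value) - 1

def rewrite_range(lo, hi):
    # Obfuscated query: the set of bucket labels overlapping [lo, hi].
    return set(range(bucket_of(lo), bucket_of(hi) + 1))

print(bucket_of(61_000))              # 2
print(rewrite_range(30_000, 61_000))  # {1, 2}: only bucket ids leak
```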

481 citations


Journal ArticleDOI
TL;DR: This article examines three possible explanations for differences in Internet privacy concerns revealed by national regulation: (1) these differences reflect and are related to differences in cultural values described by other research; (2) they reflect differences in Internet experience; or (3) they reflect differences in the desires of political institutions without reflecting underlying differences in privacy preferences.
Abstract: We examine three possible explanations for differences in Internet privacy concerns revealed by national regulation: (1) These differences reflect and are related to differences in cultural values described by other research; (2) these differences reflect differences in Internet experience; or (3) they reflect differences in the desires of political institutions without reflecting underlying differences in privacy preferences. Using a sample of Internet users from 38 countries matched against the Internet population of the United States, we find support for (1) and (2), suggesting the need for localized privacy policies. Privacy concerns decline with Internet experience. Controlling for experience, cultural values were associated with differences in privacy concerns. These cultural differences are mediated by regulatory differences, although new cultural differences emerge when differences in regulation are harmonized. Differences in regulation reflect but also shape country differences. Consumers in countries with sectoral regulation have less desire for more privacy regulation.

Book ChapterDOI
02 Dec 2004
TL;DR: This work shows that two of the private scalar product protocols, one of which was proposed in a leading data mining conference, are insecure, and describes a provably private scalar product protocol that is based on homomorphic encryption and can be used on massive datasets.
Abstract: In mining and integrating data from multiple sources, there are many privacy and security issues. In several different contexts, the security of the full privacy-preserving data mining protocol depends on the security of the underlying private scalar product protocol. We show that two of the private scalar product protocols, one of which was proposed in a leading data mining conference, are insecure. We then describe a provably private scalar product protocol that is based on homomorphic encryption and improve its efficiency so that it can also be used on massive datasets.
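
A toy sketch of the protocol shape described in the abstract, using Paillier-style additively homomorphic encryption: Alice sends encryptions of her vector, Bob raises each ciphertext to his corresponding entry and multiplies, and Alice decrypts the resulting scalar product. The tiny primes and exact message flow below are illustrative assumptions, nowhere near secure parameters and not the authors' construction.

```python
from math import gcd, lcm
import random

p, q = 1009, 1013                 # toy primes; real use needs >1024-bit moduli
n, n2 = p * q, (p * q) ** 2
g, lam = n + 1, lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # Paillier decryption constant

def enc(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def private_dot(alice, bob):
    cts = [enc(a) for a in alice]     # Alice -> Bob: her encrypted vector
    acc = 1
    for c, b in zip(cts, bob):        # Bob: prod(c_i^b_i) encrypts sum(a_i*b_i)
        acc = acc * pow(c, b, n2) % n2
    return dec(acc)                   # Bob -> Alice: one ciphertext to decrypt

assert private_dot([1, 2, 3], [4, 5, 6]) == 32
```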

Proceedings ArticleDOI
25 Apr 2004
TL;DR: This paper evaluates the usability of online privacy policies, as well as the practice of posting them, and determines that significant changes need to be made to current practice to meet regulatory and usability requirements.
Abstract: Studies have repeatedly shown that users are increasingly concerned about their privacy when they go online. In response to both public interest and regulatory pressures, privacy policies have become almost ubiquitous. An estimated 77% of websites now post a privacy policy. These policies differ greatly from site to site, and often address issues that are different from those that users care about. They are in most cases the users' only source of information. This paper evaluates the usability of online privacy policies, as well as the practice of posting them. We analyze 64 current privacy policies, their accessibility, writing, content, and evolution over time. We examine how well these policies meet user needs and how they can be improved. We determine that significant changes need to be made to current practice to meet regulatory and usability requirements.


Book ChapterDOI
14 Mar 2004
TL;DR: A new and flexible approach for privacy-preserving data mining is developed that does not require new problem-specific algorithms, since it maps the original data set into a new anonymized data set that preserves the correlations among the different dimensions.
Abstract: In recent years, privacy preserving data mining has become an important problem because of the large amount of personal data which is tracked by many business applications. In many cases, users are unwilling to provide personal information unless the privacy of sensitive information is guaranteed. In this paper, we propose a new framework for privacy preserving data mining of multi-dimensional data. Previous work for privacy preserving data mining uses a perturbation approach which reconstructs data distributions in order to perform the mining. Such an approach treats each dimension independently and therefore ignores the correlations between the different dimensions. In addition, it requires the development of a new distribution based algorithm for each data mining problem, since it does not use the multi-dimensional records, but uses aggregate distributions of the data as input. This leads to a fundamental re-design of data mining algorithms. In this paper, we will develop a new and flexible approach for privacy preserving data mining which does not require new problem-specific algorithms, since it maps the original data set into a new anonymized data set. This anonymized data closely matches the characteristics of the original data including the correlations among the different dimensions. We present empirical results illustrating the effectiveness of the method.
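
A minimal sketch in the spirit of this framework: group records, then replace each group with synthetic points drawn from the group's own mean and covariance, so correlations between dimensions survive anonymization. The grouping heuristic and resampling details are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def condense(X, k, seed=0):
    """Replace each group of k records with synthetic records drawn from
    the group's empirical mean and covariance. Assumes len(X) is a
    multiple of k, for brevity."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    order = np.argsort(X[:, 0])              # crude grouping along one axis
    out = []
    for i in range(0, len(X), k):
        grp = X[order[i:i + k]]
        mu = grp.mean(axis=0)
        cov = np.cov(grp, rowvar=False)      # captures cross-dim correlation
        out.append(rng.multivariate_normal(mu, cov, size=len(grp)))
    return np.vstack(out)

X = np.random.default_rng(1).normal(size=(100, 3))
X[:, 1] += 2 * X[:, 0]                        # inject a strong correlation
Xa = condense(X, k=10)
print(np.corrcoef(X[:, 0], X[:, 1])[0, 1])    # ~0.9 in the original data...
print(np.corrcoef(Xa[:, 0], Xa[:, 1])[0, 1])  # ...and preserved after masking
```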

Book ChapterDOI
15 Aug 2004
TL;DR: Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.
Abstract: In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.

Proceedings ArticleDOI
01 Nov 2004
TL;DR: This paper investigates data mining as a technique for masking data, termed data mining based privacy protection, and adapts an iterative bottom-up generalization from data mining to generalize the data.
Abstract: Well-known privacy-preserving data mining approaches modify existing data mining techniques to operate on randomized data. In this paper, we investigate data mining as a technique for masking data, an approach we therefore term data mining based privacy protection. This approach partially incorporates the requirement of a targeted data mining task into the process of masking data so that essential structure is preserved in the masked data. The idea is simple but novel: we explore the data generalization concept from data mining as a way to hide detailed information, rather than discover trends and patterns. Once the data is masked, standard data mining techniques can be applied without modification. Our work demonstrates another positive use of data mining technology: not only can it discover useful patterns, but it can also mask private information. We consider the following privacy problem: a data holder wants to release a version of data for building classification models, but wants to protect against linking the released data to an external source for inferring sensitive information. We adapt an iterative bottom-up generalization from data mining to generalize the data. The generalized data remains useful to classification but becomes difficult to link to other sources. The generalization space is specified by a hierarchical structure of generalizations. A key step is identifying the best generalization to climb up the hierarchy at each iteration. Enumerating all candidate generalizations is impractical. We present a scalable solution that examines at most one generalization in each iteration for each attribute involved in the linking.
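
A toy single-attribute illustration of the climbing idea: keep generalizing values up a taxonomy until the smallest equivalence class is large enough to resist linking. The taxonomy and the size-k stopping rule are hypothetical simplifications; the paper instead scores candidate generalizations against classification utility and handles many attributes.

```python
from collections import Counter

# Hypothetical taxonomy for a ZIP-like attribute: value -> its parent.
PARENT = {"02139": "0213*", "02138": "0213*", "0213*": "021**", "021**": "*"}

def generalize(column):
    # Climb one level of the hierarchy for every value in the column.
    return [PARENT.get(v, v) for v in column]

def smallest_class(column):
    # Size of the smallest equivalence class = ease of record linkage.
    return min(Counter(column).values())

zips = ["02139", "02138", "02139"]
k = 2                          # required minimum class size against linking
while smallest_class(zips) < k:
    zips = generalize(zips)    # one generalization step per iteration
print(zips)                    # ['0213*', '0213*', '0213*']
```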

Journal Article
TL;DR: In this paper, Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy, and they proved that unless the total number of queries is sublinear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.
Abstract: In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless. As databases grow increasingly large, the possibility of being able to query only a sub-linear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute databases (previous work dealt only with single-attribute databases) and vertically partitioned databases, in which different subsets of attributes are stored in different databases. In addition, we show how to use our techniques for data mining on published noisy statistics.
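
A toy model of the setting both papers analyze (an assumption-level sketch, not their construction): a curator answers subset-sum queries over a 0/1 database after adding bounded noise. The Dinur-Nissim result says that if an attacker can ask on the order of n such queries, noise of magnitude much smaller than sqrt(n) permits near-total reconstruction of the database.

```python
import random

def curator(db, noise_magnitude):
    """Return a query interface that perturbs each subset-sum answer."""
    def answer(indices):
        true_sum = sum(db[i] for i in indices)
        return true_sum + random.uniform(-noise_magnitude, noise_magnitude)
    return answer

db = [random.randint(0, 1) for _ in range(1000)]
ask = curator(db, noise_magnitude=3)  # small noise, hence few safe queries
print(ask(range(500)))                # noisy count over the first half
```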

Book
01 Jan 2004
TL;DR: In this article, Kafka and Orwell discuss the problems of information privacy law and the limits of market-based solutions in the context of digital databases, and propose a framework for the protection of privacy in computer databases.
Abstract: Contents: Acknowledgments; 1. Introduction. Part I: Computer Databases: 2. The Rise of the Digital Dossier; 3. Kafka and Orwell: Reconceptualizing Information Privacy; 4. The Problems of Information Privacy Law; 5. The Limits of Market-Based Solutions; 6. Architecture and the Protection of Privacy. Part II: Public Records: 7. The Problem of Public Records; 8. Access and Aggregation: Rethinking Privacy and Transparency. Part III: Government Access: 9. Government Information Gathering; 10. The Fourth Amendment, Records, and Privacy; 11. Reconstructing the Architecture; 12. Conclusion. Notes; Index; About the Author.

Journal ArticleDOI
TL;DR: A new smart meeting room system called EasyMeeting explores the use of multi-agent systems, Semantic Web ontologies, reasoning, and declarative policies for security and privacy.
Abstract: A new smart meeting room system called EasyMeeting explores the use of multi-agent systems, Semantic Web ontologies, reasoning, and declarative policies for security and privacy. Building on an earlier pervasive computing system, EasyMeeting provides relevant services and information to meeting participants based on their situational needs. The system also exploits the context-aware support provided by the Context Broker Architecture (Cobra). Cobra's intelligent broker agent maintains a shared context model for all computing entities in the space and enforces user-defined privacy policies.

Journal ArticleDOI
01 Mar 2004
TL;DR: The authors investigate disclosure-control algorithms that hide users' positions in sensitive areas and withhold path information that indicates which areas they have visited.
Abstract: Although some users might willingly subscribe to location-tracking services, few would be comfortable having their location known in all situations. The authors investigate disclosure-control algorithms that hide users' positions in sensitive areas and withhold path information that indicates which areas they have visited.
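
A minimal sketch of such a disclosure-control filter, assuming rectangular sensitive areas and a simple rule that both drops fixes inside an area and cuts the reported path there, so the published segments do not reveal an entry into a hidden zone. The authors' algorithms are considerably more sophisticated.

```python
def in_zone(point, zone):
    (x, y), (x0, y0, x1, y1) = point, zone
    return x0 <= x <= x1 and y0 <= y <= y1

def sanitize(trace, zones):
    """Suppress fixes inside sensitive zones and break the path there."""
    segments, current = [], []
    for p in trace:
        if any(in_zone(p, z) for z in zones):
            if current:
                segments.append(current)  # cut the path at the boundary
            current = []
        else:
            current.append(p)
    if current:
        segments.append(current)
    return segments                       # zone visits are withheld

trace = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(sanitize(trace, zones=[(1.5, -1, 2.5, 1)]))
# [[(0, 0), (1, 0)], [(3, 0)]] -- the fix inside the zone never appears
```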

Journal ArticleDOI
TL;DR: In this paper, the authors examine online behaviors that increase or reduce risk of online identity theft and suggest that consumers need to be vigilant of new threats, such as the placement of cookies, hacking into hard drives, intercepting transactions, and observing online behavior via spyware.
Abstract: This article examines online behaviors that increase or reduce risk of online identity theft. The authors report results from three consumer surveys that indicate the propensity to protect oneself from online identity theft varies by population. The authors then examine attitudinal, behavioral, and demographic antecedents that predict the tendency to protect one's privacy and identity online. Implications and suggestions for managers, public policy makers, and consumers related to protecting online privacy and identity theft are provided.

Identity theft, defined as the appropriation of someone else's personal or financial identity to commit fraud or theft, is one of the fastest growing crimes in the United States (Federal Trade Commission 2001) and is increasingly affecting consumers' online transactions. In the discussion of identity theft, the Internet represents an important research context. Because of its ability to accumulate and disseminate vast amounts of information electronically, the Internet may make theft of personal or financial identity easier. Indeed, online transactions pose several new threats that consumers need to be vigilant of, such as the placement of cookies, hacking into hard drives, intercepting transactions, and observing online behavior via spyware (Cohen 2001). Online identity theft through the use of computers does not necessarily have real-space analogs, as exemplified by techniques of IP spoofing and page jacking (Katyal 2001). Recent instances of online identity theft appearing in the popular press include a teenager who used e-mail and a bogus Web page to gain access to individuals' credit card data and steal thousands of dollars from consumers (New York Times 2003), and cyber-thieves who were able to access tens of thousands of personal credit reports online (Salkever 2002).

The purpose of this article, as depicted in Figure 1, is to explore the extent to which consumers are controlling their information online and whether privacy attitudes, offline data behaviors, online experience, and consumer background predict the level of online protection practiced. There is an explicit link being made by privacy advocates that suggests controlling one's information is a step toward protecting oneself from identity theft (Cohen 2001; Federal Trade Commission 2001). To evaluate the level of customer protection, we analyze survey results of consumer online behaviors, many of which are depicted in Figure 1, and investigate their relationship to antecedent conditions suggested in the literature. [Figure 1 omitted]

In particular, we address the following research questions: What is the relationship between offline data protection practices and online protection behavior? What is the relationship between online shopping behaviors and online protection behavior? What is the relationship between privacy attitudes and online protection behavior? What is the relationship between demographics and online protection behavior?

The remainder of this article is organized in four sections. We begin in the first section by reviewing the risks consumers face online and the steps they can take to minimize their risk of privacy invasion and identity theft. In the second section, we describe three surveys of consumers' online behaviors related to online privacy and identity theft. We discuss the results in the third section and implications for managers, public policy makers, and consumers in the fourth and final section.

ONLINE PRIVACY AND IDENTITY THEFT. While identity theft has caught the government's, businesses', and the public's attention (Hemphill 2001; Milne 2003), the empirical scholarly literature in this area is limited to the closely related issue of online privacy. Research has measured consumers' concern for online privacy (Sheehan and Hoy 2000), their ability to opt out of online relationships (Milne and Rohm 2000), and the extent to which businesses have implemented fair information practices through the posting of their online privacy notices (Culnan 2000; Miyazaki and Fernandez 2001; Milne and Culnan 2002). …

Journal ArticleDOI
TL;DR: This paper studies the erosion of privacy when genomic data, either pseudonymous or data believed to be anonymous, are released into a distributed healthcare environment and develops algorithms that link genomic data to named individuals in publicly available records by leveraging unique features in patient-location visit patterns.

Book ChapterDOI
31 Aug 2004
TL;DR: Through a comprehensive set of performance experiments, it is shown that the cost of privacy enforcement is small and that the approach scales to large databases.
Abstract: We present a practical and efficient approach to incorporating privacy policy enforcement into an existing application and database environment, and we explore some of the semantic tradeoffs introduced by enforcing these privacy policy rules at cell-level granularity. Through a comprehensive set of performance experiments, we show that the cost of privacy enforcement is small and that the approach scales to large databases.
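
A toy sketch of what cell-level granularity means in practice, with a hypothetical predicate-style policy: disallowed cells are masked to NULL while the rest of the row survives. The paper enforces this inside an existing database engine rather than by post-filtering in application code as done here.

```python
def enforce(rows, allowed, purpose):
    """Mask every cell the policy predicate rejects; keep the row."""
    return [
        {col: (val if allowed(row, col, purpose) else None)
         for col, val in row.items()}
        for row in rows
    ]

rows = [{"name": "Ada", "email": "ada@example.org", "opt_in": False}]
# Hypothetical policy: e-mail is visible for marketing only with opt-in.
allowed = lambda row, col, purpose: (
    col != "email" or (purpose == "marketing" and row["opt_in"])
)
print(enforce(rows, allowed, "marketing"))
# [{'name': 'Ada', 'email': None, 'opt_in': False}]
```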

01 Jan 2004
TL;DR: The thesis of this paper is that what really motivates commercial organizations (even though they often do not realize it clearly themselves) is the growing incentive to price discriminate, coupled with the increasing ability to price discriminate.
Abstract: The rapid erosion of privacy poses numerous puzzles. Why is it occurring, and why do people care about it? This paper proposes an explanation for many of these puzzles in terms of the increasing importance of price discrimination. Privacy appears to be declining largely in order to facilitate differential pricing, which offers greater social and economic gains than auctions or shopping agents. The thesis of this paper is that what really motivates commercial organizations (even though they often do not realize it clearly themselves) is the growing incentive to price discriminate, coupled with the increasing ability to price discriminate. It is the same incentive that has led to the airline yield management system, with a complex and constantly changing array of prices. It is also the same incentive that led railroads to invent a variety of price and quality differentiation schemes in the 19th century. Privacy intrusions serve to provide the information that allows sellers to determine buyers' willingness to pay. They also allow monitoring of usage, to ensure that arbitrage is not used to bypass discriminatory pricing. Economically, price discrimination is usually regarded as desirable, since it often increases the efficiency of the economy. That is why it is frequently promoted by governments, either through explicit mandates or through indirect means. On the other hand, price discrimination often arouses strong opposition from the public. There is no easy resolution to the conflict between sellers' incentives to price discriminate and buyers' resistance to such measures. The continuing tension between these two factors will have important consequences for the nature of the economy. It will also determine which technologies will be adopted widely. Governments will likely play an increasing role in controlling pricing, although their roles will continue to be ambiguous. Sellers are likely to rely to an even greater extent on techniques such as bundling that will allow them to extract more consumer surplus and also to conceal the extent of price discrimination. Micropayments and auctions are likely to play a smaller role than is often expected. In general, because of strong conflicting pressures, privacy is likely to prove an intractable problem that will be prominent on the public agenda for the foreseeable future.

Proceedings ArticleDOI
16 Nov 2004
TL;DR: Detailed analysis shows that ASR achieves both the anonymity and security properties, as defined in the requirements, for routing protocols in mobile ad-hoc networks.
Abstract: Although there are a large number of papers on secure routing in mobile ad-hoc networks, only a few consider the anonymity issue. We define stricter requirements on the anonymity and security properties of the routing protocol, and observe that previous research works provide only weak location privacy and route anonymity, and are vulnerable to specific attacks. Therefore, we propose the anonymous secure routing (ASR) protocol, which provides additional anonymity properties, i.e., identity anonymity and strong location privacy, and at the same time ensures the security of discovered routes against various passive and active attacks. Detailed analysis shows that ASR achieves both the anonymity and security properties, as defined in the requirements, for routing protocols in mobile ad-hoc networks.

Proceedings ArticleDOI
14 Mar 2004
TL;DR: In this paper, the authors describe a new architecture called the Context Broker Architecture (CoBrA) that exploits Semantic Web technologies for supporting pervasive context-aware systems, and describe the use of CoBrA, its associated ontologies, and its privacy protection mechanism in an intelligent meeting room prototype.
Abstract: This document describes a new architecture that exploits Semantic Web technologies for supporting pervasive context-aware systems. This architecture, called the Context Broker Architecture (CoBrA), differs from other architectures in using the Web Ontology Language OWL for modelling ontologies of context and for supporting context reasoning. Central to our architecture is a broker agent that maintains a shared model of context for all computing entities in the space and enforces the privacy policies defined by the users when sharing their contextual information. We describe the use of CoBrA, its associated ontologies, and its privacy protection mechanism in an intelligent meeting room prototype.

Journal ArticleDOI
01 Jul 2004
TL;DR: In this work, ontologies are proposed for modeling the high-level security requirements and capabilities of Web services and clients, which helps to match a client's request with appropriate services based on security criteria as well as functional descriptions.
Abstract: Web services will soon handle users' private information. They'll need to provide privacy guarantees to prevent this delicate information from ending up in the wrong hands. More generally, Web services will need to reason about their users' policies that specify who can access private information and under what conditions. These requirements are even more stringent for semantic Web services that exploit the semantic Web to automate their discovery and interaction, because they must autonomously decide what information to exchange and how. In our previous work, we proposed ontologies for modeling the high-level security requirements and capabilities of Web services and clients [1]. This modeling helps to match a client's request with appropriate services, based on security criteria as well as functional descriptions.
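
A deliberately tiny sketch of the matching step, with flat capability labels and set containment standing in for the ontology-based reasoning the authors propose; the service names and labels are hypothetical:

```python
# Each service advertises the security capabilities it supports.
SERVICES = {
    "payment-svc":  {"confidentiality", "x509-authentication", "audit"},
    "calendar-svc": {"confidentiality"},
}

def match(required, services=SERVICES):
    """Return services whose capabilities cover the client's requirements."""
    return [name for name, caps in services.items() if required <= caps]

print(match({"confidentiality", "x509-authentication"}))  # ['payment-svc']
```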

Book
10 Dec 2004
TL;DR: In this book, the author develops three dimensions of privacy, decisional, informational, and local, analyzing informational privacy in terms of expectations, knowledge, and autonomy, and local privacy through "a room of one's own": self-invention, self-presentation, and autonomy.
Abstract: Foreword. I. Introduction: 1. Discourses on privacy. 2. Privacy: conceptual clarifications. 3. The framework of liberal democracy. 4. Cultural differences: autonomy and authenticity. 5. A comment on the method. 6. Privacy and autonomy: the line of argument. II. Equal Freedom, Equal Privacy. On the Critique of the Liberal Tradition: 1. Head or heart: contradictions in the liberal concept of privacy. 2. The feminist critique. 3. Three classics of liberal thought: Locke, Mill and Rawls. 4. Equality and difference between the sexes. Parenthesis: On the debate over equality and difference. 5. Equal freedom, equal privacy. III. Freedom, Privacy and Autonomy: 1. Introduction. 2. A general concept of freedom. 3. Freedom and autonomy. Authenticity and identification. Parenthesis: On the concept of authenticity. The genesis of desires and autonomy as habitus. Goals and projects. 4. Why do we value privacy? 5. Privacy and autonomy. IV. The Three Dimensions of Privacy: 1. Decisional privacy: scope for action and decisions. 1.1. Private matters and freedom for decisions. Parenthesis: abortion and the right to decisional privacy (Roe v. Wade). 1.2. Decisional privacy and autonomy (1): the communitarian critique. 1.3. Decisional privacy and autonomy (2): the feminist critique. 1.4. What sort of freedom is protected by privacy? 2. Informational privacy: limits to knowledge. 2.1. Expectations: what do other people know about me? 2.2. Informational privacy and unspecified others: the Panopticon. 2.3. Informational privacy and specified others: collusions, friendships and intimate relations. 2.4. Expectations, knowledge, autonomy. 3. Local privacy: the private home. 3.1. The refuge of privacy. 3.2. A room of one's own: self-invention, self-presentation and autonomy. 3.3. Privacy and the family: love and justice. V. Interfaces: Public and Private: 1. Interfaces and ambivalences. 2. Exposure: the staging of privacy in the public realm. 3. Concealment: the protection of the public realm from private matters. 4. The private and the public person: dissonant identities.

Posted Content
TL;DR: Professor Jerry Kang argues in favor of a default rule that allows only "functionally necessary" processing of personal information unless the parties expressly agree otherwise, and proposes a statute, entitled the Cyberspace Privacy Act, which translates academic theory into legislative practice.
Abstract: Cyberspace is the rapidly growing network of computing and communication technologies that have profoundly altered our lives. We already carry out myriad social, economic, and political transactions through cyberspace, and, as the technology improves, so will their quality and quantity. But the very technology that enables these transactions also makes detailed, cumulative, invisible observation of our selves possible. The potential for wide-ranging surveillance of all our cyber-activities presents a serious threat to information privacy. To help readers grasp the nature of this threat, Professor Jerry Kang starts with a general primer on cyberspace privacy. He provides a clarifying structure of philosophical and technological terms, descriptions, and concepts that will help analyze any problem at the nexus of privacy and computing-communication technologies. In the second half of the article, he focuses sharply on the specific problem of personal data generated in cyberspace transactions. The private sector seeks to exploit this data commercially, primarily for database marketing, but many individuals resist. The dominant approach to solving this problem is to view personal information as a commodity that interested parties should contract for in the course of negotiating a cyberspace transaction. But this approach has so far failed to address a critical question: Which default rules should govern the flow of personal information when parties do not explicitly contract about privacy? On economic efficiency and human dignity grounds, Professor Kang argues in favor of a default rule that allows only "functionally necessary" processing of personal information unless the parties expressly agree otherwise. The article concludes with a proposed statute, entitled the Cyberspace Privacy Act, which translates academic theory into legislative practice.