
Showing papers on "Information privacy" published in 2006


Book ChapterDOI
04 Mar 2006
TL;DR: In this article, the authors show that for several particular applications substantially less noise is needed than was previously understood to be the case, and they also obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive ones.
Abstract: We continue a line of research initiated in [10,11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = ∑_i g(x_i), where x_i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
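
As a minimal sketch of the calibration idea described above, the snippet below perturbs a counting query of sensitivity 1 with Laplace noise whose scale grows with sensitivity / epsilon. The specific query, the toy database, and the epsilon value are illustrative choices, not taken from the paper.

```python
import numpy as np

def count_over_threshold(db, threshold):
    """Query f: the number of rows whose value exceeds a threshold.
    Changing any single row changes the count by at most 1, so the
    sensitivity of f is 1."""
    return float(np.sum(db > threshold))

def add_calibrated_noise(true_answer, sensitivity, epsilon, rng=None):
    """Return the true answer plus Laplace noise whose scale grows
    with sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

db = np.array([0.2, 0.9, 0.4, 0.7, 0.95, 0.1])   # toy sensitive database
true = count_over_threshold(db, 0.5)
noisy = add_calibrated_noise(true, sensitivity=1.0, epsilon=0.5)
print(f"true answer = {true}, noisy answer = {noisy:.2f}")
```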

6,211 citations


Journal Article
TL;DR: The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
Abstract: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.

3,629 citations


Proceedings ArticleDOI
03 Apr 2006
TL;DR: This paper shows with two simple attacks that a k-anonymized dataset has some subtle, but severe privacy problems, and proposes a novel and powerful privacy definition called ℓ-diversity, which is practical and can be implemented efficiently.
Abstract: Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k-1 other records with respect to certain "identifying" attributes. In this paper we show with two simple attacks that a k-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called ℓ-diversity. In addition to building a formal foundation for ℓ-diversity, we show in an experimental evaluation that ℓ-diversity is practical and can be implemented efficiently.
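
The sketch below illustrates the homogeneity problem the abstract describes: a toy table that is k-anonymous can still leak a sensitive value when an equivalence class lacks diversity. It checks only the simplest "distinct values" reading of ℓ-diversity; the table, attribute names, and thresholds are made up for illustration.

```python
from collections import defaultdict

def equivalence_classes(records, quasi_identifiers):
    """Group records by their (generalized) quasi-identifier values."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[q] for q in quasi_identifiers)].append(r)
    return groups

def is_k_anonymous(records, quasi_identifiers, k):
    return all(len(g) >= k
               for g in equivalence_classes(records, quasi_identifiers).values())

def is_distinct_l_diverse(records, quasi_identifiers, sensitive, l):
    """Every equivalence class must contain at least l distinct
    sensitive values; otherwise a homogeneity attack succeeds."""
    return all(len({r[sensitive] for r in g}) >= l
               for g in equivalence_classes(records, quasi_identifiers).values())

# A 4-anonymous toy table whose first class is vulnerable: every record
# in it shares the same sensitive value.
table = [
    {"zip": "130**", "age": "<30", "disease": "heart disease"},
    {"zip": "130**", "age": "<30", "disease": "heart disease"},
    {"zip": "130**", "age": "<30", "disease": "heart disease"},
    {"zip": "130**", "age": "<30", "disease": "heart disease"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
    {"zip": "148**", "age": ">=40", "disease": "flu"},
    {"zip": "148**", "age": ">=40", "disease": "heart disease"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
]
print(is_k_anonymous(table, ["zip", "age"], k=4))                     # True
print(is_distinct_l_diverse(table, ["zip", "age"], "disease", l=3))   # False
```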

2,700 citations


Journal ArticleDOI
TL;DR: This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work.
Abstract: This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.

1,994 citations


Book ChapterDOI
28 Jun 2006
TL;DR: In this paper, the authors survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution and compare the survey data to information retrieved from the network itself.
Abstract: Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network's members and non-members; we analyze the impact of privacy concerns on members' behavior; we compare members' stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual's privacy concerns are only a weak predictor of his membership to the network. Also, privacy-concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members' misconceptions about the online community's actual size and composition, and about the visibility of members' profiles.

1,888 citations


Journal ArticleDOI
TL;DR: Although Internet privacy concerns inhibit e-commerce transactions, the cumulative influence of Internet trust and personal Internet interest are important factors that can outweigh privacy risk perceptions in the decision to disclose personal information when an individual uses the Internet.
Abstract: While privacy is a highly cherished value, few would argue with the notion that absolute privacy is unattainable. Individuals make choices in which they surrender a certain degree of privacy in exchange for outcomes that are perceived to be worth the risk of information disclosure. This research attempts to better understand the delicate balance between privacy risk beliefs and confidence and enticement beliefs that influence the intention to provide personal information necessary to conduct transactions on the Internet. A theoretical model that incorporated contrary factors representing elements of a privacy calculus was tested using data gathered from 369 respondents. Structural equations modeling (SEM) using LISREL validated the instrument and the proposed model. The results suggest that although Internet privacy concerns inhibit e-commerce transactions, the cumulative influence of Internet trust and personal Internet interest are important factors that can outweigh privacy risk perceptions in the decision to disclose personal information when an individual uses the Internet. These findings provide empirical support for an extended privacy calculus model.

1,870 citations


Journal ArticleDOI
TL;DR: The uproar over privacy issues in social networks is discussed by describing a privacy paradox, private versus public space, and social networking privacy issues.
Abstract: Teenagers will freely give up personal information to join social networks on the Internet. Afterwards, they are surprised when their parents read their journals. Communities are outraged by the personal information posted by young people online and colleges keep track of student activities on and off campus. The posting of personal information by teens and students has consequences. This article will discuss the uproar over privacy issues in social networks by describing a privacy paradox; private versus public space; and, social networking privacy issues. It will finally discuss proposed privacy solutions and steps that can be taken to help resolve the privacy paradox.

1,166 citations


Journal ArticleDOI
TL;DR: In this article, the author provides a framework for how the legal system can come to a better understanding of privacy problems and proposes a taxonomy that focuses specifically on the different kinds of activities that impinge upon privacy.
Abstract: incantations of "privacy" are not nuanced enough to capture the problems involved. The 9/11 Commission Report, for example, recommends that, as government agencies engage in greater information sharing with each other and with businesses, they should "safeguard the privacy of individuals about whom information is shared." But what does safeguarding "privacy" mean? Without an understanding of what the privacy problems are, how can privacy be addressed in a meaningful way? Many commentators have spoken of privacy as a unitary concept with a uniform value, which is unvarying across different situations. In contrast, I have argued that privacy violations involve a variety of types of harmful or problematic activities. Consider the following examples of activities typically referred to as privacy violations: A newspaper reports the name of a rape victim. Reporters deceitfully gain entry to a person's home and secretly photograph and record the person. New X-ray devices can see through people's clothing, amounting to what some call a "virtual strip-search." The government uses a thermal sensor device to detect heat patterns in a person's home. A company markets a list of five million elderly incontinent women. Despite promising not to sell its members' personal information to others, a company does so anyway. These violations are clearly not the same.

Despite the wide-ranging body of law addressing privacy issues today, commentators often lament the law's inability to adequately protect privacy. Courts and policymakers frequently have a singular view of privacy in mind when they assess whether or not an activity violates privacy. As a result, they either conflate distinct privacy problems despite significant differences or fail to recognize a problem entirely. Privacy problems are frequently misconstrued or inconsistently recognized in the law. The concept of "privacy" is far too vague to guide adjudication and lawmaking. How can privacy be addressed in a manner that is non-reductive and contextual, yet simultaneously useful in deciding cases and making sense of the multitude of privacy problems we face? In this Article, I provide a framework for how the legal system can come to a better understanding of privacy. I aim to develop a taxonomy that focuses more specifically on the different kinds of activities that impinge upon privacy. I endeavor to shift focus away from the vague term "privacy" and toward the specific activities that pose privacy problems.

Although various attempts at explicating the meaning of "privacy" have been made, few have attempted to identify privacy problems in a comprehensive and concrete manner. The most famous attempt was undertaken in 1960 by the legendary torts scholar William Prosser. He discerned four types of harmful activities redressed under the rubric of privacy: 1. Intrusion upon the plaintiff's seclusion or solitude, or into his private affairs. 2. Public disclosure of embarrassing private facts about the plaintiff. 3. Publicity which places the plaintiff in a false light in the public eye. 4. Appropriation, for the defendant's advantage, of the plaintiff's name or likeness. Prosser's great contribution was to synthesize the cases that emerged from Samuel Warren and Louis Brandeis's famous law review article, The Right to Privacy. However, Prosser focused only on tort law. American privacy law is significantly more vast and complex, extending beyond torts to the constitutional "right to privacy," Fourth Amendment law, evidentiary privileges, dozens of federal privacy statutes, and hundreds of state privacy statutes. The Freedom of Information Act contains two exemptions to protect against an "unwarranted invasion of personal privacy." Numerous state public records laws also contain privacy exemptions. Many state constitutions contain provisions explicitly providing for a right to privacy. Moreover, Prosser wrote over forty years ago, before the breathtaking rise of the Information Age. New technologies have given rise to a panoply of different privacy problems, and many of them do not readily fit into Prosser's four categories. Therefore, a new taxonomy to address privacy violations for contemporary times is sorely needed.

The taxonomy I develop is an attempt to identify and understand the different kinds of socially recognized privacy violations, one that hopefully will enable courts and policymakers to better balance privacy against countervailing interests. The purpose of this taxonomy is to aid in the development of the law that addresses privacy. Although the primary focus will be on the law, this taxonomy is not simply an attempt to catalog existing laws, as was Prosser's purpose. Rather, it is an attempt to understand various privacy harms and problems that have achieved a significant degree of social recognition. I will frequently use the law as a source for determining what privacy violations society recognizes. However, my aim is not simply to take stock of where the law currently stands today, but to provide a useful framework for its future development.

892 citations


Journal ArticleDOI
TL;DR: This paper proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data and presents extensive theoretical analysis and experimental results.
Abstract: This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications.
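
A minimal sketch of the multiplicative-projection idea behind this line of work: if two parties apply the same random projection matrix (scaled by 1/sqrt(k)) to their private vectors, inner products are approximately preserved in the projected space. This is only the statistical core, not the paper's full protocol; the dimensions and data below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 200, 50           # original and projected dimensionality
x = rng.normal(size=d)   # party A's private vector
y = rng.normal(size=d)   # party B's private vector

# Both parties apply the same secret random projection R (entries i.i.d.
# N(0, 1)), scaled so that inner products are preserved in expectation:
# E[(Rx / sqrt(k)) . (Ry / sqrt(k))] = x . y.
R = rng.normal(size=(k, d))
x_proj = R @ x / np.sqrt(k)
y_proj = R @ y / np.sqrt(k)

print("true inner product     :", x @ y)
print("estimated from sketches:", x_proj @ y_proj)
```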

565 citations


Journal ArticleDOI
TL;DR: In this paper, an integrative framework from the information privacy and relationship marketing arenas was employed to investigate whether a traditional business-to-business relationship marketing framework could be applied to the information-intensive online business-to-consumer channel.

554 citations


Proceedings ArticleDOI
21 May 2006
TL;DR: This work formalizes some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information; the framework is expressive enough to capture naturally many notions of privacy found in legislation, including HIPAA, COPPA, and GLBA.
Abstract: Contextual integrity is a conceptual framework for understanding privacy expectations and their implications developed in the literature on law, public policy, and political philosophy. We formalize some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information. In comparison with access control and privacy policy frameworks such as RBAC, EPAL, and P3P, these norms focus on who personal information is about, how it is transmitted, and past and future actions by both the subject and the users of the information. Norms can be positive or negative depending on whether they refer to actions that are allowed or disallowed. Our model is expressive enough to capture naturally many notions of privacy found in legislation, including those found in HIPAA, COPPA, and GLBA. A number of important problems regarding compliance with privacy norms, future requirements associated with specific actions, and relations between policies and legal standards reduce to standard decision procedures for temporal logic.
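
Below is a toy, hypothetical sketch of the flavor of transmission norms: each communication is described by sender role, receiver role, subject, and attribute, and is checked against positive (allowing) and negative (disallowing) norms. The paper's actual framework reasons in temporal logic over past and future actions; none of the roles or norms here come from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transmission:
    sender_role: str      # e.g. "physician"
    receiver_role: str    # e.g. "insurer"
    subject: str          # whom the information is about
    attribute: str        # e.g. "diagnosis"

# Hypothetical norms: positive norms allow a transmission, negative norms
# forbid it, and anything not covered by a positive norm is denied.
POSITIVE_NORMS = {("physician", "insurer", "diagnosis"),
                  ("physician", "patient", "diagnosis")}
NEGATIVE_NORMS = {("physician", "employer", "diagnosis")}

def permitted(t: Transmission) -> bool:
    key = (t.sender_role, t.receiver_role, t.attribute)
    if key in NEGATIVE_NORMS:
        return False
    return key in POSITIVE_NORMS

print(permitted(Transmission("physician", "insurer", "alice", "diagnosis")))   # True
print(permitted(Transmission("physician", "employer", "alice", "diagnosis")))  # False
```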

Journal ArticleDOI
TL;DR: The previously presented biometric-hash framework prescribes the integration of external randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics comparable to cryptographic ciphers or hashes, which are explained in this paper as arising from the random multispace quantization of biometric and external random inputs.
Abstract: Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the random multispace quantization (RMQ) of biometric and external random inputs.
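
A simplified sketch of the token-plus-biometric idea: a token seed drives a random orthonormal projection of the biometric feature vector, which is then binarised into a bitstring, so a compromised hash can be revoked by reissuing the token. The projection size, thresholding rule, and stand-in feature vector are illustrative assumptions, not the paper's exact RMQ construction.

```python
import numpy as np

def biohash(feature_vec, token_seed, n_bits=32):
    """Simplified BioHash-style sketch: project the biometric feature
    vector onto a token-derived random orthonormal basis and binarise
    by thresholding at zero. Reissuing the token reissues the hash."""
    rng = np.random.default_rng(token_seed)
    random_matrix = rng.normal(size=(len(feature_vec), n_bits))
    q, _ = np.linalg.qr(random_matrix)      # orthonormalise the random directions
    projection = feature_vec @ q[:, :n_bits]
    return (projection > 0).astype(np.uint8)

features = np.random.default_rng(1).normal(size=128)   # stand-in biometric template
h1 = biohash(features, token_seed=42)
h2 = biohash(features, token_seed=42)    # same user, same token -> same bits
h3 = biohash(features, token_seed=777)   # reissued token -> different bits
print((h1 == h2).all(), (h1 == h3).mean())
```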

Journal ArticleDOI
TL;DR: This research explores the two alternative antecedents to Internet privacy concerns and intention to engage in e-commerce activity and contributes to the understanding of Internet privacy and its importance in the global information environment.
Abstract: This study focuses on antecedents to Internet privacy concerns and the behavioral intention to conduct on-line transactions. Perceptions of privacy are socially constructed through communication and transactions with social entities over a networked environment, a process that involves a certain level of technical skill and literacy. The research model specifies that social awareness and Internet literacy are related to both Internet privacy and intention to transact. Survey data collected from 422 respondents were analyzed using structural equation modeling (SEM) with LISREL and provided support for the hypothesized relationships. Social awareness was positively related and Internet literacy was negatively related to Internet privacy concerns. Moreover, Internet privacy concerns were negatively related and Internet literacy positively related to intention to transact on-line. This research explores the two alternative antecedents to Internet privacy concerns and intention to engage in e-commerce activity and contributes to our understanding of Internet privacy and its importance in the global information environment. The construct of social awareness can be broadened to develop a much-needed construct of awareness in MIS research related to the voluntary usage of information technology. A segmentation of Internet users with respect to privacy concerns is also proposed.

Journal ArticleDOI
TL;DR: Previous models of e-commerce adoption are extended by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions, and results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust.
Abstract: Although electronic commerce experts often cite privacy concerns as barriers to consumer electronic commerce, there is a lack of understanding about how these privacy concerns impact consumers' willingness to conduct transactions online. Therefore, the goal of this study is to extend previous models of e-commerce adoption by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions. To investigate this, we conducted surveys focusing on consumers' willingness to transact with a well-known and less well-known Web merchant. Results of the study indicate that concern for information privacy affects risk perceptions, trust, and willingness to transact for a well-known merchant, but not for a less well-known merchant. In addition, the results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust. Implications for researchers and practitioners are discussed.

Book ChapterDOI
28 Jun 2006
TL;DR: A data model that augments location data with uncertainty is suggested, and imprecise queries that hide the location of the query issuer and yield probabilistic results are proposed; the evaluation and quality aspects of a range query are investigated.
Abstract: Location-based services, such as finding the nearest gas station, require users to supply their location information. However, a user's location can be tracked without her consent or knowledge. Lowering the spatial and temporal resolution of location data sent to the server has been proposed as a solution. Although this technique is effective in protecting privacy, it may be overkill and the quality of desired services can be severely affected. In this paper, we suggest a framework where uncertainty can be controlled to provide high quality and privacy-preserving services, and investigate how such a framework can be realized in the GPS and cellular network systems. Based on this framework, we suggest a data model to augment uncertainty to location data, and propose imprecise queries that hide the location of the query issuer and yield probabilistic results. We investigate the evaluation and quality aspects for a range query. We also provide novel methods to protect our solutions against trajectory-tracing. Experiments are conducted to examine the effectiveness of our approaches.
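
The following sketch shows one way a probabilistic answer to a range query could be computed when the user reports a cloaked disc instead of an exact location: assuming (for illustration) a uniform distribution over the disc, a Monte Carlo estimate gives the probability that the user falls inside the query rectangle. The geometry and numbers are hypothetical, not taken from the paper.

```python
import numpy as np

def prob_in_range(cloak_center, cloak_radius, range_min, range_max, n=100_000, seed=0):
    """Probability that the true location falls inside a rectangular
    range query, assuming it is uniformly distributed over the reported
    cloaked disc (a simplifying assumption)."""
    rng = np.random.default_rng(seed)
    # Sample uniformly from the disc via rejection sampling from a square.
    pts = rng.uniform(-cloak_radius, cloak_radius, size=(2 * n, 2))
    pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= cloak_radius][:n] + cloak_center
    inside = np.all((pts >= range_min) & (pts <= range_max), axis=1)
    return inside.mean()

# User's true location is hidden somewhere inside a disc of radius 500 m.
p = prob_in_range(cloak_center=np.array([1000.0, 2000.0]),
                  cloak_radius=500.0,
                  range_min=np.array([800.0, 1800.0]),
                  range_max=np.array([1500.0, 2500.0]))
print(f"P(user inside query range) ~= {p:.3f}")
```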

Journal ArticleDOI
TL;DR: This architecture separates data from identities by splitting communication from data analysis, and promises significant reductions in infrastructure cost because the system can exploit the sensing, computing, and communications devices already installed in many modern vehicles.
Abstract: Intelligent transportation systems increasingly depend on probe vehicles to monitor traffic: they can automatically report position, travel time, traffic incidents, and road surface problems to a telematics service provider. This kind of traffic-monitoring system could provide good coverage and timely information on many more roadways than is possible with a fixed infrastructure such as cameras and loop detectors. This approach also promises significant reductions in infrastructure cost because the system can exploit the sensing, computing, and communications devices already installed in many modern vehicles. This architecture separates data from identities by splitting communication from data analysis. Data suppression techniques can help prevent data mining algorithms from reconstructing private information from anonymous database samples.

Proceedings ArticleDOI
01 Jan 2006
TL;DR: The privacy and security implications of next-generation health care technologies are explored; existing methods for handling these issues are described, and issues that need further consideration are discussed.
Abstract: The face of health care is changing as new technologies are being incorporated into the existing infrastructure. Electronic patient records and sensor networks for in-home patient monitoring are at the current forefront of new technologies. Paper-based patient records are being put in electronic format enabling patients to access their records via the Internet. Remote patient monitoring is becoming more feasible as specialized sensors can be placed inside homes. The combination of these technologies will improve the quality of health care by making it more personalized and reducing costs and medical errors. While there are benefits to technologies, associated privacy and security issues need to be analyzed to make these systems socially acceptable. In this paper we explore the privacy and security implications of these next-generation health care technologies. We describe existing methods for handling issues as well as discussing which issues need further consideration.

Journal ArticleDOI
TL;DR: A model is presented in which information privacy predicts psychological empowerment, which in turn predicts discretionary behaviors on the job, including creative performance and organizational citizenship behavior (OCB); results confirm that information privacy entails judgments of information gathering control, information handling control, and legitimacy.
Abstract: This article examines the relationship of employee perceptions of information privacy in their work organizations and important psychological and behavioral outcomes. A model is presented in which information privacy predicts psychological empowerment, which in turn predicts discretionary behaviors on the job, including creative performance and organizational citizenship behavior (OCB). Results from 2 studies (Study 1: single organization, N=310; Study 2: multiple organizations, N=303) confirm that information privacy entails judgments of information gathering control, information handling control, and legitimacy. Moreover, a model linking information privacy to empowerment and empowerment to creative performance and OCBs was supported. Findings are discussed in light of organizational attempts to control employees through the gathering and handling of their personal information.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the impact of privacy disclosures on online shoppers' trust in an e-tailer through a two-phase study and find that consumers are more favorably inclined to a shopping site with a clearly stated privacy message than to one without it, especially when privacy risk is high.

Journal ArticleDOI
TL;DR: A P3P user agent called Privacy Bird is developed, which can fetch P3P privacy policies automatically, compare them with a user's privacy preferences, and alert and advise the user.
Abstract: Most people do not often read privacy policies because they tend to be long and difficult to understand. The Platform for Privacy Preferences (P3P) addresses this problem by providing a standard machine-readable format for website privacy policies. P3P user agents can fetch P3P privacy policies automatically, compare them with a user's privacy preferences, and alert and advise the user. Developing user interfaces for P3P user agents is challenging for several reasons: privacy policies are complex, user privacy preferences are often complex and nuanced, users tend to have little experience articulating their privacy preferences, users are generally unfamiliar with much of the terminology used by privacy experts, users often do not understand the privacy-related consequences of their behavior, and users have differing expectations about the type and extent of privacy policy information they would like to see. We developed a P3P user agent called Privacy Bird. Our design was informed by privacy surveys and our previous experience with prototype P3P user agents. We describe our design approach, compare it with the approach used in other P3P user agents, evaluate our design, and make recommendations to designers of other privacy agents.
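
As a rough illustration of the policy-versus-preference comparison such an agent performs, the toy sketch below intersects a site's declared (data category, purpose) pairs with the pairs a user objects to and warns on any overlap. It deliberately ignores the actual P3P vocabulary and the Privacy Bird implementation; all names here are made up.

```python
# Hypothetical, simplified stand-in for a P3P-style comparison: a site's
# machine-readable policy is a set of (data category, purpose) pairs, and
# the user's preferences list the pairs they object to.
site_policy = {
    ("contact_info", "marketing"),
    ("contact_info", "current_transaction"),
    ("clickstream", "site_analytics"),
}
user_objections = {
    ("contact_info", "marketing"),
    ("health_info", "any"),
}

mismatches = site_policy & user_objections
if mismatches:
    print("Warning: policy conflicts with your preferences:")
    for category, purpose in sorted(mismatches):
        print(f"  - {category} used for {purpose}")
else:
    print("Policy matches your preferences.")
```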

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This work presents the methodology for extracting and prioritizing rights and obligations from regulations and shows how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations.
Abstract: In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.

Book
11 May 2006
TL;DR: In this book, the authors describe the privacy paradigm, privacy protection as social policy, and privacy protection as a matter of promoting trust and managing risk, and they evaluate international privacy protection - a race to the top, the bottom, or somewhere else.
Abstract: Privacy Goals: The privacy paradigm; Privacy protection as social policy; Privacy protection - promoting trust and managing risk. Policy Instruments: Transnational policy instruments; Legal instruments and regulatory agencies; Self-regulatory instruments; Technological instruments. Policy Impacts: Privacy regimes; The evaluation of impact; International privacy protection - a race to the top, the bottom, or somewhere else?

Journal ArticleDOI
TL;DR: The authors argue for a move away from narrow views of privacy and security and toward a holistic view of situated and collective information practice.
Abstract: As everyday life is increasingly conducted online, and as the electronic world continues to move out into the physical, the privacy of information and action and the security of information systems are increasingly a focus of concern both for the research community and the public at large. Accordingly, privacy and security are active topics of investigation from a wide range of perspectives--institutional, legislative, technical, interactional, and more. In this article, we wish to contribute toward a broad understanding of privacy and security not simply as technical phenomena but as embedded in social and cultural contexts. Privacy and security are difficult concepts to manage from a technical perspective precisely because they are caught up in larger collective rhetorics and practices of risk, danger, secrecy, trust, morality, identity, and more. Reductive attempts to deal with these issues separately produce incoherent or brittle results. We argue for a move away from narrow views of privacy and security and toward a holistic view of situated and collective information practice.

Journal ArticleDOI
TL;DR: By examining RFID's history, researchers can learn from past mistakes, rediscover successful solutions, and inspire future research to solve security and privacy threats.
Abstract: As RFID technology progresses, security and privacy threats also evolve. By examining RFID's history, we can learn from past mistakes, rediscover successful solutions, and inspire future research.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: A suite of anonymization algorithms is provided that produces an anonymous view based on a target class of workloads, consisting of one or more data mining tasks as well as selection predicates.
Abstract: Protecting data privacy is an important problem in microdata distribution. Anonymization algorithms typically aim to protect individual privacy, with minimal impact on the quality of the resulting data. While the bulk of previous work has measured quality through one-size-fits-all measures, we argue that quality is best judged with respect to the workload for which the data will ultimately be used. This paper provides a suite of anonymization algorithms that produce an anonymous view based on a target class of workloads, consisting of one or more data mining tasks, as well as selection predicates. An extensive experimental evaluation indicates that this approach is often more effective than previous anonymization techniques.

Proceedings ArticleDOI
25 Apr 2006
TL;DR: Wang et al. propose GROW (Greedy Random Walk), a two-way random walk, i.e., from both source and sink, to reduce the chance that an eavesdropper can collect the location information.
Abstract: While a wireless sensor network is deployed to monitor certain events and pinpoint their locations, the location information is intended only for legitimate users. However, an eavesdropper can monitor the traffic and deduce the approximate location of monitored objects in certain situations. We first describe a successful attack against the flooding-based phantom routing, proposed in the seminal work by Celal Ozturk, Yanyong Zhang, and Wade Trappe. Then, we propose GROW (Greedy Random Walk), a two-way random walk, i.e., from both source and sink, to reduce the chance an eavesdropper can collect the location information. We improve the delivery rate by using local broadcasting and greedy forwarding. Privacy protection is verified under a backtracking attack model. The message delivery time is a little longer than that of the broadcasting-based approach, but it is still acceptable if we consider the enhanced privacy preserving capability of this new approach. At the same time, the energy consumption is less than half the energy consumption of flooding-based phantom routing, which is preferred in a low duty cycle, environmental monitoring sensor network.
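
A toy sketch of why walking from both ends helps: two independent random walks on a grid, one started at the sink and one at the source, frequently intersect, and the message can be handed over wherever the traces meet. This ignores local broadcast, greedy forwarding, and every radio detail of the actual protocol; grid positions, step counts, and the seed are arbitrary.

```python
import random

def random_walk(start, steps, rng):
    """Unbiased random walk on the grid; returns the set of visited nodes."""
    x, y = start
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        visited.add((x, y))
    return visited

rng = random.Random(7)
trials, hits = 200, 0
for _ in range(trials):
    sink_trace = random_walk((0, 0), steps=500, rng=rng)      # walk started at the sink
    source_trace = random_walk((20, 20), steps=500, rng=rng)  # walk started at the source
    if sink_trace & source_trace:   # the source walk can be diverted onto the sink trace
        hits += 1
print(f"walks met in {hits}/{trials} trials")
```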

Journal ArticleDOI
01 Nov 2006
TL;DR: A two-party framework, along with an application, is presented that generates k-anonymous data from two vertically partitioned sources without disclosing data from one site to the other; the framework satisfies the secure definition commonly used in the literature on Secure Multiparty Computation.
Abstract: k-anonymity provides a measure of privacy protection by preventing re-identification of data to fewer than a group of k data items. While algorithms exist for producing k-anonymous data, the model has been that of a single source wanting to publish data. Due to privacy issues, it is common that data from different sites cannot be shared directly. Therefore, this paper presents a two-party framework along with an application that generates k-anonymous data from two vertically partitioned sources without disclosing data from one site to the other. The framework is privacy preserving in the sense that it satisfies the secure definition commonly defined in the literature of Secure Multiparty Computation.

Journal Article
TL;DR: This paper analyzes various inference channels that may exist in multiple anonymized datasets and discusses how to avoid such inferences, and presents an approach to securely anonymizing a continuously growing dataset in an efficient manner while assuring high data quality.
Abstract: Data anonymization techniques based on the k-anonymity model have been the focus of intense research in the last few years. Although the k-anonymity model and the related techniques provide valuable solutions to data privacy, current solutions are limited only to static data release (i.e., the entire dataset is assumed to be available at the time of release). While this may be acceptable in some applications, today we see databases continuously growing every day and even every hour. In such dynamic environments, the current techniques may suffer from poor data quality and/or vulnerability to inference. In this paper, we analyze various inference channels that may exist in multiple anonymized datasets and discuss how to avoid such inferences. We then present an approach to securely anonymizing a continuously growing dataset in an efficient manner while assuring high data quality.

Proceedings ArticleDOI
17 Jun 2006
TL;DR: It is shown in extensive experiments that pixelation and blurring offer very poor privacy protection while significantly distorting the data, and a novel framework for de-identifying facial images is introduced that combines a model-based face image parameterization with a formal privacy protection model.
Abstract: Advances in camera and computing equipment hardware in recent years have made it increasingly simple to capture and store extensive amounts of video data. This, among other things, creates ample opportunities for the sharing of video sequences. In order to protect the privacy of subjects visible in the scene, automated methods to de-identify the images, particularly the face region, are necessary. So far the majority of privacy protection schemes currently used in practice rely on ad-hoc methods such as pixelation or blurring of the face. In this paper we show in extensive experiments that pixelation and blurring offers very poor privacy protection while significantly distorting the data. We then introduce a novel framework for de-identifying facial images. Our algorithm combines a model-based face image parameterization with a formal privacy protection model. In experiments on two large-scale data sets we demonstrate privacy protection and preservation of data utility.
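
For reference, the sketch below implements the two naive baselines the abstract criticizes, pixelation and box blurring, on a stand-in image array; the block size and kernel width are arbitrary choices. It says nothing about the paper's model-based de-identification algorithm.

```python
import numpy as np

def pixelate(img, block=8):
    """Replace each block x block tile with its mean intensity."""
    h, w = img.shape
    out = img.copy().astype(float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = img[i:i + block, j:j + block].mean()
    return out

def box_blur(img, k=5):
    """Naive box blur: mean over a k x k neighbourhood (edges clipped)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    r = k // 2
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out

face = np.random.default_rng(0).integers(0, 256, size=(64, 64))  # stand-in face region
print(pixelate(face).std(), box_blur(face).std(), face.std())    # detail (variance) is lost
```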

Book ChapterDOI
10 Sep 2006
TL;DR: In this paper, the authors analyze various inference channels that may exist in multiple anonymized datasets and discuss how to avoid such inferences, and then present an approach to securely anonymizing a continuously growing dataset in an efficient manner while assuring high data quality.
Abstract: Data anonymization techniques based on the k-anonymity model have been the focus of intense research in the last few years. Although the k-anonymity model and the related techniques provide valuable solutions to data privacy, current solutions are limited only to static data release (i.e., the entire dataset is assumed to be available at the time of release). While this may be acceptable in some applications, today we see databases continuously growing every day and even every hour. In such dynamic environments, the current techniques may suffer from poor data quality and/or vulnerability to inference. In this paper, we analyze various inference channels that may exist in multiple anonymized datasets and discuss how to avoid such inferences. We then present an approach to securely anonymizing a continuously growing dataset in an efficient manner while assuring high data quality.