
Showing papers on "Information privacy published in 2009"


MonographDOI
TL;DR: Arguing that privacy concerns should not be limited solely to concern about control over personal information, Helen Nissenbaum counters that information ought to be distributed and protected according to norms governing distinct social contexts, be it the workplace, health care, schools, or among family and friends.
Abstract: Privacy is one of the most urgent issues associated with information technology and digital media. This book claims that what people really care about when they complain and protest that privacy has been violated is not the act of sharing information itself (most people understand that this is crucial to social life) but the inappropriate, improper sharing of information. Arguing that privacy concerns should not be limited solely to concern about control over personal information, Helen Nissenbaum counters that information ought to be distributed and protected according to norms governing distinct social contexts, whether it be workplace, health care, schools, or among family and friends. She warns that basic distinctions between public and private, informing many current privacy policies, in fact obscure more than they clarify. In truth, contemporary information systems should alarm us only when they function without regard for social norms and values, and thereby weaken the fabric of social life.

1,887 citations


Proceedings ArticleDOI
17 May 2009
TL;DR: A framework for analyzing privacy and anonymity in social networks is presented and a new re-identification algorithm targeting anonymized social-network graphs is developed, showing that a third of the users who can be verified to have accounts on both Twitter and Flickr can be re-identified in the anonymous Twitter graph.
Abstract: Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
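As a rough, hedged illustration of the topology-only idea described above (not the authors' actual algorithm), the sketch below greedily extends a seed mapping of users known to hold accounts in both networks by scoring anonymized candidates on how many already-mapped neighbours they share; the scoring rule, the threshold, and the use of networkx graphs are illustrative assumptions.

```python
# Toy seed-and-propagate re-identification sketch (illustrative only).
# g_aux:  auxiliary graph whose nodes the attacker knows (e.g. Flickr)
# g_anon: anonymized graph to be de-anonymized (e.g. Twitter, ids only)
import networkx as nx

def propagate_mapping(g_aux: nx.Graph, g_anon: nx.Graph, seed_map: dict, min_score: int = 2) -> dict:
    """Greedily extend seed_map (aux node -> anon node) using topology alone."""
    mapping = dict(seed_map)
    changed = True
    while changed:
        changed = False
        for u in g_aux.nodes():
            if u in mapping:
                continue
            # Score each anonymized candidate by how many of u's already-mapped
            # neighbours it is connected to in the anonymized graph.
            scores = {}
            for nbr in g_aux.neighbors(u):
                if nbr in mapping:
                    for cand in g_anon.neighbors(mapping[nbr]):
                        if cand not in mapping.values():
                            scores[cand] = scores.get(cand, 0) + 1
            if scores:
                best, best_score = max(scores.items(), key=lambda kv: kv[1])
                if best_score >= min_score:
                    mapping[u] = best
                    changed = True
    return mapping
```

The point mirrored here is that no profile attributes are consulted at all: candidates are ranked purely by overlap with neighbours that have already been mapped.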

1,360 citations


Journal ArticleDOI
TL;DR: Investigating Facebook users' awareness of privacy issues and perceived benefits and risks of utilizing Facebook suggests that this lax attitude may be based on a combination of high gratification, usage patterns, and a psychological mechanism similar to third-person effect.
Abstract: This article investigates Facebook users' awareness of privacy issues and perceived benefits and risks of utilizing Facebook. Research found that Facebook is deeply integrated in users' daily lives through specific routines and rituals. Users claimed to understand privacy issues, yet reported uploading large amounts of personal information. Risks to privacy invasion were ascribed more to others than to the self. However, users reporting privacy invasion were more likely to change privacy settings than those merely hearing about others' privacy invasions. Results suggest that this lax attitude may be based on a combination of high gratification, usage patterns, and a psychological mechanism similar to third-person effect. Safer use of social network services would thus require changes in user attitude.

1,154 citations


Posted Content
TL;DR: It is necessary to respond to the surprising failure of anonymization, and this Article provides the tools to do so.
Abstract: Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often 'reidentify' or 'deanonymize' individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so.

927 citations


Journal ArticleDOI
TL;DR: It is found that an individual's CFIP interacts with argument framing and issue involvement to affect attitudes toward the use of EHRs, and results suggest that attitude toward EHR use and CFIP directly influence opt-in behavioral intentions.
Abstract: Within the emerging context of the digitization of health care, electronic health records (EHRs) constitute a significant technological advance in the way medical information is stored, communicated, and processed by the multiple parties involved in health care delivery. However, in spite of the anticipated value potential of this technology, there is widespread concern that consumer privacy issues may impede its diffusion. In this study, we pose the question: Can individuals be persuaded to change their attitudes and opt-in behavioral intentions toward EHRs, and allow their medical information to be digitized even in the presence of significant privacy concerns? To investigate this question, we integrate an individual's concern for information privacy (CFIP) with the elaboration likelihood model (ELM) to examine attitude change and likelihood of opting-in to an EHR system. We theorize that issue involvement and argument framing interact to influence attitude change, and that concern for information privacy further moderates the effects of these variables. We also propose that likelihood of adoption is driven by concern for information privacy and attitude. We test our predictions using an experiment with 366 subjects where we manipulate the framing of the arguments supporting EHRs. We find that an individual's CFIP interacts with argument framing and issue involvement to affect attitudes toward the use of EHRs. In addition, results suggest that attitude toward EHR use and CFIP directly influence opt-in behavioral intentions. An important finding for both theory and practice is that even when people have high concerns for privacy, their attitudes can be positively altered with appropriate message framing. These results as well as other theoretical and practical implications are discussed.

924 citations


Journal ArticleDOI
John Krumm
01 Aug 2009
TL;DR: This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information, which includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data.
Abstract: This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of people's attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.
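As a minimal, hedged sketch of the two families of geometric mechanisms the survey covers, cloaking (reducing precision) and obfuscation (perturbation), the snippet below reports either the centre of a grid cell or a randomly offset point; the grid size and noise bound are illustrative assumptions, not values from the survey.

```python
# Two toy geometric location-privacy mechanisms (illustrative parameters).
import math
import random

def cloak_to_grid(lat: float, lon: float, cell_deg: float = 0.01):
    """Report only the centre of the grid cell containing the true location."""
    def snap(x):
        return (math.floor(x / cell_deg) + 0.5) * cell_deg
    return snap(lat), snap(lon)

def perturb(lat: float, lon: float, max_offset_deg: float = 0.005):
    """Report the location shifted by a bounded random offset."""
    return (lat + random.uniform(-max_offset_deg, max_offset_deg),
            lon + random.uniform(-max_offset_deg, max_offset_deg))

print(cloak_to_grid(47.6205, -122.3493))  # -> approximately (47.625, -122.345)
```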

732 citations


Journal ArticleDOI
TL;DR: This study extends the privacy calculus model to explore the role of information delivery mechanisms (pull and push) in the efficacy of three privacy intervention approaches (compensation, industry self-regulation, and government regulation) in influencing individual privacy decision making and suggests that providing financial compensation for push-based LBS is more important than it is for pull- based LBS.
Abstract: Location-based services (LBS) use positioning technologies to provide individual users with reachability and accessibility that would otherwise not be available in the conventional commercial realm. While LBS confer greater connectivity and personalization on consumers, they also threaten users' information privacy through granular tracking of their preferences, behaviors, and identity. To address privacy concerns in the LBS context, this study extends the privacy calculus model to explore the role of information delivery mechanisms (pull and push) in the efficacy of three privacy intervention approaches (compensation, industry self-regulation, and government regulation) in influencing individual privacy decision making. The research model was tested using data gathered from 528 respondents through a quasi-experimental survey method. Structural equation modeling using partial least squares validated the instrument and the proposed model. Results suggest that the effects of the three privacy intervention approaches on an individual's privacy calculus vary based on the type of information delivery mechanism (pull and push). Results suggest that providing financial compensation for push-based LBS is more important than it is for pull-based LBS. Moreover, this study shows that privacy advocates and government legislators should not treat all types of LBS as undifferentiated but could instead specifically target certain types of services.

680 citations


Proceedings ArticleDOI
Siani Pearson
23 May 2009
TL;DR: The privacy challenges that software engineers face when targeting the cloud as their production environment to offer services are assessed, and key design principles to address these are suggested.
Abstract: Privacy is an important issue for cloud computing, both in terms of legal compliance and user trust, and needs to be considered at every phase of design. In this paper the privacy challenges that software engineers face when targeting the cloud as their production environment to offer services are assessed, and key design principles to address these are suggested.

600 citations


Book
04 Sep 2009
TL;DR: This book, written by recognized authorities in the tech security world, addresses issues that affect any organization preparing to use cloud computing as an option and provides the detailed information on cloud computing security that has been lacking, until now.
Abstract: This book, written by recognized authorities in the tech security world, addresses issues that affect any organization preparing to use cloud computing as an option. Cloud computing has emerged as a popular way for corporations to save money that would otherwise go into their IT infrastructure. However, along with the promise of cloud computing there has also been considerable skepticism about the type and extent of security and privacy that these services provide. Cloud Security and Privacy walks you through the steps you need to take to ensure your web applications are secure and your data is safe, and addresses regulatory issues such as audit and compliance. Ideal for IT personnel who need to deliver and maintain applications in the cloud, business managers looking to cut costs, service providers, and investors, this book provides the detailed information on cloud computing security that has been lacking, until now.

555 citations


Journal ArticleDOI
TL;DR: Safebook, as discussed by the authors, is a decentralized and privacy-preserving online social network application. Based on its two design principles, decentralization and exploiting real-life trust, various mechanisms for privacy and security are integrated into Safebook in order to provide data storage and data management functions that preserve users' privacy, data integrity, and availability.
Abstract: Online social network applications severely suffer from various security and privacy exposures. This article suggests a new approach to tackle these security and privacy problems with a special emphasis on the privacy of users with respect to the application provider in addition to defense against intruders or malicious users. In order to ensure users' privacy in the face of potential privacy violations by the provider, the suggested approach adopts a decentralized architecture relying on cooperation among a number of independent parties that are also the users of the online social network application. The second strong point of the suggested approach is to capitalize on the trust relationships that are part of social networks in real life in order to cope with the problem of building trusted and privacy-preserving mechanisms as part of the online application. The combination of these design principles is Safebook, a decentralized and privacy-preserving online social network application. Based on the two design principles, decentralization and exploiting real-life trust, various mechanisms for privacy and security are integrated into Safebook in order to provide data storage and data management functions that preserve users' privacy, data integrity, and availability. Preliminary evaluations of Safebook show that a realistic compromise between privacy and performance is feasible.

512 citations


Posted Content
TL;DR: An experiment in which self-reported privacy preferences of 171 participants were compared with their actual disclosing behavior during an online shopping episode, suggesting that current approaches to protect online users' privacy may face difficulties to do so effectively.
Abstract: As online environments become increasingly interactive, privacy is a matter of increasing concern. Many surveys have investigated households' privacy attitudes and concerns, revealing a general desire among Internet users to protect their privacy. To complement these questionnaire-based studies, we conducted an experiment in which we compared self-reported privacy preferences of 171 participants with their actual disclosing behavior during an online shopping episode. Our results suggest that current approaches to protect online users' privacy, such as EU data protection regulation or P3P, may face difficulties in doing so effectively. This is due to their underlying assumption that people are not only privacy conscious, but will also act accordingly. In our study, most individuals stated that privacy was important to them, with concern centering on the disclosure of different aspects of personal information. However, regardless of their specific privacy concerns, most participants did not live up to their self-reported privacy preferences. As participants were drawn into the sales dialogue with an anthropomorphic 3-D shopping bot, they answered a majority of questions, even if these were highly personal. Moreover, different privacy statements had no effect on the amount of information disclosed; in fact, the mentioning of EU regulation seemed to cause a feeling of 'false security'. The results suggest that people appreciate highly communicative EC environments and forget privacy concerns once they are 'inside the Web'.

Journal ArticleDOI
01 Aug 2009
TL;DR: This paper proposes k-automorphism to protect against multiple structural attacks and develops an algorithm (called KM) that ensures k-Automorphism and discusses an extension of KM to handle "dynamic" releases of the data.
Abstract: The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security numbers) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism. We also discuss an extension of KM to handle "dynamic" releases of the data. Extensive experiments show that the algorithm performs well in terms of the protection it provides.
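As a toy, hedged illustration of the structural attacks the abstract refers to (and of what a k-automorphic release is meant to prevent), the snippet below counts how many nodes of a naively anonymized graph are re-identifiable from their degree alone; the random graph and the degree-only background knowledge are stand-ins for real data and richer structural queries.

```python
# Degree-uniqueness as a minimal structural attack (illustrative only).
from collections import Counter
import networkx as nx

g = nx.gnm_random_graph(50, 120, seed=1)     # stand-in for a naively anonymized release
degree_counts = Counter(d for _, d in g.degree())

unique = [n for n in g.nodes() if degree_counts[g.degree(n)] == 1]
print(len(unique), "of", g.number_of_nodes(),
      "nodes are re-identifiable from their degree alone")
# A k-automorphic release guarantees every node is structurally
# indistinguishable from at least k-1 others, so this count would be zero.
```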

Journal ArticleDOI
TL;DR: The paper uses a three-layer model of user privacy concerns to relate them to system operations and examine their effects on user behavior, and develops guidelines for building privacy-friendly systems.
Abstract: In this paper we integrate insights from diverse islands of research on electronic privacy to offer a holistic view of privacy engineering and a systematic structure for the discipline's topics. First we discuss privacy requirements grounded in both historic and contemporary perspectives on privacy. We use a three-layer model of user privacy concerns to relate them to system operations (data transfer, storage and processing) and examine their effects on user behavior. In the second part of the paper we develop guidelines for building privacy-friendly systems. We distinguish two approaches: "privacy-by-policy" and "privacy-by-architecture." The privacy-by-policy approach focuses on the implementation of the notice and choice principles of fair information practices (FIPs), while the privacy-by-architecture approach minimizes the collection of identifiable personal data and emphasizes anonymization and client-side data storage and processing. We discuss both approaches with a view to their technical overlaps and boundaries as well as to economic feasibility. The paper aims to introduce engineers and computer scientists to the privacy research domain and provide concrete guidance on how to design privacy-friendly systems.
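A minimal sketch of the privacy-by-architecture principle described above, under assumed field names: raw identifiable data stays on the client, and only a keyed pseudonym plus a locally computed aggregate is transmitted; the HMAC-based pseudonymization is an illustrative choice, not a design from the paper.

```python
# Client-side minimization sketch: the server never sees raw ids or raw readings.
import hashlib
import hmac
import statistics

CLIENT_SECRET = b"local-only-key"            # kept on the device, never transmitted

def pseudonym(user_id: str) -> str:
    return hmac.new(CLIENT_SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def build_report(user_id: str, raw_readings: list) -> dict:
    """Process data on the client; transmit only a pseudonym and an aggregate."""
    return {
        "pid": pseudonym(user_id),
        "weekly_mean": round(statistics.mean(raw_readings), 2),
    }

print(build_report("alice@example.com", [3.2, 4.1, 2.9]))
```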

Journal ArticleDOI
01 Aug 2009
TL;DR: This article reports on the work on PeopleFinder, an application that enables cell phone and laptop users to selectively share their locations with others, and explores technologies that empower users to more effectively and efficiently specify their privacy preferences.
Abstract: A number of mobile applications have emerged that allow users to locate one another. However, people have expressed concerns about the privacy implications associated with this class of software, suggesting that broad adoption may only happen to the extent that these concerns are adequately addressed. In this article, we report on our work on PeopleFinder, an application that enables cell phone and laptop users to selectively share their locations with others (e.g. friends, family, and colleagues). The objective of our work has been to better understand people's attitudes and behaviors towards privacy as they interact with such an application, and to explore technologies that empower users to more effectively and efficiently specify their privacy preferences (or "policies"). These technologies include user interfaces for specifying rules and auditing disclosures, as well as machine learning techniques to refine user policies based on their feedback. We present evaluations of these technologies in the context of one laboratory study and three field studies.
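As a hedged sketch of the kind of disclosure rule PeopleFinder lets users specify (e.g. "colleagues can see my location on weekdays during working hours"), the snippet below evaluates a request against a small rule set and records every decision for auditing; the rule schema and log format are assumptions for illustration, not the system's actual interfaces.

```python
# Rule-based location disclosure with an audit trail (illustrative schema).
from datetime import datetime

RULES = [
    {"group": "family",     "days": set(range(7)),    "hours": range(0, 24)},
    {"group": "colleagues", "days": {0, 1, 2, 3, 4},   "hours": range(9, 17)},
]
AUDIT_LOG = []    # every decision is recorded so the user can review disclosures

def may_disclose(requester_group: str, when: datetime) -> bool:
    allowed = any(r["group"] == requester_group
                  and when.weekday() in r["days"]
                  and when.hour in r["hours"]
                  for r in RULES)
    AUDIT_LOG.append((when.isoformat(), requester_group, allowed))
    return allowed

print(may_disclose("colleagues", datetime(2009, 8, 3, 10, 30)))  # Monday morning -> True
print(may_disclose("colleagues", datetime(2009, 8, 1, 10, 30)))  # Saturday -> False
```

Machine learning enters the real system as a way to refine such rules from the user's feedback on past disclosures, which is not reproduced here.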

Proceedings ArticleDOI
15 Jul 2009
TL;DR: The study results demonstrate that compared to existing natural language privacy policies, the proposed privacy label allows participants to find information more quickly and accurately, and provides a more enjoyable information seeking experience.
Abstract: We used an iterative design process to develop a privacy label that presents to consumers the ways organizations collect, use, and share personal information. Many surveys have shown that consumers are concerned about online privacy, yet current mechanisms to present website privacy policies have not been successful. This research addresses the present gap in the communication and understanding of privacy policies, by creating an information design that improves the visual presentation and comprehensibility of privacy policies. Drawing from nutrition, warning, and energy labeling, as well as from the effort towards creating a standardized banking privacy notification, we present our process for constructing and refining a label tuned to privacy. This paper describes our design methodology; findings from two focus groups; and accuracy, timing, and likeability results from a laboratory study with 24 participants. Our study results demonstrate that compared to existing natural language privacy policies, the proposed privacy label allows participants to find information more quickly and accurately, and provides a more enjoyable information seeking experience.

Proceedings Article
10 Aug 2009
TL;DR: Vanish is presented, a system that meets this challenge through a novel integration of cryptographic techniques with global-scale, P2P, distributed hash tables (DHTs) and meets the privacy-preserving goals described above.
Abstract: Today's technical and legal landscape presents formidable challenges to personal data privacy. First, our increasing reliance on Web services causes personal data to be cached, copied, and archived by third parties, often without our knowledge or control. Second, the disclosure of private data has become commonplace due to carelessness, theft, or legal actions. Our research seeks to protect the privacy of past, archived data -- such as copies of emails maintained by an email provider -- against accidental, malicious, and legal attacks. Specifically, we wish to ensure that all copies of certain data become unreadable after a user-specified time, without any specific action on the part of a user, and even if an attacker obtains both a cached copy of that data and the user's cryptographic keys and passwords. This paper presents Vanish, a system that meets this challenge through a novel integration of cryptographic techniques with global-scale, P2P, distributed hash tables (DHTs). We implemented a proof-of-concept Vanish prototype to use both the million-plus-node Vuze BitTorrent DHT and the restricted-membership OpenDHT. We evaluate experimentally and analytically the functionality, security, and performance properties of Vanish, demonstrating that it is practical to use and meets the privacy-preserving goals described above. We also describe two applications that we prototyped on Vanish: a Firefox plugin for Gmail and other Web sites and a Vanishing File application.
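A dependency-free, hedged sketch of the core idea: encrypt locally, split the key into shares, push the shares into a DHT whose entries naturally expire, and discard the key. The in-memory dictionary and one-time-pad cipher below are stand-ins for the real system's Vuze/OpenDHT storage and cryptography.

```python
# Conceptual Vanish-style self-destructing data (illustrative only).
import os
import secrets
import time

DHT = {}                        # node_id -> (share, stored_at); stands in for a real DHT
SHARE_TTL = 8 * 60 * 60         # seconds; real DHTs shed entries through churn/timeouts

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def vanish_encrypt(plaintext: bytes, n_shares: int = 3):
    key = os.urandom(len(plaintext))
    ciphertext = xor(plaintext, key)
    shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = xor(last, s)
    shares.append(last)                       # XOR of all shares reconstructs the key
    locations = []
    for s in shares:
        node_id = secrets.token_hex(8)
        DHT[node_id] = (s, time.time())
        locations.append(node_id)
    return ciphertext, locations              # the key itself is never stored anywhere

def vanish_decrypt(ciphertext: bytes, locations) -> bytes:
    key = bytes(len(ciphertext))
    for node_id in locations:
        share, stored_at = DHT[node_id]       # fails once shares have disappeared
        if time.time() - stored_at > SHARE_TTL:
            raise KeyError("share expired")
        key = xor(key, share)
    return xor(ciphertext, key)
```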

Journal ArticleDOI
Seounmi Youn
TL;DR: This study identified determinants of young adolescents' level of privacy concerns, which, in turn, affects their coping behaviors to protect privacy; implications of privacy education for protecting online privacy among young adolescents are discussed.
Abstract: With Rogers' protection motivation theory as the theoretical framework, this study identified determinants of young adolescents' level of privacy concerns, which, in turn, affects their resultant coping behaviors to protect privacy. Survey data from 144 middle school students revealed that perceived risks of information disclosure increased privacy concerns, whereas perceived benefits offered by information exchange decreased privacy concerns. Subsequently, privacy concerns had an impact on risk-coping behaviors such as seeking out interpersonal advice or additional information (e.g., privacy statement) or refraining from using Web sites that ask for personal information. Counter to our expectation, privacy self-efficacy did not appear to be related to privacy concerns. Implications of privacy education to protect online privacy among young adolescents were discussed.

Proceedings ArticleDOI
25 Jun 2009
TL;DR: The results show that personal network size was positively associated with information revelation; no association was found between concern about unwanted audiences and information revelation; and students' Internet privacy concerns and information revelation were negatively associated.
Abstract: Despite concerns raised about the disclosure of personal information on social network sites, research has demonstrated that users continue to disclose personal information. The present study employs surveys and interviews to examine the factors that influence university students to disclose personal information on Facebook. Moreover, we study the strategies students have developed to protect themselves against privacy threats. The results show that personal network size was positively associated with information revelation; no association was found between concern about unwanted audiences and information revelation; and, finally, students' Internet privacy concerns and information revelation were negatively associated. The privacy protection strategies employed most often were the exclusion of personal information, the use of private email messages, and altering the default privacy settings. Based on our findings, we propose a model of information revelation and draw conclusions for theories of identity expression.

Journal ArticleDOI
TL;DR: This book deals with a very important theme that is perhaps "the issue" of the decade, i.e., improving education, by focusing on models, approaches, powerful technologies and, most importantly, innovation in looking at the problem.
Abstract: (2009). Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. Journal of Information Privacy and Security: Vol. 5, No. 4, pp. 70-71.

Proceedings ArticleDOI
12 Dec 2009
TL;DR: PasS (Privacy as a Service) is the first practical cloud computing privacy solution that utilizes previous research on cryptographic coprocessors to solve the problem of securely processing sensitive data in cloud computing infrastructures.
Abstract: In this paper we present PasS (Privacy as a Service), a set of security protocols for ensuring the privacy and legal compliance of customer data in cloud computing architectures. PasS allows for the secure storage and processing of users’ confidential data by leveraging the tamper-proof capabilities of cryptographic coprocessors. Using tamper-proof facilities provides a secure execution domain in the computing cloud that is physically and logically protected from unauthorized access. The central design goal of PasS is to maximize users’ control in managing the various aspects related to the privacy of sensitive data. This is achieved by implementing user-configurable software protection and data privacy mechanisms. Moreover, PasS provides a privacy feedback process which informs users of the different privacy operations applied on their data and makes them aware of any potential risks that may jeopardize the confidentiality of their sensitive information. To the best of our knowledge, PasS is the first practical cloud computing privacy solution that utilizes previous research on cryptographic coprocessors to solve the problem of securely processing sensitive data in cloud computing infrastructures.

BookDOI
12 Feb 2009
TL;DR: In this paper, the authors examine the existing research literature on self-disclosure and the Internet and propose three critical issues that unite the ways in which we can best understand the links between privacy, selfdisclosure, and new technology: trust and vulnerability, costs and benefits, and control over personal information.
Abstract: This article examines the extant research literature on self-disclosure and the Internet, in particular by focusing on disclosure in computer-mediated communication and web-based forms - both in surveys and in e-commerce applications. It also considers the links between privacy and self-disclosure, and the unique challenges (and opportunities) that the Internet poses for the protection of privacy. Finally, the article proposes three critical issues that unite the ways in which we can best understand the links between privacy, self-disclosure, and new technology: trust and vulnerability, costs and benefits, and control over personal information. Central to the discussion is the notion that self-disclosure is not simply the outcome of a communication encounter: rather, it is both a product and process of interaction, as well as a way of regulating interaction dynamically. By adopting a privacy approach to understanding disclosure online, it becomes possible to consider not only media effects that encourage disclosure, but also the wider context and implications of such communicative behaviours.

Journal ArticleDOI
TL;DR: The results indicate that privacy concerns significantly influence continued adoption as compared to initial adoption of Location-Based Services.
Abstract: Location-Based Services (LBS) use positioning technology to provide individual users the capability of being constantly reachable and accessing network services while ‘on the move’. However, privacy concerns associated with the use of LBS may ultimately prevent consumers from gaining the convenience of ‘anytime anywhere’ personalized services. We examine the adoption of this emerging technology through a privacy lens. Drawing on the privacy literature and theories of technology adoption, we use a survey approach to develop and test a conceptual model to explore the effects of privacy concerns and personal innovativeness on customers’ adoption of LBS. In addition, as a number of IS researchers have shown that customers differ in their decision making for continued adoption as compared to initial decision making, we test the research model separately for potential and experienced customers. The results indicate that privacy concerns significantly influence continued adoption as compared to initial adoption. The implications for theory and practice are discussed.

Book ChapterDOI
22 Nov 2009
TL;DR: A privacy manager for cloud computing is described, which reduces the risk to the cloud computing user of their private data being stolen or misused, and also assists the cloud Computing provider to conform to privacy law.
Abstract: We describe a privacy manager for cloud computing, which reduces the risk to the cloud computing user of their private data being stolen or misused, and also assists the cloud computing provider to conform to privacy law. We describe different possible architectures for privacy management in cloud computing; give an algebraic description of obfuscation, one of the features of the privacy manager; and describe how the privacy manager might be used to protect private metadata of online photos.
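A toy, hedged illustration of the obfuscation feature mentioned above: the client masks numeric values with a secret, key-derived offset before upload, the untrusted cloud service computes a sum over the masked values, and the client corrects the result locally. The keyed-offset scheme is an assumption for illustration, not the chapter's algebraic construction.

```python
# Keyed additive masking so an untrusted service can sum without seeing raw values.
import hashlib

def offset(key: str, field: str) -> int:
    """Deterministic per-field offset derived from the user's secret key."""
    return int.from_bytes(hashlib.sha256(f"{key}:{field}".encode()).digest()[:4], "big")

def obfuscate(values, key, field="spend"):
    o = offset(key, field)
    return [v + o for v in values]            # only these masked values leave the client

def deobfuscate_sum(cloud_sum, n_values, key, field="spend"):
    return cloud_sum - n_values * offset(key, field)

key = "user-secret"
uploaded = obfuscate([120, 80, 45], key)      # cloud only ever sees the masked values
cloud_sum = sum(uploaded)                     # computed by the cloud service
print(deobfuscate_sum(cloud_sum, 3, key))     # -> 245, recovered locally
```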


Journal ArticleDOI
TL;DR: This paper uses two high-profile data breaches experienced by two U.S. companies, ChoicePoint and TJX, to illustrate the arguments for enhancing organizational level privacy programs based on ethical reasoning and makes recommendations for ways organizations can improve their privacy programs by incorporating moral responsibility.
Abstract: Protecting the privacy of personal information continues to pose significant challenges for organizations. Because consumers are vulnerable in their dealings with businesses due to a lack of information about and an inability to control the subsequent use of their personal information, we argue that organizations have a moral responsibility to these individuals to avoid causing harm and to take reasonable precautions toward that end. We further argue that firms can enhance their privacy programs by moving beyond merely complying with laws and other regulations and creating a culture of integrity that combines a concern for the law with an emphasis on managerial responsibility for the firm's organizational privacy behaviors. We use two high-profile data breaches experienced by two U.S. companies, ChoicePoint and TJX, to illustrate our arguments for enhancing organizational level privacy programs based on ethical reasoning. In doing so, this paper contributes to the dearth of prior organizational-level privacy research, which has largely overlooked ethical issues or the personal harms often caused by privacy violations. We conclude with recommendations for ways organizations can improve their privacy programs by incorporating moral responsibility.

Proceedings ArticleDOI
01 Sep 2009
TL;DR: This work presents a system that combines a standard sliding-window detector tuned for a high recall, low-precision operating point with a fast post-processing stage that is able to remove additional false positives by incorporating domain-specific information not available to the sliding- window detector.
Abstract: The last two years have witnessed the introduction and rapid expansion of products based upon large, systematically-gathered, street-level image collections, such as Google Street View, EveryScape, and Mapjack. In the process of gathering images of public spaces, these projects also capture license plates, faces, and other information considered sensitive from a privacy standpoint. In this work, we present a system that addresses the challenge of automatically detecting and blurring faces and license plates for the purpose of privacy protection in Google Street View. Though some in the field would claim face detection is “solved”, we show that state-of-the-art face detectors alone are not sufficient to achieve the recall desired for large-scale privacy protection. In this paper we present a system that combines a standard sliding-window detector tuned for a high recall, low-precision operating point with a fast post-processing stage that is able to remove additional false positives by incorporating domain-specific information not available to the sliding-window detector. Using a completely automatic system, we are able to sufficiently blur more than 89% of faces and 94 – 96% of license plates in evaluation sets sampled from Google Street View imagery.
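A hedged sketch of the blurring step only, using OpenCV's stock Haar cascade as a stand-in for the paper's high-recall sliding-window detector; the paper's detector, its operating point, and the domain-specific post-processing stage that prunes false positives are not reproduced here.

```python
# Detect-and-blur sketch with a generic detector (not the Street View pipeline).
import cv2

def blur_faces(image_path: str, out_path: str) -> int:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # A low minNeighbors favours recall over precision, accepting extra false
    # positives that a post-processing stage would be expected to remove.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in faces:
        roi = img[y:y + h, x:x + w]
        img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(out_path, img)
    return len(faces)

# Example: blur_faces("street_tile.jpg", "street_tile_blurred.jpg")
```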

Journal ArticleDOI
TL;DR: In this paper, the consequences of consumers' privacy concerns in the context of mobile advertising are explored, and the proposed research model connects a series of psychological factors (prior negative experience, information privacy concerns, perceived ubiquity, trust, and perceived risk) and preference for degree of regulatory control.
Abstract: This study explores the consequences of consumers' privacy concerns in the context of mobile advertising. Drawing on social contract theory, the proposed research model connects a series of psychological factors (prior negative experience, information privacy concerns, perceived ubiquity, trust, and perceived risk) and preference for degree of regulatory control. Data from a survey of 510 mobile phone users in Japan show that mobile users with prior negative experiences with information disclosure possess elevated privacy concerns and perceive stronger risk, which leads them to prefer stricter regulatory controls in mobile advertising. Both perceived ubiquity and sensitivity of the information request further the negative impact of privacy concerns on trust. No such effect occurs for the impact of privacy concerns on perceived risk, however. The authors discuss some theoretical and managerial implications.

Proceedings ArticleDOI
22 Jun 2009
TL;DR: This work proposes a VANET key management scheme based on Temporary Anonymous Certified Keys (TACKs), which efficiently prevents eavesdroppers from linking a vehicle's different keys and provides timely revocation of misbehaving participants while maintaining the same or less overhead for vehicle-to-vehicle communication as the current IEEE 1609.2 standard.
Abstract: Vehicular Ad Hoc Networks (VANETs) require a mechanism to help authenticate messages, identify valid vehicles, and remove malevolent vehicles. A Public Key Infrastructure (PKI) can provide this functionality using certificates and fixed public keys. However, fixed keys allow an eavesdropper to associate a key with a vehicle and a location, violating drivers' privacy. In this work we propose a VANET key management scheme based on Temporary Anonymous Certified Keys (TACKs). Our scheme efficiently prevents eavesdroppers from linking a vehicle's different keys and provides timely revocation of misbehaving participants while maintaining the same or less overhead for vehicle-to-vehicle communication as the current IEEE 1609.2 standard for VANET security.
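A hedged sketch of the temporary-key idea behind TACKs: a regional authority certifies a short-lived signing key that is not bound to the vehicle's long-term identity, the vehicle signs its beacons with it, and a fresh key is certified for the next region or time window so that keys cannot be linked across windows. The certificate format, lifetime, and use of Ed25519 via the cryptography package are simplified assumptions; this is neither the TACK protocol nor IEEE 1609.2.

```python
# Short-lived, anonymously certified signing keys (conceptual sketch only).
import time
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

authority_key = ed25519.Ed25519PrivateKey.generate()     # regional authority

def certify(temp_pub: bytes, valid_for: int = 600) -> dict:
    """The authority binds the temporary key to an expiry, not to a vehicle identity."""
    expiry = int(time.time()) + valid_for
    return {"pub": temp_pub, "expiry": expiry,
            "sig": authority_key.sign(temp_pub + expiry.to_bytes(8, "big"))}

# Vehicle side: a fresh key per region/time window, certified, used to sign beacons.
temp_key = ed25519.Ed25519PrivateKey.generate()
cert = certify(temp_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw))
signature = temp_key.sign(b"position=47.62,-122.35;speed=13.4")
# Receivers verify the beacon against cert["pub"] and the authority's signature,
# then discard the association once the certificate expires.
```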

Proceedings ArticleDOI
20 Jul 2009
TL;DR: This work examines the difficulty of collecting profile and graph information from the popular social networking website Facebook and describes several novel ways in which data can be extracted by third parties, and demonstrates the efficiency of these methods on crawled data.
Abstract: Preventing adversaries from compiling significant amounts of user data is a major challenge for social network operators. We examine the difficulty of collecting profile and graph information from the popular social networking website Facebook and report two major findings. First, we describe several novel ways in which data can be extracted by third parties. Second, we demonstrate the efficiency of these methods on crawled data. Our findings highlight how the current protection of personal data is inconsistent with users' expectations of privacy.

Proceedings ArticleDOI
20 Apr 2009
TL;DR: This paper proposes a solution that offers automated ways to share images based on an extended notion of content ownership, using a mechanism that promotes truthfulness and rewards users who promote co-ownership, and shows that supporting these types of solutions is not only feasible but can be implemented through a minimal increase in overhead to end-users.
Abstract: Social Networking is one of the major technological phenomena of the Web 2.0, with hundreds of millions of people participating. Social networks enable a form of self expression for users, and help them to socialize and share content with other users. In spite of the fact that content sharing represents one of the prominent features of existing Social Network sites, Social Networks do not yet support any mechanism for collaborative management of privacy settings for shared content. In this paper, we model the problem of collaborative enforcement of privacy policies on shared data by using game theory. In particular, we propose a solution that offers automated ways to share images based on an extended notion of content ownership. Building upon the Clarke-Tax mechanism, we describe a simple mechanism that promotes truthfulness, and that rewards users who promote co-ownership. We integrate our design with inference techniques that free the users from the burden of manually selecting privacy preferences for each picture. To the best of our knowledge this is the first time such a protection mechanism for Social Networking has been proposed. In the paper, we also show a proof-of-concept application, which we implemented in the context of Facebook, one of today's most popular social networks. We show that supporting these types of solutions is not only feasible, but can be implemented through a minimal increase in overhead to end-users.
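A hedged sketch of the Clarke-Tax (pivotal) mechanism the paper builds on for co-owned content: each co-owner reports a value for every candidate sharing setting, the setting with the highest total reported value wins, and a user is taxed only if their report changed the outcome for the others. The options and numbers are invented, and the paper's integration with inference over picture content is not reproduced.

```python
# Clarke-Tax over candidate privacy settings for a co-owned photo (toy values).
def clarke_tax(reports: dict):
    options = next(iter(reports.values())).keys()
    total = {o: sum(r[o] for r in reports.values()) for o in options}
    chosen = max(total, key=total.get)

    taxes = {}
    for user, r in reports.items():
        others = {o: total[o] - r[o] for o in options}       # welfare of everyone else
        taxes[user] = max(others.values()) - others[chosen]  # > 0 only if user is pivotal
    return chosen, taxes

reports = {
    "alice": {"everyone": 5.0, "friends-only": 1.0},
    "bob":   {"everyone": 0.0, "friends-only": 3.0},
    "carol": {"everyone": 0.0, "friends-only": 2.0},
}
print(clarke_tax(reports))
# -> ('friends-only', {'alice': 0.0, 'bob': 2.0, 'carol': 1.0})
```

In the example Bob and Carol are pivotal (without either of them "everyone" would have won), so they pay small taxes; this is the property that makes truthful reporting of one's privacy preferences the sensible strategy.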