Showing papers on "Information privacy published in 2011"


Journal ArticleDOI
TL;DR: A survey of the different security risks that pose a threat to the cloud is presented, arguing that a new model targeted at improving features of an existing model must not risk or threaten other important features of the current model.

2,511 citations


Journal ArticleDOI
TL;DR: An interdisciplinary review of privacy-related research is provided in order to enable a more cohesive treatment and recommends that researchers be alert to an overarching macro model that is referred to as APCO (Antecedents → Privacy Concerns → Outcomes).
Abstract: To date, many important threads of information privacy research have developed, but these threads have not been woven together into a cohesive fabric. This paper provides an interdisciplinary review of privacy-related research in order to enable a more cohesive treatment. With a sample of 320 privacy articles and 128 books and book sections, we classify previous literature in two ways: (1) using an ethics-based nomenclature of normative, purely descriptive, and empirically descriptive, and (2) based on their level of analysis: individual, group, organizational, and societal. Based upon our analyses via these two classification approaches, we identify three major areas in which previous research contributions reside: the conceptualization of information privacy, the relationship between information privacy and other constructs, and the contextual nature of these relationships. As we consider these major areas, we draw three overarching conclusions. First, there are many theoretical developments in the body of normative and purely descriptive studies that have not been addressed in empirical research on privacy. Rigorous studies that either trace processes associated with, or test implied assertions from, these value-laden arguments could add great value. Second, some of the levels of analysis have received less attention in certain contexts than have others in the research to date. Future empirical studies, both positivist and interpretive, could profitably be targeted to these under-researched levels of analysis. Third, positivist empirical studies will add the greatest value if they focus on antecedents to privacy concerns and on actual outcomes. In that light, we recommend that researchers be alert to an overarching macro model that we term APCO (Antecedents → Privacy Concerns → Outcomes).

1,595 citations


Journal ArticleDOI
TL;DR: A critical analysis of the literature reveals that information privacy is a multilevel concept, but rarely studied as such, and calls for research on information privacy to use a broader diversity of sampling populations and to publish more design and action research in journal articles that can result in IT artifacts for protection or control of information privacy.
Abstract: Information privacy refers to the desire of individuals to control or have some influence over data about themselves. Advances in information technology have raised concerns about information privacy and its impacts, and have motivated Information Systems researchers to explore information privacy issues, including technical solutions to address these concerns. In this paper, we inform researchers about the current state of information privacy research in IS through a critical analysis of the IS literature that considers information privacy as a key construct. The review of the literature reveals that information privacy is a multilevel concept, but rarely studied as such. We also find that information privacy research has been heavily reliant on student-based and USA-centric samples, which results in findings of limited generalizability. Information privacy research focuses on explaining and predicting theoretical contributions, with few studies in journal articles focusing on design and action contributions. We recommend that future research should consider different levels of analysis as well as multilevel effects of information privacy. We illustrate this with a multilevel framework for information privacy concerns. We call for research on information privacy to use a broader diversity of sampling populations, and for more design and action information privacy research to be published in journal articles that can result in IT artifacts for protection or control of information privacy.

1,068 citations


Journal ArticleDOI
TL;DR: In this article, the authors designed an experiment in which a shopping search engine interface clearly and compactly displays privacy policy information, and they found that when privacy information is made more salient and accessible, some consumers are willing to pay a premium to purchase from privacy protective websites.
Abstract: Although online retailers detail their privacy practices in online privacy policies, this information often remains invisible to consumers, who seldom make the effort to read and understand those policies. This paper reports on research undertaken to determine whether a more prominent display of privacy information will cause consumers to incorporate privacy considerations into their online purchasing decisions. We designed an experiment in which a shopping search engine interface clearly and compactly displays privacy policy information. When such information is made available, consumers tend to purchase from online retailers who better protect their privacy. In fact, our study indicates that when privacy information is made more salient and accessible, some consumers are willing to pay a premium to purchase from privacy protective websites. This result suggests that businesses may be able to leverage privacy protection as a selling point.

823 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: This paper provides a formal framework for the analysis of LPPMs that captures the prior information that might be available to the attacker and the various attacks he can perform, and clarifies the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness.
Abstract: It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs; it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.

742 citations
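
To make the proposed metric concrete, the adversary's expected estimation error can be written as follows; the notation here is assumed for illustration rather than copied from the paper:

```latex
% Location privacy of user u, measured as the adversary's expected
% estimation error: o is the observed (LPPM-distorted) trace, x_u the
% user's true location, \hat{x} the adversary's estimate, and d(.,.) a
% distortion (distance) function.
\[
  \mathrm{LP}(u) \;=\; \sum_{\hat{x}} \Pr\big[\hat{x} \mid o\big] \, d\big(\hat{x}, x_u\big)
\]
% The larger the expected error, the farther the adversary's best guess
% is from the truth, i.e., the more location privacy the user retains.
```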


Proceedings ArticleDOI
12 Jun 2011
TL;DR: This paper argues that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process, which is different from limiting the inference about the presence of a tuple.
Abstract: Differential privacy is a powerful tool for providing privacy-preserving noisy query answers over statistical databases. It guarantees that the distribution of noisy query answers changes very little with the addition or deletion of any tuple. It is frequently accompanied by popularized claims that it provides privacy without any assumptions about the data and that it protects against attackers who know all but one record. In this paper we critically analyze the privacy protections offered by differential privacy. First, we use a no-free-lunch theorem, which defines non-privacy as a game, to argue that it is not possible to provide privacy and utility without making assumptions about how the data are generated. Then we explain where assumptions are needed. We argue that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process. This is different from limiting the inference about the presence of a tuple (for example, Bob's participation in a social network may cause edges to form between pairs of his friends, so that it affects more than just the tuple labeled as "Bob"). The definition of evidence of participation, in turn, depends on how the data are generated; this is how assumptions enter the picture. We explain these ideas using examples from social network research as well as tabular data for which deterministic statistics have been previously released. In both cases the notion of participation varies, the use of differential privacy can lead to privacy breaches, and differential privacy does not always adequately limit inference about participation.

629 citations
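
For reference, the guarantee under critique can be stated compactly; this is the standard definition of ε-differential privacy, with K denoting the randomized query-answering mechanism:

```latex
% epsilon-differential privacy: for all pairs of databases D, D' that
% differ in a single tuple, and for every set S of possible outputs,
\[
  \Pr[\mathcal{K}(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[\mathcal{K}(D') \in S] .
\]
% The paper's point is that this guarantee alone does not bound what an
% attacker learns about an individual's participation when records are
% correlated (e.g., the edges formed among Bob's friends).
```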


Proceedings ArticleDOI
02 Nov 2011
TL;DR: A survey is deployed to 200 Facebook users recruited via Amazon Mechanical Turk, finding that 36% of content remains shared with the default privacy settings, and overall, privacy settings match users' expectations only 37% of the time, and when incorrect, almost always expose content to more users than expected.
Abstract: The sharing of personal data has emerged as a popular activity over online social networking sites like Facebook. As a result, the issue of online social network privacy has received significant attention in both the research literature and the mainstream media. Our overarching goal is to improve defaults and provide better tools for managing privacy, but we are limited by the fact that the full extent of the privacy problem remains unknown; there is little quantification of the incidence of incorrect privacy settings or the difficulty users face when managing their privacy. In this paper, we focus on measuring the disparity between the desired and actual privacy settings, quantifying the magnitude of the problem of managing privacy. We deploy a survey, implemented as a Facebook application, to 200 Facebook users recruited via Amazon Mechanical Turk. We find that 36% of content remains shared with the default privacy settings. We also find that, overall, privacy settings match users' expectations only 37% of the time, and when incorrect, almost always expose content to more users than expected. Finally, we explore how our results have potential to assist users in selecting appropriate privacy settings by examining the user-created friend lists. We find that these have significant correlation with the social network, suggesting that information from the social network may be helpful in implementing new tools for managing privacy.

545 citations


Journal ArticleDOI
TL;DR: A research model suggests that an individual’s privacy concerns form through a cognitive process involving perceived privacy risk, privacy control, and his or her disposition to value privacy, and individuals’ perceptions of institutional privacy assurances are posited to affect the risk-control assessment from information disclosure.
Abstract: Organizational information practices can result in a variety of privacy problems that can increase consumers’ concerns for information privacy. To explore the link between individuals and organizations regarding privacy, we study how institutional privacy assurances such as privacy policies and industry self-regulation can contribute to reducing individual privacy concerns. Drawing on Communication Privacy Management (CPM) theory, we develop a research model suggesting that an individual’s privacy concerns form through a cognitive process involving perceived privacy risk, privacy control, and his or her disposition to value privacy. Furthermore, individuals’ perceptions of institutional privacy assurances, namely, perceived effectiveness of privacy policies and perceived effectiveness of industry privacy self-regulation, are posited to affect the risk-control assessment from information disclosure, thus being an essential component of privacy concerns. We empirically tested the research model through a survey that was administered to 823 users of four different types of websites: 1) electronic commerce sites, 2) social networking sites, 3) financial sites, and 4) healthcare sites. The results provide support for the majority of the hypothesized relationships. The study reported here is novel to the extent that existing empirical research has not explored the link between individuals’ privacy perceptions and institutional privacy assurances. We discuss implications for theory and practice and provide suggestions for future research.

518 citations


Posted Content
TL;DR: In developing this approach, the paper warns that the current bias in conceiving of the Net as a predominantly commercial enterprise seriously limits the privacy agenda, and proposes an alternative approach, rooted in the theory of contextual integrity.
Abstract: Recent media revelations have demonstrated the extent of third-party tracking and monitoring online, much of it spurred by data aggregation, profiling, and selective targeting. How to protect privacy online is a frequent question in public discourse and has reignited the interest of government actors. In the United States, notice-and-consent remains the fallback approach in online privacy policies, despite its weaknesses. This essay presents an alternative approach, rooted in the theory of contextual integrity. Proposals to improve and fortify notice-and-consent, such as clearer privacy policies and fairer information practices, will not overcome a fundamental flaw in the model, namely, its assumption that individuals can understand all facts relevant to true choice at the moment of pair-wise contracting between individuals and data gatherers. Instead, we must articulate a backdrop of context-specific substantive norms that constrain what information websites can collect, with whom they can share it, and under what conditions it can be shared. In developing this approach, the paper warns that the current bias in conceiving of the Net as a predominantly commercial enterprise seriously limits the privacy agenda.

469 citations


Journal ArticleDOI
TL;DR: Results from a nationally representative sample of over 1,000 adults underscore the complexity of the health information disclosure decision and show that emotion plays a significant role, highlighting the need for re-examining the timing of consent.
Abstract: As healthcare becomes increasingly digitized, the promise of improved care enabled by technological advances inevitably must be traded off against any unintended negative consequences. There is little else that is as consequential to an individual as his or her health. In this context, the privacy of one's personal health information has escalated as a matter of significant concern for the public. We pose the question: under what circumstances will individuals be willing to disclose identified personal health information and permit it to be digitized? Using privacy boundary theory and recent developments in the literature related to risk-as-feelings as the core conceptual foundation, we propose and test a model explicating the role played by type of information requested (general health, mental health, genetic), the purpose for which it is to be used (patient care, research, marketing), and the requesting stakeholder (doctors/hospitals, the government, pharmaceutical companies) in an individual's willingness to disclose personal health information. Furthermore, we explore the impact of emotion linked to one's health condition on willingness to disclose. Results from a nationally representative sample of over 1,000 adults underscore the complexity of the health information disclosure decision and show that emotion plays a significant role, highlighting the need for re-examining the timing of consent. Theoretically, the study extends the dominant cognitive-consequentialist approach to privacy by incorporating the role of emotion. It further refines the privacy calculus to incorporate the moderating influence of contextual factors salient in the healthcare setting. The practical implications of this study include an improved understanding of consumer concerns and potential impacts regarding the electronic storage of health information that can be used to craft policy.

448 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive framework to model privacy threats in software-based systems and provides an extensive catalog of privacy-specific threat tree patterns that can be used to detail the threat analysis outlined above.
Abstract: Ready or not, the digitalization of information has come, and privacy is standing out there, possibly at stake. Although digital privacy is an identified priority in our society, few systematic, effective methodologies exist that deal with privacy threats thoroughly. This paper presents a comprehensive framework to model privacy threats in software-based systems. First, this work provides a systematic methodology to model privacy-specific threats. Analogous to STRIDE, an information flow–oriented model of the system is leveraged to guide the analysis and to provide broad coverage. The methodology instructs the analyst on what issues should be investigated, and where in the model those issues could emerge. This is achieved by (i) defining a list of privacy threat types and (ii) providing the mappings between threat types and the elements in the system model. Second, this work provides an extensive catalog of privacy-specific threat tree patterns that can be used to detail the threat analysis outlined above. Finally, this work provides the means to map the existing privacy-enhancing technologies (PETs) to the identified privacy threats. Therefore, the selection of sound privacy countermeasures is simplified.
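
A sketch of how the methodology's two ingredients, a list of threat types and their mapping to system-model elements, might be encoded. The category names follow the LINDDUN framework commonly associated with this paper; the mapping entries shown are an assumed subset for illustration, not the paper's full table.

```python
# Encodes (i) a list of privacy threat types and (ii) a mapping from threat
# types to the DFD element kinds where each can occur; APPLIES_TO is an
# assumed subset shown for illustration only.
THREAT_TYPES = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

# threat type -> DFD element kinds to examine (assumed subset)
APPLIES_TO = {
    "Linkability": ["external entity", "data flow", "data store", "process"],
    "Disclosure of information": ["data flow", "data store", "process"],
    "Unawareness": ["external entity"],
}

def checklist(model_elements):
    """Yield (threat, element) pairs the analyst should investigate."""
    for threat, kinds in APPLIES_TO.items():
        for name, kind in model_elements:
            if kind in kinds:
                yield threat, name

# A two-element toy model: a user entity and a profile data store.
for threat, name in checklist([("user", "external entity"),
                               ("profile DB", "data store")]):
    print(f"examine {threat} at {name}")
```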

Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper defines and solves the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and gives two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models.
Abstract: With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is necessary to allow multiple keywords in the search request and return documents in the order of their relevance to these keywords. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). We establish a set of strict privacy requirements for such a secure cloud data utilization system. Among various multi-keyword semantics, we choose the efficient similarity measure of “coordinate matching”, i.e., as many matches as possible, to capture the relevance of data documents to the search query. We further use “inner product similarity” to quantitatively evaluate such a similarity measure. We first propose a basic idea for the MRSE based on secure inner product computation, and then give two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models. Thorough analysis investigating the privacy and efficiency guarantees of the proposed schemes is given. Experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead on computation and communication.
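
The “coordinate matching” relevance measure is easy to state in plaintext; a minimal sketch follows, illustrating only the scoring rule, not the paper's encrypted inner-product construction (the vocabulary and documents are assumed toy data):

```python
# "Coordinate matching" as an inner product of binary keyword vectors.
from typing import List

VOCAB = ["cloud", "privacy", "search", "rank", "encrypt"]  # assumed toy vocabulary

def to_vector(keywords: List[str]) -> List[int]:
    """Map a keyword set to a 0/1 vector over the vocabulary."""
    present = set(keywords)
    return [1 if term in present else 0 for term in VOCAB]

def coordinate_match(doc_kw: List[str], query_kw: List[str]) -> int:
    """Inner product = number of query keywords the document matches."""
    return sum(d * q for d, q in zip(to_vector(doc_kw), to_vector(query_kw)))

docs = {"d1": ["cloud", "privacy"], "d2": ["cloud", "search", "rank"]}
query = ["cloud", "rank"]
ranking = sorted(docs, key=lambda name: coordinate_match(docs[name], query),
                 reverse=True)
print(ranking)  # ['d2', 'd1']: d2 matches 2 query keywords, d1 matches 1
```

In MRSE the same score is computed over encrypted vectors, so the server can rank results without learning the keywords.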

01 Jan 2011
TL;DR: Display advertising became far less effective at changing stated purchase intent after the EU privacy laws were enacted, relative to display advertising in other countries, and the loss in effectiveness was more pronounced for websites that had general content (such as news sites), where non-data-driven targeting is particularly hard to do.
Abstract: Advertisers use online customer data to target their marketing appeals. This has heightened consumers' privacy concerns, leading governments to pass laws designed to protect consumer privacy by restricting the use of data and by restricting online tracking techniques used by websites. We use the responses of 3.3 million survey takers who had been randomly exposed to 9,596 online display (banner) advertising campaigns to explore how privacy regulation in the European Union (EU) has influenced advertising effectiveness. This privacy regulation restricted advertisers' ability to collect data on Web users in order to target ad campaigns. We find that, on average, display advertising became far less effective at changing stated purchase intent after the EU laws were enacted, relative to display advertising in other countries. The loss in effectiveness was more pronounced for websites that had general content (such as news sites), where non-data-driven targeting is particularly hard to do. The loss of effectiveness was also more pronounced for ads with a smaller presence on the webpage and for ads that did not have additional interactive, video, or audio features. This paper was accepted by Pradeep Chintagunta, marketing.

Proceedings ArticleDOI
04 Jan 2011
TL;DR: This paper identifies key issues, which are believed to have long-term significance in cloud computing security and privacy, based on documented problems and exhibited weaknesses.
Abstract: In meteorology, the most destructive extratropical cyclones evolve with the formation of a bent-back front and cloud head separated from the main polar-front, creating a hook that completely encircles a pocket of warm air with colder air. The most damaging winds occur near the tip of the hook. The cloud hook formation provides a useful analogy for cloud computing, in which the most acute obstacles with outsourced services (i.e., the cloud hook) are security and privacy issues. This paper identifies key issues, which are believed to have long-term significance in cloud computing security and privacy, based on documented problems and exhibited weaknesses.

Journal ArticleDOI
TL;DR: The findings reveal that cross-cultural dimensions are significant predictors of information privacy concerns and desire for online awareness, which are, in turn, found to be predictors of attitude toward, intention to use, and actual use of IM.
Abstract: Social computing technologies typically have multiple features that allow users to reveal their personal information to other users. Such self-disclosure (SD) behavior is generally considered positive and beneficial in interpersonal communication and relationships. Using a newly proposed model based on social exchange theory, this paper investigates and empirically validates the relationships between SD technology use and culture. In particular, we explore the effects of culture on information privacy concerns and the desire for online interpersonal awareness, which influence attitudes toward, intention to use, and actual use of SD technologies. Our model was tested using arguably the strongest social computing technology for online SD, instant messaging (IM), with users from China and the United States. Our findings reveal that cross-cultural dimensions are significant predictors of information privacy concerns and desire for online awareness, which are, in turn, found to be predictors of attitude toward, intention to use, and actual use of IM. Overall, our proposed model is applicable to both cultures. Our findings enhance the theoretical understanding of the effects of culture and privacy concerns on SD technologies and provide practical suggestions for developers of SD technologies, such as adding additional control features to applications.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper shows the necessity of search capability authorization that reduces the privacy exposure resulting from the search results, and establishes a scalable framework for Authorized Private Keyword Search (APKS) over encrypted cloud data, and proposes two novel solutions based on a recent cryptographic primitive, Hierarchical Predicate Encryption (HPE).
Abstract: In cloud computing, clients usually outsource their data to the cloud storage servers to reduce the management costs. While those data may contain sensitive personal information, the cloud servers cannot be fully trusted in protecting them. Encryption is a promising way to protect the confidentiality of the outsourced data, but it also introduces much difficulty to performing effective searches over encrypted information. Most existing works do not support efficient searches with complex query conditions, and care needs to be taken when using them because of the potential privacy leakages about the data owners to the data users or the cloud server. In this paper, using online Personal Health Records (PHR) as a case study, we first show the necessity of search capability authorization that reduces the privacy exposure resulting from the search results, and establish a scalable framework for Authorized Private Keyword Search (APKS) over encrypted cloud data. We then propose two novel solutions for APKS based on a recent cryptographic primitive, Hierarchical Predicate Encryption (HPE). Our solutions enable efficient multi-dimensional keyword searches with range queries, and allow delegation and revocation of search capabilities. Moreover, we enhance query privacy by hiding users' query keywords from the server. We implement our scheme on a modern workstation, and experimental results demonstrate its suitability for practical usage.


Journal ArticleDOI
TL;DR: In this article, consumer decisions to reveal or withhold information and the relationship between such decisions and objective hazards posed by information revelation were analyzed in four experiments and found that disclosure of private information is responsive to environmental cues that bear little connection, or are even inversely related, to objective hazards.
Abstract: New marketing paradigms that exploit the capabilities for data collection, aggregation, and dissemination introduced by the Internet provide benefits to consumers but also pose real or perceived privacy hazards. In four experiments, we seek to understand consumer decisions to reveal or withhold information and the relationship between such decisions and objective hazards posed by information revelation. Our central thesis, and a central finding of all four experiments, is that disclosure of private information is responsive to environmental cues that bear little connection, or are even inversely related, to objective hazards. We address underlying processes and rule out alternative explanations by eliciting subjective judgments of the sensitivity of inquiries (experiment 3) and by showing that the effect of cues diminishes if privacy concern is activated at the outset of the experiment (experiment 4). This research highlights consumer vulnerabilities in navigating increasingly complex privacy issues...

Proceedings ArticleDOI
22 May 2011
TL;DR: This paper develops algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system.
Abstract: Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon.
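
A minimal sketch of the passive inference pattern the paper describes; the snapshot data, scoring, and threshold below are invented for illustration, and the actual attacks use far more careful statistics than this simple counting:

```python
# Assumption: we hold two snapshots of public related-items lists plus a few
# auxiliary items known to belong to the target.
aux_items = {"bookA", "bookB", "bookC"}   # items the target is known to have

# snapshot[item] = set of items publicly listed as "related" to `item`
before = {"bookA": {"bookX"}, "bookB": {"bookY"}, "bookC": set()}
after  = {"bookA": {"bookX", "bookZ"}, "bookB": {"bookZ"}, "bookC": {"bookZ"}}

scores = {}
for item in aux_items:
    for related in after[item] - before[item]:   # newly appeared relations
        scores[related] = scores.get(related, 0) + 1

# An item that newly became "related" to many of the target's known items is
# a candidate hidden transaction of the target.
candidate = max(scores, key=scores.get)
print(candidate, scores[candidate])  # bookZ 3
```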

Journal ArticleDOI
TL;DR: This paper develops a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range.
Abstract: Privacy-preserving data publishing has attracted considerable research interest in recent years. Among the existing solutions, ε-differential privacy provides the strongest privacy guarantee. Existing data publishing methods that achieve ε-differential privacy, however, offer little data utility. In particular, if the output data set is used to answer count queries, the noise in the query answers can be proportional to the number of tuples in the data, which renders the results useless. In this paper, we develop a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range. The core of our solution is a framework that applies wavelet transforms on the data before adding noise to it. We present instantiations of the proposed framework for both ordinal and nominal data, and we provide a theoretical analysis on their privacy and utility guarantees. In an extensive experimental study on both real and synthetic data, we show the effectiveness and efficiency of our solution.
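
A minimal sketch of the wavelet-then-noise idea, assuming a uniform Laplace noise scale; the paper's actual mechanism calibrates the noise per wavelet level to achieve ε-differential privacy, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_forward(x):
    """Orthonormal Haar transform of a length-2^k vector (Mallat ordering)."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        half = n // 2
        avg = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)
        diff = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
        x[:half], x[half:n] = avg, diff
        n = half
    return x

def haar_inverse(c):
    """Invert haar_forward."""
    c = np.asarray(c, dtype=float).copy()
    n, total = 1, len(c)
    while n < total:
        avg, diff = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (avg + diff) / np.sqrt(2.0)
        c[1:2 * n:2] = (avg - diff) / np.sqrt(2.0)
        n *= 2
    return c

hist = np.array([12, 7, 3, 9, 4, 4, 8, 1], dtype=float)  # assumed toy histogram
scale = 1.0  # assumed uniform noise scale; the paper weights noise per level

coeffs = haar_forward(hist)
noisy = haar_inverse(coeffs + rng.laplace(0.0, scale, size=len(coeffs)))

# A range-count query is now a sum over the noisy histogram; under the
# paper's calibration its error grows only polylogarithmically in the range
# length, instead of linearly as with per-bin noise.
print(noisy[2:6].sum(), hist[2:6].sum())
```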

Proceedings ArticleDOI
27 May 2011
TL;DR: In this paper, the author summarizes reliability, availability, and security issues for cloud computing (RAS issues), proposes feasible and available solutions for some of them, and compares the benefits and risks of cloud computing with those of the status quo.
Abstract: Cloud computing is one of today's most exciting technologies due to its ability to reduce costs associated with computing while increasing flexibility and scalability for computer processes. During the past few years, cloud computing has grown from being a promising business idea to one of the fastest growing parts of the IT industry. IT organizations have expressed concern about critical issues (such as security) that exist with the widespread implementation of cloud computing. These types of concerns originate from the fact that data is stored remotely from the customer's location; in fact, it can be stored at any location. Security, in particular, is one of the most argued-about issues in the cloud computing field; several enterprises look at cloud computing warily due to projected security risks. The risks of compromised security and privacy may be lower overall, however, with cloud computing than they would be if the data were to be stored on individual machines instead of in a so-called "cloud" (the network of computers used for remote storage and maintenance). Comparison of the benefits and risks of cloud computing with those of the status quo is necessary for a full evaluation of the viability of cloud computing. Consequently, some issues arise that clients need to consider as they contemplate moving to cloud computing for their businesses. In this paper I summarize reliability, availability, and security issues for cloud computing (RAS issues), and propose feasible and available solutions for some of them.

Journal ArticleDOI
Paul A. Pavlou
TL;DR: This paper evaluates the current state of the IS literature on information privacy (where are we now?) and identifies promising research directions for advancing information privacy research in Information Systems (where should we go?).
Abstract: While information privacy has been studied in multiple disciplines over the years, the advent of the information age has both elevated the importance of privacy in theory and practice, and increased the relevance of information privacy literature for Information Systems, which has taken a leading role in the theoretical and practical study of information privacy. There is an impressive body of literature on information privacy in IS, and the two Theory and Review articles in this issue of MIS Quarterly review this literature. By integrating these two articles, this paper evaluates the current state of the IS literature on information privacy (where are we now?) and identifies promising research directions for advancing IS research on information privacy (where should we go?). Additional thoughts on further expanding the information privacy research in IS by drawing on related disciplines to enable a multidisciplinary study of information privacy are discussed.

Journal ArticleDOI
TL;DR: This paper adapts Sebé et al.'s protocol to support public verifiability, shows the correctness and security of the protocol, and demonstrates that it has good performance.
Abstract: Remote data integrity checking is a crucial technology in cloud computing. Recently, many works focus on providing data dynamics and/or public verifiability to this type of protocols. Existing protocols can support both features with the help of a third-party auditor. In a previous work, Sebé et al. propose a remote data integrity checking protocol that supports data dynamics. In this paper, we adapt Sebé et al.'s protocol to support public verifiability. The proposed protocol supports public verifiability without the help of a third-party auditor. In addition, the proposed protocol does not leak any private information to third-party verifiers. Through a formal analysis, we show the correctness and security of the protocol. After that, through theoretical analysis and experimental results, we demonstrate that the proposed protocol has good performance.
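
For intuition only, here is a toy challenge-response exchange conveying the interaction pattern of remote integrity checking. It precomputes a bounded number of audits and is far weaker than the paper's RSA-based protocol, which supports unbounded audits and public verification by anyone:

```python
# Toy remote-integrity audit: the owner banks a few (challenge, expected
# response) pairs before outsourcing and spends one per audit. The fresh
# random challenge prevents the server from replaying an old proof.
import hashlib
import os

data = b"outsourced file block"

# Owner, before outsourcing: precompute a bounded number of audits.
audits = []
for _ in range(3):
    c = os.urandom(16)
    audits.append((c, hashlib.sha256(c + data).digest()))

def server_prove(stored: bytes, challenge: bytes) -> bytes:
    """Server's response: hash of the challenge concatenated with the data."""
    return hashlib.sha256(challenge + stored).digest()

challenge, expected = audits.pop()
print(server_prove(data, challenge) == expected)  # True iff data is intact
```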

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The MobiMix approach breaks the continuity of location exposure by using mix-zones, where no applications can trace user movement, and provides a suite of road network mix-zone construction methods that offer a higher level of attack resilience and yield a specified lower bound on the level of anonymity.
Abstract: This paper presents MobiMix, a road network based mix-zone framework to protect location privacy of mobile users traveling on road networks. In contrast to spatial cloaking based location privacy protection, the approach in MobiMix is to break the continuity of location exposure by using mix-zones, where no applications can trace user movement. This paper makes two original contributions. First, we provide the formal analysis on the vulnerabilities of directly applying theoretical rectangle mix-zones to road networks in terms of anonymization effectiveness and attack resilience. We argue that effective mix-zones should be constructed and placed by carefully taking into consideration multiple factors, such as the geometry of the zones, the statistical behavior of the user population, the spatial constraints on movement patterns of the users, and the temporal and spatial resolution of the location exposure. Second, we develop a suite of road network mix-zone construction methods that provide a higher level of attack resilience and yield a specified lower bound on the level of anonymity. We evaluate the MobiMix approach through extensive experiments conducted on traces produced by GTMobiSim on different scales of geographic maps. Our experiments show that MobiMix offers a high level of anonymity and a high level of resilience to attacks compared to existing mix-zone approaches.
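
For intuition, the adversary's uncertainty inside a mix-zone is commonly summarized as the entropy of the mapping between entering users and exiting pseudonyms; the probabilities below are assumed for illustration, not derived from MobiMix's road-network model:

```python
# A two-user zone with perfectly uniform mixing gives the adversary 1 bit
# of uncertainty per user; skewed timing or geometry would reduce it.
import math

# Pr[entering user -> exiting pseudonym], assumed uniform for illustration
mapping_probs = {
    ("u1", "x1"): 0.5, ("u1", "x2"): 0.5,
    ("u2", "x1"): 0.5, ("u2", "x2"): 0.5,
}

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

u1_dist = [mapping_probs[("u1", "x1")], mapping_probs[("u1", "x2")]]
print(entropy(u1_dist))  # 1.0 bit of anonymity for u1
```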

Journal ArticleDOI
TL;DR: An adversary model is introduced and an analysis of the proposed obfuscation operators is provided to evaluate their robustness against adversaries aiming to reverse the obfuscation effects to retrieve a location that better approximates the location of the users.
Abstract: The pervasive diffusion of mobile communication devices and the technical improvements of location techniques are fostering the development of new applications that use the physical position of users to offer location-based services for business, social, or informational purposes. In such a context, privacy concerns are increasing and call for sophisticated solutions able to guarantee different levels of location privacy to the users. In this paper, we address this problem and present a solution based on different obfuscation operators that, when used individually or in combination, protect the privacy of the location information of users. We also introduce an adversary model and provide an analysis of the proposed obfuscation operators to evaluate their robustness against adversaries aiming to reverse the obfuscation effects to retrieve a location that better approximates the location of the users. Finally, we present some experimental results that validate our solution.
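
A toy rendition of two obfuscation operators in the spirit of the paper (enlarging the reported radius and shifting the reported center); the operator definitions and units here are assumptions for illustration, not the paper's formal operators:

```python
# Locations are reported as circular areas (x, y, radius); operators degrade
# the precision of the reported area and can be composed.
import random

def obfuscate_enlarge(x, y, r, factor):
    """Report the same center with an enlarged radius, reducing precision."""
    return (x, y, r * factor)

def obfuscate_shift(x, y, r, max_shift, rng=random.Random(7)):
    """Shift the reported center by a bounded random offset."""
    return (x + rng.uniform(-max_shift, max_shift),
            y + rng.uniform(-max_shift, max_shift), r)

# Operators applied in combination, as the abstract describes.
loc = (45.07, 7.69, 0.1)  # lat, lon, radius (assumed units: degrees)
print(obfuscate_shift(*obfuscate_enlarge(*loc, factor=3), max_shift=0.05))
```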

Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper proposes FindU, the first set of privacy-preserving personal profile matching schemes for mobile social networks, including novel protocols that realize two of the user privacy levels, which can also be personalized by the users.
Abstract: Making new connections according to personal preferences is a crucial service in mobile social networking, where the initiating user can find matching users within physical proximity of him/her. In existing systems for such services, usually all the users directly publish their complete profiles for others to search. However, in many applications, the users' personal profiles may contain sensitive information that they do not want to make public. In this paper, we propose FindU, the first privacy-preserving personal profile matching schemes for mobile social networks. In FindU, an initiating user can find from a group of users the one whose profile best matches his/hers; to limit the risk of privacy exposure, only necessary and minimal information about the private attributes of the participating users is exchanged. Several increasing levels of user privacy are defined, with decreasing amounts of exchanged profile information. Leveraging secure multi-party computation (SMC) techniques, we propose novel protocols that realize two of the user privacy levels, which can also be personalized by the users. We provide thorough security analysis and performance evaluation on our schemes, and show their advantages in both security and efficiency over state-of-the-art schemes.
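
What the protocols ultimately compute, the best match by attribute overlap, is shown below in plaintext; the paper's contribution is obtaining this result via secure multi-party computation without revealing full profiles, which this sketch deliberately does not attempt (the profiles are assumed toy data):

```python
# Best-match selection by attribute-set overlap, computed in the clear.
initiator = {"hiking", "jazz", "vegan"}
candidates = {
    "alice": {"jazz", "vegan", "chess"},
    "bob":   {"hiking", "rock"},
}
best = max(candidates, key=lambda u: len(candidates[u] & initiator))
print(best)  # alice (2 shared attributes vs. bob's 1)
```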

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This paper proposes a holistic and efficient solution comprising a secure traversal framework and an encryption scheme based on privacy homomorphism; the framework is scalable to large datasets by leveraging an index-based approach.
Abstract: Query processing that preserves both the data privacy of the owner and the query privacy of the client is a new research problem. It shows increasing importance as cloud computing drives more businesses to outsource their data and querying services. However, most existing studies, including those on data outsourcing, address the data privacy and query privacy separately and cannot be applied to this problem. In this paper, we propose a holistic and efficient solution that comprises a secure traversal framework and an encryption scheme based on privacy homomorphism. The framework is scalable to large datasets by leveraging an index-based approach. Based on this framework, we devise secure protocols for processing typical queries such as k-nearest-neighbor queries (kNN) on R-tree index. Moreover, several optimization techniques are presented to improve the efficiency of the query processing protocols. Our solution is verified by both theoretical analysis and performance study.
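
The "privacy homomorphism" ingredient can be demonstrated with an off-the-shelf additively homomorphic scheme; this sketch uses the third-party python-paillier library (pip install phe) and is not the paper's encryption scheme:

```python
# Additively homomorphic encryption: an untrusted server can add encrypted
# values (and scale them by plaintext constants) without decrypting them.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Client encrypts values before outsourcing them.
enc_a, enc_b = public_key.encrypt(17), public_key.encrypt(25)

# Server side: operate on ciphertexts blindly.
enc_sum = enc_a + enc_b   # ciphertext + ciphertext
enc_scaled = enc_a * 3    # ciphertext * plaintext scalar

# Only the client, holding the private key, can decrypt the results.
print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 51
```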


Journal ArticleDOI
TL;DR: Analysis shows that the relationship between privacy attitudes and certain types of disclosures (those furthering contact) is controlled by privacy policy consumption and privacy behaviors, providing evidence that social network sites could help mitigate concerns about disclosure by providing transparent privacy policies and privacy controls.

Journal ArticleDOI
TL;DR: This paper identifies an essential type of privacy attacks: neighborhood attacks, and extends the conventional k-anonymity and l-diversity models from relational data to social network data to protect privacy against neighborhood attacks.
Abstract: Recently, more and more social network data have been published in one way or another. Preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation data publishing can deal with relational data only, and cannot be applied to social network data. In this paper, we take an initiative toward preserving privacy in social network data. Specifically, we identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim’s identity is preserved using the conventional anonymization techniques. To protect privacy against neighborhood attacks, we extend the conventional k-anonymity and l-diversity models from relational data to social network data. We show that the problems of computing optimal k-anonymous and l-diverse social networks are NP-hard. We develop practical solutions to the problems. The empirical study indicates that the anonymized social network data by our methods can still be used to answer aggregate network queries with high accuracy.
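
A simplified illustration of checking k-anonymity against a much weaker neighborhood signature, here the sorted degrees of a node's neighbors rather than the paper's full neighborhood-isomorphism test; the graph is an assumed toy example:

```python
# Nodes whose 1-neighborhood signature is shared by fewer than k nodes are
# re-identifiable even under this coarse (weaker-than-the-paper) adversary.
from collections import Counter

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def signature(node):
    """Sorted neighbor degrees: a coarse proxy for 1-neighborhood structure."""
    return tuple(sorted(len(adj[n]) for n in adj[node]))

sigs = Counter(signature(n) for n in adj)
k = 2
violators = [n for n in adj if sigs[signature(n)] < k]
print(violators)  # nodes that are not k-anonymous w.r.t. this signature
```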