
Showing papers on "Information privacy published in 2012"


Proceedings ArticleDOI
23 Mar 2012
TL;DR: Discusses the research status of key technologies, including encryption mechanisms, communication security, sensor data protection, and cryptographic algorithms, and outlines the open challenges of IoT security.
Abstract: In the past decade, the internet of things (IoT) has been a focus of research. Security and privacy are the key issues for IoT applications and still face enormous challenges. To facilitate this emerging domain, we briefly review the research progress of IoT, with particular attention to security. By analyzing the security architecture and features in depth, we derive the security requirements. On this basis, we discuss the research status of key technologies, including encryption mechanisms, communication security, sensor data protection, and cryptographic algorithms, and briefly outline the remaining challenges.

700 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient and privacy-preserving aggregation scheme, named EPPA, for smart grid communications that resists various security threats, preserves user privacy, and has significantly less computation and communication overhead than existing competing approaches.
Abstract: The concept of the smart grid has emerged as a convergence of traditional power system engineering and information and communication technology. It is vital to the success of the next generation of power grid, which is expected to be reliable, efficient, flexible, clean, friendly, and secure. In this paper, we propose an efficient and privacy-preserving aggregation scheme, named EPPA, for smart grid communications. EPPA uses a superincreasing sequence to structure multidimensional data and encrypts the structured data with the homomorphic Paillier cryptosystem. For data communications from users to the smart grid operation center, data aggregation is performed directly on ciphertext at local gateways without decryption, and the aggregation result of the original data can be obtained at the operation center. EPPA also adopts a batch verification technique to reduce authentication cost. Through extensive analysis, we demonstrate that EPPA resists various security threats, preserves user privacy, and has significantly less computation and communication overhead than existing competing approaches.
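
To make the aggregation idea concrete, here is a minimal sketch of the superincreasing-sequence packing (the dimension count, user bound, and value bound are illustrative assumptions; real EPPA performs the addition on Paillier ciphertexts, whose additive homomorphism makes multiplying ciphertexts correspond to adding plaintexts, so plain integer addition stands in for the ciphertext operation here):

```python
# Sketch of EPPA-style multidimensional aggregation (illustration only).
D = 4            # number of data dimensions per user (assumed)
N = 100          # maximum number of users aggregated (assumed)
MAX_VAL = 1000   # assumed bound on each per-dimension reading

# Superincreasing base: each "slot" is wide enough to hold the sum of
# N readings, so per-dimension sums never overflow into the next slot.
BASE = N * MAX_VAL + 1

def pack(readings):
    """Pack a D-dimensional reading into one integer: sum_i r_i * BASE**i."""
    assert len(readings) == D and all(0 <= r <= MAX_VAL for r in readings)
    return sum(r * BASE**i for i, r in enumerate(readings))

def unpack(aggregate):
    """Recover the D per-dimension sums from the aggregated integer."""
    sums = []
    for _ in range(D):
        aggregate, s = divmod(aggregate, BASE)
        sums.append(s)
    return sums

users = [[10, 20, 30, 40], [1, 2, 3, 4], [5, 5, 5, 5]]
aggregate = sum(pack(u) for u in users)   # done on ciphertexts in EPPA
print(unpack(aggregate))                  # -> [16, 27, 38, 49]
```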

682 citations


Proceedings ArticleDOI
23 Mar 2012
TL;DR: This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle, and describes future research directions on data security and privacy protection in the cloud.
Abstract: It is well known that cloud computing has many potential advantages, and many enterprise applications and data are migrating to public or hybrid clouds. But for some business-critical applications, organizations, especially large enterprises, are still reluctant to move them to the cloud. The market share of cloud computing remains far below expectations. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor to adoption of cloud computing services. This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle. It then discusses some current solutions and, finally, describes future research directions on data security and privacy protection in the cloud.

654 citations


Posted Content
TL;DR: In this article, a path-breaking analysis of the concept of privacy as a question of access to the individual and to information about him is presented, together with an account of why privacy is valuable and why it has the coherence that justifies maintaining it as both a theoretical concept and an ideal.
Abstract: The paper offers a path-breaking analysis of the concept of privacy as a question of access to the individual and to information about him, and an account of the reasons why privacy is valuable and why it has the coherence that justifies maintaining it as both a theoretical concept and an ideal. Finally, the paper moves from identifying the grounds of the value of privacy to the different question of whether, and to what extent, privacy should be protected by law. While privacy is a useful concept in social and moral thought, it may well be relatively rare that it should be protected by law in cases where its violation does not also infringe other important interests or values.

549 citations


Proceedings ArticleDOI
20 May 2012
TL;DR: The current policy debate surrounding third-party web tracking is surveyed and the FourthParty web measurement platform is presented, to inform researchers with essential background and tools for contributing to public understanding and policy debates about web tracking.
Abstract: In the early days of the web, content was designed and hosted by a single person, group, or organization. No longer. Webpages are increasingly composed of content from myriad unrelated "third-party" websites in the business of advertising, analytics, social networking, and more. Third-party services have tremendous value: they support free content and facilitate web innovation. But third-party services come at a privacy cost: researchers, civil society organizations, and policymakers have increasingly called attention to how third parties can track a user's browsing activities across websites. This paper surveys the current policy debate surrounding third-party web tracking and explains the relevant technology. It also presents the FourthParty web measurement platform and studies we have conducted with it. Our aim is to inform researchers with essential background and tools for contributing to public understanding and policy debates about web tracking.

535 citations


Journal ArticleDOI
TL;DR: This paper defines and solves the problem of secure ranked keyword search over encrypted cloud data, explores a statistical measure from information retrieval to build a secure searchable index, and develops a one-to-many order-preserving mapping technique to protect the sensitive score information.
Abstract: Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before being outsourced to the commercial public cloud, which makes effective data utilization a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization demanded by the large number of users and huge number of data files in the cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by returning results ranked by relevance instead of undifferentiated results, and further improves file retrieval accuracy. Specifically, we explore a statistical measure from information retrieval, the relevance score, to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to protect the sensitive score information. The resulting design facilitates efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys an “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.
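
The one-to-many order-preserving idea can be sketched as follows (a hedged illustration, not the paper's exact construction: the interval width R, the key, and the per-file offset derivation are all assumptions). Each integer score s is mapped into its own interval of width R, so order across distinct scores is preserved, while equal scores on different files map to different values, flattening the score distribution the server can observe:

```python
import hashlib

# Sketch of a one-to-many order-preserving mapping (illustrative only).
R = 2**16  # interval width per score value (assumed parameter)

def opm_encrypt(score, file_id, key=b"demo-key"):
    """Map an integer score into [score*R, (score+1)*R) at a position
    derived pseudo-randomly from a per-file seed."""
    seed = hashlib.sha256(key + file_id.encode()).digest()
    offset = int.from_bytes(seed[:8], "big") % R
    return score * R + offset

c1, c2, c3 = opm_encrypt(5, "fileA"), opm_encrypt(5, "fileB"), opm_encrypt(7, "fileC")
assert max(c1, c2) < c3   # order across distinct scores is always preserved
print(c1, c2)             # equal scores map to (almost surely) different values
```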

526 citations


Journal ArticleDOI
TL;DR: The security of HASBE is formally proved based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al., and its performance and computational complexity are formally analyzed.
Abstract: Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns about outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits the flexibility and fine-grained access control of ASBE in supporting compound attributes. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show with comprehensive experiments that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing.
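
As a rough illustration of the access-control semantics such schemes enforce (the check below is a plaintext stand-in with invented attribute names; real CP-ABE/HASBE enforces policy satisfaction cryptographically, by secret-sharing the decryption capability over the policy structure):

```python
# Plaintext illustration of CP-ABE-style policy satisfaction (not crypto).
policy = ("AND", ("OR", "cardiologist", "nurse"), "hospital-A")

def satisfies(node, attrs):
    """Evaluate a boolean policy tree against a user's attribute set."""
    if isinstance(node, str):          # leaf: a single attribute
        return node in attrs
    op, *children = node
    results = [satisfies(c, attrs) for c in children]
    return all(results) if op == "AND" else any(results)

print(satisfies(policy, {"nurse", "hospital-A"}))   # True: can decrypt
print(satisfies(policy, {"nurse", "hospital-B"}))   # False: cannot
```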

497 citations


Proceedings ArticleDOI
05 Sep 2012
TL;DR: A new model for privacy is introduced, namely privacy as expectations, which involves using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use and a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations.
Abstract: Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game's. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.

491 citations


Journal ArticleDOI
TL;DR: This paper presents an effective pseudonym changing at social spots (PCS) strategy to achieve provable location privacy and develops two anonymity set analytic models to quantitatively investigate the location privacy achieved by the PCS strategy.
Abstract: As a prime aspect of the quality of privacy in vehicular ad hoc networks (VANETs), location privacy is imperative for VANETs to fully flourish. Although frequent pseudonym changing provides a promising solution for location privacy in VANETs, such a solution may become invalid if pseudonyms are changed at an improper time or location. To cope with this issue, in this paper, we present an effective pseudonym changing at social spots (PCS) strategy to achieve provable location privacy. In particular, we first introduce social spots where several vehicles may gather, e.g., a road intersection when the traffic light turns red or a free parking lot near a shopping mall. Taking the anonymity set size as the location privacy metric, we then develop two anonymity set analytic models to quantitatively investigate the location privacy achieved by the PCS strategy. In addition, we use game-theoretic techniques to prove the feasibility of the PCS strategy in practice. Extensive performance evaluations demonstrate that better location privacy can be achieved when a vehicle changes its pseudonyms at highly social spots, and that the proposed PCS strategy can assist vehicles to intelligently change their pseudonyms at the right moment and place.
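
As a hedged illustration of the anonymity-set metric (a generic formulation; the paper's analytic models are considerably richer): if k vehicles change pseudonyms simultaneously at a social spot and the adversary has no side information, then

```latex
% Generic anonymity-set view of pseudonym changing (illustrative only).
\Pr[\text{adversary links old and new pseudonyms correctly}] = \tfrac{1}{k},
\qquad
\text{privacy gain} = \log_2 k \ \text{bits},
```

so the benefit of changing grows with the number of co-located vehicles, which is why social spots are natural places to change pseudonyms.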

435 citations


Proceedings ArticleDOI
01 Apr 2012
TL;DR: The experimental study demonstrates that it is possible to build private spatial decompositions efficiently, and use them to answer a variety of queries privately with high accuracy, and provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers.
Abstract: Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well understood how to release data based on counts and simple functions under this guarantee, it remains a challenge to provide general-purpose techniques to release data that are useful for a variety of queries. In this paper, we focus on spatial data, such as locations, and more generally any multidimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of "private spatial decompositions": these adapt standard spatial indexing methods, such as quadtrees and kd-trees, to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks must be composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and for post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently and use them to answer a variety of queries privately with high accuracy.
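
A minimal sketch of a private quadtree in the spirit of the paper (the uniform per-level budget split and stdlib-only Laplace sampling are simplifying assumptions; the paper's contributions include better budget allocation and post-processing):

```python
import math, random

def laplace(b):
    """Sample Laplace(0, b) by inverse-CDF (standard library only)."""
    u = random.random() - 0.5
    return -b * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_quadtree(points, region, depth, eps_per_level):
    """Quadtree whose node counts carry Laplace noise.

    Siblings partition the data (parallel composition), while levels are
    sequentially composed, so the total budget is depth * eps_per_level.
    """
    x0, y0, x1, y1 = region
    node = {"region": region,
            "count": len(points) + laplace(1.0 / eps_per_level),
            "children": []}
    if depth > 1:
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        for q in [(x0, y0, mx, my), (mx, y0, x1, my),
                  (x0, my, mx, y1), (mx, my, x1, y1)]:
            sub = [p for p in points
                   if q[0] <= p[0] < q[2] and q[1] <= p[1] < q[3]]
            node["children"].append(
                private_quadtree(sub, q, depth - 1, eps_per_level))
    return node

pts = [(random.random(), random.random()) for _ in range(1000)]
tree = private_quadtree(pts, (0.0, 0.0, 1.0, 1.0), depth=3, eps_per_level=0.3)
print(round(tree["count"]))   # noisy total, close to 1000
```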

409 citations


Posted Content
TL;DR: Providing individuals with access to their data in usable format will let them share the wealth created by their information and will incentivize developers to offer user-side features and applications harnessing the value of big data.
Abstract: We live in an age of “big data.” Data have become the raw material of production, a new source of immense economic and social value. Advances in data mining and analytics and the massive increase in computing power and data storage capacity have expanded, by orders of magnitude, the scope of information available to businesses and government. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data. Data create enormous value for the world economy, driving innovation, productivity, efficiency, and growth. At the same time, the “data deluge” presents privacy concerns that could stir a regulatory backlash, dampening the data economy and stifling innovation. In order to strike a balance between beneficial uses of data and individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information”, the role of individual control, and the principles of data minimization and purpose limitation. This article emphasizes the importance of providing individuals with access to their data in usable format. This will let individuals share the wealth created by their information and incentivize developers to offer user-side features and applications harnessing the value of big data. Where individual access to data is impracticable, data are likely to be de-identified to an extent sufficient to diminish privacy concerns. In addition, organizations should be required to disclose their decisional criteria, since in a big data world it is often not the data but rather the inferences drawn from them that give cause for concern.

Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper proposes a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud, exploiting ring signatures to compute the verification metadata needed to audit the correctness of shared data.
Abstract: With cloud storage services, it is commonplace for data to be not only stored in the cloud but also shared across multiple users. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block of shared data is kept private from a third-party auditor (TPA), who is still able to verify the integrity of the shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of the proposed mechanism when auditing shared data.

Journal ArticleDOI
TL;DR: Significant relationships are found between the content of privacy policies and privacy concern/trust, between willingness to provide personal information and privacy concerns, and between privacy concern and trust.

Journal ArticleDOI
TL;DR: This paper presents a novel technique called slicing, which partitions the data both horizontally and vertically, and shows that slicing preserves better data utility than generalization and can be used for membership disclosure protection.
Abstract: Several anonymization techniques, such as generalization and bucketization, have been designed for privacy-preserving microdata publishing. Recent work has shown that generalization loses a considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply to data that lack a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
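
A minimal sketch of the slicing transformation (the column grouping and bucket size are illustrative assumptions; the paper additionally forms buckets so that the sliced table satisfies l-diversity):

```python
import random

# Slicing sketch: group correlated attributes into columns, partition tuples
# into buckets, then permute each column independently within every bucket.
# Cross-column linkage is broken while intra-column correlations survive.

table = [
    ("22", "M", "47906", "dyspepsia"),
    ("22", "F", "47906", "flu"),
    ("33", "F", "47905", "flu"),
    ("52", "F", "47905", "bronchitis"),
]
columns = [(0, 1), (2, 3)]   # e.g. (Age, Sex) and (Zip, Disease)

def slice_table(rows, columns, bucket_size):
    sliced = []
    for b in range(0, len(rows), bucket_size):
        bucket = rows[b:b + bucket_size]
        pieces = []
        for col in columns:
            vals = [tuple(row[i] for i in col) for row in bucket]
            random.shuffle(vals)         # permute this column in the bucket
            pieces.append(vals)
        sliced.extend(zip(*pieces))      # re-join one value per column
    return sliced

for row in slice_table(table, columns, bucket_size=2):
    print(row)
```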

Proceedings ArticleDOI
01 Oct 2012
TL;DR: It is proved that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program.
Abstract: We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
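
In generic notation (S the private attribute, X the data to be released, Y the released output, d a distortion measure; the paper's exact definitions may differ), the average-leakage design problem can be sketched as:

```latex
% Privacy-utility trade-off under average information leakage
% (generic notation; the paper's exact formulation may differ).
\mathcal{L}_{\mathrm{avg}} = I(S;Y) = H(S) - H(S \mid Y),
\qquad
\min_{p(y \mid x)} \; I(S;Y)
\quad \text{s.t.} \quad \mathbb{E}\left[d(X,Y)\right] \le D .
```

Because I(S;Y) is convex in the mapping p(y|x) for a fixed source and the distortion constraint is linear, the problem is a convex program, mirroring the rate-distortion formulation the abstract refers to.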

Proceedings ArticleDOI
25 Mar 2012
TL;DR: The proposed mechanism first exploits a suppressing technique to build a storage-efficient similarity keyword set from a given document collection, with edit distance as the similarity metric, and correctly achieves the defined similarity search functionality with constant search time complexity.
Abstract: As the data produced by individuals and enterprises that need to be stored and utilized are rapidly increasing, data owners are motivated to outsource their local complex data management systems to the cloud for its great flexibility and economic savings. However, since sensitive cloud data may have to be encrypted before outsourcing, which renders the traditional data utilization service based on plaintext keyword search obsolete, how to enable privacy-assured utilization mechanisms for outsourced cloud data is of paramount importance. Considering the large number of on-demand data users and the huge amount of outsourced data files in the cloud, the problem is particularly challenging, as it is extremely difficult to also meet the practical requirements of performance, system usability, and high-quality user searching experience. In this paper, we investigate the problem of secure and efficient similarity search over outsourced cloud data. Similarity search is a fundamental and powerful tool widely used in plaintext information retrieval, but it has not been well explored in the encrypted data domain. Our mechanism design first exploits a suppressing technique to build a storage-efficient similarity keyword set from a given document collection, with edit distance as the similarity metric. Based on that, we build a private trie-traverse searching index and show that it correctly achieves the defined similarity search functionality with constant search time complexity. We formally prove the privacy-preserving guarantee of the proposed mechanism under rigorous security treatment. To demonstrate the generality of our mechanism and further enrich the application spectrum, we also show that our new construction naturally supports fuzzy search, a previously studied notion aiming only to tolerate typos and representation inconsistencies in the user searching input. Extensive experiments on the Amazon cloud platform with a real data set further demonstrate the validity and practicality of the proposed mechanism.
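
The flavor of a suppression-based similarity keyword set can be sketched as follows (an illustration in the spirit of deletion neighborhoods; the paper's exact construction may differ): two keywords within a small edit distance share an element of their suppressed sets, so set intersection approximates edit-distance matching without enumerating every substitution and insertion.

```python
# Suppression-based similarity keyword set (illustrative sketch).
def suppress_set(word, max_del=1):
    """All strings obtained by deleting up to max_del characters."""
    variants = {word}
    if max_del >= 1:
        variants |= {word[:i] + word[i + 1:] for i in range(len(word))}
    return variants

index_kw, query_kw = "castle", "castel"      # one transposition apart
print(suppress_set(index_kw) & suppress_set(query_kw))  # {'caste', 'castl'}
```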

Journal ArticleDOI
01 Dec 2012
TL;DR: This study reviews fifteen established theories in online information privacy research and recognizes the primary contributions and connections of the theories and develops an integrated framework for further research.
Abstract: To study the formation of online consumers' information privacy concern and its effect, scholars from different perspectives applied multiple theories in research. To date, there has yet to be a systematic review and integration of the theories in literature. To fill the gap, this study reviews fifteen established theories in online information privacy research and recognizes the primary contributions and connections of the theories. Based on the review, an integrated framework is developed for further research. The framework highlights two interrelated trade-offs that influence an individual's information disclosure behavior: the privacy calculus (i.e., the trade-off between expected benefits and privacy risks) and the risk calculus (i.e., the trade-off between privacy risks and efficacy of coping mechanisms). These two trade-offs are together called the dual-calculus model. A decision table based on the dual-calculus model is provided to predict an individual's intention to disclose personal information online. Implications of the study for further research and practice are discussed.

Journal ArticleDOI
TL;DR: The results support the core assertion that perceived control over personal information is a key factor affecting context-specific concerns for information privacy and have important implications for service providers and consumers as well as for regulatory bodies and technology developers.
Abstract: This study seeks to clarify the nature of control in the context of information privacy to generate insights into the effects of different privacy assurance approaches on context-specific concerns for information privacy. We theorize that such effects are exhibited through mediation by perceived control over personal information and develop arguments in support of the interaction effects involving different privacy assurance approaches (individual self-protection, industry self-regulation, and government legislation). We test the research model in the context of location-based services using data obtained from 178 individuals in Singapore. In general, the results support our core assertion that perceived control over personal information is a key factor affecting context-specific concerns for information privacy. In addition to enhancing our theoretical understanding of the link between control and privacy concerns, these findings have important implications for service providers and consumers as well as for regulatory bodies and technology developers.

Journal ArticleDOI
TL;DR: A conditional privacy-preserving authentication scheme, called CPAS, using pseudo-identity-based signatures for secure vehicle-to-infrastructure communications in vehicular ad hoc networks; the time for simultaneously verifying multiple signatures can be reduced by 18% compared with the previous scheme.
Abstract: In this paper, we propose a conditional privacy-preserving authentication scheme, called CPAS, using pseudo-identity-based signatures for secure vehicle-to-infrastructure communications in vehicular ad hoc networks. The scheme achieves conditional privacy preservation, in which each message launched by a vehicle is mapped to a distinct pseudo-identity, and a trust authority can always retrieve the real identity of a vehicle from any pseudo-identity. In the scheme, a roadside unit (RSU) can simultaneously verify multiple received signatures, thus considerably reducing the total verification time; an RSU can simultaneously verify 2540 signed-messages/s. The time for simultaneously verifying 800 signatures in our scheme can be reduced by 18%, compared with the previous scheme.

Journal ArticleDOI
TL;DR: Clearly, there are several important advantages for employees and employers when employees bring their own devices to work, but there are also significant concerns about security and privacy.
Abstract: Clearly, there are several important advantages for employees and employers when employees bring their own devices to work. But there are also significant concerns about security and privacy. Companies and individuals involved, or thinking about getting involved, with BYOD should think carefully about the risks as well as the rewards.

Journal ArticleDOI
TL;DR: Results suggest that, in order of importance, only perceived severity, self-efficacy, perceived vulnerability, and gender are antecedents of information privacy concerns with social networking sites.

Proceedings ArticleDOI
26 Aug 2012
TL;DR: This paper presents the analysis and results from applying automated classifiers for disambiguating profiles belonging to the same user from different social networks; user ID and name were found to be the most discriminative features for disambiguating user profiles.
Abstract: With the growing popularity and usage of online social media services, people now have accounts (sometimes several) on multiple and diverse services like Facebook, LinkedIn, Twitter, and YouTube. Publicly available information can be used to create a digital footprint of any user of these social media services. Generating such digital footprints can be very useful for personalization, profile management, and detecting malicious behavior of users. A very important application of analyzing users' online digital footprints is to protect users from the potential privacy and security risks arising from the huge amount of publicly available user information. We extracted information about user identities on different social networks through the Social Graph API, FriendFeed, and Profilactic, and collated our own dataset to create the digital footprints of the users. We used username, display name, description, location, profile image, and number of connections to generate the digital footprints. We applied context-specific techniques (e.g., Jaro-Winkler similarity and WordNet-based ontologies) to measure the similarity of user profiles on different social networks. We specifically focused on Twitter and LinkedIn. In this paper, we present the analysis and results from applying automated classifiers for disambiguating profiles belonging to the same user from different social networks. User ID and name were found to be the most discriminative features for disambiguating user profiles. Using the most promising set of features and similarity metrics, we achieved accuracy, precision, and recall of 98%, 99%, and 96%, respectively.
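
Since profile matching hinges on string similarity such as Jaro-Winkler, here is a self-contained sketch of that metric (the textbook formulation; the paper's feature weighting, thresholds, and classifier are not reproduced, and the handles below are hypothetical):

```python
def jaro(s1, s2):
    """Jaro similarity: matching characters within a sliding window,
    discounted by transpositions."""
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if n1 == 0 or n2 == 0:
        return 0.0
    window = max(0, max(n1, n2) // 2 - 1)
    m1, m2 = [False] * n1, [False] * n2
    matches = 0
    for i in range(n1):
        for j in range(max(0, i - window), min(i + window + 1, n2)):
            if not m2[j] and s1[i] == s2[j]:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, k = 0, 0
    for i in range(n1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t /= 2
    return (matches / n1 + matches / n2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost Jaro similarity for strings sharing a prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Hypothetical handles for the same person on two networks.
print(jaro_winkler("alice_smith", "alicesmith"))   # high: likely same user
print(jaro_winkler("alice_smith", "bob_jones"))    # low: likely different
```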

Journal ArticleDOI
TL;DR: This article emphasizes a researcher's obligation to protect research participants' privacy in mediated research contexts; and offers an introductory framework for reconsidering how to make case-based decisions to better protect the interests of participants in situations where vulnerability or potential harm is not easily determined.
Abstract: This article focuses on innovative methods for protecting privacy in research of Internet-mediated social contexts. Traditional methods for protecting privacy by hiding or anonymizing data no longer suffice in situations where social researchers need to design studies, manage data, and build research reports in increasingly public, archivable, searchable, and traceable spaces. In such research environments, there are few means of adequately disguising details about the venue and the persons being studied. One practical method of data representation in contexts in which privacy protection is unstable is fabrication, involving creative, bricolage-style transfiguration of original data into composite accounts or representational interactions. This article traces some of the historical trends that have restricted such creative ethical solutions; emphasizes a researcher's obligation to protect research participants' privacy in mediated research contexts; and offers an introductory framework for reconsidering how to make case-based decisions that better protect the interests of participants in situations where vulnerability or potential harm is not easily determined.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper investigates the searchable encryption problem in the presence of a semi-honest-but-curious server, which may execute only a fraction of search operations honestly and return only a fraction of the search outcome honestly, and proposes a verifiable SSE scheme to offer verifiable searchability in addition to data privacy.
Abstract: Outsourcing data to cloud servers, while increasing service availability and reducing users' burden of managing data, inevitably brings in new concerns such as data privacy, since the server may be honest-but-curious. To mediate the conflict between data usability and data privacy in such a scenario, research on searchable encryption is of increasing interest. Motivated by the fact that a cloud server, besides its curiosity, may be selfish in order to save its computation and/or download bandwidth, in this paper we investigate the searchable encryption problem in the presence of a semi-honest-but-curious server, which may execute only a fraction of search operations honestly and return only a fraction of the search outcome honestly. To defend against this strong adversary, a verifiable SSE (VSSE) scheme is proposed that offers verifiable searchability in addition to data privacy, both of which are confirmed by our rigorous security analysis. Besides, we treat practicality and efficiency as a central requirement of a searchable encryption scheme. To demonstrate that our scheme is lightweight, we implemented and tested the proposed VSSE on a laptop (serving as the server) and a mobile phone running Android 2.3.4 (serving as the end user). The experimental results optimistically suggest that the proposed scheme satisfies all of our design goals.
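
To convey the flavor of verifiable searchability, here is a deliberately simplified analogy (not the proposed VSSE construction: the keyword appears in the clear here, whereas real SSE issues encrypted trapdoors, and the MAC arrangement is an assumption). The owner stores a tag per keyword that a lazy server cannot forge for a truncated result list:

```python
import hashlib, hmac

KEY = b"owner-secret"   # known to owner and querying user, not the server

def tag(keyword, file_ids):
    """MAC binding a keyword to its complete result list."""
    msg = keyword.encode() + b"|" + ",".join(sorted(file_ids)).encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

# Owner builds the index once and outsources it to the server.
index = {"grid": (["f1", "f3"], tag("grid", ["f1", "f3"]))}

# User checks the server's answer: dropping "f3" would break the tag.
ids, t = index["grid"]
assert hmac.compare_digest(t, tag("grid", ids))
```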

Proceedings ArticleDOI
01 Dec 2012
TL;DR: In this paper, the problem of releasing filtered signals that respect the privacy of the user data streams is addressed in a system theoretic context, and methods are developed to approximate a given filter by a differentially private version, so that the distortion introduced by the privacy mechanism is minimized.
Abstract: Emerging systems such as smart grids or intelligent transportation systems often require end-user applications to continuously send information to external data aggregators performing monitoring or control tasks. This can result in an undesirable loss of privacy for the users in exchange of the benefits provided by the application. Motivated by this trend, this paper introduces privacy concerns in a system theoretic context, and addresses the problem of releasing filtered signals that respect the privacy of the user data streams. Our approach relies on a formal notion of privacy from the database literature, called differential privacy, which provides strong privacy guarantees against adversaries with arbitrary side information. Methods are developed to approximate a given filter by a differentially private version, so that the distortion introduced by the privacy mechanism is minimized. Two specific scenarios are considered. First, the notion of differential privacy is extended to dynamic systems with many participants contributing independent input signals. Kalman filtering is also discussed in this context, when a released output signal must preserve differential privacy for the measured signals or state trajectories of the individual participants. Second, differentially private mechanisms are described to approximate stable filters when participants contribute to a single event stream, extending previous work on differential privacy under continual observation.
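
A minimal output-perturbation baseline for the setting the paper studies (the window length, epsilon, and per-sample bound are illustrative assumptions, and the guarantee protects a bounded change in a single input sample; the paper's filter-approximation machinery achieves less distortion for the same guarantee):

```python
import math, random

def laplace(b):
    """Sample Laplace(0, b) by inverse-CDF (standard library only)."""
    u = random.random() - 0.5
    return -b * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_moving_average(stream, window, eps, delta_in=1.0):
    """Release full-window moving averages with eps-differential privacy.

    Changing one sample by at most delta_in moves at most `window` outputs
    by delta_in/window each, so the l1 sensitivity of the whole released
    sequence is delta_in and Laplace(delta_in/eps) noise per output suffices.
    """
    out = []
    for t in range(window - 1, len(stream)):
        avg = sum(stream[t - window + 1:t + 1]) / window
        out.append(avg + laplace(delta_in / eps))
    return out

readings = [5, 6, 5, 7, 30, 6, 5]        # e.g. a smart-meter trace
print(dp_moving_average(readings, window=3, eps=0.5))
```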

Posted Content
TL;DR: Privacy has an image problem: over and over again, regardless of the forum in which it is debated, it is cast as old-fashioned at best and downright harmful at worst, anti-progressive, overly costly, and inimical to the welfare of the body politic.
Abstract: Privacy has an image problem. Over and over again, regardless of the forum in which it is debated, it is cast as old-fashioned at best and downright harmful at worst — anti-progressive, overly costly, and inimical to the welfare of the body politic. Yet the perception of privacy as antiquated and socially retrograde is wrong. It is the result of a conceptual inversion that relates to the way in which the purpose of privacy has been conceived. Like the broader tradition of liberal political theory within which it is situated, legal scholarship has conceptualized privacy as a form of protection for the liberal self. Its function is principally a defensive one; it offers shelter from the pressures of societal and technological change. So characterized, however, privacy is reactive and ultimately inessential. In fact, the liberal self who is the subject of privacy theory and privacy policymaking does not exist. The self who is the real subject of privacy law- and policy-making is socially constructed, emerging gradually from a preexisting cultural and relational substrate. For this self, the purpose of privacy is quite different. Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop. So described, privacy is anything but old-fashioned, and trading it away creates two kinds of large systemic risk. First, privacy is an indispensable structural feature of liberal democratic political systems. Freedom from surveillance, whether public or private, is foundational to the capacity for critical self-reflection and informed citizenship. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy. Under such conditions, liberal democracy as a form of government is replaced, gradually but surely, by a form of government that I will call modulated democracy because it relies on a form of surveillance that operates by modulation: a set of processes in which the quality and content of surveillant attention is continually modified according to the subject’s own behavior, sometimes in response to inputs from the subject but according to logics that ultimately are outside the subject’s control. Second, privacy is also foundational to the capacity for innovation, and so the perception of privacy as anti-innovation is a non sequitur. A society that values innovation ignores privacy at its peril, for privacy also shelters the processes of play and experimentation from which innovation emerges. Efforts to repackage pervasive surveillance as innovation — under the moniker “Big Data” — are better understood as efforts to enshrine the methods and values of the modulated society at the heart of our system of knowledge production. In short, privacy incursions harm individuals, but not only individuals. Privacy incursions in the name of progress, innovation, and ordered liberty jeopardize the continuing vitality of the political and intellectual culture that we say we value.

Journal ArticleDOI
TL;DR: It is found that individuals' awareness of Internet privacy legislation negatively influences privacy concerns, whereas previous privacy invasions do not, and that personal innovativeness significantly influences intention to disclose location-related information.
Abstract: Although location-based social network (LBSN) services have developed rapidly in recent years, the reasons why people disclose location-related information under this environment have not been adequately investigated. This study builds a privacy calculus model to investigate the factors that influence LBSN users' intention to disclose location-related information in China. In addition, this study applies justice theory to investigate the role of privacy intervention approaches used by LBSN Web sites in enhancing users' perception of justice, including incentives provision, interaction promotion, privacy control, and privacy policy. Model testing using structural equation modeling reveals that perceived cost (users' privacy concerns) and perceived benefits (personalization and connectedness) influence intention to disclose location-related information. Meanwhile, providing incentives and promoting interaction enhance, respectively, personalization and connectedness. Privacy control and privacy policies both help in reducing privacy concerns. We also find that individuals' awareness of Internet privacy legislation negatively influences privacy concerns, whereas previous privacy invasions do not. Finally, we find that personal innovativeness significantly influences intention to disclose location-related information. This study not only extends the privacy research on social networking sites under mobile environments but also provides practical implications for service providers and policy makers to develop better LBSNs.

Journal ArticleDOI
TL;DR: This paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain, and it shows through security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only and known-plaintext attacks.
Abstract: Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. Since all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only and known-plaintext attacks. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
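
The encrypted-domain feasibility rests on additive homomorphism: the difference-of-Gaussians stage of SIFT is a linear filter, i.e., a weighted sum of pixels, which a server can evaluate directly on ciphertexts. A toy Paillier sketch (tiny fixed primes, Python 3.9+, illustration only; nothing here is secure or the paper's exact parameterization):

```python
import math, random

# Toy Paillier cryptosystem (insecure demo parameters).
p, q = 1789, 1867
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:           # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphism: a ciphertext product decrypts to the plaintext sum, and
# c**k decrypts to k*m, so a weighted pixel sum (e.g. a smoothing kernel)
# can be evaluated entirely on encrypted pixels.
pixels = [enc(10), enc(20), enc(30)]
kernel = [1, 2, 1]
c = 1
for ci, wi in zip(pixels, kernel):
    c = (c * pow(ci, wi, n2)) % n2
print(dec(c))                            # -> 1*10 + 2*20 + 1*30 = 80
```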

Journal ArticleDOI
TL;DR: This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-basedpersonalization, and surveys user attitudes towards privacy and personalization.
Abstract: Personalization technologies offer powerful tools for enhancing the user experience in a wide variety of systems, but at the same time raise new privacy concerns. For example, systems that personalize advertisements according to the physical location of the user, or according to the user's friends' search history, introduce new privacy risks that may discourage wide adoption of personalization technologies. This article analyzes the privacy risks associated with several current and prominent personalization trends, namely social-based personalization, behavioral profiling, and location-based personalization. We survey user attitudes towards privacy and personalization, as well as technologies that can help reduce privacy risks. We conclude with a discussion that frames risks and technical solutions at the intersection of personalization and privacy, as well as areas for further investigation. This framework can help designers and researchers contextualize the privacy challenges of solutions when designing personalization systems.

Journal ArticleDOI
TL;DR: This work proposes encrypting private data and processing them under encryption to generate recommendations by introducing a semitrusted third party and using data packing, and presents what is, to the best of the authors' knowledge, the first comparison protocol that compares multiple values packed in one encryption.
Abstract: Recommender systems have become an important tool for personalization of online services. Generating recommendations in online services depends on privacy-sensitive data collected from the users. Traditional data protection mechanisms focus on access control and secure transmission, which provide security only against malicious third parties, but not the service provider. This creates a serious privacy risk for the users. In this paper, we aim to protect the private data against the service provider while preserving the functionality of the system. We propose encrypting private data and processing them under encryption to generate recommendations. By introducing a semitrusted third party and using data packing, we construct a highly efficient system that does not require the active participation of the user. We also present a comparison protocol, which is the first one to the best of our knowledge, that compares multiple values that are packed in one encryption. Conducted experiments show that this work opens a door to generate private recommendations in a privacy-preserving manner.