
Showing papers on "Information privacy published in 2015"


Journal ArticleDOI
TL;DR: In this article, the authors propose a nationally recognized licensing framework for AVs that determines appropriate standards for liability, security, and data privacy, so that the technology's potential benefits for vehicle safety, congestion, and travel behavior can be realized.
Abstract: Autonomous vehicles (AVs) represent a potentially disruptive yet beneficial change to our transportation system. This new technology has the potential to impact vehicle safety, congestion, and travel behavior. All told, major social AV impacts in the form of crash savings, travel time reduction, fuel efficiency and parking benefits are estimated to approach $2000 per year per AV, and may eventually approach nearly $4000 when comprehensive crash costs are accounted for. Yet barriers to implementation and mass-market penetration remain. Initial costs will likely be unaffordable. Licensing and testing standards in the U.S. are being developed at the state level, rather than nationally, which may lead to inconsistencies across states. Liability details remain undefined, security concerns linger, and without new privacy standards, a default lack of privacy for personal travel may become the norm. The impacts and interactions with other components of the transportation system, as well as implementation details, remain uncertain. To address these concerns, the federal government should expand research in these areas and create a nationally recognized licensing framework for AVs, determining appropriate standards for liability, security, and data privacy.

2,053 citations


Proceedings ArticleDOI
21 May 2015
TL;DR: A decentralized personal data management system that ensures users own and control their data is described, and a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party is implemented.
Abstract: The recent increase in reported incidents of surveillance and security breaches compromising users' privacy calls into question the current model, in which third parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society.
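To make the access-control idea concrete, here is a minimal sketch of a ledger whose transactions carry permissions and data-handling instructions rather than payments. It is an illustration only, not the authors' protocol: the transaction fields, the hash-pointer scheme, and the check_access helper are assumptions.

    # Illustrative sketch (Python): a toy append-only ledger where "access"
    # transactions set a user's policy for a service and "data" transactions
    # store only a hash pointer to off-chain data. Field names are assumptions.
    import hashlib, json, time

    ledger = []  # stands in for the blockchain

    def post_access_tx(user, service, permissions):
        """User-signed policy: which kinds of data the service may read."""
        ledger.append({"type": "access", "user": user, "service": service,
                       "permissions": set(permissions), "ts": time.time()})

    def post_data_tx(user, record):
        """Keep data off-chain; record only a hash pointer on the ledger."""
        pointer = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        ledger.append({"type": "data", "user": user, "pointer": pointer, "ts": time.time()})
        return pointer

    def check_access(user, service, kind):
        """The latest access transaction for (user, service) decides a query."""
        for tx in reversed(ledger):
            if tx["type"] == "access" and tx["user"] == user and tx["service"] == service:
                return kind in tx["permissions"]
        return False  # no policy on the ledger -> deny

    post_access_tx("alice", "fitness-app", ["steps"])
    post_data_tx("alice", {"steps": 8042})
    print(check_access("alice", "fitness-app", "steps"))     # True
    print(check_access("alice", "fitness-app", "location"))  # False: never granted

Revoking access would simply append a new access transaction with a narrower permission set, which later queries would pick up.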

1,953 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the main research challenges and existing solutions in the field of IoT security, identifying open issues and suggesting directions for future research.

1,258 citations


Journal ArticleDOI
30 Jan 2015-Science
TL;DR: This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior: people’s uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people's concern about privacy; and the degree to which privacy concerns are malleable—manipulable by commercial and governmental interests.
Abstract: This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior. We use three themes to connect insights from social and behavioral sciences: people's uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people's concern, or lack thereof, about privacy; and the degree to which privacy concerns are malleable—manipulable by commercial and governmental interests. Organizing our discussion by these themes, we offer observations concerning the role of public policy in the protection of privacy in the information age.

1,139 citations


Proceedings ArticleDOI
16 May 2015
TL;DR: IccTA, a static taint analyzer that detects privacy leaks among components in Android applications, goes beyond state-of-the-art approaches by supporting inter-component detection and propagating context information among components, which improves the precision of the analysis.
Abstract: Shake Them All is a popular "Wallpaper" application exceeding millions of downloads on Google Play. At installation, this application is given permission to (1) access the Internet (for updating wallpapers) and (2) use the device microphone (to change background following noise changes). With these permissions, the application could silently record user conversations and upload them remotely. To give more confidence about how Shake Them All actually processes what it records, it is necessary to build a precise analysis tool that tracks the flow of any sensitive data from its source point to any sink, especially if those are in different components. Since Android applications may leak private data carelessly or maliciously, we propose IccTA, a static taint analyzer to detect privacy leaks among components in Android applications. IccTA goes beyond state-of-the-art approaches by supporting inter-component detection. By propagating context information among components, IccTA improves the precision of the analysis. IccTA outperforms existing tools on two benchmarks for ICC-leak detectors: DroidBench and ICC-Bench. Moreover, our approach detects 534 ICC leaks in 108 apps from MalGenome and 2,395 ICC leaks in 337 apps in a set of 15,000 Google Play apps.
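The kind of flow IccTA looks for can be illustrated with a toy model: sensitive data produced in one component reaches a network sink in another component via an intent-like message, so an analysis confined to a single component misses it. This sketch is a hypothetical illustration, not IccTA's static analysis, and the component and field names are invented.

    # Toy model (Python) of an inter-component privacy leak: a value from a
    # sensitive source crosses a component boundary and reaches a network sink.
    class Tainted(str):
        """A string remembered as coming from a sensitive source."""

    def get_device_id():                 # source, e.g. a device identifier
        return Tainted("353627051234567")

    def send_to_server(payload):         # sink, e.g. an HTTP request
        if isinstance(payload, Tainted):
            print("LEAK: sensitive data reached a network sink:", payload)

    class ComponentA:
        def run(self, send_intent):
            send_intent({"extra_id": get_device_id()})   # data leaves the component

    class ComponentB:
        def on_receive(self, intent):
            send_to_server(intent["extra_id"])           # sink lives in another component

    receiver = ComponentB()
    ComponentA().run(receiver.on_receive)  # per-component analysis alone misses this flow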

556 citations


Book ChapterDOI
10 Aug 2015
TL;DR: Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks; because of its distinct characteristics, it faces new security and privacy challenges beyond those inherited from cloud computing.
Abstract: Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges besides those inherited from cloud computing. In this paper, we have surveyed these challenges and corresponding solutions in a brief manner.

437 citations


Proceedings ArticleDOI
06 Jul 2015
TL;DR: This study explores the security aims and goals of IoT, provides a new classification of different types of attacks and countermeasures on security and privacy, and discusses future security directions and challenges that need to be addressed to improve security over such networks and aid in the wider adoption of IoT by the masses.
Abstract: The Internet of Things (IoT) has been given a lot of emphasis since the 90s, when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security over such networks and aid in the wider adoption of IoT by the masses.

393 citations


01 Jan 2015
TL;DR: The public-key-based homomorphic authenticator is uniquely integrated with a random masking technique to achieve a privacy-preserving public auditing system for cloud data storage security while keeping all of the above requirements in mind.
Abstract: Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize the public key based homomorphic authenticator and uniquely integrate it with random mask technique to achieve a privacy-preserving public auditing system for cloud data storage security while keeping all above requirements in mind. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.
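As a rough illustration of why the auditor never needs the data itself, the equations below show one common BLS-style homomorphic linear authenticator; the paper's exact construction and masking step may differ, so treat this as an assumed sketch. Each block m_i is signed once by the data owner, and any weighted combination of blocks can later be verified in aggregate:

    \sigma_i = \left(H(i)\, u^{m_i}\right)^{x}, \qquad v = g^{x} \ \text{(public key)}
    \mu' = \sum_i \nu_i m_i, \qquad \sigma = \prod_i \sigma_i^{\nu_i} \quad \text{(cloud's response to challenge } \{(i,\nu_i)\}\text{)}
    e(\sigma, g) \stackrel{?}{=} e\!\Big(\prod_i H(i)^{\nu_i} \cdot u^{\mu'},\ v\Big)

To keep \mu' (and hence the file contents) hidden from the auditor, the random-mask idea sends \mu = r + \gamma\mu' with \gamma = h(R) and R = e(u, v)^{r}, and the check becomes R \cdot e(\sigma^{\gamma}, g) \stackrel{?}{=} e\big((\prod_i H(i)^{\nu_i})^{\gamma}\, u^{\mu},\ v\big).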

317 citations


Journal ArticleDOI
TL;DR: Systematic gaps in compliance with data protection principles in accredited health apps question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians.
Abstract: Poor information privacy practices have been identified in health apps. Medical app accreditation programs offer a mechanism for assuring the quality of apps; however, little is known about their ability to control information privacy risks. We aimed to assess the extent to which already-certified apps complied with data protection principles mandated by the largest national accreditation program. Cross-sectional, systematic, 6-month assessment of 79 apps certified as clinically safe and trustworthy by the UK NHS Health Apps Library. Protocol-based testing was used to characterize personal information collection, local-device storage and information transmission. Observed information handling practices were compared against privacy policy commitments. The study revealed that 89 % (n = 70/79) of apps transmitted information to online services. No app encrypted personal information stored locally. Furthermore, 66 % (23/35) of apps sending identifying information over the Internet did not use encryption and 20 % (7/35) did not have a privacy policy. Overall, 67 % (53/79) of apps had some form of privacy policy. No app collected or transmitted information that a policy explicitly stated it would not; however, 78 % (38/49) of information-transmitting apps with a policy did not describe the nature of personal information included in transmissions. Four apps sent both identifying and health information without encryption. Although the study was not designed to examine data handling after transmission to online services, security problems appeared to place users at risk of data theft in two cases. Systematic gaps in compliance with data protection principles in accredited health apps question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians. Accreditation programs should, as a minimum, provide consistent and reliable warnings about possible threats and, ideally, require publishers to rectify vulnerabilities before apps are released.

314 citations


Journal ArticleDOI
TL;DR: The findings show that currently mHealth developers often fail to provide app privacy policies, and the privacy policies that are available do not make information privacy practices transparent to users, require college-level literacy, and are often not focused on the app itself.

295 citations


Journal ArticleDOI
TL;DR: It was found that online privacy concerns were not significantly related to specific privacy behaviors, such as the frequency or content of disclosures on SNSs, which demonstrated that the privacy paradox still exists when it is operationalized as in prior research.
Abstract: The privacy paradox states that online privacy concerns do not sufficiently explain online privacy behaviors on social network sites (SNSs). In this study, it was first asked whether the privacy paradox would still exist when analyzed as in prior research. Second, it was hypothesized that the privacy paradox would disappear when analyzed with a new approach. The new approach featured a multidimensional operationalization of privacy by differentiating between informational, social, and psychological privacy. In addition to privacy concerns, privacy attitudes and privacy intentions were also analyzed. To improve methodological aspects, all items were designed on the basis of the theory of planned behavior. In an online questionnaire with N = 595 respondents, it was found that online privacy concerns were not significantly related to specific privacy behaviors, such as the frequency or content of disclosures on SNSs (e.g., name, cell-phone number, or religious views). This demonstrated that the privacy paradox still exists when it is operationalized as in prior research. With regard to the new approach, all hypotheses were confirmed: Results showed both a direct relation and an indirect relation between privacy attitudes and privacy behaviors, the latter mediated by privacy intentions. In addition, an indirect relation between privacy concerns and privacy behaviors was also found, mediated by privacy attitudes and privacy intentions. Therefore, privacy behaviors can be explained sufficiently when using privacy attitudes, privacy concerns, and privacy intentions within the theory of planned behavior. The behaviors of SNS users are not as paradoxical as was once believed. Copyright © 2014 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
18 Apr 2015
TL;DR: A study that evaluates the benefits of giving users an app permission manager and sending them nudges intended to raise their awareness of the data collected by their apps finds that these approaches are complementary and can each play a significant role in empowering users to more effectively control their privacy.
Abstract: Smartphone users are often unaware of the data collected by apps running on their devices. We report on a study that evaluates the benefits of giving users an app permission manager and sending them nudges intended to raise their awareness of the data collected by their apps. Our study provides both qualitative and quantitative evidence that these approaches are complementary and can each play a significant role in empowering users to more effectively control their privacy. For instance, even after a week with access to the permission manager, participants benefited from nudges showing them how often some of their sensitive data was being accessed by apps, with 95% of participants reassessing their permissions, and 58% of them further restricting some of their permissions. We discuss how participants interacted both with the permission manager and the privacy nudges, analyze the effectiveness of both solutions, and derive some recommendations.

Book ChapterDOI
22 Jul 2015
TL;DR: This paper surveys the existing literature on privacy notices, identifies challenges, requirements, and best practices for privacy notice design, and maps out the design space for privacy notices by identifying relevant dimensions, providing a taxonomy and consistent terminology of notice approaches.
Abstract: Notifying users about a system's data practices is supposed to enable users to make informed privacy decisions. Yet, current notice and choice mechanisms, such as privacy policies, are often ineffective because they are neither usable nor useful, and are therefore ignored by users. Constrained interfaces on mobile devices, wearables, and smart home devices connected in an Internet of Things exacerbate the issue. Much research has studied usability issues of privacy notices and many proposals for more usable privacy notices exist. Yet, there is little guidance for designers and developers on the design aspects that can impact the effectiveness of privacy notices. In this paper, we make multiple contributions to remedy this issue. We survey the existing literature on privacy notices and identify challenges, requirements, and best practices for privacy notice design. Further, we map out the design space for privacy notices by identifying relevant dimensions. This provides a taxonomy and consistent terminology of notice approaches to foster understanding and reasoning about notice options available in the context of specific systems. Our systemization of knowledge and the developed design space can help designers, developers, and researchers identify notice and choice requirements and develop a comprehensive notice concept for their system that addresses the needs of different audiences and considers the system's limitations and opportunities for providing notice.

Journal ArticleDOI
TL;DR: An extension to the privacy calculus model is developed, arguing that the situation‐specific assessment of risks and benefits is bounded by (1) pre‐existing attitudes or dispositions, such as general privacy concerns or general institutional trust, and (2) limited cognitive resources and heuristic thinking.
Abstract: Existing research on information privacy has mostly relied on the privacy calculus model, which views privacy-related decision-making as a rational process where individuals weigh the anticipated risks of disclosing personal data against the potential benefits. In this research, we develop an extension to the privacy calculus model, arguing that the situation-specific assessment of risks and benefits is bounded by (1) pre-existing attitudes or dispositions, such as general privacy concerns or general institutional trust, and (2) limited cognitive resources and heuristic thinking. An experimental study, employing two samples from the USA and Switzerland, examined consumer responses to a new smartphone application that collects driving behavior data and provided converging support for these predictions. Specifically, the results revealed that a situation-specific assessment of risks and benefits fully mediates the effect of dispositional factors on information disclosure. In addition, the results showed that privacy assessment is influenced by momentary affective states, indicating that consumers underestimate the risks of information disclosure when confronted with a user interface that elicits positive affect.

Proceedings Article
22 Jul 2015
TL;DR: A qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users' security practices.
Abstract: Many people use the Internet every day yet know little about how it really works. Prior literature diverges on how people's Internet knowledge affects their privacy and security decisions. We undertook a qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks. Lay people, as compared to those with computer science or related backgrounds, had simpler mental models that omitted Internet levels, organizations, and entities. People with more articulated technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. Despite these differences, we did not find a direct relationship between people's technical background and the actions they took to control their privacy or increase their security online. Consistent with other work on user knowledge and experience, our study suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users' security practices.

Proceedings ArticleDOI
17 May 2015
TL;DR: This paper presents a general, efficient unlearning approach that transforms the learning algorithms used by a system into a summation form and applies to all stages of machine learning, including feature selection and modeling.
Abstract: Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations -- asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
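A minimal sketch of the summation idea, not the paper's implementation: a classifier trained purely from per-class sums and counts can forget one example by subtracting its contribution from those sums, instead of retraining on all remaining data. The class and method names here are invented for illustration.

    # Illustrative sketch (Python): summation-form learning where unlearning a
    # sample means subtracting its contribution from a few running sums.
    from collections import defaultdict

    class SummationClassifier:
        def __init__(self):
            self.class_counts = defaultdict(int)                          # n_y
            self.feature_sums = defaultdict(lambda: defaultdict(float))   # sum of x_j per class

        def learn(self, x, y, sign=+1):
            self.class_counts[y] += sign
            for j, v in enumerate(x):
                self.feature_sums[y][j] += sign * v

        def unlearn(self, x, y):
            # Forget (x, y) by updating a handful of summations --
            # asymptotically cheaper than retraining from scratch.
            self.learn(x, y, sign=-1)

        def mean(self, y, j):
            return self.feature_sums[y][j] / max(self.class_counts[y], 1)

    clf = SummationClassifier()
    clf.learn([1.0, 0.0], "spam"); clf.learn([0.0, 1.0], "ham"); clf.learn([1.0, 1.0], "spam")
    clf.unlearn([1.0, 1.0], "spam")   # the forgotten sample no longer influences the model
    print(clf.class_counts["spam"], clf.mean("spam", 0))   # 1 1.0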

Journal ArticleDOI
TL;DR: A study of the existing laws regulating these aspects in the European Union and the United States, a review of the academic literature related to this topic, and a proposal of some recommendations for designers in order to create mobile health applications that satisfy the current security and privacy legislation are presented.
Abstract: In a world where the industry of mobile applications is continuously expanding and new health care apps and devices are created every day, it is important to take special care of the collection and treatment of users' personal health information. However, the appropriate methods to do this are not usually taken into account by app designers, and insecure applications are released. This paper presents a study of security and privacy in mHealth, focusing on three parts: a study of the existing laws regulating these aspects in the European Union and the United States, a review of the academic literature related to this topic, and a proposal of some recommendations for designers in order to create mobile health applications that satisfy the current security and privacy legislation. This paper will complement other standards and certifications about security and privacy and will serve as a quick guide for app designers, developers and researchers.

Journal ArticleDOI
TL;DR: This paper proposes a practical solution for privacy preserving medical record sharing for cloud computing, where the statistical analysis and cryptography are innovatively combined together to provide multiple paradigms of balance between medical data utilization and privacy protection.

Journal ArticleDOI
TL;DR: The results of this study indicate that, in the context of an e-commerce transaction with an unfamiliar vendor, information disclosure is the result of competing influences of exchange benefits and two types of privacy beliefs (privacy protection belief and privacy risk belief).
Abstract: The effect of situational factors is largely ignored by current studies on information privacy. This paper theorized and empirically tested how an individual's decision-making on information disclosure is driven by competing situational benefits and risk factors. The results of this study indicate that, in the context of an e-commerce transaction with an unfamiliar vendor, information disclosure is the result of competing influences of exchange benefits and two types of privacy beliefs (privacy protection belief and privacy risk belief). In addition, the effect of monetary rewards is dependent upon the fairness of information exchange. Monetary rewards could undermine information disclosure when information collected has low relevance to the purpose of the e-commerce transaction.

Journal ArticleDOI
Neil Selwyn1
TL;DR: A number of ways in which digital data in education could be questioned along social lines are outlined, including issues of data inequalities, the role of data in managerialist modes of organisation and control, the rise of so-called ‘dataveillance’ and the reductionist nature of data-based representation.
Abstract: The generation and processing of data through digital technologies is an integral element of contemporary society, as reflected in recent debates over online data privacy, ‘Big Data’ and the rise of data mining and analytics in business, science and government. This paper outlines the significance of digital data within education, arguing for increased interest in the topic from educational researchers. Building on themes from the emerging sub-field of ‘digital sociology’, the paper outlines a number of ways in which digital data in education could be questioned along social lines. These include issues of data inequalities, the role of data in managerialist modes of organisation and control, the rise of so-called ‘dataveillance’ and the reductionist nature of data-based representation. The paper concludes with a set of suggestions for future research and discussion, thus outlining the beginnings of a framework for the future critical study of digital data and education.

Journal ArticleDOI
TL;DR: The challenges for genome data privacy are enumerated, and a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward is presented.
Abstract: Genome sequencing technology has advanced at a rapid pace and it is now possible to generate highly-detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy; notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While the computer scientists have addressed data privacy for various data types, there has been less attention dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and we report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state-of-the-art regarding privacy attacks on genomic data and strategies for mitigating such attacks, as well as contextualizing these attacks from the perspective of medicine and public policy. This paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the Internet of Things to the broad managerial community and explore one of its central tensions: convenience vs. privacy and secrecy, and highlight opportunities, challenges, and managerial guidance.

Journal ArticleDOI
TL;DR: Some of the main challenges of privacy in the IoT as well as opportunities for research and innovation are discussed and some of the ongoing research efforts that address IoT privacy issues are introduced.
Abstract: Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices ("things"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.

Proceedings ArticleDOI
13 Apr 2015
TL;DR: It is argued that by accepting that not all users require the same level of privacy, a higher level of utility can often be attained by not providing excess privacy to those who do not want it.
Abstract: Differential privacy is widely accepted as a powerful framework for providing strong, formal privacy guarantees for aggregate data analysis. A limitation of the model is that the same level of privacy protection is afforded for all individuals. However, it is common that the data subjects have quite different expectations regarding the acceptable level of privacy for their data. Consequently, differential privacy may lead to insufficient privacy protection for some users, while over-protecting others. We argue that by accepting that not all users require the same level of privacy, a higher level of utility can often be attained by not providing excess privacy to those who do not want it. We propose a new privacy definition called personalized differential privacy (PDP), a generalization of differential privacy in which users specify a personal privacy requirement for their data. We then introduce several novel mechanisms for achieving PDP. Our primary mechanism is a general one that automatically converts any existing differentially private algorithm into one that satisfies PDP. We also present a more direct approach for achieving PDP, inspired by the well-known exponential mechanism. We demonstrate our framework through extensive experiments on real and synthetic data.
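One way to read the "convert any differentially private algorithm" claim is via non-uniform sampling: each user's record is included with a probability derived from that user's own budget, and an ordinary t-differentially-private mechanism is then run on the sample. The sketch below follows that general idea; the sampling formula, the threshold t, and the function name are assumptions for illustration rather than the paper's exact mechanism.

    # Illustrative sketch (Python): sampling-based personalized DP for a count
    # query. Users with smaller personal budgets are sampled less often; the
    # sampled data is then answered with a plain t-DP Laplace mechanism.
    import math, random

    def pdp_count(records, predicate, t=1.0):
        sample = []
        for value, eps_u in records:          # records: (value, personal epsilon)
            p_u = 1.0 if eps_u >= t else (math.exp(eps_u) - 1) / (math.exp(t) - 1)
            if random.random() < p_u:
                sample.append(value)
        true_count = sum(1 for v in sample if predicate(v))
        # Laplace(1/t) noise as the difference of two exponentials with rate t.
        noise = random.expovariate(t) - random.expovariate(t)
        return true_count + noise

    records = [(25, 0.1), (37, 1.0), (52, 0.5), (41, 2.0)]   # ages with personal budgets
    print(pdp_count(records, lambda age: age > 30, t=1.0))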

Journal ArticleDOI
TL;DR: Some of the economic, technical, social, and ethical issues associated with personal data markets, focusing on the privacy challenges they raise, are outlined.
Abstract: Personal data is increasingly conceived as a tradable asset. Markets for personal information are emerging and new ways of valuating individuals' data are being proposed. At the same time, legal obligations over protection of personal data and individuals' concerns over its privacy persist. This article outlines some of the economic, technical, social, and ethical issues associated with personal data markets, focusing on the privacy challenges they raise.

Journal ArticleDOI
TL;DR: The authors argue for the use of Big Data as complementary audit evidence, evaluating its applicability using the audit evidence criteria framework and providing cost-benefit analysis for sufficiency, reliability, and relevance considerations.
Abstract: SYNOPSIS In this paper we argue for the use of Big Data as complementary audit evidence. We evaluate the applicability of Big Data using the audit evidence criteria framework and provide cost-benefit analysis for sufficiency, reliability, and relevance considerations. Critical challenges, including integration with traditional audit evidence, information transfer issues, and information privacy protection, are discussed and possible solutions are provided.

Proceedings ArticleDOI
17 May 2015
TL;DR: This paper evaluates and systematizes current secure messaging solutions, proposes an evaluation framework for their security, usability, and ease-of-adoption properties, and identifies three key challenges, mapping the design landscape for each: trust establishment, conversation security, and transport privacy.
Abstract: Motivated by recent revelations of widespread state surveillance of personal communication, many solutions now claim to offer secure and private messaging. This includes both a large number of new projects and many widely adopted tools that have added security features. The intense pressure in the past two years to deliver solutions quickly has resulted in varying threat models, incomplete objectives, dubious security claims, and a lack of broad perspective on the existing cryptographic literature on secure communication. In this paper, we evaluate and systematize current secure messaging solutions and propose an evaluation framework for their security, usability, and ease-of-adoption properties. We consider solutions from academia, but also identify innovative and promising approaches used "in-the-wild" that are not considered by the academic literature. We identify three key challenges and map the design landscape for each: trust establishment, conversation security, and transport privacy. Trust establishment approaches offering strong security and privacy features perform poorly from a usability and adoption perspective, whereas some hybrid approaches that have not been well studied in the academic literature might provide better trade-offs in practice. In contrast, once trust is established, conversation security can be achieved without any user involvement in most two-party conversations, though conversations between larger groups still lack a good solution. Finally, transport privacy appears to be the most difficult problem to solve without paying significant performance penalties.

Journal ArticleDOI
17 Jul 2015-Science
TL;DR: The use of machine learning to make leaps across informational and social contexts to infer health conditions and risks from nonmedical data provides representative scenarios for reflection on how to balance innovation and regulation.
Abstract: Large-scale aggregate analyses of anonymized data can yield valuable results and insights that address public health challenges and provide new avenues for scientific discovery. These methods can extend our knowledge and provide new tools for enhancing health and wellbeing. However, they raise questions about how to best address potential threats to privacy while reaping benefits for individuals and society as a whole. The use of machine learning to make leaps across informational and social contexts to infer health conditions and risks from nonmedical data provides representative scenarios for reflection on how to balance innovation and regulation.

Journal ArticleDOI
TL;DR: Big and Open Linked Data (BOLD) creates new opportunities, has the potential to transform government and its interactions with the public, and can be used to create an open and transparent government.

Journal ArticleDOI
TL;DR: This work considers a setting in which a data analyst wishes to buy information from a population from which he can estimate some statistic, while the owners of the private data experience some cost for their loss of privacy.