
Showing papers on "Privacy software published in 2023"



Journal ArticleDOI
TL;DR: In this paper, an optional privacy-preserving data aggregation scheme based on BGN homomorphic encryption is proposed, without any trusted third party; the scheme is shown to be satisfactory in terms of computation cost and communication overhead.
Abstract: With the advances in fog computing, various users’ data, collected by smart devices in the Internet of Things (IoT), are published to facilitate services and efficiency. This leads to users’ worry about security and privacy issues in fog-enhanced IoT. Although privacy protection can ease such worry, users need to pay an extra price, such as higher computation cost. Besides, some users want convenience rather than privacy: they prefer to publish their raw data in exchange for better convenience or benefits. In this article, an optional privacy-preserving data aggregation scheme based on BGN homomorphic encryption is proposed, without any trusted third party. In the proposed scheme, each user can choose either no privacy encryption or privacy encryption to upload its own data, according to its own privacy sensitivity. With the help of the fog node, the control center can obtain the aggregated data of all smart devices as well as the individual data of smart devices that choose the no-privacy option. In addition, the proposed scheme is analyzed and shown to achieve correctness, privacy preservation, and robustness. Experimental results and comparisons show that the proposed scheme is satisfactory in terms of computation cost and communication overhead.
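
As a rough illustration of the core idea the abstract describes (a fog node aggregating readings it cannot read, while devices that opt out of privacy send raw values), here is a minimal additively homomorphic aggregation sketch. Paillier is used as a stand-in for BGN, and the toy primes, device names, and per-device privacy flag are assumptions made for this example, not the paper's protocol.

```python
# Toy additively homomorphic aggregation in the spirit of the scheme above.
# Paillier stands in for BGN; the tiny primes, device names, and the per-device
# "private" flag are illustrative assumptions, not the paper's protocol.
import random
from math import gcd, lcm

p, q = 8191, 131071                     # toy primes -- far too small for real use
n, n2 = p * q, (p * q) ** 2
g = n + 1                               # standard Paillier choice of generator
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:               # r must be invertible modulo n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each device either encrypts its reading (privacy option) or sends it raw.
readings = [("dev1", 12, True), ("dev2", 7, False), ("dev3", 30, True)]
cipher_product, raw = 1, {}
for name, value, private in readings:
    if private:
        cipher_product = (cipher_product * encrypt(value)) % n2   # fog node aggregates blindly
    else:
        raw[name] = value                                         # forwarded in the clear
aggregate = decrypt(cipher_product) + sum(raw.values())           # control center's view
print(aggregate, raw)                                             # -> 49 {'dev2': 7}
```

Multiplying ciphertexts modulo n² corresponds to adding the underlying plaintexts, which is what lets the fog node aggregate without decrypting anything.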

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a private multi-agent LQ control framework using differential privacy, which quantifies the impact of privacy along three dimensions: the amount of information shared under privacy, the control-theoretic cost of privacy, and the tradeoffs between privacy and performance.
Abstract: As multi-agent systems proliferate and collect more user data, new approaches are needed to protect sensitive data while still enabling system operation. To address this need, this article presents a private multi-agent LQ control framework. Agents’ state trajectories can be sensitive, and we therefore protect them using differential privacy. We quantify the impact of privacy along three dimensions: the amount of information shared under privacy, the control-theoretic cost of privacy, and the tradeoffs between privacy and performance. These analyses are done in conventional control-theoretic terms, which we use to develop guidelines for calibrating privacy as a function of system parameters. Numerical results indicate that system performance remains within desirable ranges, even under strict privacy requirements.
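
The article's contribution is the control-theoretic analysis rather than the noise mechanism itself; purely as an illustration of the kind of differential privacy typically applied to shared state trajectories, the sketch below perturbs a trajectory with Gaussian noise calibrated by the classical (ε, δ) Gaussian-mechanism bound. The sensitivity value, privacy parameters, and toy trajectory are assumptions, not the paper's setup.

```python
# A minimal sketch of the Gaussian mechanism that differentially private
# multi-agent control schemes typically build on. The sensitivity bound,
# epsilon/delta values, and trajectory below are illustrative assumptions.
import numpy as np

def gaussian_mechanism(trajectory, sensitivity, eps, delta, rng):
    # Classical calibration (valid for eps < 1): sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / eps
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps
    return trajectory + rng.normal(0.0, sigma, size=trajectory.shape)

rng = np.random.default_rng(0)
true_states = np.cumsum(rng.normal(size=(50, 2)), axis=0)   # toy 2-state trajectory
shared = gaussian_mechanism(true_states, sensitivity=1.0, eps=0.5, delta=1e-5, rng=rng)
print(np.mean((shared - true_states) ** 2))                  # noise power: one view of the "cost of privacy"
```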

2 citations


Journal ArticleDOI
TL;DR: In this paper , the authors looked into how explainability might help to tackle this issue and created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.

2 citations


Journal ArticleDOI
TL;DR: In this article, a large-scale longitudinal corpus of privacy policies from 1996 to 2021 is collected and analyzed in terms of the data practices the policies describe, the rights they grant to users, and the rights they reserve for their organizations.
Abstract: It is well known that most users do not read privacy policies, but almost always tick the box to agree with them. While the length and readability of privacy policies have been well studied, and many approaches for policy analysis based on natural language processing have been proposed, existing studies are limited in their depth and scope, often focusing on a small number of data practices at a single point in time. In this paper, we fill this gap by analyzing the 25-year history of privacy policies using machine learning and natural language processing and presenting a comprehensive analysis of policy contents. Specifically, we collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021 and analyze their content in terms of the data practices they describe, the rights they grant to users, and the rights they reserve for their organizations. We pay particular attention to changes in response to recent privacy regulations such as the GDPR and CCPA. We observe some positive changes, such as reductions in data collection post-GDPR, but also a range of concerning data practices, such as widespread implicit data collection for which users have no meaningful choices or access rights. Our work is an important step towards making privacy policies machine-readable on the user side, which would help users match their privacy preferences against the policies offered by web services.

2 citations


Journal ArticleDOI
TL;DR: In this paper, an interactive IoT application design tool called PARROT (PrivAcy by design tool foR inteRnet Of Things) is presented, which helps developers to design privacy-aware IoT applications, taking account of privacy compliance during the design process and providing real-time feedback on potential privacy violations.
Abstract: Internet of Things (IoT) applications typically collect and analyse personal data that is categorised as sensitive or as a special category of personal data. These data are subject to a higher degree of protection under data privacy laws. Despite legal requirements to support privacy practices, such as in Privacy by Design (PbD) schemes, these practices are not yet commonly followed by software developers. The difficulty of developing privacy-preserving applications emphasises the importance of exploring the problems developers face when embedding privacy techniques, suggesting the need for a supporting tool. An interactive IoT application design tool – PARROT (PrivAcy by design tool foR inteRnet Of Things) – is presented. This tool helps developers to design privacy-aware IoT applications, taking account of privacy compliance during the design process and providing real-time feedback on potential privacy violations. A user study with 18 developers was conducted, comprising a semi-structured interview and a design exercise, to understand how developers typically handle privacy within the design process. Collaboration with a privacy lawyer was used to review designs produced by developers and uncover privacy limitations that could be addressed by developing a software tool. Based on the findings, a proof-of-concept prototype of PARROT was implemented and evaluated in two controlled lab studies. The outcome of the study indicates that IoT applications designed with PARROT addressed privacy concerns better and managed to reduce several of the limitations identified. From a privacy compliance perspective, PARROT helps developers to address compliance requirements throughout the design and testing process. This is achieved by incorporating privacy-specific design features into the IoT application from the beginning rather than retrospectively.

1 citation


Journal ArticleDOI
TL;DR: Wang et al. propose a Petri net model to evaluate current privacy controls and hotspots during SSIoT data transitions, using a Barbie smart connected toy user interaction; the model provides privacy assurance, evaluates privacy by identifying hotspots that need controls, and minimizes privacy-related risks such as breaches of personally identifiable information and interaction data during IoT device use.
Abstract: Most Small Scale IoT (SSIoT) devices on the market gather a significant amount of sensitive information, yet many lack privacy controls, introducing significant privacy and safety risks to users. Such risks stem from the lack of privacy integration into the system development process. No formalized SSIoT data flow model currently integrates privacy elements for evaluation during the system development lifecycle (SDLC). This work aims to review current data flow modeling techniques, used in most SSIoT system development lifecycles, to identify privacy gaps and assess the requisite privacy controls necessary to improve user privacy. To verify this, we designed a simulation experiment using a Petri net to evaluate the current privacy controls and hotspots during SSIoT data transitions. We assess our Petri net model using a Barbie smart connected toy user interaction. The results show that the Petri net has unique privacy elements and verification schemes over all other data flow modeling techniques. Further, it provides privacy assurance, evaluates privacy by identifying privacy hotspots needing controls, and minimizes privacy-related risks such as breaches of personally identifiable information and interaction data during SSIoT device use.
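
The paper's actual Petri net and smart-toy case study are only summarized above; the sketch below is a deliberately tiny Petri net simulation that flags a "privacy hotspot" when a PII token can reach an unprotected place. The places, transitions, and hotspot rule are invented for illustration and are not the authors' model.

```python
# A minimal Petri net sketch of flagging a privacy hotspot in an SSIoT data flow.
# Places, transitions, and the "hotspot" rule are illustrative assumptions; they
# are not the model or the smart-toy case study from the paper.
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: dict        # place -> tokens consumed
    outputs: dict       # place -> tokens produced

marking = {"pii_on_device": 1, "encryption_key": 0, "cloud_plain": 0, "cloud_encrypted": 0}
transitions = [
    Transition("upload_encrypted", {"pii_on_device": 1, "encryption_key": 1}, {"cloud_encrypted": 1}),
    Transition("upload_plain",     {"pii_on_device": 1},                      {"cloud_plain": 1}),
]

def enabled(t, m):
    return all(m.get(p, 0) >= k for p, k in t.inputs.items())

def fire(t, m):
    for p, k in t.inputs.items():
        m[p] -= k
    for p, k in t.outputs.items():
        m[p] = m.get(p, 0) + k

# Fire the first enabled transition and flag a hotspot if PII reaches an unprotected place.
for t in transitions:
    if enabled(t, marking):
        fire(t, marking)
        break
if marking["cloud_plain"] > 0:
    print("privacy hotspot: PII left the device without an encryption control")
print(marking)
```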

1 citation


Journal ArticleDOI
TL;DR: In this paper , the authors examined Internet users' privacy protection behaviors in relation to individual privacy concerns and their perceived collective value of privacy over time and found that individual privacy concern is not as important for temporal protection as assumed, but that recognition of collective privacy may temporarily change people's privacy behavior.
Abstract: People’s perception of privacy can primarily be directed to themselves or to the value of privacy for society. Likewise, privacy protection can repel both individual and collective privacy threats. Focusing on this distinction, the present article examines Internet users’ privacy protection behaviors in relation to individual privacy concerns and their perceived collective value of privacy over time. We conducted a longitudinal panel study with three measurement points (N = 1790) to investigate relations between and within persons. The results of a random-intercept cross-lagged panel model revealed positive between-person relations between the perceived value of privacy, privacy concerns, and privacy protection. At the within-person level, only a temporal increase in the perceived value of privacy was related to increased protection behaviors. This suggests that individual privacy concerns are not as important for temporal protection as assumed, but that a recognition of collective privacy may temporarily change people’s privacy behavior.
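
The abstract does not spell out the model equations; for readers unfamiliar with the approach, a random-intercept cross-lagged panel model of the kind described is commonly written as below. The variable labels (v = perceived collective value of privacy, c = privacy concern, b = protection behavior) and the restriction to a single within-person equation are illustrative choices, not the authors' notation.

```latex
% Generic RI-CLPM decomposition: between-person random intercepts plus
% within-person cross-lagged dynamics (labels v, c, b are illustrative).
\begin{align}
  v_{it} &= \mu^{v}_{t} + RI^{v}_{i} + w^{v}_{it}, \quad
  c_{it}  = \mu^{c}_{t} + RI^{c}_{i} + w^{c}_{it}, \quad
  b_{it}  = \mu^{b}_{t} + RI^{b}_{i} + w^{b}_{it} \\
  w^{b}_{it} &= \alpha\, w^{b}_{i,t-1} + \beta\, w^{v}_{i,t-1} + \gamma\, w^{c}_{i,t-1} + \varepsilon_{it}
\end{align}
```

Here the correlated random intercepts capture the stable between-person associations reported above, while β and γ correspond to the within-person cross-lagged effects of the perceived value of privacy and privacy concerns on later protection behavior.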

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors developed a taxonomy that provides a comprehensive set of privacy requirements based on four well-established personal data protection regulations and privacy frameworks: the General Data Protection Regulation (GDPR), ISO/IEC 29100, the Thailand Personal Data Protection Act (Thailand PDPA), and the Asia-Pacific Economic Cooperation (APEC) privacy framework.
Abstract: Digital and physical trails of user activities are collected through the use of software applications and systems. As software becomes ubiquitous, protecting user privacy has become challenging. With the increase in user privacy awareness and the advent of privacy regulations and policies, there is an emerging need to implement software systems that enhance the protection of personal data processing. However, existing data protection and privacy regulations state key principles only at a high level, making it difficult for software engineers to design and implement privacy-aware systems. In this paper, we develop a taxonomy that provides a comprehensive set of privacy requirements based on four well-established personal data protection regulations and privacy frameworks: the General Data Protection Regulation (GDPR), ISO/IEC 29100, the Thailand Personal Data Protection Act (Thailand PDPA), and the Asia-Pacific Economic Cooperation (APEC) privacy framework. These requirements are extracted, refined, and classified (using the goal-based requirements analysis method) to a level of detail that can be mapped to issue reports. We have also performed a study on how two large open-source software projects (Google Chrome and Moodle) address the privacy requirements in our taxonomy by mining their issue reports. The paper discusses how the collected issues were classified, and presents the findings and insights generated from our study. Mining and classifying privacy requirements in issue reports can help organisations be aware of their state of compliance by identifying privacy requirements that have not been addressed in their software projects. The taxonomy can also be traced back to the regulations, standards, and frameworks that the software projects have not complied with, based on the identified privacy requirements.
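
To make the idea of mapping issue reports onto privacy-requirement categories concrete, here is a deliberately naive keyword-matching sketch. The category names, keywords, and sample issues are assumptions invented for this example; the paper's taxonomy is larger and its classification of mined issues is more involved.

```python
# A naive keyword-matching sketch of mapping issue reports to privacy-requirement
# categories. Categories, keywords, and sample issues are illustrative assumptions.
TAXONOMY = {
    "consent":           {"consent", "opt-in", "opt out", "permission"},
    "data_deletion":     {"delete", "erasure", "right to be forgotten", "retention"},
    "data_minimisation": {"minimise", "minimize", "only collect", "unnecessary data"},
}

def classify(issue_text):
    text = issue_text.lower()
    return [req for req, keywords in TAXONOMY.items()
            if any(k in text for k in keywords)] or ["unclassified"]

issues = [
    "Users cannot delete their account data after closing the account",
    "Tracking cookie is set before the consent banner is answered",
]
for issue in issues:
    print(classify(issue), "-", issue)
```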

1 citation


Journal ArticleDOI
TL;DR: Wang et al. propose a novel privacy-preserving information sharing scheme for OSNs in which information flow can be controlled according to the privacy requirements of the information owner and the context of the information flow.

1 citation


Journal ArticleDOI
TL;DR: In this article, the main ethical, technical, and legal categories of privacy, which is much more than just data protection, are discussed, along with recommendations about how such technologies might mitigate privacy risks and in which cases the risks outweigh the benefits of the technology.
Abstract: What do you have to keep in mind when developing or using eye-tracking technologies regarding privacy? In this article we discuss the main ethical, technical, and legal categories of privacy, which is much more than just data protection. We additionally provide recommendations about how such technologies might mitigate privacy risks and in which cases the risks are higher than the benefits of the technology.

Proceedings ArticleDOI
01 Jan 2023
TL;DR: In this article, the authors analyze a crowd-sourced corpus of privacy questions collected from mobile app users to determine to what extent these mobile app labels actually address users' privacy concerns and questions.
Abstract: Inspired by earlier academic research, iOS app privacy labels and the recent Google Play data safety labels have been introduced as a way to systematically present users with concise summaries of an app’s data practices. Yet, little research has been conducted to determine how well today’s mobile app privacy labels address people’s actual privacy concerns or questions. We analyze a crowd-sourced corpus of privacy questions collected from mobile app users to determine to what extent these mobile app labels actually address users’ privacy concerns and questions. While there are differences between iOS labels and Google Play labels, our results indicate that a substantial percentage of people’s privacy questions are not answered or only partially addressed in today’s labels. Findings from this work not only shed light on the additional fields that would need to be included in mobile app privacy labels but can also help inform refinements to existing labels to better address users’ typical privacy questions.

Journal ArticleDOI
TL;DR: Wang et al. propose a privacy scoring framework for services that handle personal data, built on six standardized indicators; such summaries of privacy content in the form of privacy scores, labels, and similar formats have become increasingly common as a way to empower users' rights by providing understandable information about privacy.

Journal ArticleDOI
TL;DR: Wang et al. refute five common misconceptions about privacy-preserving IoT concerning data sensing and innovation, regulations, and privacy safeguards, and show that data privacy should not be perceived as an impediment in IoT but as an opportunity to increase customer retention and trust.
Abstract: Billions of devices in the Internet of Things (IoT) collect sensitive data about people, creating data privacy risks and breach vulnerabilities. Accordingly, data privacy preservation is vital for sustaining the proliferation of IoT services. In particular, privacy-preserving IoT connects devices embedded with sensors and maintains the data privacy of people. However, common misconceptions exist among IoT researchers, service providers, and users about privacy-preserving IoT. This article refutes five common misconceptions about privacy-preserving IoT concerning data sensing and innovation, regulations, and privacy safeguards. For example, IoT users have a common misconception that no data collection is permitted in data privacy regulations. On the other hand, IoT service providers often think data privacy impedes IoT sensing and innovation. Addressing these misconceptions is essential for making progress in privacy-preserving IoT. This article refutes such common misconceptions using real-world experiments and online survey research. First, the experiments indicate that data privacy should not be perceived as an impediment in IoT but as an opportunity to increase customer retention and trust. Second, privacy-preserving IoT is not exclusively a regulatory problem but also a functional necessity that must be incorporated in the early stages of any IoT design. Third, people do not trust services that lack sufficient privacy measures. Fourth, conventional data security principles do not guarantee data privacy protection, and data privacy can be exposed even if data is securely stored. Fifth, IoT decentralization does not attain absolute privacy preservation.


Journal ArticleDOI
TL;DR: In this paper, the authors analyze the impact of social media on traditional and contemporary notions of privacy, and discuss how the evolution of the web from 1.0 to 3.0 has influenced privacy trends and applications.
Abstract: This article analyzes the impact of social media on traditional and contemporary notions of privacy, and discusses how the evolution of the web from 1.0 to 3.0 has influenced privacy trends and applications. With the advent of Web 3.0, users are expected to have greater control and ownership over their digital assets and personal information. While this shift presents opportunities for increased data autonomy and value, it also raises concerns about potential privacy violations. The paper explores both positive and negative consequences of this changing privacy landscape, and highlights the need for privacy protection measures. Moreover, the authors suggest that privacy will continue to evolve in the future, with users potentially viewing privacy as a personal asset. The analysis draws on a range of scholarly sources to offer a nuanced and comprehensive perspective on this complex and rapidly evolving issue.

Book ChapterDOI
01 Jan 2023
TL;DR: Privacy is fundamentally separate from security and is important in its own right; unlike with security, we are often at odds with companies and government agencies when it comes to privacy.
Abstract: You can’t talk about security without also talking about privacy. The two are inextricably linked. At a basic level, security enables privacy—for example, strong encryption helps us keep our data private. But privacy is fundamentally separate from security and is important in its own right. And unlike with security, we are often at odds with companies and government agencies when it comes to privacy. For this reason and others, I feel that privacy is the most important topic of this book. In this chapter, I’ll explain why.

Posted ContentDOI
16 Apr 2023
TL;DR: In this paper, the authors propose a new differential privacy paradigm called estimate-verify-release (EVR), which addresses the challenge of providing a strict upper bound for the privacy parameter in differential privacy compositions by converting an estimate of the privacy parameter into a formal guarantee.
Abstract: Bounding privacy leakage over compositions, i.e., privacy accounting, is a key challenge in differential privacy (DP). However, the privacy parameter (ε or δ) is often easy to estimate but hard to bound. In this paper, we propose a new differential privacy paradigm called estimate-verify-release (EVR), which addresses the challenge of providing a strict upper bound for the privacy parameter in DP compositions by converting an estimate of the privacy parameter into a formal guarantee. The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets the estimated guarantee, and finally releases the query output based on the verification result. The core component of EVR is privacy verification. We develop a randomized privacy verifier using Monte Carlo (MC) techniques. Furthermore, we propose an MC-based DP accountant that outperforms existing DP accounting techniques in terms of accuracy and efficiency. Our empirical evaluation shows that the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
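
The paper's verifier and accountant handle compositions of mechanisms; purely to illustrate the Monte Carlo flavour of such privacy accounting, the sketch below estimates δ(ε) for a single Gaussian mechanism by sampling its privacy-loss random variable. The sensitivity, σ, ε, and sample count are assumptions for this toy example, not the paper's method.

```python
# Monte Carlo estimate of delta(eps) for one Gaussian mechanism, in the spirit
# of MC-based privacy verification. Sensitivity is fixed to 1; sigma, eps, and
# the sample count are illustrative assumptions, not the paper's verifier.
import numpy as np

def mc_delta_gaussian(eps, sigma, n_samples=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, sigma, n_samples)          # outputs under dataset D (mean 0)
    # Privacy-loss random variable vs the neighbouring dataset D' (mean 1, unit sensitivity):
    loss = (1.0 - 2.0 * y) / (2.0 * sigma ** 2)
    # delta(eps) = E_p[ max(0, 1 - exp(eps - L)) ]  (hockey-stick divergence)
    return np.mean(np.maximum(0.0, 1.0 - np.exp(eps - loss)))

print(mc_delta_gaussian(eps=1.0, sigma=1.0))
```

For ε = 1 and σ = 1 the exact value is about 0.127, so the Monte Carlo estimate can be checked directly in this toy case.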

Journal ArticleDOI
TL;DR: In this paper, the authors discuss five key points related to privacy in computer ethics: the concept of privacy and its significance in the context of computer ethics; ethical considerations surrounding personal information in the digital space, including issues of consent, transparency, and data protection; the legal framework surrounding privacy in different jurisdictions, such as data protection laws and international standards; the role of technology in protecting privacy, including the use of encryption and other security measures; and finally, the challenges associated with protecting privacy in the digital age.
Abstract: In this digital age, privacy has become a crucial issue due to the vast amount of personal information we share online. As a fundamental aspect of computer ethics, it concerns the appropriate use of information and communication technologies. This paper will discuss five key points related to privacy in computer ethics: the concept of privacy and its significance in the context of computer ethics; ethical considerations surrounding personal information in the digital space, including issues of consent, transparency, and data protection; the legal framework surrounding privacy in different jurisdictions, such as data protection laws and international standards; the role of technology in protecting privacy, including the use of encryption and other security measures; and finally, the challenges associated with protecting privacy in the digital age, such as the risk of data breaches, identity theft, and other forms of online exploitation. Through these five key points, this paper aims to provide a comprehensive understanding of privacy in computer ethics and emphasize the importance of promoting responsible and ethical use of technology.

Posted ContentDOI
29 Jun 2023
TL;DR: In this article, the authors applied the NLP framework of Polisis to extract features of the privacy policies of 515,920 apps on the iOS App Store, comparing the output to the privacy labels.
Abstract: Apple introduced privacy labels in Dec. 2020 as a way for developers to report the privacy behaviors of their apps. While Apple does not validate labels, they do also require developers to provide a privacy policy, which offers an important comparison point. In this paper, we applied the NLP framework of Polisis to extract features of the privacy policies of 515,920 apps on the iOS App Store, comparing the output to the privacy labels. We identify discrepancies between the policies and the labels, particularly as they relate to data collected that is linked to users. We find that the privacy policies of 287±196K apps may indicate more data collection linked to users than what is reported in the privacy labels. More alarming, a large fraction (97±30%) of the apps that have a Data Not Collected privacy label have a privacy policy that indicates otherwise. We provide insights into potential sources of discrepancies, including the use of templates and confusion around Apple's definitions and requirements. These results suggest that there is still significant work to be done to help developers label their apps more accurately. Incorporating a Polisis-like system as a first-order check can help improve the current state and better inform developers when there are possible misapplications of privacy labels.


Proceedings ArticleDOI
24 May 2023
TL;DR: Li et al. propose a differential privacy-based online DRL algorithm that adds Gaussian noise to the gradients of the deep network according to the privacy budget and trains an autoencoder to protect the raw environmental state data.
Abstract: Deep Reinforcement Learning (DRL) combines the perceptual capabilities of deep learning with the decision-making capabilities of reinforcement learning (RL), which can achieve enhanced decision-making. However, the environmental state data contains users' private information, so there is a potential risk of environmental state information being leaked during RL training. Some data desensitization and anonymization technologies are currently used to protect data privacy, but a risk of privacy disclosure may still remain with these desensitization techniques. Meanwhile, policymakers need the environmental state to make decisions, which causes the disclosure of raw environmental data. To address the privacy issues in DRL, we propose a differential privacy-based online DRL algorithm. The algorithm adds Gaussian noise to the gradients of the deep network according to the privacy budget. More importantly, we prove tighter bounds for the privacy budget. Furthermore, we train an autoencoder to protect the raw environmental state data. In this work, we prove the privacy budget formulation for differential privacy-based online deep RL. Experiments show that the proposed algorithm can improve privacy protection while still having relatively excellent decision-making performance.
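
The algorithm's tighter privacy-budget bounds and the state-protecting autoencoder are specific to the paper; the sketch below only shows the generic DP-SGD-style ingredient the abstract describes, namely clipping per-sample gradients and adding Gaussian noise before averaging. The clipping norm, noise multiplier, and toy gradients are assumptions made for illustration.

```python
# Generic DP-SGD-style step: clip per-sample gradients and add Gaussian noise
# before averaging. Clip norm, noise multiplier, and toy gradients are
# illustrative assumptions, not the paper's algorithm or its bounds.
import numpy as np

def dp_gradient_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))   # bound each sample's influence
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)                # noisy average gradient

rng = np.random.default_rng(0)
grads = [rng.normal(size=10) for _ in range(32)]                   # stand-in for policy-network gradients
print(dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```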


Journal ArticleDOI
TL;DR: Wang et al. propose a Bayesian-based location privacy inference attack method from an external adversary's point of view, together with an adaptive differential privacy-based dynamic incentive method to resist the internal privacy threat.
Abstract: Nowadays, location-based services (LBS) have become an important part of people’s daily life. Online taxi-hailing systems (DiDi, Uber, etc.) are among the most common LBS systems. As the scale of online taxi services continues to expand, some problems have gradually emerged. Specifically, the uncertainty of taxis and passengers makes it difficult to match them effectively, and passengers’ location privacy is threatened from both inside and outside the system. In this paper, we first propose a Bayesian-based location privacy inference attack method from an external adversary’s point of view. After that, an adaptive differential privacy-based dynamic incentive method is proposed. Firstly, an adaptive cloaking area division method is proposed to resist the internal privacy threat. Secondly, a dynamic incentive bidding method is proposed to deal with the trading issue. Thirdly, an exponential mechanism-based matching method is proposed to resist the external inference privacy threat. Further, theoretical proofs show that the proposed method satisfies the privacy properties of k-anonymity and 2μ1ε-differential privacy, and the economic properties of incentive compatibility and individual rationality. Finally, the experimental results show that the inference attack achieves a maximum attack success rate of 95% while ensuring accuracy within 150 meters, and that the proposed adaptive differential privacy-based dynamic incentive method not only provides good economic performance in terms of satisfaction ratio, social welfare, and travel distance, but also resists the internal privacy threat with less than 1% privacy leakage probability and reduces the success rate of external privacy inference attacks to 25%.
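
The paper's matching method combines several components (cloaking, bidding, and a differentially private selection step); as a stand-alone illustration of the exponential-mechanism-style selection the abstract alludes to, the snippet below picks a taxi for a ride request with probability proportional to exp(ε·score / 2Δ). The scores, sensitivity, and candidate taxis are illustrative assumptions, not the paper's design.

```python
# A minimal exponential-mechanism sketch of differentially private matching:
# a taxi is chosen with probability proportional to exp(eps * score / (2 * sensitivity)).
# Scores, sensitivity, and candidate taxis are illustrative assumptions.
import numpy as np

def exponential_mechanism(candidates, scores, eps, sensitivity, rng):
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(eps * (scores - scores.max()) / (2.0 * sensitivity))  # shift for numerical stability
    probs = weights / weights.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

rng = np.random.default_rng(0)
taxis = ["taxi_A", "taxi_B", "taxi_C"]
scores = [-1.2, -0.3, -2.5]            # e.g. negative distance to the passenger, in km
print(exponential_mechanism(taxis, scores, eps=1.0, sensitivity=1.0, rng=rng))
```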

Posted ContentDOI
19 Jun 2023
TL;DR: Wang et al. propose a framework that can automatically generate privacy nutrition labels from privacy policies, achieving a 0.75 F1-score on generating first-party data collection practices and an average F1-score of 0.93 on general security practices.
Abstract: Software applications have become an omnipresent part of modern society. The consequent privacy policies of these applications play a significant role in informing customers how their personal information is collected, stored, and used. However, customers rarely read and often fail to understand privacy policies because of the "Privacy Policy Reading Phobia" (PPRP). To tackle this emerging challenge, we propose the first framework that can automatically generate privacy nutrition labels from privacy policies. Based on our ground-truth applications drawn from the Google Play app store's Data Safety Report, our framework achieves a 0.75 F1-score on generating first-party data collection practices and an average F1-score of 0.93 on general security practices. We also analyse the inconsistencies between ground truth and curated privacy nutrition labels on the market, and our framework can detect 90.1% of under-claim issues. Our framework demonstrates decent generalizability across different privacy nutrition label formats, such as Google's Data Safety Report and Apple's App Privacy Details.

Posted ContentDOI
22 May 2023
TL;DR: In this article, a general framework for protecting privacy in the smart grid environment and measuring the efficacy of privacy attacks is proposed; however, the framework is not suitable for large-scale deployments.
Abstract: Generalized framework for protecting privacy in the smart grid environment and measuring the efficacy of privacy attacks.

Journal ArticleDOI
TL;DR: Wang et al. propose a fine-grained personalized differential privacy data publishing scheme (APDP) for social networks, which defines the privacy protection levels of different users based on their attribute values.
Abstract: In the Big Data era, the wide usage of mobile devices has led to large amounts of information being released and shared through social networks, where sensitive information of the data owners may be leaked. Traditional approaches that provide identical privacy protection levels for all users result in poor quality of service; thus, the concept of personalized privacy has been proposed in recent years. However, existing methods that add different noise to each user require both high real-time performance and high resource consumption. This paper presents a fine-grained personalized differential privacy data publishing scheme (APDP) for social networks. Specifically, we design a new mechanism that defines the privacy protection levels of different users based on their attribute values. In particular, we exploit the TOPSIS method to map the attribute values to the amount of noise to be added. Furthermore, to prevent illegal download of data, access control is integrated with differential privacy. Compared with traditional attribute-based encryption data publishing schemes, our scheme avoids the expensive computation overhead. Theoretical analyses and simulations show that APDP can realize efficient personalized differential privacy data publishing with reasonable data utility.
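
APDP's TOPSIS-based mapping from attribute values to noise is not spelled out in the abstract; as a bare-bones illustration of the underlying idea of personalized differential privacy, the sketch below maps a per-user sensitivity score to an ε and adds Laplace noise with scale Δ/ε before publishing. The score-to-ε mapping, ε range, and sample users are assumptions made for this example.

```python
# A bare-bones sketch of personalised differential privacy: each user's
# attribute-derived score is mapped to an epsilon, and Laplace noise with scale
# sensitivity/epsilon is added before publishing. The mapping and sample users
# are illustrative assumptions (the paper derives the mapping with TOPSIS).
import numpy as np

def personalised_epsilon(score, eps_min=0.1, eps_max=1.0):
    # score in [0, 1]: 1 = highly sensitive profile -> small epsilon (more noise)
    return eps_max - score * (eps_max - eps_min)

def publish(value, score, sensitivity, rng):
    eps = personalised_epsilon(score)
    return value + rng.laplace(0.0, sensitivity / eps)

rng = np.random.default_rng(0)
users = [("alice", 42.0, 0.9), ("bob", 17.0, 0.2)]   # (name, value, sensitivity score)
for name, value, score in users:
    print(name, round(publish(value, score, sensitivity=1.0, rng=rng), 2))
```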

Posted ContentDOI
05 Jun 2023
TL;DR: OptimShare combines the principles of differential privacy, fuzzy logic, and probability theory into an integrated tool for privacy-preserving data sharing; it is a utility-focused, multi-criteria solution designed to selectively perturb input datasets, optimized for specific real-world applications.
Abstract: Tabular data sharing serves as a common method for data exchange. However, sharing sensitive information without adequate privacy protection can compromise individual privacy. Thus, ensuring privacy-preserving data sharing is crucial. Differential privacy (DP) is regarded as the gold standard in data privacy. Despite this, current DP methods tend to generate privacy-preserving tabular datasets that often suffer from limited practical utility due to heavy perturbation and disregard for the tables' utility dynamics. Besides, there has not been much research on selective attribute release, particularly in the context of controlled partially perturbed data sharing. This has significant implications for scenarios such as cross-agency data sharing in real-world situations. We introduce OptimShare: a utility-focused, multi-criteria solution designed to perturb input datasets selectively optimized for specific real-world applications. OptimShare combines the principles of differential privacy, fuzzy logic, and probability theory to establish an integrated tool for privacy-preserving data sharing. Empirical assessments confirm that OptimShare successfully strikes a balance between better data utility and robust privacy, effectively serving various real-world problem scenarios.

Book ChapterDOI
01 Jan 2023
TL;DR: Wang et al. design a personal data collection and privacy legal protection platform, analyze the reasons for privacy leakage, and establish an M-diversity anonymity model suitable for protecting personal privacy information and data.
Abstract: In recent years, with the advent of the data age, the collection, release, and analysis of massive data have become more convenient, and information sharing has become more common. At the same time, this brings threats to private information. How to effectively solve the potential privacy issues in the process of data release is our research direction. Anonymity technology is currently the main technology used in privacy protection. The main work of this article is to design a personal data collection and privacy legal protection platform, analyze the reasons for privacy leakage, and establish an M-diversity anonymity model suitable for protecting personal privacy information and data. In-depth research has been conducted on common protection techniques. In addition, performance tests of the designed privacy protection platform are carried out, and the number of platform functions that leak the privacy of affected users is analyzed. The experimental results show that the M-diversity anonymity model is suitable for the protection of personal privacy information and data and can enhance the protection of personal privacy.
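
The abstract does not define the M-diversity model in detail; it appears to be a diversity-style refinement of k-anonymity, so as a simple illustration of that family of checks, the snippet below verifies distinct l-diversity: every quasi-identifier equivalence class in a released table must contain at least l distinct sensitive values. The sample table, quasi-identifier columns, and l are assumptions made for this example.

```python
# A simple distinct l-diversity check, illustrative of the diversity-style
# anonymity checks such a platform relies on. The sample table, quasi-identifier
# columns, and l are assumptions made for this example.
from collections import defaultdict

def is_l_diverse(rows, quasi_identifiers, sensitive, l):
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[q] for q in quasi_identifiers)
        groups[key].add(row[sensitive])
    return all(len(values) >= l for values in groups.values())

released = [
    {"age": "20-30", "zip": "476**", "disease": "flu"},
    {"age": "20-30", "zip": "476**", "disease": "asthma"},
    {"age": "30-40", "zip": "479**", "disease": "flu"},
    {"age": "30-40", "zip": "479**", "disease": "flu"},
]
print(is_l_diverse(released, ["age", "zip"], "disease", l=2))   # False: the second class has one value
```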

Journal ArticleDOI
TL;DR: In this article, the authors present a review of state-of-the-art methods for preserving privacy in cyber-physical deployments without compromising their performance in terms of quality of service.
Abstract: The difficulty of privacy protection in cyber-physical installations encompasses several sectors and calls for methods like encryption, hashing, secure routing, obfuscation, and data exchange, among others. To create a privacy preservation model for cyber-physical deployments, it is advised that data privacy, location privacy, temporal privacy, node privacy, route privacy, and other types of privacy be taken into account. The computationally challenging process of incorporating these models into any wireless network also affects quality of service (QoS) variables including end-to-end latency, throughput, energy use, and packet delivery ratio. Network designers must therefore choose privacy models that have the least negative influence on these quality-of-service characteristics. To date, designers have relied on common privacy models to protect cyber-physical infrastructure, without taking into account the limitations of these installations' interconnection and interface-ability. As a result, even while network security has increased, the network's overall quality of service has dropped. This research examines and analyzes the many state-of-the-art methods for preserving privacy in cyber-physical deployments without compromising their performance in terms of quality of service; the aim of this investigation and review is to lower the likelihood that such circumstances arise. These models are rated according to how much privacy they provide, how long it takes from start to finish to transfer data, how much energy they use, and how fast their networks are. The comparison will assist network designers and researchers in selecting the optimal models for their particular deployments, maximizing privacy while maintaining a high degree of service performance. Additionally, the author offers a variety of tactics that, when used together, might improve each reader's performance. This study also provides a range of tried-and-true machine learning approaches that networks may take into account and examine in order to enhance their privacy performance.