Author

Traian Marius Truta

Bio: Traian Marius Truta is an academic researcher from Northern Kentucky University. The author has contributed to research in the topics of information privacy and k-anonymity, has an h-index of 16, and has co-authored 46 publications receiving 1,249 citations. Previous affiliations of Traian Marius Truta include Saint Petersburg State University and Wayne State University.

Papers
Proceedings ArticleDOI
03 Apr 2006
TL;DR: Two necessary conditions to achieve the p-sensitive k-anonymity property are presented and used in developing algorithms to create masked microdata with the p-sensitive k-anonymity property using generalization and suppression.
Abstract: In this paper, we introduce a new privacy protection property called p-sensitive k-anonymity. The existing k-anonymity property protects against identity disclosure, but it fails to protect against attribute disclosure. The newly introduced privacy model avoids this shortcoming. Two necessary conditions to achieve the p-sensitive k-anonymity property are presented and used in developing algorithms to create masked microdata with the p-sensitive k-anonymity property using generalization and suppression.
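The property itself is straightforward to check: every group of records sharing the same quasi-identifier values must contain at least k records (k-anonymity) and at least p distinct sensitive values (p-sensitivity). A minimal Python sketch of such a check follows; it is an illustration only, not the paper's masking algorithms, and the table and attribute names are hypothetical.

```python
from collections import defaultdict

def satisfies_p_sensitive_k_anonymity(records, qi_attrs, sensitive_attr, k, p):
    """Check whether a table (list of dicts) satisfies p-sensitive k-anonymity.

    Each QI-cluster (records sharing the same quasi-identifier values) must
    contain at least k records and at least p distinct sensitive values.
    """
    clusters = defaultdict(list)
    for r in records:
        key = tuple(r[a] for a in qi_attrs)
        clusters[key].append(r)
    for group in clusters.values():
        if len(group) < k:
            return False  # k-anonymity violated
        if len({r[sensitive_attr] for r in group}) < p:
            return False  # p-sensitivity violated
    return True

# Hypothetical usage: 2-sensitive 3-anonymity on a toy masked table.
table = [
    {"zip": "410**", "age": "30-39", "disease": "flu"},
    {"zip": "410**", "age": "30-39", "disease": "colitis"},
    {"zip": "410**", "age": "30-39", "disease": "flu"},
]
print(satisfies_p_sensitive_k_anonymity(table, ["zip", "age"], "disease", k=3, p=2))  # True
```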

342 citations

01 Jan 2008
TL;DR: The development of a greedy privacy algorithm for anonymizing a social network and the introduction of a structural information loss measure that quantifies the amount of information lost due to edge generalization in the anonymization process are introduced.
Abstract: The advent of social network sites in the last few years seems to be a trend that will likely continue in the years to come. Online social interaction has become very popular around the globe and most sociologists agree that this will not fade away. Such a development is possible due to the advancements in computer power, technologies, and the spread of the World Wide Web. What many naive technology users may not always realize is that the information they provide online is stored in massive data repositories and may be used for various purposes. Researchers have pointed out for some time the privacy implications of massive data gathering, and a lot of effort has been made to protect the data from unauthorized disclosure. However, most of the data privacy research has been focused on more traditional data models such as microdata (data stored as one relational table, where each row represents an individual entity). More recently, social network data has begun to be analyzed from a different, specific privacy perspective. Since the individual entities in social networks, besides the attribute values that characterize them, also have relationships with other entities, the possibility of privacy breaches increases. Our main contributions in this paper are the development of a greedy privacy algorithm for anonymizing a social network and the introduction of a structural information loss measure that quantifies the amount of information lost due to edge generalization in the anonymization process.
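To make the clustering idea concrete, here is a heavily simplified Python sketch of greedy social-network anonymization: nodes are grouped into clusters of at least k members, preferring candidates that share neighbors with the growing cluster. This illustrates the general approach only; it is not the paper's algorithm, and the structural distance used is an assumption.

```python
import networkx as nx

def greedy_k_clustering(graph, k):
    """Greedily partition nodes into clusters of size >= k.

    Simplified sketch: pick an unassigned seed, then repeatedly absorb the
    unassigned node whose neighborhood overlaps the cluster's the most
    (a crude structural-similarity criterion).
    """
    unassigned = set(graph.nodes())
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster = {seed}
        while len(cluster) < k and unassigned:
            # Prefer the node sharing the most neighbors with the cluster.
            cluster_nbrs = set().union(*(set(graph[n]) for n in cluster))
            best = max(unassigned, key=lambda v: len(set(graph[v]) & cluster_nbrs))
            unassigned.remove(best)
            cluster.add(best)
        clusters.append(cluster)
    # Merge a leftover undersized cluster into the previous one.
    if len(clusters) > 1 and len(clusters[-1]) < k:
        clusters[-2] |= clusters.pop()
    return clusters
```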

185 citations

Book ChapterDOI
13 May 2009
TL;DR: A greedy algorithm for anonymizing a social network and a measure that quantifies the information loss in the anonymization process due to edge generalization are presented.
Abstract: The advent of social network sites in recent years seems to be a trend that will likely continue. What naive technology users may not realize is that the information they provide online is stored and may be used for various purposes. Researchers have pointed out for some time the privacy implications of massive data gathering, and effort has been made to protect the data from unauthorized disclosure. However, the data privacy research has mostly targeted traditional data models such as microdata. Recently, social network data has begun to be analyzed from a specific privacy perspective, one that considers, besides the attribute values that characterize the individual entities in the networks, their relationships with other entities. Our main contributions in this paper are a greedy algorithm for anonymizing a social network and a measure that quantifies the information loss in the anonymization process due to edge generalization.
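The information loss measure can also be sketched. One plausible formulation (an assumption; the paper's exact definition and normalization may differ) summarizes each generalized cluster by its node and intra-cluster edge counts, and scores the expected number of node pairs that would be reconstructed incorrectly from those counts alone.

```python
from itertools import combinations

def structural_information_loss(graph, clusters):
    """Expected edge-reconstruction error after edge generalization.

    Each cluster is summarized by (node count, intra-cluster edge count) and
    each cluster pair by its inter-cluster edge count. Guessing edges
    uniformly from those counts gets any given pair wrong with probability
    2*p*(1-p), where p is the edge density of that pair class. A plausible
    formulation only; the paper's measure may normalize differently.
    """
    def pair_loss(edges, pairs):
        if pairs == 0:
            return 0.0
        p = edges / pairs
        return pairs * 2.0 * p * (1.0 - p)

    loss = 0.0
    for cl in clusters:
        n = len(cl)
        m = sum(1 for u, v in combinations(cl, 2) if graph.has_edge(u, v))
        loss += pair_loss(m, n * (n - 1) // 2)
    for c1, c2 in combinations(clusters, 2):
        m = sum(1 for u in c1 for v in c2 if graph.has_edge(u, v))
        loss += pair_loss(m, len(c1) * len(c2))
    return loss
```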

177 citations

Journal ArticleDOI
TL;DR: Two new privacy protection models are proposed, called (p, α)-sensitive k-anonymity and (p+, α)-sensitive k-anonymity, respectively, which allow the release of considerably more information without compromising privacy.
Abstract: Publishing data for analysis from a microdata table containing sensitive attributes, while maintaining individual privacy, is a problem of increasing significance today. The k-anonymity model was proposed for privacy-preserving data publication. While focusing on identity disclosure, the k-anonymity model fails to protect against attribute disclosure to some extent. Many efforts have recently been made to enhance the k-anonymity model. In this paper, we propose two new privacy protection models called (p, α)-sensitive k-anonymity and (p+, α)-sensitive k-anonymity, respectively. Unlike the previous p-sensitive k-anonymity model, these newly introduced models allow the release of considerably more information without compromising privacy. Moreover, we prove that the (p, α)-sensitive and (p+, α)-sensitive k-anonymity problems are NP-hard. We also include testing and heuristic algorithms to generate the desired microdata table. Experimental results show that the introduced models can significantly reduce privacy breaches.
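As an illustration of the shape of such group-level conditions, the sketch below requires each QI-group to hold at least k records, at least p distinct sensitive values, and no single sensitive value exceeding an α fraction of the group. This is one plausible reading for illustration only; the paper's precise (p, α) and (p+, α) semantics differ in detail.

```python
from collections import Counter, defaultdict

def satisfies_p_alpha_check(records, qi_attrs, sensitive_attr, k, p, alpha):
    """Illustrative group-level check in the style of (p, alpha)-sensitive
    k-anonymity: every QI-group needs >= k records, >= p distinct sensitive
    values, and no sensitive value covering more than an alpha fraction of
    the group. The paper's exact semantics may differ.
    """
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[a] for a in qi_attrs)].append(r[sensitive_attr])
    for values in groups.values():
        counts = Counter(values)
        if len(values) < k or len(counts) < p:
            return False
        if max(counts.values()) / len(values) > alpha:
            return False
    return True
```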

72 citations

Proceedings ArticleDOI
01 Dec 2017
TL;DR: How fake news spreads in current online social networks is presented, and how existing social network technologies such as influence maximization, information diffusion, and epidemiological models contribute to fake news creation and spreading is discussed.
Abstract: In this paper we present how fake news spreads in current online social networks. We discuss how existing social network technologies such as influence maximization, information diffusion, and epidemiological models contribute to fake news creation and spreading. Solutions for reducing the creation and spreading of fake news are also reviewed. We make recommendations regarding future areas of research in this field.
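A toy epidemiological simulation shows how such models describe rumor spread on a social graph. The SIR-style dynamics below are a generic sketch of the class of models discussed, not a model taken from the paper; all parameters are illustrative.

```python
import random
import networkx as nx

def simulate_rumor_spread(graph, seeds, p_spread, p_recover, steps=50, rng=None):
    """Toy SIR-style cascade of a rumor on a social graph.

    'Infected' nodes (believers) pass the rumor to each susceptible neighbor
    with probability p_spread per step, and stop spreading (recover) with
    probability p_recover per step.
    """
    rng = rng or random.Random(0)
    infected, recovered = set(seeds), set()
    for _ in range(steps):
        if not infected:
            break
        newly_infected = set()
        for u in infected:
            for v in graph[u]:
                if v not in infected and v not in recovered and rng.random() < p_spread:
                    newly_infected.add(v)
        newly_recovered = {u for u in infected if rng.random() < p_recover}
        infected = (infected | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return infected, recovered

# Example: seed a rumor at the highest-degree node of a small-world graph.
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=42)
seed = max(G.nodes(), key=G.degree)
believers, stopped = simulate_rumor_spread(G, {seed}, p_spread=0.05, p_recover=0.1)
print(f"{len(believers) + len(stopped)} of {G.number_of_nodes()} nodes reached")
```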

62 citations


Cited by

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
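For a concrete starting point, the classical G(n, p) model places each of the possible edges independently with probability p. A brief sketch using networkx, with illustrative parameters:

```python
import networkx as nx

# G(n, p): each of the C(n, 2) possible edges appears independently with
# probability p -- the classical Erdos-Renyi random graph model.
n, p = 1000, 0.01
G = nx.gnp_random_graph(n, p, seed=1)

# Around p = ln(n)/n the graph becomes connected with high probability;
# at p = 1/n a giant component emerges. Quick empirical check:
giant = max(nx.connected_components(G), key=len)
print(len(giant), "nodes in the largest component of", n)
```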

7,116 citations

Proceedings ArticleDOI
15 Apr 2007
TL;DR: t-closeness, as proposed in this paper, requires that the distribution of a sensitive attribute in any equivalence class be close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t).
Abstract: The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the Earth Mover's distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.
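Operationally, checking t-closeness means computing, for each equivalence class, the distance between the class's sensitive-value distribution and the table-wide one, then comparing the maximum to t. A sketch for a numeric sensitive attribute using SciPy's 1-D Earth Mover's distance; note that the paper normalizes distances to [0, 1], which this sketch omits, and the column names are placeholders.

```python
import pandas as pd
from scipy.stats import wasserstein_distance

def max_class_distance(df, qi_attrs, sensitive_attr):
    """Largest Earth Mover's distance between any equivalence class's
    distribution of a numeric sensitive attribute and the distribution in
    the whole table. t-closeness holds iff this maximum is <= t.
    """
    overall = df[sensitive_attr].to_numpy()
    return max(
        wasserstein_distance(group[sensitive_attr].to_numpy(), overall)
        for _, group in df.groupby(qi_attrs)
    )

# Hypothetical usage: the table satisfies t-closeness when
#   max_class_distance(df, ["zip", "age_band"], "salary") <= t
```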

3,281 citations

Journal ArticleDOI
TL;DR: This survey will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish P PDP from other related problems, and propose future research directions.
Abstract: The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.

1,669 citations

Proceedings ArticleDOI
17 May 2009
TL;DR: A framework for analyzing privacy and anonymity in social networks is presented and a new re-identification algorithm targeting anonymized social-network graphs is developed, showing that a third of the users who can be verified to have accounts on both Twitter and Flickr can be re-identified in the anonymous Twitter graph.
Abstract: Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
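The seed-and-extend idea behind such attacks can be sketched compactly: start from a few known correspondences between the anonymized graph and an auxiliary graph, then repeatedly map the pair of nodes whose already-mapped neighborhoods agree best. The sketch below is a grossly simplified illustration of topology-only re-identification, not the authors' algorithm (which additionally handles noise, eccentricity checks, and reverse matching).

```python
import networkx as nx

def propagate_mapping(g_anon, g_aux, seed_map, min_overlap=2):
    """Simplified seed-and-extend re-identification.

    Starting from known seed correspondences (anon node -> aux node), map
    each remaining anonymous node to the unmapped auxiliary node sharing
    the most already-mapped neighbors with it.
    """
    mapping = dict(seed_map)
    mapped_aux = set(mapping.values())
    progress = True
    while progress:
        progress = False
        for u in g_anon.nodes():
            if u in mapping:
                continue
            # Aux-side fingerprint: images of u's already-mapped neighbors.
            fingerprint = {mapping[n] for n in g_anon[u] if n in mapping}
            if len(fingerprint) < min_overlap:
                continue
            best, best_score = None, min_overlap - 1
            for v in g_aux.nodes():
                if v in mapped_aux:
                    continue
                score = len(fingerprint & set(g_aux[v]))
                if score > best_score:
                    best, best_score = v, score
            if best is not None:
                mapping[u] = best
                mapped_aux.add(best)
                progress = True
    return mapping
```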

1,360 citations