
Showing papers by Claudio Bettini published in 2012


Proceedings ArticleDOI
19 Mar 2012
TL;DR: This paper reports an initial investigation of using differential privacy methods to extract statistics about users' preferences for POIs, and presents a high-level architecture to apply these methods.
Abstract: Several context-aware mobile recommender systems have recently been proposed to suggest points of interest (POIs). Ideally, a user of these systems should not be allowed to know the preferred POIs of another user, since they reveal sensitive information such as political opinions, religious beliefs, or sexual orientation. Unfortunately, existing POI recommender systems do not provide any formal guarantee of privacy. In this paper, we report an initial investigation of this challenging research issue. We propose the use of differential privacy methods to extract statistics about users' preferences for POIs. Actual recommendations are generated by querying those statistics, in order to formally enforce privacy. We also present a high-level architecture to apply our methods.
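
As a rough illustration of the approach sketched in the abstract (not the authors' actual architecture), the Python sketch below applies the standard Laplace mechanism to per-POI visit counts and answers recommendation queries only against the noisy statistics. The function names, the one-visit-per-user sensitivity assumption, and the default budget are hypothetical.

```python
import numpy as np

def private_poi_counts(visits, epsilon=1.0):
    """Release differentially private visit counts per POI.

    visits:  list of POI identifiers, one per user (assuming each user
             contributes at most one visit, so the sensitivity is 1).
    epsilon: privacy budget; smaller values mean stronger privacy.
    """
    counts = {}
    for poi in visits:
        counts[poi] = counts.get(poi, 0) + 1
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return {poi: c + np.random.laplace(scale=1.0 / epsilon)
            for poi, c in counts.items()}

def recommend(noisy_counts, k=3):
    """Recommend the k POIs with the highest noisy counts."""
    return sorted(noisy_counts, key=noisy_counts.get, reverse=True)[:k]

# Recommendations are generated only from the noisy statistics,
# never from the raw per-user preferences.
noisy = private_poi_counts(["museum", "museum", "cafe", "park", "cafe"])
print(recommend(noisy, k=2))
```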

28 citations


Journal ArticleDOI
TL;DR: It is shown that correlations among sensitive values associated with the same individuals in different releases can easily be used to violate users' privacy by adversaries observing multiple data releases, even if state-of-the-art privacy protection techniques are applied.
Abstract: Web queries, credit card transactions, and medical records are examples of transaction data flowing into corporate data stores and often revealing associations between individuals and sensitive information. The serial release of these data to partner institutions or data analysis centers in a nonaggregated form is a common situation. In this paper, we show that correlations among sensitive values associated with the same individuals in different releases can easily be used to violate users' privacy by adversaries observing multiple data releases, even if state-of-the-art privacy protection techniques are applied. We show how this sequential background knowledge can actually be obtained by an adversary and used to identify, with high confidence, the sensitive values of an individual. Our proposed defense algorithm is based on Jensen-Shannon divergence; experiments show its superiority over other applicable solutions. To the best of our knowledge, this is the first work that systematically investigates the role of sequential background knowledge in the serial release of transaction data.
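
The abstract states that the defense is based on Jensen-Shannon divergence; the minimal sketch below only shows how that divergence is computed between two discrete distributions of sensitive values, not the paper's full defense algorithm. The example distributions are invented.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical distributions over an individual's sensitive values as
# inferred from two consecutive releases: a high divergence flags the
# new release as giving an observing adversary substantial information.
previous_release = [0.50, 0.30, 0.20]
candidate_release = [0.10, 0.70, 0.20]
print(js_divergence(previous_release, candidate_release))
```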

28 citations


Proceedings ArticleDOI
25 Mar 2012
TL;DR: This paper proposes a novel obfuscation technique for network flows that provides formal guarantees under realistic assumptions about the adversary's knowledge and preserves the utility of network flows for network traffic analysis.
Abstract: In the last decade, the release of network flows has gained significant popularity among researchers and networking communities. Indeed, network flows are a fundamental tool for modeling the network behavior, identifying security attacks, and validating research results. Unfortunately, due to the sensitive nature of network flows, security and privacy concerns discourage the publication of such datasets. On the one hand, existing techniques proposed to sanitize network flows do not provide any formal guarantees. On the other hand, microdata anonymization techniques are not directly applicable to network flows. In this paper, we propose a novel obfuscation technique for network flows that provides formal guarantees under realistic assumptions about the adversary's knowledge. Our work is supported by extensive experiments with a large set of real network flows collected at an important Italian Tier II Autonomous System, hosting sensitive government and corporate sites. Experimental results show that our obfuscation technique preserves the utility of network flows for network traffic analysis.
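
The abstract does not describe the transformation itself. Purely to make the object of study concrete, the sketch below shows the kind of record-level sanitization commonly applied to NetFlow-like records (keyed pseudonyms for endpoints, coarsened ports and byte counts). It is a generic illustration, not the authors' obfuscation technique, and on its own it carries none of the formal guarantees the paper provides; all field and function names are assumptions.

```python
from dataclasses import dataclass, replace
import hashlib

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    n_bytes: int

def pseudonym(ip: str, key: str) -> str:
    """Replace an IP address with a keyed pseudonym."""
    return hashlib.sha256((key + ip).encode()).hexdigest()[:12]

def sanitize(flow: Flow, key: str) -> Flow:
    """Pseudonymize endpoints and coarsen quasi-identifying fields."""
    return replace(
        flow,
        src_ip=pseudonym(flow.src_ip, key),
        dst_ip=pseudonym(flow.dst_ip, key),
        src_port=0 if flow.src_port >= 1024 else flow.src_port,  # drop ephemeral ports
        n_bytes=(flow.n_bytes // 1024) * 1024,                   # bucket byte counts
    )

print(sanitize(Flow("192.0.2.10", "203.0.113.5", 51514, 443, 3170), key="secret"))
```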

23 citations


Proceedings Article
15 Jul 2012
TL;DR: This paper reports an initial investigation of the application of probabilistic description logics, based on Log-linear DLs, to a framework for the recognition of multilevel activities in intelligent environments; the authors consider this approach very promising.
Abstract: A major challenge of pervasive context-aware computing and intelligent environments resides in the acquisition and modelling of rich and heterogeneous context data. Decisive aspects of this information are the ongoing human activities at different degrees of granularity. We conjecture that ontology-based activity models are key to supporting interoperable multilevel activity representation and recognition. In this paper, we report on an initial investigation of the application of probabilistic description logics (DLs) to a framework for the recognition of multilevel activities in intelligent environments. In particular, by building on Log-linear DLs, our approach combines highly expressive description logics with probabilistic reasoning in one unified framework. While we believe that this approach is very promising, our preliminary investigation suggests that challenging research issues remain open, including extensive support for temporal reasoning and optimizations to reduce the computational cost.
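
The abstract names Log-linear DLs but gives no formalism. As a toy illustration only, the sketch below scores candidate high-level activities with a log-linear model in which weighted features stand in for weighted ontology axioms; a real Log-linear DL reasoner also enforces logical consistency of the axioms, which this sketch omits. All weights, event names, and activity names are invented.

```python
import math

# Toy log-linear scoring of candidate high-level activities given
# observed low-level events. Each (event, activity) weight plays the
# role of a weighted axiom linking evidence to an activity class.
WEIGHTS = {
    ("kettle_on", "PreparingTea"): 2.0,
    ("fridge_open", "PreparingTea"): 0.5,
    ("fridge_open", "PreparingMeal"): 1.5,
    ("stove_on", "PreparingMeal"): 2.5,
}

def activity_distribution(observed_events, activities):
    """Normalized exp(sum of weights) over the candidate activities."""
    scores = {a: math.exp(sum(WEIGHTS.get((e, a), 0.0) for e in observed_events))
              for a in activities}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

print(activity_distribution({"kettle_on", "fridge_open"},
                            ["PreparingTea", "PreparingMeal"]))
```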

19 citations


Journal ArticleDOI
01 Oct 2012
TL;DR: The model supports the representation of complex derivation processes, integrity verification, and a shared ontology to facilitate interoperability; it also deals with uncertainty and takes into account temporal aspects related to the quality of data.
Abstract: Ambient intelligence systems would benefit from the possibility of assessing the quality and reliability of context information based on its derivation history, named provenance. While various provenance frameworks have been proposed in data management, context data have some peculiar features that call for specific support. However, at the time of writing, no provenance model specifically targeted at context data has been proposed. In this paper, we report an initial investigation of this challenging research issue by proposing a provenance model for data acquired and processed in ambient intelligence systems. Our model supports the representation of complex derivation processes, integrity verification, and a shared ontology to facilitate interoperability. The model also deals with uncertainty and takes into account temporal aspects related to the quality of data. We experimentally show the impact of the provenance model in terms of increased dependability of a sensor-based smart-home infrastructure. We also conducted experiments to evaluate the communication and computational overhead introduced to support our provenance model, using sensors and mobile devices currently available on the market.
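
To make the ingredients listed in the abstract (derivation history, integrity verification, uncertainty, temporal aspects) concrete, here is a minimal, hypothetical data structure; the field names and the hash-based integrity check are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass
from typing import List
import hashlib, json, time

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance record attached to a derived context datum."""
    value: float           # e.g. an estimated room temperature
    source_ids: List[str]  # sensors or records it was derived from
    process: str           # derivation process applied (e.g. "moving_average")
    timestamp: float       # time of acquisition/derivation
    confidence: float      # uncertainty of the derived value, in [0, 1]
    digest: str = ""       # integrity check over the fields above

    def seal(self) -> "ProvenanceRecord":
        """Compute a digest so later tampering with the record is detectable."""
        payload = json.dumps([self.value, self.source_ids, self.process,
                              self.timestamp, self.confidence])
        self.digest = hashlib.sha256(payload.encode()).hexdigest()
        return self

rec = ProvenanceRecord(21.4, ["temp-sensor-3", "temp-sensor-7"],
                       "moving_average", time.time(), 0.9).seal()
print(rec.digest[:16])
```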

10 citations


Proceedings ArticleDOI
06 Nov 2012
TL;DR: This paper describes a location privacy attack that, using only partial information about the distances between users and public knowledge of the average population density, can discover the approximate position of users on a map, independently of the fake or hidden position assigned to them by a privacy-preserving algorithm.
Abstract: Proximity services alert users about the presence of other users or moving objects based on their distance. Distance-preserving transformations are among the techniques that may be used to avoid revealing the actual position of users while still effectively providing these services. Some of the proposed transformations have been shown to actually guarantee location privacy under the assumption that users are uniformly distributed in the considered geographical region, which is an unrealistic assumption when the region extends to a county, a state, or a country. In this paper, we describe a location privacy attack that, using only partial information about the distances between users and public knowledge of the average population density, can discover the approximate position of users on a map, independently of the fake or hidden position assigned to them by a privacy-preserving algorithm. We implement this attack with an algorithm and experimentally evaluate it, showing that it is practically feasible and that partial distance information such as that exchanged in common friend-finder services can be sufficient to violate users' privacy.
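
To give a sense of how distance information alone can localize a user, here is a minimal sketch (not the authors' attack, which additionally exploits population-density knowledge and copes with partial distances): a grid search over candidate positions that minimizes the squared residuals against distances leaked by a friend-finder style service. All coordinates and parameters are invented.

```python
import itertools

def locate(anchors, distances, grid_step=0.5, extent=20.0):
    """Grid search for the point best matching the reported distances.

    anchors:   approximate (x, y) positions of some users.
    distances: distance from the target to each anchor, as leaked by a
               proximity service.
    Returns the grid point minimizing the squared distance residuals.
    """
    best, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    for i, j in itertools.product(range(steps), repeat=2):
        x, y = i * grid_step, j * grid_step
        err = sum((((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 - d) ** 2
                  for (ax, ay), d in zip(anchors, distances))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# The target is actually at (7, 4); only distances to three roughly
# known users leak, yet the search recovers the position.
print(locate([(0, 0), (10, 0), (0, 10)], [8.06, 5.0, 9.22]))
```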

6 citations