Author

Camilla Bistolfi

Bio: Camilla Bistolfi is an academic researcher. The author has contributed to research on the topics of the Data Protection Act 1998 and the General Data Protection Regulation. The author has an h-index of 1 and has co-authored 3 publications receiving 28 citations.

Papers
Journal ArticleDOI
TL;DR: This paper argues that pseudonymization can be used both to reduce the risks of re-identification and to help data controllers and processors respect their personal data protection obligations by keeping control over their activities.

38 citations
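The technique this paper centers on, pseudonymization with the re-identification key held separately, can be illustrated with a short sketch. The code below is not the authors' implementation; the HMAC construction, key, and field names are assumptions chosen for illustration.

```python
import hmac
import hashlib

# Illustrative sketch of keyed pseudonymization. The secret key plays the
# role of the "additional information" that GDPR Art. 4(5) requires to be
# kept separately: without it, pseudonyms cannot be reversed or re-linked.
SECRET_KEY = b"stored-separately-under-access-control"  # hypothetical key

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Derive a stable pseudonym from a direct identifier via HMAC-SHA256."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "subject@example.com", "visits": 12}
safe = {"subject_id": pseudonymize(record["email"]), "visits": record["visits"]}
print(safe)  # the email is replaced by a keyed pseudonym
```

Because the same identifier always maps to the same pseudonym, the data stay linkable for analysis, while the re-identification risk is concentrated in the separately stored key.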

Proceedings ArticleDOI
23 Mar 2016
TL;DR: Without standards that lead to interoperability, the right to data portability is destined to remain more a declaration of principle than a real and effective tool for individual self-determination in the digital environment.
Abstract: The General Data Protection Regulation introduces a brand-new right to data portability that should enable the data subject "to transmit those data to another controller without hindrance from the controller to which the data have been provided". This brief paper sets out the legal and technical limitations of this new right: without standards that lead to interoperability, the right to data portability is destined to remain more a declaration of principle than a real and effective tool for individual self-determination in the digital environment.

1 citation
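The interoperability gap the paper identifies is easy to make concrete: GDPR Article 20 asks only for a structured, commonly used, machine-readable format, not a shared schema. The sketch below, with hypothetical field names, is one perfectly compliant JSON export that another controller may nonetheless be unable to interpret.

```python
import json
from datetime import datetime, timezone

# Hypothetical data portability export. The format is structured and
# machine-readable, yet its schema is ad hoc: a receiving controller has
# no standard telling it what "records" or "action" mean.
subject_data = {
    "subject": "subject@example.com",
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "records": [
        {"type": "profile", "display_name": "Jane Doe"},
        {"type": "activity", "logged_on": "2016-03-23", "action": "login"},
    ],
}

with open("portability_export.json", "w", encoding="utf-8") as fh:
    json.dump(subject_data, fh, indent=2)
```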

Book ChapterDOI
07 Sep 2016
TL;DR: This work attempts to re-read the right to privacy, recombining it with data protection and giving life to what the authors call “data protecy”, a merger of safeguards that should always be considered in IoT.
Abstract: In the course of this brief work we have tried to answer questions that have arisen concerning some of the most critical aspects of IoT with regard to safeguarding individuals’ fundamental rights, especially the emerging complexities in protecting the personal sphere and the information of the data subject. We attempted to re-read the right to privacy, recombining it with data protection and giving life to what we have called “data protecy”, a merger of safeguards which should always be considered in IoT. Hence, we focused on the new concept of “3D Privacy”, a consequence of the data protecy approach, which consists in also adopting physical security measures, empowering users and non-users as data subjects with material tools to exercise self-control over their information and to defend themselves against data collection in open IoT environments. This new approach has shown that it is now necessary not only to appeal to abstract rules or policies, but also to find concrete, material instruments, ranging from the “off” button to “personal anti-radar gadgets” and other sensor-misleading devices, while limiting their use to strictly private contexts.

Cited by
Proceedings ArticleDOI
24 Aug 2018
TL;DR: This paper proposes GuideMe, a six-step systematic approach that supports the elicitation of solution requirements linking GDPR data protection obligations with the privacy controls that fulfill these obligations and should be implemented in an organization's software system.
Abstract: The General Data Protection Regulation (GDPR) aims to protect the personal data of EU residents and can impose severe sanctions for non-compliance. Organizations are currently implementing various measures to ensure their software systems fulfill GDPR obligations, such as identifying a legal basis for data processing or enforcing data anonymization. However, as the regulation is formulated vaguely, it is difficult for practitioners to extract and operationalize legal requirements from the GDPR. This paper aims to help organizations understand the data protection obligations imposed by the GDPR and identify measures to ensure compliance. To achieve this goal, we propose GuideMe, a six-step systematic approach that supports the elicitation of solution requirements linking GDPR data protection obligations with the privacy controls that fulfill these obligations and should be implemented in an organization's software system. We illustrate and evaluate our approach using the example of a university information system. Our results demonstrate that the solution requirements elicited using our approach are aligned with the recommendations of privacy experts and are expressed correctly.

58 citations
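GuideMe's central step, linking each GDPR obligation to the privacy controls that fulfill it, can be pictured as a lookup structure. The sketch below is illustrative only: the obligations and controls are example entries, not the paper's catalogue or the approach's actual artifacts.

```python
# Toy obligation-to-control mapping in the spirit of GuideMe's linkage step.
# Entries are illustrative examples, not the paper's validated catalogue.
OBLIGATION_CONTROLS: dict[str, list[str]] = {
    "legal basis for processing (Art. 6)": ["consent management", "records of processing"],
    "data minimisation (Art. 5(1)(c))": ["collection field review", "retention limits"],
    "security of processing (Art. 32)": ["pseudonymization", "encryption at rest"],
}

def controls_for(obligation: str) -> list[str]:
    """Return the privacy controls mapped to a given GDPR obligation."""
    return OBLIGATION_CONTROLS.get(obligation, [])

for obligation in OBLIGATION_CONTROLS:
    print(f"{obligation} -> {', '.join(controls_for(obligation))}")
```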

Journal ArticleDOI
01 Feb 2021
TL;DR: In this article, a modified value sensitive design (VSD) approach is proposed to integrate a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived.
Abstract: Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and values that are respected by the design, to ensure outcomes that not only do no harm but also contribute to good; and (3) extending the VSD process to encompass the whole life cycle of an AI technology, to monitor unintended value consequences and redesign as needed. We illustrate our VSD-for-AI approach with an example use case of a SARS-CoV-2 contact tracing app.

55 citations

Journal ArticleDOI
TL;DR: In this article, the legal basis for using social media data while ensuring data subjects' rights is investigated through a case study based on the European Union's General Data Protection Regulation, and the authors recommend that conservation scientists carefully consider their research objectives so as to facilitate responsible use of social media data in conservation science research, for example in conservation culturomics and investigations of the illegal wildlife trade online.
Abstract: Social media data are being increasingly used in conservation science to study human-nature interactions. User-generated content, such as images, video, text, and audio, and the associated metadata can be used to assess such interactions. A number of social media platforms provide free access to user-generated social media content. However, as with any research involving people, scientific investigations based on social media data require compliance with the highest standards of data privacy and data protection, even when the data are publicly available. Should social media data be misused, the risks to individual users' privacy and well-being can be substantial. We investigated the legal basis for using social media data while ensuring data subjects' rights through a case study based on the European Union's General Data Protection Regulation. The risks associated with using social media data in research include accidental and purposeful misidentification that has the potential to cause psychological or physical harm to an identified person. To collect, store, protect, share, and manage social media data in a way that prevents potential risks to the users involved, researchers should minimize data, anonymize data, and follow strict data management procedures. Risk-based approaches, such as a data privacy impact assessment, can be used to identify and minimize privacy risks to social media users, to demonstrate accountability, and to comply with data protection legislation. We recommend that conservation scientists weigh these considerations when devising their research objectives, so as to facilitate responsible use of social media data in conservation science research, for example in conservation culturomics and investigations of the illegal wildlife trade online.

50 citations
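The paper's minimize-and-anonymize advice translates naturally into a pre-processing step applied before any record is stored. The sketch below uses assumed field names; note that hashing a username, as done here for grouping, is pseudonymization rather than true anonymization, so it reduces but does not remove re-identification risk.

```python
import hashlib

def minimize(post: dict) -> dict:
    """Keep only the fields a conservation study needs; drop identifiers."""
    return {
        # One-way hash so posts by the same account can still be grouped.
        # This is pseudonymization, not anonymization.
        "author_token": hashlib.sha256(post["username"].encode()).hexdigest()[:16],
        "created": post["created"][:10],            # keep the date, drop the time
        "country": post.get("country", "unknown"),  # coarse location only
        "species_mentioned": post.get("species_mentioned", []),
    }

raw = {"username": "wildlife_fan_42", "created": "2021-02-01T14:03:55Z",
       "country": "FI", "lat": 60.17, "lon": 24.94,
       "species_mentioned": ["lynx"]}
print(minimize(raw))  # username and precise coordinates never reach storage
```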

Journal ArticleDOI
TL;DR: Questions remain about when and if genomic data can be truly irreversibly de‐identified, and a decentralized, context‐specific and risk‐based approach to data protection with emphasis on the accountability of data controllers is recommended.
Abstract: Human genomic data have become an important and rich resource for biomedical and clinical research. At the same time, concerns about the identifiability of genomic data have been central to discussions regarding adequate protection of personal data and privacy. Addressing such concerns is paramount for research and clinical data repositories, as well as for ensuring interoperability of standards across jurisdictions. However, in spite of increased scholarly and policy scrutiny during the past decade, questions remain about when and if genomic data can be truly irreversibly de-identified. These discussions have acquired renewed salience in Europe after the EU Regulation 2016/679, also known as the General Data Protection Regulation or GDPR (https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN), came into effect. At its core, the GDPR mandates a decentralized, context-specific and risk-based approach to data protection, with emphasis on the accountability of data controllers (Arts. 5(2) and 24) [1]: data that have been merely pseudonymized are regarded as personal data that fall under its scope, while anonymous data are not subject to the regulation. Additionally, under a so-called “research exemption”, the GDPR allows for some flexibility in the processing of personal data for scientific research (Art. 9(2)(j)), and it relaxes the stringent requirements for specific consent (Recital 33) and data storage (Art. 5(1)(e)). Moreover, it allows EU Member States to introduce further provisions for the processing of genetic, biometric, and health-related data (Art. 9(4)). The GDPR lists genetic data among the “special categories of personal data”, or sensitive data (Art. 9), which makes their processing for research purposes (Art. 9(2)(j)) subject to the adoption of adequate organizational and technical safeguards, such as pseudonymization (Art. 89(1)) [1], [2]. Pseudonymization is …

42 citations

01 Jan 2018
TL;DR: In this article, the authors focus on identifying the current role of social media in public marketing, analyzing the Facebook pages of the 13 regions of the Czech Republic against five blocks of Kietzmann's honeycomb framework: identity, conversation, sharing, presence, and reputation.
Abstract: Social media has become a new phenomenon of society that significantly affects not only individuals but also organizations, including public institutions. This article aims to identify the current role of social media in public marketing. Specifically, it focuses on the 13 regions of the Czech Republic and analyzes the Facebook pages of their regional authorities. The content analysis concentrates on five blocks (out of the original seven) of Kietzmann's honeycomb framework: identity, conversation, sharing, presence, and reputation. The findings confirm that all the regions have a Facebook page set up, that one third of the regions react to citizens' requests within a few minutes and another third within one day, and that regional authorities publish posts regularly (11 posts per week on average) and mainly share their own content.

31 citations
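The descriptive statistics reported here (posts per week, response-time bands) come from a straightforward content-analysis tally. The sketch below uses made-up figures, not the regions' actual data, to show the kind of aggregation involved.

```python
from statistics import mean

# Made-up observations standing in for coded Facebook-page data.
pages = [
    {"region": "Region A", "posts_per_week": 12, "response_minutes": 8},
    {"region": "Region B", "posts_per_week": 9, "response_minutes": 1200},
    {"region": "Region C", "posts_per_week": 12, "response_minutes": 15},
]

avg_posts = mean(p["posts_per_week"] for p in pages)
within_minutes = sum(p["response_minutes"] <= 30 for p in pages)
print(f"average posts per week: {avg_posts:.1f}")
print(f"pages answering within 30 minutes: {within_minutes}/{len(pages)}")
```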