
Showing papers in "Computer Law & Security Review in 2017"


Journal ArticleDOI
TL;DR: It is argued that blockchain can introduce long-awaited transparency into the chain of copyright ownership and substantially mitigate the risks of online piracy by enabling control over digital copies and creating a civilized market for “used” digital content.
Abstract: The paper focuses on various legal aspects of the application of blockchain technologies in the copyright sphere. Specifically, it outlines the existing challenges for distribution of copyrighted works in the digital environment, how they can be solved with blockchain, and what associated issues need to be addressed in this regard. It is argued that blockchain can introduce long-awaited transparency into the chain of copyright ownership and substantially mitigate risks of online piracy by enabling control over digital copies and creating a civilized market for “used” digital content. It also makes it possible to combine the simplicity of creative commons/open source types of licenses with revenue streams, and thus facilitate fair compensation of authors by means of cryptocurrency payments and smart contracts. However, these benefits do not come without a price: many new issues will need to be resolved to realize the potential of blockchain technologies. Among them are: where to store copyrighted content (on blockchain or “off-chain”) and the associated need to adjust the legal status of online intermediaries; and how to find the right balance between the immutable nature of blockchain records and the necessity to adjust them given the very nature of copyright law, which assigns ownership based on a set of informal facts not visible to the public. Blockchain, as a kind of time-stamping service, cannot itself ensure the trustworthiness of facts that originate “off-chain”. More work needs to be done on the legal side: special provisions aimed at facilitating users' trust in blockchain records and their good-faith use of copyrighted works based on them need to be introduced, and transactions with cryptocurrencies have to be legalized, as does the status of smart contracts and their legal consequences. Finally, the economics of blockchain copyright management systems need to be carefully considered in order to ensure that they will have the necessary network effects. If those issues are resolved in a satisfactory way, blockchain has the potential to rewrite how the copyright industry functions and how digital content is distributed.
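The abstract's point that blockchain acts as a time-stamping service for facts that originate off-chain can be illustrated with a minimal sketch (plain Python, hypothetical field and function names, not any specific blockchain API): only a hash of the work is anchored, so the record proves that a particular file existed and was claimed at a given time, while the ownership claim itself remains an off-chain assertion the chain cannot verify.

```python
import hashlib
import json
import time

def register_work(file_bytes: bytes, claimed_owner: str) -> dict:
    """Build a hypothetical on-chain registration record for a creative work.

    Only the hash of the work is stored (the content itself stays off-chain),
    so the record proves existence at a point in time, not ownership.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return {
        "content_hash": digest,          # fingerprint of the off-chain file
        "claimed_owner": claimed_owner,  # an unverified, off-chain assertion
        "timestamp": int(time.time()),   # when the claim was anchored
    }

def verify_work(file_bytes: bytes, record: dict) -> bool:
    """Check that a file matches a previously anchored registration record."""
    return hashlib.sha256(file_bytes).hexdigest() == record["content_hash"]

if __name__ == "__main__":
    work = b"Chapter 1: It was a dark and stormy night..."
    record = register_work(work, claimed_owner="alice@example.org")
    print(json.dumps(record, indent=2))
    print("matches:", verify_work(work, record))
```

The sketch also makes the abstract's caveat concrete: verify_work confirms that a file matches the anchored fingerprint, but the claimed_owner field is only as trustworthy as the off-chain facts behind it.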

148 citations


Journal ArticleDOI
TL;DR: The application of blockchain to e-Residency has the potential to fundamentally change the way identity information is controlled, authenticated and used in the communications context; this is the first study of its kind to examine policy issues around blockchain.
Abstract: In December 2014, Estonia became the first nation to open its digital borders to enable anyone, anywhere in the world to apply to become an e-Resident. Estonian e-Residency is essentially a commercial initiative. The e-ID issued to Estonian e-Residents enables commercial activities with the public and private sectors. It does not provide citizenship in its traditional sense, and the e-ID provided to e-Residents is not a travel document. However, in many ways it is an international ‘passport’ to the virtual world. E-Residency is a profound change and the recent announcement that the Estonian government is now partnering with Bitnation to offer a public notary service to Estonian e-Residents based on blockchain technology is of significance. The application of blockchain to e-Residency has the potential to fundamentally change the way identity information is controlled and authenticated. This paper examines the legal, policy, and technical implications of this development.

130 citations


Journal ArticleDOI
TL;DR: The potential issues with legal and practical enforceability that arise from the use of smart contracts within both civil and common law jurisdictions are considered.
Abstract: Swift developments in the emerging field of blockchain technology have facilitated the birth of ‘smart contracts’: computerised transaction protocols which autonomously execute the terms of a contract. Smart contracts are disintermediated and generally transparent in nature, offering the promise of increased commercial efficiency, lower transaction and legal costs, and anonymous transacting. The business world is actively investigating the use of blockchain technology for various commercial purposes. Whilst questions surround the security and reliability of this technology, and the negative impact it may have upon traditional intermediaries, there are equally significant concerns that smart contracts will encounter considerable difficulty adapting to current legal frameworks regulating contracts across jurisdictions. This article considers the potential issues with legal and practical enforceability that arise from the use of smart contracts within both civil and common law jurisdictions.
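The abstract defines smart contracts as computerised transaction protocols that autonomously execute the terms of a contract. As a purely conceptual illustration (ordinary Python rather than an on-chain language such as Solidity; all class and field names are invented for this sketch), the toy escrow below encodes a payment term as code: funds move to the seller only when the coded delivery condition is met, with no intermediary exercising discretion.

```python
from dataclasses import dataclass, field

@dataclass
class ToyEscrowContract:
    """Toy, off-chain illustration of smart-contract logic: the terms are code,
    and execution follows the coded conditions rather than a party's discretion."""
    buyer: str
    seller: str
    price: int
    delivered: bool = False
    balances: dict = field(default_factory=dict)

    def deposit(self) -> None:
        # Buyer locks the purchase price in the contract.
        self.balances[self.buyer] = self.balances.get(self.buyer, 0) - self.price
        self.balances["escrow"] = self.balances.get("escrow", 0) + self.price

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # Self-executing term: payment moves only if the delivery condition holds.
        if self.delivered and self.balances.get("escrow", 0) >= self.price:
            self.balances["escrow"] -= self.price
            self.balances[self.seller] = self.balances.get(self.seller, 0) + self.price

if __name__ == "__main__":
    contract = ToyEscrowContract(buyer="buyer", seller="seller", price=100)
    contract.deposit()
    contract.confirm_delivery()
    print(contract.balances)  # escrow emptied, seller credited automatically
```

The rigidity that makes this logic attractive is also the source of the enforceability questions the article examines: once deployed, the coded terms execute regardless of whether they still reflect the parties' intentions.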

115 citations


Journal ArticleDOI
TL;DR: The aim of this article is to propose a first systematic interpretation of this new right, suggesting a pragmatic and extensive approach that takes as much advantage as possible of the interrelationship between this new legal provision, the Digital Single Market and the fundamental rights of digital users.
Abstract: The right to data portability is one of the most important novelties within the EU General Data Protection Regulation, both in terms of warranting control rights to data subjects and in terms of being found at the intersection between data protection and other fields of law (competition law, intellectual property, consumer protection, etc.). It constitutes, thus, a valuable case of development and diffusion of effective user-centric privacy enhancing technologies and a first tool to allow individuals to enjoy the immaterial wealth of their personal data in the data economy. Indeed, free portability of personal data from one controller to another can be a strong tool for data subjects to foster competition among digital services and interoperability of platforms, and to enhance individuals' control over their own data. However, the adopted formulation of the right to data portability in the GDPR could benefit from further clarification: several interpretations are possible, particularly with regard to the object of the right and its interrelation with other rights, potentially leading to additional challenges in its technical implementation. The aim of this article is to propose a first systematic interpretation of this new right, suggesting a pragmatic and extensive approach that takes as much advantage as possible of the interrelationship between this new legal provision, the Digital Single Market and the fundamental rights of digital users. In sum, the right to data portability can be approached from two different perspectives: the minimalist approach (the adieu scenario) and the empowering approach (the fusing scenario), which the authors consider highly preferable.

94 citations


Journal ArticleDOI
TL;DR: The Australian government has ambitious aims to release greater amounts of its data to the public, but it is likely this task will prove difficult due to uncertainties surrounding the reliability of de-identification and the requirements of privacy law, as well as a public service culture which is yet to fully embrace the open data movement.
Abstract: Governments around the world are posting many thousands of their datasets on online portals. A major purpose of releasing this data is to drive innovation through Big Data analysis, as well as to promote government transparency and accountability. This article considers the benefits and risks of releasing government data as open data, and identifies the challenges the Australian government faces in releasing its data into the public domain. The Australian government has ambitious aims to release greater amounts of its data to the public. However, it is likely this task will prove difficult due to uncertainties surrounding the reliability of de-identification and the requirements of privacy law, as well as a public service culture which is yet to fully embrace the open data movement.

63 citations


Journal ArticleDOI
TL;DR: If individuals are shown the “price” of their personal data, they can acquire higher awareness about their power in the digital market and thus be effectively empowered for the protection of their information privacy.
Abstract: The commodification of digital identities is an emerging reality in the data-driven economy. Personal data of individuals represent monetary value in the data-driven economy and are often considered a counter-performance for “free” digital services or for discounts on online products and services. Furthermore, customer data and profiling algorithms are already considered a business asset and protected through trade secrets. At the same time, individuals do not seem to be fully aware of the monetary value of their personal data and tend to underestimate their economic power within the data-driven economy and to passively succumb to the propertization of their digital identity. One effort that could increase consumers'/users' awareness of their own personal information is making them aware of the monetary value of their personal data. In other words, if individuals are shown the “price” of their personal data, they can acquire greater awareness of their power in the digital market and thus be effectively empowered to protect their information privacy. This paper analyzes whether consumers/users should have a right to know the value of their personal data. After analyzing how EU legislation is already developing in the direction of propertization and monetization of personal data, different models for quantifying the value of personal data are investigated. These models are discussed, not to determine the actual prices of personal data, but to show that the monetary value of personal data can be quantified, a conditio sine qua non for the right to know the value of your personal data. Next, active choice models, in which users are offered the option to pay for online services either with their personal data or with money, are discussed. It is concluded, however, that these models are incompatible with EU data protection law. Finally, practical, moral and cognitive problems of pricing privacy are discussed as an introduction to further research. We conclude that such research is needed to determine to what extent these problems can be solved or mitigated. Only then can it be determined whether the benefits of introducing a right to know the value of your personal data outweigh the problems and hurdles related to it.
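One simple illustration of how a monetary value can be attached to personal data (not necessarily one of the specific models the paper investigates) is a revenue-per-user proxy: dividing a platform's advertising revenue by its user base yields a rough per-person "price". The figures below are entirely hypothetical and serve only to show that such a quantification is mechanically straightforward, which is the abstract's point that monetary value can be quantified at all.

```python
# Entirely hypothetical figures, used only to illustrate the arithmetic of a
# revenue-per-user proxy; they do not describe any real company.
annual_ad_revenue_eur = 40_000_000_000   # assumed yearly advertising revenue
monthly_active_users = 1_500_000_000     # assumed user base

value_per_user_per_year = annual_ad_revenue_eur / monthly_active_users
print(f"Implied 'price' of one user's data: about {value_per_user_per_year:.2f} EUR per year")
```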

57 citations


Journal ArticleDOI
TL;DR: It is suggested that the results of this paper may be used in further research defining the scope of SAI rights and obligations. The paper's aim is formulated according to the technical capabilities integrated in SAI and the SAI's ability to interact independently with other legal subjects.
Abstract: The purpose of this paper is to determine whether Systems of Artificial Intelligence (SAI) can be deemed subjects of law. This aim is formulated according to the technical capabilities integrated in SAI and the SAI's ability to interact independently with other legal subjects. SAI features, such as a direct connection with intellectual skills and the ability to understand, learn and make autonomous decisions, may cause situations where autonomous systems based on AI make decisions which are in the best interests of individuals, even though they conflict with the user's own will. To consider the possibility of SAI being recognized as possessing legal personality, we analyse the concept and features of SAI and define its operating principles. We give hypothetical examples to demonstrate the necessity of SAI being recognized as such. The paper undertakes a legal personality analysis of SAI: (i) using the philosophical and legal concepts of a subject (person); (ii) discussing artificial (unnatural) subjects of law as an alternative to the recognition of legal personality of SAI; (iii) using the elements of legal personality established for natural and legal persons. The analysis leads to the conclusion that the scope of SAI rights and obligations will not necessarily be the same as the scope of rights and obligations of other subjects of law. Thus, SAI could only have rights and obligations that are strictly defined by legislators. This conclusion suggests that the results of this paper may be used in further research defining the scope of SAI rights and obligations.

51 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the problem of AI memory and the right to be forgotten, and conclude that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments.
Abstract: This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion on the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.

48 citations


Journal ArticleDOI
TL;DR: This article demonstrates why the implementation of Privacy by Design is a necessity in a number of sectors where specific data protection concerns arise, and how it can be implemented.
Abstract: This article examines the extent to which Privacy by Design can safeguard privacy and personal data within a rapidly evolving society. This paper will first briefly explain the theoretical concept and the general principles of Privacy by Design, as laid down in the General Data Protection Regulation. Then, by indicating specific examples of the implementation of the Privacy by Design approach, it will be demonstrated why the implementation of Privacy by Design is a necessity in a number of sectors where specific data protection concerns arise (biometrics, e-health and video-surveillance) and how it can be implemented.

42 citations


Journal ArticleDOI
TL;DR: The extent to which virtual currencies are regulated under EU financial and economic law is analyzed, with particular attention to cryptocurrencies, to provide insights valuable to service providers active in this nascent market.
Abstract: The goal of this paper is to analyze the extent to which virtual currencies are regulated under EU financial and economic law, with particular attention to cryptocurrencies. The focus of this paper is put on recent developments regarding anti-money laundering legislation. In the last decade, the EU has adopted several legal frameworks governing different aspects of the payments landscape, most notably regarding payment services and electronic money. However, it remains unclear how virtual currencies – and more in particular cryptocurrencies – fit under those legal frameworks. This paper will first briefly analyze whether core legislation in the fields of payment services and e-money can apply to virtual currencies. Next, and more importantly, the focus will be put on recent developments at the EU level, which aim to bring certain virtual currency service providers under the scope of anti-money laundering rules. While at the moment only such inclusion under anti-money laundering rules appears to be viable, it remains to be seen what the consequences of this evolution are for developments in virtual currencies. This paper provides an analysis of a regulatory issue currently debated by legislators worldwide. In doing so, it aims to provide insights valuable to service providers active in this nascent market.

40 citations


Journal ArticleDOI
TL;DR: A framework has to be developed that adds new layers of protection for fundamental rights and safeguards against erroneous and malicious use at the levels of analysis and use, and the oversight regime is in need of strengthening.
Abstract: Big Data analytics in national security, law enforcement and the fight against fraud have the potential to reap great benefits for states, citizens and society but require extra safeguards to protect citizens' fundamental rights. This involves a crucial shift in emphasis from regulating Big Data collection to regulating the phases of analysis and use. In order to benefit from the use of Big Data analytics in the field of security, a framework has to be developed that adds new layers of protection for fundamental rights and safeguards against erroneous and malicious use. Additional regulation is needed at the levels of analysis and use, and the oversight regime is in need of strengthening. At the level of analysis – the algorithmic heart of Big Data processes – a duty of care should be introduced that is part of an internal audit and external review procedure. Big Data projects should also be subject to a sunset clause. At the level of use, profiles and (semi-) automated decision-making should be regulated more tightly. Moreover, the responsibility of the data processing party for accuracy of analysis – and decisions taken on its basis – should be anchored in legislation. The general and security-specific oversight functions should be strengthened in terms of technological expertise, access and resources. The possibilities for judicial review should be expanded to stimulate the development of case law.

Journal ArticleDOI
TL;DR: This article discusses to what extent a different interpretation of the term ‘driver’ in traffic laws and international Conventions can accommodate the deployment of self-driving cars without a human driver present.
Abstract: Self-driving cars and self-driving technology are tested on public roads in several countries on a large scale. With this development not only technical, but also legal questions arise. This article will give a brief overview of the legal developments in multiple jurisdictions – California (USA), United Kingdom, and the Netherlands – and will highlight several legal questions regarding the testing and deployment of self-driving cars. Policymakers are confronted with the question of how the testing of self-driving cars can be regulated. The discussed jurisdictions all choose a different approach. Different legal instruments – binding regulation, non-binding regulation, granting exemptions – are used to regulate the testing of self-driving cars. Are these instruments suitable for the objectives the jurisdictions want to achieve? As technology matures, self-driving cars will at some point become available to the general public. Regarding this post-testing phase, two pressing problems arise: how to deal with the absence of a human driver, and how does this affect liability and insurance? The Vienna Convention on Road Traffic 1968 and the Geneva Convention on Road Traffic 1949, as well as national traffic laws, are based on the notion that only a human can drive a car. To what extent a different interpretation of the term ‘driver’ in traffic laws and international Conventions can accommodate the deployment of self-driving cars without a human driver present will be discussed in this article. When the self-driving car becomes reality, current liability regimes can fall short. Liability for car accidents might shift from the driver or owner to the manufacturer of the car. This could have a negative effect on the development of self-driving cars. In this context, it will also be discussed to what extent insurance can affect this development.

Journal ArticleDOI
TL;DR: The paper illustrates and explains information linkage during the process of data integration in a smart neighbourhood scenario, with the aim of enabling a technical and legal framework that ensures stakeholder awareness and protects data subjects against privacy breaches due to information linkage.
Abstract: The Internet of Things (IoT) is changing the way data is collected and processed. The scale and variety of devices, communication networks, and protocols involved in data collection present critical challenges for data processing and analyses. Newer and more sophisticated methods for data integration and aggregation are required to enhance the value of real-time and historical IoT data. Moreover, the pervasive nature of IoT data presents a number of privacy threats because of intermediate data processing steps, including data acquisition, data aggregation, fusion and integration. User profiling and record linkage are well-studied topics in online social networks (OSNs); however, they have become more critical in IoT applications where different systems share and integrate data and information. The proposed study aims to discuss the privacy threat of information linkage and the technical and legal approaches to addressing it in a heterogeneous IoT ecosystem. The paper illustrates and explains information linkage during the process of data integration in a smart neighbourhood scenario. Through this work, the authors aim to enable a technical and legal framework that ensures stakeholder awareness and protects data subjects against privacy breaches due to information linkage.
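To make the information-linkage threat concrete, the following minimal sketch (hypothetical datasets and column names; pandas assumed available) joins two separately collected smart-neighbourhood datasets on shared quasi-identifiers. Neither dataset contains a direct identifier, yet the linked result combines energy and activity patterns into a profile that narrows each row down to very few real households.

```python
import pandas as pd

# Hypothetical datasets from two IoT services in the same neighbourhood.
# Neither contains a direct identifier such as a name.
smart_meter = pd.DataFrame({
    "postcode": ["1011", "1011", "1012"],
    "household_size": [2, 4, 1],
    "night_usage_kwh": [0.4, 1.9, 0.2],
})
fitness_app = pd.DataFrame({
    "postcode": ["1011", "1012"],
    "household_size": [4, 1],
    "usual_run_start": ["06:30", "21:00"],
})

# Record linkage on quasi-identifiers: the joined rows now combine energy
# and activity patterns, narrowing each record to very few real households.
linked = smart_meter.merge(fitness_app, on=["postcode", "household_size"])
print(linked)
```

This is the kind of intermediate processing step (aggregation, fusion, integration) at which the authors argue both technical and legal safeguards need to apply.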

Journal ArticleDOI
TL;DR: In this article, the authors compared the protection of privacy and personal data in eight EU member states: France, Germany, the UK, Ireland, Romania, Italy, Sweden, and the Netherlands.
Abstract: Although the protection of personal data is harmonized within the EU by Directive 95/46/EC and will be further harmonized by the General Data Protection Regulation (GDPR) in 2018, there are significant differences in the ways in which EU member states implemented the protection of privacy and personal data in national laws, policies, and practices. This paper presents the main findings of a research project that compares the protection of privacy and personal data in eight EU member states: France, Germany, the UK, Ireland, Romania, Italy, Sweden, and the Netherlands. The comparison focuses on five major themes: awareness and trust, government policies for personal data protection, the applicable laws and regulations, implementation of those laws and regulations, and supervision and enforcement. The comparison of privacy and data protection regimes across the EU shows some remarkable findings, revealing which countries are frontrunners and which countries are lagging behind on specific aspects. For instance, the roles of and interplay between governments, civil rights organizations, and data protection authorities vary from country to country. Furthermore, with regard to privacy and data protection there are differences in the intensity and scope of political debates, information campaigns, media attention, and public debate. New concepts like privacy impact assessments, privacy by design, data breach notifications and big data are on the agenda in some but not in all countries. Significant differences exist in (the levels of) enforcement by the different data protection authorities, due to different legal competencies, available budgets and personnel, policies, and cultural factors.

Journal ArticleDOI
TL;DR: This paper focuses on whether international fears of China's new Cyber Security Law are justified, and analyses why China needs a cyber security regime.
Abstract: Chinese officials are increasingly turning to a policy known as Informatisation, connecting industry online, to utilise technology to improve efficiency and tackle economic developmental problems in China. However, various recent laws have made foreign technology firms uneasy about perceptions of Rule of Law in China. Will these new laws, under China's stated policy of “Network Sovereignty” (“网络主权” “wangluo zhuquan”) affect China's ability to attract foreign technology firms, talent and importantly technology transfers? Will they slow China's technology and Smart City drive? This paper focuses on the question of whether international fears of China's new Cyber Security Law are justified. In Parts I and II, the paper analyses why China needs a cyber security regime. In Parts III and IV it examines the law itself.

Journal ArticleDOI
TL;DR: This paper argues that pseudonymization can be used both to reduce the risks of re-identification and to help data controllers and processors respect their personal data protection obligations by keeping control over their activities.
Abstract: In order to carry out so-called “Big Data analysis”, the collection of personal data seems to be inevitable. The opportunities arising from the analysis of such information need to be balanced with the risks for the data protection of individuals. In this sense, the anonymization technique might be a solution, but it seems to be inappropriate in certain circumstances, among which Big Data processing can be included. In fact, with anonymization the impacts of profiling directed at individual targets whose data has been anonymized are largely uncontrollable. In this sense, pseudonymization can be used both to reduce the risks of re-identification and to help data controllers and processors respect their personal data protection obligations by keeping control over their activities. On the one hand, pseudonymization ensures the capability to reconstruct the processes of identity masking, by allowing re-identification. On the other hand, the accountability of the data controller and data processor is guaranteed, thanks to the fact that there will always be a person who can re-identify subjects included in a cluster, acting as a “data keeper”.
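A minimal sketch of the reversible identity masking described here (hypothetical class and field names; only Python standard library modules are used): direct identifiers are replaced with keyed pseudonyms, analysts work on the pseudonymous records, and only the "data keeper" holding the key and the mapping can re-identify a subject.

```python
import hashlib
import hmac
import secrets

class DataKeeper:
    """Holds the secret needed to link pseudonyms back to identities."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # kept by the data keeper only
        self._mapping = {}                   # pseudonym -> original identifier

    def pseudonymize(self, identifier: str) -> str:
        # Keyed hash so pseudonyms cannot be recomputed without the secret key.
        pseudonym = hmac.new(self._key, identifier.encode(), hashlib.sha256).hexdigest()[:16]
        self._mapping[pseudonym] = identifier
        return pseudonym

    def reidentify(self, pseudonym: str) -> str:
        # Re-identification is possible, but only for the keeper of the key/mapping.
        return self._mapping[pseudonym]

if __name__ == "__main__":
    keeper = DataKeeper()
    record = {"user": keeper.pseudonymize("jane.doe@example.org"), "visits": 12}
    print(record)                             # analysts see only the pseudonym
    print(keeper.reidentify(record["user"]))  # the data keeper can reverse it
```

Unlike irreversible anonymization, the retained key and mapping are what preserve the controller's and processor's accountability: a specific subject can still be re-identified when, for example, an access or erasure request has to be honoured.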

Journal ArticleDOI
TL;DR: This article makes a cross-examination of these legislative frameworks, compares them with one another in order to offer a reflection on the future of portable data in Europe, and finally attempts to identify the best approach to granting data portability.
Abstract: The number of online services is constantly growing, offering numerous and unprecedented advantages for consumers. Often, access to these services requires the disclosure of personal information. This personal data is very valuable as it concedes significant advantages over competitors, allowing better answers to the customer's needs and therefore offering services of a better quality. For some services, analysing the customers' data is at the core of their business model. Furthermore, personal data has a monetary value as it enables the service providers to pursue targeted advertising. Usually, the first companies that provide a service will benefit from large volumes of data and might create market entrance barriers for new online providers, thus depriving users of the benefits of competition. Furthermore, by holding a grip on this personal data, they make it more expensive or burdensome for the user to shift to a new service. Because of this value, online services tend to keep collected information and prevent their users from reusing the personal data they have provided. This behaviour results in the creation of a lock-in effect. Growing awareness of this problem has led to the demand for a right to data portability. The aim of this paper is to analyse the different legislative systems that exist or have recently been created in this regard that would grant a right to data portability. Firstly, this article draws up the framework of data portability, explaining its origin, general aspects and advantages, as well as its possible drawbacks. Secondly, in the core of the article, the different ways of granting data portability are analysed. In this regard, the possible application of European Competition Law to prohibit restrictions on data portability is examined. Afterwards, an examination of the application of U.S. Antitrust Law is made to determine whether it could be a source of inspiration for European legislators. Finally, an analysis of the new General Data Protection Regulation is made with respect to the development of data portability throughout the European legislative procedure. This article makes a cross-examination of these legislative frameworks, compares them with one another in order to offer a reflection on the future of portable data in Europe, and finally attempts to identify the best approach to granting data portability.

Journal ArticleDOI
TL;DR: The article tests the applicability of the GDPR to online price personalisation practices by applying criteria such as ‘personal data’ and ‘automated processing’ to several discriminatory pricing cases and examples, and provides a capita selecta of rights and obligations pertinent to online discriminatory pricing.
Abstract: The General Data Protection Regulation (GDPR) contains various provisions with relevance to online price discrimination. This article, which analyses a number of essential elements at this junction, aims to provide a theory on whether, and, if so, how the GDPR affects price discrimination based on the processing of personal data. First, the contribution clarifies the concept of price discrimination, as well as its typology and relevance for big data settings. Subsequent to studying this topic in the context of the Commission's Digital Single Market strategy, the article tests the applicability of the GDPR to online price personalisation practices by applying criteria such as ‘personal data’ and ‘automated processing’ to several discriminatory pricing cases and examples. Secondly, the contribution evaluates the possible lawfulness of price personalisation under the GDPR on the basis of consent, the necessity for pre-contractual or contractual measures, and the data controller's legitimate interests. The paper concludes by providing a capita selecta of rights and obligations pertinent to online discriminatory pricing, such as transparency obligations and the right to access, as well as the right to rectify the data on which price discrimination is based, and the right not to be subject to certain discriminatory pricing decisions.

Journal ArticleDOI
TL;DR: It is argued that substantive rules that establish a basic set of privacy norms regarding the collection, use and disclosure of data are necessary and can be realized in part via a privacy code of practice for the connected vehicle.
Abstract: The consent model of privacy protection assumes that individuals control their personal information and are able to assess the risks associated with data sharing. The model is attractive for policy-makers and automakers because it has the effect of glossing over the conceptual ambiguities that are latent in definitions of privacy. Instead of formulating a substantive and normative position on what constitutes a reasonable expectation of privacy in the circumstance, individuals are said to have control over their data. Organizations have obligations to respect rights to notice, access and consent regarding the collection, use and disclosure of personal data once that data has been shared. The policy goal becomes how to provide individuals with control over their personal data in the consent model of privacy protection. This paper argues that the privacy issues raised by vehicular ad hoc networks make this approach increasingly untenable. It is argued that substantive rules that establish a basic set of privacy norms regarding the collection, use and disclosure of data are necessary. This can be realized in part via a privacy code of practice for the connected vehicle. This paper first explores the relationship between privacy, consent and personal information in relation to the connected car. This is followed by a description of vehicular ad hoc networks and a survey of the technical proposals aimed at securing data. The privacy issues that will likely remain unsolved by enhancing individual consent are then discussed. The last section provides some direction on how a code of practice can assist in determining when individual consent will need to be enhanced and when alternatives to consent will need to be implemented.

Journal ArticleDOI
TL;DR: This article reports the findings of a recent survey of EU DPAs that explores the problems they have in comprehending new technologies and how they are dealing with them.
Abstract: The ability of data protection authorities (DPAs) to gain and deploy sufficient knowledge of new technological developments in their regulation of personal-information practices is an important consideration now and for the future. However, DPAs' capacity to keep abreast of these developments has been questionable, and improvements in this are a matter of concern, especially given DPAs' task requirements under the European Union's (EU) General Data Protection Regulation (GDPR). This article reports the findings of a recent survey of EU DPAs that explores the problems they have in comprehending new technologies and how they are dealing with them.

Journal ArticleDOI
TL;DR: The aim of this paper is to establish the grounds for a future regulatory framework for Person Carrier Robots, which includes legal and ethical aspects, and to take into account other interdisciplinary aspects of robot technology to offer complete legal coverage to citizens.
Abstract: The aim of this paper is to establish the grounds for a future regulatory framework for Person Carrier Robots, which includes legal and ethical aspects. Current industrial standards focus on physical human–robot interaction, i.e. on the prevention of harm. Current robot technology nonetheless challenges other aspects in the legal domain. The main issues comprise privacy, data protection, liability, autonomy, dignity, and ethics. The paper first discusses the need to take into account other interdisciplinary aspects of robot technology to offer complete legal coverage to citizens. As the European Union starts using impact assessment methodology when drafting regulations for new technologies, a new methodology based on it for approaching the introduction of personal care robots will be discussed. Then, after framing the discussion with a use case, an analysis of the involved legal challenges will be conducted. Some concrete scenarios will contribute to easing the explanatory analysis.

Journal ArticleDOI
TL;DR: This article identifies a number of key issues and questions that raise concerns from a multi-dimensional children's rights perspective, and clarifies remaining ambiguities in the run-up to the actual application of the GDPR from 25 May 2018 onwards.
Abstract: The EU General Data Protection Regulation (GDPR) devotes particular attention to the protection of personal data of children. The rationale is that children are less aware of the risks and the potential consequences of the processing of their personal data on their rights. Yet, the text of the GDPR offers little clarity as to the actual implementation and impact of a number of provisions that may significantly affect children and their rights, leading to legal uncertainty for data controllers, parents and children. This uncertainty relates for instance to the age of consent for processing children's data in relation to information society services, the technical requirements regarding parental consent in that regard, the interpretation of the extent to which profiling of children is allowed and the level of transparency that is required vis-a-vis children. This article aims to identify a number of key issues and questions – both theoretical and practical – that raise concerns from a multi-dimensional children's rights perspective, and to clarify remaining ambiguities in the run-up to the actual application of the GDPR from 25 May 2018 onwards.

Journal ArticleDOI
TL;DR: The provisions of the Guidelines and their attempt to address the major challenges of the new big data paradigm set the stage for concluding remarks about the most suitable regulatory model to deal with the different issues posed by the development of technology.
Abstract: In January 2017 the Consultative Committee of Convention 108 adopted its Guidelines on the Protection of Individuals with Regard to the Processing of Personal Data in a World of Big Data. These are the first guidelines on data protection provided by an international body which specifically address the issues surrounding big data applications. This article examines the main provisions of these Guidelines and highlights the approach adopted by the Consultative Committee, which contextualises the traditional principles of data protection in the big data scenario and also takes into account the challenges of the big data paradigm. The analysis of the different provisions adopted focuses primarily on the core of the Guidelines, namely the risk assessment procedure. Moreover, the article discusses the novel solutions provided by the Guidelines with regard to the data subject's informed consent, the by-design approach, anonymization, and the role of the human factor in big data-supported decisions. This critical analysis of the Guidelines introduces a broader reflection on the divergent approaches of the Council of Europe and the European Union to regulating data processing, where the principle-based model of the Council of Europe differs from the approach adopted by the EU legislator in the detailed Regulation (EU) 2016/679. In the light of this, the provisions of the Guidelines and their attempt to address the major challenges of the new big data paradigm set the stage for concluding remarks about the most suitable regulatory model to deal with the different issues posed by the development of technology.

Journal ArticleDOI
TL;DR: This paper argues that the GDPR turned certification into a new regulatory instrument in data protection, and suggests calling it monitored self-regulation, seeking to fill the gap between self-regulation and traditional regulation in order to build a regulation continuum.
Abstract: The endorsement of certification in Articles 42 and 43 of the General Data Protection Regulation (hereinafter GDPR) extends the scope of this procedure to the enforcement of fundamental rights. The GDPR also leverages the high flexibility of this procedure to make certification something other than a voluntary process attesting conformity with technical standards. This paper argues that the GDPR turned certification into a new regulatory instrument in data protection; I suggest calling it monitored self-regulation, seeking to fill the gap between self-regulation and traditional regulation in order to build a regulation continuum.

Journal ArticleDOI
TL;DR: As the business model based on cloud computing grows, public bodies, and in particular the European Union, are striving to find solutions to properly regulate the future economy, either by introducing new laws, or by finding the best ways to apply existing principles.
Abstract: Data is a modern form of wealth in the digital world, and massive amounts of data circulate in cloud environments. While this enormously facilitates the sharing of information, both for personal and professional purposes, it also introduces some critical problems concerning the ownership of the information. Data is an intangible good that is stored in large data warehouses, where the hardware architectures and software programs running the cloud services coexist with the data of many users. This context calls for a twofold protection: on one side, the cloud is made up of hardware and software that constitute the business assets of the service provider (property of the cloud); on the other side, there is a definite need to ensure that users retain control over their data (property in the cloud). The law grants protection to both sides under several perspectives, but the result is a complex mix of interwoven regimes, further complicated by the intrinsically international nature of cloud computing that clashes with the typical diversity of national laws. As the business model based on cloud computing grows, public bodies, and in particular the European Union, are striving to find solutions to properly regulate the future economy, either by introducing new laws, or by finding the best ways to apply existing principles.

Journal ArticleDOI
TL;DR: This article examines existing laws and oversight bodies in the Russian Federation, discusses how the current provisions are inadequate to deal with new developments in Big Data, and proposes recommendations for amending and updating existing laws and policies.
Abstract: This article examines the impact of Big Data technology on Russian citizens' constitutional rights to a private life. There are several laws in the Russian Federation covering data privacy and protection, but these are proving inadequate to protect the citizens' rights in the face of the ever-increasing use of massive data sets and their analysis by Big Data tools. One particular problem in this regard is that datasets of anonymised records currently not covered under personal data laws (because they do not identify individuals) can, in fact, be used to identify data subjects (the individuals to whom the data refers) when combined and analysed using Big Data tools. Furthermore, existing sanctions for misuse of personal data are minor, and often fail to act as a deterrent when the commercial benefits of exploiting user data (e.g. through targeted advertising) are so much greater. From the point of view of companies handling Big Data, a general confusion over definitions and responsibilities is making compliance with the law difficult, leaving most to come up with their own forms of best practice, rather than being able to follow clear industry recommendations. The article examines existing laws and oversight bodies, discusses how the current provisions are inadequate to deal with new developments in Big Data, and proposes recommendations for amending and updating existing laws and policies.

Journal ArticleDOI
TL;DR: It posits that in clarifying the law, a new approach to categorising personal data is required to achieve the benefits of categorisation and increase the transparency of personal data processing for data subjects.
Abstract: Transparency is a key principle of EU data protection law and the obligation to inform is key to ensuring transparency. The purpose of this obligation is to provide data subjects with information that allows them to assess the compliance and trustworthiness of the data controller. Despite the benefits of categorising personal data for this purpose, a coherent and consistent approach to doing so under the obligation to inform has not emerged. It is unclear what a ‘category’ of personal data is and when this information must be provided. This results in reduced transparency for data subjects and uncertainty for data controllers regarding their legal obligations, defeating the purpose of this obligation. This article highlights these issues and calls for clarification on them. It also posits that in clarifying the law, a new approach to categorising personal data is required to achieve the benefits of categorisation and increase the transparency of personal data processing for data subjects.

Journal ArticleDOI
TL;DR: This article discusses the problems created by the ever-increasing number of ‘well-being’ apps and the fact that most will not be classed as medical devices.
Abstract: Regulation of medical devices has been one of the most notable regulatory initiatives of the European Union. The need to ensure that medical devices are of a high quality is self-evident in nature. This is demonstrated by the lack of willingness of both healthcare institutions and professionals to use medical devices that have not been properly certified. In determining which devices are medical devices and should therefore meet the requirements of the regulatory framework, both the current and the proposed frameworks foresee a central place for the concept of ‘intended purpose’. This means that only those manufacturers that have explicitly stated that their device is to be used for a medical purpose should have to comply with the medical device framework. Unfortunately, however, this concept has become increasingly problematic given the rise in mHealth (mobile health) practices and ‘appification’ (the shift to mobile devices) in particular, arguably posing potentially serious risks to human health in certain cases. This article discusses the problems created by the ever-increasing number of ‘well-being’ apps and the fact that most will not be classed as medical devices. Despite apparently being aware of these problems, the EU Commission has opted to maintain its current approach in the newly proposed regulation, choosing not to employ other approaches, as the FDA has done for example in opting for a ‘risk-based case-by-case approach’.

Journal ArticleDOI
TL;DR: The analysis shows that the sharing of cyber threat intelligence is in the public interest so as to override the rights of a data subject, as long as it is carried out in ways that are strictly necessary in order to achieve security objectives.
Abstract: This article reports on preliminary findings and recommendations of a cross-discipline project to accelerate international business-to-business automated sharing of cyber-threat intelligence, particularly IP addresses. The article outlines the project and its objectives and the importance of determining whether IP addresses can be lawfully shared as cyber threat intelligence. The goal of the project is to enhance cyber-threat intelligence sharing throughout the cyber ecosystem. The findings and recommendations from this project enable businesses to navigate the international legal environment and develop their policy and procedures to enable timely, effective and legal sharing of cyber-threat information. The project is the first of its kind in the world. It is unique in both focus and scope. Unlike the cyber-threat information sharing reviews and initiatives being developed at country and regional levels, the focus of this project and this article is on business-to-business sharing. The scope of this project in terms of the 34 jurisdictions reviewed as to their data protection requirements is more comprehensive than any similar study to date. This article focuses on the sharing of IP addresses as cyber threat intelligence in the context of the new European Union (EU) data protection initiatives agreed in December 2015 and formally adopted by the European Council and Parliament in April 2016. The new EU General Data Protection Regulation (GDPR) applies to EU member countries, a major focus of the international cyber threat sharing project. The research also reveals that EU data protection requirements, particularly the currently applicable law of the Data Protection Directive 95/46/EC (1995 Directive) (the rules of which the GDPR will replace in practice in 2018), generally form the basis of current data protection requirements in countries outside Europe. It is expected that this influence will continue and that the GDPR will shape the development of data protection internationally. In this article, the authors examine whether static and dynamic IP addresses are “personal data” as defined in the GDPR and its predecessor the 1995 Directive that is currently the model for data protection in many jurisdictions outside Europe. The authors then consider whether sharing of that data by a business without the consent of the data subject, can be justified in the public interest so as to override individual rights under Articles 7 and 8(1) of the Charter of Fundamental Rights of the European Union, which underpin EU data protection. The analysis shows that the sharing of cyber threat intelligence is in the public interest so as to override the rights of a data subject, as long as it is carried out in ways that are strictly necessary in order to achieve security objectives. The article concludes by summarizing the project findings to date, and how they inform international sharing of cyber-threat intelligence within the private sector.

Journal ArticleDOI
Timothy Webb, Sumer Dayal
TL;DR: It is established that stakeholders have a shared responsibility to address cybersecurity threats that can affect medical devices, and that manufacturers and health care providers should consider identification, detection and prevention steps at the pre-market and post-market stages.
Abstract: Cybersecurity in medical devices has become a pressing issue in modern times. Technological progress has simultaneously benefited health care and created new risks. Through examining regulatory guidance, this article establishes that stakeholders have a shared responsibility to address cybersecurity threats that can affect such devices. Manufacturers and health care providers should consider identification, detection and prevention steps at the pre-market and post-market stages. End users and medical practitioners should practice good cyber hygiene to mitigate cybersecurity risks. Collectively, increased collaboration across all stakeholders is fundamental to ensure effective protection.