Author

Ignacio Sanchez

Bio: Ignacio Sanchez is an academic researcher. The author has contributed to research in topics including the General Data Protection Regulation and applications of artificial intelligence. The author has an h-index of 2 and has co-authored 2 publications receiving 100 citations.

Papers
Journal ArticleDOI
TL;DR: The aim of this article is to propose a first systematic interpretation of this new right by suggesting a pragmatic and extensive approach, taking as much advantage as possible of the interrelationship between this new legal provision, the Digital Single Market and the fundamental rights of digital users.

137 citations

Proceedings ArticleDOI
03 Mar 2021
TL;DR: The current inability of complex, deep-learning-based machine learning models to establish clear causal links between input data and final decisions limits the provision of exact, human-legible reasons behind specific decisions, making satisfactory, fair and transparent explanations a serious challenge.
Abstract: Can we achieve an adequate level of explanation for complex machine learning models in high-risk AI applications when applying the EU data protection framework? In this article, we address this question, analysing from a multidisciplinary point of view the connection between existing legal requirements for the explainability of AI systems and the current state of the art in the field of explainable AI. We present a case study of a real-life scenario designed to illustrate the application of an AI-based automated decision-making process for the medical diagnosis of COVID-19 patients. The scenario exemplifies the trend towards increasingly complex machine-learning algorithms with growing dimensionality of data and model parameters. Based on this setting, we analyse the challenges of providing human-legible explanations in practice and discuss their legal implications under the General Data Protection Regulation (GDPR). Although it might appear that there is just one single form of explanation in the GDPR, we conclude that the context in which the decision-making system operates requires that several forms of explanation be considered. Thus, we propose to design explanations in multiple forms, depending on: the moment of disclosure of the explanation (either ex ante or ex post); the audience of the explanation (an explanation for an expert or a data controller versus an explanation for the final data subject); the level of granularity (such as general, group-based or individual explanations); and the level of risk that the automated decision poses to fundamental rights and freedoms. Consequently, explanations should embrace this multifaceted environment. Furthermore, we highlight how the current inability of complex, deep-learning-based machine learning models to establish clear causal links between input data and final decisions limits the provision of exact, human-legible reasons behind specific decisions. This makes the provision of satisfactory, fair and transparent explanations a serious challenge. Therefore, there are cases where the quality of possible explanations might not be assessed as an adequate safeguard for automated decision-making processes under Article 22(3) GDPR. Accordingly, we suggest that further research should focus on alternative tools in the GDPR (such as algorithmic impact assessments under Article 35 GDPR or algorithmic lawfulness justifications) that might complement the explanations of automated decision-making.
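To make the multi-dimensional view of explanations described above concrete, here is a minimal Python sketch. It is not taken from the paper; all class and field names are illustrative assumptions. It simply encodes the four dimensions the abstract enumerates (moment, audience, granularity and risk) as a small taxonomy.

```python
# Minimal illustrative sketch (not from the paper): one way to represent the
# explanation dimensions discussed in the abstract as a small Python taxonomy.
from dataclasses import dataclass
from enum import Enum


class Moment(Enum):
    EX_ANTE = "ex ante"          # before the automated decision is made
    EX_POST = "ex post"          # after a specific decision has been made


class Audience(Enum):
    EXPERT_OR_CONTROLLER = "expert/data controller"
    DATA_SUBJECT = "data subject"


class Granularity(Enum):
    GENERAL = "general"          # model-level explanation
    GROUP = "group-based"        # explanation for a cohort of similar cases
    INDIVIDUAL = "individual"    # explanation for a single decision


class RiskLevel(Enum):
    LOW = 1
    HIGH = 2                     # e.g. decisions affecting fundamental rights


@dataclass
class ExplanationRequirement:
    """One concrete explanation obligation derived from the four dimensions."""
    moment: Moment
    audience: Audience
    granularity: Granularity
    risk: RiskLevel


# Example: an ex post, individual-level explanation owed to the patient
# (the data subject) in a high-risk medical diagnosis scenario.
requirement = ExplanationRequirement(
    moment=Moment.EX_POST,
    audience=Audience.DATA_SUBJECT,
    granularity=Granularity.INDIVIDUAL,
    risk=RiskLevel.HIGH,
)
print(requirement)
```

A data controller could, for instance, enumerate the combinations of these dimensions relevant to a given deployment and attach a concrete explanation obligation to each.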

21 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive account is presented of the core and enabling technologies used to build the 5G security model, covering network softwarization security, PHY (physical) layer security and 5G privacy concerns, among others.
Abstract: Security has become a primary concern in many telecommunications industries today, as risks can have high consequences. In particular, as core and enabling technologies are integrated into 5G networks, confidential information will move across all layers of future wireless systems. Several incidents have revealed that the hazards faced by a compromised wireless network not only affect security and privacy but also impede the complex dynamics of the communications ecosystem. Consequently, the complexity and strength of security attacks have increased in the recent past, making the detection or prevention of sabotage a global challenge. From the security and privacy perspectives, this paper presents a comprehensive account of the core and enabling technologies used to build the 5G security model, covering network softwarization security, PHY (physical) layer security and 5G privacy concerns, among others. Additionally, the paper discusses security monitoring and management of 5G networks. It also evaluates the related security measures and standards of core 5G technologies by drawing on different standardization bodies and provides a brief overview of 5G standardization security forces. Furthermore, key projects of international significance that address the security concerns of 5G and beyond are presented. Finally, a section on future directions and open challenges is included to encourage future research.

304 citations

Journal ArticleDOI
TL;DR: The current state of development of PETs in various fields is identified, and it is examined whether existing PETs comply with the latest legal principles and privacy standards and reduce threats to privacy.
Abstract: Internet of Things (IoT) devices have brought much efficiency and convenience to our daily life. However, the devices may collect a myriad of data from people without their consent. Controlling the large amount of data generated by these devices and preventing its misuse is critical to mitigating privacy risks. Therefore, privacy protection for personal data has become an important factor in the development of the IoT. Privacy-enhancing technologies (PETs) can effectively enhance privacy and protect users' personally identifiable information. To date, many researchers have stressed the importance of PETs and proposed solutions relevant to different application fields of the IoT. However, to the best of our knowledge, no research has analyzed PETs in the IoT from the perspectives of privacy threats and privacy legislation. This paper therefore surveys PET solutions in the field of the IoT, filtering the large number of published academic papers down to 120 primary studies published between 2014 and 2017. After collecting the papers, we categorized them based on their functions and the coverage of privacy protection, and analyzed them from different aspects, ranging from the high-level principles of general data protection regulations and the requirements of ISO/IEC 29100:2011 to the actual resolution of privacy threats in the IoT. We thus aim to identify the current state of development of PETs in various fields and to examine whether the existing PETs comply with the latest legal principles and privacy standards and reduce threats to privacy. Finally, recommendations for future research are given based on the results.
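As a rough illustration of the survey's categorisation step, the following Python sketch records primary studies with their PET function, threat coverage and the legal principles they address, then groups them by function. The schema and example entries are assumptions for illustration, not the authors' actual data model.

```python
# Hypothetical sketch of how surveyed PET papers could be recorded and grouped;
# field names, categories and example studies are illustrative assumptions.
from dataclasses import dataclass, field
from collections import Counter


@dataclass
class PrimaryStudy:
    title: str
    year: int                              # the survey covers 2014-2017
    pet_function: str                      # e.g. "anonymisation", "encryption"
    threat_covered: str                    # which privacy threat it addresses
    gdpr_principles: list = field(default_factory=list)
    iso_29100_requirements: list = field(default_factory=list)


studies = [
    PrimaryStudy("Study A", 2015, "anonymisation", "identification",
                 ["data minimisation"], ["consent and choice"]),
    PrimaryStudy("Study B", 2017, "encryption", "eavesdropping",
                 ["integrity and confidentiality"], ["information security"]),
]

# Group the corpus by PET function, mirroring the survey's categorisation step.
print(Counter(s.pet_function for s in studies))
```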

71 citations

Journal ArticleDOI
TL;DR: A pilot open database listing edge cases faced by LA system builders is proposed as a method for guiding ethicists working in the field towards places where support is needed to inform practice, and three edge cases are used to draw attention to underlying tensions.
Abstract: Artificial intelligence and data analysis (AIDA) are increasingly entering the field of education. Within this context, the subfield of learning analytics (LA) has, since its inception, had a strong emphasis upon ethics, with numerous checklists and frameworks proposed to ensure that student privacy is respected and potential harms avoided. Here, we draw attention to some of the assumptions that underlie previous work in ethics for LA, which we frame as three tensions. These assumptions have the potential to lead either to the overcautious underuse of AIDA, as administrators seek to avoid risk, or to the unbridled misuse of AIDA, as practitioners fail to adhere to frameworks that provide them with little guidance on the problems they face in building LA for institutional adoption. We use three edge cases to draw attention to these tensions, highlighting places where existing ethical frameworks fail to inform those building LA solutions. We propose a pilot open database that lists edge cases faced by LA system builders as a method for guiding ethicists working in the field towards places where support is needed to inform their practice. This would provide a middle space where technical builders of systems could more deeply interface with those concerned with policy, law and ethics, and so work towards building LA that encourages human flourishing across a lifetime of learning.
Practitioner Notes
What is already known about this topic:
- Applied ethics has a number of well-established theoretical groundings that we can use to frame the actions of ethical agents, including deontology, consequentialism and virtue ethics.
- Learning analytics has developed a number of checklists, frameworks and evaluation methodologies for supporting trusted and ethical development, but these are often not adhered to by practitioners.
- Laws like the General Data Protection Regulation (GDPR) apply to fields like education, but the complexity of this field can make them difficult to apply.
What this paper adds:
- Evidence of tensions and gaps in existing ethical frameworks and checklists that support the ethical development and implementation of learning analytics.
- A set of three edge cases that demonstrate places where existing work on the ethics of AI in education has failed to provide guidance.
- A "practical ethics" conceptualisation that draws on virtue ethics to support practitioners in building learning analytics systems.
Implications for practice and/or policy:
- Those using AIDA in education should collect and share example edge cases to support the development of practical ethics in the field.
- A multiplicity of ethical approaches is likely to be useful in understanding how to develop and implement learning analytics ethically in practical contexts.
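A hypothetical sketch of what one record in the proposed open database of edge cases might look like is shown below; the schema and field names are assumptions for illustration only, not the authors' design.

```python
# Illustrative sketch only: a minimal record format for the proposed open
# database of learning-analytics edge cases. Field names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EdgeCase:
    title: str
    description: str                      # the practical situation encountered
    stakeholders: List[str]               # e.g. students, tutors, institution
    tension: str                          # which ethical tension it exposes
    frameworks_consulted: List[str] = field(default_factory=list)
    guidance_found: bool = False          # did existing frameworks help?


# A made-up example entry, purely to show the shape of a record.
case = EdgeCase(
    title="Dropout-risk flag shared with tutors",
    description="A predictive flag is visible to tutors before the student is told.",
    stakeholders=["student", "tutor", "institution"],
    tension="transparency vs. avoiding harm from labelling",
    frameworks_consulted=["institutional LA policy", "GDPR Art. 22 guidance"],
    guidance_found=False,
)
print(case.title, "->", case.tension)
```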

70 citations

Journal ArticleDOI
01 Mar 2021
TL;DR: Through a thorough review of recently published research, the paper explores how the existing power sector is reshaping itself in the direction of P2P energy trading with the application of blockchain technology.
Abstract: Renewable-energy resources require overwhelming adoption by the common masses to safeguard the environment from pollution. In this context, the prosumer is an important emerging concept. A prosumer, in simple terms, is one who both consumes and produces electricity and sells it either to the grid or to a neighbour. In the present scenario, peer-to-peer (P2P) energy trading is gaining momentum as a new vista of research and is viewed as a possible way for prosumers to sell energy to neighbours. Enabling P2P energy trading is the only method of making renewable-energy sources popular among the common masses. For making P2P energy trading successful, blockchain technology is sparking considerable interest among researchers. Combined with smart contracts, a blockchain provides secure, tamper-proof records of transactions stored in immutable distributed ledgers. This paper explores, through a thorough review of recently published research, how the existing power sector is reshaping itself in the direction of P2P energy trading with the application of blockchain technology. Various challenges faced by researchers in implementing blockchain technology in the energy sector are discussed. Further, the paper presents various start-ups in the energy-sector domain that use blockchain technology. To give insight into the application of blockchain technology in the energy sector, a case of applying blockchain technology to P2P trading in electric-vehicle charging is discussed. Finally, some possible areas of research in the application of blockchain technology in the energy sector are discussed.
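To illustrate why a hash-linked ledger makes recorded trades tamper-evident, here is a toy Python sketch. It is a simplification under stated assumptions, not the paper's design, and it omits consensus, smart contracts and networking entirely; the prosumer names and prices are made up.

```python
# Toy sketch (assumptions, not the paper's design): hash-chaining P2P energy
# trades to show why a blockchain-style ledger is tamper-evident.
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def add_trade(chain: list, seller: str, buyer: str, kwh: float, price: float) -> None:
    """Append one prosumer-to-neighbour trade, linking it to the previous block."""
    block = {
        "timestamp": time.time(),
        "seller": seller,
        "buyer": buyer,
        "kwh": kwh,
        "price_eur": price,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)


def is_valid(chain: list) -> bool:
    """Any retroactive edit breaks the prev_hash links."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


ledger: list = []
add_trade(ledger, seller="prosumer_A", buyer="neighbour_B", kwh=3.5, price=0.70)
add_trade(ledger, seller="prosumer_A", buyer="ev_charger_C", kwh=8.0, price=1.60)
print(is_valid(ledger))            # True
ledger[0]["kwh"] = 30.0            # tamper with a recorded trade
print(is_valid(ledger))            # False: the chain no longer verifies
```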

50 citations

Journal ArticleDOI
TL;DR: A CES definition and a typology of eight services are proposed, and the results of three spatial models employing crowdsourced data to measure CES on Texel, a coastal island in the Netherlands, are presented.
Abstract: Cultural ecosystem services (CES) are some of the most valuable contributions of ecosystems to human well-being. Nevertheless, these services are often underrepresented in ecosystem service assessments. Defining CES for the purposes of spatial quantification has been challenging because CES have been difficult to model spatially. However, rapid increases in mobile network connectivity and the use of social media have generated huge amounts of crowdsourced data. This offers an opportunity to define and spatially quantify CES. We inventoried established CES conceptualisations and sources of crowdsourced data to propose a CES definition and typology for spatial quantification. Furthermore, we present the results of three spatial models employing crowdsourced data to measure CES on Texel, a coastal island in the Netherlands. Defining CES as information flows best enables service quantification. A general typology of eight services is proposed. The spatial models produced distributions consistent with known areas of cultural importance on Texel. However, user representativeness and measurement uncertainties affect our results, and ethical considerations must also be taken into account. Still, crowdsourced data is a valuable source of information to define and model CES due to the level of detail available. This can encourage the representation of CES in ecosystem service assessments.
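As a simplified illustration of how crowdsourced, geotagged data can be turned into a spatial CES signal, the following Python sketch bins posts into grid cells and counts them per CES type. The coordinates, CES labels and grid resolution are made-up assumptions, and the sketch stands in for the general idea rather than reproducing any of the three models used in the study.

```python
# Minimal sketch under stated assumptions: aggregating geotagged social-media
# posts into a coarse grid as a stand-in for a crowdsourced CES spatial model.
from collections import Counter

# Hypothetical geotagged observations: (latitude, longitude, CES type)
posts = [
    (53.06, 4.81, "recreation"),
    (53.07, 4.83, "aesthetic appreciation"),
    (53.12, 4.88, "recreation"),
    (53.08, 4.79, "recreation"),
]


def cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to a ~0.1-degree grid cell by rounding."""
    return (round(lat, 1), round(lon, 1))


# Count posts per cell per CES type; dense cells suggest culturally valued places.
intensity = Counter((cell(lat, lon), ces) for lat, lon, ces in posts)
for (grid_cell, ces), n in intensity.items():
    print(grid_cell, ces, n)
```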

46 citations