Author

Xavier Rubiralta Costa

Bio: Xavier Rubiralta Costa is an academic researcher. The author has an h-index of 1 and has co-authored 1 publication, which has received 385 citations.

Papers
01 Jan 2018
TL;DR: Part of the conference "Les polítiques d'Open Data / Open Access: Implicacions a la recerca" ("Open Data / Open Access policies: Implications for research"), aimed at researchers and managers of European projects, held on 20 September 2018 at the Universitat Autònoma de Barcelona.
Abstract: Presentation on the UAB's Personal Data Protection Office (Oficina de Protecció de Dades Personals) and its Open Science policy. It formed part of the conference "Les polítiques d'Open Data / Open Access: Implicacions a la recerca" ("Open Data / Open Access policies: Implications for research"), aimed at researchers and managers of European projects, held on 20 September 2018 at the Universitat Autònoma de Barcelona.

665 citations


Cited by
Journal Article
TL;DR: It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
Abstract: We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.

1,500 citations

Journal Article
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Abstract: Machine learning systems are becoming increasingly ubiquitous. These systems' adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. What are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability, focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.

813 citations

Journal Article
TL;DR: This paper aims to provide a comprehensive study of the security and privacy aspects of federated learning (FL) that can help bridge the gap between the current state of federated AI and a future in which mass adoption is possible.

565 citations

Proceedings Article
21 Apr 2020
TL;DR: An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI, and used as a study probe to identify gaps between current XAI algorithmic work and practices to create explainable AI products.
Abstract: A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.
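The question bank described above is essentially a structured mapping from categories of user explainability needs to prototypical questions that can serve as a study probe. The minimal sketch below shows one way such a bank might be represented in code; the categories and question wording are illustrative placeholders, not the taxonomy published in the paper.

```python
# Toy sketch of an XAI question bank: explainability needs grouped by question
# type, each with prototypical questions a user might ask about the AI.
# Categories and questions are hypothetical placeholders for illustration only.

question_bank: dict[str, list[str]] = {
    "Why": ["Why did the AI give this prediction for my case?"],
    "Why not": ["Why did the AI not predict the alternative outcome?"],
    "What if": ["What would the AI predict if this input were different?"],
    "How": ["What features does the AI consider overall?"],
    "Performance": ["How accurate is the AI, and when does it tend to fail?"],
}

def probe_questions(category: str) -> list[str]:
    """Look up the prototypical questions used to probe one explainability need."""
    return question_bank.get(category, [])

if __name__ == "__main__":
    for need, questions in question_bank.items():
        print(f"{need}: {questions[0]}")
```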

371 citations

Journal Article
TL;DR: The SecureBoost framework is shown to be as accurate as other nonfederated gradient tree-boosting algorithms that require centralized data, and thus, it is highly scalable and practical for industrial applications such as credit risk analysis.
Abstract: The protection of user privacy is an important concern in machine learning, as evidenced by the rollout of the General Data Protection Regulation (GDPR) in the European Union (EU) in May 2018. The GDPR is designed to give users more control over their personal data, which motivates us to explore machine learning frameworks for data sharing that do not violate user privacy. To meet this goal, in this paper, we propose a novel lossless privacy-preserving tree-boosting system known as SecureBoost in the setting of federated learning. This federated-learning system allows the learning process to be jointly conducted over multiple parties with partially common user samples but different feature sets, which corresponds to a vertically partitioned data set. An advantage of SecureBoost is that it provides the same level of accuracy as the non-privacy-preserving approach while, at the same time, revealing no information about each private data provider. We formally prove that the SecureBoost framework is as accurate as other non-federated gradient tree-boosting algorithms that concentrate the data in one place. In addition, we describe information leakage during the protocol execution and propose ways to provably reduce it.
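The abstract above describes SecureBoost's vertically partitioned setting: parties share (some of) the same user samples but hold disjoint feature sets, and only the active party holds the labels. The toy sketch below only illustrates that data layout and the per-party split search of gradient tree boosting; it deliberately omits the paper's core contribution (exchanging encrypted gradient statistics so that no raw data or labels leave their owner), and all names in it (party_a, party_b, best_split) are hypothetical.

```python
# Illustrative sketch of vertically partitioned data for federated boosting.
# NOT the SecureBoost protocol: gradients are shared in the clear here purely
# to show how each party searches splits on its own features.

import numpy as np

rng = np.random.default_rng(0)
n = 100

# Party A (active party) holds the label and the "age" feature;
# Party B (passive party) holds the "income" feature for the same user IDs.
party_a = {"age": rng.uniform(18, 70, n)}
party_b = {"income": rng.uniform(20_000, 120_000, n)}
y = (party_a["age"] > 40).astype(float)  # toy label, known only to Party A

# First-order gradients of squared error for the current prediction (0.5).
pred = np.full(n, 0.5)
grad = pred - y

def best_split(feature, grad):
    """Return (threshold, gain) of the best split on one party's feature."""
    best_thr, best_gain = None, -np.inf
    for thr in np.quantile(feature, np.linspace(0.1, 0.9, 9)):
        left, right = grad[feature <= thr], grad[feature > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        # Simplified gain: increase in squared-gradient-mean after splitting
        # (no Hessians or regularization, unlike real gradient boosting).
        gain = (left.sum() ** 2 / len(left)
                + right.sum() ** 2 / len(right)
                - grad.sum() ** 2 / len(grad))
        if gain > best_gain:
            best_thr, best_gain = thr, gain
    return best_thr, best_gain

# Each party proposes its best local split; the active party picks the winner.
candidates = {
    ("A", "age"): best_split(party_a["age"], grad),
    ("B", "income"): best_split(party_b["income"], grad),
}
(owner, feat), (thr, gain) = max(candidates.items(), key=lambda kv: kv[1][1])
print(f"best split: party {owner}, feature {feat}, threshold {thr:.1f}, gain {gain:.3f}")
```

In the actual SecureBoost protocol, the passive party would see only encrypted gradient aggregates rather than the raw gradients used in this sketch.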

321 citations