Author

Amina Adadi

Bio: Amina Adadi is an academic researcher from SIDI. The author has contributed to research in topics: Web service & Semantic Web Stack. The author has an h-index of 5, has co-authored 9 publications, and has received 1285 citations. Previous affiliations of Amina Adadi include École Normale Supérieure & Sidi Mohamed Ben Abdellah University.

Papers
Journal ArticleDOI
Amina Adadi1, Mohammed Berrada1
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but they cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.

2,258 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a comprehensive review of existing data-efficient methods and systematize them into four categories: using non-supervised algorithms that are inherently more data-efficient, creating more data artificially, transferring knowledge from rich-data domains into poor-data domains, and altering data-hungry algorithms to reduce their dependency on the number of samples.
Abstract: The leading approaches in Machine Learning are notoriously data-hungry. Unfortunately, many application domains do not have access to big data because acquiring data involves a process that is expensive or time-consuming. This has triggered a serious debate in both the industrial and academic communities calling for more data-efficient models that harness the power of artificial learners while achieving good results with less training data and, in particular, less human supervision. In light of this debate, this work investigates the issue of algorithms' data hungriness. First, it surveys the issue from different perspectives. Then, it presents a comprehensive review of existing data-efficient methods and systematizes them into four categories. Specifically, the survey covers solution strategies that handle data-efficiency by (i) using non-supervised algorithms that are, by nature, more data-efficient, (ii) creating artificially more data, (iii) transferring knowledge from rich-data domains into poor-data domains, or (iv) altering data-hungry algorithms to reduce their dependency upon the amount of samples, in a way that they can perform well in the small-samples regime. Each strategy is extensively reviewed and discussed. In addition, the emphasis is put on how the four strategies interplay with each other in order to motivate exploration of more robust and data-efficient algorithms. Finally, the survey delineates the limitations, discusses research challenges, and suggests future opportunities to advance research on data-efficiency in machine learning.
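The four strategies above are described at survey level only. As a concrete illustration of strategy (ii), creating artificially more data, here is a minimal noise-based augmentation sketch. This is an illustrative example, not a method taken from the paper; the function name and parameters are hypothetical.

```python
import random

def augment_with_noise(samples, copies=3, sigma=0.05, seed=0):
    """Create extra training samples by jittering numeric features
    with Gaussian noise -- one simple instance of the 'create
    artificially more data' strategy for small-sample regimes."""
    rng = random.Random(seed)
    augmented = list(samples)  # keep the originals
    for _ in range(copies):
        for x in samples:
            augmented.append([v + rng.gauss(0.0, sigma) for v in x])
    return augmented

original = [[0.2, 0.7], [0.9, 0.1]]
bigger = augment_with_noise(original)
print(len(bigger))  # 2 originals + 3 noisy copies each = 8
```

In practice, the noise scale (here `sigma`) must be tuned so that augmented samples remain plausible members of the data distribution; domain-specific augmentations (crops and flips for images, synonym substitution for text) follow the same principle.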

65 citations

Book ChapterDOI
Amina Adadi1, Mohammed Berrada1
08 Apr 2020
TL;DR: This paper reflects on recent investigations about the interpretability and explainability of artificial intelligence methods and discusses their impact on medicine and healthcare.
Abstract: As artificial intelligence penetrates deeper into work and personal life, it raises questions about trust and transparency. These questions are of greater consequence in healthcare where decisions are literally a matter of life and death. In this paper, we reflect on recent investigations about the interpretability and explainability of artificial intelligence methods and discuss their impact on medicine and healthcare.

61 citations

Proceedings ArticleDOI
01 Oct 2015
TL;DR: This paper presents a dynamic approach for semantically composing e-Government Web services based on Artificial Intelligence (AI) techniques to improve the citizen centric e- government vision by providing a platform for automatically discovering, composing and optimizing e-government services.
Abstract: A major propelling technology for electronic government (e-Government) is the powerful concept of Semantic Web Service. Semantically enriched Web services promise to increase the level of automation and to reduce integration efforts significantly. On the other hand, and due to the heterogeneous structure of the public sector, the achievement of interoperability and integration is a key challenge for a comprehensive e-Government. Therefore, the combination of e-Government and Semantic Web Services is very much natural. In this paper, we present a dynamic approach for semantically composing e-Government Web services based on Artificial Intelligence (AI) techniques. The overall objective of our approach is to improve the citizen centric e-Government vision by providing a platform for automatically discovering, composing and optimizing e-Government services.

14 citations

Proceedings ArticleDOI
23 Oct 2019
TL;DR: The goal of this work is to improve the explainability of recommender systems by using a knowledge extraction method.
Abstract: Most current Machine Learning based recommender systems act like black boxes, offering the user no insight into the system logic and no justification for the recommendations, thus risking users' trust and acceptance. The goal of this work is to improve the explainability of recommender systems by using a knowledge extraction method.

14 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, a taxonomy of recent contributions related to explainability of different machine learning models, including those aimed at explaining Deep Learning methods, is presented, and a second dedicated taxonomy is built and examined in detail.

2,827 citations

Posted Content
TL;DR: Previous efforts to define explainability in Machine Learning are summarized, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models are proposed.
Abstract: In the last years, Artificial Intelligence (AI) has achieved a notable momentum that may deliver the best of expectations over many application sectors across the field. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the last hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.

1,602 citations

Journal ArticleDOI
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Abstract: Machine learning systems are becoming increasingly ubiquitous. The adoption of these systems has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. Which are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability, focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.

813 citations

Journal ArticleDOI
TL;DR: This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.

808 citations

01 Jan 2007
TL;DR: A translation apparatus is provided which comprises an inputting section for inputting a source document in a natural language and a layout analyzing section for analyzing layout information.
Abstract: A translation apparatus is provided which comprises: an inputting section for inputting a source document in a natural language; a layout analyzing section for analyzing layout information including cascade information, itemization information, numbered itemization information, labeled itemization information and separator line information in the source document inputted by the inputting section and specifying a translation range on the basis of the layout information; a translation processing section for translating a source document text in the specified translation range into a second language; and an outputting section for outputting a translated text provided by the translation processing section.

740 citations