Other affiliations: École Normale Supérieure, Sidi Mohamed Ben Abdellah University
Bio: Amina Adadi is an academic researcher from Sidi Mohamed Ben Abdellah University. The author has contributed to research in topics: Web service & Semantic Web Stack. The author has an h-index of 5 and has co-authored 9 publications receiving 1285 citations. Previous affiliations of Amina Adadi include École Normale Supérieure & Sidi Mohamed Ben Abdellah University.
17 Sep 2018-IEEE Access
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and that is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.
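The black-box problem described in this abstract is commonly addressed with post-hoc, model-agnostic explanation methods. As a minimal sketch of that family (not a method from the survey itself), the example below computes permutation feature importance for a toy black-box classifier; the model, data, and features are invented for illustration.

```python
import random

# A stand-in "black box": any callable mapping a feature vector to a label.
# This toy model secretly depends only on feature 0.
def black_box(x):
    return 1.0 if x[0] > 0.5 else 0.0

def permutation_importance(model, X, y, n_features):
    """Importance of a feature = drop in accuracy when its column is shuffled."""
    def accuracy(Xs):
        return sum(model(x) == yi for x, yi in zip(Xs, y)) / len(y)
    base = accuracy(X)
    rng = random.Random(0)
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)                     # break the feature-label link
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(base - accuracy(X_perm))
    return importances

# Toy data: the label follows feature 0; feature 1 is pure noise.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1.0 if x[0] > 0.5 else 0.0 for x in X]
imp = permutation_importance(black_box, X, y, 2)
```

Shuffling the feature the model actually uses destroys accuracy, so its importance is large, while the noise feature's importance is zero; this is the kind of indirect explanation such black boxes admit.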
01 Oct 2015
TL;DR: This paper presents a dynamic approach for semantically composing e-Government Web services based on Artificial Intelligence (AI) techniques, aiming to improve the citizen-centric e-Government vision by providing a platform for automatically discovering, composing and optimizing e-Government services.
Abstract: A major propelling technology for electronic government (e-Government) is the powerful concept of Semantic Web Service. Semantically enriched Web services promise to increase the level of automation and to reduce integration efforts significantly. On the other hand, and due to the heterogeneous structure of the public sector, the achievement of interoperability and integration is a key challenge for a comprehensive e-Government. Therefore, the combination of e-Government and Semantic Web Services is very much natural. In this paper, we present a dynamic approach for semantically composing e-Government Web services based on Artificial Intelligence (AI) techniques. The overall objective of our approach is to improve the citizen centric e-Government vision by providing a platform for automatically discovering, composing and optimizing e-Government services.
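The abstract does not spell out the composition algorithm, so the sketch below shows only a generic forward-chaining composition, one common AI-planning-style approach: starting from the citizen's known inputs, repeatedly apply any service whose inputs are satisfied until the goal output is produced. The service names and input/output types are invented for illustration.

```python
# Each service is described semantically by its required inputs and produced
# outputs (hypothetical e-Government services, not from the paper).
services = {
    "VerifyIdentity": ({"national_id"}, {"verified_id"}),
    "FetchTaxRecord": ({"verified_id"}, {"tax_record"}),
    "IssueCertificate": ({"tax_record"}, {"tax_certificate"}),
}

def compose(available, goal):
    """Forward-chain services until the goal concept is derivable."""
    plan, known = [], set(available)
    changed = True
    while changed and goal not in known:
        changed = False
        for name, (ins, outs) in services.items():
            if name not in plan and ins <= known:
                plan.append(name)      # service is applicable: add to the plan
                known |= outs          # its outputs become available
                changed = True
    return plan if goal in known else None

plan = compose({"national_id"}, "tax_certificate")
```

Given only a national ID, the planner chains the three services into an end-to-end composition; a richer matcher would compare inputs and outputs via ontology subsumption rather than exact set containment.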
08 Apr 2020
TL;DR: This paper reflects on recent investigations about the interpretability and explainability of artificial intelligence methods and discusses their impact on medicine and healthcare.
Abstract: As artificial intelligence penetrates deeper into work and personal life, it raises questions about trust and transparency. These questions are of greater consequence in healthcare where decisions are literally a matter of life and death. In this paper, we reflect on recent investigations about the interpretability and explainability of artificial intelligence methods and discuss their impact on medicine and healthcare.
01 Dec 2021-Journal of Big Data
TL;DR: In this paper, the authors present a comprehensive review of existing data-efficient methods and systematize them into four categories: using non-supervised algorithms that are by nature more data-efficient, creating artificially more data, transferring knowledge from rich-data domains into poor-data domains, and altering data-hungry algorithms to reduce their dependency on the number of samples.
Abstract: The leading approaches in Machine Learning are notoriously data-hungry. Unfortunately, many application domains do not have access to big data because acquiring data involves a process that is expensive or time-consuming. This has triggered a serious debate in both the industrial and academic communities calling for more data-efficient models that harness the power of artificial learners while achieving good results with less training data and, in particular, less human supervision. In light of this debate, this work investigates the issue of algorithms' data hungriness. First, it surveys the issue from different perspectives. Then, it presents a comprehensive review of existing data-efficient methods and systematizes them into four categories. Specifically, the survey covers solution strategies that handle data-efficiency by (i) using non-supervised algorithms that are, by nature, more data-efficient, (ii) creating artificially more data, (iii) transferring knowledge from rich-data domains into poor-data domains, or (iv) altering data-hungry algorithms to reduce their dependency upon the amount of samples, so that they can perform well in the small-sample regime. Each strategy is extensively reviewed and discussed. In addition, the emphasis is put on how the four strategies interplay with each other in order to motivate the exploration of more robust and data-efficient algorithms. Finally, the survey delineates the limitations, discusses research challenges, and suggests future opportunities to advance research on data-efficiency in machine learning.
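Of the four strategies, (ii) creating artificially more data is the easiest to illustrate concretely. The sketch below augments a tiny labelled dataset with noise-jittered copies of each sample, a label-preserving transformation; the dataset, noise scale, and copy count are assumptions for illustration, not choices from the survey.

```python
import random

def augment(samples, labels, copies=4, noise=0.05, seed=0):
    """Return the original data plus `copies` jittered variants per sample."""
    rng = random.Random(seed)
    out_x, out_y = list(samples), list(labels)
    for x, y in zip(samples, labels):
        for _ in range(copies):
            # Gaussian jitter: small enough that the class is assumed unchanged
            out_x.append([v + rng.gauss(0.0, noise) for v in x])
            out_y.append(y)
    return out_x, out_y

X = [[0.1, 0.9], [0.8, 0.2]]
y = ["a", "b"]
X_aug, y_aug = augment(X, y)
```

Two samples become ten, at the cost of assuming the jitter never crosses a class boundary; image-domain analogues are flips, crops, and rotations.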
02 Apr 2019-Advances in Bioinformatics
TL;DR: This work presents the results gleaned through a systematic review of prominent gastroenterology literature using machine learning techniques; it delimits the scope of application, discusses current limitations including bias, lack of transparency, accountability, and data availability, and puts forward future avenues.
Abstract: Machine learning has undergone a transition phase from being a pure statistical tool to being one of the main drivers of modern medicine. In gastroenterology, this technology is motivating a growing number of studies that rely on these innovative methods to deal with critical issues related to this practice. Hence, in the light of the burgeoning research on the use of machine learning in gastroenterology, a systematic review of the literature is timely. In this work, we present the results gleaned through a systematic review of prominent gastroenterology literature using machine learning techniques. Based on the analysis of 88 journal articles, we delimit the scope of application, we discuss current limitations including bias, lack of transparency, accountability, and data availability, and we put forward future avenues.
01 Jun 2020-Information Fusion
TL;DR: In this paper, a taxonomy of recent contributions related to the explainability of different machine learning models is presented, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail.
Abstract: In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already done in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core.
Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
01 Jan 2007
TL;DR: A translation apparatus is provided which comprises an inputting section for inputting a source document in a natural language and a layout analyzing section for analyzing layout information.
Abstract: A translation apparatus is provided which comprises: an inputting section for inputting a source document in a natural language; a layout analyzing section for analyzing layout information including cascade information, itemization information, numbered itemization information, labeled itemization information and separator line information in the source document inputted by the inputting section and specifying a translation range on the basis of the layout information; a translation processing section for translating a source document text in the specified translation range into a second language; and an outputting section for outputting a translated text provided by the translation processing section.
Swansea University, University of Bradford, Loughborough University, University of Bedfordshire, Prin. L. N. Welingkar Institute of Management Development and Research, Aston University, University of Edinburgh, Indian Institute of Technology Delhi, Delft University of Technology, Copenhagen Business School, Norwich University, Government of Tamil Nadu, University of Greenwich, Indian Institute of Management Tiruchirappalli, Symbiosis International University, University of Essex, University of the West of England, Capgemini
TL;DR: This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.
Abstract: As far back as the industrial revolution, significant development in technical innovation has succeeded in transforming numerous manual tasks and processes that had been in existence for decades where humans had reached the limits of physical capacity. Artificial Intelligence (AI) offers this same transformative potential for the augmentation and potential replacement of human tasks and activities within a wide range of industrial, intellectual and social applications. The pace of change for this new AI technological age is staggering, with new breakthroughs in algorithmic machine learning and autonomous decision-making engendering new opportunities for continued innovation. The impact of AI could be significant, with industries ranging from finance, healthcare, manufacturing, retail, supply chain, and logistics to utilities all potentially disrupted by the onset of AI technologies. The study brings together the collective insight from a number of leading expert contributors to highlight the significant opportunities, realistic assessment of impact, challenges and potential research agenda posed by the rapid emergence of AI within a number of domains: business and management, government, public sector, and science and technology. This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.
26 Jul 2019-Electronics
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Abstract: Machine learning systems are becoming increasingly ubiquitous. These systems' adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess the explanation quality. What are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work on this field.
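One metric that recurs in this literature is fidelity: how often an interpretable surrogate reproduces the black box's decisions on the data of interest. The sketch below computes it for a toy opaque model and a one-feature surrogate rule; both models and the data are assumptions for illustration, not metrics or models defined in this article.

```python
import random

def fidelity(black_box, surrogate, X):
    """Fraction of inputs on which the surrogate agrees with the black box."""
    agree = sum(black_box(x) == surrogate(x) for x in X)
    return agree / len(X)

# Toy models: an "opaque" two-feature rule and a simpler explainable proxy.
black = lambda x: x[0] + 0.3 * x[1] > 0.5   # stand-in for a complex model
glass = lambda x: x[0] > 0.5                # human-readable surrogate rule

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(1000)]
score = fidelity(black, glass, X)
```

A fidelity near 1.0 says the simple rule is a faithful explanation on this distribution; a low score warns that the surrogate's story diverges from what the black box actually does.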
01 Dec 2019
TL;DR: GNNExplainer, presented in this paper, identifies a compact subgraph structure and a small subset of node features that play a crucial role in a GNN's prediction, and it generates consistent and concise explanations for an entire class of instances.
Abstract: Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models and explaining predictions made by GNNs remains unsolved. Here we propose GnnExplainer, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GnnExplainer identifies a compact subgraph structure and a small subset of node features that have a crucial role in GNN's prediction. Further, GnnExplainer can generate consistent and concise explanations for an entire class of instances. We formulate GnnExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GnnExplainer provides a variety of benefits, from the ability to visualize semantically relevant structures to interpretability, to giving insights into errors of faulty GNNs.
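GnnExplainer itself learns a continuous edge mask by gradient-based mutual-information maximization; the sketch below substitutes a much simpler occlusion heuristic (score each edge by how much removing it changes the model's output) just to illustrate the idea of attributing a prediction to a small subgraph. The one-step neighbour-averaging "GNN", graph, and features are toy assumptions, not the paper's model.

```python
def predict(node_feats, edges, target):
    """Toy one-step GNN: the target's score is the mean of its neighbours' features."""
    neigh = [v for u, v in edges if u == target] + [u for u, v in edges if v == target]
    if not neigh:
        return node_feats[target]
    return sum(node_feats[n] for n in neigh) / len(neigh)

def edge_importance(node_feats, edges, target):
    """Occlusion: importance of an edge = output change when that edge is dropped."""
    base = predict(node_feats, edges, target)
    scores = {}
    for e in edges:
        reduced = [f for f in edges if f != e]
        scores[e] = abs(base - predict(node_feats, reduced, target))
    return scores

feats = {0: 1.0, 1: 0.0, 2: 0.0, 3: 5.0}   # node 3 carries the dominant signal
edges = [(0, 1), (0, 2), (0, 3)]
scores = edge_importance(feats, edges, target=0)
```

The edge to the high-signal node dominates the scores, so the "explanation" for node 0's prediction is the small subgraph containing that edge; the real method optimizes a soft mask over all edges jointly rather than occluding them one at a time.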