Author

Martin Pereira-Fariña

Bio: Martin Pereira-Fariña is an academic researcher at the University of Santiago de Compostela. His research topics include fuzzy sets and fuzzy logic. He has an h-index of 6 and has co-authored 25 publications receiving 167 citations. Previous affiliations of Martin Pereira-Fariña include the University of Dundee and the Spanish National Research Council.

Papers
Journal ArticleDOI
TL;DR: In this article, a systematic literature review of contrastive and counterfactual explanations of artificial intelligence algorithms is presented, which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study.
Abstract: A number of algorithms in the field of artificial intelligence produce poorly interpretable decisions. To disclose the reasoning behind such algorithms, their output can be explained by means of so-called evidence-based (or factual) explanations. Alternatively, contrastive and counterfactual explanations justify, respectively, why the output of the algorithms is not any different and how it could be changed. It is of crucial importance to bridge the gap between theoretical approaches to contrastive and counterfactual explanation and the corresponding computational frameworks. In this work we conduct a systematic literature review which provides readers with a thorough and reproducible analysis of this interdisciplinary research field. We first examine the theoretical foundations of contrastive and counterfactual accounts of explanation. Then, we report the state-of-the-art computational frameworks for contrastive and counterfactual explanation generation. In addition, we analyze how well such frameworks are grounded in the insights from the inspected theoretical approaches. As a result, we highlight a variety of properties of the approaches under study and reveal a number of their shortcomings. Moreover, we define a taxonomy covering both theoretical and practical approaches to contrastive and counterfactual explanation.

176 citations

Journal ArticleDOI
01 Sep 2013
TL;DR: An application for the linguistic description of driving activity in a simulation environment has been developed; a relevance analysis is then performed to compile the most representative and suitable statements into a final report.
Abstract: Linguistic data summarization targets the description of patterns emerging in data by means of linguistic expressions. Just as human beings do, computers can use natural language to represent and fuse heterogeneous data in a multi-criteria decision-making environment. Linguistic data description is particularly well suited to applications in which data must be understood at different levels of expertise or in which human-computer interaction is involved. In this paper, an application for the linguistic description of driving activity in a simulation environment has been developed. To ensure safe driving practices, all new onboard devices in transportation systems need to be evaluated; the work performed in this application paper will be used for the automatic evaluation of such devices. Based on fuzzy logic, and as a contribution to the Computational Theory of Perceptions, the proposed solution is part of our research on granular linguistic models of phenomena. The application generates a set of valid sentences describing the quality of driving. A relevance analysis is then performed to compile the most representative and suitable statements into a final report. Real time-series data from a vehicle simulator have been used to evaluate the performance of the presented application in the framework of a real project.
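To make the idea of a linguistic summary concrete, here is a minimal sketch in the style of Yager-type quantified propositions (all membership functions and thresholds below are invented for illustration; this is not the paper's system). It scores the truth of the statement "most of the time, speed was high" over a series of speed readings:

```python
# Toy linguistic-summary scorer (invented membership functions, not the
# paper's system). Scores the quantified proposition
# "most of the time, speed was high" over a series of speed readings.

def mu_high_speed(v_kmh):
    # Trapezoidal membership in "high speed": 0 below 80 km/h,
    # 1 above 110 km/h, linear in between (numbers are assumptions).
    if v_kmh <= 80:
        return 0.0
    if v_kmh >= 110:
        return 1.0
    return (v_kmh - 80) / 30

def mu_most(proportion):
    # Fuzzy quantifier "most": fully true from 0.8 up, false below 0.3.
    if proportion <= 0.3:
        return 0.0
    if proportion >= 0.8:
        return 1.0
    return (proportion - 0.3) / 0.5

def truth_of_summary(speeds):
    # Sigma-count of "high" memberships, normalized by series length,
    # then passed through the quantifier.
    sigma_count = sum(mu_high_speed(v) for v in speeds)
    return mu_most(sigma_count / len(speeds))

print(truth_of_summary([120, 115, 95, 130, 70, 125]))  # ≈ 0.9
```

A relevance analysis of the kind the paper describes could then rank many candidate sentences by such truth degrees and keep only the best-scoring ones for the final report.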

50 citations

Journal ArticleDOI
TL;DR: A new approximate syllogistic reasoning schema is described that expands approaches from the literature in two ways: it considers several types of quantifiers taken from the Theory of Generalized Quantifiers as well as similarity quantifiers, and it allows any number of premises to be taken into account within the reasoning process.
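As a toy illustration of proportional syllogistic reasoning (a crisp simplification on invented finite sets, not the paper's approximate schema), the pattern "most A are B; all B are C; therefore most A are C" can be checked directly, reading "most" as a proportion of at least 0.8:

```python
# Crisp toy check of a proportional syllogism (not the paper's fuzzy
# schema): "most A are B; all B are C; therefore most A are C",
# with "most" read as a proportion >= 0.8 on explicit finite sets.
A = {1, 2, 3, 4, 5}
B = {1, 2, 3, 4, 9}
C = {1, 2, 3, 4, 9, 10}

def proportion(of, among):
    # Fraction of `among` that also lies in `of`.
    return len(of & among) / len(among)

def most(p):
    return p >= 0.8

premise_1 = most(proportion(B, A))   # most A are B: 4/5
premise_2 = B <= C                   # all B are C (subset test)
conclusion = most(proportion(C, A))  # most A are C: 4/5
print(premise_1, premise_2, conclusion)  # True True True
```

The inference is sound here because B ⊆ C implies A ∩ B ⊆ A ∩ C, so the proportion cannot drop; the paper's schema generalizes this kind of pattern to fuzzy and similarity quantifiers and to arbitrarily many premises.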

25 citations

Proceedings ArticleDOI
01 Jul 2020
TL;DR: Experimental results show that unification of factual and counterfactual explanations under the paradigm of fuzzy inference systems proves promising for explaining the reasoning of classification algorithms.
Abstract: Data-driven classification algorithms have proven highly effective in a range of complex tasks. However, their output is sometimes questioned, as the reasoning behind it may remain unclear due to the high number of poorly interpretable parameters used during training. Evidence-based (factual) explanations for single classifications answer the question of why a particular class is selected in terms of the given observations. In contrast, counterfactual explanations address why the other classes are not selected. Accordingly, we hypothesize that providing classifiers with a combination of both factual and counterfactual explanations is likely to make them more trustworthy. To investigate how such explanations can be produced, we introduce a new method to generate factual and counterfactual explanations for the output of pretrained decision trees and fuzzy rule-based classifiers. Experimental results show that unifying factual and counterfactual explanations under the paradigm of fuzzy inference systems is promising for explaining the reasoning of classification algorithms.
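A minimal sketch of the two explanation styles for a decision tree (a hand-coded toy tree with invented feature names and thresholds, not the method proposed in the paper): the factual explanation collects the tests satisfied along the decision path, while the counterfactual lists the negated tests that would divert the example toward another class.

```python
# Toy factual/counterfactual explainer for a hand-coded decision tree
# (feature names and thresholds are invented; this is not the paper's
# method). Facts are the tests satisfied on the decision path; counters
# are the negated tests that would divert the example elsewhere.
TREE = {
    "feature": "petal_width", "threshold": 0.8,
    "left": {"leaf": "setosa"},
    "right": {
        "feature": "petal_length", "threshold": 4.9,
        "left": {"leaf": "versicolor"},
        "right": {"leaf": "virginica"},
    },
}

def explain(x, node=TREE, facts=None, counters=None):
    facts = [] if facts is None else facts
    counters = [] if counters is None else counters
    if "leaf" in node:
        return node["leaf"], facts, counters
    f, t = node["feature"], node["threshold"]
    if x[f] <= t:
        facts.append(f"{f} <= {t}")    # why this branch was taken
        counters.append(f"{f} > {t}")  # what would divert the example
        return explain(x, node["left"], facts, counters)
    facts.append(f"{f} > {t}")
    counters.append(f"{f} <= {t}")
    return explain(x, node["right"], facts, counters)

label, facts, counters = explain({"petal_width": 1.3, "petal_length": 4.2})
print(label)     # versicolor
print(facts)     # ['petal_width > 0.8', 'petal_length <= 4.9']
print(counters)  # ['petal_width <= 0.8', 'petal_length > 4.9']
```

In a fuzzy rule-based classifier the crisp thresholds above would become degrees of rule firing, which is what allows the paper to unify both explanation styles under one inference paradigm.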

23 citations

Book ChapterDOI
TL;DR: In this chapter, the Fuzzy Unordered Rule Induction Algorithm (FURIA) is made self-explaining so that it generates evidence-based (factual) and counterfactual explanations for single classifications.
Abstract: In this chapter, we describe how to generate fuzzy systems that are not only interpretable but also self-explaining. Such systems are expected to manage information granules naturally, as humans do. We take as a starting point the Fuzzy Unordered Rule Induction Algorithm (FURIA for short), which produces a good interpretability-accuracy trade-off. FURIA rules have local semantics and manage information granules without linguistic interpretability. To make FURIA rules self-explaining, we have created a linguistic layer that endows FURIA with global semantics and linguistic interpretability. Explainable FURIA rules provide users with evidence-based (factual) and counterfactual explanations for single classifications. Factual explanations answer the question of why a particular class is selected in terms of the given observations; counterfactual explanations address why the other classes are not selected. Thus, endowing FURIA rules with the capability to generate a combination of both factual and counterfactual explanations is likely to make them more trustworthy. We illustrate how to build self-explaining FURIA classifiers in two practical use cases: beer style classification and vehicle classification. Experimental results are encouraging. The generated classifiers exhibit accuracy comparable to a black-box classifier such as Random Forest, and their explainability is comparable to that of white-box classifiers designed with the Highly Interpretable Linguistic Knowledge (HILK for short) fuzzy modeling methodology.
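The idea of a linguistic layer can be pictured with a small sketch (invented trapezoids and rule interval; not FURIA's actual procedure): a numeric rule antecedent such as the interval [4.9, 6.0] is relabelled with the global linguistic term that fits it best, here scored by average membership over the interval.

```python
# Toy "linguistic layer" (invented trapezoids and rule interval; not
# FURIA itself): a numeric rule interval is relabelled with the global
# linguistic term that fits it best, scored by average membership.
# Each term is a trapezoid (a, b, c, d) with membership 1 on [b, c].
TERMS = {
    "short":  (0.0, 0.0, 2.0, 3.0),
    "medium": (2.0, 3.0, 4.5, 5.5),
    "long":   (4.5, 5.5, 7.0, 7.0),
}

def mu(trap, x):
    a, b, c, d = trap
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def best_label(lo, hi, n=50):
    # Average each term's membership over n samples of [lo, hi]
    # and pick the best-fitting term.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    scores = {name: sum(mu(t, x) for x in xs) / n
              for name, t in TERMS.items()}
    return max(scores, key=scores.get)

print(best_label(4.9, 6.0))  # long
```

The relabelled rule can then be verbalized globally ("IF petal length is long THEN ...") while the underlying numeric interval keeps the local semantics that give FURIA its accuracy.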

13 citations


Cited by


Posted Content
TL;DR: This survey presents a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality.
Abstract: Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality.

137 citations