Author

Marie-Jeanne Lesot

Bio: Marie-Jeanne Lesot is an academic researcher from the University of Paris. The author has contributed to research on topics including cluster analysis and fuzzy logic. The author has an h-index of 7 and has co-authored 52 publications receiving 391 citations. Previous affiliations of Marie-Jeanne Lesot include Otto-von-Guericke University Magdeburg and the Centre national de la recherche scientifique.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: An encompassing, self-contained introduction to the foundations of the broad field of fuzzy clustering is presented, with special emphasis on the interpretation of the two most frequently encountered types of gradual cluster assignment: fuzzy and possibilistic membership degrees.
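The contrast emphasized in this TL;DR can be sketched numerically: fuzzy (probabilistic) degrees express the relative share of a point among clusters and sum to 1, while possibilistic degrees express absolute typicality with respect to each cluster. The snippet below uses the standard fuzzy c-means and possibilistic c-means membership formulas for fixed cluster centers; the function names, parameter values and toy data are illustrative assumptions, not material from the paper.

import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    # Probabilistic (FCM) degrees: each row sums to 1 across clusters.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared distances to centers
    d2 = np.maximum(d2, 1e-12)                                     # avoid division by zero
    ratio = d2[:, :, None] / d2[:, None, :]                        # d_ik^2 / d_ij^2
    return 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)

def possibilistic_memberships(X, centers, eta, m=2.0):
    # Possibilistic (PCM) degrees: absolute typicalities, rows need not sum to 1.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return 1.0 / (1.0 + (d2 / eta[None, :]) ** (1.0 / (m - 1.0)))

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
centers = np.array([[0.5, 0.5], [5.0, 5.0]])
print(fuzzy_memberships(X, centers))                                 # relative shares across the two clusters
print(possibilistic_memberships(X, centers, np.array([1.0, 1.0])))   # absolute typicality per cluster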

117 citations

Book ChapterDOI
11 Jun 2018
TL;DR: An inverse classification approach is proposed whose principle consists in determining the minimal changes needed to alter a prediction: in an instance-based framework, given a data point whose classification must be explained, the method identifies a close neighbor classified differently, where the closeness definition integrates a sparsity constraint.
Abstract: In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier, considering the case where no information is available, neither on the classifier itself, nor on the processed data (neither the training nor the test data). It proposes an inverse classification approach whose principle consists in determining the minimal changes needed to alter a prediction: in an instance-based framework, given a data point whose classification must be explained, the proposed method consists in identifying a close neighbor classified differently, where the closeness definition integrates a sparsity constraint. This principle is implemented using observation generation in the Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the proposed approach that can be used to gain knowledge about the classifier.
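The principle can be sketched as follows, in the spirit of the Growing Spheres idea described above; this is an illustrative reimplementation with simplified layer sampling, not the authors' reference code, and predict is assumed to be any black-box function mapping a batch of points to class labels.

import numpy as np

def growing_spheres_counterfactual(predict, x, step=0.1, n_samples=500, seed=0):
    # Sample observations in spherical layers of increasing radius around x until
    # one receives a different prediction, then greedily sparsify the change.
    rng = np.random.default_rng(seed)
    y0 = predict(x.reshape(1, -1))[0]
    low = 0.0
    while True:
        high = low + step
        directions = rng.normal(size=(n_samples, x.size))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(low, high, size=(n_samples, 1))   # simplified (not volume-uniform) layer sampling
        candidates = x + directions * radii
        enemies = candidates[predict(candidates) != y0]       # differently classified observations
        if len(enemies) > 0:
            e = enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
            return sparsify(predict, x, e, y0)
        low = high                                            # grow the sphere and try again

def sparsify(predict, x, e, y0):
    # Revert the smallest feature changes first, as long as the prediction stays altered.
    e = e.copy()
    for i in np.argsort(np.abs(e - x)):
        saved = e[i]
        e[i] = x[i]
        if predict(e.reshape(1, -1))[0] == y0:   # reverting this feature flips the label back: undo it
            e[i] = saved
    return e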

93 citations

Book ChapterDOI
11 May 2007

78 citations

Book ChapterDOI
16 Sep 2019
TL;DR: This paper focuses on the notion of explanation justification, defined as connectedness to ground-truth data, in the context of counterfactuals, and shows that state-of-the-art post-hoc counterfactual approaches can minimize the impact of this risk by generating less local explanations.
Abstract: Post-hoc interpretability approaches, although powerful tools to generate explanations for predictions made by a trained black-box model, have been shown to be vulnerable to issues caused by lack of robustness of the classifier. In particular, this paper focuses on the notion of explanation justification, defined as connectedness to ground-truth data, in the context of counterfactuals. In this work, we explore the extent of the risk of generating unjustified explanations. We propose an empirical study to assess the vulnerability of classifiers and show that the chosen learning algorithm heavily impacts the vulnerability of the model. Additionally, we show that state-of-the-art post-hoc counterfactual approaches can minimize the impact of this risk by generating less local explanations (Source code available at: https://github.com/thibaultlaugel/truce).
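As a rough illustration of the connectedness notion mentioned above, justification can be approximated by checking whether a counterfactual lies in the same epsilon-neighborhood graph component as some ground-truth instance of its predicted class. The sketch below makes that simplifying assumption; eps, the graph construction and the function name are illustrative choices, not the paper's exact procedure.

import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def is_justified(counterfactual, X_train, y_train, predict, eps):
    cf_class = predict(counterfactual.reshape(1, -1))[0]
    ground_truth = X_train[y_train == cf_class]          # same-class ground-truth instances
    points = np.vstack([counterfactual.reshape(1, -1), ground_truth])
    adjacency = (cdist(points, points) <= eps).astype(int)
    _, labels = connected_components(adjacency, directed=False)
    # Justified if the counterfactual shares a connected component with a ground-truth point.
    return bool(np.any(labels[1:] == labels[0]))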

25 citations

01 Jan 2006
TL;DR: An extension of typicality degrees to unsupervised learning is proposed to perform clustering; the resulting algorithm constitutes a Gustafson-Kessel variant and makes it possible to identify ellipsoidal clusters with robustness with respect to outliers.
Abstract: Typicality degrees were defined in supervised learning as a tool to build characteristic representatives for data categories. In this paper, an extension of these typicality degrees to unsupervised learning is proposed to perform clustering. The proposed algorithm constitutes a Gustafson-Kessel variant and makes it possible to identify ellipsoidal clusters with robustness with respect to outliers.
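For concreteness, the supervised typicality degrees the abstract starts from can be sketched with common simplifying choices (a Gaussian-kernel resemblance and an arithmetic-mean aggregation); the exact definitions used in the paper, and its unsupervised, Gustafson-Kessel-style extension, differ in their details.

import numpy as np
from scipy.spatial.distance import cdist

def typicality_degrees(X, y, gamma=1.0, weight=0.5):
    # Assumes at least two categories, each with at least two members.
    sim = np.exp(-gamma * cdist(X, X) ** 2)              # resemblance values in [0, 1]
    T = np.zeros(len(X))
    for i in range(len(X)):
        same = (y == y[i]); same[i] = False               # other members of the point's category
        other = (y != y[i])
        internal_resemblance = sim[i, same].mean()        # how close it is to its own category
        external_dissimilarity = 1.0 - sim[i, other].mean()  # how far it is from the other categories
        T[i] = weight * internal_resemblance + (1 - weight) * external_dissimilarity
    return T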

21 citations


Cited by
Book
10 Dec 1997

2,025 citations

Posted Content
TL;DR: The main advances regarding the use of the Choquet and Sugeno integrals in multi-criteria decision aid over the last decade are reviewed in this paper, mainly a bipolar extension of both the Choquet integral and the Sugeno integral.
Abstract: The main advances regarding the use of the Choquet and Sugeno integrals in multi-criteria decision aid over the last decade are reviewed. They mainly concern a bipolar extension of both the Choquet integral and the Sugeno integral, interesting particular submodels, new learning techniques, a better interpretation of the models and a better use of the Choquet integral in multi-criteria decision aid. Parallel to these theoretical works, the Choquet integral has been applied to many new fields, and several software packages and libraries dedicated to this model have been developed.
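Since this model family rests on the discrete Choquet integral, a minimal worked example may help; the dict-of-frozensets capacity encoding below is one illustrative representation, not a specific library's API.

def choquet(x, capacity):
    # Discrete Choquet integral of the score vector x w.r.t. the set function `capacity`.
    order = sorted(range(len(x)), key=lambda i: x[i])   # criteria sorted by increasing score
    total, previous = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])              # criteria scoring at least x[i]
        total += (x[i] - previous) * capacity[coalition]
        previous = x[i]
    return total

# Capacity on 2 criteria modelling a positive interaction (super-additive pair).
capacity = {
    frozenset(): 0.0,
    frozenset({0}): 0.3,
    frozenset({1}): 0.3,
    frozenset({0, 1}): 1.0,
}
print(choquet([0.4, 0.8], capacity))   # 0.4 * 1.0 + (0.8 - 0.4) * 0.3 = 0.52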

449 citations

Journal ArticleDOI
TL;DR: Experiments with synthetic and real data sets show that the proposed ECM (evidential c-means) algorithm can be considered a promising tool in the field of exploratory statistics.

359 citations

Posted Content
TL;DR: A rubric with desirable properties of counterfactual explanation algorithms is designed, and all currently proposed algorithms are comprehensively evaluated against it, providing easy comparison and comprehension of the advantages and disadvantages of different approaches.
Abstract: Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible to understand by human stakeholders. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we seek to review and categorize research on counterfactual explanations, a specific class of explanation that describes what would have happened had the input to a model been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing to fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.

267 citations

Posted Content
TL;DR: It is shown that class prototypes, obtained using either an encoder or through class specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations.
Abstract: We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or through class-specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image and a tabular dataset, respectively MNIST and Breast Cancer Wisconsin (Diagnostic). The method also eliminates the computational bottleneck that arises from numerical gradient evaluation for black-box models.
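A rough sketch of how a class-specific k-d tree can supply a prototype that guides the search: the nearest training instance of the target class serves as the prototype, and the query point is moved toward it until the prediction flips. The linear interpolation, step count and predict interface are simplifying assumptions, not the method from the paper.

import numpy as np
from sklearn.neighbors import KDTree

def prototype_counterfactual(predict, x, X_train, y_train, target_class, steps=100):
    X_target = X_train[y_train == target_class]
    tree = KDTree(X_target)                              # class-specific k-d tree
    _, idx = tree.query(x.reshape(1, -1), k=1)
    prototype = X_target[idx[0, 0]]                      # nearest instance of the target class
    for t in np.linspace(0.0, 1.0, steps):
        candidate = (1 - t) * x + t * prototype          # move toward the prototype
        if predict(candidate.reshape(1, -1))[0] == target_class:
            return candidate                             # first point reaching the target class
    return prototype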

235 citations