
# Thierry Denœux

Bio: Thierry Denœux is an academic researcher at the University of Technology of Compiègne. He has contributed to research on topics including Dempster–Shafer theory and the transferable belief model. He has an h-index of 12 and has co-authored 13 publications receiving 1,222 citations.

##### Papers



TL;DR: Experiments with synthetic and real data sets show that the proposed ECM (evidential c-means) algorithm can be considered a promising tool in the field of exploratory statistics.

359 citations


TL;DR: This paper extends the theory of belief functions by introducing new concepts and techniques that make it possible to model situations in which the beliefs held by a rational agent may be expressed (or may only be known) with some imprecision.

163 citations


TL;DR: An extension of the discounting operation is proposed, allowing more detailed information about the reliability of the source to be used in different contexts, i.e., conditionally on different hypotheses regarding the variable of interest.

153 citations
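For context, the operation this paper extends is classical (uniform) discounting, in which a single reliability coefficient shifts part of every focal set's mass to the whole frame. The sketch below illustrates that classical operation only, not the contextual extension described in the TL;DR; the frame, mass values, and function name are illustrative.

```python
def discount(m, alpha, frame):
    """Classical Shafer discounting of a mass function.

    With probability alpha the source is unreliable, so a fraction
    alpha of each focal set's mass is transferred to the whole
    frame (total ignorance). m maps frozenset focal sets to masses.
    """
    frame = frozenset(frame)
    out = {}
    for focal, w in m.items():
        out[focal] = out.get(focal, 0.0) + (1.0 - alpha) * w
    # The discounted mass lands on the full frame.
    out[frame] = out.get(frame, 0.0) + alpha
    return out

# Illustrative mass function on the frame {a, b}.
m = {frozenset("a"): 0.8, frozenset("ab"): 0.2}
m_disc = discount(m, 0.1, "ab")
# → {frozenset({'a'}): 0.72, frozenset({'a', 'b'}): 0.28}
```

The contextual extension proposed in the paper replaces the single coefficient `alpha` with reliability coefficients that depend on the hypothesis under consideration.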


TL;DR: Using the generalized Bayesian theorem, an extension of Bayes' theorem in the belief function framework, a criterion generalizing the likelihood function is derived, demonstrating the ability of this approach to exploit partial information about class labels.

130 citations


TL;DR: Experiments with simulated data show that correct detection rates over 99% and correct localization rates over 92% can be achieved using this approach, which represents a major improvement over the state-of-the-art reference method.

106 citations

##### Cited by



TL;DR: In this survey, label noise refers to mislabeled instances; no additional information, such as confidences on labels, is assumed to be available.

Abstract: Label noise is an important issue in classification, with many potential negative consequences. For example, the accuracy of predictions may decrease, whereas the complexity of inferred models and the number of necessary training samples may increase. Many works in the literature have been devoted to the study of label noise and the development of techniques to deal with it. However, the field lacks a comprehensive survey on the different types of label noise, their consequences, and the algorithms that take label noise into account. This paper proposes to fill this gap. First, the definitions and sources of label noise are considered, and a taxonomy of the types of label noise is proposed. Second, the potential consequences of label noise are discussed. Third, label noise-robust, label noise cleansing, and label noise-tolerant algorithms are reviewed. For each category of approaches, a short discussion is provided to help practitioners choose the most suitable technique for their own particular field of application. Finally, the design of experiments is also discussed, which may interest researchers who would like to test their own algorithms. In this paper, label noise consists of mislabeled instances: no additional information, such as confidences on labels, is assumed to be available.

1,440 citations


01 Apr 2002

TL;DR: This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.

Abstract: Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. Dempster-Shafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.

1,033 citations
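The abstract above centers on combining evidence from multiple sources; the best-known such rule is Dempster's rule of combination, in which mass assigned to the empty set (conflict) is discarded and the remainder renormalized. The following is a minimal sketch of that rule only, not of the other combination rules the report surveys; the frame and mass values are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    using Dempster's rule: products of masses are assigned to the
    intersection of their focal sets, conflicting mass (empty
    intersection) is discarded, and the result is renormalized."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Illustrative frame {a, b, c}; focal sets are frozensets so they
# can serve as dictionary keys.
m1 = {frozenset("ab"): 0.6, frozenset("abc"): 0.4}
m2 = {frozenset("b"): 0.7, frozenset("c"): 0.3}
fused = dempster_combine(m1, m2)
# Conflict here is 0.6 * 0.3 = 0.18, so masses are divided by 0.82.
```

The handling of conflict is precisely where the surveyed rules differ: alternatives such as Yager's or Dubois–Prade's rules redistribute the conflicting mass instead of normalizing it away.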


01 Mar 2006

TL;DR: A generic rule-base inference methodology using the evidential reasoning approach (RIMER) is proposed; it is capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case.

Abstract: In this paper, a generic rule-base inference methodology using the evidential reasoning (RIMER) approach is proposed. Existing knowledge-base structures are first examined, and knowledge representation schemes under uncertainty are then briefly analyzed. Based on this analysis, a new knowledge representation scheme in a rule base is proposed using a belief structure. In this scheme, a rule base is designed with belief degrees embedded in all possible consequents of a rule. Such a rule base is capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case. Other knowledge representation parameters such as the weights of both attributes and rules are also investigated in the scheme. In an established rule base, an input to an antecedent attribute is transformed into a belief distribution. Subsequently, inference in such a rule base is implemented using the evidential reasoning (ER) approach. The scheme is further extended to inference in hierarchical rule bases. A numerical study is provided to illustrate the potential applications of the proposed methodology.

606 citations


TL;DR: The ER approach is used to aggregate multiple environmental factors, producing an aggregated distributed assessment for each alternative policy, and a new analytical ER algorithm is investigated that provides a means of using the ER approach in decision situations where an explicit ER aggregation function is needed.

468 citations


TL;DR: This paper provides an introduction to the topic of uncertainty in machine learning, as well as an overview of attempts so far at handling uncertainty in general and at formalizing the distinction between aleatoric and epistemic uncertainty in particular.

Abstract: The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.

421 citations