Author

Thierry Denoeux

Bio: Thierry Denoeux is an academic researcher from the University of Technology of Compiègne. The author has contributed to research on topics including Dempster–Shafer theory and computer science. He has an h-index of 26 and has co-authored 120 publications receiving 2,850 citations. Previous affiliations of Thierry Denoeux include the Institut Universitaire de France and the Centre national de la recherche scientifique.


Papers
Journal ArticleDOI
01 Mar 2000
TL;DR: A new adaptive pattern classifier based on the Dempster-Shafer theory of evidence is presented, which uses reference patterns as items of evidence regarding the class membership of each input pattern under consideration.
Abstract: A new adaptive pattern classifier based on the Dempster-Shafer theory of evidence is presented. This method uses reference patterns as items of evidence regarding the class membership of each input pattern under consideration. This evidence is represented by basic belief assignments (BBA) and pooled using Dempster's rule of combination. This procedure can be implemented in a multilayer neural network with a specific architecture consisting of one input layer, two hidden layers, and one output layer. The weight vector, the receptive field, and the class membership of each prototype are determined by minimizing the mean squared differences between the classifier outputs and target values. After training, the classifier computes, for each input vector, a BBA that provides a description of the uncertainty pertaining to the class of the current pattern, given the available evidence. This information may be used to implement various decision rules allowing for ambiguous pattern rejection and novelty detection. The outputs of several classifiers may also be combined in a sensor fusion context, yielding decision procedures that are very robust to sensor failures or changes in the system environment. Experiments with simulated and real data demonstrate the excellent performance of this classification scheme as compared to existing statistical and neural network techniques.
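The abstract above mentions pooling basic belief assignments with Dempster's rule but gives no formulas. As a minimal illustrative sketch (not the paper's implementation; representing focal sets as Python frozensets is a choice made here for clarity), combining two BBAs could look like:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments (dicts mapping
    frozenset -> mass) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass that would go to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # normalize by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# two items of evidence about classes {1, 2}
m1 = {frozenset({1}): 0.6, frozenset({1, 2}): 0.4}
m2 = {frozenset({2}): 0.5, frozenset({1, 2}): 0.5}
m = dempster_combine(m1, m2)
```

Here the conflicting mass 0.6 × 0.5 = 0.3 is discarded and the remaining masses are renormalized, so the result again sums to one.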

399 citations

Journal ArticleDOI
01 May 1998
TL;DR: A learning procedure for optimizing the parameters in the evidence-theoretic k-nearest neighbor rule, a pattern classification method based on the Dempster-Shafer theory of belief functions, is presented.
Abstract: The paper presents a learning procedure for optimizing the parameters in the evidence-theoretic k-nearest neighbor rule, a pattern classification method based on the Dempster-Shafer theory of belief functions. In this approach, each neighbor of a pattern to be classified is considered as an item of evidence supporting certain hypotheses concerning the class membership of that pattern. Based on this evidence, basic belief masses are assigned to each subset of the set of classes. Such masses are obtained for each of the k-nearest neighbors of the pattern under consideration and aggregated using Dempster's rule of combination. In many situations, this method was found experimentally to yield lower error rates than other methods using the same information. However, the problem of tuning the parameters of the classification rule had so far remained unresolved. The authors determine optimal or near-optimal parameter values from the data by minimizing an error function. This refinement of the original method is shown experimentally to result in substantial improvement of classification accuracy.
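As a hedged illustration of the classification rule described above: the distance-decaying simple mass functions follow the general idea in the abstract, but the parameter names `alpha` and `gamma` and the 1-D toy data are assumptions made here, and the learning procedure the paper actually contributes (tuning those parameters) is omitted.

```python
import math

def evidential_knn(x, train, k=3, alpha=0.95, gamma=1.0):
    """Sketch of the evidence-theoretic k-NN rule: each of the k nearest
    neighbors supplies a simple mass function whose strength decays with
    distance; the k masses are pooled with Dempster's rule."""
    classes = frozenset(y for _, y in train)
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    neighbors = sorted(train, key=lambda p: dist2(x, p[0]))[:k]
    m = {classes: 1.0}  # vacuous belief: total ignorance
    for xi, yi in neighbors:
        s = alpha * math.exp(-gamma * dist2(x, xi))
        mi = {frozenset({yi}): s, classes: 1.0 - s}
        new = {}
        for a, ma in m.items():
            for b, mb in mi.items():
                key = a & b
                new[key] = new.get(key, 0.0) + ma * mb
        conflict = new.pop(frozenset(), 0.0)  # mass on the empty set
        m = {s_: v / (1.0 - conflict) for s_, v in new.items()}
    # decide by the largest mass assigned to a singleton class
    return max(classes, key=lambda c: m.get(frozenset({c}), 0.0))

train = [((0.0,), 0), ((0.1,), 0), ((1.0,), 1), ((1.1,), 1)]
pred = evidential_knn((0.05,), train, k=3)
```

With the toy data above, two nearby class-0 neighbors outweigh one distant class-1 neighbor, so the query near 0 is assigned to class 0.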

292 citations

Journal ArticleDOI
01 Feb 2004
TL;DR: A notion of credal partition is introduced, which subsumes those of hard, fuzzy, and possibilistic partitions, allowing one to gain deeper insight into the structure of the data.
Abstract: A new relational clustering method is introduced, based on the Dempster-Shafer theory of belief functions (or evidence theory). Given a matrix of dissimilarities between n objects, this method, referred to as evidential clustering (EVCLUS), assigns a basic belief assignment (or mass function) to each object in such a way that the degree of conflict between the masses given to any two objects reflects their dissimilarity. A notion of credal partition is introduced, which subsumes those of hard, fuzzy, and possibilistic partitions, allowing one to gain deeper insight into the structure of the data. Experiments with several sets of real data demonstrate the good performance of the proposed method compared with several state-of-the-art relational clustering techniques.
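The quantity EVCLUS matches to the observed dissimilarities is the degree of conflict between two objects' mass functions. A small sketch of that computation (the mass-function representation is an assumption made here; the optimization that fits the masses to the dissimilarity matrix is omitted):

```python
from itertools import product

def conflict(m1, m2):
    """Degree of conflict between two mass functions (dicts mapping
    frozenset -> mass): total mass given to pairs of disjoint focal
    sets. In EVCLUS this quantity is fitted to the dissimilarity
    between the two objects."""
    return sum(ma * mb
               for (a, ma), (b, mb) in product(m1.items(), m2.items())
               if not (a & b))

# two objects believed to belong to different clusters -> high conflict
m_i = {frozenset({'c1'}): 0.9, frozenset({'c1', 'c2'}): 0.1}
m_j = {frozenset({'c2'}): 0.8, frozenset({'c1', 'c2'}): 0.2}
kappa = conflict(m_i, m_j)  # 0.9 * 0.8 = 0.72
```

An object compared with itself yields zero conflict, matching the intuition that identical objects should be maximally similar.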

244 citations

Journal ArticleDOI
TL;DR: Different strategies that can be applied in this context to reach a decision (e.g., assignment to a class or rejection) are examined, provided the possible consequences of each action can be quantified.

206 citations

Journal ArticleDOI
01 Dec 2006
TL;DR: This paper shows that both methods actually proceed from the same underlying principle, i.e., the GBT, that they essentially differ by the nature of the assumed available information, and that both collapse to a kernel rule in the case of precise and categorical learning data and for certain initial assumptions.
Abstract: The transferable belief model (TBM) is a model to represent quantified uncertainties based on belief functions, unrelated to any underlying probability model. In this framework, two main approaches to pattern classification have been developed: the TBM model-based classifier, relying on the general Bayesian theorem (GBT), and the TBM case-based classifier, built on the concept of similarity of a pattern to be classified with training patterns. Until now, these two methods seemed unrelated, and their connection with standard classification methods was unclear. This paper shows that both methods actually proceed from the same underlying principle, i.e., the GBT, and that they essentially differ by the nature of the assumed available information. This paper also shows that both methods collapse to a kernel rule in the case of precise and categorical learning data and for certain initial assumptions, and a simple relationship between basic belief assignments produced by the two methods is exhibited in a special case. These results shed new light on the issues of classification and supervised learning in the TBM. They also suggest new research directions and may help users select the most appropriate method for each particular application, depending on the nature of the information at hand.

151 citations


Cited by
01 Jan 2002

9,314 citations

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Abstract: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations, and 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.

7,547 citations

01 Jan 2006

3,012 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
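As a rough reminder of what the self-organizing map does (this sketch is not taken from the cited overview; the 1-D grid, decay schedules, and parameter values are illustrative assumptions):

```python
import math
import random

def train_som(data, grid=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D self-organizing map: each training vector pulls its
    best-matching unit and, more weakly, that unit's grid neighbors,
    with a decaying learning rate and a shrinking neighborhood."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            # best-matching unit = unit whose weight vector is closest to x
            bmu = min(range(grid),
                      key=lambda i: sum((w[i][d] - x[d]) ** 2
                                        for d in range(dim)))
            for i in range(grid):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    w[i][d] += lr * h * (x[d] - w[i][d])
    return w

# two tight clusters near 0 and 1; the map should spread units over both
data = [(0.0,), (0.1,), (0.9,), (1.0,)]
w = train_som(data)
```

Because each update is a convex step toward a data point, the learned weights stay inside the data range while the units spread out to cover both clusters.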

2,933 citations

Journal ArticleDOI
Robi Polikar1
TL;DR: This paper reviews conditions under which ensemble-based systems may be more beneficial than their single-classifier counterparts, algorithms for generating individual components of ensemble systems, and various procedures through which the individual classifiers can be combined.
Abstract: In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting "several experts" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision-making applications have only recently been discovered by the computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble-based systems have been shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation, and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble-based systems may be more beneficial than their single-classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble-based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts, as well as commonly used combination rules, including algebraic combination of outputs, voting-based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error-correcting output codes; all areas in which ensemble systems have shown great promise.
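As a toy illustration of two of the ingredients the survey reviews, bagging and majority voting (the 1-nearest-neighbor base learner and all parameter values here are arbitrary choices made for the example):

```python
import random
from collections import Counter

def bagged_vote(train, x, n_models=11, seed=0):
    """Toy bagging ensemble: each 'model' is a 1-nearest-neighbor
    classifier fit on a bootstrap resample of the training set;
    the individual predictions are pooled by majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        # 1-NN prediction of this model on its own resample
        nearest = min(sample,
                      key=lambda p: sum((u - v) ** 2
                                        for u, v in zip(x, p[0])))
        votes.append(nearest[1])
    return Counter(votes).most_common(1)[0][0]

train = [((0.0,), 'a'), ((0.2,), 'a'), ((0.9,), 'b'), ((1.1,), 'b')]
pred = bagged_vote(train, (0.1,))  # -> 'a'
```

Each resample gives a slightly different classifier; the vote averages out their individual quirks, which is the core intuition behind ensemble methods discussed in the survey.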

2,628 citations