Author

Thierry Denœux

Bio: Thierry Denœux is an academic researcher at the University of Technology of Compiègne. His research focuses on Dempster–Shafer theory and cluster analysis. He has an h-index of 27 and has co-authored 97 publications receiving 2,765 citations. His previous affiliations include Shanghai University and Beijing University of Technology.


Papers
Journal ArticleDOI
TL;DR: This paper introduces a new operator, the cautious rule of combination, which is commutative, associative, and idempotent, and can be used to combine belief functions induced by reliable but possibly overlapping bodies of evidence.

360 citations
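
To make the motivation concrete, here is a minimal Python sketch (not from the paper; the dict-of-frozensets representation and function names are illustrative) of Dempster's rule over mass functions. The demo shows that Dempster's rule is not idempotent: combining a mass function with itself counts the same evidence twice. The cautious rule, which combines the weights from Smets' canonical decomposition by minimum rather than product, is designed precisely to avoid this when bodies of evidence overlap.

    def dempster_combine(m1, m2):
        """Dempster's rule: conjunctive combination followed by normalization.

        Mass functions are dicts mapping frozenset focal elements to masses.
        """
        combined, conflict = {}, 0.0
        for A, mA in m1.items():
            for B, mB in m2.items():
                C = A & B
                if C:
                    combined[C] = combined.get(C, 0.0) + mA * mB
                else:
                    conflict += mA * mB  # mass falling on the empty set
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    omega = frozenset({"a", "b", "c"})
    m = {frozenset({"a"}): 0.6, omega: 0.4}
    # Combining m with itself doubles the evidence: the result is not m.
    print(dempster_combine(m, m))
    # {frozenset({'a'}): 0.84, frozenset({'a', 'b', 'c'}): 0.16}
    # An idempotent rule such as the cautious rule would return m unchanged,
    # as required when the two bodies of evidence are not distinct.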

Journal ArticleDOI
TL;DR: A category of learning problems is considered in which the class membership of training patterns is assessed by an expert and encoded as a possibility distribution. Two approaches are proposed: transforming each possibility distribution into a consonant belief function, or using generalized belief structures with fuzzy focal elements.

145 citations

Journal ArticleDOI
TL;DR: This paper develops a rational approach to representing and manipulating imprecise degrees of belief in the framework of evidence theory, adopting as its starting point the non-probabilistic interpretation of belief functions provided by Smets’ Transferable Belief Model and generalizing various concepts of Dempster–Shafer theory.

136 citations
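
In the Transferable Belief Model, beliefs are held at a credal level and converted to probabilities only when a decision must be made, via the pignistic transformation BetP(x) = Σ_{A ∋ x} m(A) / (|A| · (1 − m(∅))). A short sketch, reusing the dict-based mass-function representation assumed above:

    def pignistic(m):
        """Pignistic probability: spread each focal mass evenly over its elements."""
        k = 1.0 - m.get(frozenset(), 0.0)  # renormalize if mass sits on the empty set
        betp = {}
        for A, mA in m.items():
            for x in A:  # the empty set contributes nothing
                betp[x] = betp.get(x, 0.0) + mA / (len(A) * k)
        return betp

    print(pignistic({frozenset({"a"}): 0.84, frozenset({"a", "b", "c"}): 0.16}))
    # {'a': ~0.893, 'b': ~0.053, 'c': ~0.053}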

Journal ArticleDOI
TL;DR: This paper introduces a new approach to regression analysis based on a fuzzy extension of belief function theory; it predicts the value of the output variable y in the form of a fuzzy belief assignment (FBA), defined as a collection of fuzzy sets of values with associated masses of belief.

118 citations

Journal ArticleDOI
TL;DR: Overall, optimizing a single t-norm-based rule yields better results than using a fixed rule, including Dempster's rule, and the two-step strategy brings further improvements.

112 citations
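
The idea behind t-norm-based rules: the cautious rule combines canonical-decomposition weights with the minimum t-norm while Dempster's rule corresponds to the product, and a parametric t-norm family interpolates between the two, so the parameter can be tuned on training data. The Frank family below is one standard such family; whether it matches the one optimized in the paper is an assumption, and the weight functions (obtained from Smets' canonical decomposition, omitted here) are assumed to take values in (0, 1].

    import math

    def frank_tnorm(x, y, s):
        """Frank t-norm on [0, 1]: s -> 0 recovers min, s -> 1 recovers product."""
        if s <= 0.0 or s == 1.0:
            raise ValueError("min and product are limit cases; use 0 < s != 1")
        return math.log1p((s**x - 1.0) * (s**y - 1.0) / (s - 1.0)) / math.log(s)

    def combine_weights(w1, w2, s):
        """Pointwise combination of two weight functions (dicts over focal sets)."""
        return {A: frank_tnorm(w1[A], w2[A], s) for A in w1.keys() & w2.keys()}

    print(frank_tnorm(0.5, 0.5, 1e-9))    # ~min(0.5, 0.5) near the cautious end
    print(frank_tnorm(0.5, 0.5, 1.0001))  # ~0.25, the product (Dempster) end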


Cited by
Book
01 Jan 2009

8,216 citations

Proceedings Article
04 Dec 2006
TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Abstract: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

4,385 citations
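
The numpy sketch below illustrates the greedy layer-wise strategy using the autoencoder variant studied in the paper: each layer is trained in isolation to reconstruct the codes produced by the layer below, and the resulting weights then serve only as initialization for supervised fine-tuning (not shown). Hyperparameters, the tied-weight choice, and function names are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
        """One sigmoid hidden layer, tied weights, squared reconstruction error."""
        m, n = X.shape
        W = rng.normal(0.0, 0.1, (n, n_hidden))
        b = np.zeros(n_hidden)                   # encoder bias
        c = np.zeros(n)                          # decoder bias
        for _ in range(epochs):
            H = sigmoid(X @ W + b)               # encode
            R = sigmoid(H @ W.T + c)             # decode with transposed weights
            d_out = (R - X) * R * (1.0 - R)      # delta at the reconstruction layer
            d_hid = (d_out @ W) * H * (1.0 - H)  # delta backpropagated to the codes
            W -= lr * (X.T @ d_hid + d_out.T @ H) / m   # tied: two gradient paths
            b -= lr * d_hid.sum(axis=0) / m
            c -= lr * d_out.sum(axis=0) / m
        return W, b

    def greedy_pretrain(X, layer_sizes):
        """Train one autoencoder per layer on the codes of the previous layer."""
        weights, codes = [], X
        for n_hidden in layer_sizes:
            W, b = train_autoencoder(codes, n_hidden)
            weights.append((W, b))
            codes = sigmoid(codes @ W + b)       # inputs for the next layer
        return weights                           # initialization for fine-tuning

    stack = greedy_pretrain(rng.random((256, 64)), [32, 16])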

Journal ArticleDOI
TL;DR: This paper provides a timely review of multi-label learning, with emphasis on state-of-the-art algorithms and relevant analyses and discussions.
Abstract: Multi-label learning studies the problem where each example is represented by a single instance while being associated with a set of labels simultaneously. During the past decade, a significant amount of progress has been made toward this emerging machine learning paradigm. This paper aims to provide a timely review of this area, with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals of multi-label learning, including the formal definition and evaluation metrics, are given. Secondly, and primarily, eight representative multi-label learning algorithms are scrutinized under common notations, with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. In conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes.

2,495 citations
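
As a concrete entry point to the problem the survey formalizes, the binary relevance baseline decomposes a multi-label task into one independent binary classifier per label. The scikit-learn sketch below is purely illustrative: it ignores the label correlations that the more advanced algorithms in the review exploit.

    from sklearn.datasets import make_multilabel_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import hamming_loss
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier

    # Each row of Y is a binary indicator vector: an example may carry several labels.
    X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                          n_classes=5, random_state=0)
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

    # Binary relevance: one logistic regression per label, trained independently.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
    print("Hamming loss:", hamming_loss(Y_te, clf.predict(X_te)))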

01 Jan 1979
TL;DR: This special issue gathers recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, with emphasis on real-world applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes have abundant training data while many classes have only a small amount. How to use frequent classes to help learn rare classes, for which training data are harder to collect, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Components can be shared at different levels during the concept-modeling and machine-learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for specific computer vision or multimedia problems
• Survey papers on learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations
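
One of the sharing strategies listed above, hard parameter sharing for multi-task learning, reduces to a shared trunk with one head per task. The PyTorch sketch below is a generic illustration of that idea under assumed layer sizes, not a method from any particular submission to the special issue.

    import torch
    from torch import nn

    class HardSharingMTL(nn.Module):
        """A shared trunk learns a common representation; each task has its own head."""
        def __init__(self, n_in, n_shared, task_sizes):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(n_in, n_shared), nn.ReLU())
            self.heads = nn.ModuleList([nn.Linear(n_shared, k) for k in task_sizes])

        def forward(self, x):
            z = self.trunk(x)                     # shared features
            return [head(z) for head in self.heads]

    model = HardSharingMTL(n_in=128, n_shared=64, task_sizes=[10, 3])
    outputs = model(torch.randn(8, 128))          # one logit tensor per task
    # Training would sum one loss per task, so a rare-class task benefits from
    # gradients that data-rich tasks send through the shared trunk.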

Journal ArticleDOI
TL;DR: A comprehensive review of the data fusion state of the art is proposed, exploring its conceptualizations, benefits, and challenging aspects, as well as existing methodologies.

1,684 citations