scispace - formally typeset
Author

Enrico Fagiuoli

Bio: Enrico Fagiuoli is an academic researcher from University of Milan. The author has contributed to research in topics: Bayesian network & Influence diagram. The author has an h-index of 8, co-authored 17 publications receiving 455 citations.

Papers
Journal ArticleDOI
TL;DR: The computation on a general Bayesian network with convex sets of conditional distributions is formalized as a global optimization problem and it is shown that such a problem can be reduced to a combinatorial problem, suitable to exact algorithmic solutions.
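The reduction can be illustrated with a toy numerical sketch (hypothetical numbers, not the paper's algorithm): when each conditional distribution is only known to lie within interval bounds, lower and upper inferences are attained at extreme points of the credal sets, so they can be found by a combinatorial enumeration of vertex combinations.

```python
from itertools import product

# Hedged illustration of inference with convex sets of distributions:
# a two-node net B -> A, where P(B=1) and P(A=1|B) are only known to
# lie in intervals (made-up numbers). Bounds on P(A=1) are found by
# enumerating all combinations of interval endpoints (vertices).
pB1 = (0.2, 0.4)                  # interval for P(B=1)
pA1_given = {0: (0.1, 0.3),       # interval for P(A=1 | B=0)
             1: (0.6, 0.9)}       # interval for P(A=1 | B=1)

candidates = []
for pb, a0, a1 in product(pB1, pA1_given[0], pA1_given[1]):
    # P(A=1) under this particular vertex of the credal sets
    candidates.append((1 - pb) * a0 + pb * a1)

lower, upper = min(candidates), max(candidates)   # ≈ 0.2 and ≈ 0.54
```

The combinatorial search is over 2^3 = 8 vertex combinations here; the paper's contribution is showing that such global optimization problems over general credal networks reduce to combinatorial problems amenable to exact algorithms.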

143 citations

Journal ArticleDOI
TL;DR: In this article, partial stochastic orderings are introduced for classifying aging properties, new classes of life distributions based on them are proposed, and an application to the stochastic comparison of Poisson shock models is given.
Abstract: New concepts of partial stochastic orderings are introduced, and the relations among them and the classical partial orderings are shown. Relevance of these partial orderings in aging properties classification is discussed, and new classes of life distributions, based on them, are proposed. An application to stochastic comparison between Poisson shock models is proposed. © 1993 John Wiley & Sons, Inc.

79 citations

Journal ArticleDOI
TL;DR: Recently defined classes of life distributions are considered and relationships among them are proposed; the life distribution H of a device subject to shocks occurring randomly according to a Poisson process is also considered, and sufficient conditions for H to belong to these classes are discussed.
Abstract: Recently defined classes of life distributions are considered, and some relationships among them are proposed. The life distribution H of a device subject to shocks occurring randomly according to a Poisson process is also considered, and sufficient conditions for H to belong to these classes are discussed.

62 citations

Journal ArticleDOI
01 Oct 1999
TL;DR: In this article, a new characterization of the dilation order for random variables with finite expectations is presented; it enables new interpretations of the dilation order and identifies conditions that imply it.
Abstract: Let X and Y be two random variables with finite expectations E X and E Y, respectively. Then X is said to be smaller than Y in the dilation order if E[ϕ(X − E X)] ≤ E[ϕ(Y − E Y)] for any convex function ϕ for which the expectations exist. In this paper we obtain a new characterization of the dilation order. This characterization enables us to give new interpretations of the dilation order, and using them we identify conditions which imply the dilation order. A sample of applications of the new characterization is given.
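The defining inequality can be spot-checked numerically (an illustrative sketch, not from the paper): take Y as a mean-preserving spread of X, where the dilation order is known to hold, and compare E[ϕ(X − E X)] with E[ϕ(Y − E Y)] for a few convex functions ϕ.

```python
import numpy as np

# Numerical spot-check of the dilation order: X is smaller than Y in the
# dilation order if E[phi(X - E X)] <= E[phi(Y - E Y)] for every convex
# phi with finite expectations. We check a few convex functions on two
# distributions where Y is more "dilated" than X.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100_000)   # X ~ Uniform(-1, 1)
y = rng.uniform(-2, 2, size=100_000)   # Y ~ Uniform(-2, 2)

convex_fns = [abs, np.square, lambda t: np.maximum(t - 0.5, 0.0)]
for phi in convex_fns:
    lhs = np.mean(phi(x - x.mean()))
    rhs = np.mean(phi(y - y.mean()))
    assert lhs <= rhs   # holds for these samples
```

Passing for three convex functions is of course not a proof; the order requires the inequality for all convex ϕ, which is where the paper's characterization does the real work.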

57 citations

Journal ArticleDOI
TL;DR: Overall, given the favorable trade-off between expressiveness and efficient computation, the newly proposed classifier appears to be a good candidate for the wide-scale application of reliable classifiers based on credal networks, to real and complex tasks.
Abstract: Bayesian networks are models for uncertain reasoning which are gaining importance also in the data mining task of classification. Credal networks extend Bayesian nets to sets of distributions, or credal sets. This paper extends a state-of-the-art Bayesian net for classification, called the tree-augmented naive Bayes classifier, to credal sets originated from probability intervals. This extension is a basis to address the fundamental problem of prior ignorance about the distribution that generates the data, which is commonplace in data mining applications. This issue is often neglected, but addressing it properly is key to ultimately drawing reliable conclusions from the inferred models. In this paper we formalize the new model, develop an exact linear-time classification algorithm, and evaluate the credal net-based classifier on a number of real data sets. The empirical analysis shows that the new classifier is good and reliable, and raises a problem of excessive caution that is discussed in the paper. Overall, given the favorable trade-off between expressiveness and efficient computation, the newly proposed classifier appears to be a good candidate for the wide-scale application of reliable classifiers based on credal networks to real and complex tasks.
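The abstract's notion of cautious, set-valued classification can be sketched with interval dominance, one standard criterion used by credal classifiers (a hedged illustration with made-up intervals, not the paper's exact linear-time algorithm):

```python
# Interval dominance for credal classification: each class gets an
# interval [lo, hi] of posterior probabilities (one value per
# distribution in the credal set). Class c is dominated if some other
# class's lower bound exceeds c's upper bound; the classifier returns
# the set of undominated classes.
def non_dominated(intervals):
    """intervals: dict mapping class -> (lo, hi). Returns undominated classes."""
    return {c for c, (lo, hi) in intervals.items()
            if not any(lo2 > hi for c2, (lo2, _) in intervals.items() if c2 != c)}

# Clear-cut case: a single class survives.
assert non_dominated({"spam": (0.6, 0.9), "ham": (0.1, 0.4)}) == {"spam"}
# Overlapping intervals: both classes survive -- the "excessive caution"
# the paper discusses shows up as multi-class answers like this one.
assert non_dominated({"spam": (0.3, 0.7), "ham": (0.4, 0.6)}) == {"spam", "ham"}
```

Wider intervals (more prior ignorance) make dominance harder to establish, which is exactly the reliability/caution trade-off the empirical analysis examines.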

40 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: The basic concepts of rough set theory are presented and some rough set-based research directions and applications are pointed out, indicating that the rough set approach is fundamentally important in artificial intelligence and cognitive sciences.

2,004 citations

Journal ArticleDOI
TL;DR: It is suggested that social science methodologies, such as psychology, cognitive science, and studies of human behavior, might be incorporated into DMT as alternatives to the methodologies already on offer, and the direction of future developments in DMT methodologies and applications is discussed.
Abstract: In order to determine how data mining techniques (DMT) and their applications have developed during the past decade, this paper reviews data mining techniques and their applications and development through a survey of the literature and the classification of articles from 2000 to 2011. Keyword indices and article abstracts were used to identify 216 articles concerning DMT applications from 159 academic journals (retrieved from five online databases). This paper surveys and classifies DMT with respect to the following three areas: knowledge types, analysis types, and architecture types, together with their applications in different research and practical domains. A discussion deals with the direction of future developments in DMT methodologies and applications: (1) DMT is finding increasing applications in expertise orientation, and the development of applications for DMT is a problem-oriented domain. (2) It is suggested that social science methodologies, such as psychology, cognitive science, and studies of human behavior, might be incorporated into DMT as alternatives to the methodologies already on offer. (3) The ability to continually change and acquire new understanding is a driving force for the application of DMT, and this will allow many new future applications.

563 citations

Journal ArticleDOI
01 Jul 2000
TL;DR: A credal network, as discussed in this paper, is a compact representation for a set of probability distributions; it is closely related to very popular statistical models such as Markov chains, Bayesian networks, and Markov random fields.
Abstract: A tutorial overview of credal networks. A credal network is a compact, graph-based representation for a set of probability distributions: it is easy to draw and to understand, it offers a compact language in which to express complicated situations, and it is closely related to very popular statistical models such as Markov chains, Bayesian networks, and Markov random fields. Topics covered include: graphical models (basic definitions and applications); directed and undirected graph-theoretical statistical models, including Bayesian networks and Markov random fields; credal networks and related models; inference and learning in credal networks; exact and approximate reasoning algorithms; software packages, including a network developed by the IDSIA team; formal definitions and computational complexity; and applications to computer vision (image segmentation by Bayesian versus credal networks), military planning (supporting decision making about flows), and knowledge representation (terminologies written in description logics, whose sentences encode a large credal network). As motivation, the tutorial starts from the unstructured approach: probabilities of events are computed by marginalization and conditional probabilities by Bayes rule, but this requires an exponential number of parameters to elicit and an exponential number of terms in the summations used for inference. Bayesian networks, Markov random fields, and related models alleviate these issues through graph-theoretical tools.
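The drawbacks of the unstructured approach can be made concrete with a minimal sketch (hypothetical numbers, not from the tutorial): a joint distribution over n binary variables stored as a full table of 2^n entries, with marginals computed by summation and conditionals by Bayes rule.

```python
import itertools

# Unstructured representation: a full joint table P(A, B, C) over three
# binary variables -- 2^3 = 8 entries, exponential in the number of
# variables, which is the cost graphical models are designed to avoid.
vals = [0.06, 0.14, 0.04, 0.16, 0.09, 0.21, 0.06, 0.24]  # made-up, sums to 1
joint = dict(zip(itertools.product([0, 1], repeat=3), vals))

def marginal_a(a):
    # P(A=a) by marginalization: sum over all values of B and C
    return sum(joint[(a, b, c)] for b in (0, 1) for c in (0, 1))

def cond_a_given_c(a, c):
    # Bayes rule / conditioning: P(A=a | C=c) = P(A=a, C=c) / P(C=c)
    num = sum(joint[(a, b, c)] for b in (0, 1))
    den = sum(joint[(a2, b, c)] for a2 in (0, 1) for b in (0, 1))
    return num / den

p = marginal_a(0)   # ≈ 0.4: four table lookups just for one marginal
```

Both the table size and the number of terms in these summations grow exponentially with the number of variables; Bayesian networks, Markov random fields, and credal networks factorize the joint along a graph to keep both manageable.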

365 citations

Proceedings Article
30 Jul 2005
TL;DR: This paper presents the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models).
Abstract: Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.

349 citations