Author
Luis M. de Campos
Other affiliations: University of Vigo
Bio: Luis M. de Campos is an academic researcher from the University of Granada. He has contributed to research on topics including Bayesian networks and local search (optimization), has an h-index of 30, and has co-authored 143 publications receiving 3,593 citations. Previous affiliations of Luis M. de Campos include the University of Vigo.
Papers
TL;DR: A number of basic operations necessary to develop a calculus with probability intervals, such as combination, marginalization, conditioning and integration, are studied in detail.
Abstract: We study probability intervals as an interesting tool to represent uncertain information. A number of basic operations necessary to develop a calculus with probability intervals, such as combination, marginalization, conditioning and integration, are studied in detail. Moreover, probability intervals are compared with other uncertainty theories, such as lower and upper probabilities, Choquet capacities of order two, and belief and plausibility functions. The advantages of probability intervals over these formalisms in terms of computational efficiency are also highlighted.
305 citations
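The paper above works with systems of probability intervals. As a minimal Python sketch (not taken from the paper; the interval values in the example are hypothetical), the standard consistency check and the lower/upper probabilities that an interval system induces on an event can be written as follows.

# Minimal sketch: a probability-interval system assigns l[i] <= p(x_i) <= u[i]
# to each value of an exhaustive, mutually exclusive set of values x_1..x_n.

def is_proper(l, u):
    # The system is consistent (contains at least one probability distribution)
    # exactly when sum(l) <= 1 <= sum(u).
    return sum(l) <= 1.0 <= sum(u)

def lower_upper(l, u, event):
    # event: set of value indices. The induced lower/upper probabilities push
    # the remaining probability mass away from (or toward) the event.
    n = len(l)
    in_l = sum(l[i] for i in event)
    in_u = sum(u[i] for i in event)
    out_l = sum(l[i] for i in range(n) if i not in event)
    out_u = sum(u[i] for i in range(n) if i not in event)
    return max(in_l, 1.0 - out_u), min(in_u, 1.0 - out_l)

# Hypothetical example with three values:
l, u = [0.1, 0.2, 0.3], [0.4, 0.5, 0.6]
print(is_proper(l, u))            # True
print(lower_upper(l, u, {0, 1}))  # lower/upper probability of {x1, x2}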
TL;DR: A non-Bayesian scoring function called MIT (mutual information tests) is proposed; it belongs to the family of scores based on information theory and represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set.
Abstract: We propose a new scoring function for learning Bayesian networks from data using score+search algorithms. This is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented.
302 citations
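As a rough illustration of the idea behind an information-theoretic score of this kind (a simplified sketch, not the exact MIT formula, which distributes the chi-square penalty over the individual parents), the following Python fragment scores a single family as 2N times the empirical mutual information between a node and its parent set, minus a chi-square critical value.

# Simplified, hedged sketch of a mutual-information score with a chi-square
# penalization; family_score, the data layout and alpha are illustrative choices.
from collections import Counter
from math import log
from scipy.stats import chi2

def family_score(data, x, parents, alpha=0.99):
    # data: list of dicts mapping variable name -> discrete value.
    if not parents:
        return 0.0  # families without parents contribute nothing
    N = len(data)
    joint = Counter((row[x], tuple(row[p] for p in parents)) for row in data)
    px = Counter(row[x] for row in data)
    ppa = Counter(tuple(row[p] for p in parents) for row in data)
    # Empirical mutual information I(X; Pa) computed from the observed counts.
    mi = sum(n / N * log(n * N / (px[a] * ppa[b])) for (a, b), n in joint.items())
    # Degrees of freedom of the corresponding chi-square independence test.
    dof = max((len(px) - 1) * (len(ppa) - 1), 1)
    return 2 * N * mi - chi2.ppf(alpha, dof)

A candidate network would then be scored by summing family_score over its nodes and explored with any score+search procedure.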
TL;DR: A new Bayesian network model is presented to deal with the problem of hybrid recommendation by combining content-based and collaborative features and is equipped with a flexible topology and efficient mechanisms to estimate the required probability distributions so that probabilistic inference may be performed.
Abstract: Recommender systems enable users to access products or articles that they would otherwise not be aware of due to the wealth of information to be found on the Internet. The two traditional recommendation techniques are content-based and collaborative filtering. While both methods have their advantages, they also have certain disadvantages, some of which can be solved by combining both techniques to improve the quality of the recommendation. The resulting system is known as a hybrid recommender system. In the context of artificial intelligence, Bayesian networks have been widely and successfully applied to problems with a high level of uncertainty. The field of recommendation represents a very interesting testing ground to put these probabilistic tools into practice. This paper therefore presents a new Bayesian network model to deal with the problem of hybrid recommendation by combining content-based and collaborative features. It has been tailored to the problem in hand and is equipped with a flexible topology and efficient mechanisms to estimate the required probability distributions so that probabilistic inference may be performed. The effectiveness of the model is demonstrated using the MovieLens and IMDB data sets.
301 citations
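The paper's Bayesian network is not reproduced here. Purely as a hypothetical illustration of the hybrid idea it addresses (merging content-based and collaborative evidence about a rating), a weighted mixture of two per-rating probability estimates might look like this in Python; hybrid_rating_distribution and weight are illustrative names, not part of the model.

# Illustrative sketch only: combine a content-based and a collaborative
# probability estimate over rating values into a single hybrid distribution.
def hybrid_rating_distribution(p_content, p_collab, weight=0.5):
    # p_content, p_collab: dicts rating -> probability; weight is the trust
    # placed in the collaborative component (a hypothetical mixing parameter).
    ratings = set(p_content) | set(p_collab)
    mixed = {r: (1 - weight) * p_content.get(r, 0.0) + weight * p_collab.get(r, 0.0)
             for r in ratings}
    total = sum(mixed.values()) or 1.0
    return {r: v / total for r, v in mixed.items()}

# Hypothetical prediction of a 1-5 star rating for one user/item pair:
print(hybrid_rating_distribution({4: 0.6, 5: 0.4}, {3: 0.3, 4: 0.7}, weight=0.6))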
TL;DR: This paper proposes a new algorithm for learning BNs based on a recently introduced metaheuristic, which has been successfully applied to solve a variety of combinatorial optimization problems: ant colony optimization (ACO).
Abstract: One important approach to learning Bayesian networks (BNs) from data uses a scoring metric to evaluate the fitness of any given candidate network for the data base, and applies a search procedure to explore the set of candidate networks. The most usual search methods are greedy hill climbing, either deterministic or stochastic, although other techniques have also been used. In this paper we propose a new algorithm for learning BNs based on a recently introduced metaheuristic, which has been successfully applied to solve a variety of combinatorial optimization problems: ant colony optimization (ACO). We describe all the elements necessary to tackle our learning problem using this metaheuristic, and experimentally compare the performance of our ACO-based algorithm with other algorithms used in the literature. The experimental work is carried out using three different domains: ALARM, INSURANCE and BOBLO.
194 citations
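The paper's concrete algorithm is not reproduced here, but the core ACO construction step it builds on can be sketched in Python as follows (a hedged illustration; alpha, beta and the pheromone/heuristic tables are assumptions, not values from the paper): an artificial ant repeatedly samples an arc with probability proportional to pheromone times heuristic desirability, discarding arcs that would close a directed cycle.

# Hedged sketch of an ant's construction phase for a Bayesian network structure.
import random

def creates_cycle(arcs, new_arc):
    # Adding u -> v closes a directed cycle iff u is already reachable from v.
    u, v = new_arc
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(w for (z, w) in arcs if z == node)
    return False

def ant_build_dag(candidate_arcs, pheromone, heuristic, max_arcs, alpha=1.0, beta=2.0):
    # pheromone, heuristic: dicts arc -> positive value; max_arcs caps the
    # number of arcs the ant tries to place.
    arcs, remaining = [], list(candidate_arcs)
    while remaining and len(arcs) < max_arcs:
        weights = [pheromone[a] ** alpha * heuristic[a] ** beta for a in remaining]
        arc = random.choices(remaining, weights=weights)[0]
        remaining.remove(arc)
        if not creates_cycle(arcs, arc):
            arcs.append(arc)
    return arcs

After each iteration the arcs of the best-scoring networks would receive additional pheromone, biasing later ants toward structures that fit the data well.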
TL;DR: This paper proposes a new local search method that uses a different search space, and which takes account of the concept of equivalence between network structures: restricted acyclic partially directed graphs (RPDAGs).
Abstract: Although many algorithms have been designed to construct Bayesian network structures using different approaches and principles, they all employ only two methods: those based on independence criteria, and those based on a scoring function and a search procedure (although some methods combine the two). Within the score+search paradigm, the dominant approach uses local search methods in the space of directed acyclic graphs (DAGs), where the usual choices for defining the elementary modifications (local changes) that can be applied are arc addition, arc deletion, and arc reversal. In this paper, we propose a new local search method that uses a different search space, and which takes account of the concept of equivalence between network structures: restricted acyclic partially directed graphs (RPDAGs). In this way, the number of different configurations of the search space is reduced, thus improving efficiency. Moreover, although the final result must necessarily be a local optimum given the nature of the search method, the topology of the new search space, which avoids making early decisions about the directions of the arcs, may help to find better local optima than those obtained by searching in the DAG space. Detailed results of the evaluation of the proposed search method on several test problems, including the well-known Alarm Monitoring System, are also presented.
125 citations
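For context, the DAG-space baseline that the RPDAG search improves upon, greedy hill climbing with the three classical operators, can be written as a short Python sketch (score and is_dag are assumed to be supplied by the caller; this is not the RPDAG operator set from the paper).

# Hedged sketch: hill climbing over sets of arcs with arc addition, deletion
# and reversal as the local changes.
def neighbours(nodes, arcs):
    arcs = set(arcs)
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            if (u, v) in arcs:
                yield arcs - {(u, v)}                # arc deletion
                yield (arcs - {(u, v)}) | {(v, u)}   # arc reversal
            elif (v, u) not in arcs:
                yield arcs | {(u, v)}                # arc addition

def hill_climb(nodes, score, is_dag, start=frozenset()):
    # score: evaluates an arc set against the data; is_dag: acyclicity test.
    current = set(start)
    while True:
        best = max((n for n in neighbours(nodes, current) if is_dag(n)),
                   key=score, default=None)
        if best is None or score(best) <= score(current):
            return current
        current = best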
Cited by
Book
01 Jan 2004
TL;DR: Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and other animals; in particular, ants have inspired a number of methods and techniques, among which the most studied and most successful is the general-purpose optimization technique known as ant colony optimization (ACO).
Abstract: Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and the most successful is the general-purpose optimization technique known as ant colony optimization. Ant colony optimization (ACO) takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark some favorable path that should be followed by other members of the colony. Ant colony optimization exploits a similar mechanism for solving optimization problems. From the early nineties, when the first ant colony optimization algorithm was proposed, ACO attracted the attention of increasing numbers of researchers, and many successful applications are now available. Moreover, a substantial corpus of theoretical results is becoming available that provides useful guidelines to researchers and practitioners in further applications of ACO. The goal of this article is to introduce ant colony optimization and to survey its most notable applications.
6,861 citations
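As a toy Python fragment (not taken from the book), the pheromone mechanism described above amounts to evaporating existing pheromone each iteration and depositing new pheromone on every component used by a solution, in proportion to that solution's quality, so that favorable paths are progressively reinforced.

# Toy illustration of pheromone evaporation and deposit; the evaporation rate
# and the quality-proportional deposit rule are illustrative choices.
def update_pheromone(pheromone, solutions, evaporation=0.1):
    # pheromone: dict component -> level; solutions: list of (components, quality).
    for c in pheromone:
        pheromone[c] *= (1.0 - evaporation)
    for components, quality in solutions:
        for c in components:
            pheromone[c] = pheromone.get(c, 0.0) + quality
    return pheromone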
TL;DR: This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it explains their evolution, provides an original classification for these systems, identifies areas of future implementation, and develops certain areas selected for past, present or future importance.
Abstract: Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance.
2,639 citations
21 Apr 2009
TL;DR: Ant Colony Optimization (ACO) is a stochastic local search method inspired by the pheromone trail laying and following behavior of some ant species.
Abstract: Ant Colony Optimization (ACO) is a stochastic local search method that has been inspired by the pheromone trail laying and following behavior of some ant species [1]. Artificial ants in ACO are essentially randomized construction procedures that generate solutions based on (artificial) pheromone trails and heuristic information associated with solution components. Since the first ACO algorithm was proposed in 1991, this algorithmic method has attracted a large number of researchers and has in the meantime reached a significant level of maturity. In fact, ACO is now a well-established search technique for tackling a wide variety of computationally hard problems.
2,424 citations
TL;DR: The article introduces ant colony optimization (ACO), describes the common idea shared by all ACO algorithms, and formalizes ACO as a metaheuristic for combinatorial problems.
Abstract: This article introduces ant colony optimization (ACO) and surveys its most notable applications. Ant colony optimization takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark some favorable path that should be followed by other members of the colony. The model proposed by Deneubourg and co-workers for explaining the foraging behavior of ants is the main source of inspiration for the development of ant colony optimization. In ACO, a number of artificial ants build solutions to an optimization problem and exchange information on their quality through a communication scheme that is reminiscent of the one adopted by real ants. All ACO algorithms share this same underlying idea, and ACO has been formalized as a metaheuristic for combinatorial problems. It is foreseeable that future research on ACO will focus more strongly on rich optimization problems that include stochasticity.
2,270 citations