Author

Jesús Cerquides

Bio: Jesús Cerquides is an academic researcher at the Spanish National Research Council. He has contributed to research on topics including combinatorial auctions and the naive Bayes classifier. He has an h-index of 23 and has co-authored 57 publications receiving 1,376 citations. His previous affiliations include the University of Barcelona and Pablo de Olavide University.


Papers
Journal ArticleDOI
TL;DR: A weighted naive Bayes algorithm, WANBIA, is proposed that selects weights to minimize either the negative conditional log likelihood or the mean squared error objective function, and is found to be a competitive alternative to state-of-the-art classifiers such as Random Forest, Logistic Regression and A1DE.
Abstract: Despite the simplicity of the naive Bayes classifier, it has continued to perform well against more sophisticated newcomers and has remained, therefore, of great interest to the machine learning community. Of numerous approaches to refining the naive Bayes classifier, attribute weighting has received less attention than it warrants. Most approaches, perhaps influenced by attribute weighting in other machine learning algorithms, use weighting to place more emphasis on highly predictive attributes than on those that are less predictive. In this paper, we argue that for naive Bayes, attribute weighting should instead be used to alleviate the conditional independence assumption. Based on this premise, we propose a weighted naive Bayes algorithm, called WANBIA, that selects weights to minimize either the negative conditional log likelihood or the mean squared error objective function. We perform extensive evaluations and find that WANBIA is a competitive alternative to state-of-the-art classifiers like Random Forest, Logistic Regression and A1DE.
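The scoring rule behind attribute-weighted naive Bayes can be sketched as follows. This is a minimal illustration with invented toy parameters, not the authors' WANBIA implementation; the weight-learning step (minimizing negative conditional log likelihood or MSE) is omitted, and with all weights set to 1 the rule reduces to standard naive Bayes:

```python
import math

def weighted_nb_log_score(x, y, priors, cond_probs, weights):
    """Log-score of class y for instance x under attribute-weighted
    naive Bayes: log P(y) + sum_i w_i * log P(x_i | y)."""
    score = math.log(priors[y])
    for i, v in enumerate(x):
        score += weights[i] * math.log(cond_probs[y][i][v])
    return score

def predict(x, classes, priors, cond_probs, weights):
    """Return the class with the highest weighted log-score."""
    return max(classes, key=lambda y: weighted_nb_log_score(
        x, y, priors, cond_probs, weights))

# Toy model: two classes, two binary attributes (parameters invented).
priors = {0: 0.5, 1: 0.5}
cond_probs = {
    0: [{0: 0.9, 1: 0.1}, {0: 0.5, 1: 0.5}],
    1: [{0: 0.2, 1: 0.8}, {0: 0.5, 1: 0.5}],
}
weights = [1.0, 1.0]  # w_i = 1 for all i recovers standard naive Bayes
print(predict((1, 0), [0, 1], priors, cond_probs, weights))  # prints 1
```

Setting an attribute's weight below 1 discounts its contribution, which is how weighting can compensate for violated conditional independence rather than merely emphasizing predictive attributes.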

159 citations

Journal ArticleDOI
TL;DR: The challenges that researchers must address to establish MAS as the key enabling technology for SNs are identified.
Abstract: Sensor networks (SNs) have arisen as one of the most promising technologies for the coming decades. The recent emergence of small and inexpensive sensors based upon microelectromechanical systems eases the development and proliferation of this kind of network in a wide range of real-world applications. Multiagent systems (MAS) have been identified as one of the most suitable technologies to contribute to the deployment of SNs that exhibit flexibility, robustness and autonomy. The purpose of this survey is twofold. On the one hand, we review the most relevant contributions of agent technologies to this emerging application domain. On the other hand, we identify the challenges that researchers must address to establish MAS as the key enabling technology for SNs.

155 citations

Journal ArticleDOI
TL;DR: It is empirically shown how Action-GDL, using a novel distributed post-processing heuristic, can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics.
Abstract: In this paper we propose a novel message-passing algorithm, the so-called Action-GDL, as an extension of the generalized distributive law (GDL) to efficiently solve DCOPs. Action-GDL provides a unifying perspective on several dynamic programming DCOP algorithms that are based on GDL, such as the DPOP and DCPOP algorithms. We empirically show how Action-GDL, using a novel distributed post-processing heuristic, can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics.

80 citations

Proceedings Article
06 Jan 2007
TL;DR: A bidding language is introduced for a new type of combinatorial auction that allows agents to bid for goods to buy, for goods to sell, and for transformations of goods.
Abstract: We introduce a new type of combinatorial auction that allows agents to bid for goods to buy, for goods to sell, and for transformations of goods. One such transformation can be seen as a step in a production process, so solving the auction requires choosing the sequence in which the accepted bids should be implemented. We introduce a bidding language for this type of auction and analyse the corresponding winner determination problem.
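The kind of transformation bid described above, and the sequencing constraint it induces on accepted bids, can be sketched as follows. This is an illustrative data model, not the paper's formal bidding language; all names, goods and price conventions here are assumptions:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class TransformationBid:
    """An atomic bid over a transformation (inputs -> outputs) at a price.
    A pure offer to sell goods has empty inputs; a pure request to buy
    has empty outputs. (Representation conventions are illustrative.)"""
    inputs: tuple
    outputs: tuple
    price: float

def feasible_order(initial_goods, bids):
    """Check that accepted bids can be implemented in this sequence:
    each bid's input goods must be in stock at the moment it is applied."""
    stock = Counter(initial_goods)
    for bid in bids:
        needed = Counter(bid.inputs)
        if any(stock[g] < n for g, n in needed.items()):
            return False
        stock -= needed           # consume inputs
        stock.update(bid.outputs) # produce outputs
    return True

sell_dough = TransformationBid(inputs=(), outputs=("dough",), price=-8.0)
bake = TransformationBid(inputs=("dough", "oven"), outputs=("bread",), price=-2.0)

print(feasible_order(["oven"], [sell_dough, bake]))  # True: dough arrives first
print(feasible_order(["oven"], [bake, sell_dough]))  # False: no dough yet
```

The second call fails because the baking step runs before any dough exists, which is exactly why winner determination here must choose an implementation order, not just a set of winning bids.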

64 citations

Book ChapterDOI
03 Oct 2005
TL;DR: The conjugate distribution for one-dependence estimators is developed, and it is empirically shown that uniform averaging is clearly superior to Bayesian model averaging for this family of models, while maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.
Abstract: Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods such as uniform averaging over a set of models usually provide an improvement over selecting the single best model. Usually probabilistic classifiers restrict the set of possible models that can be learnt in order to lower computational complexity costs. In these restricted spaces, where incorrect modeling assumptions may be made, uniform averaging sometimes performs even better than Bayesian model averaging. Linear mixtures over sets of models provide a space that includes uniform averaging as a particular case. We develop two algorithms for learning maximum a posteriori weights for linear mixtures, based on expectation maximization and on constrained optimization. We provide a nontrivial example of the utility of these two algorithms by applying them to one-dependence estimators. We develop the conjugate distribution for one-dependence estimators and empirically show that uniform averaging is clearly superior to Bayesian model averaging for this family of models. We then empirically show that the maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.
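A linear mixture over models, with uniform averaging as the special case of equal weights, can be sketched as follows. The probabilities and weights here are invented for illustration; the paper's MAP weight-learning algorithms (EM and constrained optimization) are not shown:

```python
def mixture_predict(weights, model_probs):
    """Class distribution of a linear mixture of probabilistic classifiers:
    P(y | x) = sum_k w_k * P_k(y | x), with weights summing to 1."""
    n_classes = len(model_probs[0])
    return [sum(w * p[c] for w, p in zip(weights, model_probs))
            for c in range(n_classes)]

# Predicted class distributions of two base models for one instance.
model_probs = [[0.9, 0.1], [0.4, 0.6]]

uniform = mixture_predict([0.5, 0.5], model_probs)  # uniform averaging
learned = mixture_predict([0.8, 0.2], model_probs)  # e.g. MAP-learned weights
print(uniform, learned)
```

With equal weights the mixture is exactly uniform averaging; learning the weights lets the ensemble lean toward better-calibrated base models.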

60 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI

3,734 citations

Journal ArticleDOI
TL;DR: Over the past few years there has been a growing belief that all the two-dimensional lattice statistical models will eventually be solved, and that it will be Professor Baxter who solves them; Baxter has inherited the mantle of Onsager, who started the process by solving the two-dimensional Ising model exactly in 1944.
Abstract: R J Baxter 1982 London: Academic xii + 486 pp, price £43.60. Over the past few years there has been a growing belief that all the two-dimensional lattice statistical models will eventually be solved and that it will be Professor Baxter who solves them. Baxter has inherited the mantle of Onsager, who started the process by solving the two-dimensional Ising model exactly in 1944.

1,658 citations

Journal Article
TL;DR: The paper correctly introduces the basic procedures and some of the most advanced ones for comparisons against a control method, but it does not deal with some advanced topics in depth.
Abstract: In a recently published paper in JMLR, Demšar (2006) recommends a set of non-parametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we realize that it correctly introduces the basic procedures and some of the most advanced ones when comparing against a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful proposals of statistical procedures for comparing n × n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures.
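One standard way of obtaining adjusted p-values in multiple comparison procedures is Holm's step-down method. The sketch below is the generic textbook procedure, not code from the paper, and the input p-values are invented:

```python
def holm_adjusted(p_values):
    """Holm step-down adjusted p-values for m comparisons: sort ascending,
    multiply the i-th smallest p-value by (m - i), enforce monotonicity
    along the sequence, and cap at 1."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * p_values[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

print(holm_adjusted([0.01, 0.04, 0.03]))  # roughly [0.03, 0.06, 0.06]
```

Adjusted p-values are directly comparable to the nominal significance level, which is what makes them convenient for reporting multiple comparisons among classifiers.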

1,312 citations

Journal ArticleDOI
TL;DR: This paper aims at a systematic study of discretization methods with their history of development, effect on classification, and trade-off between speed and accuracy.
Abstract: Discrete values have important roles in data mining and knowledge discovery. They are about intervals of numbers, which are more concise to represent and specify, and easier to use and comprehend, as they are closer to a knowledge-level representation than continuous values. Many studies show induction tasks can benefit from discretization: rules with discrete values are normally shorter and more understandable, and discretization can lead to improved predictive accuracy. Furthermore, many induction algorithms found in the literature require discrete features. All these prompt researchers and practitioners to discretize continuous features before or during a machine learning or data mining task. There are numerous discretization methods available in the literature. It is time for us to examine these seemingly different methods for discretization and find out how different they really are, what the key components of a discretization process are, and how we can improve the current level of research for new development as well as the use of existing methods. This paper aims at a systematic study of discretization methods with their history of development, effect on classification, and trade-off between speed and accuracy. Contributions of this paper are an abstract description summarizing existing discretization methods, a hierarchical framework to categorize the existing methods and pave the way for further development, concise discussions of representative discretization methods, extensive experiments and their analysis, and some guidelines as to how to choose a discretization method under various circumstances. We also identify some issues yet to be solved and future research for discretization.
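As a concrete instance of a simple unsupervised discretization method, equal-width binning maps each continuous value to one of k fixed-width intervals. This is an illustrative sketch of one elementary technique, not any specific method surveyed in the paper:

```python
def equal_width_bins(values, k):
    """Unsupervised equal-width discretization: split [min, max] into k
    equal intervals and map each value to its bin index (0 .. k-1)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    def bin_of(v):
        if v >= hi:           # put the maximum in the last bin
            return k - 1
        return int((v - lo) // width)
    return [bin_of(v) for v in values]

print(equal_width_bins([1.0, 2.0, 5.0, 9.0, 10.0], 3))  # [0, 0, 1, 2, 2]
```

Equal-width binning ignores the class labels entirely; supervised methods such as entropy-based discretization instead choose cut points that separate classes well, which is one axis of the categorization the paper develops.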

981 citations