
Showing papers in "Eureka in 2013"


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This research presents a generic rule-based spatial decision support software tool, able to approach most spatial decision problems within a single framework.
Abstract: Nowadays, most Spatial Decision Support Systems (SDSSs) are designed to solve a specific problem in a given region. This makes it rather difficult, or even impossible, to develop comparative analyses and studies of different solutions. This research presents a generic rule-based spatial decision support software tool able to approach most spatial decision problems within a single framework. To achieve this, the rule-based RIMER+ decision model was embedded in a Geographic Information System (GIS) environment. The resulting system, named Spatial RIMER+, is able to consider expert knowledge, data uncertainty, and both spatial and non-spatial information during the decision-making process.

12 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: A review of the quality of the similarity measure and its applications in machine learning is presented, and it is shown that this measure is based on two basic aspects of the universe of objects: the granularity of the information and the principle that similar problems have similar solutions.
Abstract: In this paper, a review of the quality of the similarity measure and its applications in machine learning is presented. This measure is analyzed from the perspective of granular computing, which allows analyzing information at different levels of abstraction and from different approaches. The analysis shows that this measure is based on two basic aspects of the universe of objects: the granularity of the information and the principle that similar problems have similar solutions. Using the measure, a method was formulated to build similarity relations; these relations and other results have been used to improve machine learning techniques.

11 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This paper proposes FuzzyPred, a metaheuristics-based data-mining method to obtain fuzzy predicates in normal form, and believes that the patterns obtained represent an interesting new representation of knowledge that is only obtained by this proposal.
Abstract: Advanced technologies have enabled us to collect large amounts of data. These data may be transformed into useful knowledge. Because of our limited ability to manually process the data, it is necessary to use automatic tools to mine useful knowledge. Many data-mining methods have been proposed which are normally restricted to a given representation, such as rules and clusters. This paper proposes FuzzyPred, a metaheuristics-based data-mining method to obtain fuzzy predicates in normal form. We believe that the patterns obtained by FuzzyPred represent an interesting new representation of knowledge that is only obtained by our proposal.

9 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: Two abstract models based on Swarm Intelligence for learning parameters characterizing FCM are described, allowing the simulation of the system and also the extraction of relevant knowledge associated with underlying patterns.
Abstract: In recent years Fuzzy Cognitive Maps (FCM) have become a useful Soft Computing technique for modeling and simulation. They are connectionist and recurrent structures involving concepts describing the system behavior and causal connections. This paper describes two abstract models based on Swarm Intelligence for learning the parameters characterizing an FCM, which is a central issue in this field. In the end, we obtain accurate maps, allowing the simulation of the system and also the extraction of relevant knowledge associated with underlying patterns.

8 citations
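The abstract above describes FCM simulation at a high level; the inference step itself is simple. A minimal sketch follows — the 3-concept weight matrix and the sigmoid transfer function are illustrative assumptions, not taken from the paper:

```python
import math

def fcm_step(state, W):
    """One FCM update: each concept takes the sigmoid of the
    weighted sum of its causal inputs (W[j][i] = influence of j on i)."""
    n = len(state)
    new = []
    for i in range(n):
        s = sum(W[j][i] * state[j] for j in range(n))
        new.append(1.0 / (1.0 + math.exp(-s)))
    return new

# Hypothetical 3-concept map; the causal weights below are illustrative only.
W = [[0.0, 0.6, -0.4],
     [0.0, 0.0,  0.8],
     [0.5, 0.0,  0.0]]
state = [0.5, 0.5, 0.5]
for _ in range(20):  # iterate until the map settles to a fixed point
    state = fcm_step(state, W)
```

With modest weights the sigmoid map is a contraction, so the activation vector converges to a fixed point; the learning task the paper addresses is finding weights W that make such fixed points match observed system behavior.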


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This work presents an alternative approach, based on global characteristics and motifs, to mine medical time series databases using machine learning algorithms, and demonstrates that this approach is more accurate and provides more interpretable models than the method that does not extract features.
Abstract: In the last decade, interest in temporal data analysis methods has increased significantly in many application areas. One of these areas is the medical field, in which temporal data is at the core of numerous diagnostic exams. However, only a small portion of all gathered medical data is properly analyzed, in part due to the lack of appropriate temporal methods and tools. This work presents an alternative approach, based on global characteristics and motifs, to mine medical time series databases using machine learning algorithms. Characteristics are data statistics that present a global summary of the data. Motifs are frequently recurring subsequences that usually represent interesting local patterns. We use a combination of global characteristics and local motifs to describe the data and feed machine learning algorithms. A case study is performed on three databases of electrocardiogram exams. Our results show the superior performance of our approach in comparison to the naive method that provides raw temporal data directly to the learning algorithms. We demonstrate that our approach is more accurate and provides more interpretable models than the method that does not extract features.

7 citations
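The feature-based idea above can be illustrated with a toy sketch: compute a few global statistics and count recurring up/down shapes as crude local motifs. The specific statistics and the shape-string encoding are assumptions for illustration; the paper's actual feature set is not specified here.

```python
import statistics

def global_features(series):
    """Summary statistics serving as global descriptors of a series."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    lag1 = sum((a - mean) * (b - mean) for a, b in zip(series, series[1:]))
    acf1 = lag1 / sum((x - mean) ** 2 for x in series)  # lag-1 autocorrelation
    return {"mean": mean, "std": std, "acf1": acf1}

def count_motifs(series, width=3):
    """Count recurring up/down shape patterns as crude local motifs."""
    shapes = {}
    steps = ["u" if b > a else "d" for a, b in zip(series, series[1:])]
    for i in range(len(steps) - width + 1):
        key = "".join(steps[i:i + width])
        shapes[key] = shapes.get(key, 0) + 1
    return shapes

feats = global_features([1.0, 2.0, 1.5, 2.5, 2.0, 3.0])
motifs = count_motifs([1.0, 2.0, 1.5, 2.5, 2.0, 3.0])
```

The dictionaries `feats` and `motifs` together form the feature vector that would feed a learning algorithm, instead of the raw samples.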


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, the authors focus on defining TTRP models for incorporating the uncertainty present in their data and propose a soft-computing-based approach to solve real-world problems where decision makers use subjective knowledge when making decisions.
Abstract: Techniques based on Soft Computing are useful for solving real-world problems where decision makers use subjective knowledge when making decisions. In many problems in transport and logistics it is necessary to take into account that the available knowledge about the problem is imprecise or uncertain. The Truck and Trailer Routing Problem (TTRP) is one of the most recent and interesting problems in transport routing planning. Most of the models used in the literature assume that the available data are accurate; for this reason, it would be appropriate to focus research on defining TTRP models that incorporate the uncertainty present in their data.

6 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: A new multiobjective evolutionary algorithm is introduced with the purpose of transforming a known valued outranking relation into an antisymmetric crisp outranking relation on a set of classes of alternatives, where the elements of each class are indifferent to each other.
Abstract: The framework of multiobjective optimization is used to tackle the multicriteria ranking problem. The conceptual advantages of the multiobjective formulation are discussed, and a new multiobjective evolutionary algorithm is introduced with the purpose of transforming a known valued outranking relation into an antisymmetric crisp outranking relation on a set of classes of alternatives, where the elements of each class are indifferent to each other. With this as background, we propose a recommendation for ranking problems over medium-sized sets of alternatives. The performance of the algorithm is evaluated on a test problem, where it was capable of producing a high-quality recommendation.

6 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: A new taxonomy for feature selection algorithms created for high-dimensional datasets is proposed and several selectors are described, analyzed and evaluated, it was observed that the Cfs-SFS algorithm reached the best solutions in most of the cases.
Abstract: In this paper a new taxonomy for feature selection algorithms created for high-dimensional datasets is proposed. Several selectors are also described, analyzed, and evaluated. It was observed that the Cfs-SFS algorithm reached the best solutions in most cases. Nevertheless, its application to very high-dimensional datasets is not recommended due to its computational cost. The Cfs-BARS, Cfs-IRU, and MRMR algorithms obtain results similar to those of Cfs-SFS, but in comparatively less time. The INTERACT algorithm also gets good solutions, but its computational cost is higher than those mentioned above. On the other hand, the QPFS and FSBMC algorithms reached the worst solutions.

5 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: A general and flexible approach for knowledge discovery which allows obtaining different knowledge structure using a metaheuristic approach and experimental results show some advantages of the proposed approach for representing patterns and trends from data.
Abstract: Compensatory Fuzzy Logic (CFL) is a logical system that enables an optimal way of modeling knowledge. Its axiomatic character supports the translation of natural language into logic, so it is used in knowledge discovery and decision making. In this work we propose a general and flexible approach for knowledge discovery which allows obtaining different knowledge structures using a metaheuristic approach. The proposed method was tested by experimental analysis on a data set, using a tool developed in Visual Prolog. The experimental results show some advantages of the proposed approach for representing patterns and trends in data.

5 citations


Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This paper develops a model for composing public-oriented project portfolios and an evolutionary method to solve it, which is found to perform well on some test examples.
Abstract: When selecting project portfolios, the decision maker usually starts with limited information about the projects and portfolios. One of the challenges involved in analyzing, searching, and selecting the best portfolio is having a method to evaluate the impact of every project and portfolio in order to compare them. This paper develops a model for composing public-oriented project portfolios. Information concerning the quality of the projects takes the form of a project ranking, which can be obtained by applying a proper multi-criteria method; however, the ranking alone does not constitute an adequate evaluation. A best portfolio is primarily found through a multi-objective optimization that considers the impact indicators reflecting the quality of the projects in the portfolio and the cardinalities of competing portfolios. Overall good solutions are obtained by developing an evolutionary method, which is found to perform well on some test examples.

5 citations


Proceedings ArticleDOI
01 Oct 2013-Eureka
TL;DR: This paper presents a Web-based Multicriteria Group Decision Support System for solving multicriteria ranking problems: how to rank a set of alternatives in decreasing order of preference by a collaborative group of decision makers in sequential or parallel coordination mode and in a distributed and asynchronous environment.
Abstract: This paper presents a Web-based Multicriteria Group Decision Support System for solving multicriteria ranking problems: how to rank a set of alternatives - having evaluations in terms of several criteria - in decreasing order of preference by a collaborative group of decision makers, in sequential or parallel coordination mode, in a distributed and asynchronous environment. The functional architecture incorporates the following features: the ELECTRE III model to aggregate multiple criteria preferences, an ELECTRE-based method to aggregate the multiple criteria group preferences, an evolutionary algorithm to exploit a valued outranking relation, the Brainstorming technique to stimulate and generate ideas, a facilitator tool for optimizing the coordination of group meetings, an organizing tool, a voting tool, a graphic interface, a Group Norm subsystem, a Discussion subsystem, and a Multiple Criteria Decision Analysis subsystem.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: The main goal of this research is to develop a sensitivity analysis of several fuzzy operators, answering the question: which are the most robust fuzzy operators?
Abstract: The main goal of this research is to develop a sensitivity analysis (SA) of several fuzzy operators, answering the question: which are the most robust? The fuzzy operators considered in this study are: the Zadeh operators, the probabilistic operators, and the compensatory fuzzy logic operators: geometric mean and arithmetic mean. The Sobol model and Monte Carlo simulations were used to develop the SA. According to the main result of the study, the compensatory fuzzy logic operators are the most robust.
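The comparison above can be sketched with a crude Monte Carlo perturbation study (not the Sobol method the paper uses): perturb the operators' inputs and measure how much each output spreads. The two-argument operator forms, operand values, and noise level below are illustrative assumptions.

```python
import random
import statistics

# Two-argument conjunction forms of the four operator families compared.
ops = {
    "zadeh": lambda a, b: min(a, b),
    "probabilistic": lambda a, b: a * b,
    "geometric_mean": lambda a, b: (a * b) ** 0.5,
    "arithmetic_mean": lambda a, b: (a + b) / 2,
}

def output_spread(op, trials=5000, noise=0.1, seed=7):
    """Crude Monte Carlo sensitivity: perturb both truth values around
    fixed base points and record the spread of the operator's output."""
    rng = random.Random(seed)
    outs = []
    for _ in range(trials):
        a = min(1.0, max(0.0, 0.6 + rng.uniform(-noise, noise)))
        b = min(1.0, max(0.0, 0.7 + rng.uniform(-noise, noise)))
        outs.append(op(a, b))
    return statistics.pstdev(outs)

spreads = {name: output_spread(op) for name, op in ops.items()}
```

A smaller spread for the same input noise indicates a more robust operator; the Sobol approach additionally decomposes that output variance by input.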

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this paper, a decision support model based on compensatory fuzzy logic is presented to facilitate the selection of technologies to be used for integrating the information systems in a supply chain, and provides two examples of the model diagnosis phase impact in Cuban organizations.
Abstract: The need to satisfy more demanding customers in a scenario where deadlines and costs must be ever smaller to maintain competitiveness, together with increased uncertainty about demand, has been leading organizations to collaborate to such a level that competition is now not between isolated enterprises but between supply chains. Information technology management and the integration of information systems in such an environment are complex problems, aggravated by the complexity of selecting a combination of technologies to support, to the greatest possible extent, the supply chain's performance. This paper presents a decision support model based on compensatory fuzzy logic to facilitate the selection of technologies for integrating the information systems in a supply chain, and provides two examples of the impact of the model's diagnosis phase in Cuban organizations.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: An innovative method and a computer system that allows real-time text, voice and video interaction among participants and data sharing of colonoscopy exams over the Internet and implements a medical database is presented.
Abstract: Telemedicine can facilitate the examination and diagnosis of patients in locations lacking resources and medical experts. This paper presents an innovative method and a computer system that allow real-time text, voice, and video interaction among participants and data sharing of colonoscopy exams over the Internet. The proposed method implements a medical database which will be further explored using data mining methods. The functionalities and performance of the method were evaluated, on a local network, through the development of a computational system. The results validated the solution, showing its applicability to colonoscopy exams.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: The Crouch-Ritchie model is used to generate information to evaluate destination competitiveness, and the outranking method generates a preferential model and obtains a competitiveness-based ranking of the cities.
Abstract: This work develops an empirical analysis of the destination competitiveness of three main cities of Sinaloa, Mexico, using the well-known Crouch-Ritchie model. The problem is approached as a multicriteria ranking problem, with an outranking method and a multicriteria group decision support system used to generate a ranking of the main cities of Sinaloa. The Crouch-Ritchie model is used to generate the information to evaluate destination competitiveness, and the outranking method generates a preferential model and obtains a competitiveness-based ranking of the cities.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, alternatives are grouped into pseudo-indifference classes, where each alternative is indifferent to at least one of the other alternatives in the class, and the resulting relation of strict preference over pseudo-indifference classes turns out to be non-transitive.
Abstract: This work analyses decision-making situations where the value function associated with the alternatives is a random number with known distribution. The main contribution of the paper is that alternatives are grouped into pseudo-indifference classes, where each alternative is indifferent to at least one of the other alternatives in the class. However, not all elements in the set are indifferent to each other, unlike classical indifference classes. Since the resulting relation of strict preference over pseudo-indifference classes turns out to be non-transitive, it is demonstrated both in theory and by example that the allocation of alternatives into groups depends strongly on the significance level of the comparisons.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This work redefine these operators based on compensatory fuzzy logic using a linguistic definition, compatible with previous definitions of Fuzzy Mathematical Morphology.
Abstract: Mathematical Morphology is a theory based on geometry, algebra, topology and set theory, with strong application to digital image processing. This theory is characterized by two basic operators: dilation and erosion. In this work we redefine these operators based on compensatory fuzzy logic using a linguistic definition, compatible with previous definitions of Fuzzy Mathematical Morphology. A comparison to previous definitions is presented, assessing robustness against noise.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This work modified the Non-dominated Sorting Genetic Algorithm 2 (NSGA2) to apply selective pressure towards non-dominated solutions that belong to the most preferred category, and it outperforms the standard NSGA2, achieving non-dominated solutions that best match the DM's preferences.
Abstract: Nowadays, most approaches in the evolutionary multiobjective optimization literature concentrate mainly on adapting an evolutionary algorithm to generate an approximation of the Pareto frontier. However, this alone does not solve the decision problem. We present a new idea to incorporate into a MOEA the Decision Maker's (DM) preferences, expressed as a set of solutions assigned to ordered categories. We modified the Non-dominated Sorting Genetic Algorithm 2 (NSGA2) to apply selective pressure towards non-dominated solutions that belong to the most preferred category. In several instances of the project portfolio problem, our proposal outperforms the standard NSGA2, achieving non-dominated solutions that best match the DM's preferences.
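The non-dominated sorting that NSGA2 builds on rests on the Pareto dominance test. A minimal sketch of that building block (for minimization; the point set is illustrative, and the authors' preference-category pressure is not reproduced here):

```python
def dominates(p, q):
    """p Pareto-dominates q (minimization): no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def nondominated(points):
    """First front of non-dominated sorting: keep the points that
    no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

NSGA2 repeats this peeling to rank the whole population into fronts; the paper's modification additionally biases selection within fronts toward the DM's most preferred category.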

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, a logistic regression is used to estimate the probability of conformity of research grants to the financial obligations of the researcher analyzing the correlation between certain characteristics of the grant and the grant´s final status as approved or not.
Abstract: Governmental agencies, the back offices of private firms, and nongovernmental organizations experience bureaucratic processes that are often repetitive and out-of-date. These imperfections cause resource misuse and support activities that diminish the value of the process. An important element of these bureaucratic processes is checking whether certain projects approved by the office have actually been successful in their proposed objectives. Banks and credit card companies must evaluate whether borrowers have lived up to their supposed financial worthiness, tax authorities need to classify sectors of the economy and types of taxpayers for probable defaults, and research grants approved by government funding agencies should be verified for the proper use of public funds by grant recipients. In this study, logistic regression is used to estimate the probability of conformity of research grants to the financial obligations of the researcher, analyzing the correlation between certain characteristics of a grant and its final status as approved or not. The logistic equation uncovers the characteristics that are most important in judging status, and supports the analysis of results as false positives and false negatives. A ROC curve is constructed which reveals not only an optimal cutoff separating conformity from nonconformity, but also discloses weak links in the chain of activities that could be easily corrected and, consequently, public resources preserved.
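The ROC-based cutoff selection described above can be sketched in a few lines: sweep a cutoff over the model's scores and pick the one maximizing Youden's J statistic (TPR - FPR), one common notion of an optimal cutoff. The scores and labels below are hypothetical, standing in for fitted logistic-regression probabilities.

```python
def roc_points(scores, labels):
    """ROC curve points (cutoff, FPR, TPR), sweeping the cutoff
    over every distinct score."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        pts.append((cut, fp / neg, tp / pos))
    return pts

def best_cutoff(scores, labels):
    """Youden's J: the cutoff maximizing TPR - FPR."""
    return max(roc_points(scores, labels), key=lambda t: t[2] - t[1])[0]

# Hypothetical conformity probabilities and true conformity labels.
scores = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
labels = [0,   0,   1,   0,   1,   1]
cut = best_cutoff(scores, labels)
```

Grants scoring below `cut` would be flagged for review; the FPR/TPR pair at the chosen cutoff quantifies the false-positive and false-negative trade-off the abstract mentions.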

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: The binary formulation of this problem is described and an alternative formulation, called Representation by Rules, is combined with a Genetic Algorithm and a Participatory Learning System, verifying the robustness of the developed framework on a problem whose corresponding binary representation demands 405,450,000 variables.
Abstract: This paper formulates the 3D Stochastic Stowage Planning (3D SSP) problem. The key objective of 3D SSP is to minimize the number of container movements and maximize the ship's stability considering multiple scenarios. The binary formulation of this problem is described and an alternative formulation, called Representation by Rules, is combined with a Genetic Algorithm and a Participatory Learning System. The robustness of the developed framework is verified on a problem for which the corresponding binary representation demands 405,450,000 variables.
Keywords: Stochastic Stowage Planning, Container Ship, Representation by Rules, Genetic Algorithm, Participatory Learning System.
1. Introduction: Today, international sea freight container transportation and container terminals play a key role in the global transportation network. According to [21], over 60% of the world's deep-sea general cargo is transported in containers, and the routes between some countries are containerized up to 90%. Improving the operational efficiency of container terminals is essential to handle the increasing flow of containers of recent years. Reducing the time ships must spend in port makes container seaports more competitive because they can offer lower rates for loading and discharging. Therefore, an essential competitive advantage is the reduction of container ships' time in port and of the costs of the transshipment process itself. Seaport operations problems should be defined and methods of solution proposed for such tasks.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: The aim of this paper is to prove the rationality of two new fuzzy solutions to cooperative n-person games, the Fuzzy Negotiation Solution by Knowledge Engineering and Compensatory Negotiation solution by Knowledge engineering, based on bargaining over statements coming from experts.
Abstract: The aim of this paper is to prove the rationality of two new fuzzy solutions to cooperative n-person games. They are the Fuzzy Negotiation Solution by Knowledge Engineering and Compensatory Negotiation Solution by Knowledge Engineering, which are based on bargaining over statements coming from experts. The proof of rationality consists in demonstrating that the elements of crisp solutions equivalent to these fuzzy solutions satisfy Imputation, Efficiency, Symmetry, Dummy Axiom and others.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, a new paradigm in economy and finance is formed with the incorporation of new models that allow a greater degree of accuracy to the reality of the environment of organizations based on the fuzzy logic theory.
Abstract: Since the introduction of uncertainty theory, a new paradigm has formed in economics and finance with the incorporation of new models, based on fuzzy logic theory, that fit the reality of organizations' environments with a greater degree of accuracy. This article emphasizes the importance of the uncertainty present in financial markets, which has provoked an increasing need for models to determine its effect on pricing, as in the case of the futures and derivatives markets. A proposal is developed to determine the price of an exchange option by applying triangular fuzzy numbers to the exchange rate, domestic interest rate, and foreign interest rate variables, based on the classic Black-Scholes (B-S) model.
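One common way to push triangular fuzzy inputs through an option-pricing formula is alpha-cut interval arithmetic. The sketch below prices a currency call with the Garman-Kohlhagen variant of B-S and fuzzifies only the spot exchange rate (a simplification: the paper also fuzzifies both interest rates); all numeric inputs are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gk_call(S, K, rd, rf, sigma, T):
    """Garman-Kohlhagen price of a currency call (crisp inputs):
    domestic rate rd, foreign rate rf, spot S, strike K."""
    d1 = (math.log(S / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-rf * T) * norm_cdf(d1) - K * math.exp(-rd * T) * norm_cdf(d2)

def fuzzy_spot_price(tri, alpha, K, rd, rf, sigma, T):
    """Alpha-cut of the price when the spot is a triangular fuzzy
    number (lo, peak, hi); the call price is increasing in the spot,
    so each cut maps directly to an interval of prices."""
    lo, peak, hi = tri
    s_lo = lo + alpha * (peak - lo)
    s_hi = hi - alpha * (hi - peak)
    return gk_call(s_lo, K, rd, rf, sigma, T), gk_call(s_hi, K, rd, rf, sigma, T)

band = fuzzy_spot_price((19.0, 20.0, 21.0), 0.5, K=20.0, rd=0.05, rf=0.02, sigma=0.15, T=1.0)
```

At alpha = 1 the interval collapses to the crisp price at the peak; lower alpha levels widen the price band, expressing the pricing uncertainty the abstract describes.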

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: The Fuzzy Negotiation Solution and Compensatory Negotiation solution involve a quantitative index, called Good Deal Index, which is the matrix solution of a recurrent equation.
Abstract: Fuzzy Negotiation Solution by Knowledge Engineering (FNSKE) and Compensatory Negotiation Solution by Knowledge Engineering (CNSKE) are two new solution concepts for n-person cooperative games. They involve a quantitative index, called the Good Deal Index (GDI), which is the matrix solution of a recurrent equation. The existence and uniqueness of the GDI entail the existence and uniqueness of the solutions. Because of the strength of the hypotheses needed to prove uniqueness and the convergence of the algorithm, those demonstrations are made statistically, using the Strong Law of Large Numbers. The proof of existence is made using the Schauder fixed-point theorem.
Keywords: Fuzzy solution to an n-person cooperative game, Fixed point, Knowledge Engineering.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This paper studies how the semantics of RDB can be described and used in an ontology-based method for building dynamic data views, and shows that this approach increases the flexibility of information systems while decreasing their maintenance costs.
Abstract: Traditionally, data views are seen as static, syntactically correct data sets. The semantics of the data is not explicitly encoded in RDB (relational databases) but implicitly at the application level. However, the need to create context-aware browsing methods in changing scenarios demands more flexible mechanisms. Data views need to be semantically enriched to capture real-world changes. In this paper, we study how the semantics of RDB can be described and used in an ontology-based method for building dynamic data views. Our tests show that our approach increases the flexibility of information systems while decreasing their maintenance costs.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: An aggregation method to design new indicators based on the statistic and compensatory fuzzy logic approach and a new indicator to measure the IT governance level are shown.
Abstract: Most of the information needed for management decision making is essentially based on subjective and imprecise concepts, expressed primarily by "experts" in natural language or through simple indicators, and is not capable of checking the strategy in an integral way. In the present research we show two applications of compensatory fuzzy logic to solve the problem mentioned above. As the main contributions, we first show an aggregation method to design new indicators based on a statistical and compensatory fuzzy logic approach, and second, we define a new indicator to measure the IT governance level.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, a mixed research approach was used by conducting survey to 25 teachers, in order to measure the knowledge transfer made after taking a training course and conducting depth interviews with three managers.
Abstract: The present research analyzes the management of knowledge with regard to its transfer in an educational institution located in Medellin, Colombia. A mixed research approach was used, surveying 25 teachers in order to measure the knowledge transfer achieved after a training course, and conducting in-depth interviews with three managers. The dimensions of the knowledge transfer questionnaire with the highest grades were those concerning the design of the transfer process and the opportunities to apply the knowledge, rated 4.1. On the other hand, the dimension with the lowest average grade was the ability to transfer, with a score of 2.7. The findings have implications for managing knowledge so that institutions can leverage this intangible resource and begin the construction of collective knowledge, moving from information transmission to transfer, as well as to knowledge construction.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, the authors considered a model of partially mixed duopoly with conjectured variations equilibrium (CVE), where the agents' conjectures concern the price variations depending upon their production output's increase or decrease.
Abstract: In this paper, we consider a model of partially mixed duopoly with conjectured variations equilibrium (CVE). The agents’ conjectures concern the price variations depending upon their production output’s increase or decrease. We establish existence and uniqueness results for the conjectured variations equilibrium (called an exterior equilibrium) for any set of feasible conjectures. To introduce the notion of an interior equilibrium, we develop a consistency criterion for the conjectures (referred to as influence coefficients) and prove the existence theorem for the interior equilibrium (understood as a CVE with consistent conjectures). To prepare the base for the extension of our results to the case of non-differentiable demand functions, we also investigate the behavior of the consistent conjectures in dependence upon a parameter representing the demand function’s derivative with respect to the market price.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: In this article, the authors present a MILP model that supports both full (all or nothing) and partial (a certain amount between a minimum and a maximum value) resource allocation policies to projects.
Abstract: In general, the problem of R&D project portfolio selection (RDPPS) is to choose a set of project proposals that optimizes certain impact measures designated by the decision maker. In this paper we present a MILP model that incorporates the most relevant aspects of the problem found in the literature and supports both full (all or nothing) and partial (a certain amount between a minimum and a maximum value) resource allocation policies for projects. Most of the reviewed papers on RDPPS implement full resource allocation policies; a few implement partial allocation policies, but most of those present very simple models.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: This paper introduces an algorithm called Non-Outranked Ant Colony Optimization (NO-ACO) that optimizes portfolios with inter-project interactions while taking into account the DM's preferences through a priori preference articulation.
Abstract: One of the most important management issues lies in determining the best portfolio from a given set of investment proposals. This decision involves the pursuit of multiple criteria, and has commonly been addressed by implementing a two-phase procedure whose first step identifies the efficient solution space. In this paper we introduce an algorithm called Non-Outranked Ant Colony Optimization (NO-ACO) that optimizes portfolios with inter-project interactions while taking into account the DM's preferences through a priori preference articulation. Experimental tests show the advantages of our proposal over the two-phase approach. NO-ACO also performed particularly well on problems with high dimensionality.

Proceedings ArticleDOI
16 Oct 2013-Eureka
TL;DR: A hybridization of simulated annealing and variable neighborhood search to the geographic clustering problem and the results obtained show that the hybrid SA-VNS performs better than SA and VNS with respect to the compactness feature.
Abstract: In this work we present a new hybrid approach for solving the clustering problem for geographic data, which is known to be NP-hard. Two metaheuristics with proven efficiency on combinatorial optimization problems were chosen for comparison: Simulated Annealing (SA) and Variable Neighborhood Search (VNS). The proposed model is based on partitioning around medoids and on the P-median problem. Previous test runs showed satisfactory results (in terms of quality and time) for instances of 469 geographic objects, but variability in the results was detected for larger instances. In an effort to achieve better results, we have applied a hybridization of simulated annealing and variable neighborhood search to the geographic clustering problem. We considered different sizes in the test runs for distinct groups, observing that the solutions obtained with the hybrid approach, named SA-VNS, outperform SA and VNS implemented individually. Finally, to evaluate the benefits of the proposed metaheuristic, we measured the internal connection of the obtained clusters by means of the Dunn index. The results show that the hybrid SA-VNS performs better than SA and VNS with respect to compactness.
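The Dunn index used above for evaluating compactness has a direct definition: minimum between-cluster distance divided by maximum cluster diameter (higher is better). A minimal sketch on made-up 2-D points:

```python
def dunn_index(clusters):
    """Dunn index: smallest between-cluster distance divided by the
    largest within-cluster diameter; larger values indicate clusters
    that are compact and well separated."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Minimum separation: closest pair of points in different clusters.
    min_sep = min(
        dist(p, q)
        for i, ci in enumerate(clusters)
        for cj in clusters[i + 1:]
        for p in ci for q in cj
    )
    # Maximum diameter: farthest pair of points within any one cluster.
    max_diam = max(dist(p, q) for c in clusters for p in c for q in c)
    return min_sep / max_diam

# Two well-separated toy clusters (illustrative data).
score = dunn_index([[(0, 0), (0, 1)], [(5, 5), (5, 6)]])
```

Comparing this score across the partitions produced by SA, VNS, and the hybrid gives exactly the compactness comparison the abstract reports.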