
Showing papers in "Research on computing science in 2014"


Journal ArticleDOI
TL;DR: The agents were enhanced to autonomously create local power market organizations and execute the series of online power auctions using Advanced Message Queuing Protocol (AMQP).
Abstract: The increasing penetration of distributed renewable generation brings new power producers to the market (2). Rooftop photovoltaic (PV) panels allow home owners to generate more power than personally needed, and this excess production could be voluntarily sold to nearby homes, alleviating additional transmission costs especially in rural areas (24). Power is sold as a continuous quantity, and power markets involve pricing that may change on a minute-to-minute basis. Forward markets assist with scheduling power in advance (25). The speed and complexity of the calculations needed to support online distributed auctions is a good fit for intelligent agents (14). This paper describes the simulation of a two-tier double auction for short-term forward power exchanges between participants at the outer edges of a power distribution system (PDS). The paper describes the double auction algorithms and demonstrates online auction execution in a simulated distributed system of intelligent agents assisting with voltage/var control near distributed renewable generation (15). The agents were enhanced to autonomously create local power market organizations and execute the series of online power auctions using the Advanced Message Queuing Protocol (AMQP).
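The abstract does not reproduce the auction algorithms themselves; as a rough, illustrative sketch of one round of double-auction clearing (the names `Order` and `clear_double_auction` are ours, and the uniform midpoint-price rule is just one common choice, not necessarily the paper's):

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str     # agent identifier
    price: float    # price per kWh offered (bid) or asked (ask)
    qty: float      # kWh

def clear_double_auction(bids, asks):
    """Match highest bids against lowest asks; the clearing price is the
    midpoint of the last crossing pair (one common uniform-price rule)."""
    bids = sorted(bids, key=lambda o: o.price, reverse=True)
    asks = sorted(asks, key=lambda o: o.price)
    trades, last_cross = [], None
    bi = ai = 0
    while bi < len(bids) and ai < len(asks) and bids[bi].price >= asks[ai].price:
        qty = min(bids[bi].qty, asks[ai].qty)
        trades.append((bids[bi].trader, asks[ai].trader, qty))
        last_cross = (bids[bi].price, asks[ai].price)
        bids[bi].qty -= qty
        asks[ai].qty -= qty
        if bids[bi].qty == 0: bi += 1
        if asks[ai].qty == 0: ai += 1
    price = sum(last_cross) / 2 if last_cross else None
    return price, trades

# Example: two buying homes, two rooftop-PV sellers.
price, trades = clear_double_auction(
    [Order("home_A", 0.14, 3), Order("home_B", 0.11, 2)],
    [Order("pv_1", 0.09, 2), Order("pv_2", 0.12, 4)])
print(price, trades)
```

In the paper's setting, the bids and asks would come from the home and PV agents exchanging messages over AMQP rather than from in-memory lists.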

14 citations


Journal ArticleDOI
TL;DR: Verilog is a hardware description language (HDL); it was used in this work to implement and simulate these communication protocols with version 14.7 of the Xilinx ISE Design Suite.
Abstract: Currently, the serial communication protocols most used to exchange information between different electronic embedded devices are SPI and I2C. This paper describes the development and implementation of these protocols using an FPGA card. The implementation of each protocol takes into account different modes of operation, such as master/slave mode and sending or pending-data modes. For the implementation of the I2C protocol it was necessary to build a tri-state buffer, which provides a bidirectional data line for successful communication between devices and allows the resources provided by the FPGA to be exploited. Verilog is a hardware description language (HDL); it was used in this work to implement and simulate these communication protocols with version 14.7 of the Xilinx ISE Design Suite.
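The paper's implementation is in Verilog on an FPGA; purely to illustrate the mode-0 SPI transfer logic that such a master realizes in hardware, here is a hypothetical bit-banged version in Python (the `gpio` pin-driver object and pin names are assumptions, not part of the work):

```python
def spi_mode0_transfer(gpio, tx_byte):
    """Bit-bang one SPI mode-0 byte transfer (CPOL=0, CPHA=0), MSB first.
    `gpio` is a hypothetical pin driver exposing write(pin, level) / read(pin)."""
    rx_byte = 0
    gpio.write("CS", 0)                           # select the slave (active low)
    for i in range(7, -1, -1):
        gpio.write("MOSI", (tx_byte >> i) & 1)    # place data bit while SCLK is low
        gpio.write("SCLK", 1)                     # rising edge: slave samples MOSI
        rx_byte |= gpio.read("MISO") << i         # master samples MISO
        gpio.write("SCLK", 0)                     # falling edge: both sides shift
    gpio.write("CS", 1)                           # release the slave
    return rx_byte
```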

8 citations


Journal ArticleDOI
TL;DR: A system architecture based on hybrid optimization is proposed, combining mathematical programming and belief merging, aimed at reconciling the nutrition scientists' advice and policies with the user's food desires in order to generate a more agreeable menu.
Abstract: Menu planning is a process that appears to be straightforward, but many complexities arise when one tries to solve it by computational means. Although there is evidence of previous work going back 50 years, at present there is no widely known tool that can solve this task in an automated manner. Also, not all proposals deal with full recipes while also considering the user's food preferences. In this paper we propose a system architecture based on hybrid optimization: a first module based on mathematical programming, a well-known robust approach to this problem; and a second module based on belief merging, a lesser-known framework aimed at combining the nutrition scientists' advice and policies with the user's food desires. The association of numerical and symbolic approaches allows us to generate a more agreeable menu. In order to illustrate our proposal, we present a motivating example detailing the main aspects of the system.
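As a minimal sketch of the kind of formulation the first (mathematical programming) module could use, the example below encodes nutrient bounds as linear constraints and minimizes a "disagreement with preferences" cost with SciPy's linprog; all food data and targets are invented for illustration and are not the paper's model:

```python
import numpy as np
from scipy.optimize import linprog

# Columns: kcal, protein (g), fat (g) per serving -- illustrative numbers only.
foods = ["rice", "beans", "chicken", "salad"]
nutrients = np.array([[205, 4.3, 0.4],
                      [227, 15.2, 0.9],
                      [335, 38.0, 19.0],
                      [33, 2.9, 0.4]])
cost = np.array([0.30, 0.40, 1.50, 0.60])   # "disagreement" with the user's preferences

# Daily targets: 1800-2200 kcal, at least 60 g protein, at most 70 g fat.
A_ub = np.array([nutrients[:, 0], -nutrients[:, 0], -nutrients[:, 1], nutrients[:, 2]])
b_ub = np.array([2200, -1800, -60, 70])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 4)] * len(foods))
print(dict(zip(foods, res.x.round(2))))     # servings of each food
```

In the paper's architecture, the second (belief merging) module is what reconciles such numerical solutions with the nutritionist's policies and the user's desires.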

7 citations


Journal ArticleDOI
TL;DR: The results suggest that this is a feasible approach for feature selection, obtaining solutions equal or similar to the optimal solution while evaluating a relatively small fraction of the search space.
Abstract: Feature selection aims to find ways to single out the subset of features which best represents the phenomenon at hand and improves performance. This paper presents an approach based on evolutionary computation and the associative paradigm for classification. A wrapper-style search guided by a genetic algorithm uses the Hybrid Associative Classifier to evaluate candidate solutions and thus approximate the optimal feature subset for different data sets. The results suggest that this is a feasible approach for feature selection, obtaining solutions equal or similar to the optimal solution while evaluating a relatively small fraction of the search space.
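The abstract outlines a wrapper search without giving code; the sketch below shows the generic shape of such a GA-driven wrapper, with a k-NN scorer from scikit-learn standing in for the Hybrid Associative Classifier purely to keep the example self-contained (population size, operators and dataset are illustrative assumptions, not the authors' configuration):

```python
import random
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Wrapper evaluation: cross-validated accuracy on the selected columns."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)   # stand-in for the paper's classifier
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

def evolve(pop_size=20, generations=15, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

mask, score = evolve()
print("selected features:", [i for i, b in enumerate(mask) if b], "accuracy:", round(score, 3))
```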

6 citations


Journal ArticleDOI
TL;DR: This technique analyzes the geometry of the hand to isolate the vein regions and extract features (vein bifurcations and ending points) to be used in the training sets for classifiers.
Abstract: In this paper we propose a technique for both finding vein regions in thermal dorsal hand images and extracting features for biometric recognition; our technique analyzes the geometry of the hand to isolate the vein regions and extract features (vein bifurcations and ending points) to be used in the training sets for classifiers. Commonly, the extracted features are used as a geometric and descriptive representation of the vein patterns, which are matched against hand vein images in a database in order to determine/verify the person's identity. Keywords: biometry, hand vein recognition, infrared image segmentation, dorsal hand vein image processing.
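Bifurcations and ending points are commonly located on a skeletonized (one-pixel-wide) vein image with a crossing-number test over each pixel's 8 neighbors; the sketch below illustrates that generic test on a binary NumPy skeleton and is not taken from the paper:

```python
import numpy as np

def minutiae_from_skeleton(skel):
    """Classify skeleton pixels by crossing number:
    1 neighbor transition -> ending point, 3 or more -> bifurcation."""
    endings, bifurcations = [], []
    # Offsets of the 8 neighbors, visited in order around the pixel.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if not skel[r, c]:
                continue
            nb = [int(skel[r + dr, c + dc]) for dr, dc in ring]
            crossings = sum((nb[i] == 0 and nb[(i + 1) % 8] == 1) for i in range(8))
            if crossings == 1:
                endings.append((r, c))
            elif crossings >= 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```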

6 citations


Journal ArticleDOI
TL;DR: Results of queries show the feasibility of using ontological models as the supporting technology to implement ubiquitous and pervasive systems for academic environments.
Abstract: In this paper we describe an ontology model that was designed and implemented to represent academic and institutional contexts related to a research center in Mexico City. The ontology model aims at supporting logic-based query answering and reasoning about contexts such as geographical areas, time, persons, libraries, cultural and academic events, and teaching and tutoring schedules. The questions that the ontology model is capable of answering include academic issues such as tutoring and thesis supervision; questions concerning the location of people, libraries, buildings, and roads; questions regarding time, such as class and event schedules; and even questions about the food menu offered at the cafeteria of the institution. In order to evaluate the ontology model, a set of competency questions was translated into the SQWRL rule-based query language. The query results show the feasibility of using ontological models as the supporting technology to implement ubiquitous and pervasive systems for academic environments.

6 citations


Journal ArticleDOI
TL;DR: The Graphical Design Methodology (GODeM) is used to design an ontological model whose main objective is to personalize learning activities consistent with the student's learning profile.
Abstract: In this paper we present a simple and didactic methodology for designing an ontology for educational purposes. This methodology considers and incorporates the steps of the most outstanding methodologies for ontology design. Some of the reported methodologies specialize in the analysis of the knowledge domain, others in the formality of the language used to define it, and others in evaluation and documentation. The Graphical Design Methodology (GODeM) is based on the methodological principles reported by Noy & McGuinness, the OntoDesign Methontology, Enterprise Ontology, Toronto Virtual Enterprise, and graphical notations. GODeM is used to design an ontological model whose main objective is to personalize learning activities consistent with the student's learning profile.

6 citations


Journal ArticleDOI
TL;DR: A way to generate diets using genetic algorithms is presented, which considers the restrictions mentioned and also allows establishing preferences for certain food groups.
Abstract: Overweight and obesity are defined as abnormal or excessive fat accumulation that may be harmful to health and can even result in death. This problem can be reduced if a person follows a proper diet in which the consumption of kilocalories, carbohydrates, lipids and proteins per day is restricted. Given a database of foods, finding the 5 meals for each day of a week becomes a complex task. In this article a way to generate diets using genetic algorithms is presented, which considers the restrictions mentioned and also allows establishing preferences for certain food groups.
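As a hedged illustration of how such a GA's fitness function might encode the daily restrictions and food-group preferences (the food table, targets and penalty weights are invented, and the evolutionary loop itself would follow a standard GA pattern):

```python
import random

# Illustrative food table: (kcal, carbs g, lipids g, protein g, group) per portion.
FOODS = {
    "oatmeal": (150, 27, 3, 5, "grain"),
    "chicken": (335, 0, 19, 38, "meat"),
    "beans":   (227, 40, 1, 15, "legume"),
    "apple":   (95, 25, 0, 0, "fruit"),
    "yogurt":  (100, 12, 3, 6, "dairy"),
}
TARGETS = {"kcal": 2000, "carbs": 250, "lipids": 65, "protein": 75}
PREFERRED_GROUPS = {"fruit", "legume"}   # example user preference

def fitness(day):
    """day: list of 5 meals, each meal a list of food names.
    Smaller deviation from the nutrient targets and more preferred
    food groups yield a higher fitness."""
    totals = dict.fromkeys(TARGETS, 0)
    bonus = 0
    for meal in day:
        for name in meal:
            kcal, carbs, lipids, protein, group = FOODS[name]
            totals["kcal"] += kcal
            totals["carbs"] += carbs
            totals["lipids"] += lipids
            totals["protein"] += protein
            bonus += group in PREFERRED_GROUPS
    penalty = sum(abs(totals[k] - TARGETS[k]) / TARGETS[k] for k in TARGETS)
    return bonus - 10 * penalty

# A random candidate day: 5 meals of 1-3 portions each.
day = [[random.choice(list(FOODS)) for _ in range(random.randint(1, 3))] for _ in range(5)]
print(fitness(day))
```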

6 citations


Journal ArticleDOI
TL;DR: This methodology achieves the detection and identification of symmetric and asymmetric faults; the generated pattern is classified by a probabilistic neural network to determine the type of fault in the system.
Abstract: In this paper, a system is proposed for the supervision of an electrical network with dynamic load changes proposed by the IEEE. The system is composed of two stages. The detection stage uses a fuzzy logic system, and the diagnosis stage uses the Euclidean distances between lines in order to generate a pattern over the elements of the system, which is then classified by a probabilistic neural network to determine the type of fault in the system. The combination of these intelligent techniques yields a more robust and reliable system. This methodology achieves the detection and identification of symmetric and asymmetric faults.

5 citations


Journal ArticleDOI
TL;DR: This paper presents how to use reactive multi-agent systems to make decisions and what their advantages and drawbacks are compared to classical methods, and illustrates the proposal with an example application dealing with the platoon control issue.
Abstract: Decision processing is a key element in computer science, in automatic control and in robotics. The literature presents many different approaches to decision processing. These approaches generally depend both on the adopted conceptual point of view and on the application field. Some of the classical methods suffer from several problems, such as limited adaptivity or a high computational cost. In this context, reactive multi-agent systems can be good candidates to overcome some of these drawbacks. The goal of this paper is to present how to use reactive multi-agent systems to make decisions, and what their advantages and drawbacks are compared to classical methods. It also illustrates the proposal with an example application dealing with the platoon control issue.
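As a toy illustration of the reactive style the paper advocates (each vehicle reacts only to its locally perceived gap to its predecessor, with no global plan), the following sketch uses a simple proportional rule; the gains and numbers are ours, not the paper's model:

```python
def platoon_step(positions, speeds, target_gap=10.0, k=0.5, dt=0.1):
    """One reactive control step: every follower adjusts its speed from the
    locally perceived gap to its predecessor (no global plan, no communication)."""
    new_speeds = speeds[:]
    for i in range(1, len(positions)):            # index 0 is the leader
        gap = positions[i - 1] - positions[i]
        new_speeds[i] = max(0.0, speeds[i] + k * (gap - target_gap) * dt)
    new_positions = [p + v * dt for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

# Three vehicles: followers start too far behind the leader, so they speed up.
pos, spd = [40.0, 20.0, 5.0], [15.0, 15.0, 15.0]
for _ in range(5):
    pos, spd = platoon_step(pos, spd)
print([round(p, 1) for p in pos], [round(v, 1) for v in spd])
```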

5 citations


Journal ArticleDOI
TL;DR: Experimental results suggest that abductive reasoning performed by humans has the tendency to coincide with the solutions computed by the algorithm Peirce.
Abstract: Abductive reasoning algorithms formulate possible hypotheses to explain observed facts using a theory as the basis. These algorithms have been applied to various domains such as diagnosis, planning and interpretation. In general, algorithms for abductive reasoning based on logic present the following disadvantages: (1) they do not allow the explicit declaration of conditions that may affect the reasoning, such as intention, context and belief; (2) they allow little or no consideration for criteria required to select good hypotheses. Using Propositional Logic as its foundation, this study proposes the algorithm Peirce, which operates with a framework that allows one to explicitly include conditions to conduct abductive reasoning and uses a criterion to select good hypotheses that employs metrics to define the explanatory power and complexity of the hypotheses. Experimental results suggest that abductive reasoning performed by humans has the tendency to coincide with the solutions computed by the algorithm Peirce.
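The Peirce algorithm itself is not given in the abstract; the toy sketch below only illustrates the general pattern it builds on: enumerate hypothesis sets that, together with a propositional theory, entail the observations, then rank them by explanatory coverage against the number of assumptions (the rules, abducibles and ranking are invented for illustration):

```python
from itertools import combinations

# Toy theory as Horn-style rules: conclusion <- one of several premise sets.
RULES = {"wet_grass": [{"rain"}, {"sprinkler"}],
         "slippery_road": [{"rain"}]}
ABDUCIBLES = ["rain", "sprinkler"]

def derives(hypothesis, goal):
    """True if the hypothesis set, closed under RULES, contains the goal."""
    known = set(hypothesis)
    changed = True
    while changed:
        changed = False
        for head, bodies in RULES.items():
            if head not in known and any(b <= known for b in bodies):
                known.add(head)
                changed = True
    return goal in known

def abduce(observations):
    """Return candidate hypothesis sets ranked by (coverage, -size):
    explain as much as possible with as few assumptions as possible."""
    candidates = []
    for k in range(1, len(ABDUCIBLES) + 1):
        for hyp in combinations(ABDUCIBLES, k):
            covered = sum(derives(hyp, obs) for obs in observations)
            candidates.append((covered, -len(hyp), hyp))
    candidates.sort(reverse=True)
    return candidates

print(abduce(["wet_grass", "slippery_road"]))   # {"rain"} ranks first
```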

Journal ArticleDOI
TL;DR: In this paper, a method is presented for the classification of an individual into two affective states: boredom and frustration, based on a classifier based on k-NN and feature vectors generated by the preprocessing of keystroke dynamics and mouse dynamics data.
Abstract: In this paper, a method is presented for the classification of an individual into two affective states: boredom and frustration. To gather the necessary data, the individual interacts with an Intelligent Tutoring System focused on the teaching of programming languages. The method involves a classifier based on k-NN, and feature vectors generated by the preprocessing of keystroke dynamics and mouse dynamics data. Accurate results are achieved by determining relevant subsets of the initial feature set, using genetic algorithms. These subsets facilitate the training of the classifiers for each affective state.

Journal ArticleDOI
TL;DR: This article proposes a model of intelligent educational environments, as well as a conceptual architecture for systems that support this model.
Abstract: Ambient Intelligence (AmI) is an area of Computing aimed at creating technologically enriched spaces that proactively support people in their daily lives. Given the wealth of information and knowledge existing in educational settings, AmI can provide solutions that adapt to users' needs and generate benefits for the users of these environments. This article proposes a model of intelligent educational environments, as well as a conceptual architecture for systems that support this model.

Journal ArticleDOI
TL;DR: Using statistical techniques and the K-Nearest Neighbors classifier, the classification of three types of dorsal fin is obtained with an accuracy of 71.66% for K = 7, taking as reference the CICIMAR-IPN species catalog.
Abstract: The photo-identification of blue whale (Balaenoptera musculus) images is performed manually; this process classifies blue whale images by the characteristics of the dorsal fin. The features are extracted visually, which can introduce errors at classification time. In this work a blue whale image classifier is presented, using features such as color pigmentation, image background, and type of dorsal fin, among others; these common characteristics generate high statistical dependence. This statistical dependence causes the data extracted from a segmented image of the blue whale to be observed through a hyperplane. Using statistical techniques and the K-Nearest Neighbors classifier, the classification of three types of dorsal fin is obtained with an accuracy of 71.66% for K = 7, taking as reference the CICIMAR-IPN species catalog.
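The classification step reported in the abstract (K-Nearest Neighbors with K = 7 over three dorsal-fin classes) can be sketched as follows; the feature vectors here are random placeholders, whereas the real ones would come from the segmented images and the CICIMAR-IPN catalog:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Placeholder: each row is a feature vector extracted from one segmented image
# (e.g., pigmentation statistics, fin-shape descriptors); labels are fin types 0-2.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = rng.integers(0, 3, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=7)     # K = 7, as reported in the paper
knn.fit(X_train, y_train)
print("accuracy on held-out images:", knn.score(X_test, y_test))
```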

Journal ArticleDOI
TL;DR: The results obtained show that the model has excellent performance in the prediction and quality control of the industrial process studied when compared with similar expert-system techniques (ANFIS, RBFN).
Abstract: This article presents a hybrid methodology based on a type-1 singleton fuzzy model using a two-level factorial design that optimizes the expert-system model and is used for in-line inspection. The factorial design method provides the database needed to create the rule base for the fuzzy model and also generates the database to train the expert system. The proposed method has been validated in the process of verifying dimensional parameters by means of images, comparing it with the ANFIS and RBFN models, which show larger error margins when approximating the function that represents the system than the proposed model. The results obtained show that the model has excellent performance in the prediction and quality control of the industrial process studied when compared with similar expert-system techniques (ANFIS, RBFN).

Journal ArticleDOI
TL;DR: The implemented methodology allows the pathology to be identified in an unsupervised manner and in real time, operationally supporting the patient's medical triage classification.
Abstract: We apply concepts of fractal geometry (FG) and digital image processing (DIP) to characterize brain magnetic resonance angiography (MRA) images of normal patients and of patients presenting the pathology of reduced arterial caliber, with narrowing of the inner lumen of arteries. The characterization of MRA images provides knowledge for developing alternative methodologies to endow medical diagnosis support tools with intelligence. The MRA images were post-processed and the Box-Counting method was applied to obtain their fractal dimension (FD). An analysis of the space-filling capacity according to FG was developed with MATLAB V10. The results show that the implemented methodology allows the pathology to be identified in an unsupervised manner and in real time, operationally supporting the patient's medical triage classification and thus requesting the immediate intervention of a clinical expert for appropriate referral.
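A minimal version of the Box-Counting fractal dimension used in the methodology (counting occupied boxes at several scales and fitting the log-log slope) can be written in a few lines; this NumPy sketch is independent of the authors' MATLAB implementation:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count occupied boxes at several scales, then fit log N = D * log(1/s) + c."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s   # trim to a multiple of s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()              # boxes containing structure
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example on a synthetic binary image (a filled disk, dimension close to 2).
yy, xx = np.mgrid[:256, :256]
disk = ((xx - 128) ** 2 + (yy - 128) ** 2) < 80 ** 2
print(box_counting_dimension(disk))
```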

Journal ArticleDOI
TL;DR: The trainee model and the initial actions of the agent are presented, and an animated pedagogical agent to guide trainees and provide instruction is proposed, to be integrated into a virtual reality training system.
Abstract: Performing electrical tests involves high risk; therefore utility companies require highly qualified electricians. Traditionally, training on electrical tests has been based on classroom courses, and recently it has been supported by virtual reality systems. These systems have improved training and reduced training time. However, the training still depends on course schedules and instructors, and training systems are not yet adaptive. We propose a model to support adaptive and distance training. The model consists mainly of a representation of the trainees' knowledge and affect. We also propose an animated pedagogical agent to guide trainees and provide instruction. The agent has facial expressions conveying emotion and empathy to the trainee. This model is intended to be integrated into a virtual reality training system. In this paper, the trainee model and the initial actions of the agent are presented.

Journal ArticleDOI
TL;DR: This paper applies Rhetorical Structure Theory to determine the discourse structure of why-type questions and uses that structure to determine the sentiment polarity of complex why-type opinion questions that may be expressed in multiple sentences or contain mixed opinions.
Abstract: Opinion questions expect answers from opinionated data available on the social web. Opinion why-questions require answers to include reasons, elaborations, and explanations for the users' sentiments expressed in the questions. Sentiment analysis has recently been used in answering why-type opinion questions. In this paper, we propose an approach to determine the sentiment polarity of complex why-type opinion questions that may be expressed in multiple sentences or contain mixed opinions. We apply Rhetorical Structure Theory to determine the discourse structure of why-type questions. We use this structure to determine the sentiment polarity of why-type questions and conduct experiments which obtain better results than baseline average scoring methods.

Journal ArticleDOI
TL;DR: The intention is to have a system that helps detect when a person is using double meanings in short texts such as tweets, and also to build maps of Mexico showing the places where double meanings are used most frequently.
Abstract: A methodology is proposed for detecting obscene and vulgar phrases in tweets, since Mexico is one of the countries where double meanings are widely used in communication. The proposed methodology relies on a dictionary of Mexicanisms manually labeled by experts. It was possible to detect that obscene and vulgar words are the ones used most, and which states of the country use them most frequently. In addition, based on the dictionary, a set of tweets is classified; these tweets were collected by geographic zones of Mexico, under the assumption that their authors are Mexican and may therefore use double meanings. The intention is to have a system that helps detect when a person is using double meanings in short texts such as tweets, and also to build maps of Mexico showing the places where double meanings are used most frequently. Keywords: obscene words, vulgar words, albur, dictionaries, short texts.

Journal ArticleDOI
TL;DR: This paper describes a language and font-detection system for Gurmukhi and Devanagari and explains a font conversion system for converting ASCII-based text into Unicode.
Abstract: Digital text written in an Indian script is difficult to use as such, because there are a number of font formats available for typing, and these font formats are not mutually compatible. Gurmukhi alone has more than 225 popular ASCII-based fonts, whereas this figure is 180 in the case of Devanagari. To read text written in a particular font, that font must be installed on the system. This paper describes a language and font-detection system for Gurmukhi and Devanagari. It also explains a font conversion system for converting ASCII-based text into Unicode. The proposed system therefore works in two stages: the first stage suggests a statistical model for automatic language detection (i.e., Gurmukhi or Devanagari) and font detection; the second stage converts the detected text into Unicode according to the detected font. Although we could not train our system for some fonts due to the non-availability of font converters, the system and its architecture are open to accept any number of languages/fonts in the future. The existing system supports around 150 popular Gurmukhi font encodings and more than 100 popular Devanagari fonts. We have demonstrated a font-detection accuracy of 99.6% and a Unicode-conversion accuracy of 100% in all cases.
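The two-stage pipeline (statistical language/font detection followed by table-driven conversion to Unicode) can be pictured with the crude sketch below; the font profiles and mapping tables are invented placeholders, not the 150+ Gurmukhi and 100+ Devanagari encodings the system actually supports:

```python
from collections import Counter

# Stage 1: pick the font/language profile whose character distribution best
# matches the input text (a crude stand-in for the paper's statistical model).
FONT_PROFILES = {
    # Hypothetical per-font relative frequencies of a few ASCII code points.
    "GurmukhiFontX": {"A": 0.09, "q": 0.07, ";": 0.05},
    "DevanagariFontY": {"d": 0.08, "k": 0.06, "/": 0.04},
}

def detect_font(text):
    freq = Counter(text)
    total = max(len(text), 1)
    def score(profile):
        return sum(profile.get(ch, 0) * (freq[ch] / total) for ch in freq)
    return max(FONT_PROFILES, key=lambda f: score(FONT_PROFILES[f]))

# Stage 2: convert the ASCII-coded text to Unicode with the detected font's table.
MAPPING_TABLES = {
    "GurmukhiFontX": {"A": "\u0A05", "q": "\u0A24", ";": "\u0A38"},   # placeholder mappings
    "DevanagariFontY": {"d": "\u0926", "k": "\u0915", "/": "\u094D"},
}

def to_unicode(text):
    font = detect_font(text)
    table = MAPPING_TABLES[font]
    return font, "".join(table.get(ch, ch) for ch in text)

print(to_unicode("Aq;Aq"))
```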

Journal ArticleDOI
TL;DR: This article presents a computational approach that allows this type of linguistic structure to be recognized automatically in corpora from different domains.
Abstract: Fixed verbal locutions designate a particular type of fixed construction. In our approach we conceive of a fixed verbal sequence as a group of words in which at least one is a verb functioning as the nucleus of the predicate. This article presents a computational approach that allows this type of linguistic structure to be recognized automatically in corpora from different domains. In the context of this research, by "recognizing" we mean identifying the lower and upper boundaries that frame a sequence of words with a high probability of being a fixed verbal expression. Keywords: fixed verbal sequence, machine learning, lexicon.

Journal ArticleDOI
TL;DR: Brain-computer interfaces (BCI) based on electroencephalogram (EEG) are an alternative that aims to integrate people with severe motor disabilities into their environment, but they are not yet used in everyday life because the electrophysiological sources used to control them are not very intuitive.
Abstract: Brain-computer interfaces (BCI) based on electroencephalogram (EEG) are an alternative that aims to integrate people with severe motor disabilities into their environment. However, they are still not used in everyday life because the electrophysiological sources used to control them are not very intuitive. To address this problem, work has been done on classifying EEG signals recorded during imagined speech. In this work, the EEG signal sonification technique was used, which allows the EEG signal to be characterized as an audio signal. The objective is to analyze whether applying the sonification process to the EEG signal can discriminate or highlight patterns that improve the classification results for unspoken words. To this end, the signal was processed with and without sonification. Results were obtained for the 4 channels closest to the Broca and Wernicke language areas. The average accuracies for the signals without and with sonification are 48.1% and 55.88%, respectively, showing that the EEG sonification method slightly improves the classification rates. Keywords: electroencephalogram (EEG), brain-computer interfaces (BCI), sonification, imagined speech (unspoken speech), random forest.

Journal ArticleDOI
TL;DR: An ML approach to automatically annotate Spanish tweets dealing with the online reputation of politicians is described; the main finding is that a simple statistical NLP classifier without in-domain training can provide annotation as reliable as human annotators and outperform more specific resources such as lexicons or in-domain data.
Abstract: Opinion mining on Twitter has recently attracted research interest in politics using Information Retrieval (IR) and Natural Language Processing (NLP). However, obtaining domain-specific annotated data remains a costly manual step. In addition, the amount and quality of these annotations may be critical to the performance of machine learning (ML) based systems. An alternative solution is to use cross-language and cross-domain sets to simulate training data. This paper describes an ML approach to automatically annotate Spanish tweets dealing with the online reputation of politicians. Our main finding is that a simple statistical NLP classifier without in-domain training can provide annotation as reliable as human annotators and outperform more specific resources such as lexicons or in-domain data.

Journal ArticleDOI
TL;DR: HATp is able not only to provide a parameter vector that improves the search ability of PSO to find a solution, but also to enhance its performance on the spectrum sharing application problem compared with the parameter values suggested by empirical and analogical methodologies in the literature, on some problem instances.
Abstract: In this work, an experimental study is presented to evaluate the utility of the parameter vector produced by an automated tuning tool, the so-called Hybrid Automatized Tuning procedure (HATp). The experimental work uses the inertia weight and number of iterations of the PSO algorithm and compares these parameters against those obtained by tuning by analogy and by empirical studies. The task of PSO is to select users to exploit a channel concurrently as long as they satisfy the Signal-to-Interference Ratio (SINR) constraints, in order to maximize throughput; however, as the number of users increases, the interference also rises, making it more challenging for PSO to converge or to find a solution. Results show that HATp is able not only to provide a parameter vector that improves the search ability of PSO to find a solution, but also to enhance its performance on the spectrum sharing application problem compared with the parameter values suggested by empirical and analogical methodologies in the literature, on some problem instances.
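The role of the inertia weight that HATp tunes is visible in a minimal PSO implementation; the sketch below optimizes a toy sphere function rather than the paper's SINR-constrained user-selection problem, and the parameter values shown are examples, not HATp's output:

```python
import random

def pso(objective, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization (minimization).
    w is the inertia weight tuned in the paper; c1/c2 weight the personal/global pulls."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest, objective(gbest)

# Toy objective: sphere function; the paper's objective would be throughput under SINR constraints.
best, value = pso(lambda x: sum(v * v for v in x), dim=3)
print(best, value)
```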

Journal ArticleDOI
TL;DR: A model to solve the problem of semantic similarity between texts of different lengths was developed using logistic regression from the Weka toolkit, and the results obtained on the data provided within the SemEval 2014 framework were good for two types of corpora.
Abstract: In this work a model is developed to solve the problem of semantic similarity between texts of different lengths. We propose extracting lexical features, knowledge-based features and corpus-based features in order to develop a supervised learning model. The model was developed using logistic regression from the Weka toolkit. The results obtained on the data provided within the SemEval 2014 framework were good for two types of corpora.

Journal ArticleDOI
TL;DR: A design of a wireless sensor network for a body area network (BAN) is presented, for continuous monitoring (electroencephalogram (EEG), electrocardiogram (ECG)) and for event monitoring (electrogastrogram (EGG)) defined by the expert.
Abstract: Epilepsy is considered a disorder in which a person has certain episodes of disturbed brain activity. There are different studies that allow monitoring physiological signals associated with a possible epileptic episode. However, these require wired connections, and therefore monitoring is restricted by factors such as limited physical movement and loss of information when sensors are disconnected. Hence, there is no system to monitor patients in a mobile (ambulatory) and wireless fashion. This paper describes the design of a wireless sensor network for a body area network (BAN) for continuous monitoring (electroencephalogram (EEG), electrocardiogram (ECG)) and event monitoring (electrogastrogram (EGG)) defined by the expert. The system is based on a TDMA protocol that allows sensor nodes to continuously transmit their information and event nodes to access the network when needed. The system is studied mathematically and numerical results are verified through discrete event simulations.
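The TDMA idea described in the abstract, with fixed slots for the continuous-monitoring nodes and on-demand access for event nodes, can be sketched as a simple frame-allocation routine; the slot counts and node names below are made up for illustration:

```python
def build_tdma_frame(continuous_nodes, event_nodes, frame_slots=10):
    """Assign fixed slots to continuous sensors (EEG/ECG) and leave the
    remaining slots as an on-demand pool for event sensors (EGG)."""
    if len(continuous_nodes) > frame_slots:
        raise ValueError("frame too short for the continuous-monitoring nodes")
    frame = {slot: node for slot, node in enumerate(continuous_nodes)}
    free = list(range(len(continuous_nodes), frame_slots))
    # Event nodes take free slots only when they have something to report.
    for node in event_nodes:
        if node["has_event"] and free:
            frame[free.pop(0)] = node["name"]
    return frame

frame = build_tdma_frame(
    continuous_nodes=["EEG_1", "EEG_2", "ECG_1"],
    event_nodes=[{"name": "EGG_1", "has_event": True},
                 {"name": "EGG_2", "has_event": False}])
print(frame)   # slot -> transmitting node for this frame
```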

Journal ArticleDOI
TL;DR: A brief review of geographic partitioning is presented, together with the process and implementation that make it possible to display the results of partitioning options on maps; the system accepts a text document consisting of a list of the graphical objects and the group to which they belong.
Abstract: For Geographic Partitioning (GP) problems, groupings of objects are sought according to geographic conditions. Generally, the results of geographic partitioning in other works have been shown textually or as a graph; however, for problems where geographic data are being grouped, graphically representing the resulting partitions is a complicated but necessary process. This implies that the grouping uses resources suited to this purpose, such as computational geometry or interface tools to a Geographic Information System (GIS). In this work we present a brief review of geographic partitioning and the process and implementation that make it possible to display the results of partitioning options on maps. For this purpose, a set of modules that communicate with a GIS was designed. This process usually begins with the selection of data and continues with the choice of a geographic partitioning algorithm from different categories (compact, homogeneous for population variables, P-median, multiobjective, Lagrangian relaxation, homogeneous in the number of objects, etc.). The partitioning result generates output files; in addition, the system accepts a text document consisting of a list of the graphical objects and the group to which they belong. The final step consists of an interface to a GIS in order to distinguish the results of the different groupings on maps. We have called this system the Graphical Interface System for Partitioning (SIGP). Keywords: partitioning, Geographic Information System (GIS).

Journal ArticleDOI
TL;DR: This paper focuses on identifying and providing, according to the experiences and requirements of users, the best practices for Web development using the Grails and Django Web frameworks to develop more interactive and efficient Web applications.
Abstract: A best practice is a technique or an important aspect that helps to develop Web applications more efficiently. Best practices on Web frameworks reduce development time and effort, saving money, increasing code quality, and enabling the creation of friendly and interactive applications. This paper focuses on identifying and providing, according to the experiences and requirements of users, the best practices for Web development using the Grails and Django Web frameworks. With these best practices, developers can build more interactive and efficient Web applications, integrating features of Web 2.0 technologies with less effort as well as exploiting the frameworks' benefits. As proof of concept we have developed a set of Web applications using best practices such as HTML5, Comet support, AJAX, ORM, and extensibility, among others.

Journal ArticleDOI
TL;DR: A method to determine political profiles of a voters population through the application of learning algorithms and game theory is presented, which allows for a zero-sum game approach to distribute the potential number of votes among the involved candidates.
Abstract: We present a method to determine political profiles of a voter population through the application of learning algorithms and game theory. Our process began with a collection of surveys gathered in two ways: from a website and directly from voters. Having a linear hierarchy on the attributes expressing the political preferences of the voters allows us to apply a zero-sum game approach to distribute the potential number of votes among the involved candidates. We have applied our model to do electoral prospection in a case study of the Mayor's election of our city, which took place in July 2013. The results were quite encouraging. Our approach also has the potential to be applied to new product advertisement campaigns.

Journal ArticleDOI
TL;DR: This work aims to develop a Head-driven Phrase Structure Grammar (HPSG) representing all the different forms of Arabic coordination, based on a proposed typology; the grammar was validated with the Linguistic Knowledge Building (LKB) system, which is designed for grammars specified in Type Description Language (TDL).
Abstract: Researchers working in Natural Language Processing (NLP) face many problems at different levels. The main problem encountered is the treatment of complicated phenomena, essentially coordination. This phenomenon is very important: it is very frequent in various corpora and has always been a center of interest in NLP. Unfortunately, the few works addressing this structure have treated only some coordinated forms, using hand-constructed parsers which are generally heavy. In this context, our work aims to develop a Head-driven Phrase Structure Grammar (HPSG) representing all the different forms of Arabic coordination, based on a proposed typology. The constructed grammar was validated with the Linguistic Knowledge Building (LKB) system, which is designed for grammars specified in Type Description Language (TDL).