
Showing papers in "Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial" in 2008


Journal ArticleDOI
TL;DR: AWSC (Automatic Web Service Classification), an automatic classifier of Web service descriptions, exploits the connections between the category of a Web service and the information commonly found in standard descriptions.
Abstract: A Web service is Web-accessible software that can be published, located and invoked using standard Web protocols. Automatically determining the category of a Web service, from several pre-defined categories, is an important problem with many applications such as service discovery, semantic annotation and service matching. This paper describes AWSC (Automatic Web Service Classification), an automatic classifier of Web service descriptions. AWSC exploits the connections between the category of a Web service and the information commonly found in standard descriptions. In addition, AWSC bridges different styles of describing services by combining text mining and machine learning techniques. Experimental evaluations show that this combination helps our classification system improve its precision. We also report an experimental comparison of AWSC with related work.

56 citations


Journal ArticleDOI
TL;DR: This paper focuses on the implementation aspects of the multiagent system and especially on the T-Agent development, going from the theoretical agent model to a concrete agent implementation.
Abstract: In this paper a multiagent Tourism Recommender System is presented. This system has a multiagent architecture and one of its main agents, the Travel Assistant Agent (T-Agent), is modelled as a graded BDI agent. The graded BDI agent model allows us to specify an agent architecture able to deal with environment uncertainty and with graded mental attitudes. We focus on the implementation aspects of the multiagent system and especially on the T-Agent development, going from the theoretical agent model to a concrete agent implementation.

28 citations


Journal Article
TL;DR: This paper presents a methodology covering the conceptualization, analysis, design, coding and testing phases of agent-based engineering systems, founded on the MultiAgent Systems for INtegrated Automation (MASINA) methodology, developed to specify multi-agent systems in industrial automation environments.
Abstract: This article presents a methodology covering the conceptualization, analysis, design, coding and testing phases of agent-based engineering systems, founded on the MultiAgent Systems for INtegrated Automation (MASINA) methodology, developed to specify multi-agent systems in industrial automation environments. The proposed methodology uses the Unified Modeling Language (UML), widely used to model software systems, and the Object System Development Technique (TDSO), a tool for the formal specification of object-oriented models. Following the methodological guidelines for the specification of engineering systems, MASINA begins with a conceptualization phase that identifies which system components will be treated as agents and proposes the corresponding multi-agent system architecture. These agents and their interrelations are specified and implemented in the remaining phases of the proposed methodology, using UML diagrams in the analysis and design phases and TDSO templates in the design phase.

25 citations


Journal ArticleDOI
TL;DR: The use of the formal derivative of Pearson correlation for gradient-based optimization of data models is demonstrated, and a versatile method is presented to perform faithful multi-dimensional scaling from a high-dimensional space of source data to a low-dimensional target space.
Abstract: In biomedical analytics, one of the major criteria for characterizing similarities between measured data items is correlation. We demonstrate the use of the formal derivative of Pearson correlation for gradient-based optimization of data models. Firstly, individual data attributes can be rated according to their impact on pairwise data relationships, analogous to the variance measure in Euclidean space. Secondly, a versatile method is presented to perform faithful multi-dimensional scaling from a high-dimensional space of source data to a low-dimensional target space, driven by maximization of correlation between the distances of static source data and adaptive target vectors. As shown for mass spectroscopy data, a combination of attribute rating and data visualization helps reveal interesting data properties.

23 citations
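The derivative the paper refers to can be written in closed form. The sketch below is an illustration of that idea, not the authors' code: it computes the Pearson correlation of two sequences and its analytic gradient with respect to one of them. In the paper's multi-dimensional scaling setting, one sequence would hold the fixed source-space distances and the other the adaptive target-space distances.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sum((xi - mx) ** 2 for xi in x)
    c = sum((yi - my) ** 2 for yi in y)
    return a / math.sqrt(b * c)

def pearson_grad_y(x, y):
    """Analytic partial derivatives of pearson(x, y) with respect to each y[k]:
    dr/dy_k = (x_k - mean(x)) / sqrt(B*C) - r * (y_k - mean(y)) / C."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sum((xi - mx) ** 2 for xi in x)
    c = sum((yi - my) ** 2 for yi in y)
    r = a / math.sqrt(b * c)
    return [(xk - mx) / math.sqrt(b * c) - r * (yk - my) / c
            for xk, yk in zip(x, y)]
```

With this gradient, a simple ascent step (`y[k] += lr * g[k]`) increases the correlation; the paper applies the same principle to distance vectors rather than raw coordinates.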


Journal ArticleDOI
TL;DR: This paper presents the winning strategy in the 2nd Spanish ART competition together with an analysis of the factors that have contributed to this success and the results obtained using the same strategy.
Abstract: In multi-agent systems where agents compete among themselves, trust is an important topic to keep in mind. The ART Testbed Competition was created with the objective of objectively evaluating the different trust strategies that agents can use in this kind of environment. In this paper we present the winning strategy in the 2nd Spanish ART competition, together with an analysis of the factors that have contributed to this success. We also present the results obtained using the same strategy in the 2nd International ART competition.

13 citations


Journal ArticleDOI
TL;DR: A new method to evaluate distances in such spaces is presented that naturally extends the application of many clustering algorithms to these cases; both new metrics allow diverse algorithms to easily find clusters of arbitrary shape.
Abstract: Many successful clustering techniques fail to handle data with a manifold structure, i.e. data that is not shaped in the form of compact point clouds, forming arbitrary shapes or paths through a high-dimensional space. In this paper we present a new method to evaluate distances in such spaces that naturally extends the application of many clustering algorithms to these cases. Our algorithm has two stages. Following ISOMAP, it searches for sets of locally-uniform manifolds, which may be disjoint. These manifolds are then connected using two slightly different strategies. We compare these strategies with each other and with a state-of-the-art algorithm on three artificial problems, obtaining encouraging results. Both new metrics allow diverse algorithms to easily find clusters of arbitrary shape.

6 citations
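The manifold-aware distances the paper builds on can be illustrated with a small sketch (hypothetical code, not the authors' algorithm): connect each point to its k nearest neighbours, then take shortest-path distances through the resulting graph, so that distances follow the shape of the data rather than cutting straight across it.

```python
import math

def geodesic_distances(points, k=2):
    """Approximate manifold (geodesic) distances: link each point to its
    k nearest neighbours, then run Floyd-Warshall shortest paths over the
    resulting weighted graph."""
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        # sorted by distance; [0] is the point itself, so skip it
        nbrs = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))[1:k + 1]
        for j in nbrs:
            w = math.dist(points[i], points[j])
            d[i][j] = min(d[i][j], w)
            d[j][i] = min(d[j][i], w)
    for m in range(n):          # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d
```

On points sampled along a semicircle, the geodesic distance between the endpoints follows the arc and is therefore much larger than the straight-line chord, which is exactly the property that lets downstream clustering algorithms separate elongated shapes.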


Journal ArticleDOI
TL;DR: A novel learning architecture for sequences analyzed on a short-term basis, but not assuming stationarity within each frame is proposed, which can be useful for a wide range of applications.
Abstract: Hidden Markov models have been found very useful for a wide range of applications in artificial intelligence. The wavelet transform arises as a new tool for signal and image analysis, with a special emphasis on nonlinearities and nonstationarities. However, learning models for wavelet coefficients have been mainly based on fixed-length sequences. We propose a novel learning architecture for sequences analyzed on a short-term basis, but not assuming stationarity within each frame. Long-term dependencies are modeled with a hidden Markov model which, in each internal state, deals with the local dynamics in the wavelet domain using a hidden Markov tree. The training algorithms for all the parameters in the composite model are developed using the expectation-maximization framework. This novel learning architecture can be useful for a wide range of applications. We detail experiments with real data for speech recognition, where recognition rates were better than those of state-of-the-art technologies for this task.

6 citations


Journal ArticleDOI
TL;DR: This paper proposes a trust model that uses the discrepancy between the information provided by other agents and the agent's own experience to anticipate their actions, instead of treating that discrepancy as a sign of dishonesty and distrust, and proposes a cognitive model based on motivational attitudes to implement adaptive behaviors.
Abstract: Trust modelling is a challenging issue due to the dynamic nature of distributed systems and the unreliability of self-interested agents. In this context, the Agent Reputation and Trust (ART) Testbed has been used to compare trust models in two Spanish competitions in 2006 and 2007 and in several international competitions. In this paper we describe the model we presented to the Spanish competitions. We propose a trust model that uses the discrepancy between the information provided by other agents and the agent's own experience to anticipate their actions, instead of treating that discrepancy as a sign of dishonesty and distrust. In addition, we propose a cognitive model based on motivational attitudes to implement adaptive behaviors. Various implementations of this trust model have participated in the competitions; one won the first Spanish competition in 2006, and another won a combined game that included the participants of the two Spanish competitions.

5 citations


Journal ArticleDOI
TL;DR: This paper uses Defeasible Logic Programming (DeLP), a general-purpose defeasible argumentation formalism based on logic programming, to model the notion of trust and helps identify antagonism among sources of news and facilitates the analysis of opposing positions.
Abstract: Deciding whether to trust an information source on the Web has been recognized as one of the main problems in today's Information Society. In particular, assessing the credibility of news is a major research challenge. Typically, criteria such as freshness, relevance and viewer profile have been used by news services to rank news. However, these services do not deal with credibility from a qualitative perspective, and do not provide mechanisms to cope with controversial news reports. To fill this gap, this paper proposes a novel framework that brings the notions of trust and pluralism into play. In our proposal, we integrate dialectical reasoning into a news recommender system. The system is based on a set of basic principles characterizing the nature of trust. We use Defeasible Logic Programming (DeLP), a general-purpose defeasible argumentation formalism based on logic programming, to model the notion of trust. Our approach helps identify antagonism among sources of news and facilitates the analysis of opposing positions.

5 citations


Journal ArticleDOI
TL;DR: A novel model of an artificial immune system (AIS) based on the process that T cells undergo is presented and used for solving constrained (numerical) optimization problems, and a new mutation operator is developed which incorporates knowledge of the problem.
Abstract: In this paper, we present a novel model of an artificial immune system (AIS) based on the process that T cells undergo. The proposed model is used for solving constrained (numerical) optimization problems. The model operates on three populations, each with a different role: Virgins, Effectors and Memory. The model also dynamically adapts the tolerance factor in order to improve the exploration capabilities of the algorithm. We further develop a new mutation operator which incorporates knowledge of the problem. We validate our proposed approach with a set of test functions taken from the specialized literature, and we compare our results with Stochastic Ranking (an approach representative of the state of the art in the area) and with a previously proposed AIS.

5 citations


Journal ArticleDOI
TL;DR: This article presents the Social Behavior Simulator, a tool for studying human social interaction in a dyadic interactive encounter, which uses data obtained from a real person to drive a simulated interlocutor.
Abstract: This article presents the Social Behavior Simulator, a tool for studying people's social interaction in a dyadic interactive encounter. What distinguishes this model from previously presented ones is that it uses data obtained from a real person by means of a survey. We introduce a possible systematization of the interactive sequence, a set of categories relevant to a social situation, and a probabilistic decision-making model based on internal variables and on the sex of the person with whom the simulator interacts. The results indicate that the key factor in interaction with simulated beings is immersion, and that immersion depends mainly on familiarity and on the program's interface. The use of simulated beings may, in the near future, revolutionize research methods in the human sciences thanks to their methodological, economic and clinical advantages, but multidisciplinary cooperation is needed to achieve this.

Journal ArticleDOI
TL;DR: These coefficients determine the independence among the results provided by the system, the depth of the reasoning followed to reach those results, and the amount of knowledge associated with the reasoning process.
Abstract: The evaluation of a Knowledge-Based System is a phase of the development cycle in this paradigm that commonly seeks a system with correct syntax, valid semantics and a high degree of usability and utility. However, this stage does not assess the organization of the Knowledge Base, which may affect the efficiency of the resulting application regardless of whether it satisfies the aspects considered in its evaluation. This work proposes a set of coefficients to measure, quantitatively, the structure of a rule-based Expert System. These coefficients determine the independence among the results provided by the system, the depth of the reasoning followed to reach those results, and the amount of knowledge associated with the reasoning process. The value obtained for each coefficient constitutes the basis for interpreting the analyzed characteristics, making it possible to predict the quality of the system under study.

Journal ArticleDOI
TL;DR: This work introduces a modified SVM classifier created using multiple hyperplanes valid only at small temporal intervals (windows), and learns all hyperplanes in a global way, minimizing a cost function that evaluates the error committed by this family of local classifiers plus a measure associated with the VC dimension of the family.
Abstract: In this work we propose an adaptive classification method able both to learn and to follow the temporal evolution of a drifting concept. For that purpose we introduce a modified SVM classifier, created using multiple hyperplanes, each valid only at small temporal intervals (windows). In contrast to other strategies proposed in the literature, our method learns all hyperplanes in a global way, minimizing a cost function that evaluates the error committed by this family of local classifiers plus a measure associated with the VC dimension of the family. We also show how the idea of slowly changing classifiers can be applied to non-linear stationary concepts, with results similar to those obtained with normal SVMs using Gaussian kernels.

Journal ArticleDOI
TL;DR: An evolutionary approach to descriptor selection aimed at physicochemical property prediction is presented: a genetic algorithm with a fitness function based on decision trees, which evaluates the relevance of a set of descriptors.
Abstract: Feature selection methods seek a subset of the features or variables in a data set such that these features are the most relevant for predicting a target value. In the chemoinformatics context, determining the most significant set of descriptors is of great importance due to its contribution to improving ADMET prediction models. In this paper, an evolutionary approach to descriptor selection aimed at physicochemical property prediction is presented. In particular, we propose a genetic algorithm with a fitness function based on decision trees, which evaluates the relevance of a set of descriptors. Other fitness functions, based on multivariate regression models, were also tested. The performance of the genetic algorithm as a feature selection technique was assessed for predicting logP (octanol-water partition coefficient), using an ensemble of neural networks for the prediction task. The results showed that the evolutionary approach using decision trees is a promising technique for this bioinformatic application.
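A genetic algorithm over descriptor subsets can be sketched as follows. This is a generic illustration under assumed parameters, not the paper's implementation: individuals are feature bitmasks, and the `fitness` callable stands in for the decision-tree-based relevance score the authors use.

```python
import random

def ga_select_descriptors(n_features, fitness, pop_size=20, gens=40,
                          p_mut=0.05, seed=0):
    """Minimal generational GA over feature bitmasks. `fitness` maps a
    tuple of 0/1 genes to a score to be maximized (the paper scores a
    subset via decision trees; any callable works here)."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features))
           for _ in range(pop_size)]
    best = max(pop, key=fitness)            # keep the best mask seen so far
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > fitness(best):
            best = pop[0]
        parents = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)   # single-point crossover
            child = a[:cut] + b[cut:]
            child = tuple(1 - g if rng.random() < p_mut else g
                          for g in child)        # bit-flip mutation
            children.append(child)
        pop = children
    return best
```

Swapping the toy fitness for one that trains a decision tree on the selected descriptors and returns its validation accuracy recovers the spirit of the paper's approach.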

Journal ArticleDOI
TL;DR: This paper discusses and proposes the adjustments needed to a canonical genetic algorithm, mainly regarding the fitness function and the techniques for guaranteeing convergence toward multiple solutions.
Abstract: Pattern recognition in network traffic is one of the fundamental components of intrusion detection systems. This work studies the possibilities of applying a genetic algorithm to obtain rules for recognizing normal traffic instances. The proposed approach differs from other works, which seek to obtain the traffic patterns of anomalous instances. This paper discusses and proposes the adjustments needed to a canonical genetic algorithm, mainly regarding the fitness function and the techniques for guaranteeing convergence toward multiple solutions.

Journal ArticleDOI
TL;DR: This paper applies clustering techniques to obtain coarse-grained subcategorization classes from an annotated corpus of Spanish, then evaluates these classes and uses them to learn a classifier that assigns subcategorization frames to the verbs of previously unseen sentences.
Abstract: In this paper we introduce a method for automatically assigning subcategorization frames to previously unseen verbs of Spanish, as an aid to syntactic analysis. Since there is no consensus on the classes of subcategorization frames, we combine supervised and unsupervised learning. We apply clustering techniques to obtain coarse-grained subcategorization classes from an annotated corpus of Spanish, then evaluate these classes and finally use them to learn a classifier that assigns subcategorization frames to the verbs of previously unseen sentences.

Journal ArticleDOI
TL;DR: The Agent Reputation and Trust (ART) Testbed initiative was launched with the goal of establishing a testbed for agent reputation- and trust-related technologies and several issues have arisen that require some attention.
Abstract: The Agent Reputation and Trust (ART) Testbed initiative was launched with the goal of establishing a testbed for agent reputation- and trust-related technologies. This initiative has led to the delivery of a flexible, modular prototype platform to the community. After two years and four successful competitions, several issues have arisen that require attention in order to maintain the focus of the testbed as a research-oriented tool, as well as the interest of the participants. This paper presents these issues and suggests some possible modifications that we think can improve the testbed.

Journal ArticleDOI
TL;DR: In this paper, a knowledge-based greedy seeding procedure was used for creating the initial population, motivated by the expectation that the seeding will speed up the GA by starting the search in promising regions of the search space.
Abstract: In this paper, the two-dimensional strip packing problem with 3-stage level patterns is tackled using genetic algorithms (GAs). We evaluate the usefulness of a knowledge-based greedy seeding procedure for creating the initial population, motivated by the expectation that the seeding will speed up the GA by starting the search in promising regions of the search space. An analysis of the impact of the seeded initial population is offered, together with a complete study of the influence of these modifications on the genetic search. The results show that an appropriate seeding of the initial population outperforms existing GA approaches on all problem instances and for all metrics used; in fact, it represents the new state of the art for this problem.
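The greedy seeding idea can be illustrated with a classic level-packing heuristic. The sketch below is a hypothetical example (the abstract does not specify the exact procedure): First-Fit Decreasing Height builds a level layout whose encoding could seed a GA's initial population with a reasonably good starting solution.

```python
def ffdh_levels(rects, strip_width):
    """Greedy level packing (First-Fit Decreasing Height): sort rectangles
    (width, height) by decreasing height, place each on the first level with
    enough remaining width, and open a new level otherwise. Returns the
    total strip height of the resulting layout."""
    levels = []  # list of (remaining_width, level_height) pairs
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        for i, (rem, lh) in enumerate(levels):
            if w <= rem:                     # fits on an existing level
                levels[i] = (rem - w, lh)
                break
        else:                                # no level fits: open a new one
            levels.append((strip_width - w, h))
    return sum(lh for _, lh in levels)
```

For instance, packing rectangles of sizes (3,4), (3,4), (4,2) and (2,2) into a strip of width 6 yields two levels of heights 4 and 2, a total height of 6, which here matches the area lower bound.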

Journal ArticleDOI
TL;DR: This article presents a system for processing ultrasonic signals captured through an array of 2 transmitters and 2 receivers.
Abstract: A review of the main works on recognition of basic object shapes in a structured environment shows that most of them use time-domain features of the captured echoes. Although frequency analysis has been quite useful for processing acoustic signals in the audible band, it has not played the same role in ultrasonics applied to robotics, owing to the use of narrow-bandwidth transducers. Likewise, the growing use of the Wavelet Transform, a tool for analyzing the frequency components of a signal over time, does not yet have a wide niche in ultrasonic developments. This article therefore proposes a system for processing ultrasonic signals captured through an array of 2 transmitters and 2 receivers, such that by extracting different features from the time domain, the Fourier spectra and the Wavelet coefficients, selecting the most suitable ones by Principal Component Analysis (PCA), and then using them to train specialized Neural Networks (ANNs), it is possible to recognize planes, corners, edges and cylinders in a single scan with a high success rate.

Journal ArticleDOI
TL;DR: The exploration system has been extensively tested in both simulated and real environments and has proved to work correctly, achieving complete coverage in a fully autonomous way.
Abstract: This thesis develops an efficient behavior-based navigation system for the complete exploration of totally or partially unknown environments by an autonomous mobile agent. The exploration system has been extensively tested in both simulated and real environments and has proved to work correctly, achieving complete coverage in a fully autonomous way.

Journal ArticleDOI
TL;DR: The concept of reliability as a generalization of trust is introduced, and Fuzzy Contextual Filters (FCF) are presented as reliability modeling methods loosely based on system identification and signal processing techniques.
Abstract: Trust modelling is widely recognized as an aspect of essential importance in the construction of agents and multi-agent systems (MAS). As a consequence, several trust formalisms have been developed over the last years. All of them have, in our opinion, a limitation: they can determine the trustworthiness or untrustworthiness of the assertions expressed by a given agent, but they don't supply mechanisms for correcting this information in order to extract some utility from it. To overcome this limitation, we introduce the concept of reliability as a generalization of trust, and present Fuzzy Contextual Filters (FCF) as reliability modeling methods loosely based on system identification and signal processing techniques. Finally, we illustrate their applicability to the appraisal variance estimation problem in the Agent Reputation and Trust (ART) testbed.


Journal Article
TL;DR: A multi-agent environment uses ontologies to facilitate the integration of Semantic Web Services and Intelligent Agents.
Abstract: This thesis develops a framework that uses agent and Semantic Web Services technologies to build applications that can cope with the dynamism of the Web while benefiting from features such as autonomy, learning and reasoning. This is where Ontological Engineering becomes relevant: ontologies are the components that allow communication between agents and Web Services, located at different levels of abstraction, to flow smoothly and without misinterpretation. The architecture of the developed framework consists, fundamentally, of a multi-agent environment, a set of knowledge bases, and several interfaces that allow the system to communicate effectively with the identified external entities, namely Web Services and service providers, service-consuming entities (users), and developers. To achieve this combination, the framework takes an ontology-centred approach, with ontologies as the facilitating technology that enables seamless communication between agents and services.

Journal ArticleDOI
TL;DR: This article introduces the basics for understanding the mechanisms of Argument Theory Change and reifies them using Defeasible Logic Programming, generalizing the technique to handle extended arguments, i.e., arguments that also contain strict rules.
Abstract: In this article we introduce the basics for understanding the mechanisms of Argument Theory Change. In particular, we reify it using Defeasible Logic Programming. In this formalism, knowledge bases are represented through defeasible logic programs. The main change operation we define over a defeasible logic program is a special kind of revision that inserts a new argument and then modifies the resulting program, seeking the argument's warrant. Since the notion of argument refers to a set of defeasible rules, we generalize this technique in order to handle extended arguments, i.e., arguments that also contain strict rules. Hence, revision using extended arguments allows us to consider program-independent arguments, which brings about new issues. A single notion of minimal change is analyzed, which aims to keep the contents of the program as intact as possible. Finally, a brief discussion of the relation between our approach and the basic theory of belief revision is presented, along with a description of other possible (more complex) minimal change principles.

Journal Article
TL;DR: This article addresses pattern classification problems in which type I errors (false positives) and type II errors (false negatives) must be considered separately, optimizing both with a multi-objective evolutionary algorithm.
Abstract: Pattern classification problems seek to minimize the number of misclassified patterns (classification error). However, many real applications must account separately for type I errors (false positives) and type II errors (false negatives), which makes the problem more complex, since attempting to minimize one of them makes the other grow. Moreover, one of these error types may at times be more important than the other, and a compromise must be sought that minimizes the more important of the two. Despite the importance of type II errors, most classification methods only take the overall classification error into account. This work proposes optimizing both types of classification error using a multi-objective algorithm in which each error type and the network size are objectives of the evaluation (fitness) function. A modified version of the G-Prop method for designing and optimizing multilayer perceptrons with an evolutionary algorithm has been used to simultaneously optimize the neural network structure and the type I and II errors. Owing to the computational cost of running an evolutionary algorithm for neural network design, parallelization using the island model is proposed as a way of distributing the load over a heterogeneous network.
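The trade-off described in this abstract can be made concrete. The minimal sketch below (illustrative only; the paper evolves multilayer perceptrons with a modified G-Prop) computes the two error rates as separate objectives and tests Pareto dominance between candidate classifiers, which is the comparison a multi-objective algorithm performs.

```python
def error_rates(y_true, y_pred):
    """Type I rate = false positives / actual negatives;
    type II rate = false negatives / actual positives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return fp / neg, fn / pos

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

A multi-objective evolutionary algorithm would keep the set of non-dominated (type I, type II, network size) tuples rather than collapsing them into a single error figure.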