
Showing papers in "International Journal on Artificial Intelligence Tools in 1998"


Journal ArticleDOI
TL;DR: It was found that the use of a cultural framework to support self-adaptation in Evolutionary Programming can produce substantial performance improvements over population-only systems, as expressed in terms of system success ratio, execution CPU time, and mean best solution for a given set of 34 function minimization problems.
Abstract: Cultural Algorithms are computational self-adaptive models which consist of a population and a belief space. The problem-solving experience of individuals selected from the population space by the acceptance function is generalized and stored in the belief space. This knowledge can then control the evolution of the population component by means of the influence function. Here, we examine the role that different forms of knowledge can play in the self-adaptation process within cultural systems. In particular, we compare various approaches that use normative and situational knowledge in different ways to guide the function optimization process. The results in this study demonstrate that Cultural Algorithms are a naturally useful framework for self-adaptation and that the use of a cultural framework to support self-adaptation in Evolutionary Programming can produce substantial performance improvements over population-only systems, as expressed in terms of (1) system success ratio, (2) execution CPU time, and (3) convergence (mean best solution) for a given set of 34 function minimization problems. The nature of these improvements and the type of knowledge that is most effective in producing them depend on the problem's functional landscape. In addition, it was found that the same held true for the population-only self-adaptive EP systems: each level of self-adaptation (component, individual, and population) outperformed the others for problems with particular landscape features.

101 citations
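
For readers new to the model, the population/belief-space loop described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the acceptance function keeps the best individuals, the belief space generalizes them into per-dimension normative intervals, and the influence function scales mutation by those intervals; all parameter values are assumptions.

```python
import random

def cultural_algorithm(f, dim, lo, hi, pop_size=50, n_accepted=10, gens=200):
    """Minimize f over [lo, hi]^dim with a population guided by a belief space."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    norm = [(lo, hi)] * dim                    # belief space: normative intervals
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[:n_accepted]               # acceptance function
        # generalize the elite's experience into normative knowledge
        norm = [(min(x[d] for x in elite), max(x[d] for x in elite))
                for d in range(dim)]
        # influence function: mutation steps scaled by the normative ranges
        children = []
        for x in pop:
            children.append([min(hi, max(lo, x[d] + random.gauss(
                0.0, 1e-9 + 0.1 * (norm[d][1] - norm[d][0])))) for d in range(dim)])
        pop = sorted(pop + children, key=f)[:pop_size]
    return min(pop, key=f)

best = cultural_algorithm(lambda x: sum(v * v for v in x), dim=5, lo=-5.0, hi=5.0)
```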


Journal ArticleDOI
TL;DR: These algorithms are based on the same principle: exploit minimal supports as AC-6 and PC-{5|6} do, but without recording them; for path-consistency, this new approach is shown to significantly outperform existing algorithms.
Abstract: Recently, efficient algorithms have been proposed to achieve arc- and path-consistency in constraint networks. For example, for arc-consistency, there are linear time algorithms (in the size of the problem) which are efficient in practice (e.g. AC-6 and AC-7). The best path-consistency algorithm proposed is PC-{5|6}, a natural generalization of AC-6 to path-consistency. While its theoretical complexity is the best, experiments clearly show that it is not very efficient in practice. In this paper, we propose two algorithms, one for arc-consistency, AC-8, and one for path-consistency, PC-8. These algorithms are based on the same principle: exploit minimal supports as AC-6 and PC-{5|6} do, but without recording them. While for AC-8 this approach is of limited interest, we show that for path-consistency the new approach significantly outperforms existing algorithms.

45 citations
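
The "minimal supports without recording them" idea can be pictured with a simplified arc-consistency routine: a support for each value is searched for on demand instead of being stored in support lists. This is a hedged sketch of the general propagation scheme, not the published AC-8; the data layout (domains as sets, arcs as predicate-labelled pairs) is assumed.

```python
from collections import deque

def arc_consistency(domains, constraints):
    """Prune value a from D(i) when no b in D(j) satisfies the constraint;
    supports are searched on demand rather than recorded."""
    queue = deque(constraints)                 # arcs as (i, j, predicate)
    incoming = {}                              # arcs whose second variable is v
    for arc in constraints:
        incoming.setdefault(arc[1], []).append(arc)
    while queue:
        i, j, pred = queue.popleft()
        removed = {a for a in domains[i] if not any(pred(a, b) for b in domains[j])}
        if removed:
            domains[i] -= removed
            if not domains[i]:
                return False                   # domain wipe-out: inconsistent
            queue.extend(incoming.get(i, ()))  # revisit arcs that relied on D(i)
    return True

# two variables that must differ, with the arc posted in both directions
doms = {"x": {1, 2}, "y": {2}}
arc_consistency(doms, [("x", "y", lambda a, b: a != b),
                       ("y", "x", lambda a, b: a != b)])   # doms["x"] -> {1}
```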


Journal ArticleDOI
TL;DR: The share-confidence framework for knowledge discovery from databases is proposed, addressing the problem of mining characterized association rules from market basket data (i.e., itemsets); it is also shown how characterized itemsets can be generalized according to concept hierarchies associated with the characteristic attributes.
Abstract: We propose the share-confidence framework for knowledge discovery from databases which addresses the problem of mining characterized association rules from market basket data (i.e., itemsets). Our goal is to not only discover the buying patterns of customers, but also to discover customer profiles by partitioning customers into distinct classes. We present a new algorithm for classifying itemsets based upon characteristic attributes extracted from census or lifestyle data. Our algorithm combines the Apriori algorithm for discovering association rules between items in large databases with the AOG algorithm for attribute-oriented generalization in large databases. We show how characterized itemsets can be generalized according to concept hierarchies associated with the characteristic attributes. Finally, we present experimental results that demonstrate the utility of the share-confidence framework.

32 citations
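
The first ingredient named above, Apriori-style frequent itemset discovery, is compact enough to sketch. This is the generic textbook algorithm under assumed inputs (transactions as sets, a relative support threshold); the characterized-itemset classification and AOG generalization steps are not reproduced here.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise discovery of frequent itemsets (with their support counts)."""
    level = {frozenset([i]) for t in transactions for i in t}
    frequent, k = {}, 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        level_freq = {c: n for c, n in counts.items()
                      if n / len(transactions) >= min_support}
        frequent.update(level_freq)
        k += 1
        # join: unions of frequent itemsets that yield size-k candidates
        cands = {a | b for a, b in combinations(level_freq, 2) if len(a | b) == k}
        # prune: every (k-1)-subset of a candidate must itself be frequent
        level = {c for c in cands
                 if all(frozenset(s) in level_freq for s in combinations(c, k - 1))}
    return frequent

txns = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}]
apriori(txns, min_support=0.5)        # includes frozenset({"bread", "milk"}): 2
```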


Journal ArticleDOI
TL;DR: A method for tackling the course timetable construction problem for academic departments is presented, based on Constraint Logic Programming (CLP) for the early pruning of the search space and on intelligent heuristics that guide the search towards nearly optimum solutions.
Abstract: The course timetable construction is a procedure that every academic department has to carry out at least twice annually, and more often if some of the requirements change. These requirements indicate that a collection of elements must be taken into account in order for an acceptable solution to be found. They come either from the inherent constraints of the problem or from the involved parties, namely teachers and students. A requirement that is very difficult to satisfy is that of optimality, which means that the constructed timetable should be the best among the legal ones, according to some quantified quality criteria. In this paper, a method for tackling the course timetable construction problem for academic departments is presented, which is based on Constraint Logic Programming (CLP) for the early pruning of the search space and on the usage of intelligent heuristics to guide the search towards nearly optimum solutions. A specific system is presented, named ACTS (Automated Course Timetabling System), which has been implemented in the ECLiPSe language. This system is currently in use by the Department of Informatics of the University of Athens to aid semester course timetable construction.

17 citations
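
To make the constrain-and-prune idea concrete, here is a skeletal assignment search in plain Python: slots are pruned as soon as a hard constraint breaks, and the search backtracks. The paper's actual system works in ECLiPSe with CLP, heuristics and quality criteria, none of which is reproduced; the conflict-map input format is an assumption.

```python
def timetable(courses, slots, conflicts):
    """Assign a slot to each course so that conflicting courses never share one."""
    assignment = {}

    def consistent(course, slot):
        return all(assignment.get(other) != slot
                   for other in conflicts.get(course, ()))

    def search(remaining):
        if not remaining:
            return dict(assignment)
        course, *rest = remaining
        for slot in slots:
            if consistent(course, slot):
                assignment[course] = slot      # tentative assignment
                result = search(rest)
                if result is not None:
                    return result
                del assignment[course]         # backtrack
        return None                            # prune: no legal slot left

    return search(list(courses))

timetable(["ai", "db", "os"], ["mon9", "mon11"],
          {"ai": {"db"}, "db": {"ai"}})        # e.g. {'ai': 'mon9', 'db': 'mon11', 'os': 'mon9'}
```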


Journal ArticleDOI
TL;DR: A fuzzy Petri net model for fuzzy rule-based reasoning is proposed to bring together possibilistic entailment and fuzzy reasoning to handle uncertain and imprecise information.
Abstract: In this paper, a fuzzy Petri net model for fuzzy rule-based reasoning is proposed to bring together possibilistic entailment and fuzzy reasoning to handle uncertain and imprecise information. The three key components of our fuzzy rule-based reasoning (fuzzy propositions, truth-qualified fuzzy rules, and truth-qualified fuzzy facts) can be formulated as fuzzy places, uncertain transitions, and uncertain fuzzy tokens, respectively. Four types of uncertain transitions (inference, aggregation, duplication, and aggregation-duplication transitions) are introduced to match the mechanism of fuzzy rule-based reasoning. A reasoning algorithm based on fuzzy Petri nets is also presented to improve the efficiency of fuzzy rule-based reasoning. The reasoning algorithm is consistent with not only the rule-based reasoning but also the execution of Petri nets.

16 citations
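
Two of the four transition types can be illustrated directly. The min-conjunction, certainty scaling, and max-aggregation below are common fuzzy-reasoning conventions assumed for the sketch; the paper's exact operators may differ.

```python
def fire_inference(input_truths, certainty):
    # inference transition: the output token carries the conjunction (min)
    # of the input places' truth degrees, scaled by the rule's certainty
    return min(input_truths) * certainty

def aggregate(token_truths):
    # aggregation transition: several rules concluding the same place
    return max(token_truths)

# R1: A and B -> C (certainty 0.9), R2: D -> C (0.7); facts A=0.8, B=0.6, D=0.5
truth_c = aggregate([fire_inference([0.8, 0.6], 0.9),
                     fire_inference([0.5], 0.7)])   # max(0.54, 0.35) = 0.54
```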


Journal ArticleDOI
TL;DR: This paper proposes a means for schema versioning in a temporal, object-oriented data model called TOODM, which incorporates the concept of time sequences to store histories of a schema's properties.
Abstract: This paper proposes a means for schema versioning in a temporal, object-oriented data model called TOODM. A data definition language (DDL) is developed to describe evolving schema definitions. This DDL incorporates the concept of time sequences to store histories of a schema's properties. Update and retrieval operators for the manipulation of the schema are also developed. The data model has been implemented as a graphical front-end to POSTGRES.

9 citations
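
A time sequence of the kind mentioned above is essentially a sorted history of (timestamp, value) pairs with an as-of lookup. The class below is an invented illustration of that concept, not TOODM's DDL.

```python
import bisect

class TimeSequence:
    """History of one schema property as (valid_from, value) pairs."""
    def __init__(self):
        self._times, self._values = [], []

    def update(self, t, value):
        # record a new version of the property, valid from time t
        i = bisect.bisect_right(self._times, t)
        self._times.insert(i, t)
        self._values.insert(i, value)

    def as_of(self, t):
        # version in force at time t (None before the first version)
        i = bisect.bisect_right(self._times, t) - 1
        return self._values[i] if i >= 0 else None

attrs = TimeSequence()
attrs.update(1995, ("name", "salary"))
attrs.update(1997, ("name", "salary", "dept"))
attrs.as_of(1996)                        # -> ("name", "salary")
```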


Journal ArticleDOI
J. Sukarno Mertoguno1
TL;DR: This paper describes an approach that uses multi-agent theory to construct an adaptive knowledge base system; a frame-based (graph) knowledge representation has been chosen.
Abstract: This paper describes an approach that uses multi-agent theory to construct an adaptive knowledge base system. To represent the knowledge, a frame-based (graph) knowledge representation has been chosen. The driving force of this study is the intention of having a distributed (possibly across the net) adaptive knowledge base. The challenge of developing an adaptive knowledge base lies in how to evolve the knowledge (adaptability) and how to control that evolution (maintaining the quality of the knowledge). In this paper, the manifestation of both of these issues in our model is addressed.

9 citations


Journal ArticleDOI
TL;DR: The model is characterized by its strong support of association types and its incorporation of temporal knowledge rules for specifying temporal and other semantic constraints associated with object classes and their temporal object instances; an underlying temporal algebra, TA-algebra, and some implementation techniques are also described.
Abstract: There has been a considerable amount of work on object-oriented databases, active databases, and deductive databases. The common objective of these efforts is to produce highly intelligent and active systems for supporting the next generation of database applications. These future systems must be capable of capturing the concepts of time and managing not just temporal data but temporal knowledge expressed by knowledge rules. In this paper, we describe our efforts on a temporal object-oriented knowledge model, OSAM*/T, its associated temporal query language, OQL/T, an underlying temporal algebra, TA-algebra, and some implementation techniques. In addition to the features of the traditional object-oriented paradigm, the model is characterized by its strong support of association types and its incorporation of temporal knowledge rules for specifying temporal and other types of semantic constraints associated with object classes and their temporal object instances. The query language features pattern-based specification of temporal object associations, which allows complex queries with various time constraints to be formulated in a relatively simple way. The temporal algebra provides a set of primitive operators for manipulating homogeneous and/or heterogeneous patterns of temporal object associations, thus providing the needed mathematical foundation for processing and optimizing temporal queries. The implementation techniques include a Delta-Instance and Multi-Snapshot Storage Model, as well as data partitioning and clustering schemes for storage management of temporal knowledge bases.

8 citations


Journal ArticleDOI
TL;DR: The aim of the proposed method is to find a minimum set of fuzzy rules that can correctly classify all training patterns; computer simulation results demonstrate its effectiveness.
Abstract: This paper proposes a GA method for choosing an appropriate set of fuzzy rules for classification problems. The aim of the proposed method is to find a minimum set of fuzzy rules that can correctly classify all training patterns. The number of inference rules and the shapes of the membership functions in the antecedent part of the fuzzy rules are determined by the genetic algorithm. The real numbers in the consequent parts of the fuzzy rules are obtained through the use of the descent method. A fitness function is used to maximize the number of correctly classified patterns and to minimize the number of fuzzy rules. A solution obtained by the genetic algorithm is a set of fuzzy rules whose fitness is determined by these two objectives, making rule selection a combinatorial optimization problem. In order to demonstrate the effectiveness of the proposed method, computer simulation results are shown.

6 citations
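
The two-objective fitness can be written down directly. The sketch assumes triangular membership functions and a simplified winner-take-all classifier (the paper instead attaches real-valued consequents tuned by a descent method); the weights w_correct and w_rules are invented.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def firing(rule, x):
    # product of antecedent memberships; rule = (list of (a, b, c), class label)
    antecedents, _ = rule
    s = 1.0
    for xi, (a, b, c) in zip(x, antecedents):
        s *= tri(xi, a, b, c)
    return s

def classify(rules, x):
    # simplified winner-take-all over rule firing strengths
    return max(rules, key=lambda r: firing(r, x))[1]

def fitness(rules, patterns, w_correct=10.0, w_rules=1.0):
    # GA fitness: reward correct classifications, penalize rule count
    correct = sum(classify(rules, x) == y for x, y in patterns)
    return w_correct * correct - w_rules * len(rules)
```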


Journal ArticleDOI
TL;DR: This paper presents a new method for learning feature weights in a similarity function from given similarity information, applying genetic algorithms to learn the weights from two kinds of such information.
Abstract: When employing a similarity function to measure the similarity between two cases, one large problem is how to determine the feature weights. This paper presents a new method for learning feature weights in a similarity function from given similarity information. The similarity information can be divided into two kinds: one is qualitative similarity information, which represents the similarities between cases; the other is relative similarity information, which represents the relation between the similarities of two case pairs that share a common case. We apply genetic algorithms to learn feature weights from these two kinds of information respectively. The proposed genetic algorithms are applicable to both linear and nonlinear similarity functions. Our experiments show that the learned weights remain good even when the given similarity information includes errors.

3 citations
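
For the relative-similarity case, a small GA sketch follows: each chromosome is a weight vector, and fitness counts how many of the given orderings the induced similarity satisfies. The linear similarity form, the triple encoding of relative information, and the GA operators are all assumptions; features are taken as normalized to [0, 1].

```python
import random

def sim(w, x, y):
    # weighted linear similarity between cases x and y
    return sum(wi * (1.0 - abs(a - b)) for wi, a, b in zip(w, x, y))

def fitness(w, triples):
    # triples (p, q, r) assert that q is more similar to p than r is
    return sum(sim(w, p, q) > sim(w, p, r) for p, q, r in triples)

def learn_weights(triples, n_features, pop_size=40, gens=100):
    pop = [[random.random() for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: -fitness(w, triples))
        parents = pop[:pop_size // 2]            # truncation selection
        pop = parents + [
            [max(0.0, (a + b) / 2 + random.gauss(0, 0.05))   # crossover + mutation
             for a, b in zip(random.choice(parents), random.choice(parents))]
            for _ in range(pop_size - len(parents))]
    return max(pop, key=lambda w: fitness(w, triples))
```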


Journal ArticleDOI
TL;DR: A method for knowledge-based fuzzy image segmentation, description, and recognition is introduced, motivated by automated medical MRI segmentation, where the well-known standard methods have proven insufficient to solve the task.
Abstract: In this work, a method for knowledge-based fuzzy image segmentation, description, and recognition is introduced. The basic idea comes from the field of automated medical MRI segmentation, where the well-known standard methods have proven insufficient to solve the task. Therefore, a method especially suited to the problems of vagueness in medical imaging has been developed. Besides the central aspect of improved segmentation procedures, the development has a general impact on the conventional model of image analysis, which includes object description and recognition.

Journal ArticleDOI
TL;DR: The proposed mechanism exploits a neural network's ability to learn the functional mapping between input and output to estimate interconnection wire-length, and its results are found to be superior to those obtained using the Bounding Rectangle procedure.
Abstract: A neural network is used as a tool for estimating interconnection wire-length in the VLSI standard cell placement problem. Conventional methods for estimating the interconnection wire-length, viz. the Bounding Rectangle method, provide inaccurate estimates, do not depict the interconnection procedure in a layout, and separate the routing and placement tasks. The proposed mechanism exploits the neural network's ability to learn the functional mapping between input and output in order to estimate the interconnection wire-length. Experiments were performed for different numbers of cells with varying complexity of interconnections. In all cases, the performance of the neural network was found to be superior to the results obtained using the Bounding Rectangle procedure.
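
For reference, the baseline the comparison is made against is easy to state: a net's wire-length is estimated by the half-perimeter of the smallest rectangle enclosing its pins. A minimal sketch (pins as (x, y) pairs is an assumed representation):

```python
def bounding_rectangle_wirelength(pins):
    """Half-perimeter of the smallest axis-aligned rectangle enclosing the pins."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

bounding_rectangle_wirelength([(0, 0), (3, 4), (1, 2)])   # -> 3 + 4 = 7
```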

Journal ArticleDOI
TL;DR: An object-oriented lexical representation language that encodes linguistic and semantic information uniformly as classes and objects is described, together with an efficient bottom-up parsing method for UCG using the selection sets technique.
Abstract: This paper describes an object-oriented lexical representation language based on Unification Categorial Grammar (UCG) that encodes linguistic and semantic information uniformly as classes and objects, and an efficient bottom-up parsing method for UCG using the selection sets technique. The lexical representation language, implemented in the logic and object-oriented programming language LIFE, introduces several new information sharing mechanisms to enable natural, declarative, modular and economical construction of large and complex computational lexicons. The selection sets are deduced from a transformation between UCG and Context-Free Grammar (CFG) and are used to reduce the search space of the table-driven algorithm. Experimental tests on a spoken English corpus show that the hierarchical lexicon achieves a dramatic reduction in redundant information and that selection sets significantly improve UCG parsing, with polynomial time complexity.

Journal ArticleDOI
TL;DR: A novel neural network algorithm is developed that provides a methodology for selecting nodes in a meaningful way from the infinite set of possibilities and synthesizes an n-node single hidden layer network; empirical and analytical results strongly indicate an O(1/n) mean squared training error bound under certain assumptions.
Abstract: A study of the function approximation capabilities of single hidden layer neural networks strongly motivates the investigation of constructive learning techniques as a means of realizing established error bounds. Learning characteristics employed by constructive algorithms provide ideas for development of new algorithms applicable to the function approximation problem. In addition, constructive techniques offer efficient methods for network construction and weight determination. The development of a novel neural network algorithm, the Constructive Locally Fit Sigmoids (CLFS) function approximation algorithm, is presented in detail. Basis functions of global extent (piecewise linear sigmoidal functions) are locally fit to the target function, resulting in a pool of candidate hidden layer nodes from which a function approximation is obtained. This algorithm provides a methodology of selecting nodes in a meaningful way from the infinite set of possibilities and synthesizes an n-node single hidden layer network, with empirical and analytical results that strongly indicate an O(1/n) mean squared training error bound under certain assumptions. The algorithm operates in polynomial time in the number of network nodes and the input dimension. Empirical results demonstrate its effectiveness on several multidimensional function approximation problems relative to contemporary constructive and nonconstructive algorithms.
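
A generic constructive loop conveys the flavor of such algorithms: grow the single hidden layer one sigmoidal node at a time, picking each node for how well it explains the current residual, and refit the linear output weights at every step. This is not the published CLFS procedure (which locally fits piecewise linear sigmoids to the target); the random candidate sampling and all parameters are assumptions.

```python
import numpy as np

def constructive_fit(X, y, max_nodes=20, tol=1e-4, n_candidates=50):
    """Grow a single-hidden-layer network node by node on data X (n, d), y (n,)."""
    rng = np.random.default_rng(0)
    H = np.ones((len(X), 1))                      # hidden outputs; starts as bias
    for _ in range(max_nodes):
        w = np.linalg.lstsq(H, y, rcond=None)[0]  # refit linear output weights
        resid = y - H @ w
        if np.mean(resid ** 2) < tol:
            break
        # sample candidate sigmoid nodes; keep the one most aligned with residual
        cands = [1.0 / (1.0 + np.exp(-(X @ rng.normal(size=X.shape[1])
                                       + rng.normal())))
                 for _ in range(n_candidates)]
        best = max(cands, key=lambda h: abs(np.dot(h - h.mean(), resid)))
        H = np.column_stack([H, best])
    return H, np.linalg.lstsq(H, y, rcond=None)[0]

X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]
H, w = constructive_fit(X, y)
train_mse = float(np.mean((y - H @ w) ** 2))      # shrinks as nodes are added
```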

Journal ArticleDOI
TL;DR: An agent-oriented language for the development of multi-agent systems, called ALL (Agent Level Language), integrates object-oriented and rule-based programming to simplify the definition of agent systems and the management of interactions between agents.
Abstract: To facilitate the success of agent technology, we need suitable tools both to reduce the effort of developing agent systems and to obtain efficient implementations of them. This paper presents an agent-oriented language for the development of multi-agent systems, called ALL (Agent Level Language), that aims to offer both features. On the one hand, ALL integrates object-oriented and rule-based programming to simplify the definition of agent systems and the management of interactions between agents. On the other hand, ALL can be translated into C++, Common Lisp and Java code and can encapsulate objects written in these three languages, allowing the reuse of a large amount of pre-existing software and the realization of efficient applications in a wide range of domains. ALL rules derive from YACC rules and simplify the management of agent interaction because they are suitable for parsing messages and because they can be reused thanks to object-oriented inheritance.

Journal ArticleDOI
TL;DR: The paper specializes the discussion to a proposed neuro-fuzzy multi-agent architecture, which is then applied to design the local path planning system of an indoor mobile robot.
Abstract: In this paper, some fundamental issues of modern multi-agent robot architectures are discussed. It is argued that the multi-agent approach provides the necessary flexibility and adaptivity for such architectures, and that the primary issue in designing a multi-agent robot architecture is the selection of the granularity level, i.e., the decision to decompose the overall desired functionality physically or across tasks. It is explained why different decomposition grains (physical components, tasks, or hybrids) are needed at the various system levels. This granularity decision is made on the basis of specific criteria of control localization, knowledge decoupling and interaction minimization, so as to identify the decision points of the overall functionality. These criteria lead to a dual composition-decomposition relation, which provides a good basis for system scaling. The paper specializes the discussion to a proposed neuro-fuzzy multi-agent architecture, which is then applied to design the local path planning system of an indoor mobile robot.

Journal ArticleDOI
TL;DR: The domain of transportation planning was chosen to illustrate the proposed concept of a DSS generator for unstructured problems, and the constraint logic programming approach to network flow problems is presented.
Abstract: In this paper we propose a model of a decision support system (DSS) generator for unstructured problems. The model is developed within the constraint logic programming (CLP) paradigm. At the center of the generator there is an ontology defining the concepts and relationships necessary and sufficient to describe the domain to be reasoned about, in a manner suitable for a particular class of tasks. The constraint solver of the constraint logic programming host language has to be extended with constraints which are relevant to the domain studied but cannot be found among the general constraints provided by the solver. The domain of transportation planning was chosen to illustrate the proposed concept of a DSS generator for unstructured problems. In this case we need to extend the constraint solver with constraint manipulation techniques specific to network flow problems. This paper presents our constraint logic programming approach to network flow problems in detail.

Journal ArticleDOI
TL;DR: A temporal mediator provides a simple interface that supports uniform access to heterogeneous temporal databases, selecting suitable conversion functions to convert responses to the desired time granularities.
Abstract: In order to support uniform access to heterogeneous temporal information, we introduce the concept of a temporal mediator. A temporal mediator consists of three components: (i) a repository for windowing functions and conversion functions, (ii) a time granularity thesaurus and (iii) a query interpreter. There are two types of windowing functions: one associates each time point to a set of tuples, and the other associates each tuple to a set of time points. A conversion function transforms information in terms of one time granularity into that in terms of another time granularity. The time granularity thesaurus stores the knowledge about time granularities (e.g., names of time granularities and relationships among them). Users pose queries using the windowing functions and in terms of desired time granularities. (A query language, which can be used to form such queries, is given in the paper.) To answer such a user query, the query interpreter first employs the windowing functions together with the time granularity thesaurus to retrieve needed temporal data from the underlying databases and then uses the time granularity thesaurus to select suitable conversion functions which convert the responses to the desired time granularities. Thus, a temporal mediator provides a simple interface that supports uniform access to heterogeneous temporal databases.
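
The conversion-function idea can be illustrated with a toy example: values recorded at a fine granularity are mapped onto a coarser one by bucketing and aggregating. The function shape and the (timestamp, value) representation are assumptions, not the paper's interface.

```python
import datetime
from collections import defaultdict

def convert(series, to_unit, aggregate=sum):
    """Convert (timestamp, value) pairs to a coarser granularity; to_unit
    maps a timestamp to its coarse unit, e.g. a date to (year, month)."""
    buckets = defaultdict(list)
    for t, v in series:
        buckets[to_unit(t)].append(v)
    return {unit: aggregate(vs) for unit, vs in sorted(buckets.items())}

daily = [(datetime.date(1998, 1, d), 1.0) for d in range(1, 32)]
convert(daily, lambda d: (d.year, d.month))      # -> {(1998, 1): 31.0}
```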

Journal ArticleDOI
TL;DR: This paper shows how various issues in strategy learning are affected by the nature of performance tasks, problem solvers, and learning environments.
Abstract: Problem solvers employ strategies in searching for solutions to given problem instances. Strategies have traditionally been designed by experts using prior knowledge and refined manually using trial and error. Recent attempts to automate these processes have produced strategy-learning systems. This paper shows how various issues in strategy learning are affected by the nature of performance tasks, problem solvers, and learning environments. Surveyed learning systems are grouped by the commonality of their approaches into four general architectures.

Journal ArticleDOI
TL;DR: This paper proposes a technique called structure subtraction to construct a set of candidates for adding literals, single or multiple; it can be employed in any ILP system using top-down specialization and is not restricted to relational domains.
Abstract: One remarkable progress of recent research in machine learning is inductive logic programming (ILP). In most ILP systems, clause specialization is one of the most important tasks. Usually, clause specialization is performed by adding one literal at a time using hill-climbing heuristics. However, single-literal addition can get trapped in local optima when more than one literal must be added at a time to increase the accuracy. Several techniques have been proposed for this problem, but they are restricted to relational domains. In this paper, we propose a technique called structure subtraction to construct a set of candidates for adding literals, single or multiple. This technique can be employed in any ILP system using top-down specialization and is not restricted to relational domains. A theory revision system is described to illustrate the use of structure subtraction.

Journal ArticleDOI
TL;DR: Though developed for Greek, the presented scheme can easily be transferred to other languages and might prove useful in the automatic processing of special kinds of documents which require the extraction of time information.
Abstract: This paper describes a temporal parser (TEMPO) introducing a semantic approach to the detection and analysis of quantifiers of time within simple texts. TEMPO processes expressions which implicitly contain a date, like "last Sunday", "two days ago" etc., and extracts the exact date. It uses a keyword-driven parsing technique to spot the quantifiers, then separates segments of the sentence around the keywords and passes them to a semantic DCG parser. Though developed for Greek, the presented scheme can easily be transferred to other languages. TEMPO might prove useful in the automatic processing of special kinds of documents which require the extraction of time information, such as applications, legal documents, contracts, etc.
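
The date-extraction step can be pictured with a toy resolver for the two expression families the abstract quotes. It is only a sketch: TEMPO spots keywords and hands sentence segments to a semantic DCG parser over Greek text, none of which is reproduced here; the word lists and reference date are invented.

```python
import datetime

WORD_NUM = {"one": 1, "two": 2, "three": 3}
WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve(expr, today):
    """Resolve 'N days ago' and 'last <weekday>' to an exact date."""
    words = expr.lower().split()
    if len(words) == 3 and words[1] == "days" and words[2] == "ago":
        return today - datetime.timedelta(days=WORD_NUM[words[0]])
    if len(words) == 2 and words[0] == "last" and words[1] in WEEKDAYS:
        back = (today.weekday() - WEEKDAYS.index(words[1])) % 7 or 7
        return today - datetime.timedelta(days=back)
    return None                                  # not a recognized expression

today = datetime.date(1998, 6, 17)               # a Wednesday
resolve("two days ago", today)                   # -> 1998-06-15
resolve("last Sunday", today)                    # -> 1998-06-14
```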