Showing papers in "International Journal of Knowledge-based and Intelligent Engineering Systems in 2006"


Journal ArticleDOI
TL;DR: This paper proposes an efficient sequential access pattern mining algorithm, known as CSB-mine (Conditional Sequence Base mining algorithm), which operates directly on the conditional sequence bases of each frequent event and thereby eliminates the need for constructing WAP-trees.
Abstract: Sequential access pattern mining discovers interesting and frequent user access patterns from web logs. Most previous studies have adopted Apriori-like sequential pattern mining techniques, which face the problem of requiring expensive multiple scans of the database. More recent algorithms based on the Web Access Pattern tree (or WAP-tree) can run an order of magnitude faster than traditional Apriori-like sequential pattern mining techniques. However, the conditional search strategies in WAP-tree based mining algorithms require the re-construction of large numbers of intermediate conditional WAP-trees during the mining process, which is also very costly. In this paper, we propose an efficient sequential access pattern mining algorithm, known as CSB-mine (Conditional Sequence Base mining algorithm). The proposed CSB-mine algorithm operates directly on the conditional sequence bases of each frequent event, which eliminates the need for constructing WAP-trees. This can improve the efficiency of the mining process significantly compared with WAP-tree based mining algorithms, especially when the support threshold becomes smaller and the size of the database grows larger. The paper discusses the CSB-mine algorithm and its performance, as well as a sequential access-based web recommender system that incorporates CSB-mine for web recommendations.
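
As a rough illustration of the conditional-sequence-base idea (our own sketch, not the authors' implementation; the toy web-log sequences and the first-occurrence suffix-projection rule are assumptions), the following Python fragment grows frequent sequential patterns by recursively projecting suffix databases instead of building intermediate WAP-trees:

```python
# A minimal sketch of conditional-sequence-base mining in the spirit of
# CSB-mine (illustrative reconstruction, not the authors' exact algorithm).
from collections import Counter

def mine(sequences, min_sup, prefix=(), patterns=None):
    """Recursively grow frequent sequential patterns from conditional
    sequence bases instead of intermediate WAP-trees."""
    if patterns is None:
        patterns = {}
    # Count each event once per sequence (support = #sequences containing it).
    counts = Counter()
    for seq in sequences:
        counts.update(set(seq))
    for event, sup in counts.items():
        if sup < min_sup:
            continue
        pattern = prefix + (event,)
        patterns[pattern] = sup
        # Conditional sequence base: the suffix after the FIRST occurrence
        # of `event` in every sequence that contains it.
        csb = [seq[seq.index(event) + 1:] for seq in sequences if event in seq]
        csb = [s for s in csb if s]
        if csb:
            mine(csb, min_sup, pattern, patterns)
    return patterns

logs = [list("abdac"), list("eaebcac"), list("babfaec"), list("afbacfc")]
print(mine(logs, min_sup=3))
```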

36 citations


Journal ArticleDOI
TL;DR: An incremental, domain-independent learning methodology is proposed, modelled over a multi-agent system that crawls the Web and composes knowledge structures (ontologies) from the interrelation of several automatically obtained taxonomies of terms, according to the user's interests.
Abstract: Accessing up-to-date information in a fast and easy way implies the necessity of information management tools to explore and analyse the huge number of available electronic resources. The Web offers a large amount of valuable information for every possible domain, but its human-oriented representation and its size make any kind of centralised computer-based processing difficult and extremely time-consuming. In this paper, a combination of distributed AI and knowledge acquisition techniques is proposed to tackle this problem. In particular, we have designed an incremental and domain-independent learning methodology modelled over a multi-agent system that crawls the Web, composing knowledge structures (ontologies) from the interrelation of several automatically obtained taxonomies of terms according to the user's interests. Moreover, the obtained ontologies are used to represent, in a structured way, the currently available web resources for the corresponding domain. The paper also presents examples of the potential results over medical and technological domains and compares the results, whenever possible, against publicly available taxonomic web search engines, obtaining in all cases a considerable improvement.

34 citations


Journal ArticleDOI
TL;DR: The proposed Genetic Perceptual Model is a combination of frequency sensitivity, luminance sensitivity and contrast masking, enabling us to shape the watermark according to the cover image.
Abstract: In this paper, we present a method for developing a Genetic Perceptual Model (GPM) applicable to a watermarking system. The proposed technique exploits the characteristics of the human visual system using a Genetic Programming (GP) approach. We employ a tradeoff between watermark robustness and imperceptibility as the optimization criterion in the GP search. The resultant GPM is a combination of frequency sensitivity, luminance sensitivity and contrast masking, enabling us to shape the watermark according to the cover image. Our investigations have shown that the evolved GPM provides the maximum allowable imperceptible alterations to the Discrete Cosine Transform coefficients of a cover image. Comparative studies in terms of watermark imperceptibility and bit correct ratio performance have been carried out, and the performance of the GPM has been analyzed for various watermarking schemes.
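
A minimal sketch of the kind of fitness criterion described, assuming PSNR for imperceptibility and bit correct ratio for robustness with an invented weighting; the paper's actual GP fitness function may differ:

```python
# Hedged sketch of the robustness/imperceptibility tradeoff used as a GP
# fitness criterion; the alpha weighting and PSNR normalisation are assumptions.
import numpy as np

def fitness(watermarked, original, extracted_bits, embedded_bits, alpha=0.5):
    # Imperceptibility: peak signal-to-noise ratio between cover and marked image.
    mse = np.mean((watermarked.astype(float) - original.astype(float)) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else 100.0
    # Robustness: bit correct ratio of the watermark recovered after attack.
    bcr = np.mean(extracted_bits == embedded_bits)
    # GP evolves a perceptual mask; candidates are scored by this weighted sum.
    return alpha * (psnr / 100.0) + (1 - alpha) * bcr

img = np.random.randint(0, 256, (8, 8))
marked = np.clip(img + np.random.randint(-2, 3, (8, 8)), 0, 255)
bits = np.random.randint(0, 2, 64)
print(round(fitness(marked, img, bits, bits.copy()), 3))
```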

31 citations


Journal ArticleDOI
TL;DR: This work introduces a framework for distributed knowledge management in competitive environments that combines a general role model enabling distributed, flexible agent-based knowledge management services and a set of general decision parameters for rational agents.
Abstract: The trends and recent changes in logistics lead to complex and partially conflicting requirements on logistic planning and control systems. Due to the lack of efficiency of currently available strategies and methodologies, a new paradigm for logistics planning and control is required. An emerging approach is the analysis and design of autonomous logistic processes. Agents represent a modern approach for implementing autonomous systems. The challenge for the design of agent systems is to integrate the complex and dynamic knowledge required for reliable decision-making in logistics. To address this problem, we introduce a framework for distributed knowledge management in competitive environments. Our approach combines a general role model enabling distributed, flexible agent-based knowledge management services and a set of general decision parameters for rational agents.

21 citations


Journal ArticleDOI
TL;DR: A framework to analyse, classify and choose knowledge-based applications for diagnosis is proposed, in which an application is represented by a point whose coordinates are defined on three axes corresponding to its conceptual, functional and phenomenological dimensions.
Abstract: This paper proposes a framework to analyse, classify and choose knowledge-based applications for diagnosis. It defines a three-dimensional space in which an application may be represented by a point whose coordinates are defined on three axes corresponding to its conceptual, functional and phenomenological dimensions. Describing applications according to this framework allows us to easily observe and analyse the similarities and differences among them. The conceptual dimension focuses on the problem solving method, while the functional dimension relates to the way in which causality is represented in the models. Finally, the phenomenological dimension describes the nature of the phenomena to be diagnosed.

17 citations


Journal ArticleDOI
TL;DR: A computer-based Feature-Recognition (FR) process is being developed to extract critical manufacturing features from engineering product CAD models, minimising redundant user interaction with a product model.
Abstract: A computer-based Feature-Recognition (FR) process is being developed to extract critical manufacturing features from engineering product CAD models. Feature-recognition technology is used to automate the extraction of data from CAD product models and minimise redundant user interaction with a product model. The feature-recognition process was developed using rule-based methods with wire-frame geometry extracted from the IGES neutral file format. The use of wire-frame models simplifies product geometry and has the potential to support rapid manufacturing shape evaluation at the conceptual design stage. The FR process is demonstrated using a range of typical metallic aerospace components.
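
To illustrate the flavour of rule-based feature recognition on wire-frame geometry, here is a hedged sketch with an invented Circle record and a simple through-hole rule; the paper's IGES-derived data structures and rule set are certainly richer:

```python
# Illustrative rule-based feature-recognition check on wire-frame geometry
# (hypothetical data layout; the paper's IGES-derived structures will differ).
from dataclasses import dataclass

@dataclass
class Circle:            # a circular edge extracted from the wire-frame model
    center: tuple        # (x, y, z)
    radius: float
    normal: tuple        # axis direction of the circle's plane

def is_through_hole(c1, c2, tol=1e-6):
    """Rule: two circular edges with equal radius, identical axis direction,
    and centers displaced along that axis indicate a cylindrical hole."""
    same_radius = abs(c1.radius - c2.radius) < tol
    same_axis = all(abs(a - b) < tol for a, b in zip(c1.normal, c2.normal))
    d = [b - a for a, b in zip(c1.center, c2.center)]
    dot = sum(x * y for x, y in zip(d, c1.normal))
    along_axis = all(abs(di - n * dot) < tol for di, n in zip(d, c1.normal))
    return same_radius and same_axis and along_axis

top = Circle((0, 0, 10), 2.0, (0, 0, 1))
bottom = Circle((0, 0, 0), 2.0, (0, 0, 1))
print(is_through_hole(top, bottom))  # True: recognised as a hole feature
```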

16 citations


Journal ArticleDOI
TL;DR: A method of automatically modifying the decoder structure in accordance with the given cover image and conceivable attack is illustrated, and simulation results show that the resultant genetic decoder has superior performance compared to the conventional decoder against the attacks of the Checkmark benchmark.
Abstract: In a watermarking system, the decoder structures are mostly fixed. They do not account for normal processing or intentional attacks. In the present work, a method of automatically modifying the decoder structure in accordance with the given cover image and conceivable attack is illustrated. The proposed Genetic Programming based watermark decoding scheme is blind. It exploits the search space regarding the types of dependencies of the decoder on different factors. In particular, information pertaining to the watermarked cover coefficients is utilized to reduce host interference, while conceivable-attack information is utilized to circumvent the anticipated distortion. The actual performance of the genetic decoder is assessed through experiments, which justify the use of intelligent search techniques in signal detection/decoding. Simulation results show that the resultant genetic decoder has superior performance compared to the conventional decoder against the attacks of the Checkmark benchmark.

15 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient algorithm, called Tkcp, for mining top-k strongly correlated pairs without a minimum correlation threshold. Based on the FP-tree data structure, Tkcp is shown to outperform the Taper algorithm, an efficient algorithm for mining correlated item pairs.
Abstract: Given a user-specified minimum correlation threshold and a transaction database, the problem of mining strongly correlated item pairs is to find all item pairs with Pearson's correlation coefficients above the threshold. However, setting such a threshold is by no means an easy task. In this paper, we consider a more practical problem: mining top-k strongly correlated item pairs, where k is the desired number of item pairs with the largest correlation values. Based on the FP-tree data structure, we propose an efficient algorithm, called Tkcp, for mining such patterns without a minimum correlation threshold. Our experimental results show that the Tkcp algorithm outperforms the Taper algorithm, an efficient algorithm for mining correlated item pairs, even under the assumption of an optimally chosen correlation threshold. Thus, we conclude that mining top-k strongly correlated pairs without a minimum correlation threshold is preferable to the original threshold-based mining.
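
The following brute-force sketch illustrates the measure and the top-k idea only; Tkcp itself prunes the search using the FP-tree, which is omitted here. Pearson's correlation coefficient for binary item occurrences reduces to the phi coefficient used below:

```python
# A hedged sketch of top-k correlated-pair mining; Tkcp works on an FP-tree,
# while this exhaustive version only illustrates the measure and the top-k
# heap that replaces a user-set correlation threshold.
import heapq
from itertools import combinations
from math import sqrt

def top_k_pairs(transactions, k):
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    sup = {i: sum(i in t for t in transactions) / n for i in items}
    pair_sup = {}
    for t in transactions:
        for a, b in combinations(sorted(t), 2):
            pair_sup[(a, b)] = pair_sup.get((a, b), 0) + 1
    heap = []
    for (a, b), c in pair_sup.items():
        denom = sqrt(sup[a] * (1 - sup[a]) * sup[b] * (1 - sup[b]))
        if denom == 0:
            continue
        phi = (c / n - sup[a] * sup[b]) / denom   # Pearson's phi coefficient
        heapq.heappush(heap, (phi, (a, b)))
        if len(heap) > k:                          # keep only the k largest
            heapq.heappop(heap)
    return sorted(heap, reverse=True)

db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
print(top_k_pairs(db, k=2))
```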

13 citations


Journal ArticleDOI
TL;DR: In this paper, a distributed Guided Genetic Algorithm (DGGA) dealing with Maximal Constraint Satisfaction Problems is presented, consisting of agents dynamically created and cooperating in order to satisfy the maximal number of constraints.
Abstract: This paper presents, studies and improves a distributed Guided Genetic Algorithm (DGGA) dealing with Maximal Constraint Satisfaction Problems. The algorithm consists of agents dynamically created and cooperating in order to satisfy the maximal number of constraints. Each agent performs its own GA, guided by both the template concept and the min-conflict heuristic, on a sub-population composed of chromosomes violating the same number of constraints. D²G²A is a new multi-agent approach that enhances DGGA with a new parameter called the guidance operator, which provides not only diversification but also an escape from local optima. In the second part, D²G²A is further improved, drawing on neo-Darwinian theory and the laws of nature: each species agent becomes able to compute its own crossover and mutation probabilities. This approach is called D³G²A. The paper presents the new algorithms and their global dynamics, and provides experimental results.
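
A hedged sketch of two ingredients named in the abstract, the violated-constraint fitness and a min-conflict guidance step, on an invented toy Max-CSP; agent creation and the adaptive crossover/mutation probabilities of D³G²A are not modelled:

```python
# Rough sketch of the Max-CSP fitness and a min-conflict "guidance" step as
# described for DGGA/D3G2A; agent dynamics and adaptive probabilities omitted.
import random

def violations(assignment, constraints):
    """Fitness to minimise: number of violated binary constraints."""
    return sum(not ok(assignment[i], assignment[j]) for i, j, ok in constraints)

def guide(assignment, domains, constraints):
    """Guidance operator sketch: reassign the most conflicting variable to
    the value that minimises its conflicts (min-conflict heuristic)."""
    conflicts = [sum(not ok(assignment[i], assignment[j])
                     for i, j, ok in constraints if v in (i, j))
                 for v in range(len(assignment))]
    v = conflicts.index(max(conflicts))
    best = min(domains[v], key=lambda val: violations(
        assignment[:v] + [val] + assignment[v + 1:], constraints))
    return assignment[:v] + [best] + assignment[v + 1:]

# Tiny Max-CSP: three variables, "not equal" constraints on a 2-value domain.
ne = lambda a, b: a != b
cons = [(0, 1, ne), (1, 2, ne), (0, 2, ne)]   # not all satisfiable with 2 values
doms = [[0, 1]] * 3
x = [random.choice(d) for d in doms]
for _ in range(5):
    x = guide(x, doms, cons)
print(x, violations(x, cons))   # at best 1 violated constraint remains
```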

12 citations


Journal ArticleDOI
TL;DR: A hidden Markov model (HMM) approach is proposed to achieve the result sharing for distributed fault diagnosis and an agent-based software demonstrator on gas turbine engine fault diagnosis is presented for investigating agent implementation issues.
Abstract: A multi-agent system is demonstrated to be a suitable architecture for solving distributed problems. An agent-based system for distributed fault diagnosis is introduced in this paper. Within the multi-agent framework, the dynamic system is monitored by a group of agents and each agent can make its own detection decisions. To coordinate the low-level monitoring results, a high-level inference is then deployed to assess the state of the system and isolate the potential fault. The structure of the agent framework is described and the role of each agent in the framework is summarised. Some intelligent model-based fault diagnosis methods used by the local diagnostic agents are discussed. A hidden Markov model (HMM) approach is proposed to achieve result sharing for distributed fault diagnosis. Given the monitoring reports from different agents, HMM based algorithms can be applied to coordinate the partial diagnostic results and find the most likely state evolution for fault isolation. Some simulation results are demonstrated, and an agent-based software demonstrator on gas turbine engine fault diagnosis is presented for investigating agent implementation issues.
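
As an illustration of the coordination step, a textbook Viterbi decoder over discretised agent reports recovers the most likely fault-state path; the state set, transition and observation probabilities below are toy values, not taken from the paper:

```python
# Minimal Viterbi sketch of the HMM-based coordination step: given a sequence
# of (discretised) agent reports, recover the most likely fault-state path.
import numpy as np

states = ['healthy', 'sensor_fault', 'actuator_fault']
A = np.array([[0.90, 0.05, 0.05],       # state transition probabilities
              [0.10, 0.85, 0.05],
              [0.10, 0.05, 0.85]])
B = np.array([[0.80, 0.15, 0.05],       # P(agent report | true state)
              [0.20, 0.70, 0.10],
              [0.25, 0.05, 0.70]])
pi = np.array([0.8, 0.1, 0.1])

def viterbi(obs):
    delta = pi * B[:, obs[0]]
    psi = []
    for o in obs[1:]:
        trans = delta[:, None] * A            # best predecessor per state
        psi.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]
    for back in reversed(psi):
        path.append(int(back[path[-1]]))
    return [states[s] for s in reversed(path)]

reports = [0, 0, 1, 1, 1]                     # fused monitoring reports over time
print(viterbi(reports))
```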

7 citations


Journal ArticleDOI
TL;DR: An image based design support system for web page design is proposed, which enables the designed web page to have a unified impression in terms of text colors, background colors and text fonts.
Abstract: This paper proposes an image based design support system for web page design, which enables the designed web page to have a unified impression in terms of text colors, background colors and text fonts. Previous approaches to web page design support often focus on the automated composition of HTML (HyperText Markup Language) code and the easy insertion of graphic figures and/or icons. However, even when the same textual information is contained in web pages, their impressions can vary depending on the colors and fonts used. Image scales, originally proposed for capturing human impressions of colors, are incorporated in our approach to support web page design by reflecting the user's image of web pages. Furthermore, the relationship between fonts and impressions is assessed to construct a font image scale, so that colors and fonts can be dealt with uniformly in terms of their images. In our approach a user is required to register preferred pages, which are used to acquire his/her image of web pages. The acquired image is then used to modify colors and fonts so that the designed web page comes to have a consistent image. A prototype system with the proposed method has been implemented using Java and Prolog. Experiments were conducted to investigate the effectiveness of our system. The results are encouraging and show that it is worthwhile following this path.

Journal ArticleDOI
TL;DR: An automated construction of knowledge based artificial neural networks (KBANN) for the holistic recognition of handwritten Arabic words in limited lexicons, which provides the network with theoretical knowledge and reduces the training stage that remains necessary because of variability in styles and writing conditions.
Abstract: In this article, we suggest an automated construction of knowledge based artificial neural networks (KBANN) for the holistic recognition of handwritten Arabic words in limited lexicons. First, ideal samples of the considered lexicon words are submitted to a feature extraction module which describes them using structural primitives. The analysis of these descriptions generates a symbolic knowledge base reflecting a hierarchical classification of the words. The rules are then translated into a multilayer neural network by determining precisely its architecture and initializing its connections with specific values. This construction approach provides the network with theoretical knowledge and reduces the training stage, which remains necessary because of variability in styles and writing conditions. After this empirical training stage using real examples, the network reaches its final topology, which enables it to generalize. The proposed method has been tested on the automated construction of neuro-symbolic classifiers for two Arabic lexicons: literal amounts and city names. We suggest the generalization of this approach to the recognition of handwritten words or characters in different scripts and languages.
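
A minimal sketch of the KBANN-style translation assumed here: each symbolic rule becomes a hidden unit whose weights and bias encode a conjunction of its antecedents (the primitives, word classes and the rule weight W are invented):

```python
# Hedged sketch of the KBANN idea: each symbolic rule becomes a hidden unit
# whose weights/bias encode an AND of its antecedents, giving the network a
# theoretically informed starting point before empirical training.
import numpy as np

features = ['ascender', 'descender', 'loop', 'dot']   # structural primitives
rules = {                                             # hypothetical word classes
    'word_A': ['ascender', 'loop'],
    'word_B': ['descender', 'dot'],
}
W = 4.0                                               # fixed rule weight

def rule_layer(rules, features):
    weights = np.zeros((len(rules), len(features)))
    biases = np.zeros(len(rules))
    for r, (name, antecedents) in enumerate(rules.items()):
        for a in antecedents:
            weights[r, features.index(a)] = W
        # Bias set so the unit fires only when ALL antecedents are present.
        biases[r] = -W * (len(antecedents) - 0.5)
    return weights, biases

def forward(x, weights, biases):
    return 1 / (1 + np.exp(-(weights @ x + biases)))  # sigmoid activations

w, b = rule_layer(rules, features)
sample = np.array([1, 0, 1, 0])                       # has ascender + loop
print(dict(zip(rules, forward(sample, w, b).round(2))))
```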

Journal ArticleDOI
TL;DR: The main goal of Knowledge Management (KM) is to provide relevant knowledge to assist users in executing knowledge intensive tasks, and to be effective, KM must provide users with relevant knowledge, at the right time and in the right form, that enables users to better perform their tasks.
Abstract: The main goal of Knowledge Management (KM) is to provide relevant knowledge to assist users in executing knowledge intensive tasks. That is, KM aims at facilitating an environment where work critical information can be created, structured, shared, distributed and used. To be effective, KM must provide users with relevant knowledge, at the right time and in the right form, that enables users to better perform their tasks. Knowledge Management has been a predominant trend in business in recent years. Though KM is primarily a management discipline (with a background in human resource management, strategy, and organizational behavior), the role of information technology as an enabling factor is widely recognized, and – after a first phase where merely general-purpose technologies like Internet/Intranets or email were found to be useful to facilitate KM – a variety of proposals exist to support KM with specialized information systems [3]. Often, IT research for KM has focused on the comprehensive use of an organization's knowledge, thus aiming at the completeness of distribution of relevant information. Technically, this is typically supported by centralized approaches: knowledge about people, processes or domain knowledge is represented and maintained in global repositories which serve as sources to meet a knowledge worker's (potentially complex) information needs. Such repositories may be structured by global ontologies (e.g., in the form of knowledge portals) or they may be rather flat and processed by weak (i.e. not knowledge-intensive) methods like statistics-based information retrieval or collaborative filtering. However, as is often mentioned in the literature, knowledge tasks have a collaborative aspect: an individual can best acquire and use knowledge by making use of existing relations among people (communities) or by reusing and personalizing information already collected and annotated by others. Furthermore, a KM system must be able to adapt to changes in the environment and to the different needs and preferences of users, and to integrate naturally with existing work methods, tools and processes. That is, KM systems must be reactive (able to respond to user requests or environment changes) and proactive (able to take initiatives to attend to user needs). These aspects also characterise intelligent software agents, which seems to indicate the applicability of agent technology in the KM area. Intelligent agents as a paradigm for developing software applications are currently the focus of intense interest in many fields of computer science and artificial intelligence. A software agent is an autonomous entity that perceives and acts on its environment in order to achieve its goals. Wooldridge and Jennings [4] defined four properties that form a weak definition of agency:

Journal ArticleDOI
TL;DR: In order to measure the quality of an enhanced image, an index of fuzziness is presented in this paper to evaluate the performance of the fuzzy relaxation scheme, which is used as a criterion for automatically stopping the relaxation process.
Abstract: This paper proposes a new fuzzy relaxation scheme for image contrast enhancement, which has desirable convergence properties of relaxation operations on both fuzzy domain and image spatial domain. Furthermore, in order to measure the quality of an enhanced image, an index of fuzziness is presented in this paper to evaluate the performance of the fuzzy relaxation scheme, which is used as a criterion for automatically stopping the relaxation process. Experimental results have shown the validity of the proposed fuzzy relaxation algorithm.
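
A sketch of a fuzzy contrast-enhancement loop stopped by an index of fuzziness, assuming the classical linear index and the standard intensification operator; the paper's relaxation operator may differ:

```python
# Sketch of fuzzy contrast enhancement with an index of fuzziness as the
# stopping criterion; the exact relaxation scheme in the paper may differ.
import numpy as np

def linear_index_of_fuzziness(mu):
    """Distance of the fuzzy membership plane from its nearest crisp set."""
    return 2.0 * np.minimum(mu, 1 - mu).mean()

def enhance(image, max_iter=10, eps=1e-3):
    mu = image.astype(float) / 255.0          # map grey levels to [0, 1]
    prev = linear_index_of_fuzziness(mu)
    for _ in range(max_iter):
        # Intensification: push memberships away from the crossover point 0.5.
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
        idx = linear_index_of_fuzziness(mu)
        if prev - idx < eps:                   # fuzziness no longer dropping:
            break                              # stop the relaxation automatically
        prev = idx
    return (mu * 255).astype(np.uint8)

img = np.array([[90, 110, 140], [100, 128, 160], [120, 150, 180]], dtype=np.uint8)
print(enhance(img))
```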

Journal ArticleDOI
TL;DR: The paper introduces state recognition recipes that drive groups within organizations to create common awareness, explains the exploitation of these recipes and presents an example that shows the potential of the approach.
Abstract: Groups of collaborative agents need to have a single view of the world to act as single entities. Building common awareness in agent groups involves reconciling different views of the world and deciding a single view that every agent within the group accepts. The notion of collective belief has been used extensively in formal models for collaborative activity to deal with group awareness. However, collective belief alone is not sufficient for organized groups to act as single entities. In human organizations, members of groups accept that certain states hold based on shared group practices/policies and beliefs of individual agents. These acceptances are formed even if some members of the group do not believe that the corresponding states hold. This paper distinguishes between individual beliefs and group acceptances in multi-agent systems in well-organized settings. It introduces state recognition recipes that drive groups within organizations to create common awareness, explains the exploitation of these recipes and presents an example that shows the potential of the approach.

Journal ArticleDOI
TL;DR: A generic inference structure for knowledge-based diagnosis applications named COPE, an acronym for Compare, Observe, Predict and Evaluate, is described, which is a canonical representation of this kind of application at the knowledge level.
Abstract: This paper describes a methodology for recognizing equivalent knowledge-based diagnosis applications at the knowledge level. The methodology is based on the notion of Conceptual Equivalence, which is formally developed. This notion of Conceptual Equivalence induces the definition of a generic inference structure for knowledge-based diagnosis applications named COPE, an acronym for Compare, Observe, Predict and Evaluate, which is a canonical representation of this kind of application at the knowledge level.

Journal ArticleDOI
TL;DR: A novel self organizing map (SOM) based channel predictor for the downlink of an orthogonal frequency-division multiple access (OFDMA) system that uses a Kalman-trained, SOM-backed mixture-of-experts (ME) modular neural network.
Abstract: Channel prediction is the key requirement in adaptive transmission techniques such as adaptive modulation, adaptive coding and adaptive power control. This paper presents a novel self organizing map (SOM) based channel predictor for the downlink of an orthogonal frequency-division multiple access (OFDMA) system. The proposed predictor uses a Kalman-trained, SOM-backed mixture-of-experts (ME) modular neural network. The performance of the predictor is evaluated on an OFDMA system with a system delay where channel prediction is needed.

Journal ArticleDOI
TL;DR: The positive results obtained here show that the neural activity during perception of a visual stimulus differs across individuals, and this method could be explored further as a biometric tool for identifying individuals, since brain signals are difficult to forge.
Abstract: In previous studies, identification of individuals using 61-channel Visual Evoked Potential (VEP) signals from the brain has been shown to be feasible. These studies used neural network classification of gamma band spectral power of VEP signals from 20 individuals. This paper extends our continuing work in this area by including more subjects in the experiment and reducing the number of required channels using the Fisher Discriminant Ratio function. The experimental study showed that 27 optimal channels were sufficient to yield an average classification rate of 90.97% across 800 test VEP patterns from 40 subjects. Using fewer than 61 channels is less cumbersome and requires less computational time, design complexity and cost. This was achieved without loss of performance, as 61 channels gave an average classification result of 89.11%. The positive results obtained here show that the neural activity during perception of a visual stimulus differs across individuals. This method could be explored further as a biometric tool for identifying individuals, since brain signals are difficult to forge.
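
A small sketch of Fisher Discriminant Ratio channel ranking on toy data (the worst-pairwise-FDR scoring rule is our assumption; the study's exact selection procedure may differ):

```python
# Sketch of Fisher Discriminant Ratio channel ranking: channels whose
# gamma-band power best separates subjects are kept (toy data below).
import numpy as np

def fdr(x1, x2):
    """FDR = (m1 - m2)^2 / (v1 + v2) for one channel and two classes."""
    return (x1.mean() - x2.mean()) ** 2 / (x1.var() + x2.var() + 1e-12)

def select_channels(powers, labels, n_keep):
    """powers: (trials, channels) gamma-band power; labels: subject ids.
    Scores each channel by its worst pairwise FDR, then keeps the best."""
    subjects = np.unique(labels)
    scores = np.full(powers.shape[1], np.inf)
    for c in range(powers.shape[1]):
        for i, a in enumerate(subjects):
            for b in subjects[i + 1:]:
                s = fdr(powers[labels == a, c], powers[labels == b, c])
                scores[c] = min(scores[c], s)
    return np.argsort(scores)[::-1][:n_keep]   # highest worst-case separation

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))                   # 40 trials, 8 channels (toy)
X[:20, 2] += 2.0                               # channel 2 separates the subjects
y = np.array([0] * 20 + [1] * 20)
print(select_channels(X, y, n_keep=3))         # channel 2 ranked first
```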

Journal ArticleDOI
TL;DR: The results show that the absolute capacities approach 0 rapidly as N increases in both models, and that they decrease rapidly in the correlation model and slowly in the differential correlation model as the pattern correlation rate $\bar{\rho}$ increases.
Abstract: This paper describes some results on the transition properties and the absolute capacities of higher order correlation and differential correlation associative memories. First, it is shown that the absolute capacities are $\frac{(1-\bar{\rho})^2}{2(1-\bar{\rho}^2)^{1-k}\log N}$ and $\frac{\{2(1-\bar{\rho}^2)^{k-1}\}^2}{2^{2k+2}(1-\bar{\rho}^2)^{1-k}\log N}$ for the correlation and differential correlation models, respectively, where N is the number of neurons, $\bar{\rho}$ is the rate of correlation for patterns and k is the dimension. The results show that the absolute capacities approach 0 rapidly as N increases in both models, and that they decrease rapidly in the correlation model and slowly in the differential correlation model as $\bar{\rho}$ increases. Further, it is clarified that the correlation model is superior in storage capacity but inferior in robustness compared with the differential correlation model.

Journal ArticleDOI
TL;DR: This work presents an approach to evaluate intelligent mediation techniques for Distributed Knowledge and Information Management using agent-based modeling and simulation, and can be used to gain insight into the problems that heterogeneity poses on such mediation methods.
Abstract: Knowledge and Information Management can often be applied successfully only if it allows for a certain degree of heterogeneity and organizational distribution. Distributed Knowledge and Information Management enables loosely coupled collaboration in heterogeneous domains through intelligent automatic mediation. One major obstacle in the development of such mediation methods is systematic evaluation. This work presents an approach to evaluating intelligent mediation techniques for Distributed Knowledge and Information Management using agent-based modeling and simulation. On the one hand, the proposed model can be used to gain insight into the problems that heterogeneity poses for such mediation methods. On the other hand, it can be used to predict the performance of a system before actually introducing it in an organization. The framework is instantiated for two different tasks. First, it is applied to the problem of collaborative filtering, a relatively well-studied approach to mediating ratings among heterogeneous users and user groups. This area serves as a test-bed for the simulation framework and is used to analyze some prototypical problems faced by many Distributed Information and Knowledge Management systems. In a second instantiation, the model is applied to the problem of matching concepts in different ontologies.
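
As a flavour of the collaborative-filtering instantiation, this sketch simulates heterogeneous user agents from latent taste profiles and mediates a rating from nearest neighbours; all parameters are invented:

```python
# Minimal sketch of an agent-based simulation for evaluating mediation by
# collaborative filtering: simulated users with heterogeneous tastes rate
# items, and a rating is predicted for a target user from its neighbours.
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, n_tastes = 30, 20, 3
tastes = rng.random((n_tastes, n_items))              # latent taste profiles
profile = rng.integers(0, n_tastes, n_users)          # heterogeneity across users
ratings = tastes[profile] + rng.normal(0, 0.1, (n_users, n_items))

def predict(user, item, k=5):
    """Predict a rating from the k most similar users (Pearson similarity)."""
    sims = np.array([np.corrcoef(ratings[user], ratings[u])[0, 1]
                     for u in range(n_users)])
    sims[user] = -np.inf                              # exclude the user itself
    neighbours = np.argsort(sims)[::-1][:k]
    return ratings[neighbours, item].mean()

print(round(predict(user=0, item=7), 2), round(ratings[0, 7], 2))
```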


Journal ArticleDOI
TL;DR: In this article, an argumentative approach based on integrating the JITIK multi-agent system with defeasible logic programming (DeLP) is presented to cope with the problem of information distribution in large, usually distributed organizations.
Abstract: Distributing pieces of knowledge in large, usually distributed organizations is a central problem in Knowledge and Organization Management. Policies for distributing knowledge and information are mostly incomplete or in potential conflict with each other. As a consequence, decision processes for information distribution may be difficult to formalize on the basis of a rationally justified procedure. This article presents an argumentative approach to cope with this problem, based on integrating the JITIK multi-agent system with Defeasible Logic Programming (DeLP), a logic programming formalism for defeasible argumentation. We show how power relations, as well as delegation and trust, can be embedded within our framework in terms of DeLP, in such a way that a dialectical argumentation process works as a decision core. Conflicts among policies are solved on the basis of a dialectical analysis whose outcome determines to which specific users different pieces of knowledge are to be delivered.

Journal ArticleDOI
TL;DR: The Gallery system is introduced and the results revealed that placing personal photo collections in a semantically meaningful layout on a zoomable surface considerably improved the efficiency of image information retrieval.
Abstract: Photographs are snapshots of personal memories and image information is an indispensable medium for knowledge dissemination. In recent years, the widespread use of the Web and the rapidly dropping prices of digital cameras have led to the creation of large-scale personal image repositories. As a result, the importance of managing large image collections is constantly increasing. This paper describes Gallery -- an experimental system under development. It is intended to support the management of personal image repositories that capture the user's valuable memories. This paper introduces the Gallery system and discusses the results of the two experiments that evaluated its feasibility. The results revealed that placing personal photo collections in a semantically meaningful layout on a zoomable surface considerably improved the efficiency of image information retrieval.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed potential-field-simulated-annealing-neuro-fuzzy technique outperforms the potential-field and potential-field-simulated-annealing techniques.
Abstract: This paper describes a potential field navigation method optimized by a simulated annealing technique. In the current analysis, a new potential field function is developed to take care of unknown obstacles and targets during navigation. The final aim of the robots is to reach their targets. Local minima arising in the potential field are avoided by the simulated annealing technique. After optimizing the potential field method with simulated annealing, it is observed that there is a unique global minimum for an attractive node, i.e., the target. The simulated-annealing-optimized potential field method is then hybridized with a neuro-fuzzy technique to improve the navigation of robots in a highly cluttered environment. Simulation results show that the proposed potential-field-simulated-annealing-neuro-fuzzy technique outperforms the potential-field and potential-field-simulated-annealing techniques.
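
A hedged sketch of the combination described: descent on an attractive/repulsive potential with a simulated-annealing acceptance rule to escape local minima (the potential function and constants are illustrative, and the neuro-fuzzy layer is omitted):

```python
# Sketch of potential-field navigation with a simulated-annealing escape from
# local minima; the paper's potential function is more elaborate.
import math, random

def potential(p, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    d_goal = math.dist(p, goal)
    u = 0.5 * k_att * d_goal ** 2                      # attractive term
    for o in obstacles:                                 # repulsive terms
        d = max(math.dist(p, o), 1e-6)
        if d < d0:
            u += 0.5 * k_rep * (1 / d - 1 / d0) ** 2
    return u

def step(p, goal, obstacles, T):
    """Greedy descent on the field; SA accepts uphill moves with prob e^(-dU/T)
    so the robot can escape local minima between close obstacles."""
    cand = (p[0] + random.uniform(-0.5, 0.5), p[1] + random.uniform(-0.5, 0.5))
    dU = potential(cand, goal, obstacles) - potential(p, goal, obstacles)
    if dU < 0 or random.random() < math.exp(-dU / T):
        return cand
    return p

pos, goal = (0.0, 0.0), (10.0, 10.0)
obstacles = [(5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
T = 5.0
for _ in range(500):
    pos = step(pos, goal, obstacles, T)
    T = max(T * 0.99, 1e-3)                             # cooling schedule
print(round(math.dist(pos, goal), 2))                   # near 0 when successful
```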


Journal ArticleDOI
TL;DR: An agent-based intrastream synchronization scheme, which adapts to network delay fluctuations for continuous and smooth playout of a multimedia stream, which is simulated in several network scenarios for verifying its operation effectiveness.
Abstract: Multimedia synchronization control is essential to overcome the network delay variance problem in order to provide continuous and smooth playout of a multimedia stream. This paper proposes an agent-based intrastream synchronization scheme, which adapts to network delay fluctuations for continuous and smooth playout of a multimedia stream. The scheme employs three types of agents: an application manager agent (AMA), a user agent (UA) and negotiation/renegotiation mobile agents (NMAs/RMAs). The AMA creates the UA, NMAs and RMAs. The UA monitors the synchronization parameters (delays, rate of change of delays, losses, etc.) and studies the user's expectations. The NMAs/RMAs are used to negotiate/renegotiate synchronization parameters. The scheme operates in two phases: start-up synchronization and resynchronization. In the start-up synchronization phase, an NMA is created by the AMA to negotiate the delays and rate of change of delays with the intermediate nodes in the network on the basis of application requirements. In the resynchronization phase, an RMA is created by the AMA to renegotiate the delay parameters and resynchronize the presentation units of a stream whenever the UA reports a violation of application requirements. The resynchronization phase also takes care of link failures. The scheme was simulated in several network scenarios to verify its operational effectiveness, and it maintained the synchronization parameters well within sustainable values. The benefits of the scheme are: asynchronous delay negotiation and adaptation, flexibility, adaptability and support for component-based software development.
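
To illustrate the adaptation problem the agents negotiate over, a standard EWMA-based playout-deadline sketch follows; the constants and the deadline rule are our assumptions, not the scheme's actual negotiation protocol:

```python
# Sketch of the adaptive playout idea behind the resynchronization phase:
# the playout deadline tracks smoothed network delay and its variation
# (EWMA constants and the safety factor are invented for illustration).
def adapt_playout(delays, alpha=0.125, beta=0.25, k=4.0):
    d_hat, v_hat = delays[0], 0.0
    deadlines = []
    for d in delays:
        v_hat = (1 - beta) * v_hat + beta * abs(d - d_hat)   # delay variation
        d_hat = (1 - alpha) * d_hat + alpha * d              # smoothed delay
        deadlines.append(d_hat + k * v_hat)                  # playout deadline
    return deadlines

print([round(x, 1) for x in adapt_playout([40, 42, 60, 55, 48, 45])])
```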

Journal ArticleDOI
TL;DR: In this paper, the role of the probability distribution is empirically investigated on unimodal and multi-modal test problems, and it is observed that the operator based on the polynomial distribution achieves superior performance on unimodal test problems.
Abstract: The neighborhood-based crossover operators used in real coded genetic algorithm (RCGA) are based on some probability distribution. It is observed that each crossover operator directs the search towards a different zone in the neighborhood of the parents. The quality of the elements that belong to the visited region depends on the particular problems to be solved. Different crossover operators perform differently with respect to the problems, even at the different stages of the genetic process in the same problem. In this paper, the role of probability distribution is empirically investigated on unimodal and multi-modal test problems. It is observed that the operator based on polynomial distribution achieves superior performance on unimodal test problems. The lognormal distribution based operator is efficient in solving multi-modal problems.
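
A sketch contrasting the two probability distributions discussed, sampling offspring around two real-coded parents with a polynomial (SBX-style) or a lognormal spread factor; parameter values are illustrative:

```python
# Illustrative neighborhood-based crossover for RCGA: offspring are sampled
# around the parents with a polynomial spread (as in SBX) or a lognormal
# spread; eta and sigma are invented for the demonstration.
import math, random

def polynomial_spread(eta=2.0):
    """Spread factor beta with a polynomial density concentrated near 1."""
    u = random.random()
    if u <= 0.5:
        return (2 * u) ** (1 / (eta + 1))
    return (1 / (2 * (1 - u))) ** (1 / (eta + 1))

def lognormal_spread(sigma=0.3):
    return math.exp(random.gauss(0.0, sigma))

def crossover(p1, p2, spread):
    beta = spread()
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

random.seed(1)
print(crossover(1.0, 3.0, polynomial_spread))   # children clustered near parents
print(crossover(1.0, 3.0, lognormal_spread))    # heavier-tailed exploration
```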

Journal ArticleDOI
TL;DR: This work uses an extension of the CommonKADS specification language to specify the behavior of reactive Knowledge Based Systems as in an expertise model and automatically operationalizes this behavioral part of the expertise model into a model expressed with the DEVS formalism, validating the system behavior by the simulation of the obtained operational model.
Abstract: We propose an approach for specifying, validating and verifying the behavior of reactive Knowledge Based Systems that consists in: 1) using an extension of the CommonKADS specification language to specify the behavior of this kind of system as in an expertise model; 2) automatically operationalizing (and verifying) this behavioral part of the expertise model into a model expressed with the DEVS formalism; 3) validating the system behavior by simulating the obtained operational model. In this way, the system behavior is specified in the expert's terminology and validated by simulation, before the system is even designed and implemented. We illustrate our approach with the Jaspar bank system for service allocation with waiting-queue management. First, we define a template for dynamic assignment, like those of the CommonKADS library, to be used in the definition of any service-providing system. Then, we describe the behavior of the Jaspar bank system by reusing this template and show how translating the obtained Jaspar bank model into the DEVS formalism makes its validation by simulation feasible.
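
A hand-rolled miniature of a DEVS atomic model for a service queue, just to show the formalism's time-advance and transition functions; names and behaviour are our assumptions, and a real study would use a DEVS tool rather than this sketch:

```python
# Miniature DEVS atomic model for a waiting queue (illustrative only).
class Queue:
    """Atomic DEVS model: customers arrive (external events) and are served
    one by one; ta() returns the time advance to the next internal event."""
    def __init__(self, service_time=5.0):
        self.waiting, self.service_time, self.sigma = 0, service_time, float('inf')

    def ta(self):                      # time advance function
        return self.sigma

    def ext_transition(self, elapsed): # delta_ext: a customer arrives
        self.waiting += 1
        self.sigma = self.service_time if self.waiting == 1 else self.sigma - elapsed

    def int_transition(self):          # delta_int: a service completes
        self.waiting -= 1
        self.sigma = self.service_time if self.waiting else float('inf')

q = Queue()
q.ext_transition(0.0)
q.ext_transition(1.0)
print(q.waiting, q.ta())               # 2 customers queued, next event in 4.0
```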

Journal ArticleDOI
TL;DR: This work considers a novel class of applications where a set of activities conducted by a group of people over a time period needs to be planned, taking into account each member's preference, using a combination of activity grammar, heuristic search, decision theoretic graphical models, and dynamic preference aggregation.
Abstract: We consider a novel class of applications where a set of activities conducted by a group of people over a time period needs to be planned, taking into account each member's preference. We refer to the decision process that leads to such a plan as package planning. The problem differs from a number of well-studied AI problems including standard AI planning and decision-theoretic planning. We present a computational framework using a combination of activity grammar, heuristic search, decision theoretic graphical models, and dynamic preference aggregation. We show that the computation is tractable when the problem parameters are reasonably bounded.
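
A toy sketch of the preference-aggregation step: candidate activity packages are scored by a weighted sum of per-member utilities (members, activities and weights are invented; the paper's decision-theoretic machinery is far richer):

```python
# Toy sketch of dynamic preference aggregation for package planning: candidate
# activity packages are scored by aggregating per-member utilities.
from itertools import permutations

members = ['ann', 'bob', 'eve']
utility = {                               # each member's utility per activity
    'ann': {'museum': 0.9, 'hike': 0.4, 'dinner': 0.7},
    'bob': {'museum': 0.3, 'hike': 0.8, 'dinner': 0.6},
    'eve': {'museum': 0.5, 'hike': 0.6, 'dinner': 0.9},
}

def group_score(package, weights):
    """Weighted-sum aggregation; weights can be re-balanced dynamically to
    compensate members disadvantaged by earlier choices."""
    return sum(weights[m] * sum(utility[m][a] for a in package)
               for m in members)

weights = {m: 1.0 for m in members}
candidates = permutations(utility['ann'], 2)   # ordered 2-activity packages
best = max(candidates, key=lambda p: group_score(p, weights))
print(best, round(group_score(best, weights), 2))
```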