
Showing papers in "Applied Intelligence in 1998"


Journal ArticleDOI
TL;DR: Experimental results suggest that the probabilistic algorithm is effective in obtaining optimal/suboptimal feature subsets and its incremental version expedites feature selection further when the number of patterns is large and can scale up without sacrificing the quality of selected features.
Abstract: Feature selection is a problem of finding relevant features. When the number of features of a dataset is large and its number of patterns is huge, an effective method of feature selection can help in dimensionality reduction. An incremental probabilistic algorithm is designed and implemented as an alternative to the exhaustive and heuristic approaches. Theoretical analysis is given to support the idea of the probabilistic algorithm in finding an optimal or near-optimal subset of features. Experimental results suggest that (1) the probabilistic algorithm is effective in obtaining optimal/suboptimal feature subsets; (2) its incremental version expedites feature selection further when the number of patterns is large and can scale up without sacrificing the quality of selected features.
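As a rough illustration of the kind of probabilistic (Las Vegas style) subset search referred to above, and not the authors' exact algorithm, the following sketch repeatedly samples random feature subsets and keeps the smallest one whose quality stays within a tolerance of the full feature set; the evaluate function (e.g., an inconsistency rate over the data) is a hypothetical, user-supplied measure where lower is better.

import random

def probabilistic_feature_selection(n_features, evaluate, max_trials=1000, tol=0.0):
    # Generic Las Vegas-style random subset search (a sketch, not the paper's
    # exact algorithm). `evaluate` is a hypothetical user-supplied quality
    # measure (lower is better), e.g. an inconsistency rate over the data.
    full = set(range(n_features))
    reference = evaluate(full)          # quality obtained with all features
    best = full
    for _ in range(max_trials):
        k = random.randint(1, n_features)
        candidate = set(random.sample(range(n_features), k))
        # Keep the smallest subset whose quality stays within `tol`
        # of the quality obtained with the full feature set.
        if evaluate(candidate) <= reference + tol and len(candidate) < len(best):
            best = candidate
    return best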

200 citations


Journal ArticleDOI
TL;DR: In this article, the SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge.
Abstract: Go is a difficult game for computers to master, and the best go programs are still weaker than the average human player. Since the traditional game playing techniques have proven inadequate, new approaches to computer go need to be studied. This paper presents a new approach to learning to play go. The SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge. On a 9 × 9 go board, networks that were able to defeat a simple computer opponent were evolved within a few hundred generations. Most significantly, the networks exhibited several aspects of general go playing, which suggests the approach could scale up well.

98 citations


Journal ArticleDOI
TL;DR: A new approach to the construction of neural networks based on evolutionary computation is presented, where a linear chromosome combined with a graph representation of the network is used by genetic operators, which allow the evolution of the architecture and the weights simultaneously without the need for local weight optimization.
Abstract: Evolutionary computation is a class of global search techniques, based on the learning process of a population of potential solutions to a given problem, that has been successfully applied to a variety of problems. In this paper a new approach to the construction of neural networks based on evolutionary computation is presented. A linear chromosome combined with a graph representation of the network is used by the genetic operators, which allow the evolution of the architecture and the weights simultaneously without the need for local weight optimization. This paper describes the approach and the operators, and reports results of the application of this technique to several binary classification problems.
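A toy illustration of the general idea of carrying both topology and weights in a single genome, so that mutation can change either without a separate local weight-optimization step; the representation below is hypothetical and much simpler than the paper's linear-chromosome/graph encoding.

import random

class Genome:
    # Hypothetical genome: a list of nodes plus weighted connections.
    # Mutation either perturbs a weight or adds a hidden node, so the
    # architecture and the weights evolve together.
    def __init__(self, n_inputs, n_outputs):
        self.n_inputs = n_inputs
        self.nodes = list(range(n_inputs + n_outputs))
        # connections: (source, target, weight)
        self.connections = [(i, n_inputs + o, random.uniform(-1, 1))
                            for i in range(n_inputs)
                            for o in range(n_outputs)]

    def mutate(self, p_weight=0.8):
        if self.connections and random.random() < p_weight:
            # Perturb an existing weight.
            idx = random.randrange(len(self.connections))
            s, t, w = self.connections[idx]
            self.connections[idx] = (s, t, w + random.gauss(0, 0.1))
        else:
            # Structural mutation: insert a hidden node wired between
            # an existing node and a non-input node.
            new_node = len(self.nodes)
            self.nodes.append(new_node)
            src = random.choice(self.nodes[:new_node])
            dst = random.choice(self.nodes[self.n_inputs:new_node])
            self.connections.append((src, new_node, random.uniform(-1, 1)))
            self.connections.append((new_node, dst, random.uniform(-1, 1)))

A fitness-based selection loop over a population of such genomes would wrap around this, as in any evolutionary algorithm.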

87 citations


Journal ArticleDOI
TL;DR: The aim of this research is to establish a new design model of an adaptive distributed system (ADS) that can deal with various changes occurring in the system environment, and to evaluate the adaptive functions of the ADS realized on the basis of the proposed architecture.
Abstract: A next-generation distributed system is expected to adapt to various changes in both the users' requirements and the operational conditions of the environment in which the distributed system operates. The aim of our research is to establish a new design model of an adaptive distributed system (ADS) that can deal with various changes occurring in the system environment. In this paper, we propose an agent-based architecture for ADS, based on the agent-based computing paradigm. Then, we implement a prototype of the ADS for videoconferencing applications and also evaluate the adaptive functions of the ADS realized on the basis of the proposed architecture.

81 citations


Journal ArticleDOI
TL;DR: This paper proposes an approach to overcome the cautiousness of System P and the problems encountered by the rational closure inference, and takes advantage of (contextual) independence assumptions of the form: the fact that γ is true (or is false) does not affect the validity of the rule “normally if α then β”.
Abstract: This paper provides a survey of possibilistic logic as a simple and efficient tool for handling nonmonotonic reasoning, with some emphasis on algorithmic issues. In our previous works, two well-known nonmonotonic systems have been encoded in the possibility theory framework: the preferential inference based on System P, and the rational closure inference proposed by Lehmann and Magidor which relies on System P augmented with a rational monotony postulate. System P is known to provide reasonable but very cautious conclusions, and in particular, preferential inference is blocked by the presence of “irrelevant” properties. When using Lehmann's rational closure, the inference machinery, which is then more productive, may still remain too cautious, or on the contrary, provide counter-intuitive conclusions. The paper proposes an approach to overcome the cautiousness of System P and the problems encountered by the rational closure inference. This approach takes advantage of (contextual) independence assumptions of the form: the fact that γ is true (or is false) does not affect the validity of the rule “normally if α then β”. The modelling of such independence assumptions is discussed in the possibilistic framework. Moreover, we show that when a counter-intuitive conclusion of a set of defaults can be inferred, it is always possible to repair the set of defaults by adding suitable information so as to produce the desired conclusions and block unsuitable ones.
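As a brief reminder for readers less familiar with this framework, the standard possibilistic reading of a default rule is a constraint on a possibility measure \(\Pi\) (the notation follows the usual possibility-theory conventions, not necessarily the paper's exact formulation):

\[
\text{``normally, if } \alpha \text{ then } \beta\text{''}
\quad\text{is encoded as}\quad
\Pi(\alpha \wedge \beta) \;>\; \Pi(\alpha \wedge \neg\beta),
\]

i.e., in the context \(\alpha\), having \(\beta\) true is strictly more plausible than having \(\beta\) false. The (contextual) independence assumption discussed above then amounts, roughly, to requiring that conjoining the irrelevant fact \(\gamma\) to the context does not reverse this inequality, i.e. \(\Pi(\alpha \wedge \gamma \wedge \beta) > \Pi(\alpha \wedge \gamma \wedge \neg\beta)\).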

58 citations


Journal ArticleDOI
TL;DR: This paper presents the approach adopted in modelling teams and team tactics as part of the development of the Smart Whole AiR Mission Model (SWARMM) for the Air Operations Division of the Australian Defence Science and Technology Organization.
Abstract: The problem of modelling air missions is part of a larger problem—simulating possible war-like scenarios in the air, sea, and on land. In modelling such military systems one is required to model the behaviour of various actors and the resources that are available to them. One aspect of this problem is the modelling of a group of actors as a team and then modelling the coordinated behaviour of such a team to achieve a joint goal. In the domain of air mission modelling the actors are pilots who control aircraft and their behaviour is referred to as tactics. In this paper we present the approach we adopted in modelling teams and team tactics as part of the development of the Smart Whole AiR Mission Model (SWARMM) for the Air Operations Division of the Australian Defence Science and Technology Organization. In our approach teams are composed of sub-teams and adopt organizational structures. Such structures define the responsibilities of the sub-teams towards the mission to be achieved as well as towards the control and coordination of the sub-teams. We also describe how communication is used when adopting a variety of control and coordination strategies and how one could reason about the choice of organizational structures for a given mission and situation.

55 citations


Journal ArticleDOI
TL;DR: The concepts and methodologies for the evolvable model of modular neural networks, which might not only develop new functionality spontaneously, but also grow and evolve its own structure autonomously, are described.
Abstract: Evolutionary design of neural networks has shown a great potential as a powerful optimization tool. However, most evolutionary neural networks have not taken advantage of the fact that they can evolve from modules. This paper presents a hybrid method of modular neural networks and genetic programming as a promising model for evolutionary learning. This paper describes the concepts and methodologies for the evolvable model of modular neural networks, which might not only develop new functionality spontaneously, but also grow and evolve its own structure autonomously. We show the potential of the method by applying an evolved modular network to a visual categorization task with handwritten digits. Sophisticated network architectures as well as functional subsystems emerge from an initial set of randomly-connected networks. Moreover, the evolved neural network has reproduced some of the characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways.

50 citations


Journal ArticleDOI
TL;DR: A method for multiple agent integration is applied to the automated highway system domain; the resulting system can competently drive a vehicle, both in terms of the user-defined evaluation metric and as measured by its behavior in several driving situations culled from real-life experience.
Abstract: Recent research in automated highway systems has ranged from low-level vision-based controllers to high-level route-guidance software. However, there is currently no system for tactical-level reasoning. Such a system should address tasks such as passing cars, making exits on time, and merging into a traffic stream. Many previous approaches have attempted to hand-construct large rule-based systems which capture the interactions between multiple input sensors, dynamic and potentially conflicting subgoals, and changing roadway conditions. However, these systems are extremely difficult to design due to the large number of rules, the manual tuning of parameters within the rules, and the complex interactions between the rules. Our approach to this intermediate-level planning is a system which consists of a collection of autonomous agents, each of which specializes in a particular aspect of tactical driving. Each agent examines a subset of the intelligent vehicle's sensors and independently recommends driving decisions based on its local assessment of the tactical situation. This distributed framework allows different reasoning agents to be implemented using different algorithms. When using a collection of agents to solve a single task, it is vital to carefully consider the interactions between the agents. Since each reasoning object contains several internal parameters, manually finding values for these parameters while accounting for the agents' possible interactions is a tedious and error-prone task. In our system, these parameters, and the system's overall dependence on each agent, are automatically tuned using a novel evolutionary optimization strategy, termed Population-Based Incremental Learning (PBIL). Our system, which employs multiple automatically trained agents, can competently drive a vehicle, both in terms of the user-defined evaluation metric and as measured by its behavior in several driving situations culled from real-life experience. In this article, we describe a method for multiple agent integration which is applied to the automated highway system domain. However, it also generalizes to many complex robotics tasks where multiple interacting modules must simultaneously be configured without individual module feedback.
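Since the tuning method named here is Population-Based Incremental Learning, a generic PBIL loop over fixed-length bit strings may be useful as a reference. This is a sketch of the standard algorithm, not the authors' configuration; the fitness function is assumed to be supplied by the caller.

import random

def pbil(n_bits, fitness, generations=100, pop_size=50,
         lr=0.1, mut_prob=0.02, mut_shift=0.05):
    # Generic PBIL: maintain one independent probability per bit,
    # sample a population, and nudge the probabilities towards the
    # best sample of each generation.
    prob = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        population = [[1 if random.random() < p else 0 for p in prob]
                      for _ in range(pop_size)]
        scored = sorted(((fitness(ind), ind) for ind in population), reverse=True)
        elite_fit, elite = scored[0]
        if elite_fit > best_fit:
            best, best_fit = elite, elite_fit
        # Shift the probability vector towards the elite sample.
        prob = [p * (1 - lr) + e * lr for p, e in zip(prob, elite)]
        # Occasionally mutate the probability vector itself.
        prob = [p * (1 - mut_shift) + random.randint(0, 1) * mut_shift
                if random.random() < mut_prob else p
                for p in prob]
    return best, best_fit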

38 citations


Journal ArticleDOI
TL;DR: An agent software architecture built on an agent model is presented; it is composed of three abstract levels over which the complexity is distributed and reduced, and it helps in understanding how non-deterministic behavior can emerge from interactions between agents.
Abstract: This paper's aim is to present the results of the GEAMAS project, which models and simulates natural complex systems. GEAMAS is a generic agent architecture used to study the emergence of behavior in such systems. It is a multiagent program meant for developing simulation applications. Modeling complex systems requires reducing and organizing the system's complexity and describing suitable components. The complexity of the system can then be tackled with an agent-oriented approach, where interactions lead to a global behavior. This approach helps in understanding how non-deterministic behavior can emerge from interactions between agents, which is close to the self-organized criticality used to explain natural phenomena. In the Applied Artificial Intelligence context, this paper presents an agent software architecture built on an agent model. This architecture is composed of three abstract levels over which the complexity is distributed and reduced. The architecture is implemented in ReActalk, an open agent-oriented development tool, which was developed on top of Smalltalk-80. To illustrate our purpose and to validate the architecture, a simulation program to help in predicting volcanic eruptions was investigated. This program was run over a period of one year and has given many satisfying results that were unattainable with more classical approaches.

34 citations


Journal ArticleDOI
TL;DR: Simulation results are presented which display the ability of the neural approach to provide IAVs with the capability to navigate intelligently in partially structured environments, and a discussion of the suggested approach and how it relates to other work is given.
Abstract: The use of Neural Networks (NN) is necessary to bring the behavior of Intelligent Autonomous Vehicles (IAV) near the human one in recognition, learning, decision-making, and action. First, current navigation approaches based on NN are discussed. Indeed, these current approaches remedy the insufficiencies of classical approaches related to real time, autonomy, and intelligence. Second, a neural navigation approach, essentially based on pattern classification to acquire target localization and obstacle avoidance behaviors, is suggested. This approach must provide vehicles with the capability, after supervised Gradient Backpropagation learning, to recognize six (06) target localization situations and thirty (30) obstacle avoidance situations using the NN1 and NN2 classifiers, respectively. Afterwards, decision-making and action consist of two association stages, carried out by reinforcement Trial and Error learning, and their coordination using NN3. NN3 then allows the vehicle to decide among five (05) actions (move towards 30°, move towards 60°, move towards 90°, move towards 120°, and move towards 150°). Third, simulation results which display the ability of the neural approach to provide IAV with the capability to intelligently navigate in partially structured environments are presented. Finally, a discussion dealing with the suggested approach and how it relates to some other works is given.

34 citations


Journal ArticleDOI
TL;DR: Genetic programming is shown to be capable of producing robust wall-following navigation algorithms that perform well in each of the test environments used.
Abstract: This paper demonstrates the use of genetic programming (GP) for the development of mobile robot wall-following behaviors. Algorithms are developed for a simulated mobile robot that uses an array of range finders for navigation. Navigation algorithms are tested in a variety of differently shaped environments to encourage the development of robust solutions, and reduce the possibility of solutions based on memorization of a fixed set of movements. A brief introduction to GP is presented. A typical wall-following robot evolutionary cycle is analyzed, and results are presented. GP is shown to be capable of producing robust wall-following navigation algorithms that perform well in each of the test environments used.

Journal ArticleDOI
TL;DR: The fuzzy diagnostic model has been implemented in a fuzzy diagnostic system for the End-of-Line test at automobile assembly plants and the implemented system has been tested extensively and its performance is presented.
Abstract: This paper describes a fuzzy diagnostic model that contains a fast fuzzy rule generation algorithm and a priority-rule-based inference engine. The fuzzy diagnostic model has been implemented in a fuzzy diagnostic system for the End-of-Line test at automobile assembly plants; the implemented system has been tested extensively, and its performance is presented.

Journal ArticleDOI
TL;DR: A stand-alone inference engine is developed that uses a connectionist knowledge base, seeks to reduce the amount of data requested in order to reach a conclusion, and explains how a particular conclusion was reached.
Abstract: Knowledge-based neural networks (KBNNs) can be used as expert system knowledge bases. This approach shifts the interest toward using connectionist knowledge bases for inferencing in an interactive fashion and giving reasonable justifications for their conclusions. The primary goal of this article is to present a good inference and control mechanism for such knowledge bases. For this purpose, the article develops a stand-alone inference engine that uses a connectionist knowledge base, seeks to reduce the amount of data requested in order to reach a conclusion, and explains how a particular conclusion was reached. The inference engine was evaluated on illustrative example applications. The results obtained demonstrate that, in spite of its simplicity, the presented technique is superior to other techniques over sparse input knowledge bases.

Journal ArticleDOI
TL;DR: It is found that, contrary to common beliefs, VE is often more efficient than CTP, especially in complex networks.
Abstract: This paper studies computational properties of two exact inference algorithms for Bayesian networks, namely the clique tree propagation algorithm (CTP) and the variable elimination algorithm (VE). VE permits pruning of nodes irrelevant to a query while CTP facilitates sharing of computations among different queries. Experiments have been conducted to empirically compare VE and CTP. We found that, contrary to common beliefs, VE is often more efficient than CTP, especially in complex networks.
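A bare-bones sketch of the variable elimination scheme being compared here, restricted to binary variables to keep it short (the paper's experiments naturally use a full implementation):

from itertools import product

# A factor is a pair (vars, table): a tuple of variable names and a dict
# mapping assignment tuples (in that variable order) to non-negative reals.
# Variables are assumed binary (0/1) purely to keep the sketch short.

def multiply(f1, f2):
    # Pointwise product of two factors.
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + tuple(v for v in vars2 if v not in vars1)
    table = {}
    for assignment in product((0, 1), repeat=len(out_vars)):
        env = dict(zip(out_vars, assignment))
        table[assignment] = (t1[tuple(env[v] for v in vars1)]
                             * t2[tuple(env[v] for v in vars2)])
    return out_vars, table

def sum_out(var, factor):
    # Marginalize a variable out of a factor.
    vars_, table = factor
    idx = vars_.index(var)
    out_vars = vars_[:idx] + vars_[idx + 1:]
    out = {}
    for assignment, value in table.items():
        key = assignment[:idx] + assignment[idx + 1:]
        out[key] = out.get(key, 0.0) + value
    return out_vars, out

def eliminate(factors, elimination_order):
    # Variable elimination: for each variable in the order, multiply the
    # factors that mention it and sum the variable out; finally multiply
    # whatever factors remain.
    factors = list(factors)
    for var in elimination_order:
        relevant = [f for f in factors if var in f[0]]
        factors = [f for f in factors if var not in f[0]]
        if not relevant:
            continue
        prod = relevant[0]
        for f in relevant[1:]:
            prod = multiply(prod, f)
        factors.append(sum_out(var, prod))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

# Tiny usage example: a two-node chain with P(A) and P(B|A); eliminating A
# yields P(B) with P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62 and P(B=1) = 0.38.
pA = (("A",), {(0,): 0.6, (1,): 0.4})
pB_given_A = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(eliminate([pA, pB_given_A], ["A"]))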

Journal ArticleDOI
TL;DR: The limitations of current accident databases are discussed, and the focus is on finding and ranking information that relates to a query, as implemented in an accident database.
Abstract: This paper is concerned with accessing information from accident databases. It discusses the limitations of current accident databases and focuses on the issue of finding and ranking information that relates to a query. A user or system initiates an interaction with a database by specifying what is of interest in the form of a query. The query does not have to be treated as a precise description of what is of interest, but as a vague or “fuzzy” one. Fuzzy database techniques make it possible to exploit all available information by returning not only items that match the query exactly, but also items that bear some relation to the query. A domain model for accident reports in the process industries was developed. It consists of four classification hierarchies for the attributes operation, equipment, cause, and consequence. A common approach for assessing how closely two terms are related is based on the number of links between the two terms in a hierarchy. This approach is not appropriate for the accident database domain. Instead, the relationship between any two nodes in a hierarchy is classified into one of four different types. Methods for determining similarities for the different types of relationships are discussed and have been implemented in an accident database. The ranking of the retrieved information is much more satisfactory than with the “distance”-based approach.

Journal ArticleDOI
TL;DR: The behavioral parameters of each agent that need to be computed are discussed, a quantitative solution to the problem of controlling these parameters is provided, and a dynamic, adaptive communication strategy for multiagent systems is described.
Abstract: In this paper we describe a dynamic, adaptive communication strategy for multiagent systems. We discuss the behavioral parameters of each agent that need to be computed, and provide a quantitative solution to the problem of controlling these parameters. We also describe the testbed we built and the experiments we performed to evaluate the effectiveness of our methodology. Several experiments using varying populations and varying organizations of agents were performed and are reported. A number of performance measurements were collected as each experiment was performed so the effectiveness of the adaptive communications strategy could be measured quantitatively. The adaptive communications strategy proved effective for fully connected networks of agents. The performance of these experiments improved for larger populations of agents and even approached optimal performance levels. Experiments with non-fully connected networks showed that the adaptive communications strategy is extremely effective, but does not approach optimality. Other experiments investigated the ability of the adaptive communications strategy to compensate for “distracting” agents, for systems where agents are required to assume the role of information routers, and for systems that must decide between routing paths based on cost information.

Journal ArticleDOI
TL;DR: The use of the natural phenomenon known as the Baldwin effect (or cross-generational learning) as an enhancement to the standard Genetic Algorithm is described, implemented by using an artificial neural network to store aspects of the population's history.
Abstract: The standard Genetic Algorithm, originally inspired by natural evolution, has displayed its effectiveness in solving a wide variety of complex problems. This paper describes the use of the natural phenomenon known as the Baldwin effect (or cross-generational learning) as an enhancement to the standard Genetic Algorithm. This is implemented by using an artificial neural network to store aspects of the population's history. It also describes a method by which the negative side effects of a large elite sub-population can be counter-balanced by using an ageing coefficient in the fitness calculation.

Journal ArticleDOI
TL;DR: This paper considers neural-fuzzy models for multispectral image analysis, covering both supervised and unsupervised classification, and develops software for these models.
Abstract: In this paper, we consider neural-fuzzy models for multispectral image analysis. We consider both supervised and unsupervised classification. The model for supervised classification consists of six layers. The first three layers map the input variables to fuzzy set membership functions. The last three layers implement the decision rules. The model learns decision rules using a supervised gradient descent procedure. The model for unsupervised classification consists of two layers. The algorithm is similar to competitive learning. However, here, for each input sample, membership functions of output categories are used to update weights. Input vectors are normalized, and Euclidean distance is used as the similarity measure. In this model if the input vector does not satisfy the “similarity criterion,” a new cluster is created; otherwise, the weights corresponding to the winner unit are updated using the fuzzy membership values of the output categories. We have developed software for these models. As an illustration, the models are used to analyze multispectral images.
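A minimal sketch of the unsupervised two-layer scheme as described above: inputs are normalized, Euclidean distance drives the similarity criterion, a new cluster is created when no representative is close enough, and otherwise the winner is updated with a membership-weighted step. The inverse-distance membership used below is an illustrative choice, not necessarily the paper's.

import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def unsupervised_cluster(samples, threshold, lr=0.1):
    # Competitive clustering with fuzzy membership-weighted updates (sketch).
    clusters = []  # cluster representatives (weight vectors)
    for x in map(normalize, samples):
        if not clusters:
            clusters.append(list(x))
            continue
        dists = [math.dist(x, c) for c in clusters]
        winner = min(range(len(clusters)), key=lambda i: dists[i])
        if dists[winner] > threshold:
            # Similarity criterion fails: create a new cluster.
            clusters.append(list(x))
        else:
            # Illustrative fuzzy memberships: normalized inverse distances.
            inv = [1.0 / (d + 1e-9) for d in dists]
            memb = [v / sum(inv) for v in inv]
            # Move the winner towards the sample, scaled by its membership.
            clusters[winner] = [c + lr * memb[winner] * (xi - c)
                                for c, xi in zip(clusters[winner], x)]
    return clusters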

Journal ArticleDOI
TL;DR: New approaches to the discrete point data selection problem are described, including hillclimbing, genetic algorithms, and Population-Based Incremental Learning (PBIL).
Abstract: Object localization has applications in many areas of engineering and science. The goal is to spatially locate an arbitrarily shaped object. In many applications, it is desirable to minimize the number of measurements collected while ensuring sufficient localization accuracy. In surgery, for example, collecting a large number of localization measurements may either extend the time required to perform a surgical procedure or increase the radiation dosage to which a patient is exposed. Localization accuracy is a function of the spatial distribution of discrete measurements over an object when measurement noise is present. In previous work (J. of Image Guided Surgery, Simon et al., 1995), metrics were presented to evaluate the information available from a set of discrete object measurements. In this study, new approaches to the discrete point data selection problem are described. These include hillclimbing, genetic algorithms (GAs), and Population-Based Incremental Learning (PBIL). Extensions of the standard GA and PBIL methods that employ multiple parallel populations are explored. The results of extensive empirical testing are provided. The results suggest that a combination of PBIL and hillclimbing results in the best overall performance. A computer-assisted surgical system that incorporates some of the methods presented in this paper is currently being evaluated in cadaver trials.

Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the sGA-evolved neuro-controllers for the task—to keep the pole upright and the cart within the limits of the given track.
Abstract: This paper describes the application of the Structured Genetic Algorithm (sGA) to design neuro-controllers for an unstable physical system. In particular, the approach uses a single unified genetic process to automatically evolve complete neural nets (both architectures and their weights) for controlling a simulated pole-cart system. Experimental results demonstrate the effectiveness of the sGA-evolved neuro-controllers for the task—to keep the pole upright (within a specified vertical angle) and the cart within the limits of the given track.

Journal ArticleDOI
TL;DR: It is shown how (depending on the complexity of the required behaviour) a simulation model for animal behaviour can be designed at a conceptual level in an agent-based manner.
Abstract: In the biological literature on animal behaviour, simulation experiments are a useful source of progress in addition to real experiments and field studies. Often specific mathematical modelling techniques are adopted and directly implemented in a programming language. These usually adopted mathematical modelling techniques are less adequate for modelling more complex agent behaviours. The literature on AI and Agent Technology offers more specific methods to design and implement (also more complex) intelligent agents and agent societies at a conceptual level. One of these methods is the compositional multi-agent system design method DESIRE. In this paper it is shown how (depending on the complexity of the required behaviour) a simulation model for animal behaviour can be designed at a conceptual level in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active, social, and adaptive behaviour. The compositional design method for multi-agent systems, DESIRE, and its software environment support the conceptual and detailed design and execution of these models. A number of experiments reported in the literature on animal behaviour have been simulated with different agent models.

Journal ArticleDOI
TL;DR: This paper proposes a novel evolutionary computation model, the Organizational-Learning Oriented Classifier System (OCS), and describes its application to Printed Circuit Board (PCB) redesign problems in computer-aided design (CAD).
Abstract: This paper proposes a novel evolutionary computation model, the Organizational-Learning Oriented Classifier System (OCS), and describes its application to Printed Circuit Board (PCB) redesign problems in computer-aided design (CAD). Conventional CAD systems, which explicitly decide the parts' placements using a knowledge base, cannot place the parts as effectively as human experts. Furthermore, the support of human experts is intrinsically required to satisfy the constraints and to optimize a global objective function. In the proposed OCS model, however, the parts generate and acquire adaptive behaviors for an appropriate placement without explicit control. In OCS, we focus upon emergent processes in which the parts dynamically form an organized group by autonomously generating adaptive behaviors through local interaction among themselves. Using OCS, we have conducted intensive experiments on a practical PCB redesign problem for electric appliances. The experimental results have shown that: (1) OCS finds feasible solutions of the same level as those produced by human experts; (2) the solutions are locally optimal and also globally better than those of human experts with regard to total wiring length; and (3) the solutions are more preferable than those of the conventional CAD systems.

Journal ArticleDOI
TL;DR: This paper focuses upon the development of three new electronic architectures of inference engines as part of a hardware expert system applied to very high-speed fault detection in industrial processes.
Abstract: This paper focuses upon the development of three new electronic architectures of inference engines as part of a hardware expert system applied to very high-speed fault detection in industrial processes. The architecture of this expert system consists of an inference engine (a dedicated processor that is necessary due to the high-speed requirements and the repetitiveness of the operation), which uses a pattern-directed inference system; a fact base, which stores the status of the signals at each moment; and a static knowledge base, which contains the inference rules compiled from expert knowledge. A circuit for analyzing time is also presented. This allows time to be taken as another variable of the process and carries out a redundancy analysis simultaneously with the fault detection module.

Journal ArticleDOI
TL;DR: The aim of the paper is to present a new neural model, obtained by making significant changes to a network that is well known in the literature, the Hopfield network, and its use in optimizing the performance of FMSs.
Abstract: The performance of a Flexible Manufacturing System (FMS) is generally linked to its productivity and is often limited by poor use of available resources. One of the main goals in the automated factory environment is, therefore, to exploit resources to the full, in such a way as to optimize productivity. As widely documented in the literature, this is a hard task on account of its computational complexity. For this reason a number of heuristic techniques are currently available, the best known of which are based on Event Graphs, which are a particular class of Petri Nets. The paper proposes a performance optimization technique which, although it is based on Event Graphs, applies algorithms which differ from the traditional heuristic ones. More specifically, a novel neural model is used to solve the optimization problem. The neural model was obtained by making significant changes to a network that is well known in the literature: the Hopfield network. The proposed solution is an original one and features several advantages over the best-known heuristic approaches to the problem, the most important of which is the possibility of obtaining optimal or close-to-optimal solutions in polynomial time, proportional to the size of the FMS. In addition, the possibility of a simple, economical hardware implementation of the neural model favours its integration in the automated factory environment, allowing real-time supervision and optimization of productivity. The aim of the paper is to present the new neural model and its use in optimizing the performance of FMSs. A comparison of the neural approach with classical heuristic solutions, and its real-time calculation capability, will also be treated in the paper.
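For reference, the classical Hopfield network that the proposed model departs from minimizes an energy function of the following form (one common sign convention; the paper's modifications are not reproduced here):

\[
E \;=\; -\tfrac{1}{2}\sum_{i}\sum_{j} w_{ij}\, s_i s_j \;-\; \sum_{i} \theta_i s_i ,
\]

where the \(s_i\) are the neuron states, the \(w_{ij}\) are the symmetric connection weights and the \(\theta_i\) are the external inputs; an optimization problem is mapped onto such a network by choosing \(w_{ij}\) and \(\theta_i\) so that low-energy states correspond to good solutions.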

Journal ArticleDOI
TL;DR: A biologically motivated computer model, called the artificial neuromolecular (ANM) system, that demonstrates long-term evolutionary learning capability for complex problem solving and has been applied to Chinese character recognition.
Abstract: The capability of learning in an indefinite amount of time renders biological systems highly adaptable. We have developed a biologically motivated computer model, called the artificial neuromolecular (ANM) system, that demonstrates long-term evolutionary learning capability for complex problem solving. The major elements of the system are neurons whose input-output behavior is controlled by significant internal dynamics. The dynamics are modeled by cellular automata, structured to represent the neuronal cytoskeleton (a subneuronal network found in every neuron). Neurons of this type are linked into a multilayer network that abstracts some features of visual circuitry. Multiple copies of these networks are controlled by neurons with memory manipulation capabilities. The ANM system combines these two types of neurons into a single, closely integrated architecture. The system is educated to perform desired tasks by evolutionary algorithms. These algorithms act at the intraneuronal level to generate a repertoire of neurons with different pattern processing capabilities. They also act at the interneuronal level (through the memory manipulation system) to orchestrate different pattern processing neurons into a group suitable for performing desired tasks. The system has been applied to Chinese character recognition. The experiments emphasized long-term evolutionary learning, relearning capability, self-organizing dynamics, malleability, gradual transformability, the multidimensional fitness surface, co-evolutionary learning, and cross-level synergy.

Journal ArticleDOI
Brian J. Ross
TL;DR: A genetic programming system based on Koza's model has been implemented, and experimental runs of the system successfully evolved a number of non-iterative CCS systems, proving the potential of evolutionary approaches to concurrent system development.
Abstract: Process algebras are formal languages used for the rigorous specification and analysis of concurrent systems. By using a process algebra as the target language of a genetic programming system, the derivation of concurrent programs satisfying given problem specifications is possible. A genetic programming system based on Koza's model has been implemented. The target language used is Milner's CCS process algebra, chosen for its conciseness and simplicity. The genetic programming environment needs a few adaptations to the computational characteristics of concurrent programs. In particular, means for efficiently controlling the exponentially large computation spaces that are common with process algebras must be addressed. Experimental runs of the system successfully evolved a number of non-iterative CCS systems, hence proving the potential of evolutionary approaches to concurrent system development.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed evolution strategy can produce results comparable to those of competitive neural networks.
Abstract: Color quantization of image sequences is a case of a non-stationary clustering problem. The approach we adopt to deal with this kind of problem is to propose adaptive algorithms to compute the cluster representatives. We have studied the application of Competitive Neural Networks and Evolution Strategies to the one-pass adaptive solution of this problem. One-pass adaptation is imposed by the near real-time constraint that we try to achieve. In this paper we propose a simple and effective evolution strategy for this task. Two kinds of competitive neural networks are also applied. Experimental results show that the proposed evolution strategy can produce results comparable to those of the competitive neural networks.
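A generic sketch of a simple one-pass evolution strategy for adapting cluster representatives between frames: mutate the codebook with Gaussian noise and keep the mutant only if it lowers the quantization error on the current frame. This illustrates the general approach, not the specific strategy proposed in the paper.

import random

def distortion(codebook, pixels):
    # Sum of squared distances from each pixel to its nearest representative.
    total = 0.0
    for p in pixels:
        total += min(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in codebook)
    return total

def adapt_codebook(codebook, pixels, iterations=200, sigma=4.0):
    # (1+1)-style evolution strategy: warm-start from the previous frame's
    # codebook, mutate the representatives with Gaussian noise, and accept
    # the mutant only if it reduces the quantization error on this frame.
    best = [list(c) for c in codebook]
    best_err = distortion(best, pixels)
    for _ in range(iterations):
        mutant = [[ci + random.gauss(0.0, sigma) for ci in c] for c in best]
        err = distortion(mutant, pixels)
        if err < best_err:
            best, best_err = mutant, err
    return best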

Journal ArticleDOI
TL;DR: CODMAPS, a COst Distribution Method for Agent Planning Systems, is presented, based on individual distribution of cost and competitive behavior, and introduces the concept of reluctance as a regulation mechanism to facilitate it.
Abstract: In this paper we present CODMAPS, a COst Distribution Method for Agent Planning Systems. The strategy is based on individual distribution of cost and competitive behavior. Our model emulates how human agents work in expert groups: they all share a common objective, yet they also have individual interests and try to steer the planning process towards their own goals. Two opposing trends coexist within the set: global co-operation and individual utility maximization. External evaluation must guarantee the validity of the final plan at the global level, but a negotiation and cost distribution strategy must ensure that cost is adequately shared throughout the agent set. We introduce the concept of reluctance as a regulation mechanism to facilitate this. A statistical model allows agents to adapt their attitude towards negotiation depending on their negotiation state vector, which encompasses the whole history of the agent's previous negotiations. Previous research into this problem had taken the “rational” approach: a group of agents chooses the “best” alternative given the current possibilities. This not only forces the agents to exchange and “understand” other agents' proposals (which is computationally expensive), but also neglects the past negotiation history of each individual agent. Our approach facilitates the distribution of cost across the agent set given the agents' past history and the importance of their constraints. The more taxed an agent becomes, the more reluctant it will be to relax, thus pushing less-taxed agents to accept a compromise. It does not need explicit constraint information exchange, thus simplifying the negotiation process.

Journal ArticleDOI
TL;DR: A method of discovering incidences under these circumstances is proposed which produces a unique output, in contrast to the large number of outputs from other approaches, and the completeness of the result it generates is proved.
Abstract: Incidence calculus is a probabilistic logic in which incidences, standing for the situations in which formulae may be true, are assigned to some formulae, and probabilities are assigned to incidences. However, numerical values may be assigned to formulae directly without specifying the incidences. In this paper, we propose a method of discovering incidences under these circumstances which produces a unique output, in contrast with the large number of outputs from other approaches. Some theoretical aspects of this method are thoroughly studied, and the completeness of the result generated from it is proved. The result can be used to calculate mass functions from belief functions in the Dempster-Shafer theory of evidence (DS theory) and to define probability spaces from inner measures (or lower bounds) of probabilities on the relevant propositional language set.
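For context, the standard Dempster-Shafer relations between a belief function and its mass function, into which such a result can be plugged, are the Möbius inversion pair:

\[
\mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B),
\qquad
m(A) \;=\; \sum_{B \subseteq A} (-1)^{|A \setminus B|}\, \mathrm{Bel}(B).
\]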

Journal ArticleDOI
TL;DR: The modular architecture of a generic dialogue system that assists a user/operator in performing a task with a tool is described, named CALLIOPE after the Greek goddess of eloquence, which aims at being an active partner in an intelligent man-machine dialogue.
Abstract: We describe the modular architecture of a generic dialogue system that assists a user/operator in performing a task with a tool. This coaching system is named CALLIOPE after the Greek goddess of eloquence. It aims at being an active partner in an intelligent man-machine dialogue. The intelligent dimension of the coaching system is reflected by its ability to adapt to the user and the situation at hand. The CALLIOPE system contains an explicit user model and world model to situate its dialogue actions. A plan library allows it to follow loosely predetermined dialogue scenarios. The heart of the coaching system is an AI planning module, which plans a series of dialogue actions. We present a coherent set of three dialogue or speech actions that will make up the physical form of the man-machine communication.The use of the AI planning paradigm as a basis for man-machine interaction is motivated by research in various disciplines, as e.g., AI, Cognitive Science and Social Sciences. Starting from the man-man communication metaphor, we can view the “thinking before speaking” of a human communication partner as constructing an underlying plan which is responsible for the purposiveness, the organisation and the relevance of the communication. CALLIOPE has been fully implemented and tested on theoretical examples. At present, also three tailored versions of CALLIOPE are in operational use in different industrial application domains: operator support for remedying tasks in chemical process industry, operator support for a combined task of planning, plan execution and process control in the area of chemical process development, and thirdly decision support in production scheduling.