
Showing papers in "Applied Intelligence in 2001"


Journal ArticleDOI
TL;DR: A range of new memetic approaches to the rostering problem is introduced, each using a steepest descent improvement heuristic within a genetic algorithm framework, and a hybrid that is greater than the sum of its component algorithms is presented.
Abstract: Constructing timetables of work for personnel in healthcare institutions is known to be a highly constrained and difficult problem to solve. In this paper, we discuss a commercial system, together with the model it uses, for this rostering problem. We show that tabu search heuristics can be made effective, particularly for obtaining reasonably good solutions quickly for smaller rostering problems. We discuss the robustness issues, which arise in practice, for tabu search heuristics. This paper introduces a range of new memetic approaches for the problem, which use a steepest descent improvement heuristic within a genetic algorithm framework. We provide empirical evidence to demonstrate the best features of a memetic algorithm for the rostering problem, particularly the nature of an effective recombination operator, and show that these memetic approaches can handle initialisation parameters and a range of instances more robustly than tabu search algorithms, at the expense of longer solution times. Having presented tabu search and memetic approaches (both with benefits and drawbacks) we finally present an algorithm that is a hybrid of both approaches. This technique produces better solutions than either of the earlier approaches and it is relatively unaffected by initialisation and parameter changes, combining some of the best features of each approach to create a hybrid which is greater than the sum of its component algorithms.
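The memetic pattern the abstract describes, steepest descent applied to every offspring inside a genetic framework, can be sketched compactly. This is an illustrative sketch only, not the commercial system's code; `cost`, `neighbours`, `crossover`, and `mutate` are placeholders for a concrete rostering encoding.

```python
import random

def steepest_descent(solution, cost, neighbours):
    """Move to the best neighbour until no neighbour improves the cost."""
    while True:
        candidates = list(neighbours(solution))
        if not candidates:
            return solution
        best = min(candidates, key=cost)
        if cost(best) >= cost(solution):
            return solution
        solution = best

def memetic_search(population, cost, neighbours, crossover, mutate, generations=100):
    """Genetic framework whose offspring are locally improved before insertion."""
    population = [steepest_descent(s, cost, neighbours) for s in population]
    for _ in range(generations):
        a, b = random.sample(population, 2)
        child = steepest_descent(mutate(crossover(a, b)), cost, neighbours)
        worst = max(range(len(population)), key=lambda i: cost(population[i]))
        if cost(child) < cost(population[worst]):
            population[worst] = child   # steady-state replacement of the worst
    return min(population, key=cost)
```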

249 citations


Journal ArticleDOI
TL;DR: This work highlights important CCBR problems, evaluates approaches for solving them, and suggests alternatives to be considered for future research.
Abstract: Conversational case-based reasoning (CCBR) was the first widespread commercially successful form of case-based reasoning. Historically, commercial CCBR tools conducted constrained human-user dialogues and targeted customer support tasks. Due to their simple implementation of CBR technology, these tools were almost ignored by the research community (until recently), even though their use introduced many interesting applied research issues. We detail our progress on addressing three of these issues: simplifying case authoring, dialogue inferencing, and interactive planning. We describe evaluations of our approaches on these issues in the context of NaCoDAE and HICAP, our CCBR tools. In summary, we highlight important CCBR problems, evaluate approaches for solving them, and suggest alternatives to be considered for future research.

184 citations


Journal ArticleDOI
TL;DR: The dynamic genetic algorithm simultaneously uses several crossover and mutation operators to generate the next generation, with the expectation that the genuinely good operators will have an increasing effect on the genetic process.
Abstract: Traditional genetic algorithms use only one crossover and one mutation operator to generate the next generation. The chosen crossover and mutation operators are critical to the success of genetic algorithms. Different crossover or mutation operators, however, are suitable for different problems, and even for different stages of the genetic process in a problem. Determining which crossover and mutation operators should be used is quite difficult and is usually done by trial and error. In this paper, a new genetic algorithm, the dynamic genetic algorithm (DGA), is proposed to solve the problem. The dynamic genetic algorithm simultaneously uses several crossover and mutation operators to generate the next generation. The crossover and mutation ratios change along with the evaluation results of the respective offspring in the next generation. In this way, we expect that the genuinely good operators will have an increasing effect in the genetic process. Experiments show that the proposed algorithm performs better than algorithms with a single crossover and a single mutation operator.
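A minimal sketch of the adaptive-ratio idea, assuming fitness is maximised; the abstract does not give the exact update rule for the ratios, so the multiplicative reward factors below are illustrative guesses.

```python
import random

def dga_generation(pop, fitness, crossovers, mutations, wx, wm):
    """One DGA-style generation: several crossover and mutation operators
    coexist, chosen in proportion to weights that grow whenever an operator
    produces an above-average child."""
    avg = sum(fitness(s) for s in pop) / len(pop)
    offspring = []
    for _ in range(len(pop)):
        cx = random.choices(range(len(crossovers)), weights=wx)[0]
        mu = random.choices(range(len(mutations)), weights=wm)[0]
        a, b = random.sample(pop, 2)
        child = mutations[mu](crossovers[cx](a, b))
        offspring.append(child)
        reward = 1.1 if fitness(child) > avg else 0.95   # illustrative factors
        wx[cx] *= reward
        wm[mu] *= reward
    return offspring
```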

101 citations


Journal ArticleDOI
TL;DR: In this article, the authors relate three different ways of specifying preferences in decision theory: by means of a set of particular types of constraints on the utility function, by means of an ordered set of prioritized goals expressed by logical propositions, and by means of an ordered set of subsets of possible choices reaching the same level of satisfaction.
Abstract: The classical way of encoding preferences in decision theory is by means of utility or value functions. However, agents are not always able to deliver such a function directly. In this paper, we relate three different ways of specifying preferences, namely by means of a set of particular types of constraints on the utility function, by means of an ordered set of prioritized goals expressed by logical propositions, and by means of an ordered set of subsets of possible choices reaching the same level of satisfaction. These different expression modes can be handled in a weighted logical setting, here that of possibilistic logic. The aggregation of preferences pertaining to different criteria can then be handled by fusing sets of prioritized goals. Apart from better expressivity, the benefits of a logical representation of preferences are to put them in a suitable format for reasoning purposes, or for modifying them.
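For illustration, one common possibilistic-logic reading of prioritized goals, which may differ in detail from the authors' formulation, grades a choice by the highest-priority goal it violates:

```python
def satisfaction(choice, prioritized_goals):
    """`prioritized_goals` is a list of (priority, predicate) pairs with
    priorities in (0, 1]; the satisfaction level of a choice is 1 minus the
    highest priority among the goals it violates."""
    violated = [p for p, goal in prioritized_goals if not goal(choice)]
    return 1.0 - max(violated, default=0.0)

# Example on (price, speed) choices: affordability outranks speed.
goals = [(0.75, lambda c: c[0] <= 100),   # high-priority goal
         (0.5,  lambda c: c[1] >= 50)]    # low-priority goal
print(satisfaction((80, 60), goals))    # 1.0  -> both goals satisfied
print(satisfaction((80, 30), goals))    # 0.5  -> only the minor goal violated
print(satisfaction((150, 60), goals))   # 0.25 -> the major goal violated
```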

84 citations


Journal ArticleDOI
TL;DR: A mechanism for learning lexical correspondences between two languages from sets of translated sentence pairs is presented; it has been implemented, tested on a set of sample training datasets, and produced results promising enough to warrant further investigation.
Abstract: A mechanism for learning lexical correspondences between two languages from sets of translated sentence pairs is presented. These lexical level correspondences are learned using analogical reasoning between two translation examples. Given two translation examples, the similar parts of the sentences in the source language must correspond to the similar parts of the sentences in the target language. Similarly, the different parts must correspond to the respective parts in the translated sentences. The correspondences between similarities and between differences are learned in the form of translation templates. A translation template is a generalized translation exemplar pair where some components are generalized by replacing them with variables in both sentences and establishing bindings between these variables. The learned translation templates are obtained by replacing differences or similarities by variables. This approach has been implemented and tested on a set of sample training datasets and produced promising results for further investigation.
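The differences-to-variables idea can be illustrated with a deliberately tiny word-level sketch. It assumes equal-length sentences and word-by-word alignment, which the real templates do not require, and the Turkish-style words are invented examples:

```python
def template_from_pair(src1, src2, tgt1, tgt2):
    """Learn a toy translation template from two translation examples:
    keep the words the two sentences share, and replace the positions where
    they differ with a shared variable X."""
    def generalize(a, b):
        return [w1 if w1 == w2 else "X"
                for w1, w2 in zip(a.split(), b.split())]
    return generalize(src1, src2), generalize(tgt1, tgt2)

src_tpl, tgt_tpl = template_from_pair("I saw the dog", "I saw the cat",
                                      "kopegi gordum", "kediyi gordum")
print(src_tpl)  # ['I', 'saw', 'the', 'X']
print(tgt_tpl)  # ['X', 'gordum']
```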

80 citations


Journal ArticleDOI
David McSherry1
TL;DR: It is argued that mixed-initiative dialogue, explanation of reasoning, and sensitivity analysis are essential to meet the needs of experienced as well as novice users in CBR.
Abstract: Interactive trouble-shooting and customer help-desk support, both activities that involve sequential diagnosis, represent the majority of applications of case-based reasoning (CBR). An analysis is presented of the user-interface requirements of intelligent systems for sequential diagnosis. We argue that mixed-initiative dialogue, explanation of reasoning, and sensitivity analysis are essential to meet the needs of experienced as well as novice users. Other issues to be addressed by system designers include relevance and consistency in dialogue, tolerance of missing data, and timely provision of feedback to users. Many of these issues have previously been addressed by the developers of expert systems and the lessons learned may have important implications for CBR. We present a prototype environment for interactive CBR in sequential diagnosis, called CBR Strategist, which is designed to meet the identified requirements.

71 citations


Journal ArticleDOI
TL;DR: This work uses an agent-based architecture called Asynchronous Team (A-Team), in which each agent encapsulates a different problem solving strategy and agents cooperate by exchanging results; the approach has been successfully implemented in an industrial scheduling system.
Abstract: We present a new agent-based solution approach for the problem of scheduling multiple non-identical machines in the face of sequence dependent setups, job machine restrictions, batch size preferences, fixed costs of assigning jobs to machines and downstream considerations. We consider multiple objectives such as minimizing (weighted) earliness and tardiness, and minimizing job-machine assignment costs. We use an agent-based architecture called Asynchronous Team (A-Team), in which each agent encapsulates a different problem solving strategy and agents cooperate by exchanging results. Computational experiments on large instances of real-world scheduling problems show that the results obtained by this approach are significantly better than any single algorithm or the scheduler alone. This approach has been successfully implemented in an industrial scheduling system.
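The cooperation mechanism is easy to sketch: agents share nothing but a pool of candidate solutions. The skeleton below illustrates the A-Team pattern, not the industrial system; the "drop the worst" destroyer policy is one simple choice.

```python
import random

def a_team(constructors, improvers, cost, rounds=1000, pool_size=30):
    """Skeleton of an Asynchronous Team: 'agents' are plain callables that
    cooperate only through a shared pool of candidate solutions.
    Constructors seed the pool, improvers publish (hopefully) better
    variants, and a destroyer policy trims the pool."""
    pool = [c() for c in constructors]
    for _ in range(rounds):
        agent = random.choice(improvers)
        seed = random.choice(pool)
        pool.append(agent(seed))
        if len(pool) > pool_size:            # destroyer: drop the worst solution
            pool.remove(max(pool, key=cost))
    return min(pool, key=cost)
```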

63 citations


Journal ArticleDOI
TL;DR: This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system utilising only process data based on a novel forward constrained regression procedure.
Abstract: This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of the experts as potential model bases, the structure construction algorithm, formed on the forward constrained regression procedure, selects the most significant model base one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are accordingly adjusted so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm of the consequent mixture of experts network system is also derived to generate an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications ranging from multiple model controller synthesis to multi-sensor data fusion.

58 citations



Journal ArticleDOI
TL;DR: An efficient method of estimating a variable threshold of reliability for an HMM, which is found to be useful in rejecting unreliable patterns, is proposed.
Abstract: We present a glove-based hand gesture recognition system using hidden Markov models (HMMs) for recognizing the unconstrained 3D trajectory gestures of operators in a remote work environment. A Polhemus sensor attached to a PinchGlove is employed to obtain a sequence of 3D positions of a hand trajectory. The direct use of 3D data provides more naturalness in generating gestures, thereby avoiding some of the constraints usually imposed to prevent performance degradation when trajectory data are projected into a specific 2D plane. We use two kinds of HMMs according to the basic units to be modeled: gesture-based HMM and stroke-based HMM. The decomposition of gestures into more primitive strokes is quite attractive, since reversely concatenating stroke-based HMMs makes it possible to construct a new set of gesture-based HMMs. Any deterioration in performance and reliability arising from decomposition can be remedied by a fine-tuned relearning process for such composite HMMs. We also propose an efficient method of estimating a variable threshold of reliability for an HMM, which is found to be useful in rejecting unreliable patterns. In recognition experiments on 16 types of gestures defined for remote work, the fine-tuned composite HMM achieves the best performance of 96.88% recognition rate and also the highest reliability.

54 citations


Journal ArticleDOI
TL;DR: This paper presents the system architecture, algorithms, and empirical evaluations of the interactive user-interface component of the CaseAdvisor system, which helps compress a large case base into several small ones.
Abstract: In interactive case-based reasoning, it is important to present a small number of important cases and problem features to the user at one time. This goal is difficult to achieve when large case bases are commonplace in industrial practice. In this paper we present our solution to the problem by highlighting the interactive user-interface component of the CaseAdvisor system. In CaseAdvisor, decision forests are created in real time to help compress a large case base into several small ones. This is done by merging similar cases together through a clustering algorithm. An important side effect of this operation is that it allows up-to-date maintenance operations to be performed for case base management. During the retrieval process, an information-guided subsystem can then generate decision forests based on users' current answers obtained through an interactive process. Possible questions to the user are carefully analyzed through information theory. An important feature of the system is that case-base maintenance and reasoning are integrated into a seamless whole. In this article we present the system architecture, algorithms, and empirical evaluations.
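The "questions carefully analyzed through information theory" is the classic information-gain criterion. A minimal sketch, with `cases` as (answers, solution) pairs standing in for the system's actual case representation:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of solution labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_question(cases, questions):
    """Pick the question whose answers partition the remaining cases with
    the largest information gain; `cases` are (answers, solution) pairs
    where `answers` maps question -> answer."""
    base = entropy([sol for _, sol in cases])
    def gain(q):
        parts = {}
        for answers, sol in cases:
            parts.setdefault(answers.get(q), []).append(sol)
        remainder = sum(len(p) / len(cases) * entropy(p) for p in parts.values())
        return base - remainder
    return max(questions, key=gain)
```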

Journal ArticleDOI
TL;DR: Research on supporting aerospace design by integrating a case-based design support framework with interactive tools for capturing expert design knowledge through “concept mapping” is described.
Abstract: Aerospace design is a complex task requiring access to large amounts of specialized information. Consequently, intelligent systems that support and amplify the abilities of human designers by capturing and presenting relevant information can profoundly affect the speed and reliability of design generation. This article describes research on supporting aerospace design by integrating a case-based design support framework with interactive tools for capturing expert design knowledge through “concept mapping”. In the integrated system, interactive concept mapping tools provide crucial functions for generating and examining design cases and navigating their hierarchical structure, while CBR techniques facilitate retrieval and aid interactive adaptation of designs. Our goal is both to provide a useful design aid and to develop general interactive techniques to facilitate case acquisition and adaptation. Experiments illuminate the performance of the system's context-sensitive retrieval during interactive case adaptation and the conditions under which it provides the most benefit.

Journal ArticleDOI
TL;DR: The necessity of user interaction during the CBR process and how this decision enhances the capabilities and the usability of the system are discussed.
Abstract: In this paper we present an extension of an existing system, called SaxEx, capable of generating expressive musical performances based on Case-Based Reasoning (CBR) techniques. The previous version of SaxEx used pre-fixed criteria within the different CBR steps and, therefore, there was no room for user interaction. This paper discusses the necessity of user interaction during the CBR process and how this decision enhances the capabilities and the usability of the system. The set of evaluation experiments conducted show the advantages of SaxEx's new interactive functionality, particularly for future educational applications of the system.

Journal ArticleDOI
TL;DR: An approach to planning based on various state models that handle various types of action dynamics (deterministic and probabilistic) and sensor feedback (null, partial, and complete) is presented.
Abstract: The problem of selecting actions in environments that are dynamic and not completely predictable or observable is a central problem in intelligent behavior. In AI, this translates into the problem of designing controllers that can map sequences of observations into actions so that certain goals are achieved. Three main approaches have been used in AI for designing such controllers: the programming approach, where the controller is programmed by hand in a suitable high-level procedural language; the planning approach, where the control is automatically derived from a suitable description of actions and goals; and the learning approach, where the control is derived from a collection of experiences. The three approaches exhibit successes and limitations. The focus of this paper is on the planning approach. More specifically, we present an approach to planning based on various state models that handle various types of action dynamics (deterministic and probabilistic) and sensor feedback (null, partial, and complete). The approach combines high-level representation languages for describing actions, sensors, and goals; mathematical models of sequential decisions for making precise the various planning tasks and their solutions; and heuristic search algorithms for computing those solutions. The approach is supported by a computational tool we have developed that accepts high-level descriptions of actions, sensors, and goals and produces suitable controllers. We also present empirical results and discuss open challenges.
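For the fully observable probabilistic case, the underlying mathematical model of sequential decisions is a Markov decision process, and the simplest solver is value iteration over expected costs. A generic sketch follows; the paper's tool uses heuristic search rather than this exhaustive sweep.

```python
def value_iteration(states, actions, transition, cost, goals, epsilon=1e-6):
    """Expected-cost value iteration for a probabilistic state model:
    transition(s, a) yields (probability, next_state) pairs, cost(s, a) is
    the immediate action cost, and goal states cost nothing."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goals:
                continue
            best = min(cost(s, a) + sum(p * V[t] for p, t in transition(s, a))
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < epsilon:
            return V   # greedy policy w.r.t. V is the controller
```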

Journal ArticleDOI
TL;DR: This paper defines parameterized desires in an extension of Lang's framework of qualitative decision theory, in which utility functions are constructed from desires, and introduces three parameters, which help to implement different facets of risk.
Abstract: In qualitative decision-theoretic planning, desires—qualitative abstractions of utility functions—are combined with defaults—qualitative abstractions of probability distributions—to calculate the expected utilities of actions. This paper is inspired by Lang's framework of qualitative decision theory, in which utility functions are constructed from desires. Unfortunately, there is no consensus about the desirable logical properties of desires, in contrast to the case for defaults. To do justice to the wide variety of desires we define parameterized desires in an extension of Lang's framework. We introduce three parameters, which help us to implement different facets of risk. The strength parameter encodes the importance of the desire, the lifting parameter encodes how to determine the utility of a set (proposition) from the utilities of its elements (worlds), and the polarity parameter encodes the relation between gain of utility for rewards and loss of utility for violations. The parameters influence how desires interact, and they thus increase the control on the construction process of utility functions from desires.
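A toy reading of the three parameters, not the paper's formal definitions (which the abstract does not give): desires are (predicate, strength, polarity) triples from which world utilities are built, and `lifting` turns a proposition's worlds into a single value.

```python
def world_utility(world, desires):
    """Construct a world's utility from parameterized desires: a 'reward'
    desire adds its strength where it holds, a 'violation' desire subtracts
    its strength where it fails."""
    u = 0.0
    for holds, strength, polarity in desires:
        if polarity == "reward" and holds(world):
            u += strength
        elif polarity == "violation" and not holds(world):
            u -= strength
    return u

def proposition_utility(worlds, desires, lifting=max):
    """The lifting parameter: derive the utility of a set of worlds (a
    proposition) from its elements, e.g. optimistically (max) or
    cautiously (min)."""
    return lifting(world_utility(w, desires) for w in worlds)
```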

Journal ArticleDOI
TL;DR: This paper first analyzes, both theoretically and empirically, how step size control is lost, and then proposes two dynamic lower bound schemes that enable the EP algorithm to adjust the lower bound dynamically during evolution.
Abstract: The lognormal self-adaptation has been used extensively in evolutionary programming (EP) and evolution strategies (ES) to adjust the search step size for each objective variable. However, it was discovered in our previous study (K.-H. Liang, X. Yao, Y. Liu, C. Newton, and D. Hoffman, in Evolutionary Programming VII: Proc. of the Seventh Annual Conference on Evolutionary Programming, vol. 1447, edited by V. Porto, N. Saravanan, D. Waagen, and A. Eiben, Lecture Notes in Computer Science, Springer: Berlin, pp. 291–300, 1998) that such self-adaptation may rapidly lead to a search step size that is far too small to explore the search space any further, and thus stagnates the search. This is called the loss of step size control. It is necessary to use a lower bound on the search step size to avoid this problem. Unfortunately, the optimal setting of the lower bound is highly problem dependent. This paper first analyzes both theoretically and empirically how step size control is lost. Then two schemes of dynamic lower bound are proposed. The schemes enable the EP algorithm to adjust the lower bound dynamically during evolution. Experimental results are presented to demonstrate the effectiveness and efficiency of the dynamic lower bound for a set of benchmark functions.
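The mechanism under discussion is standard lognormal self-adaptation with a floor on the step sizes. A sketch follows; the paper's dynamic schemes recompute `sigma_min` during the run, which is not reproduced here.

```python
import math, random

def ep_mutate(x, sigma, sigma_min, tau=None, tau_prime=None):
    """Lognormal self-adaptation as used in EP/ES, with a lower bound on
    the step sizes to guard against the loss of step size control."""
    n = len(x)
    tau = tau or 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = tau_prime or 1.0 / math.sqrt(2.0 * n)
    g = random.gauss(0.0, 1.0)          # one global draw shared by all components
    new_sigma = [max(s * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0)),
                     sigma_min)         # the floor prevents premature stagnation
                 for s in sigma]
    new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```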

Journal ArticleDOI
TL;DR: ExpertGuide tightly integrates CBR techniques and provides a user interface for users who want advice from mentors; it is currently being used to build WWW-based mentor systems with scales ranging from division-wide to corporate-wide and nationwide.
Abstract: Case-based reasoning (CBR) has been used to improve knowledge management in corporate activities. It was initially used for problem solving and then for facilitating the distribution of knowledge and experiences. Since conversational CBR appeared, it has also been used to develop mentor systems in complex knowledge spaces. ExpertGuide was designed as a tool for developing WWW-based mentor systems. It was designed during the development of CBR systems used at NEC. Multilink retrieval provides a method for searching a case library from several viewpoints. Question selection by entropy finds the most effective questions in discriminating cases; it does this by calculating the information gain of candidate questions. Indexing with scripts is a case-indexing method using Schank's scripts and is effective when situations and problems are hard to express and assess. ExpertGuide tightly integrates these techniques and provides a user interface for users who want advice from mentors. It is currently being used to build WWW-based mentor systems with scales ranging from division-wide to corporate-wide and nationwide.

Journal ArticleDOI
TL;DR: It is demonstrated that, when too small a lower bound is adopted, strategy parameters attain minute values at an early stage and the population cannot in practice move to other, better points.
Abstract: Evolution Strategies (ES) are an approach to numerical optimization that shows good optimization performance. However, it is found through our computer simulations that the performance changes with the lower bound of strategy parameters, although it has been overlooked in the ES community. We demonstrate that a population cannot practically move to other better points, because strategy parameters attain minute values at an early stage, when too small a lower bound is adopted. This difficulty is called the lower bound problem in this paper. In order to improve the “self-adaptive” property of strategy parameters, a new extended ES called RES is proposed. RES has redundant neutral strategy parameters and adopts new mutation mechanisms in order to utilize selectively neutral mutations so as to improve the adaptability of strategy parameters. Computer simulations of the proposed approach are conducted using several test functions.

Journal ArticleDOI
TL;DR: A new symbiotic evolutionary algorithm is proposed for complex optimization problems composed of multiple interrelated sub-problems; the formation and evolution of endosymbionts is based on fitness, as this increases the adaptability of the individuals and the search efficiency.
Abstract: This paper proposes a new symbiotic evolutionary algorithm to solve complex optimization problems. This algorithm imitates the natural evolution process of endosymbionts, and is hence called the endosymbiotic evolutionary algorithm. Existing symbiotic algorithms take the strategy that the evolution of symbionts is separated from the host. In the natural world, prokaryotic cells that were originally independent organisms became combined into a eukaryotic cell. The basic idea of the proposed algorithm is the incorporation of the evolution of eukaryotic cells into the existing symbiotic algorithms. In the proposed algorithm, the formation and evolution of the endosymbionts is based on fitness, as this can increase the adaptability of the individuals and the search efficiency. In addition, a localized coevolutionary strategy is employed to maintain population diversity. Experimental results demonstrate that the proposed algorithm is a promising approach to solving complex problems that are composed of multiple sub-problems interrelated with each other.

Journal ArticleDOI
TL;DR: This paper shows the conversion of VHDL programs into a logical representation, a model that can be directly used by a model-based diagnosis engine for computing diagnoses, and presents some arguments showing that the proposed debugging technique scales up to large designs.
Abstract: In this paper we describe the use of model-based diagnosis for locating bugs in hardware designs. Nowadays hardware designs are written in a programming language. We restrict our view to hardware designs written in a subset of the commonly used hardware description language VHDL. This subset includes all synthesizable (register transfer level, RTL) programs, i.e., programs that can be automatically converted into a gate level representation. Therefore almost all VHDL programs are RTL programs. We show the conversion of VHDL programs into a logical representation. This representation is a model that can be directly used by a model-based diagnosis engine for computing diagnoses. The resulting diagnoses are mapped back to the VHDL code fragments of the original program explaining a misbehavior. In addition, we specify some rules optimizing the obtained results. We further present some arguments showing that the proposed debugging technique scales up to large designs.
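At the core of such an engine is consistency-based diagnosis: find minimal sets of components whose assumed abnormality restores consistency with the observations. A brute-force sketch, with `consistent` standing in for the logical model and theorem prover:

```python
from itertools import combinations

def diagnoses(components, consistent, max_size=2):
    """Enumerate minimal diagnoses: a diagnosis is a minimal set of
    components which, when assumed abnormal, makes the design model
    consistent with the observed misbehavior. `consistent(abnormal)` is
    the oracle supplied by the logical representation of the program."""
    found = []
    for k in range(max_size + 1):
        for delta in combinations(components, k):
            if any(set(d) <= set(delta) for d in found):
                continue                     # a subset already explains it
            if consistent(set(delta)):
                found.append(delta)
    return found
```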

Journal ArticleDOI
TL;DR: This paper describes the developed technique and the results of applying it in several domains, including distributed fault detection in an electrical power distribution network.
Abstract: This paper documents an early effort to develop an experimental, collaborative data analysis technique for learning classifiers from a collection of heterogeneous datasets distributed over a network. The proposed technique makes use of a scalable evolutionary algorithm, called the GEMGA to classify datasets. This paper describes the developed technique and the results of the use of this technique through the application of this system for several domains, including distributed fault detection in an electrical power distribution network.

Journal ArticleDOI
TL;DR: The proposed Gradient Forecasting Search Method (GFSM) can accurately predict the searching direction and trend of the gradient descent method via the universal DDEPM, and can adjust prediction steps dynamically using the golden section search algorithm.
Abstract: Optimization theory and methods profoundly impact numerous engineering designs and applications. The gradient descent method is simpler and more extensively used to solve numerous optimization problems than other search methods. However, the gradient descent method is easily trapped in a local minimum and converges slowly. This work presents a Gradient Forecasting Search Method (GFSM) for enhancing the performance of the gradient descent method in order to resolve optimization problems. GFSM is based on the gradient descent method and on the universal Discrete Difference Equation Prediction Model (DDEPM) proposed herein. The concept of the universal DDEPM is derived from the grey prediction model. The original grey prediction model uses a mathematical hypothesis and approximation to transform a continuous differential equation into a discrete difference equation. This is not a logical approach because the forecasting sequence data is invariably discrete. To construct a more precise prediction model, this work adopts a discrete difference equation. The GFSM proposed herein can accurately predict the searching direction and trend of the gradient descent method via the universal DDEPM, and can adjust prediction steps dynamically using the golden section search algorithm. Experimental results indicate that the proposed method can accelerate the searching speed of the gradient descent method as well as help it escape from local minima. Our results further demonstrate that applying the golden section search method to achieve dynamic prediction steps for the DDEPM is an efficient approach for this search algorithm.
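The golden-section component is standard and easy to show. The sketch below picks each descent step by golden-section line search over an assumed interval [0, 1]; the DDEPM forecast that GFSM layers on top is not reproduced here.

```python
def golden_section(f, a, b, tol=1e-5):
    """Standard golden-section search for a 1-D minimum on [a, b]."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def descent_with_line_search(f, grad, x0, iters=100):
    """Gradient descent whose step along the negative gradient is chosen by
    golden-section search each iteration."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        step = golden_section(
            lambda t: f([xi - t * gi for xi, gi in zip(x, g)]), 0.0, 1.0)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```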

Journal ArticleDOI
Lluís Godo1, Adriana Zapico1
TL;DR: This paper extends the original Dubois and Prade's decision model to cope with partially inconsistent descriptions of belief states, represented by non-normalised possibility distributions, and provides axiomatic characterizations of the preference orderings induced by these utility functions.
Abstract: A qualitative counterpart to Von Neumann and Morgenstern's Expected Utility Theory of decision under uncertainty was recently proposed by Dubois and Prade. In this model, belief states are represented by normalised possibility distributions over an ordinal scale of plausibility, and the utility (or preference) of consequences of decisions are also measured in an ordinal scale. In this paper we extend the original Dubois and Prade's decision model to cope with partially inconsistent descriptions of belief states, represented by non-normalised possibility distributions. Subnormal possibility distributions frequently arise when adopting the possibilistic model for case-based decision problems. We consider two qualitative utility functions, formally similar to the original ones up to modifying factors coping with the inconsistency degree of belief states. We provide axiomatic characterizations of the preference orderings induced by these utility functions.
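For orientation, the pessimistic qualitative utility of Dubois and Prade on a [0, 1] scale can be written as below; the paper's contribution is the modifying factors for sub-normalised (partially inconsistent) distributions, for which the plain renormalisation shown is only a crude stand-in.

```python
def pessimistic_utility(pi, u, worlds):
    """Pessimistic qualitative utility on a [0, 1] scale: a consequence
    lowers the utility only if it is both plausible (high pi) and bad
    (low u). When pi is sub-normalised, h < 1 is the consistency degree;
    dividing by h is one crude stand-in for the paper's modifying factors."""
    h = max(pi(w) for w in worlds)
    return min(max(1.0 - pi(w) / h, u(w)) for w in worlds)
```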

Journal ArticleDOI
TL;DR: The innovative approach behind the CASCADE authoring system is described, which allows case authors to interact with, and be guided by, a model of case competence through a variety of novel visualisation tools, and it is argued that this mode of interaction facilitates the more rapid development of high quality case bases.
Abstract: Case-based reasoning (CBR) offers many opportunities for human interaction as part of its reasoning cycle. In particular, one of the main advantages of case-based methods is their use of real case data, the sort of data that humans are intrinsically comfortable with—this is typically in contrast to the rule-based and model-based knowledge of more traditional first-principles reasoning systems. As a result, human participation has been a key factor in a number of case-based systems, particularly when it comes to assisting in the retrieval and adaptation processes. In this article we consider the case authoring process and note that, although the authoring process has always been driven by human involvement, it is probably the least well developed CBR process when it comes to offering real-time assistance to the human author. Many conventional CBR authoring tools provide editing and auditing facilities only. In this article we describe the innovative approach behind the CASCADE authoring system, which allows case authors to interact with, and be guided by, a model of case competence through a variety of novel visualisation tools. We argue that this mode of interaction facilitates the more rapid development of high quality case bases.
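The abstract does not detail CASCADE's competence model, but a well-known model of case competence, Smyth and McKenna's coverage and reachability sets, shows the kind of signal such a tool can visualise for an author:

```python
def coverage(case, case_base, solves):
    """Cases the candidate case can solve via retrieval plus adaptation."""
    return {c for c in case_base if solves(case, c)}

def reachability(case, case_base, solves):
    """Cases that can solve the candidate case."""
    return {c for c in case_base if solves(c, case)}

def rank_by_competence(case_base, solves):
    """Rank cases for an author: broad coverage with few alternatives
    (small reachability) suggests a pivotal case worth keeping; the
    opposite suggests redundancy."""
    def score(c):
        return len(coverage(c, case_base, solves)) / max(1, len(reachability(c, case_base, solves)))
    return sorted(case_base, key=score, reverse=True)
```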

Journal ArticleDOI
TL;DR: In this paper, the DESIRE framework is used to develop a conceptual specification of the simple agents discussed by A. Cesta, M. Miceli and P. Rizzo.
Abstract: Much research concerning the design of multi-agent systems (at a conceptual level) addresses complex agents that exhibit complex interaction patterns. Due to this complexity, it is difficult to perform rigorous experimentation. On the other hand, systematic experimental work regarding the behaviour of societies of more simple agents, while reporting valuable results, often lacks conceptual specification of the system under consideration. In this paper, the compositional multi-agent modelling framework DESIRE is not only successfully used to develop a conceptual specification of the simple agents discussed by A. Cesta, M. Miceli and P. Rizzo (Lecture Notes in Artificial Intelligence, vol. 1038, Springer-Verlag: Berlin, pp. 128–138, 1996), but also to simulate the behaviour in a dynamical environment. In the DESIRE framework, a conceptual specification, which provides a high-level view of an agent, has enough detail for automatic prototype generation. The prototype implementation of the conceptual specification of the simple agents has been used to replicate, and extend, one of the experiments reported by Cesta et al.

Journal ArticleDOI
TL;DR: A conditional logic, DL, is presented that is suitable for diagnostic reasoning with component-oriented device models and allows assumptions to be represented and reasoned with.
Abstract: The model-based diagnostic approach was first introduced to overcome the limitations of heuristic systems. However, research on model-based systems showed that model-based diagnosis approaches resort to assumptions that can be viewed as the return, though controlled, of heuristics into diagnostic reasoning. In this paper we focus on diagnosis with component-oriented device models. We argue for the need to represent and reason with these assumptions. We present a conditional logic, DL, that is suitable for diagnostic reasoning and allows us to represent and reason with assumptions.

Journal ArticleDOI
TL;DR: A generalisation of the A* search algorithm is described which uses a way of partially ordering solutions satisfying a set of prioritised soft constraints and it is proved that under certain reasonable assumptions the algorithm is complete and optimal.
Abstract: This paper addresses two issues: how to choose between solutions for a problem specified by multiple criteria, and how to search for solutions in such situations. We argue against an approach common in decision theory, reducing several criteria to a single ‘cost’ (e.g., using a weighted sum cost function), and instead propose a way of partially ordering solutions satisfying a set of prioritised soft constraints. We describe a generalisation of the A* search algorithm which uses this ordering and prove that under certain reasonable assumptions the algorithm is complete and optimal.
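The bookkeeping behind such a partial order can be sketched with dominance over violation vectors. This illustrates the ordering only, not the paper's A*-style algorithm with its heuristic and optimality guarantees; it also assumes a finite, acyclic search space.

```python
import heapq, itertools

def dominates(u, v):
    """u and v are tuples of constraint-violation degrees, most important
    constraint first; u dominates v when it is nowhere worse and somewhere
    strictly better."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated_search(start, expand, is_goal, violations):
    """Best-first search returning all goal states whose violation vectors
    are mutually non-dominated, rather than a single 'cheapest' solution."""
    tick = itertools.count()   # tie-breaker so the heap never compares states
    frontier = [(violations(start), next(tick), start)]
    solutions = []
    while frontier:
        vec, _, state = heapq.heappop(frontier)
        if any(dominates(v, vec) for v, _ in solutions):
            continue                     # already beaten on every criterion
        if is_goal(state):
            solutions = [(v, s) for v, s in solutions if not dominates(vec, v)]
            solutions.append((vec, state))
            continue
        for nxt in expand(state):
            heapq.heappush(frontier, (violations(nxt), next(tick), nxt))
    return solutions
```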

Journal ArticleDOI
TL;DR: It is shown how the architecture can be exploited to design an intelligent Website for insurance, developed in co-operation with the software company Ordina Utopics and an insurance company.
Abstract: In this paper a reusable multi-agent architecture for intelligent Websites is presented and illustrated for an electronic department store. The architecture has been designed and implemented using the compositional design method for multi-agent systems DESIRE. The agents within this architecture are based on a generic information broker agent model. It is shown how the architecture can be exploited to design an intelligent Website for insurance, developed in co-operation with the software company Ordina Utopics and an insurance company.

Journal ArticleDOI
TL;DR: A decision-theoretic strategy for surveillance is introduced as a first step towards automating the planning of the movement of an autonomous surveillance robot, and is compared with other proposed strategies.
Abstract: In this paper, we introduce a decision-theoretic strategy for surveillance as a first step towards automating the planning of the movement of an autonomous surveillance robot. In our opinion, this particular application is interesting in its own right, but it also provides a test-case for formalisms aimed at dealing both with (low-level) sensor, localisation, and navigation uncertainty and with uncertainty at a more abstract planning level. After a brief discussion of our view on surveillance, we describe a very simple formal model of an environment in which the surveillance task has to be performed. We use this model to illustrate our decision-theoretic strategy and to compare this strategy with other proposed strategies. We treat several simple examples and obtain some general results.

Journal ArticleDOI
TL;DR: An overview of optical neural networks is presented, with emphasis on holographic neural networks, covering the mathematical basis of holography in terms of the Fresnel Zone Plate and how it can be utilized in making computer generated holograms (CGHs).
Abstract: While numerous artificial neural network (ANN) models have been electronically implemented and simulated by conventional computers, optical technology provides a far superior mechanism for the implementation of large-scale ANNs. The properties of light make it an ideal carrier of data signals. With optics, very large and high speed neural network architectures are possible. Because light is a predictable phenomenon, it can be described mathematically and its behavior can be simulated by conventional computers. A hologram is in essence a capture of the light field at a particular moment in time and space. Later, the hologram can be used to reconstruct the three dimensional light field carrying optical data. This makes a hologram an ideal medium for capturing, storing, and transmitting data in optical computers, such as optical neural networks (ONNs). Holograms can be created using conventional methods, but they can also be computer generated. In this paper, we will present an overview of optical neural networks, with emphasis on the holographic neural networks. We will take a look at the mathematical basis of holography in terms of the Fresnel Zone Plate and how it can be utilized in making computer generated holograms (CGHs). Finally, we will present various methods of CGH implementation in a two layer holographic ONN.
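The Fresnel Zone Plate that the overview builds on is simple to generate numerically. A minimal sketch under the usual paraxial (Fresnel) approximation; the grid size, wavelength, focal length, and pixel pitch are arbitrary example values.

```python
import numpy as np

def fresnel_zone_plate(size=512, wavelength=633e-9, focal=0.1, pitch=10e-6):
    """Binary Fresnel zone plate transmittance: a point source at focal
    distance f produces concentric fringes wherever the quadratic path
    difference r^2 / (2f) crosses multiples of the wavelength."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half] * pitch   # physical coordinates
    r2 = x**2 + y**2
    phase = np.pi * r2 / (wavelength * focal)         # Fresnel approximation
    return (np.cos(phase) > 0).astype(np.uint8)       # binarised CGH pattern
```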