
Showing papers on "Applications of artificial intelligence" published in 2004


Book
01 Feb 2004
TL;DR: The author revealed that genetic algorithms in the multi-objective optimisation of fault detection observers resulted in a significant reduction in the number of errors in diagnostic systems.
Abstract: 1. Introduction.- 2. Models in the diagnostics of processes.- 3. Process diagnostics methodology.- 4. Methods of signal analysis.- 5. Control theory methods in designing diagnostic systems.- 6. Optimal detection observers based on eigenstructure assignment.- 7. Robust H∞-optimal synthesis of FDI systems.- 8. Evolutionary methods in designing diagnostic systems.- 9. Artificial neural networks in fault diagnosis.- 10. Parametric and neural network Wiener and Hammerstein models in fault detection and isolation.- 11. Application of fuzzy logic to diagnostics.- 12. Observers and genetic programming in the identification and fault diagnosis of non-linear dynamic systems.- 13. Genetic algorithms in the multi-objective optimisation of fault detection observers.- 14. Pattern recognition approach to fault diagnostics.- 15. Expert systems in technical diagnostics.- 16. Selected methods of knowledge engineering in systems diagnosis.- 17. Methods of acquisition of diagnostic knowledge.- 18. State monitoring algorithms for complex dynamic systems.- 19. Diagnostics of industrial processes in decentralised structures.- 20. Detection and isolation of manoeuvres in adaptive tracking filtering based on multiple model switching.- 21. Detecting and locating leaks in transmission pipelines.- 22. Models in the diagnostics of processes.- 23. Diagnostic systems.

356 citations


Journal ArticleDOI
TL;DR: The artificial neural network was the most commonly used analytical tool, whilst other artificial intelligence techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.
Abstract: INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and outcome prediction in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligence techniques is presented in this paper along with a review of important clinical applications. RESULTS: The proficiency of artificial intelligence techniques has been explored in almost every field of medicine. The artificial neural network was the most commonly used analytical tool, whilst other artificial intelligence techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings. DISCUSSION: Artificial intelligence techniques have the potential to be applied in almost every field of medicine. There is a need for appropriately designed clinical trials before these emergent techniques find application in real clinical settings.

269 citations


Book
23 Jul 2004
TL;DR: AI for Game Developers introduces you to techniques such as finite state machines, fuzzy logic, neural networks, and many others, in straightforward, easy-to-understand language, supported with code samples throughout the entire book (written in C/C++).
Abstract: Advances in 3D visualization and physics-based simulation technology make it possible for game developers to create compelling, visually immersive gaming environments that were only dreamed of years ago. But today's game players have grown in sophistication along with the games they play. It's no longer enough to wow your players with dazzling graphics; the next step in creating even more immersive games is improved artificial intelligence, or AI. Fortunately, advanced AI game techniques are within the grasp of every game developer--not just those who dedicate their careers to AI. If you're new to game programming or if you're an experienced game programmer who needs to get up to speed quickly on AI techniques, you'll find AI for Game Developers to be the perfect starting point for understanding and applying AI techniques to your games. Written for the novice AI programmer, AI for Game Developers introduces you to techniques such as finite state machines, fuzzy logic, neural networks, and many others, in straightforward, easy-to-understand language, supported with code samples throughout the entire book (written in C/C++). From basic techniques such as chasing and evading, pattern movement, and flocking to genetic algorithms, the book presents a mix of deterministic (traditional) and non-deterministic (newer) AI techniques aimed squarely at beginner AI developers. Other topics covered in the book include:
- Potential function-based movement: a technique that handles chasing, evading, swarming, and collision avoidance simultaneously
- Basic pathfinding and waypoints, including an entire chapter devoted to the A* pathfinding algorithm
- AI scripting
- Rule-based AI: learn about variants other than fuzzy logic and finite state machines
- Basic probability
- Bayesian techniques
Unlike other books on the subject, AI for Game Developers doesn't attempt to cover every aspect of game AI, but to provide you with usable, advanced techniques you can apply to your games right now.
If you've wanted to use AI to extend the play-life of your games, make them more challenging, and most importantly, make them more fun, then this book is for you.
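The state-based behavior selection such books teach can be sketched in a few lines. Here is a minimal sketch in Python (the book's own samples are in C/C++); the behavior names and thresholds below are invented for illustration, not taken from the book.

```python
# Hypothetical agent behaviors chosen by simple threshold rules, in
# the spirit of the book's finite-state-machine and chasing/evading
# chapters. States and thresholds are illustrative assumptions.

def select_behavior(distance_to_player, health):
    """Pick an agent behavior from simple threshold rules."""
    if health < 20:
        return "evade"              # low health: run away
    if distance_to_player < 10:
        return "attack"             # close enough to engage
    if distance_to_player < 50:
        return "chase"              # player spotted: close the gap
    return "patrol"                 # nothing nearby: default behavior

for dist, hp in [(60, 100), (40, 100), (8, 100), (8, 15)]:
    print((dist, hp), "->", select_behavior(dist, hp))
```

A real game agent would keep the current state and restrict which transitions are legal from it; this stateless selector is the smallest version of the idea.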

183 citations


Journal ArticleDOI
TL;DR: Pegasus, an AI planning system which is integrated into the grid environment that takes a user's highly specified desired results, generates valid workflows that take into account available resources, and submits the workflows for execution on the grid.
Abstract: A key challenge for grid computing is creating large-scale, end-to-end scientific applications that draw from pools of specialized scientific components to derive elaborate new results. We develop Pegasus, an AI planning system integrated into the grid environment; it takes a user's highly specified desired results, generates valid workflows that take into account available resources, and submits the workflows for execution on the grid. We are also beginning to extend it toward a more distributed and knowledge-rich architecture.

170 citations


Proceedings Article
25 Jul 2004
TL;DR: The AI architecture and associated explanation capability used by Full Spectrum Command, a training system developed for the U.S. Army by commercial game developers and academic researchers are described.
Abstract: As the artificial intelligence (AI) systems in military simulations and computer games become more complex, their actions become increasingly difficult for users to understand. Expert systems for medical diagnosis have addressed this challenge though the addition of explanation generation systems that explain a system's internal processes. This paper describes the AI architecture and associated explanation capability used by Full Spectrum Command, a training system developed for the U.S. Army by commercial game developers and academic researchers.

168 citations


Journal ArticleDOI
TL;DR: The quality of AI (artificial intelligence) is a high-ranking feature for game fans in making their purchase decisions and an area with incredible potential to increase players’ immersion and fun.
Abstract: If you’ve been following the game development scene, you’ve probably heard many remarks such as: "The main role of graphics in computer games will soon be over; artificial intelligence is the next big thing!" Although you should hardly buy into such statements, there is some truth in them. The quality of AI (artificial intelligence) is a high-ranking feature for game fans in making their purchase decisions and an area with incredible potential to increase players’ immersion and fun.

105 citations


01 Jan 2004
TL;DR: This position paper discusses AI challenges in the area of real-time strategy games and presents a research agenda aimed at improving AI performance in these popular multi-player computer games.
Abstract: This position paper discusses AI challenges in the area of real-time strategy games and presents a research agenda aimed at improving AI performance in these popular multi-player computer games.

83 citations


Proceedings ArticleDOI
20 Sep 2004
TL;DR: This paper presents a Web-based intelligent tutoring system for computer programming that can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.
Abstract: Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.
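The Bayesian-network decision making described above can be illustrated with a toy two-node network. The network structure, prior, and likelihoods below are invented for illustration; they are not taken from the paper.

```python
# Toy two-node network: Mastery -> CorrectAnswer. A tutoring system of
# this kind updates its belief in a student's mastery from observed
# answers. All probabilities here are illustrative assumptions.

p_mastery = 0.5                      # prior belief the student mastered the topic
p_correct = {True: 0.9, False: 0.2}  # P(correct answer | mastery state)

def posterior_mastery(answered_correctly):
    """P(mastery | observed answer), by Bayes' rule."""
    joint = {}
    for mastered in (True, False):
        prior = p_mastery if mastered else 1 - p_mastery
        like = (p_correct[mastered] if answered_correctly
                else 1 - p_correct[mastered])
        joint[mastered] = prior * like
    return joint[True] / (joint[True] + joint[False])

print(round(posterior_mastery(True), 3))   # belief rises after a correct answer
print(round(posterior_mastery(False), 3))  # and falls after a wrong one
```

A full student model chains many such nodes (one per concept, with prerequisite links) and uses the posteriors to pick the next reading or exercise.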

68 citations


Proceedings ArticleDOI
02 Sep 2004
TL;DR: In this paper, a First Person Shooter Artificial Intelligence system that makes use of machine learning capabilities to achieve more human-like behavior and strategies was developed and tested in the Quake 3 Arena game engine.
Abstract: This paper presents a First Person Shooter Artificial Intelligence system that makes use of machine learning capabilities to achieve more human-like behavior and strategies. The AI is trained with a supervised learning paradigm using examples recorded during the observation of expert human players. The Machine Learning section of the AI is based on various Feed Forward Multi-layer Neural Networks trained by Genetic Algorithms. The AI system is developed and tested in the Quake 3 Arena game engine. The system is able to learn certain behaviors but still lacks in others. The results are evaluated and possible improvements are proposed.
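The core idea, evolving a feed-forward network's weights with a genetic algorithm instead of backpropagation, can be sketched minimally. The network size, GA parameters, and toy XOR task below are illustrative assumptions; the paper's networks, inputs, and GA operators are far richer.

```python
import math
import random

# A tiny 2-2-1 feed-forward net whose 9 weights are evolved by a
# simple genetic algorithm (elitist selection plus Gaussian mutation)
# on a toy XOR task. Everything here is a sketch of the technique,
# not the paper's implementation.

random.seed(0)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(w, x):
    """Evaluate the net; w holds all 9 weights and biases."""
    s = lambda v: 1 / (1 + math.exp(-max(-60.0, min(60.0, v))))
    h1 = s(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = s(w[3] * x[0] + w[4] * x[1] + w[5])
    return s(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(40)]
initial_best = max(map(fitness, pop))
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                          # keep the fittest (elitism)
    pop = parents + [[g + random.gauss(0, 0.3)  # mutated copy of a parent
                      for g in random.choice(parents)]
                     for _ in range(30)]
best = max(pop, key=fitness)
print([round(forward(best, x)) for x, _ in DATA])  # ideally [0, 1, 1, 0]
```

In the paper's setting, fitness would instead come from how closely the network's actions match the recorded expert play, and crossover would typically be used alongside mutation.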

48 citations


Journal ArticleDOI
Amruth N. Kumar1
01 Sep 2004
TL;DR: The results of the evaluation of robots used for open-laboratory projects are discussed and the lessons learned are listed.
Abstract: We have been using robots in our artificial intelligence course since fall 2000. We have been using the robots for open-laboratory projects. The projects are designed to emphasize high-level knowledge-based AI algorithms. After three offerings of the course, we paused to analyze the collected data and to see if we could answer the following questions: (i) Are robot projects effective at helping students learn AI concepts? (ii) What advantages, if any, can be attributed to using robots for AI projects? (iii) What are the downsides of using robots for traditional projects in AI? In this article we discuss the results of our evaluation and list the lessons learned.

48 citations



Journal ArticleDOI
TL;DR: A temporal model, TemPro, is developed, based on the Allen interval algebra, to express and manage time information in terms of qualitative and quantitative temporal constraints; the results demonstrate the efficiency of the MCRW approximation method on under-constrained and middle-constrained problems, while Tabu Search and SDRW are the methods of choice for over-constrained problems.
Abstract: Representing and reasoning about time is fundamental in many applications of Artificial Intelligence as well as of other disciplines in computer science, such as scheduling, planning, computational linguistics, database design and molecular biology. The development of a domain-independent temporal reasoning system is then practically important. An important issue when designing such systems is the efficient handling of qualitative and metric time information. We have developed a temporal model, TemPro, based on the Allen interval algebra, to express and manage such information in terms of qualitative and quantitative temporal constraints. TemPro translates an application involving temporal information into a Constraint Satisfaction Problem (CSP). Constraint satisfaction techniques are then used to manage the different time information by solving the CSP. In order for the system to deal with real-time applications, or those applications where it is impossible or impractical to solve these problems completely, we have studied different methods capable of trading search time for solution quality when solving the temporal CSP. These methods are exact and approximation algorithms based respectively on constraint satisfaction techniques and local search. Experimental tests were performed on randomly generated temporal constraint problems as well as on scheduling problems in order to compare and evaluate the performance of the different methods we propose. The results demonstrate the efficiency of the MCRW approximation method on under-constrained and middle-constrained problems, while Tabu Search and SDRW are the methods of choice for over-constrained problems.
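The qualitative side of such a system rests on Allen's interval algebra, which distinguishes 13 possible relations between two time intervals. A minimal sketch classifying the relation between two concrete intervals (the relation names follow Allen's algebra; the code is illustrative, not TemPro's):

```python
# Intervals are (start, end) pairs with start < end; relation names
# follow Allen's 13 interval relations. Illustrative only.

def allen_relation(a, b):
    """Return which of Allen's 13 relations holds between a and b."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    # only proper overlaps remain at this point
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((2, 4), (1, 6)))  # during
```

A temporal CSP then constrains which of these relations may hold between each pair of intervals, and a solver searches for an assignment of concrete intervals consistent with every constraint.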

Book
14 Dec 2004
TL;DR: The result of the selection of papers presented at a special session entitled "Applications of Artificial Intelligence in Economics and Finance" at the '2003 International Conference on Artificial Intelligence' is presented in this article.
Abstract: This volume presents a selection of papers from a special session entitled 'Applications of Artificial Intelligence in Economics and Finance' at the '2003 International Conference on Artificial Intelligence'. It will appeal to economists interested in adopting an interdisciplinary approach to the study of economic problems.

Journal ArticleDOI
TL;DR: This Special Issue attempts to cover the different aspects of platform product development, which identifies a fertile area not only for applying AI to contemporary issues of design and manufacturing systems but also for enriching the methodology for developing AI systems in manufacturing enterprises.
Abstract: Platform product development is a contemporary approach to agile product development for mass customization. Distinctive product variants are derived or customized from a platform that is defined as the components and subsystems commonly shared across a product family. A well-organized platform is essential to connect the different parties of an enterprise, from soliciting customer needs through order fulfillment to field service. Hence, it is also critical to achieving economies of scale by identifying repetitive applications of shared tooling, knowledge, and other resources. In this Special Issue, we attempt to cover the different aspects of platform product development. The topics span a wide range of artificial intelligence (AI) disciplines, from knowledge representation to knowledge support systems. They identify a fertile area not only for applying AI to contemporary issues of design and manufacturing systems but also for enriching the methodology for developing AI systems in manufacturing enterprises. Indeed, we consider that the numerous successful applications in the industrial sectors highlight a future direction for productive AI applications in industry.

Proceedings ArticleDOI
21 Nov 2004
TL;DR: This paper investigates the use of AI in game development with a focus on an intelligent camera system and path-finding in a 3D application.
Abstract: The role of artificial intelligence (AI) in games is gaining importance and often affects the success or failure of a game. In this paper, we investigate the use of AI in game development. Research is done on how AI can be applied in games, and the advantages it brings along. As the fields of AI in game development are too wide to be covered, the focus of this project is placed on certain areas. Two programs are implemented through this project - (i) an intelligent camera system, and (ii) path-finding in a 3D application.

07 Mar 2004
TL;DR: The paper develops a terminology to describe sensors which have been enhanced in some way by the integration of some additional processing circuitry, including 'smart sensors' and 'intelligent sensors', and the 'cogent' sensor is introduced.
Abstract: The paper develops a terminology to describe sensors which have been enhanced in some way by the integration of some additional processing circuitry. Several terms, current in the literature, including 'smart sensors' and 'intelligent sensors' are discussed and the 'cogent' sensor is introduced. This is followed by a brief review of existing and potential applications of Artificial Intelligence (AI) to microsystems, in terms of technology integration, device level performance enhancement and system level added functionality.

Proceedings ArticleDOI
26 Aug 2004
TL;DR: A concept of an extended discernibility matrix is introduced, together with the ROUSTIDA algorithm, based on rough set theory, for analyzing incomplete data.
Abstract: The "rough set" approach is an important tool for processing uncertain or vague knowledge in AI applications. In this paper, rough set theory is investigated in depth and a concept of an extended discernibility matrix is introduced, which is used by the ROUSTIDA algorithm, based on rough set theory, to analyze incomplete data. The advantage of this algorithm is that it uses only the information given by the processed data and does not rely on other model assumptions. Experimental results show that this algorithm is efficient, comprehensible and suitable as a pre-processing step before a data-mining method is employed.
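The discernibility matrix at the core of the rough-set machinery can be sketched for a complete decision table. The attribute names and values below are invented; the paper's extension to incomplete data and the ROUSTIDA algorithm are not reproduced here.

```python
# A toy decision table and its discernibility matrix: for each pair of
# objects with different decisions, record the condition attributes on
# which the two objects differ. Reducts and decision rules are then
# derived from these sets.

ATTRS = ["a", "b", "c"]
TABLE = [  # (condition attribute values, decision)
    ((1, 0, 1), "yes"),
    ((1, 1, 0), "no"),
    ((0, 0, 1), "yes"),
]

def discernibility_matrix(table):
    """Map each pair of differently-decided objects to the attributes
    that discern them."""
    matrix = {}
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            (ci, di), (cj, dj) = table[i], table[j]
            if di != dj:
                matrix[(i, j)] = {ATTRS[k] for k in range(len(ATTRS))
                                  if ci[k] != cj[k]}
    return matrix

print(discernibility_matrix(TABLE))
```

With incomplete data, some attribute values are unknown, so entries like these cannot be computed directly; that is the gap the paper's extended matrix addresses.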

Proceedings ArticleDOI
02 Sep 2004
TL;DR: The paper develops a terminology to describe sensors which have been enhanced in some way by the integration of some additional processing circuitry, including 'smart sensors' and 'intelligent sensors', and the 'cogent sensor' is introduced.
Abstract: The paper develops a terminology to describe sensors which have been enhanced in some way by the integration of some additional processing circuitry. Several terms, current in the literature, including 'smart sensors' and 'intelligent sensors' are discussed and the 'cogent sensor' is introduced. This is followed by a brief review of existing and potential applications of artificial intelligence (AI) to microsensors, in terms of technology integration, device level performance enhancement and system level added functionality. Examples of AI applications to the design of smart, intelligent and cogent microsystems include sensor data validation, correction and missing data restoration, sensor fault detection, intelligent actuation and information inference from sensor data. Hardware implementations of ANN to support the functions above are mentioned and the future of AI for sensors is discussed.

Proceedings Article
01 Jan 2004
TL;DR: This paper provides a review of the work related to the areas of dynamic modelling and link prediction of social networks, and anomaly detection for detecting changes in the behaviour of e-mail usage.
Abstract: E-mail is one of the most popular and widely used forms of electronic communication today. The patterns in the social interactions or contacts between people by e-mail can be analysed using social network analysis and user behaviour analysis. In this paper we provide a review of work related to the areas of dynamic modelling and link prediction of social networks, and anomaly detection for detecting changes in the behaviour of e-mail usage. We then discuss the benefits of applying artificial intelligence techniques to these fields.

Journal ArticleDOI
TL;DR: Growing reliance on automated means of accessing information brings an increase in indexing and a corresponding decrease in classification, which brings about a shift from the modernist view of the world as permanently and hierarchically structured to the indeterminacy and contingency associated with postmodernism.
Abstract: To classify is to organize the particulars in a body of information according to some meaningful scheme. Difficulty recognizing metaphor, synonyms and homonyms, and levels of generalization renders those applications of artificial intelligence that are currently in widespread use at a loss to deal effectively with classification. Indexing conveys nothing about relationships; it pinpoints information on particular topics without reference to anything else. Keyword searching is a form of indexing, and here artificial intelligence excels. Growing reliance on automated means of accessing information brings an increase in indexing and a corresponding decrease in classification. This brings about a shift from the modernist view of the world as permanently and hierarchically structured to the indeterminacy and contingency associated with postmodernism.
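The keyword indexing the article describes amounts to an inverted index: each term points at the documents containing it, with no relationships among terms. A minimal sketch with invented document snippets:

```python
# Three invented document snippets. Each term maps to the set of
# documents containing it -- pure indexing: it pinpoints topics but
# says nothing about how topics relate, which is the article's point.

docs = {
    1: "neural networks in fault diagnosis",
    2: "fuzzy logic in diagnostics",
    3: "neural networks for games",
}

index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

print(sorted(index["neural"]))  # -> [1, 3]
```

Note that nothing here relates "diagnosis" to "diagnostics" or places either under a broader category; that is exactly the classification work the article argues such indexing cannot do.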

BookDOI
01 Jan 2004
TL;DR: The Data Broker Framework is described, which is designed to automate the process of digital object acquisition, and NOODLE (Negotiation OntOlogy Description LanguagE) is introduced to formally specify terms in the negotiation domain.
Abstract: Collecting digital materials is time-consuming and can gain from automation. Since each source – and even each acquisition – may involve a separate negotiation of terms, a collector may prefer to use a broker to represent his interests with owners. This paper describes the Data Broker Framework (DBF), which is designed to automate the process of digital object acquisition. For each acquisition, a negotiation agent is assigned to negotiate on the collector’s behalf, choosing from strategies in a strategy pool to automatically handle most bargaining cases and decide what to accept and what counteroffers to propose. We introduce NOODLE (Negotiation OntOlogy Description LanguagE) to formally specify terms in the negotiation domain.

01 Jan 2004
TL;DR: The results from the case studies show that opposite reference behavior, either constructive or disruptive, could be a result for different programs, and care must be taken to make sure the disruptive memory references will not outweigh the benefit of parallelization.
Abstract: Memory performance has become a dominant factor for today’s microprocessor applications. In this paper, we study the memory reference behavior of emerging multimedia and AI applications. We compare memory performance for sequential and multithreaded versions of the applications on multithreaded processors. The methodology we used includes workload selection and parallelization, benchmarking and measurement, memory trace collection and verification, and trace-driven memory performance simulations. The results from the case studies show that opposite reference behavior, either constructive or disruptive, can result for different programs. Care must be taken to make sure the disruptive memory references do not outweigh the benefit of parallelization.

Journal Article
TL;DR: The approach to building, updating and maintaining large Bayes net models is described, based on the implementation of the Semistructured Probabilistic Database Management System (SPDBMS) that provides us with robust storage and retrieval mechanisms for large quantities of probability distributions.
Abstract: Bayes nets appear in many Artificial Intelligence applications that model stochastic processes. Efficiently building Bayes nets is crucial to the applications. In this paper we describe our approach to building, updating and maintaining large Bayes net models. This approach is based on our implementation of the Semistructured Probabilistic Database Management System (SPDBMS) that provides us with robust storage and retrieval mechanisms for large quantities of probability distributions. On top of SPDBMS, we build client applications designed to deal with specific sub-tasks within the model construction problem. The two applications described here are the Bayes Net Builder (BNB) that allows knowledge engineers to describe the structure of the Bayes Net model, and the Probability Elicitation Tool (PET) designed to elicit conditional probability distributions from the domain experts.

Proceedings ArticleDOI
13 Aug 2004
TL;DR: This talk will address several of those issues, possible solutions, and currently unsolved problems to join the needs of AI in computer wargames with the solutions of current AI technologies.
Abstract: Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much deeper about military situations before taking action. A good AI opponent would also include national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.

Journal ArticleDOI
TL;DR: A methodology for using rough sets for preference modeling in decision problems is presented, introducing a new approach for deriving knowledge rules from databases based on rough sets combined with genetic programming.
Abstract: A methodology for using rough sets for preference modeling in decision problems is presented in this paper, where we introduce a new approach for deriving knowledge rules from databases based on rough sets combined with genetic programming. Genetic programming is among the newest techniques in applications of artificial intelligence. Rough set theory, which emerged about 20 years ago, is nowadays a rapidly developing branch of artificial intelligence and soft computing. At first glance, the two methodologies have little in common: rough sets represent knowledge in terms of attributes, semantic decision rules, and the like, whereas genetic programming attempts to automatically create computer programs from a high-level statement of the problem requirements. But in spite of these differences, it is interesting to try to incorporate both approaches into a combined system. The challenge is to obtain as much as possible from this association.

Journal ArticleDOI
TL;DR: Explicit symbolic logic has faded from prominence, but the close coupling of AI and the digital computer, and of thought and the stepwise algorithm, seem about as strong and unquestioned as ever.
Abstract: Explicit symbolic logic has faded from prominence, but the close coupling of AI and the digital computer, and of thought and the stepwise algorithm, seem about as strong and unquestioned as ever. Of course there's connectionism, but this too is mired in false assumptions that date back a long way. And it seems to have dragged neuroscience down with it to the extent that we now seem unable to think about real brains without resorting to models that owe too much of their inspiration to the three-layer perceptron. Traditional AI has excelled at solving certain kinds of problems. It can make systems that learn but not in any generally applicable way. AI is about making machines do what humans use intelligence to do, and often this doesn't actually require the machines to show any intelligence at all. But for many tasks, especially in robotics, the ability to see, learn, and perform complex motor actions is a prerequisite that the traditional approach has utterly failed to fulfill.

Proceedings Article
06 Aug 2004
TL;DR: The formalism used for defining the paradigm of multi-representation ontology is introduced and the manifestation of this paradigm with Enterprise Information Systems is shown.
Abstract: In the last decade, ontologies as shared common vocabularies have played a major role in many AI applications and in information integration for heterogeneous, distributed systems. The problems of integrating and developing information systems and databases in heterogeneous, distributed environments have been cast, in technical terms, as system interoperability. Ontologies are foreseen to play a key role in partially resolving the semantic conflicts and differences that exist among systems. Domain ontologies are constructed by capturing a set of concepts and their links according to various criteria such as the abstraction paradigm, the granularity scale, the interests of user communities, and the perception of the ontology developer. Thus, different applications of the same domain end up having several representations of the same real-world phenomenon. A multi-representation ontology is an ontology (or ontologies) that characterizes an ontological concept by a variable set of properties (static and dynamic) or attributes in several contexts and/or at several scales of granularity. This paper introduces the formalism used for defining the paradigm of multi-representation ontology and shows the manifestation of this paradigm in Enterprise Information Systems.

Book ChapterDOI
01 Jan 2004
TL;DR: If a system is too complicated or demands too high a level of maintenance, then it will probably fail in most commercial or government environments.
Abstract: Simplicity and effectiveness guide commercial and government applications of Artificial Intelligence, with a strong emphasis on the simplicity. Basically, if a system is too complicated or demands too high a level of maintenance, then it will probably fail in most commercial or government environments. Furthermore, if a moderate level of understanding of the solution ‘engine’ is required in order to interpret the results, then it will also probably fail in most commercial or government environments.


Journal Article
TL;DR: The author has devised a technique for converting human language to and from a compact byte-coded intermediate representation, which is processed more easily by computer systems.
Abstract: Natural human languages have proven to be sub-optimal in artificial intelligence applications because of their tendency toward inexact representation of meaning. The author has devised a technique for converting human language to and from a compact byte-coded intermediate representation, which is processed more easily by computer systems. A specialized lexical engine based on IEEE Standard 1275-1994 was created to embed redundant information invisibly within the byte-coded text stream, to enable use of a variety of alphabets, grammars, and pronunciation rules (including slang and regional dialects). Very large vocabularies in a variety of human languages are supported. These lexical tools are designed to facilitate speech recognition and speech synthesis subsystems, universal translators and machine intelligence systems.