
Showing papers on "Applications of artificial intelligence" published in 2000


Journal ArticleDOI
TL;DR: This text covers all the material needed to understand the principles behind the AI approach to robotics and to program an artificially intelligent robot for applications involving sensing, navigation, planning, and uncertainty.
Abstract: From the Publisher: This text covers all the material needed to understand the principles behind the AI approach to robotics and to program an artificially intelligent robot for applications involving sensing, navigation, planning, and uncertainty. Robin Murphy is extremely effective at combining theoretical and practical rigor with a light narrative touch. In the overview, for example, she touches upon anthropomorphic robots from classic films and science fiction stories before delving into the nuts and bolts of organizing intelligence in robots. Following the overview, Murphy contrasts AI and engineering approaches and discusses what she calls the three paradigms of AI robotics: hierarchical, reactive, and hybrid deliberative/reactive. Later chapters explore multiagent scenarios, navigation and path-planning for mobile robots, and the basics of computer vision and range sensing. Each chapter includes objectives, review questions, and exercises. Many chapters contain one or more case studies showing how the concepts were implemented on real robots. Murphy, who is well known for her classroom teaching, conveys the intellectual adventure of mastering complex theoretical and technical material.

1,019 citations


Proceedings Article
30 Jul 2000
TL;DR: In this article, the authors propose using interactive computer games for AI research, review previous research on AI and games, and present the different game genres and the roles that human-level AI could play within these genres.
Abstract: Although one of the fundamental goals of AI is to understand and develop intelligent systems that have all the capabilities of humans, there is little active research directly pursuing this goal. We propose that AI for interactive computer games is an emerging application area in which this goal of human-level AI can successfully be pursued. Interactive computer games have increasingly complex and realistic worlds and increasingly complex and intelligent computer-controlled characters. In this article, we further motivate our proposal of using interactive computer games for AI research, review previous research on AI and games, and present the different game genres and the roles that human-level AI could play within these genres. We then describe the research issues and AI techniques that are relevant to each of these roles. Our conclusion is that interactive computer games provide a rich environment for incremental research on human-level AI.

342 citations


Book
01 Jan 2000
TL;DR: Logic-based artificial intelligence will be of interest to those applying theorem proving methods to problems in program and hardware verification, to those who deal with large knowledge base systems, to those developing cognitive robotics, and to those interested in the solution of McCarthy's 1959 "oldest planning problem in AI": getting from home to the airport.
Abstract: The book is invaluable to graduate students and researchers in artificial intelligence and in advanced methods for database and knowledge base systems. Logic-based artificial intelligence will also be of interest to those applying theorem proving methods to problems in program and hardware verification, to those who deal with large knowledge base systems, to those developing cognitive robotics, and to those interested in the solution of McCarthy's 1959 "oldest planning problem in AI": getting from home to the airport.

271 citations


Book
01 Jan 2000
TL;DR: This book describes, from both practical and theoretical perspectives, an AI technology for supporting sound clinical decision making and safe patient management that combines techniques from conventional software engineering with a systematic method for building intelligent agents.
Abstract: Computer science and artificial intelligence are increasingly used in the hazardous and uncertain realms of medical decision making, where small faults or errors can spell human catastrophe. This book describes, from both practical and theoretical perspectives, an AI technology for supporting sound clinical decision making and safe patient management. The technology combines techniques from conventional software engineering with a systematic method for building intelligent agents. Although the focus is on medicine, many of the ideas can be applied to AI systems in other hazardous settings. The book also covers a number of general AI problems, including knowledge representation and expertise modeling, reasoning and decision making under uncertainty, planning and scheduling, and the design and implementation of intelligent agents. The book, written in an informal style, begins with the medical background and motivations, technical challenges, and proposed solutions. It then turns to a wide-ranging discussion of intelligent and autonomous agents, with particular reference to safety and hazard management. The final section provides a detailed discussion of the knowledge representation and other aspects of the agent model developed in the book, along with a formal logical semantics for the language.

260 citations


Journal ArticleDOI
TL;DR: In this article, a case-based reasoning (CBR) algorithm is proposed to improve the performance of a wide class of scheduling heuristics, including parametrized biased random sampling and priority rule-based methods.
Abstract: Most scheduling problems are notoriously intractable, so the majority of algorithms for them are heuristic in nature. Priority rule-based methods still constitute the most important class of these heuristics. Of these, in turn, parametrized biased random sampling methods have attracted particular interest, due to the fact that they outperform all other priority rule-based methods known. Yet, even the “best” such algorithms are unable to relate to the full range of instances of a problem: Usually there will exist instances on which other algorithms do better. We maintain that asking for the one best algorithm for a problem may be asking too much. The recently proposed concept of control schemes, which refers to algorithmic schemes allowing to steer parametrized algorithms, opens up ways to refine existing algorithms in this regard and improve their effectiveness considerably. We extend this approach by integrating heuristics and case-based reasoning (CBR), an approach that has been successfully used in artificial intelligence applications. Using the resource-constrained project scheduling problem as a vehicle, we describe how to devise such a CBR system, systematically analyzing the effect of several criteria on algorithmic performance. Extensive computational results validate the efficacy of our approach and reveal a performance similar or close to state-of-the-art heuristics. In addition, the analysis undertaken provides new insight into the behaviour of a wide class of scheduling heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 201–222, 2000
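As a rough illustration of the CBR idea described above (not the authors' actual system), a case base can map features of previously solved scheduling instances to the heuristic parameter that performed best on them; a new instance then reuses the parameter of its nearest stored neighbour. The feature names and values below are invented for illustration:

```python
# Hypothetical sketch of case-based reasoning (CBR) for steering a
# parametrized scheduling heuristic: retrieve the most similar solved
# instance and reuse its best-performing parameter setting.
import math

case_base = [
    # ((resource strength, network complexity), best sampling-bias parameter)
    ((0.2, 1.5), 0.9),
    ((0.7, 1.8), 0.3),
    ((0.5, 2.1), 0.6),
]

def distance(a, b):
    """Euclidean distance between two instance feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_parameter(features):
    """Return the heuristic parameter of the nearest stored case."""
    nearest = min(case_base, key=lambda case: distance(case[0], features))
    return nearest[1]

print(retrieve_parameter((0.25, 1.6)))  # → 0.9, reused from the closest case
```

The retrieve-and-reuse step is the essence; a full CBR system would also revise the reused parameter after solving and retain the new case.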

87 citations


Journal ArticleDOI
TL;DR: The application of fuzzy logic and neural network techniques to power electronics and electrical drives yields more user-friendly, efficient, robust, and intelligent products with improved dynamic performance.
Abstract: Artificial intelligence (AI) techniques are finding increased applications in science and engineering. The application of AI techniques to household appliances, power systems, industrial and transport systems, medical equipment, etc., is increasing daily. AI techniques have made inroads into power electronics and drives to give more user-friendly, efficient, robust, and intelligent products with improved dynamic performance. AI is basically embedding human intelligence into a machine so that it can think like a human being. AI is superior to human intelligence in some aspects. A computer with embedded AI techniques can process problems extremely fast compared to human beings; it can work continuously without tiring, and its problem-solving capability is not affected by emotions and other human shortcomings. Advanced control based on artificial intelligence techniques is called intelligent control. Unlike classical control, an intelligent control strategy may not need a mathematical model of the plant. Intelligent control can be visualized as a type of adaptive control. In this article, the application of fuzzy logic and neural network techniques to power electronics and electrical drives is briefly described.
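To make the "no plant model" point concrete, here is a minimal, hypothetical fuzzy controller sketch of the kind applied to drives: linguistic rules map a speed error to a control adjustment, with no differential equations involved. The membership functions and rule outputs are invented for illustration:

```python
# Minimal fuzzy logic controller sketch: fuzzify the error, fire linguistic
# rules, and defuzzify by a weighted average. All shapes and rule outputs
# here are illustrative, not taken from any real drive controller.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzify: degree to which the speed error is negative, zero, or positive.
    memberships = {
        "negative": tri(error, -2.0, -1.0, 0.0),
        "zero":     tri(error, -1.0,  0.0, 1.0),
        "positive": tri(error,  0.0,  1.0, 2.0),
    }
    # Rule base: each linguistic label maps to a crisp control adjustment.
    rule_outputs = {"negative": -1.0, "zero": 0.0, "positive": 1.0}
    # Defuzzify: weighted average of rule outputs by membership degree.
    total = sum(memberships.values())
    if total == 0:
        return 0.0
    return sum(memberships[k] * rule_outputs[k] for k in memberships) / total

print(fuzzy_control(0.5))  # → 0.5: a moderate positive correction
```

Note that tuning means adjusting the triangles and rules, not identifying a plant model, which is exactly the appeal described in the abstract.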

60 citations


Dissertation
08 Aug 2000
TL;DR: This dissertation is a systematic study of artificial intelligence (AI) applications for the diagnosis of power transformer incipient faults, resulting in a combined neural network and expert system tool (the ANNEPS system) for power transformer incipient fault diagnosis.
Abstract: This dissertation is a systematic study of artificial intelligence (AI) applications for the diagnosis of power transformer incipient faults. The AI techniques include artificial neural networks (ANN, or briefly neural networks, NN), expert systems, fuzzy systems, and multivariate regression. The fault diagnosis is based on dissolved gas-in-oil analysis (DGA). A literature review showed that the conventional fault diagnosis methods, i.e., the ratio methods (Rogers, Dornenburg, and IEC) and the key gas method, have limitations such as the "no decision" problem. Various AI techniques may help solve these problems and provide a better solution. Based on the IEC 599 standard and industrial experience, a knowledge-based inference engine for fault detection was developed. Using historical transformer failure data from an industrial partner, a multi-layer perceptron (MLP) modular neural network was identified as the best choice among several neural network architectures. Subsequently, the concept of a hybrid diagnosis was proposed and implemented, resulting in a combined neural network and expert system tool (the ANNEPS system) for power transformer incipient fault diagnosis. The abnormal condition screening process, as well as the principle and algorithms for combining the outputs of knowledge-based and neural network based diagnosis, were proposed and implemented in ANNEPS. Methods for fuzzy logic based assessment of transformer oil/paper insulation condition, and for estimating oil sampling intervals and maintenance recommendations, were also proposed and implemented. Several methods of power transformer incipient fault location were investigated, and a 7×21×5 MLP network was identified as the best choice. Several methods for on-load tap changer (OLTC) coking diagnosis were also investigated, and an MLP based modular network was identified as the best choice.
Logistic regression analysis was identified as a good auditor in neural network input pattern selection processes.
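As a sketch of the 7×21×5 MLP architecture mentioned above, the forward pass below maps seven gas-analysis inputs to five fault-class activations. The weights are random and the inputs invented; the dissertation's trained network and preprocessing are not reproduced here:

```python
# Forward pass through a 7-input, 21-hidden-unit, 5-output MLP, the
# architecture named in the dissertation. Weights are random placeholders.
import math
import random

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(21)]   # input -> hidden
W2 = [[random.uniform(-1, 1) for _ in range(21)] for _ in range(5)]   # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(gases):
    """Map 7 gas readings to 5 fault-class activations in (0, 1)."""
    hidden = [sigmoid(sum(w * g for w, g in zip(row, gases))) for row in W1]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W2]

outputs = forward([0.1, 0.3, 0.05, 0.2, 0.0, 0.4, 0.15])
print(len(outputs))  # → 5, one activation per candidate fault class
```

In a hybrid scheme like ANNEPS, these activations would be combined with the verdict of the knowledge-based inference engine rather than used alone.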

43 citations



Journal ArticleDOI
TL;DR: The synergism of fuzzy decisions and fuzzy controllers at the supervisory level with low-level process regulators provides adaptive systems that can optimize both long-term objectives and short-term dynamic responses.

32 citations


Journal ArticleDOI
TL;DR: The article discusses formal aspects of the notion of context as needed in AI applications and advocates the use of Martin-Löf's intuitionistic type theory to formalize and implement contexts.
Abstract: The article discusses formal aspects of the notion of context as needed in AI applications. We advocate the use of Martin-Löf's intuitionistic type theory to formalize and implement contexts. Through many examples belonging to the domains of computational semantics and knowledge-based systems, we show that the built-in notion of context in intuitionistic type theory is a structure rich enough for representing most of the features that characterize contexts in an AI perspective. The fact that many recent theorem provers are built on different theories of types suggests new perspectives in the development of AI applications.
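Since the article leans on the built-in notion of context in intuitionistic type theory, a small illustration may help. The Lean snippet below (a hypothetical example, not drawn from the article) shows a context as a telescope of typed assumptions, where each later assumption may depend on the variables introduced before it:

```lean
-- The context (A : Type) (P : A → Prop) (x : A) (h : P x) is a telescope:
-- P depends on A, and h depends on both P and x. The proof term is only
-- well-formed relative to this context, which is what makes dependent
-- contexts a natural formal model of "context" in AI applications.
example (A : Type) (P : A → Prop) (x : A) (h : P x) : ∃ y, P y := ⟨x, h⟩
```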

24 citations


Book ChapterDOI
14 Aug 2000
TL;DR: The discussion is aimed more specifically at artificial intelligence applications, especially the conceptual graphs applications presented in ICCS papers, and at the importance of applications for a scientific domain.
Abstract: The traditional distinction between theories (developed to tackle theoretical problems) and applications (based on theories and realized to help a user) is blurred in computer science. To test their theories, computer scientists often write programs. This paper focuses on the features that make such a program an application (also in its software-engineering sense). The discussion is aimed more specifically at artificial intelligence applications, especially the conceptual graphs applications presented in ICCS papers, and at the importance of applications for a scientific domain.

Journal ArticleDOI
TL;DR: The experience of developing an AI system from standard off-the-shelf software components is described; SAS is an example of how the development methodologies used to construct modern AI applications have become fully in line with mainstream practices.
Abstract: The stand-allocation system (SAS) is an AI application developed for the Hong Kong International Airport (HKIA) at Chek Lap Kok. SAS uses constraint-programming techniques to assign parking stands to aircraft and schedules tow movements based on a set of business and operational constraints. The system provides planning, real-time operation, and problem-solving capabilities. SAS generates a stand-allocation plan that finely balances the objectives of the airline-handling agents, the convenience of passengers, and the operational constraints of the airport. The system ensures a high standard of quality in customer service, airport safety, and use of stand resources. This article describes our experience in developing an AI system using standard off-the-shelf software components. SAS is an example of how the development methodologies used to construct modern AI applications have become fully in line with mainstream practices.
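As a toy illustration of the constraint-programming idea behind SAS (the real system's constraints and data are far richer), a backtracking search can assign stands to flights subject to hard constraints such as stand size and non-overlapping occupancy. All flight and stand data below are invented:

```python
# Backtracking constraint-satisfaction sketch for stand allocation:
# each flight gets a stand that fits its body type and is free during
# its ground time. Data is invented for illustration.

flights = {  # flight -> (body type, arrival time, departure time)
    "CX100": ("wide", 0, 4),
    "KA200": ("narrow", 2, 6),
    "CX300": ("wide", 5, 9),
}
stands = {"S1": "wide", "S2": "narrow"}  # stand -> largest body type accepted

def overlaps(f, g):
    """True if two flights' ground times intersect."""
    return flights[f][1] < flights[g][2] and flights[g][1] < flights[f][2]

def feasible(flight, stand, assignment):
    if flights[flight][0] == "wide" and stands[stand] != "wide":
        return False  # wide-body aircraft need a wide stand
    return all(s != stand or not overlaps(flight, f) for f, s in assignment.items())

def assign(remaining, assignment):
    """Depth-first search with backtracking over stand assignments."""
    if not remaining:
        return assignment
    flight, rest = remaining[0], remaining[1:]
    for stand in stands:
        if feasible(flight, stand, assignment):
            result = assign(rest, {**assignment, flight: stand})
            if result is not None:
                return result
    return None  # dead end: backtrack

plan = assign(list(flights), {})
print(plan)  # → {'CX100': 'S1', 'KA200': 'S2', 'CX300': 'S1'}
```

A production system such as SAS would add soft objectives (passenger convenience, handling-agent preferences) on top of these hard constraints.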

Journal ArticleDOI
TL;DR: This article reports on the range of AI activities within IBM Research, which take place in four broad areas: knowledge representation and reasoning; statistical AI; vision; and game playing.
Abstract: IBM has played an active role in AI research since the field's inception more than 50 years ago. In a trend that reflects the increasing demand for applications that behave intelligently, IBM today carries out most AI research in an interdisciplinary fashion by combining AI technology with other computing techniques to solve difficult technical problems. This article reports on the range of AI activities within IBM Research and discusses emerging issues. AI at IBM computer science research takes place in four broad areas: knowledge representation and reasoning; statistical AI; vision; and game playing.

Book ChapterDOI
28 Aug 2000
TL;DR: It is shown that the input and output, and even the rules themselves, from an AI application can be represented as XML files allowing the software engineer to avoid having to invest considerable time and effort in building complex conversion procedures.
Abstract: One of the key advantages of XML is that it allows developers, through the use of DTD files, to design their own languages for solving different problems. At the same time, one of the biggest challenges to using rule-based AI solutions is that it forces the developer to cast the problem within particular, AI-specific, languages which are difficult to interface with. We demonstrate in this paper how XML changes all that by allowing the development of particular languages suited to particular AI problems and allows a seamless interface with the rules engine. We show that the input and output, and even the rules themselves, from an AI application can be represented as XML files allowing the software engineer to avoid having to invest considerable time and effort in building complex conversion procedures. We illustrate our ideas with an example drawn from the mortgage industry.
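A minimal sketch of the idea, with invented element names and a mortgage-style rule: both the rule base and the input facts live in XML and are parsed with the standard library, so no bespoke conversion layer is needed:

```python
# Rules and facts as XML, consumed directly by a tiny rules engine.
# The schema (element and attribute names) is invented for illustration.
import xml.etree.ElementTree as ET

rules_xml = """
<rules>
  <rule field="income" op="lt" value="30000" decision="reject"/>
  <rule field="income" op="gte" value="30000" decision="approve"/>
</rules>
"""
applicant_xml = "<applicant><income>45000</income></applicant>"

rules = ET.fromstring(rules_xml)
income = float(ET.fromstring(applicant_xml).findtext("income"))

def matches(rule, value):
    """Evaluate one rule's condition against a numeric fact."""
    threshold = float(rule.get("value"))
    return value < threshold if rule.get("op") == "lt" else value >= threshold

decision = next(r.get("decision") for r in rules if matches(r, income))
print(decision)  # → approve
```

The output could itself be re-serialized as XML, which is the seamless round trip the paper argues for.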

Journal ArticleDOI
TL;DR: A generalisation of AI towards Negrotti's overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.
Abstract: In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream -- that of constructing an intelligent machine -- and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field -- probably because, as the heir of Renaissance thought, it represents a possible bridge between the humanities and the natural sciences, philosophy and neurophysiology, psychology and integrated circuits -- including systems that today are taken for granted, such as the computer interface with mouse pointer and windows. This writing describes a few results of AI that have modified the scientific world, as well as the way a layman sees computers: the technology of programming languages, such as LISP -- witness the unique excellence of the academic departments that have contributed to them -- the computing workstations -- of which our modern PC is but a vulgarised descendant -- the applications to the educational field -- e.g., the realisation of some ideas of genetic epistemology -- and to interdisciplinary philosophy -- such as Hofstadter's associations between the arts and mathematics -- and the use of AI techniques in music and musicology. All this has led to a generalisation of AI towards Negrotti's overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.

Book ChapterDOI
28 Aug 2000
TL;DR: The Symposium on the Application of Artificial Intelligence in Industry was held in conjunction with the Sixth Pacific Rim International Conference on Artificial Intelligence (PRICAI-2000), Melbourne, Australia, August 2000.
Abstract: The Symposium on the Application of Artificial Intelligence in Industry was held in conjunction with the Sixth Pacific Rim International Conference on Artificial Intelligence (PRICAI-2000), Melbourne, Australia, August 2000. It was the second in the symposium series, which aims to highlight actual applications of Artificial Intelligence in industry and to share and compare experiences in doing so. The Symposium brought together researchers and developers of applied Artificial Intelligence systems. The symposium is the leading forum in the Pacific Rim for the presentation of innovative applications of AI in industry.

Journal ArticleDOI
Chris Bissell1
Abstract: problems can be solved. Chapter 5 introduces the synthesis of linear time-invariant (LTI) controllers for nonlinear single-input, single-output (SISO) plants. In the same QFT spirit that a multivariable problem was reduced to a set of equivalent SISO problems, a linear plant and disturbance is found that is mathematically equivalent to the nonlinear problem. Chapter 6 again covers controller synthesis for SISO nonlinear plants, but this time the controller is time varying. This is achieved by finding equivalent linear plants and disturbances for different time intervals and then scheduling the linear controllers to form a piecewise LTI controller. Chapters 7 and 8 extend the synthesis of LTI and linear time-varying (LTV) controllers for nonlinear multivariable systems. If forced to choose between these two excellent texts, my personal preference would be for Yaniv, as I find the presentation clearer and the treatment more thorough. It also covers more ground, including nonlinear design, and has the advantage of using the MATLAB QFT Toolbox. But whatever their relative merits, it is good to have textbooks on QFT available for the first time in many years, as they can only help promote this valuable design technique.

Journal ArticleDOI
TL;DR: Some elements of condition-based maintenance and its applications, expert systems for machine diagnosis, and an example of machine diagnosis are reviewed, and some problems to be resolved are noted so that expert systems for machine diagnosis may gain wider acceptance in the future.
Abstract: In this paper we discuss interesting developments of expert systems for machine diagnosis and condition-based maintenance. We review some elements of condition-based maintenance and its applications, expert systems for machine diagnosis, and an example of machine diagnosis. In the last section we note some problems to be resolved so that expert systems for machine diagnosis may gain wider acceptance in the future.


Journal ArticleDOI
TL;DR: This survey highlights some important trends in AI research and development, focusing on perceiving and affecting the real world, and singles out for special mention one area that contributes centrally to all of these technologies, software development technology.
Abstract: This survey highlights some important trends in AI research and development, focusing on perceiving and affecting the real world. It primarily addresses robotics, but does not imply that this is the only important area of AI research and development in the 21st century. It singles out for special mention one area that contributes centrally to all of these technologies, software development technology.

Journal ArticleDOI
TL;DR: This work provides a UML description of the heuristic multiattribute decision pattern, a corresponding Generic Task having already been proposed in the literature, and illustrates the wide applicability of this pattern by specialising it to obtain a therapy decision pattern.
Abstract: We discuss the use of the UML to describe "Analysis Patterns" in AI, an area where OAD techniques are not widely used, in spite of the fact that some of the inspiration for the object approach can be traced to developments in this area. We study the relation between the notion of an analysis pattern in the context of OO software development methods and that of a Generic Task in AI software development methods such as CommonKADS. Our interest is motivated by the belief that in the analysis and design of certain AI applications, particularly in Distributed AI, OO-style patterns may be more appropriate than Generic Tasks. To illustrate the relation between these concepts, we provide a UML description of the heuristic multiattribute decision pattern, a corresponding Generic Task having already been proposed in the literature. We illustrate the wide applicability of this pattern by specialising it to obtain a therapy decision pattern. We discuss the suitability of the UML, together with OCL, for describing this and other analysis patterns arising in AI.


Journal ArticleDOI
TL;DR: The AAAI-99 Workshop Program (a part of the sixteenth national conference on artificial intelligence) was held in Orlando, Florida and included 16 workshops covering a wide range of topics in AI.
Abstract: The AAAI-99 Workshop Program (a part of the sixteenth national conference on artificial intelligence) was held in Orlando, Florida. The program included 16 workshops covering a wide range of topics in AI. Each workshop was limited to approximately 25 to 50 participants. Participation was by invitation from the workshop organizers. The workshops were Agent-Based Systems in the Business Context, Agents' Conflicts, Artificial Intelligence for Distributed Information Networking, Artificial Intelligence for Electronic Commerce, Computation with Neural Systems Workshop, Configuration, Data Mining with Evolutionary Algorithms: Research Directions (Jointly sponsored by GECCO-99), Environmental Decision Support Systems and Artificial Intelligence, Exploring Synergies of Knowledge Management and Case-Based Reasoning, Intelligent Information Systems, Intelligent Software Engineering, Machine Learning for Information Extraction, Mixed-Initiative Intelligence, Negotiation: Settling Conflicts and Identifying Opportunities, Ontology Management, and Reasoning in Context for AI Applications.

01 Jan 2000
TL;DR: A critical look at some AI techniques on the horizon of the researcher's own current research in developing the software infrastructure required to view interactive entertainment applications as cognitive multi-character systems.
Abstract: Researchers in the field of artificial intelligence (AI) are becoming increasingly interested in computer games as a vehicle for their research. From the researcher’s point of view this makes sense as many interesting and challenging AI problems arise quite naturally in the context of computer games. Of course, the hope is that the relationship is a symbiotic one so that the incorporation of AI techniques will lead to more interesting and enjoyable computer games. One question that arises, however, is how far this process can continue? In particular, what, if any, are the technical roadblocks to applying new AI research to interactive entertainment, and what would be the expected benefits? In this paper, we will therefore take a critical look at some AI techniques on the horizon of our own current research in developing the software infrastructure required to view interactive entertainment applications as cognitive multi-character systems.


Book ChapterDOI
Jae Kyu Lee1
28 Aug 2000
TL;DR: The status of AI applications in EC, and a research opportunity for future applications, is reviewed and issues like data mining in CRM, natural language, voice recognition, and machine translation on the web will be demonstrated.
Abstract: There are various applications of AI in the web-based electronic commerce (EC) environment. However, the application of intelligence in EC is confined by the state of the art of AI, although EC has a high potential for AI deployment. In this talk, we will review the status of AI applications in EC and research opportunities for future applications. The key AI technologies applied in EC include agents for search, comparison, and negotiation; search by configuration and salesman expert systems; thesauri of EC terms; knowledge-based processing of workflow; personalized e-catalog directory management; data mining in customer relationship management; natural language conversation; voice recognition and synthesis; and machine translation.
1. Agent: Concerning agents, we will discuss the trend of XML standards such as ebXML for search, communication, and price negotiation. We will also review the status of learning from the seller and buyer agents' points of view.
2. Search by configuration: Most searches seek standard commodities, while many products, such as electronic goods, need optional parts added to fulfill the required specification. This implies that we need to find the most similar template first, and then to adjust the optional parts at minimum cost.
3. Thesaurus to aid comparison shopping: Product specifications should be comparable with each other whether they are represented as numbers, symbols, or words. To comprehend the specifications for comparison, we need a thesaurus of the application domain.
4. Personalized e-catalog directory management: A buyer site defines a personalized e-catalog directory out of a common standard catalog. Anomalies such as unbalanced directories are defined and automatic remedies are developed.
5. Other issues, such as data mining in CRM, natural language, voice recognition, and machine translation on the web, will be demonstrated. The talk will end with the AI research opportunities in EC.


Proceedings Article
01 Jan 2000
TL;DR: The department has developed a web site which intends to be a comprehensive source for physicians, students and other health-care professionals, providing information on over 65 worldwide-available medical expert and knowledge-based systems.
Abstract: Recent years have seen an enormous development in the field of medical expert systems, making it a time-consuming and complicated task for physicians to find the system most suitable for them. To give physicians the opportunity to get fast and easy access to a specific system, our department has developed a web site which intends to be a comprehensive source for physicians, students and other health-care professionals, providing information on over 65 worldwide-available medical expert and knowledge-based systems. Medical Expert Systems: Doctor's Silent Partners. Our web page will be a unique collection of over 65 state-of-the-art medical expert systems and knowledge-based systems. An expert system is an Artificial Intelligence program that uses knowledge to solve problems that would normally require a human specialist. Expert systems are one of the most successful commercial applications of Artificial Intelligence and they are used in many different areas. In medicine, expert systems have been developed to assist the physician in a hospital or in his office in interpreting medical findings, providing diagnostic support and therapy advice, giving hints for disease prognosis, guiding patient management, and monitoring hospital and patient medical data and costs. Expert systems and knowledge-based systems in medicine help in the manipulation and application of expert medical knowledge. The growing complexity of the fund of knowledge makes the application of such systems more and more indispensable. The amount of medical knowledge is such that today no physician can access or memorize all the necessary information in his daily practice. Therefore, in an attempt to minimize the incidence of misdiagnosis, physicians are increasingly looking to expert systems to corroborate their findings and/or highlight anomalies and errors.
Provided that expert systems are used correctly, they also reduce much of the repetitive and specialized mental effort made by the treating physician and enable him to devote his time and attention to the personal care of the patient. Another reason why decision support technologies are becoming more and more important in medicine is their benefit in cost reduction. For example, expert systems allow the dissemination of information held by one or a small number of experts. This makes the knowledge available to a larger number of people, and to less skilled (so less expensive) people, reducing the cost of accessing information. Additionally, human expertise about the medical subjects in question is not always available when it is needed. This may be because the necessary knowledge is held by a small group of medical experts, who may not be in the right place at the right time. Alternatively, it may be because the knowledge is distributed through a variety of sources and is therefore difficult to assimilate.
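As a minimal sketch of the kind of inference such systems perform (the findings and rules below are invented for illustration, not medical knowledge), a forward-chaining engine repeatedly fires rules whose conditions are satisfied by the known findings:

```python
# Tiny forward-chaining inference engine: the core mechanism of a
# rule-based expert system. Rules and findings are illustrative only.

rules = [
    # (set of required findings, conclusion to add when all are present)
    ({"fever", "cough"}, "suspect respiratory infection"),
    ({"suspect respiratory infection", "chest pain"}, "recommend chest x-ray"),
]

def forward_chain(findings):
    """Repeatedly fire rules until no new conclusions can be derived."""
    facts = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest pain"}))
```

Note how the second rule fires only because the first one added an intermediate conclusion; chaining of this kind is what lets a knowledge base encode multi-step expert reasoning.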

Journal Article
TL;DR: The series of annual NAIC conferences and the regular production of the NVKI newsletter are the two main items responsible for the success and growth of the NVKI in the late eighties/early nineties.
Abstract: The Nederlandse Vereniging voor Kunstmatige Intelligentie (NVKI – Dutch AI Association) was founded in June 1981 in Amsterdam, shortly after Bob Wielinga had organized the AISB conference, the predecessor of the ECAI conference. The AISB conference took place in 1980 in Amsterdam. It was a very lively event with over 100 participants, of which 20 were Dutch. The conference made a strong impression on the Dutch newcomers to the field. In Amsterdam the field of Computer Science was still struggling to be accepted as a discipline separate from mathematics and without its methodological rigor, but this AI conference went a lot further. Without the ‘burden’ of an established methodology, the conference was an exciting mixture of bold ideas from a range of disciplines, discussed in an atmosphere in which everything was possible. The initiative to create the NVKI was taken by Bob Wielinga and Dennis de Champeaux. At that time both were working at the University of Amsterdam. Some networking brought together about 25 persons, mostly from universities and research. The society was founded and used its membership fees to start a newsletter called KININE (Kunstmatige INtelligentie In the NEtherlands). The idea was to bring its members into contact with each other by organizing an inventory of projects and by inviting each other for seminars. The first inventory produced 19 projects in a variety of subfields, including language understanding and generation, cognitive modeling, automated deduction, and computer chess. At the time there were no complete educational or research programs on AI; single courses relevant to AI were taught in the context of widely varying disciplines. The first chair in AI was established in 1982 at the Vrije Universiteit, and Laurent Siklossy became the first full professor in AI in the Netherlands. Between 1985 and 1987 the NVKI became almost inactive.
The most active AI researchers in the Netherlands were occupied with the growing interest in AI in research and teaching. In 1987 the board of the NVKI was renewed and initiated a range of activities. The year 1988 saw an increase in organized AI activities. The first 'Nederlands-talige AI Conferentie' (NAIC) was organized in Amsterdam. The NVKI then had about 70 members. To the surprise of the organisers, there were about 200 participants and 22 papers were presented. A period of growth started for the NVKI, soon stimulated by the organisational and diplomatic efforts of Jaap van den Herik. Half a year later a conference on AI applications was organized in The Hague in collaboration with the NVKI. This was directed at the growing interest in expert systems and other applications such as scheduling and natural language processing. Membership of the NVKI grew to 200, and the first full university program in Cognitive Artificial Intelligence started (University of Utrecht), followed by full programs at the Vrije Universiteit Amsterdam (1991), the University of Amsterdam (1992), and the Universiteit Maastricht (Knowledge Engineering, 1992). The newsletter was professionalized with support from the Dutch Science Council. Industrial interest in AI was increasing; in particular, the development of expert systems drew much attention from industry and from the universities. The series of annual NAIC conferences and the regular production of the NVKI newsletter are the two main items responsible for the success and growth of the NVKI in the late eighties and early nineties. Originally, NAIC stood for Nederlands-talige AI Conferentie (Dutch-language AI Conference), implying that all contributions were presented and recorded in Dutch.

07 Jan 2000
TL;DR: The key to AI, rather than the 'number crunching' typical of computers until then, was viewed as the ability to manipulate symbols and make logical inferences; this allowed the building of what became known as expert systems or knowledge-based systems (KBS).
Abstract: Few human endeavors can be viewed as both extremely successful and unsuccessful at the same time. This is typically the case when goals have not been well defined or have shifted over time. This has certainly been true of Artificial Intelligence (AI). The nature of intelligence has been the object of much thought and speculation throughout the history of philosophy. It is in the nature of philosophy that real headway is sometimes made only when appropriate tools become available. Similarly, the computer, coupled with the ability to program (at least in principle) any function, appeared to be the tool that could tackle the notion of intelligence. To suit the tool, the problem of the nature of intelligence was soon sidestepped in favor of this notion: if a probing conversation with a computer could not be distinguished from a conversation with a human, then AI had been achieved. This notion became known as the Turing test, after the mathematician Alan Turing, who proposed it in 1950. Conceptually rich and interesting, these early efforts gave rise to a large portion of the field's framework. The key to AI, rather than the 'number crunching' typical of computers until then, was viewed as the ability to manipulate symbols and make logical inferences. To facilitate these tasks, AI languages such as LISP and Prolog were invented and used widely in the field. One idea that emerged and enabled some success with real-world problems was the notion that 'most intelligence' really resided in knowledge. A phrase attributed to Feigenbaum, one of the pioneers, was 'knowledge is power.' With this premise, the problem shifted from 'how do we solve problems' to 'how do we represent knowledge.' A good knowledge representation scheme could allow one to draw conclusions from given premises. Such schemes took forms such as rules, frames, and scripts. This allowed the building of what became known as expert systems or knowledge-based systems (KBS).
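The rule-based schemes mentioned above can be illustrated with a toy forward-chaining engine: rules pair a list of premises with a conclusion, and the engine repeatedly fires rules until no new facts can be derived. This is a minimal sketch of the general technique, not a reconstruction of any historical system; the fact and rule names are invented for illustration.

```python
def forward_chain(facts, rules):
    """Derive all facts reachable from the initial facts by
    repeatedly firing rules of the form (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are known facts
            # and its conclusion is not yet known.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A tiny knowledge base in the spirit of early expert systems
# (hypothetical rule and fact names):
rules = [
    (["has_fever", "has_rash"], "suspect_measles"),
    (["suspect_measles"], "recommend_specialist"),
]
derived = forward_chain(["has_fever", "has_rash"], rules)
```

Note how the second rule fires only after the first has added its conclusion to the fact set; this chaining from premises to new conclusions is what made rule-based representations attractive for building expert systems.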