Showing papers on "Applications of artificial intelligence" published in 1994


Journal ArticleDOI
TL;DR: This paper surveys recent research in deliberative real-time artificial intelligence, focusing both on progress within the field and on the costs, benefits and interactions among the different problem and algorithm complexity limitations used in the surveyed work.
Abstract: This paper surveys recent research in deliberative real-time artificial intelligence (AI). Major areas of study have been anytime algorithms, approximate processing, and large system architectures. We describe several systems in each of these areas, focusing both on progress within the field and on the costs, benefits and interactions among the different problem and algorithm complexity limitations used in the surveyed work.
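Illustrative sketch (not from the surveyed systems; the tour-improvement task and all names are invented): an anytime algorithm keeps a best-so-far answer that can be returned whenever the deadline arrives, so more deliberation time simply buys a better result.

import random
import time

def anytime_tour(points, deadline_s):
    # Anytime sketch: hold a best-so-far tour and keep improving it with
    # random 2-opt style reversals until the real-time deadline expires.
    def length(t):
        return sum(abs(points[t[i]] - points[t[(i + 1) % len(t)]]) for i in range(len(t)))
    tour = list(range(len(points)))
    best = length(tour)
    stop = time.monotonic() + deadline_s
    while time.monotonic() < stop:
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if length(candidate) < best:
            tour, best = candidate, length(candidate)
    return tour, best          # answer quality improves monotonically with time

random.seed(0)
pts = [random.uniform(0, 100) for _ in range(30)]
print(anytime_tour(pts, 0.01)[1], anytime_tour(pts, 0.1)[1])   # a longer deadline usually yields a shorter tour

The point is only that the computation is interruptible: at any moment there is a usable answer whose quality degrades gracefully with less time.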

98 citations


Journal ArticleDOI
TL;DR: This paper reviews research into the use of AI methods to harness the knowledge and skills required to plan, set up, operate and control grinding processes, and predicts that future developments will favour increasing communication within a CIM environment.

90 citations


Posted Content
TL;DR: It will be shown how conventional locking can be improved and refined step by step to finally reach the initial goal, namely comprehensive support of synergistic cooperative work through the exploitation of application-specific semantics.
Abstract: Advanced database applications, such as CAD/CAM, CASE, large AI applications or image and voice processing, place demands on transaction management which differ substantially from those of traditional database applications. In particular, there is a need to support enriched data models (which include, for example, complex objects or version and configuration management), synergistic cooperative work, and application- or user-supported consistency. This paper deals with a subset of these problems. It develops a methodology for implementing semantics-based concurrency control on the basis of ordinary locking. More specifically, it will be shown how conventional locking can be improved and refined step by step to finally reach our initial goal, namely comprehensive support of synergistic cooperative work through the exploitation of application-specific semantics. In addition to the conventional binding of locks to transactions, we consider the binding of locks to objects (object-related locks) and subjects (subject-related locks). Object-related locks can define persistent and adaptable access restrictions on objects. This permits, among other things, the modeling of different types of version models (time versions, version graphs) as well as library (standard) objects. Subject-related locks are bound to subjects (users, applications, etc.) and can be used, among other things, to supervise or direct the transfer of objects between transactions.
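Illustrative sketch (my own, not code from the paper; the conflict rule and identifiers are simplifying assumptions): besides the usual transaction-bound locks, the lock table also accepts locks bound to an object or a subject, so an access restriction can outlive any single transaction.

from dataclasses import dataclass, field

@dataclass
class Lock:
    mode: str          # "read" or "write"
    owner_kind: str    # "transaction", "object" or "subject"
    owner_id: str

@dataclass
class LockTable:
    held: dict = field(default_factory=dict)   # object id -> list of Locks

    def request(self, obj, mode, owner_kind, owner_id):
        locks = self.held.setdefault(obj, [])
        # naive rule: two locks conflict if their owners differ and either is a write
        if any(l.owner_id != owner_id and "write" in (mode, l.mode) for l in locks):
            return False
        locks.append(Lock(mode, owner_kind, owner_id))
        return True

    def end_transaction(self, txn):
        # transaction-bound locks are released at commit;
        # object- and subject-bound locks persist across transactions
        for obj in self.held:
            self.held[obj] = [l for l in self.held[obj]
                              if not (l.owner_kind == "transaction" and l.owner_id == txn)]

t = LockTable()
t.request("designA:v1", "read", "object", "frozen-version")    # persistent: version may be read only
print(t.request("designA:v1", "write", "transaction", "T1"))   # False: the frozen version blocks writers
print(t.request("designA:v1", "read", "transaction", "T2"))    # True: readers are still admitted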

39 citations


Book
07 Mar 1994
TL;DR: During AI's first decade, the task environments in which AI scientists investigated their basic science issues were generally idealized "clean" task environments, such as propositional calculus theorem proving and puzzle solving, but after the mid-1960s, a bolder and more applied inclination to choose complex real-world problems as task environments became evident.
Abstract: During AI's first decade (1956-1966), the task environments in which AI scientists investigated their basic science issues were generally idealized "clean" task environments, such as propositional calculus theorem proving and puzzle solving. After the mid-1960s, a bolder and more applied inclination to choose complex real-world problems as task environments became evident. These efforts were both successful and exciting, in two ways. First, the AI programs were achieving high levels of competence at solving certain problems that human specialists found challenging (the excitement was that our AI techniques were indeed powerful and that we were taking the first steps toward the dream of the very smart machine). Second, these complex real-world task environments were proving to be excellent at stimulating basic science questions for the AI science, in knowledge representation, problem solving, and machine learning. To recognize and illuminate this trend, the Artificial Intelligence Journal in 1978 sponsored a special issue on applications of artificial intelligence.

39 citations


Book
07 Mar 1994
TL;DR: The early 1970s brought Schwartz's clarion call to adopt computers to augment human reasoning in medicine as discussed by the authors, Gorry's rejection of older flowchart and probabilistic methods, and the first demonstrations of "expert systems" that could indeed achieve human expert-level performance on bounded but challenging intellectual tasks that were important to practicing professionals.
Abstract: Our 1978 paper [27] reviewed the artificial intelligence-based medical (AIM) diagnostic systems. Medical diagnosis is one of the earliest difficult intellectual domains to which AI applications were suggested, and one where success could (and still can) lead to benefit for society. The early 1970s brought Schwartz's clarion call to adopt computers to augment human reasoning in medicine [24], Gorry's rejection of older flowchart and probabilistic methods [11], and the first demonstrations of "expert systems" that could indeed achieve human expert-level performance on bounded but challenging intellectual tasks that were important to practicing professionals, such as symbolic mathematics and the determination of chemical structure. By the mid-1970s, a handful of first-generation medical AI systems had been developed, demonstrated, and at least partially evaluated. Although the methods employed appeared on the surface to be very different, we identified the underlying knowledge on which each operated and classified the general methods they used. We emphasized the distinction between the "categorical" or structural knowledge of the programs and the particular

36 citations


Book ChapterDOI
01 Jan 1994
TL;DR: Using Self-Organising Neural Networks such as the Kohonen model, the inherent structures in high-dimensional input spaces are projected onto a low-dimensional space, and an extension of this method from static to dynamic data is shown to be feasible.
Abstract: Knowledge acquisition is a frequent bottleneck in artificial intelligence applications. Neural learning may offer a new perspective in this field. Using Self-Organising Neural Networks, such as the Kohonen model, the inherent structures in high-dimensional input spaces are projected onto a low-dimensional space. The exploration of structures or classes is then possible by applying the U-Matrix method for the visualisation of data. Since Neural Networks are not able to explain the obtained results, a machine learning algorithm, sig*, was developed to extract symbolic knowledge in the form of rules from subsymbolic data. Combining both approaches in a hybrid system results in a powerful method for solving classification and diagnosis problems. Several applications have been used to test this method. Applications on processes with dynamic characteristics, such as chemical processes and avalanche forecasting, show that an extension of this method from static to dynamic data is feasible.
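Rough sketch of the projection step described above (a generic Kohonen-style map written for this listing, assuming NumPy is available; the U-Matrix visualisation and the sig* rule-extraction algorithm are not reproduced):

import numpy as np

def train_som(data, grid=(6, 6), epochs=100, lr=0.5, sigma=2.0, seed=0):
    # Minimal Kohonen self-organising map: returns a (rows, cols, dim) weight grid
    # whose nodes arrange themselves to mirror the structure of the input space.
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        decay = np.exp(-t / epochs)
        for x in rng.permutation(data):
            dist = np.linalg.norm(w - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
            # Gaussian neighbourhood around the BMU, shrinking over time
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * (sigma * decay) ** 2))
            w += (lr * decay) * g[..., None] * (x - w)
    return w

data = np.random.default_rng(1).random((50, 10))   # 50 points in a 10-dimensional space
print(train_som(data).shape)                        # (6, 6, 10): the low-dimensional projection grid

The U-Matrix step described in the chapter would then visualise the distances between neighbouring nodes of this grid to expose cluster boundaries.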

35 citations


Book
01 Jan 1994
TL;DR: In this book, the authors present an overview of traffic and transport applications of artificial intelligence, covering the motivations and current state of AI in transport, AI in traffic surveillance and control, and in-car route planning and navigation.
Abstract: Preface
ARTIFICIAL INTELLIGENCE IN TRANSPORT: MOTIVATIONS, CURRENT STATE AND PERSPECTIVES
The development of traffic and transport applications of artificial intelligence: an overview (H.R. Kirby and B.G. Parker)
Using artificial intelligence in traffic engineering -- perspectives and potential applications (B. Wild)
Developing expert systems in transport (J. Wentworth)
Advanced management of urban public transport (J.M. Le Dizes)
ARTIFICIAL INTELLIGENCE IN TRAFFIC ENGINEERING: TRAFFIC SURVEILLANCE AND CONTROL
Urban traffic control: current methodologies (G. Bruno and G. Improta)
Artificial intelligence approach to road traffic control (G. Ambrosino, M. Bielli and M. Boero)
SAPPORO -- a framework for intelligent integrated traffic management systems (B. Wild)
CLAIRE: a context-free AI based supervisor for traffic control (G. Scemama)
Knowledge based systems for motorway traffic control (J. Cuena and M. Molina)
Emerging technology applications in intelligent vehicle-highway systems (S.G. Ritchie)
Intelligent intersection: artificial intelligence and computer vision techniques for automatic incident detection (S. Sellam and A. Boulmakoul)
Qualitative simulation of traffic flows for urban traffic control (G. Martin, F. Toledo and S. Moreno)
Using neural networks to recognise, predict and model traffic (M.S. Dougherty, H.R. Kirby and R.D. Boyle)
The assessment by micro-simulation of a rule-based real-time system for supervising urban traffic control (M.S. Dougherty, L.J. Ibbetson, H.R. Kirby and F.O. Montgomery)
IN-CAR ARTIFICIAL INTELLIGENCE: ROUTE PLANNING AND NAVIGATION
The intelligent co-driver: route planning and navigation (G. Adorni and A. Poggi)
Driver support system for traffic manoeuvres (J. Malec and P. A-sterling)

32 citations


Proceedings Article
01 Jul 1994
TL;DR: The authors implement two genetic algorithms to evolve monitoring strategies, a dynamic programming algorithm to find an optimum strategy, and a simple mathematical model of monitoring; interval reduction emerges as a general monitoring strategy.
Abstract: Monitoring is the process by which agents assess their environments. Most AI applications rely on periodic monitoring, but for a large class of problems this is inefficient. The interval reduction monitoring strategy is better. It also appears in humans and artificial agents when they are given the same set of monitoring problems. We implemented two genetic algorithms to evolve monitoring strategies and a dynamic programming algorithm to find an optimum strategy. We also developed a simple mathematical model of monitoring. We tested all these strategies in simulations, and we tested human strategies in a "video game." Interval reduction always emerged. Environmental factors such as error and monitoring costs had the same qualitative effects on the strategies, irrespective of their genesis. Interval reduction appears to be a general monitoring strategy.
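Toy sketch of the qualitative difference (my own illustration; the authors' simulations, genetic algorithms and human experiments are not reproduced, and the parameters are invented): a periodic strategy spends the same monitoring effort everywhere, while interval reduction checks rarely when the expected event is far away and increasingly often as it draws near.

def periodic(horizon, period=10):
    # fixed-period monitoring: the same effort at every point in time
    return list(range(period, horizon + 1, period))

def interval_reduction(estimate, horizon, factor=0.5, floor=1):
    # jump most of the way towards the estimated event time, then monitor
    # at geometrically shrinking intervals as the event draws near
    checks, t, gap = [], 0, estimate * factor
    while t < horizon:
        t += max(int(gap), floor)
        checks.append(t)
        gap = max(gap * factor, floor)
    return checks

print("periodic          :", periodic(110))
print("interval reduction:", interval_reduction(estimate=100, horizon=110))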

16 citations


Journal ArticleDOI
TL;DR: The following areas are discussed: the state of the field, the quest for smart systems, AI as a provider of ontology, and knowledge level/symbol level confusion.
Abstract: As EIC (editor in chief) of IEEE Expert magazine, the author had to think about the AI field in global and strategic terms so that the magazine could guide its readers to the most vital information. In this capacity, he has inevitably developed some opinions about both the field and how the advances in it are presented to readers. He offers some loosely connected remarks about AI and the technology of intelligent systems. These views have their origin at least partly in his role as the EIC of IEEE Expert. The following areas are discussed: the state of the field, the quest for smart systems, AI as a provider of ontology, and knowledge level/symbol level confusion.

13 citations


Journal ArticleDOI
TL;DR: A linear conditional planner is constructed and used to generate conditional plans for image-processing, and an existing hierarchical planning system is extended to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time- and resource-constrained environments.
Abstract: We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time- and resource-constrained environments.
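The planners themselves (CPed and the extended hierarchical planner) are not reproduced here, but the kind of artifact they generate can be sketched: a conditional plan whose later steps depend on the outcome of an earlier sensing step. The step names and the branching rule below are invented for illustration.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]        # transforms the running "image state"
    branch_on: Optional[str] = None       # state key whose value selects a sub-plan
    branches: dict = field(default_factory=dict)

def execute(plan, state):
    for step in plan:
        state = step.action(state)
        if step.branch_on is not None:    # conditional step: choose the branch at run time
            state = execute(step.branches[state[step.branch_on]], state)
    return state

# Hypothetical conditional plan for a satellite scene: sense cloud cover,
# then either mask clouds or run atmospheric correction before archiving.
plan = [
    Step("estimate_cloud_cover",
         lambda s: {**s, "cloudy": s["cloud_fraction"] > 0.4},
         branch_on="cloudy",
         branches={
             True:  [Step("cloud_mask",    lambda s: {**s, "masked": True})],
             False: [Step("atmos_correct", lambda s: {**s, "corrected": True})],
         }),
    Step("archive", lambda s: {**s, "archived": True}),
]

print(execute(plan, {"cloud_fraction": 0.55}))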

01 Jan 1994
TL;DR: In this article, the authors provide a perspective on the application of artificial intelligence to traffic and transport, based on an overview of over 1000 items of the literature. Originally prepared for the European Communities' DRIVE project on the Applicability of Artificial Intelligence to Traffic and Transport (ATTAIN), it has been updated to reflect the considerable growth in the literature since.
Abstract: This chapter provides a perspective on the development of the application of artificial intelligence to traffic and transport, based on an overview of over 1000 items of the literature. Originally prepared for the European Communities' DRIVE project on the Applicability of Artificial Intelligence to Traffic and Transport (ATTAIN), it has been updated to reflect the considerable growth in the literature since. Its scope covers all aspects of artificial intelligence and all fields of transport. Sources cover work in Europe, the USA, Canada, Australia and Japan. Few systems are reported to have been validated and to have entered into operational use. Knowledge-based systems are the dominant type of system, with use of expert system shells particularly prevalent in the USA and Canada, especially for highways-related work. Use of neural networks is growing fast, and now accounts for 13 percent of the literature overall; some use of genetic algorithms is reported. (A) For the covering abstract see IRRD 869506.

Book
01 Nov 1994
TL;DR: In this book, the authors address the major functional areas of telecommunication networks: planning, scheduling, monitoring, control, fault classification and diagnosis, training and help desks, using AI techniques including neural networks, expert systems, the integration of rule-based systems with case-based reasoning systems, genetic algorithms, distributed AI and intelligent tutoring systems.
Abstract: From the Publisher: Telecommunications firms worldwide are actively involved in AI applications for resolving network management and telecommunications problems. This book addresses the following major functional areas: planning, scheduling, monitoring, control, fault classification and diagnosis, training and help desks. Recent and emerging AI techniques are applied, including neural networks, expert systems, the integration of rule-based systems with case-based reasoning systems, genetic algorithms, distributed AI and intelligent tutoring systems. Readers: researchers and professionals in telecommunications, AI experts and graduate students.

Journal ArticleDOI
TL;DR: This work selected an inherently parallel knowledge representation and reasoning method (marker-passing in a semantic network), developed a natural-language processor based on it, and implemented the resulting memory-based parsing system, called Parallel, on a marker-passing parallel computer especially designed for natural-language processing.
Abstract: Massively parallel computers offer not only improved speed but also a new perspective on computer vision, production systems, neural networks, and other AI applications. However, not much work has been done to apply parallel processing to natural-language processing, even though most sequential natural-language systems slow down as knowledge bases grow to realistic sizes and as linguistic features are added to handle special cases. To demonstrate the potential of parallel systems for natural-language processing, we selected an inherently parallel knowledge representation and reasoning method (marker-passing in a semantic network) and then developed a natural-language processor based on it. We implemented the memory-based parsing system, called Parallel, on a marker-passing parallel computer especially designed for natural-language processing.
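A sequential toy version of the marker-passing idea (the paper runs it on a massively parallel machine; the network, hop limit and concept names below are invented): markers spread outward from two concepts, and the nodes where they collide suggest how the concepts are related.

from collections import deque

# A tiny semantic network: concept -> directly linked concepts.
network = {
    "waiter":     ["restaurant", "person"],
    "restaurant": ["food", "menu", "waiter"],
    "order":      ["menu", "food"],
    "menu":       ["food", "restaurant", "order"],
    "food":       [],
    "person":     [],
}

def spread(source, hops=2):
    # breadth-first marker propagation from one source concept
    marked, frontier = {source: 0}, deque([source])
    while frontier:
        node = frontier.popleft()
        if marked[node] == hops:
            continue
        for nxt in network.get(node, []):
            if nxt not in marked:
                marked[nxt] = marked[node] + 1
                frontier.append(nxt)
    return marked

def collisions(a, b, hops=2):
    # concepts reached by markers from both sources are candidate connections
    ma, mb = spread(a, hops), spread(b, hops)
    return {n: ma[n] + mb[n] for n in ma.keys() & mb.keys()}

print(collisions("waiter", "order"))   # markers from both concepts meet at "menu", "food" and "restaurant"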

Journal ArticleDOI
TL;DR: It is concluded that AI constitutes a collective form of intellectual property and that there is a need for better documentation, evaluation and regulation of the systems already being used widely in clinical laboratories.

Journal ArticleDOI
TL;DR: The more directly experiential virtual reality (VR) more closely reflects the contemporary cultural climate of postmodernism; it is VR, rather than AI, that is more likely to form the basis of a culture of the artificial.
Abstract: The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of the dominant philosophy of logical positivism. By contrast, artificial intelligence (AI) adopted an anti-positivist position which left it marginalised until the 1980s when two camps emerged: technical AI which reverted to positivism, and strong AI which reified intelligence. Strong AI's commitment to the computer as a symbol processing machine and its use of models links it to late-modernism. The more directly experiential Virtual Reality (VR) more closely reflects the contemporary cultural climate of postmodernism. It is VR, rather than AI, that is more likely to form the basis of a culture of the artificial.

01 Jan 1994
TL;DR: The major limitations of current traffic control technology and some of the problems addressed by AI and KB systems in this domain will be presented.
Abstract: This paper deals with several applications of artificial intelligence and knowledge-based systems to road traffic control. It highlights aspects and problems of traffic control where the application of AI and KB methods can be beneficial, or has already proved to be beneficial in several pilot systems developed to date, and discusses how such methods can be applied. First, the major limitations of current traffic control technology and some of the problems addressed by AI and KB systems in this domain are presented. A prototype system developed and evaluated in recent years in the framework of a DRIVE project is then discussed. (A) For the covering abstract see IRRD 869506.

Journal ArticleDOI
TL;DR: In this paper, the authors address the use of software tools by engineers, operators, and planners to handle complex issues in less time using artificial intelligence (AI) techniques.
Abstract: This article addresses the use of software tools by engineers, operators, and planners to handle complex issues in less time. The topics of the article include applications of artificial intelligence (AI) in power system operations, such as customer restoration and fault testing, voltage and VAR dispatch, dynamic security analysis, voltage collapse, control center load management assistance, optimal power flow, and power system restoration; and applications in power system planning, such as training, data verification, stability studies, load forecasting, outage scheduling, ranking of alternatives, power transaction evaluation, and evaluation of third-party generation alternatives.

Journal ArticleDOI
TL;DR: In this article, the authors apply principles of artificial intelligence to computer-based test interpretations (CBTIs), which are programs for transforming psychological test data into interpretive reports, compared against an evolutionary scale ranging from simple algorithmic systems, which execute rules and equations and trigger verbal statements in an unvarying sequence, to systems that are capable of learning.

Journal ArticleDOI
TL;DR: In this paper, the potential applications within telecommunications of the whole range of artificial intelligence technologies (i.e., expert systems, natural language understanding, speech recognition and understanding, machine translation, visual recognition and analysis, and robotics) are discussed for several areas of a telecommunications company's operations.

Proceedings ArticleDOI
12 Apr 1994
TL;DR: The paper describes two parts of a project carried out at the University of Dundee for Scottish Hydro-Electric plc on the use of an artificial intelligence system for alarm processing and fault diagnosis using the KappaPC toolkit operating on a 486 IBM compatible PC under Microsoft Windows 3.1.
Abstract: Alarm processing is a traditional feature of energy management systems (EMS) and has not changed significantly over several generations of SCADA design. However, recent applications of artificial intelligence have dramatically altered the methods of handling this information. This paper describes two parts of a project carried out at the University of Dundee for Scottish Hydro-Electric plc (HE) on the use of an artificial intelligence system for alarm processing and fault diagnosis. The first part of the project was an overview and comparison study of three real-time object-oriented toolkits: Muse, Kappa and Nexpert Object. The study is based on the capabilities of such toolkits to handle power system alarm processing, integration with external programs and real-time databases, portability, price and execution speed. Some advantages and drawbacks of each toolkit are also pointed out. The second part of the project was the implementation of an object-oriented expert system using the KappaPC toolkit operating on a 486 IBM-compatible PC under Microsoft Windows 3.1. The structure of the object-oriented expert system captures the heuristic knowledge used for power system operation. The knowledge base is automatically updated by the existing SCADA system as the power system status changes. The paper also describes the features of the real-time object-oriented expert system, which include the need for fast deep-level reasoning, the easy maintainability of object-oriented programming and the end user's interface.
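Not the Dundee/KappaPC system itself, just a skeletal forward-chaining sketch of the kind of heuristic alarm interpretation described above; the alarm formats and rules are invented.

rules = [
    ("breaker trip plus protection operated on the same feeder implies a feeder fault",
     lambda facts: {f"fault on feeder {x.split()[-1]}"
                    for x in facts
                    if x.startswith("breaker open")
                    and ("protection operated " + x.split()[-1]) in facts}),
    ("two or more feeder faults suggest a busbar fault",
     lambda facts: {"busbar fault suspected"}
                   if sum(1 for x in facts if x.startswith("fault on feeder")) >= 2
                   else set()),
]

def diagnose(alarms):
    # naive forward chaining: keep firing rules until no new conclusions appear
    facts = set(alarms)
    changed = True
    while changed:
        changed = False
        for _, rule in rules:
            new = rule(facts) - facts
            if new:
                facts |= new
                changed = True
    return facts - set(alarms)          # only the derived, higher-level conclusions

print(diagnose({"breaker open F1", "protection operated F1",
                "breaker open F2", "protection operated F2"}))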

01 Apr 1994
TL;DR: Interval reduction appears to be a general monitoring strategy; it emerges in humans and artificial agents when they are given the same set of monitoring problems.
Abstract: Title: Common Lisp Instrumentation Package: User Manual. Authors: David L. Westbrook, Scott D. Anderson, David M. Hart and Paul R. Cohen. Address: Experimental Knowledge Systems Laboratory, Dept. of Computer Science, LGRC, Box 34610, Lederle Graduate Research Center, Univ. of Massachusetts, Amherst, MA 01003-4610. Date: April 1994.
Monitoring is the process by which agents assess their environments. Most AI applications rely on periodic monitoring, but for a large class of problems this is inefficient. The interval reduction monitoring strategy is better. It also appears in humans and artificial agents when they are given the same set of monitoring problems. We implemented two genetic algorithms to evolve monitoring strategies and a dynamic programming algorithm to find an optimum strategy. We also developed a simple mathematical model of monitoring. We tested all these strategies in simulations, and we tested human strategies in a "video game." Interval reduction always emerged. Environmental factors such as error and monitoring costs had the same qualitative effects on the strategies, irrespective of their genesis. Interval reduction appears to be a general monitoring strategy.

Book ChapterDOI
01 Jan 1994
TL;DR: In this paper, the authors attempt to clear the names of artificial intelligence (AI) and computational cognitive science (CCS) and argue that the two related disciplines have been accused of a conceptual error so profound that their very existence is jeopardized.
Abstract: Publisher Summary This chapter is an attempt to clear the names of artificial intelligence (AI) and computational cognitive science. These two related disciplines have been accused of a conceptual error so profound that their very existence is jeopardized. Sometimes, however, philosophers successfully arrest and lock up the guilty. The best example of this, ironically, is in psychology. Artificial intelligence and computational cognitive science are both committed to the claim that computers can think. The former is committed to the claim that human-made computers can think, while computational cognitive science is committed to the view that naturally occurring computers, brains, think. AI is the field dedicated to building intelligent computers. AI ultimately wants a machine that could solve very difficult, novel problems like proving Fermat's last theorem, correcting the greenhouse effect, or figuring out the fundamental structure of space-time. Historically, AI is associated with computer science, but the compleat AI researcher frequently knows a fair amount of psychology, linguistics, neuroscience, mathematics, and possibly some other discipline.

Book ChapterDOI
TL;DR: The Cellular Abstract Machine (CAM) is presented, a tool for building hybrid symbolic-numeric systems in the form of heterogeneous networks of cooperating agents (here called cells), implemented over a parallel architecture (a Transputer network).
Abstract: The interest in new hybrid AI models, both symbolic and numeric, is currently increasing due to the complementary capabilities of these models. We present here the Cellular Abstract Machine (CAM), a tool for building such hybrid systems, taking the form of heterogeneous networks of cooperating agents (here called cells). Several AI applications have been written using the CAM, including a sample hybrid system. The CAM is implemented over a parallel architecture (a Transputer network). We give here the basic principles of the parallel implementation.
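The CAM itself targets a Transputer network; a sequential toy of the underlying idea (heterogeneous cells exchanging messages, here one numeric and one symbolic, with invented names) might look like this:

from collections import deque

class Cell:
    # a cell owns a mailbox and a local behaviour that may emit messages to other cells
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour
        self.inbox = deque()

    def step(self, post):
        while self.inbox:
            for target, msg in self.behaviour(self.inbox.popleft()):
                post(target, msg)

def run(cells, initial, rounds=5):
    index = {c.name: c for c in cells}
    post = lambda target, msg: index[target].inbox.append(msg)
    for target, msg in initial:
        post(target, msg)
    for _ in range(rounds):            # a crude sequential stand-in for parallel execution
        for c in cells:
            c.step(post)

# a numeric cell thresholds a sensor reading; a symbolic cell turns the result into advice
numeric  = Cell("detector", lambda v: [("advisor", "high")] if v > 0.8 else [])
symbolic = Cell("advisor",  lambda tag: [("log", f"raise alarm ({tag} reading)")])
logger   = Cell("log",      lambda text: print("LOG:", text) or [])

run([numeric, symbolic, logger], initial=[("detector", 0.93)])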

Journal ArticleDOI
TL;DR: A new approach to planning collision-free motions for general real-life six-degrees-of-freedom (d.o.f.) manipulators is presented, based on a simple object model previously developed; computational cost is kept to the strict minimum by selecting the most adequate level of representation.
Abstract: The collision-free planning of motion is a fundamental problem for artificial intelligence applications in robotics. The ability to compute a continuous safe path for a robot in a given environment will make possible the development of task-level robot planning systems so that the implementation details and the particular robot motion sequence will be ignored by the programmer.
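The planner itself is not shown here; as a sketch of the "choose the cheapest adequate level of representation" idea from the summary above (geometry and thresholds invented), a collision test can consult coarse bounding spheres first and fall back to finer part spheres only when the coarse test is inconclusive.

import math

def spheres_overlap(a, b):
    (ax, ay, ar), (bx, by, br) = a, b
    return math.hypot(ax - bx, ay - by) < ar + br

def collides(robot, obstacle):
    # two-level test: a cheap bounding sphere first; the finer part spheres
    # are consulted only when the coarse level cannot rule out contact
    if not spheres_overlap(robot["bound"], obstacle["bound"]):
        return False
    return any(spheres_overlap(r, o)
               for r in robot["parts"] for o in obstacle["parts"])

robot    = {"bound": (0.0, 0.0, 3.0), "parts": [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0)]}
obstacle = {"bound": (6.0, 0.0, 2.0), "parts": [(6.0, 0.0, 1.5)]}
print(collides(robot, obstacle))   # False: the coarse level already rules out a collision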

Book ChapterDOI
04 Jul 1994
TL;DR: This paper characterizes ad hoc reuse, design for reuse, domain-specific reuse and domain analysis from an uncertainty-based perspective, and presents and motivates key aspects of a specific DA method.
Abstract: Although software reuse research has borrowed extensively from artificial intelligence techniques and methods, there has been little explicit discussion in reuse research of uncertainty management, an area of critical importance in many AI applications. Yet several fundamental reuse issues, particularly in domain analysis methods and processes, can be usefully framed as problems of uncertainty. This paper characterizes ad hoc reuse, design for reuse, domain-specific reuse and domain analysis from an uncertainty-based perspective, and presents and motivates key aspects of a specific DA method, Organization Domain Modeling (ODM), as examples of uncertainty management strategies in domain analysis methods and processes.

Journal ArticleDOI
TL;DR: The purpose of this paper is to review the use of knowledge-based systems and artificial intelligence (AI) in business; it examines technical issues which are central to the construction of business AI systems and the commercial contribution made by methods for the development of AI systems.
Abstract: The purpose of this paper is to review the use of knowledge-based systems and artificial intelligence (AI) in business. Part I of this paper provided a broad survey of the use of AI in business, summarizing the application of AI in a number of business domains. In addition, it also provided a summary of the use of different forms of knowledge representation in business applications. Part I has a large set of references, including a number of survey papers, focusing on AI in business. Part II of this paper consists of a more detailed analysis of particular systems or issues affecting AI in business. It examines technical issues which are central to the construction of business AI systems, and it also examines the commercial contribution made by methods for the development of AI systems. In addition, part II looks at integration between AI and more traditional information systems. AI can be used to add value to many existing information systems, such as database management systems. Particular attention is given to the integration of AI with operations research, which is one of the primary “competitors” of AI, providing an alternative set of support tools for decision making. Business organizations are not concerned only with technology issues; there is also concern about the impact of AI on organizations. Further, the evaluation of AI often is based on an economic view of the world. Part II therefore investigates the organizational impact of AI, and the economics of AI, including issues such as value creation. The format of Part II is as follows: Section 8 analyses techniques for improving the performance of AI systems, thus maximizing economic return. Section 9 looks at different forms of uncertainty and ambiguity which must be dealt with by AI systems. It examines the contributions of fuzzy logic and numerical measures of certainty to handling these problems. Section 10 examines the usefulness of different approaches to knowledge acquisition in business situations, and investigates the benefits of methodological approaches to AI applications. It also looks at more recent AI programming techniques which eliminate the need for knowledge elicitation from an expert: neural networks, case-based reasoning and genetic algorithms are discussed. Sections 11 and 12 examine issues of integrating AI systems. Generally, the use of AI in business settings must ultimately be integrated with the broader base of corporate information systems. Section 11 looks at integration with information systems in general, and section 12 looks particularly at integration with operations research. Sections 13 and 14 review the organizational and economic impact of AI. Finally, section 15 provides a brief summary of part II.

Journal Article
TL;DR: For the grand challenge to succeed, massive computing power, massive data resources, and sophisticated modeling will be critical.
Abstract: The proliferation of massively parallel machines has passed through a first stage in which researchers learned what such machines are like. It has now come to a second stage in which researchers are asked to show visions for real applications. The author argues that Grand Challenge AI applications should be proposed and pursued. These applications should have significant social, economic and scientific impact and serve as showcases for the accomplishments of massively parallel AI. For the grand challenge to succeed, massive computing power, massive data resources, and sophisticated modeling will be critical. At the same time, it is argued that such efforts should be promoted as international projects.

Proceedings ArticleDOI
10 May 1994
TL;DR: The concepts of AI and its six main divisions or sub-fields are introduced, and then experience with specific ES applications in the U.S. Navy are discussed, along with practical recommendations for implementing AI technology.
Abstract: Artificial Intelligence (AI) has entered center stage. In the last several years AI has gained significant visibility in the business community, the public domain, and academia. Expert Systems (ES), the dominant sub-field in the AI arena today, offer extensive commercial, scientific, and military applications. This paper introduces the concepts of AI and its six main divisions or sub-fields, and then focuses on the architecture and development of expert systems. Experience with specific ES applications in the U.S. Navy is discussed, along with practical recommendations for implementing AI technology.

Book ChapterDOI
01 Jan 1994
TL;DR: AI expert systems allow data fusion to utilize information from different NDE sensors and can deal with uncertainty more effectively; these and other AI capabilities have been applied successfully to several NDE systems.
Abstract: To meet the increased demand for reliable inspection in complex NDE tasks, artificial intelligence (AI) can provide new and effective approaches to many problems. AI expert systems (often rule-based) offer solutions to problems for which numerical algorithms may not be suitable and for which nonnumeric information is important. Expert systems allow data fusion to utilize information from different NDE sensors, and can deal with uncertainty more effectively. Another advance in AI is in artificial neural networks, which have been increasingly used in NDE. These and other AI capabilities have been applied successfully to several NDE systems, which will be discussed in this paper.