
Showing papers in "IEEE Computer in 1986"


Journal ArticleDOI
Ahuja, Carriero, Gelernter
TL;DR: Linda consists of a few simple operators designed to support and simplify the construction of explicitly-parallel programs; its mission is to make it largely unnecessary to think about the coupling between parallel processes.
Abstract: Linda consists of a few simple operators designed to support and simplify the construction of explicitly-parallel programs. Linda has been implemented on AT&T Bell Labs' S/Net multicomputer and, in a preliminary way, on an Ethernet-based MicroVAX network and an Intel iPSC hypercube. Parallel programming is often described as being fundamentally harder than conventional, sequential programming, but in our experience (limited so far, but growing) it isn't. Parallel programming in Linda is conceptually the same order of task as conventional programming in a sequential language. Parallelism does, though, encompass a potentially difficult problem. A conventional program consists of one executing process, of a single point in computational time-space, but a parallel program consists of many, and to the extent that we have to worry about the relationship among these points in time and space, the mood turns nasty. Linda's mission, however, is to make it largely unnecessary to think about the coupling between parallel processes. Linda's uncoupled processes, in fact, never deal with each other directly. A parallel program in Linda is a spatially and temporally unordered bag of processes, not a process graph.
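
To make the tuple-space style concrete, here is a minimal sketch of Linda-like operators in Python. It is not the S/Net implementation the abstract mentions; the operator names out, in_, and rd follow Linda's, but the list-based store, the matching rule, and the worker example are assumptions made purely for illustration.

```python
# Minimal sketch of a Linda-style tuple space (illustrative only).
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):                      # add a tuple to the bag
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, pattern):                # None acts as a wildcard field
        for t in self._tuples:
            if len(t) == len(pattern) and all(p is None or p == v
                                              for p, v in zip(pattern, t)):
                return t
        return None

    def in_(self, *pattern):                  # withdraw a matching tuple (blocks)
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

    def rd(self, *pattern):                   # read without removing (blocks)
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

if __name__ == "__main__":
    ts = TupleSpace()

    def worker():
        # Processes never address each other directly; they only touch the bag.
        _, x = ts.in_("task", None)
        ts.out("result", x * x)

    threading.Thread(target=worker).start()
    ts.out("task", 7)
    print(ts.in_("result", None))             # ('result', 49)
```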

572 citations


Journal ArticleDOI

331 citations



Journal ArticleDOI
TL;DR: The main goal in view integration is to aid the designer in identifying possible ways of integration and to help him resolve inconsistencies while working with large, real-life problems.
Abstract: Designing a database is an important data engineering activity, and view integration is one of the important phases in logical database design. During this phase, the individual views designed by separate groups are integrated into a conceptual schema for the entire organization. The data may originate from two sources: * User views - the perception of users about what a proposed database (or an ideal database) should contain. * Existing database schemas - the description of data in an existing information system, either manual or automated. User view integration is applicable to initial design, while existing schema integration applies to existing databases. Here, view integration refers to the activity of designing a global structure (integrated schema) starting from individual component structures (views). We feel that view integration can be accomplished only with interactive design tools and a continuous dialog with the designer. Integration is thus somewhat subjective, with the designer helping to resolve semantic conflicts. The main goal in view integration is to aid the designer in identifying possible ways of integration and to help him resolve inconsistencies while working with large, real-life problems.
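
As a toy illustration of merging component views into one integrated schema, the sketch below simply unions attribute sets per entity. The dict representation, the entity names, and the union rule are assumptions for this example; the article's point is that real integration needs interactive tools and designer judgment to resolve semantic conflicts.

```python
# Toy sketch of merging two user views into one integrated schema.
def integrate(view_a, view_b):
    merged = {}
    for view in (view_a, view_b):
        for entity, attrs in view.items():
            merged.setdefault(entity, set()).update(attrs)
    return merged

payroll_view = {"Employee": {"id", "name", "salary"}}
project_view = {"Employee": {"id", "name", "dept"}, "Project": {"pno", "budget"}}
print(integrate(payroll_view, project_view))
# {'Employee': {'id', 'name', 'salary', 'dept'}, 'Project': {'pno', 'budget'}}
```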

240 citations


Journal ArticleDOI
Goguen
TL;DR: Suggestions given here include systematic (but limited) use of semantics, by explicitly attaching theories to software components with views (which describe semantically correct interconnections at component interfaces); use of generic entities, to maximize reusability.
Abstract: This article covers some problems, concepts, and approaches relevant to environments for creating, documenting, and maintaining large software systems. The goal is to make programming significantly easier, more reliable, and cost effective by reusing previous code and programming experience to the greatest extent possible. Suggestions given here include: systematic (but limited) use of semantics, by explicitly attaching theories (which give semantics, either formal or informal) to software components with views (which describe semantically correct interconnections at component interfaces); use of generic entities, to maximize reusability; a distinction between horizontal and vertical composition; use of a library interconnection language, called LIL, to assemble large programs from existing entities; support for different levels of formality in both documentation and validation; and facilitation of program understanding by animating abstract data types and module interfaces. ADA is used for examples because it has some convenient features, but the proposals also apply to other languages.

229 citations


Journal ArticleDOI
TL;DR: Concurrent Prolog is a logic programming language designed for concurrent programming and parallel execution that embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms.
Abstract: Concurrent Prolog is a logic programming language designed for concurrent programming and parallel execution. It is a process oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms.

203 citations


Journal ArticleDOI
TL;DR: Experience with designing the same or similar artifacts allows an engineer to quickly solve the problem, and knowledge plays a key role in carrying out this task.
Abstract: Designing an artifact requires domain knowledge (knowledge specific to the class of artifact being defined) as well as considerable problem-solving skill. Typically, one starts with a description of what function or functions the artifact should perform, and the task of "design" becomes one of coming up with an artifact that will function as intended. Knowledge plays a key role in carrying out this task. Experience with designing the same or similar artifacts allows an engineer to quickly solve the problem. Some of this experience can be viewed as knowledge that associates some of the requirements to parts of the artifact that carry out those requirements. Similarly, past experience can teach us how to

200 citations


Journal ArticleDOI
TL;DR: The Multilisp language is used at M.I.T. for experiments in parallel symbolic programming; the article explores the problems and opportunities of parallel symbolic computing.
Abstract: Programs differ from one another in many dimensions. In one such dimension, programs can be laid out along a spectrum with predominantly symbolic programs at one end and predominantly numerical programs at the other. The differences between numerical and symbolic programs suggest different approaches to parallel processing. This article explores the problems and opportunities of parallel symbolic computing and describes the language Multilisp, used at M.I.T. for experiments in parallel symbolic programming.
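
Multilisp's central parallel construct is the future. A rough Python analogue using the standard concurrent.futures module (an assumption made for illustration only; Multilisp itself is a Lisp dialect with a future special form) looks like this:

```python
# Rough analogue of Multilisp-style futures: start a computation, keep going,
# and force the value only when it is actually needed.
from concurrent.futures import ThreadPoolExecutor

def fib(n):                          # a small recursive, symbolic-style computation
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor() as pool:
    a = pool.submit(fib, 20)         # "future": runs concurrently with the caller
    b = pool.submit(fib, 21)
    print(a.result() + b.result())   # touching the futures forces their values
```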

172 citations


Journal ArticleDOI
TL;DR: A large unstructured collection of rules clearly lacks validity as a realistic model of design because reducing all knowledge to a single form does not recognize that there are many different types of knowledge used in any design problem-solving activity.
Abstract: Most first-generation expert systems have been rule-based with a separate inference engine. However, a large unstructured collection of rules clearly lacks validity as a realistic model of design because reducing all knowledge to a single form does not recognize that there are many different types of knowledge used in any design problem-solving activity. Using such a collection of rules also does not recognize that design knowledge forms into clusters. Nor does it specify where or when this knowledge is to be applied, since different clusters of knowledge may be applicable at different times. Similarly, using a single, central all-purpose inference engine ignores the richness of design problem-solving. Yet another problem is the potential for unfocused system behavior because all rules have equal status in the system and have equal potential for use. Many systems structure rules into sets. However, these sets are based on subtasks rather than on types of knowledge,1-3 and the problem solving is still uniform, as the same inference engine acts on each rule set. Advocates of such structuring claim the subtasks can be solved linearly with no backtracking between tasks and with only minimal backtracking within tasks. Such subtask structure tells us more about the nature of the domain than about design, since it is clear that design decisions of any kind can often be wrong and, if so, will lead to attempts to recover from failure. The uniform rule representation and the lack of knowledge-dependent structure does not provide clear predictions about an expert's failure-recovery behavior. These problems stem mainly from a basic mismatch between the level of the tools available to build systems and the level of abstraction of the design task.

164 citations


Journal ArticleDOI
TL;DR: A multiprocessor for your desk will come packaged as a VLSI workstation called SPUR, once the team at UC Berkeley finds a partner to transfer their upcoming prototype to industry.
Abstract: A multiprocessor for your desk will come packaged as a VLSI workstation called SPUR, once the team at UC Berkeley finds a partner to transfer their upcoming prototype to industry.

154 citations


Journal ArticleDOI
Wiederhold
TL;DR: Objects provide a useful abstraction in programming languages; views provide a similar abstraction in databases, and since databases provide for persistent and shared data storage, view concepts will avoid problems occurring when persistent objects are to be shared.
Abstract: Objects provide a useful abstraction in programming languages; views provide a similar abstraction in databases. Since databases provide for persistent and shared data storage, view concepts will avoid problems occurring when persistent objects are to be shared. Direct storage of objects disables sharing.
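
A small sketch of the contrast the abstract draws: a view is recomputed on demand from shared base data rather than stored as a separate object copy. The employee rows and the hw_view function are invented for the example.

```python
# A view as a derived window over shared base data, not a stored object copy.
employees = [
    {"id": 1, "name": "Lee", "dept": "HW", "salary": 61000},
    {"id": 2, "name": "Rao", "dept": "SW", "salary": 58000},
]

def hw_view():
    # Recomputed from the shared base relation on every call, so all users of
    # the view see current data and nothing is duplicated in storage.
    return [{"id": e["id"], "name": e["name"]} for e in employees
            if e["dept"] == "HW"]

employees.append({"id": 3, "name": "Kim", "dept": "HW", "salary": 64000})
print(hw_view())   # reflects the new row without any explicit propagation
```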

Journal ArticleDOI
TL;DR: The methodology outlined here serves two purposes: one, it provides a way of choosing the appropriate architecture for a class of applications and, two, it gives a method of determining how good this choice was by actually mapping the algorithm onto the architecture and simulating the execution.
Abstract: Steps to optimize the performance of a multicomputer system (MCS) are discussed. Three aspects are emphasized: (1) the interconnection scheme that ties all the processors together, (2) the scheduling and mapping of the algorithm on the architecture, and (3) the mechanism for detecting parallelism and partitioning the algorithm into modules which achieve computational speedup when run on an MCS. Mapping and scheduling issues are addressed, and an application example is given.
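
As a hypothetical illustration of the mapping step (aspect 2), the sketch below greedily assigns algorithm modules to the least-loaded processor of an MCS. The module names, costs, and the four-processor machine are invented; a real mapping would also weigh the interconnection scheme and communication costs.

```python
# Greedy list-scheduling sketch: place each module on the least-loaded processor.
import heapq

def map_modules(costs, n_processors):
    loads = [(0.0, p) for p in range(n_processors)]    # (current load, processor id)
    heapq.heapify(loads)
    assignment = {}
    for module, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(loads)              # least-loaded processor
        assignment[module] = proc
        heapq.heappush(loads, (load + cost, proc))
    return assignment, max(l for l, _ in loads)        # placement and makespan estimate

modules = {"fft": 8.0, "filter": 5.0, "decode": 4.0, "log": 1.0, "io": 2.0}
placement, makespan = map_modules(modules, n_processors=4)
print(placement, makespan)
```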

Journal ArticleDOI
TL;DR: This article deals with a standardized class of parallel machines; among the most attractive examples of array units discussed in the literature are the chip structure and, most notably, the class of systolic arrays.
Abstract: In this article, the authors deal with the standardized class of parallel machines. Fast, highly parallel, dedicated array units are well suited to VLSI or even WSI implementation because of the extreme regularity of their architecture and their interconnection locality. Given these attributes, it is reasonable, as H.T. Kung suggests, to look for algorithms inherently suited to such arrays (signal-processing algorithms, for instance, fall within this class). Among the most attractive examples of array units discussed in the literature are the chip structure and, most notably, the class of systolic arrays. On such architectures it is possible to activate a wavefront computation mode, in which computation propagates along one direction only for the various interconnection axes. The systems considered here are, then, regular interconnections of processing elements (cells), with information flowing in one direction only along all interconnection lines. The authors require that no memory devices be present in the array, with the possible exception of local "service" memories (for example, registers in serial arithmetic units). This limited use of memory elements is acceptable for attached processors that generally communicate by means of I/O lines with the main memories.
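
To make "information flowing in one direction only along all interconnection lines" concrete, here is a small software simulation of a systolic matrix multiply in which A values flow east and B values flow south, one cell per clock tick. The 3x3 matrices and the simulation style are assumptions for illustration, not a design taken from the article.

```python
# Software simulation of a systolic matrix multiply on an n x n cell array.
def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    a_reg = [[0] * n for _ in range(n)]       # A value each cell latched last tick
    b_reg = [[0] * n for _ in range(n)]       # B value each cell latched last tick
    for t in range(3 * n - 2):                # enough ticks to drain the array
        new_a = [[0] * n for _ in range(n)]
        new_b = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Boundary cells read skewed inputs; interior cells read neighbours.
                a_in = a_reg[i][j - 1] if j > 0 else (A[i][t - i] if 0 <= t - i < n else 0)
                b_in = b_reg[i - 1][j] if i > 0 else (B[t - j][j] if 0 <= t - j < n else 0)
                C[i][j] += a_in * b_in        # multiply-accumulate inside the cell
                new_a[i][j], new_b[i][j] = a_in, b_in
        a_reg, b_reg = new_a, new_b           # data advances one cell per tick
    return C

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
B = [[1, 0, 2], [0, 1, 0], [3, 1, 1]]
print(systolic_matmul(A, B))                  # matches the ordinary matrix product
```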

Journal ArticleDOI
TL;DR: The rule-based FMS production scheduling system described here is based on an earlier system developed with a traditional approach.
Abstract: Flexible automation can enhance productivity dramatically, but it does so by increasing the complexity of operations and the difficulty of production scheduling, which often means that human decisions fail quality and timing goals. A flexible manufacturing system (FMS) can process many types of parts produced in lots, from the release times of raw materials to the due dates of completed parts. An FMS requires production scheduling that can handle the changes flexibility demands. Production scheduling - determining a schedule (a sequence) of part lots to be machined in the FMS - must meet the due dates of lots while taking into account several related problems, such as (1) minimizing machine idle times, (2) queues at machines, and (3) work in progress. Thus, the FMS manager must be provided with adequate software support for two tasks: * Production scheduling. This scheduling takes a medium-term horizon (two weeks) and determines the estimated starting times of lots to allocate auxiliary resources, such as manpower for arranging pallets. The scheduling is subject to several constraints, such as planned maintenance periods of machines and raw material availability times. * Real-time rescheduling. This activity is invoked when the planned schedule must be modified because unexpected events occur, for example, when a machine breaks down or raw materials are not available on time. The rule-based FMS production scheduling system described here is based on an earlier system developed with a traditional approach.

Journal ArticleDOI
Srini1
TL;DR: An architectural comparison of seven dataflow processors based on sixteen criteria forms the major part of the article and presents the challenges in the design of processors.
Abstract: A Distributed Computer System, or DCS, based on the dataflow model of computation has the potential to concurrently execute a large number of tasks employing potentially thousands of processors. Since control has to be distributed and asynchronous in such a DCS, several new challenges appear in the design of processors, memory, and interconnection networks. Some of the challenges in the design of processors are presented in this article. An architectural comparison of seven dataflow processors based on sixteen criteria forms the major part of the article.

Journal ArticleDOI
TL;DR: 'ADMS+/-' is an advanced data base management system whose architecture integrates the ADMS+ mainframe data base system with a large number of work station data base systems, designated ADMS-; no communications exist between these work stations.
Abstract: 'ADMS+/-' is an advanced data base management system whose architecture integrates the ADMS+ mainframe data base system with a large number of work station data base systems, designated ADMS-; no communications exist between these work stations. The use of this system radically decreases the response time of locally processed queries, since the work station runs in a single-user mode, and no dynamic security checking is required for the downloaded portion of the data base. The deferred update strategy used reduces overhead due to update synchronization in message traffic.
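
A toy sketch of a deferred-update arrangement in the spirit of the abstract: the mainframe keeps the master data plus an update log, and the workstation refreshes its downloaded copy from that log lazily, when it next runs a query. The class names and log format are assumptions for illustration, not the ADMS+/- interfaces.

```python
# Deferred-update sketch: the workstation catches up from the mainframe's log
# only at query time, so no synchronous update propagation is needed.
class Mainframe:
    def __init__(self, rows):
        self.rows, self.log = list(rows), []
    def insert(self, row):
        self.rows.append(row)
        self.log.append(("insert", row))

class Workstation:
    def __init__(self, mainframe):
        self.mf = mainframe
        self.copy = list(mainframe.rows)     # downloaded portion of the database
        self.applied = len(mainframe.log)    # how much of the log has been seen
    def query(self, pred):
        for op, row in self.mf.log[self.applied:]:   # lazy catch-up
            if op == "insert":
                self.copy.append(row)
        self.applied = len(self.mf.log)
        return [r for r in self.copy if pred(r)]     # answered locally

mf = Mainframe([("p1", 10), ("p2", 25)])
ws = Workstation(mf)
mf.insert(("p3", 40))                        # mainframe update after the download
print(ws.query(lambda r: r[1] > 20))         # [('p2', 25), ('p3', 40)]
```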

Journal ArticleDOI
Keller1
TL;DR: The approach involves imposing syntactic criteria on the view update translations, enumerating the alternative translations that satisfy these criteria,3 and then, at view definition time, using semantics to choose among these alternatives.
Abstract: Users can express queries and updates against views. How to handle queries expressed against views is well understood: The user's query is composed with the view definition so as to obtain a query that can be executed on the underlying database. Similarly, updates expressed against a view have to be translated into updates that can be executed on the underlying database. This problem has been considered by many researchers, including Bancilhon and Spyratos1 and Dayal and Bernstein.2 The difficulty is that a solution to this problem is inherently ambiguous. My approach involves imposing syntactic criteria on the view update translations, enumerating the alternative translations that satisfy these criteria,3 and then, at view definition time, using semantics to choose among these alternatives. Since in the common model of relational databases4 the view is only an uninstantiated window onto the database, any updates specified against the database view must be translated into updates against the underlying database. The updated database state then induces a new view state, and it is desirable that the new view state look as much as possible as if the user had performed the update directly on it. The update process is described by the following diagram:
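
The following sketch shows the problem in miniature: a projection view over an employee relation, and one syntactically valid translation of a view update into a base-table update. The table contents and the chosen translation are invented; the article's approach concerns enumerating such translations and using semantics at view-definition time to pick among them.

```python
# View-update sketch: an update issued against a projection view is translated
# into an update on the base relation so the induced new view state matches it.
base = [
    {"id": 1, "name": "Lee", "dept": "HW", "salary": 61000},
    {"id": 2, "name": "Rao", "dept": "SW", "salary": 58000},
]

def view():                                   # uninstantiated window: id, name, dept
    return [{"id": r["id"], "name": r["name"], "dept": r["dept"]} for r in base]

def update_view_dept(emp_id, new_dept):
    # One valid translation: change dept in the base table, leave the unseen
    # attribute (salary) untouched. Other translations exist in general.
    for r in base:
        if r["id"] == emp_id:
            r["dept"] = new_dept

update_view_dept(2, "HW")
print(view())   # the new view state reflects the update made "through" the view
```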

Journal ArticleDOI
TL;DR: The authors are concerned with the incorporation of more semantic modeling capabilities into database models and the development of better user environments, which include user-friendly interfaces and support different user views of the content and organization of the data.
Abstract: Database management systems evolved in response to the need for efficient maintenance of increasingly large amounts of data. The relatively slow speed of secondary storage devices holding the data is one of the main limitations of database design. Hence, the internal organization of databases has been the primary focus of research in the past. Another major influence on database research was the need to share information among a variety of users. In such an environment, strict rules governing the manipulation of data had to be imposed to preserve the integrity of the database and to guarantee privacy for each user. These needs led to the development of several basic data models. With the steadily increasing demand for user-oriented systems, however, new trends in database technology have evolved outside of the scope of the traditional data models. In this article the authors are concerned with two closely related efforts: (1) The incorporation of more semantic modeling capabilities into database models. (2) The development of better user environments, which include user-friendly interfaces and support different user views of the content and organization of the data.


Journal ArticleDOI
Stanley
TL;DR: Computer-integrated manufacturing, or CIM, is a way to achieve such integration through computers and computational techniques in design, planning, and manufacturing to create an economically competitive automated system.
Abstract: Factory automation, a relatively new technology, is being pursued by all industrial countries. It promises to increase factory productivity; improve product quality; relieve workers from repetitive and boring tasks and unpleasant, hazardous environments; and minimize production delays and resource waste. An economically competitive automated system integrates the various control processes and data used in design, manufacturing, sale, and service of products so that feedback from any of the processes can be used to effect better design, planning, control, and production. Computer-integrated manufacturing, or CIM, is a way to achieve such integration through computers and computational techniques in design, planning, and manufacturing. Central to CIM is a database management system that facilitates the sharing of data among the component systems. Recent efforts in manufacturing integration - the IPAD, ICAM, and AMRF projects, among others - have recognized the importance of a database management facility for managing diverse CAD/CAM data. Future CIM systems will likely be networks of heterogeneous computer systems, each of which will control and support the operations of numeric control machines, robots, transport vehicles, and machine tools. This network structure seems probable because (1) low-level manufacturing equipment is likely to be produced by different manufacturers who use different computer hardware and software for equipment control and support, and (2) factories that are moving towards automation are likely to add to existing computer facilities rather than to replace them entirely. Also, we can expect the component systems in a factory network to have widely differing data management facilities, from simple application programs, to file management systems, to full-scale DBMSs. Because the nature of a CIM system is heterogeneous, a number of database requirements must be considered. First, data to control and support design, manufacturing, sale, and service of products will be physically stored at and processed by the component systems. Data sharing among these component systems requires a common data model that explicitly defines the structures, constraints, and operations (e.g., expert rules) that

Journal ArticleDOI
TL;DR: FIS (for Fault Isolation System), developed in an ongoing research project sponsored at the US Naval Research Laboratory, is an expert system that directs or assists a technician in diagnosing faults in a piece of electronic equipment.
Abstract: In recent years, escalating maintenance costs for electronic equipment and the increasing power of artificial intelligence methods have led to research in the application of AI to electronics troubleshooting.1 FIS (for Fault Isolation System), developed in an ongoing research project sponsored at the US Naval Research Laboratory, is an expert system that directs or assists a technician in diagnosing faults in a piece of electronic equipment. FIS is a more highly developed version of an earlier system.2 FIS has more extensive knowledge acquisition capabilities and extensive minor improvements. It is written in Franz Lisp and runs on a VAX 11/780 computer. Figure 1 illustrates the context in which FIS is intended to be used. FIS was designed primarily to diagnose analog systems, isolating faults to the level of amplifiers, power supplies, and larger components. The methods employed in FIS are also applicable to the automatic generation of the programs that drive conventional automatic test equipment (ATE), to the real-time control of ATE, and to fault isolation in systems containing mechanical, hydraulic, optical, and other types of components. First, let me describe the diagnosis problem addressed by FIS. I assume that a knowledge engineer has documentation describing the function and structure of a specific piece of electronic gear called a unit under test (UUT). This documentation includes schematic and block diagrams, specified values of measurable parameters at various test points, and theory of operation. With this documentation, the knowledge engineer uses FIS to create a computer model of the UUT. Under the supervision of a technician, FIS later uses the model to recommend tests to make and analyzes the test results until faulty replaceable modules are identified. The following are the principal goals of the FIS project:
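
A highly simplified sketch of the recommend-test/analyze-result loop described above: keep a set of suspect modules, pick the test that best splits them, apply the (simulated) outcome, and repeat. The UUT model, the tests, and the seeded fault are all invented for the example and bear no relation to FIS's actual knowledge representation.

```python
# Simplified diagnose loop: choose the most discriminating test, apply its
# result, shrink the suspect set, repeat until one module remains.
suspects = {"amp1", "amp2", "psu"}
# For each test: the suspects that would make that measurement read out of spec.
tests = {
    "T1_output_level": {"amp1", "amp2"},
    "T2_rail_voltage": {"psu"},
    "T3_stage2_gain":  {"amp2"},
}
true_fault = "amp2"                               # seeded fault for the simulation

while len(suspects) > 1:
    # Pick the test whose implicated set splits the current suspects most evenly.
    name, implicated = min(
        tests.items(),
        key=lambda kv: abs(len(kv[1] & suspects) - len(suspects - kv[1])))
    result_bad = true_fault in implicated         # simulated measurement outcome
    suspects &= implicated if result_bad else (suspects - implicated)
    del tests[name]
    print(f"{name}: {'out of spec' if result_bad else 'ok'} -> suspects {suspects}")

print("faulty module:", suspects.pop())
```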

Journal ArticleDOI
Hudak
TL;DR: Para-functional programming as discussed by the authors is a methodology for programming multiprocessor computing systems, which is based on a functional programming model augmented with features that allow programs to be mapped to specific topologies.
Abstract: The importance of parallel computing hardly needs emphasis. Many physical problems and abstract models are seriously compute-bound, since sequential computer technology now faces seemingly insurmountable physical limitations. It is widely believed that the only feasible path toward higher performance is to consider radically different computer organizations, in particular ones exploiting parallelism. This argument is indeed rather old now, and considerable progress has been made in the construction of highly parallel computers. One of the simplest and most promising types of parallel machines is the well-known multiprocessor architecture, a collection of autonomous processors with either shared or distributed memory that are interconnected by a homogeneous communications network and usually communicate by sending messages. The interest in machines of this type is not surprising, since not only do they avoid the classic "von Neumann bottleneck" by being effectively decentralized, but they are also extensible and in general quite easy to build. Indeed, more than a dozen commercial multiprocessors either are now or will soon be available. Although designing and building multiprocessors has proceeded at a dramatic pace, the development of effective ways to program them has generally not. This is an unfortunate state of affairs, since experience with sequential machines tells us that software development, not hardware development, is the most critical element in a system's design. The immense complexity of parallel computation can only increase our dependence on software. Clearly we need effective ways to program the new generation of parallel machines. In this article I introduce para-functional programming, a methodology for programming multiprocessor computing systems. It is based on a functional programming model augmented with features that allow programs to be mapped to specific multiprocessor topologies. The most significant aspect of the methodology is that it treats the multiprocessor as a single autonomous computer onto which a program is mapped, rather than as a group of independent processors that carry out complex communication and require complex synchronization. In more conventional approaches to parallel programming, the latter method of treatment is often manifested as processes that cooperate by message-passing. However, such notions are absent in para-functional programming; indeed, a single language and evaluation model can be used from
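
A loose sketch of the "mapped expression" idea: each subexpression is annotated with the processor it should evaluate on, and that mapping is carried down the program graph. The on helper, the processor-numbering rule, and the tree-sum example are assumptions made for illustration, not the article's actual notation.

```python
# Sketch of mapping a functional program onto processors: annotations say
# where each subexpression evaluates; here we only record the mapping.
log = []

def on(pe, thunk):
    # Evaluate `thunk` "on" processor `pe` (simulated by logging the placement).
    log.append(pe)
    return thunk(pe)

def tree_sum(xs, pe):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    # Map the two halves to neighbouring processors: left stays on pe,
    # right moves to pe + 1, echoing a program mapped onto a topology.
    left = on(pe, lambda p: tree_sum(xs[:mid], p))
    right = on(pe + 1, lambda p: tree_sum(xs[mid:], p))
    return left + right

print(tree_sum([3, 1, 4, 1, 5, 9, 2, 6], pe=0), "evaluated on PEs", sorted(set(log)))
```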

Journal ArticleDOI
Larson
TL;DR: A technique is needed for users of database management systems to locate information without having to specify an exact description of the information or where it is stored in the database.
Abstract: The operations of structuring, filtering, panning, and zooming can be used to browse the contents of a database. A visual approach to browsing in a database environment is one way for users to access database contents easily and conveniently. People relate better to visual representations of the logical database than to other representations. Users perform four basic operations when browsing a database: structuring, choosing a structure of the objects to be examined; filtering, selecting instances of the objects to be examined; panning, examining neighboring object instances; and zooming, determining the level of detail for examining object instances. My approach consists of four very flexible steps corresponding to each of the four basic operations. Each step uses visual representations to present information to the users. By manipulating these visual representations, users can structure, filter, pan, and zoom. Users may revise decisions and choices made during any of the four steps. Most database management systems require users to formulate a complex specification describing the data that they wish to access. In order to do this, they must be familiar with the logical structure of the database; they must know the names of the types of objects to be accessed and how these objects are related. They must also be able to specify which occurrences of the objects are to be accessed by describing the criteria accessed occurrences must satisfy. This is difficult for many users because of their unfamiliarity with the syntax used to formulate requests, or because they have only a vague notion of what data they desire to access. Users who have only a vague notion of the data to be accessed need to browse through the database. By browsing through the bookshelves of a library, it may be possible to locate one or more books of interest. By browsing through the pages of these books (perhaps making use of indices and tables of contents) readers may be able to locate the desired information that they could not previously characterize or describe. Database users have a similar need to browse through information in a database. A technique is needed for users of database management systems to locate information without having to specify an exact description of the information or where it is stored in the database. Such a technique is useful when users initially have only a vague idea of what they desire, but feel confident that they will recognize the desired information when they see it.
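
A toy sketch giving one concrete reading of the four browsing operations over an in-memory collection of books; the data and the particular meanings assigned to panning and zooming are assumptions made only to make the operations tangible.

```python
# Structuring, filtering, panning, and zooming over a tiny "database" of books.
books = [
    {"title": "Database Design", "subject": "databases", "year": 1983,
     "detail": "covers ER modelling and normalization"},
    {"title": "Expert Systems",  "subject": "ai",        "year": 1985,
     "detail": "rule-based architectures and inference"},
    {"title": "VLSI Systems",    "subject": "hardware",  "year": 1980,
     "detail": "layout, timing, and systolic arrays"},
]

structure = sorted(books, key=lambda b: b["year"])        # structuring: pick an ordering
shelf = [b for b in structure if b["year"] >= 1982]       # filtering: pick instances
position = 0
neighbour = shelf[position + 1]                           # panning: look at a neighbour
summary = {k: neighbour[k] for k in ("title", "year")}    # zooming out: less detail
print(summary)
print(neighbour["detail"])                                # zooming in: full detail
```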

Journal ArticleDOI
TL;DR: This article reviews some of the steps necessary to achieve the transition from current data management systems to future knowledge management systems and describes the design and experimental use of a prototype knowledge manager.
Abstract: During the last decade, data management technology has produced storage and retrieval systems that can efficiently search large, complex databases. At the same time, artificial intelligence technology has created a number of expert systems, which use encoded knowledge to solve problems in a manner similar to that of a human expert. Many present and future computer applications could benefit from an effective and efficient marriage of these two technologies. For example, managers and their staff associates could use a system that stores expertise about company policies, plans, finances, and product evaluation to intelligently access management information system databases. Similarly, command and control systems could exploit stored expertise in analyzing and evaluating masses of battlefield data to greatly aid military decision making. Changing user expectations, the advancement of computer technology, and the impact of the continuing information explosion are combining to hasten the transition from current data management systems to future knowledge management systems. This article reviews some of the steps necessary to achieve this transition and describes the design and experimental use of a prototype knowledge manager.

Journal ArticleDOI
TL;DR: Toast is an evolving Phase-1 assistant that has immediate potential in two functions, diagnosis and criticism, and is written in Cops, a programming environment that allows for distributed processing and has a readily extensible library of both symbolic and numerical programs.
Abstract: The environments in which power system operators work are becoming more complex. New constraints are appearing, old constraints are tightening, and the number of decision variables is increasing. To cope with these trends, operators need intelligent assistants to help manage information and lighten their decision-making burdens. Such assistants can be divided into two types: Phase-1 assistants for off-line uses and Phase-2 assistants for on-line, real-time uses. Toast is an evolving Phase-1 assistant. Of the nine possible functions of an assistant, Toast has immediate potential in two: diagnosis and criticism. Its diagnostic knowledge, though hardly complete, is extensive enough to be useful to human operators. In contrast, its abilities to critique proposed courses of action are much less developed and, as yet, consist only of facilities to simulate some of these courses of action. Toast has been written in Cops, a programming environment that allows for distributed processing and has a readily extensible library of both symbolic and numerical programs. These features should make the task of expanding Toast relatively painless. Of the many directions in which expansions could occur, we plan on adding diagnostic capabilities in the area of power system security. This area was identified in a study as the most worthy of development.

Journal ArticleDOI
Kuhl, Reddy
TL;DR: Such systems consist of a large number of basically autonomous processing elements interconnected by a structure that allows high-bandwidth communication between them; at the system level, these processing elements and interconnection facilities are viewed as the basic components of the system.
Abstract: Researchers have long conjectured upon the possibility of constructing large, massively-parallel computing engines by interconnecting many conventional processing elements to form an integrated supersystem. The rapid expansion in very large scale integration, or VLSI, circuit technology during the past decade has accelerated research in this direction. As advances in VLSI push basic component or chip functionalities to the processor level and beyond, it becomes natural to view complex processing elements as the basic components of much larger systems. Several names for such systems have been proposed, including network computers, multicomputers, and distributed multiprocessors. Despite the naming differences, these systems have the following salient features: (1) A large number of basically autonomous processing elements interconnected by a structure that allows high-bandwidth communication between them. At the system level, these processing elements and interconnection facilities are viewed as the basic components of the system. Each processing node has its own local memory and there is no sharing of memory between nodes. (2) A high degree of distribution of control or operating system functions among the processing elements. (3) Highly parallel computation performed by constructing applications as collections of several or many distinct tasks. These tasks may execute concurrently on different processors, with necessary intertask communication carried out over the communication facilities linking the nodes. The collection of cooperating tasks comprising an application is sometimes referred to as a task force.

Journal ArticleDOI
TL;DR: This article describes work undertaken to automate storage and retrieval of complex data objects that contain text, images, voice, and programs, whose extremely large storage requirements have posed a major problem in developing commercially viable database management systems capable of handling them.
Abstract: This article describes work undertaken to automate storage and retrieval of complex data objects that contain text, images, voice, and programs. Until recently, the extremely large storage requirements of these objects posed a major problem in developing commercially viable database management systems capable of handling them. Database management systems that will manage multimedia information also have different functionality, interface, and performance requirements from traditional database management systems. Recent developments in hardware are now making automatic storage, retrieval, and manipulation of complex data objects both possible and economically feasible.

Journal ArticleDOI
TL;DR: Gallium arsenide, or GaAs, technology has recently shown rapid increases in maturity and is seen to have applications in computer design within several computationally intensive areas, particularly in the military and aerospace markets.
Abstract: Gallium arsenide, or GaAs, technology has recently shown rapid increases in maturity. In particular, the advances made in digital chip complexity have been enormous. This progress is especially evident in two types of chips: static RAMs and gate arrays. In 1983, static RAMs containing 1K bits were announced. One year later both a 4K-bit and a 16K-bit version were presented. Gate arrays have advanced from a 1000-gate design presented in 1984 to a 2000-gate design announced in 1985. With this enormous progress underway, it is now appropriate to consider the use of this new technology in the implementation of high-performance processors. GaAs technology generates high levels of enthusiasm primarily because of two advantages it enjoys over silicon: higher speed and greater resistance to adverse environmental conditions. GaAs gates switch faster than silicon transistor-transistor logic, or TTL, gates by nearly an order of magnitude. These switching speeds are even faster than those attained by the fastest silicon emitter-coupled logic, or ECL, but at power levels an order of magnitude lower. For this reason, GaAs is seen to have applications in computer design within several computationally intensive areas. In fact, it has been reported that the Cray-3 will contain GaAs parts. GaAs also enjoys greater resistance to radiation and temperature variations than does silicon. GaAs successfully operates in radiation levels of 10 to 100 million rads. Its operating temperature range extends from -200 to 200°C. Consequently, GaAs has created great excitement in the military and aerospace markets.


Journal ArticleDOI
Faught
TL;DR: The first goal of this article is to describe applications built using IntelliCorp's Knowledge Engineering Environment and thereby present a clear picture of the capabilities of the technology, so as to reduce potential duplication of effort in tool development.
Abstract: Recently a number of research and industrial organizations in the US have been investigating artificial intelligence techniques to address problems that have been difficult to solve using standard computer technology. Because knowledge-based systems explicitly represent and reason with knowledge supplied by human experts, these systems offer significant new capabilities and flexibility. The incentive for applying AI to engineering stems not only from the growing complexity of modern engineering but from the traditional expense, time constraints, and limited availability of human expertise. AI appears to offer an opportunity to (1) capture and retain expertise that has accrued over many years of engineering, (2) amplify expertise that is needed to successfully deploy new technologies and design applications, and (3) offer systems that reason intelligently about necessary actions to take in real time, thus freeing operations personnel. There are some problems, however, in acquiring AI technology. AI applications and systems typically require a strong base of development tools. Only recently have commercial tools become available. Also, AI expertise is in short supply, compared with the number of potential applications. The goal of commercial tool development is to address these problems by identifying AI techniques to include in development environments and by supporting the development of tool enhancements that focus on engineering applications. The ultimate goal is to reduce potential duplication of effort in tool development and encourage commonality among engineering applications. Part of this effort is the examination and classification of typical engineering applications in AI. This article briefly describes several such applications built using IntelliCorp's Knowledge Engineering Environment, or KEE. These applications fall into general categories of fault diagnosis, simulation, and configuration. The first goal of this article is to describe applications and thereby present a clear picture of the capabilities of the technology.