
Showing papers on "Application software published in 1986"


Patent
21 Oct 1986
TL;DR: In this article, the key is generated in a hardware module which is a single chip microprocessor and individual words are generated as a function of a plurality of words of the input key word sequence.
Abstract: Encrypted digital information in a data processing system is decrypted using a key which is a word sequence. The key is generated in a hardware module which is a single chip microprocessor. Individual words of the key word sequence generated by the module are generated as a function of a plurality of words of the input key word sequence. To that end, the microprocessor is programmed as a finite state machine. The hardware module may be combined with a storage medium in a software package. The decryption routines and a key sequence to be applied to the key generator are stored with the application software on the storage medium. To decrypt the application software, the stored key sequence is applied to the key generator to obtain an output key sequence. A computer system may include an encryption/decryption module and a key generator module to encrypt software and data prior to outputting the software or data from the system. Identical key generators may be utilized for encryption and decryption in a secure network.
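
The patent's abstract does not spell out the state-update or output functions, so the sketch below only illustrates the general idea of a finite-state key generator whose output words depend on a window of several input key words; the arithmetic, window size, and word width here are invented for illustration.
```python
# Toy sketch of a finite-state key-word generator: each output word depends on
# the machine's internal state and on a sliding window of input key words.
# The update/output functions are made up for illustration; the patent does not
# specify them.

WORD_MASK = 0xFFFF  # treat "words" as 16-bit values


def generate_key(input_words, window=3, seed=0x1234):
    state = seed
    output = []
    for i in range(len(input_words)):
        # Combine a plurality of input words (a small window ending at i).
        combined = 0
        for w in input_words[max(0, i - window + 1): i + 1]:
            combined = ((combined << 3) ^ w) & WORD_MASK
        # Finite-state update: next state depends only on current state and input.
        state = ((state * 0x6255) + combined + 1) & WORD_MASK
        output.append(state ^ combined)
    return output


if __name__ == "__main__":
    stored_key = [0x0102, 0x0A0B, 0x1234, 0x9F9F, 0x0042]
    # Identical generators at the encrypting and decrypting ends produce the
    # same output sequence from the same stored input sequence.
    print(generate_key(stored_key))
    print(generate_key(stored_key) == generate_key(stored_key))
```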

153 citations


Journal ArticleDOI
TL;DR: Designed to build both system and application software for large systems, Modula-2+ supports exception handling, automatic storage management, and concurrency for multiprocessors without compromising the integrity of Modula-2.
Abstract: Designed to build both system and application software for large systems, Modula-2+ supports exception handling, automatic storage management, and concurrency for multiprocessors without compromising the integrity of Modula-2.

77 citations


Book ChapterDOI
01 Jan 1986
TL;DR: In the last few years, the rapid decline of computing costs, coupled with the sharp increase in personal computers and “canned” software, has dramatically expanded the computer user community.
Abstract: In the last few years, the rapid decline of computing costs, coupled with the sharp increase in personal computers and “canned” software, has dramatically expanded the computer user community. More and more people today are using computers. However, to many people, the usefulness of a computer is bounded by the usefulness of the canned application software available for the computer. Application programs written for a mass audience seldom give every user all the capabilities that he or she needs. Those who wish to use the computer to do something beyond the capabilities of the canned programs discover that they have to “program.”

65 citations


Journal ArticleDOI
TL;DR: A prototype Process Visualization System (PVS) is developed that allows nonprogrammers to construct graphics displays, and to establish relationships between these displays and changes in the data describing the process being monitored.
Abstract: An increasingly important use of computer graphics is to monitor such real-world processes as manufacturing plants, power plants, and refineries. As in many uses of computer graphics, the development of this application area has been programmer-intensive. As a consequence, systems are being developed more slowly than is desired and they cost more than necessary. This article discusses some of the key issues and requirements involved in designing productivity tools for use in constructing monitoring systems. We have developed a prototype Process Visualization System (PVS) that allows nonprogrammers to construct graphics displays, and to establish relationships between these displays and changes in the data describing the process being monitored. With this system, customized, dynamically updated scenes can be created. The PVS allows users to construct symbols, to place them on the screen, and to connect them to data items from the monitored processes' database. Using this system frees the end-user from reliance on programmers, and allows the user to quickly display the data items of interest.
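
The article does not reproduce the PVS internals; assuming a simple publish/subscribe binding between data items and display symbols, the general idea can be sketched as follows (class and field names are invented):
```python
# Minimal illustration of connecting display symbols to monitored data items.
# A real process-visualization system would drive a graphics display; here the
# "display" is just a printed line.

class DataItem:
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        self._subscribers = []

    def connect(self, symbol):
        self._subscribers.append(symbol)

    def update(self, value):
        self.value = value
        for symbol in self._subscribers:
            symbol.refresh(self)


class GaugeSymbol:
    def __init__(self, label, x, y):
        self.label, self.x, self.y = label, x, y

    def refresh(self, item):
        # Redraw the symbol with the new value (stand-in for a graphics call).
        print(f"[{self.x},{self.y}] {self.label}: {item.value:.1f}")


if __name__ == "__main__":
    boiler_temp = DataItem("boiler_temperature")
    gauge = GaugeSymbol("Boiler temp (C)", x=10, y=4)
    boiler_temp.connect(gauge)      # end user wires symbol to data item
    boiler_temp.update(312.5)       # change in process data refreshes the scene
```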

63 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show how designing with reusable software components can speed the development of complex software for powerful 32-bit microprocessors.
Abstract: Programmers can speed the development of complex software for powerful 32-bit microprocessors by designing with reusable software components.

61 citations


Patent
James R. Rhyne
30 Apr 1986
TL;DR: In this paper, the echo of user-initiated actions taken through the input devices of a workstation in a distributed computational system providing application processing services at a site remote to the user is controlled.
Abstract: Method for controlling the echo of user-initiated actions taken through the input devices of a workstation in a distributed computational system providing application processing services at a site remote to the user. When application services are being used, an application protocol located proximate to the user separates user input actions which do not require application computational response but which do require registration in the system from those actions which initiate events requiring application program computational response. The application protocol buffers the non-computational actions, and invokes a local service to register the actions and to stimulate a user-discernible response. Other actions are forwarded by the protocol to the remote application process in the host computer. The invention reduces the response time to certain user-initiated actions by providing local response facilities, thereby eliminating a time-consuming communication with the remote application process.
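
As a rough sketch of the mechanism described, assuming an invented split between locally echoed and host-bound actions, the protocol's dispatch step might look like this:
```python
# Rough sketch of the idea in the patent: input actions that only need to be
# registered and echoed are handled locally and buffered, while actions that
# need application computation are forwarded to the remote host. The action
# categories and handlers below are illustrative, not the patented protocol.

LOCAL_ONLY = {"cursor_move", "keystroke_echo", "selection_highlight"}

class ApplicationProtocol:
    def __init__(self, local_service, remote_link):
        self.local_service = local_service
        self.remote_link = remote_link
        self.buffered = []            # non-computational actions awaiting sync

    def handle(self, action, payload):
        if action in LOCAL_ONLY:
            self.buffered.append((action, payload))
            self.local_service(action, payload)   # immediate, user-discernible echo
        else:
            self.remote_link(action, payload)     # needs application computation


if __name__ == "__main__":
    proto = ApplicationProtocol(
        local_service=lambda a, p: print("local echo:", a, p),
        remote_link=lambda a, p: print("forward to host:", a, p),
    )
    proto.handle("keystroke_echo", "x")      # answered locally, no round trip
    proto.handle("open_document", "report")  # requires the remote application
```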

54 citations


Journal ArticleDOI
TL;DR: This article describes work undertaken to automate storage and retrieval of complex data objects that contain text, images, voice, and programs, which pose a major problem in developing commercially viable database management systems capable of handling them.
Abstract: This article describes work undertaken to automate storage and retrieval of complex data objects that contain text, images, voice, and programs. Until recently, the extremely large storage requirements of these objects posed a major problem in developing commercially viable database management systems capable of handling them. Database management systems that will manage multimedia information also have different functionality, interface, and performance requirements from traditional database management systems. Recent developments in hardware are now making automatic storage, retrieval, and manipulation of complex data objects both possible and economically feasible.

50 citations


Proceedings ArticleDOI
01 Apr 1986
TL;DR: The robotics group of the Stanford Artificial Intelligence Laboratory is currently developing a new computational system for robotics applications that uses multiple NSC 32016 processors and one MC68010 based processor, sharing a common Intel Multibus.
Abstract: The robotics group of the Stanford Artificial Intelligence Laboratory is currently developing a new computational system for robotics applications. Stanford's NYMPH system uses multiple NSC 32016 processors and one MC68010 based processor, sharing a common Intel Multibus. The 32K processors provide the raw computational power needed for advanced robotics applications, and the 68K provides a pleasant interface with the rest of the world. Software has been developed to provide useful communications and synchronization primitives, without consuming excessive processor resources or bus bandwidth. NYMPH provides both large amounts of computing power and a good programming environment, making it an effective research tool.

45 citations


Journal ArticleDOI
01 Sep 1986
TL;DR: This monograph, taken from an advanced course at Stanford, covers binomial identities, recurrence relations, operator methods, and asymptotic analysis, and the subject of hashing is explored via operator methods.
Abstract: This monograph is an ideal companion for a graduate student in computer science or applied mathematics. The material was taken from an advanced course at Stanford. It assumes a background in complex variable theory and combinatorial analysis. It covers binomial identities, recurrence relations, operator methods, and asymptotic analysis. The remainder of the monograph is devoted to exam questions and their solutions. Armed with this and a copy of volume 3 of Knuth's The Art of Computer Programming, the reader can almost re-create the lectures. For a researcher in the field, the strength of this book lies in its compactness. A topic can be quickly looked up and, if the discussion is not complete, many appropriate references can be followed up. The authors' attack on many difficult problems associated with the topics also makes the book quite useful to researchers. In one of the more engaging chapters, the subject of hashing is explored via operator methods. In a fashion typical of the monograph, analysis is presented by example, in this case, the cookie monster problem. The cookie monster problem calculates the probability that the monster will grow to size k+1, given that its ability to catch and devour cookies is proportional to its current store of cookies (k). The analysis uses the eigenoperator approach, and the discussion of the cookie monster algorithm leads into a refreshingly clear exposition of coalesced hashing and open addressing.
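
A small Monte Carlo sketch conveys the flavor of the cookie monster problem; the specific discretization used here (each of n thrown cookies is caught with probability k/n, where k is the current store) is only an assumption for illustration and may differ from the model analyzed in the monograph:
```python
# Toy Monte Carlo rendering of the cookie monster flavor: the chance of catching
# the next cookie grows with the current store k. The exact model in the
# monograph may differ; this discretization is assumed only for illustration.
import random
from collections import Counter

def final_store(n_thrown, start=1, rng=random.random):
    k = start
    for _ in range(n_thrown):
        if rng() < k / n_thrown:   # ability to catch proportional to store k
            k += 1
    return k

if __name__ == "__main__":
    trials = 20000
    counts = Counter(final_store(50) for _ in range(trials))
    for size in sorted(counts):
        print(f"P(final store = {size}) ~ {counts[size] / trials:.3f}")
```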

41 citations


Journal ArticleDOI
Hoshino
TL;DR: Using contemporary LSI and simple architecture, it is possible to construct a multiple-purpose machine PAX with a 1 Gflops speed, and the goal for the 1990's is a 1 Tflops PAX.
Abstract: Using contemporary LSI and simple architecture, we can construct a multiple-purpose machine PAX with a 1 Gflops speed. Our goal for the 1990's: a 1 Tflops PAX. One of the most popular research and development topics in computing is the exploitation of parallelism in processing computationally intensive applications and the creation of the necessary super-high-speed computer. PAX computer development, while similar to other attempts to create a super-high-speed computer, is somewhat different in approach. The PAX computer project, sponsored by the Japanese Ministry of Education, was started in 1977 by me and Prof. T. Kawai, Keio University. It aimed at the realization of a high-speed multi-microprocessor system that could calculate the power distribution in the Boiling Water Reactor (BWR) by solving the coupled neutronic-thermo-hydraulic equations that describe the three-dimensional reactor core. This goal was attained when PACS-32, an array of 32 microprocessors connected in a two-dimensional, nearest-neighbor mesh, performed a successful load-follow analysis of the BWR. The method of parallel processing used was very straightforward and traditional in that it took advantage of the locality of the physical process and the proximity of the physical actions: The physical space was directly mapped onto the processor array, and each processor executed its own mapped local task with proper exchange of data between the adjacent processors. During the development of PACS-32, there were indications that PACS architecture could be effective for a multipurpose, parallel computer covering the greater part of all scientific and engineering applications. The name "PACS" (Processor Array for Continuum Simulation) was later changed to "PAX," which stands for Processor Array eXperiment; four hardware PAX prototypes (with 9, 32, 64, and 128 processors, respectively) have been constructed so far. The newest PAX prototype, the PAX-64J, is an industry-made, commercial version with 64 DCJ11 microprocessors. Nowadays, the PAX prototypes are used intensively in studies aimed at developing algorithms to be used with PAX-type architecture in solving scientific-application problems. As evidenced by the June 1985 issue of Computer, the scope of parallel-processing research and commercial development is expanding; it spans from commercial supercomputers with vector-processing capability to highly parallel multi-microprocessor systems for limited application. It is obvious that all the growth is strongly dependent on the progress of LSI technology. The recent, innovative CMOS technology will definitely influence multiprocessing architecture in the sense that device technology is fundamental and is sometimes the most crucial economic factor. CMOS technology favors the highly parallel construct; it is shifting the R&D spectrum toward higher parallelism in both commercial supercomputers and specialized multiprocessor systems.
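
The parallel method described, mapping the physical space onto the processor array and exchanging boundary data with adjacent processors, can be illustrated with a toy one-dimensional relaxation; this is a generic nearest-neighbor decomposition sketch, not PAX software:
```python
# Toy illustration of the PAX-style method: the physical domain is split across
# "processors", each of which updates only its local cells and exchanges
# boundary values with its neighbors each step.

def jacobi_step(local, left_ghost, right_ghost):
    padded = [left_ghost] + local + [right_ghost]
    return [(padded[i - 1] + padded[i + 1]) / 2.0 for i in range(1, len(padded) - 1)]

def run(num_procs=4, cells_per_proc=8, steps=50):
    # Fixed boundary values 1.0 and 0.0; interior starts at 0.
    domains = [[0.0] * cells_per_proc for _ in range(num_procs)]
    for _ in range(steps):
        new_domains = []
        for p, local in enumerate(domains):
            # "Exchange of data between adjacent processors": read neighbor edges.
            left = 1.0 if p == 0 else domains[p - 1][-1]
            right = 0.0 if p == num_procs - 1 else domains[p + 1][0]
            new_domains.append(jacobi_step(local, left, right))
        domains = new_domains
    return domains

if __name__ == "__main__":
    for p, local in enumerate(run()):
        print(f"proc {p}:", [round(v, 2) for v in local])
```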

Patent
28 Feb 1986
TL;DR: In this paper, a control apparatus intercepts device-specific control commands generated by the software and translates the commands into commands which are compatible with the peripheral connected to the system, but non-device specific commands are passed untranslated through the control apparatus to the peripheral.
Abstract: Control apparatus allows application software written for use with peripheral devices manufactured by one company to run with other peripheral devices. The apparatus intercepts device-specific control commands generated by the software and translates the commands into commands which are compatible with the peripheral connected to the system. Non-device specific commands are passed untranslated through the control apparatus to the peripheral. More specifically, registers within the control apparatus which must be programmed with parameters unique to a particular peripheral cannot be accessed by the application software while other nonspecific registers remain read and write accessible. Peripheral-specific parameters are instead changed by a secondary processor which uses special hardware to minimize interference with the main processor.
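
A minimal sketch of the interception idea, with an invented command set and translation table (the patent does not publish one), might look like this:
```python
# Sketch of the idea in the patent: device-specific commands emitted by the
# application are intercepted and translated into the connected peripheral's
# dialect, while non-device-specific commands pass through unchanged.

TRANSLATION_TABLE = {
    # command written for vendor A  ->  equivalent for the attached peripheral
    "A_SET_DENSITY_HIGH": "B_MODE 2",
    "A_FORM_FEED_LONG":  "B_FEED 66",
}

GENERIC_COMMANDS = {"RESET", "PRINT", "STATUS"}  # passed through untranslated

def intercept(command):
    if command in GENERIC_COMMANDS:
        return command
    try:
        return TRANSLATION_TABLE[command]
    except KeyError:
        raise ValueError(f"no translation for device-specific command {command!r}")

if __name__ == "__main__":
    for cmd in ["RESET", "A_SET_DENSITY_HIGH", "PRINT"]:
        print(cmd, "->", intercept(cmd))
```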

Journal ArticleDOI
01 Aug 1986
TL;DR: This work has devised a class hierarchy that yields some conceptual order in the midst of diverse representations of shapes and builds a framework combining dissimilar models in an orderly manner.
Abstract: The class concept is one component of object-oriented programming systems that has proven useful in organizing complex software. In experimenting with classes for geometric modeling applications, we have devised a class hierarchy that yields some conceptual order in the midst of diverse representations of shapes. Rather than search for a uniform primitive representation, we accept the diversity and build a framework combining dissimilar models in an orderly manner.
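
The abstract does not give the authors' hierarchy, so the following is only a generic illustration of the organizing idea: one Shape interface, with subclasses that keep dissimilar internal representations (an implicit primitive versus a boundary polygon) yet combine in a single framework.
```python
# Generic example of combining dissimilar shape representations under one
# interface; not the hierarchy devised in the article.
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self): ...
    @abstractmethod
    def contains(self, x, y): ...

class Circle(Shape):                      # implicit (center + radius) representation
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r
    def area(self):
        return math.pi * self.r ** 2
    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

class Polygon(Shape):                     # boundary (vertex list) representation
    def __init__(self, vertices):
        self.vertices = vertices
    def area(self):                       # shoelace formula
        v = self.vertices
        s = sum(v[i][0] * v[(i + 1) % len(v)][1] - v[(i + 1) % len(v)][0] * v[i][1]
                for i in range(len(v)))
        return abs(s) / 2.0
    def contains(self, x, y):             # ray casting
        inside, v = False, self.vertices
        for i in range(len(v)):
            (x1, y1), (x2, y2) = v[i], v[(i + 1) % len(v)]
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

if __name__ == "__main__":
    scene = [Circle(0, 0, 1), Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])]
    print([round(s.area(), 3) for s in scene])          # mixed models, one interface
    print([s.contains(0.5, 0.5) for s in scene])
```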

Journal ArticleDOI
TL;DR: A sampling of applications, ranging from data display to solid modeling, shows that significant improvements in interactivity can be obtained by microprogramming a high-performance, general-purpose display architecture.
Abstract: Interactive computer graphics display requirements have generally been met in one of two ways: by highly specialized systems designed for a particular application, or, more frequently, by devices with a limited set of display functions common to a wide range of applications. A third alternative, presented here, is to use a high-performance, general-purpose display architecture to provide both common and application-specific graphics functions. A sampling of applications, ranging from data display to solid modeling, shows that significant improvements in interactivity can be obtained by microprogramming such a machine.

Journal ArticleDOI
TL;DR: This article proposes a standard data format protocol for public-key cryptography, arguing that industry standards are likely to be a necessary condition for wide acceptance of public- key cryptosystems.
Abstract: Can public-key cryptography succeed without industry-wide standards? Some feel it's time to get started on them. Public-key cryptography has not yet been widely accepted for commercial applications. There are some commercially available public-key encryption products in the personal computer industry that have not sold particularly well. One reason for this is that public-key cryptography is not well known, so its value is not yet appreciated by the industry. This situation may improve with time. Another possible reason is that the most popular public-key encryption algorithm, the Rivest-Shamir-Adleman (RSA) algorithm, is relatively slow compared to single-key algorithms such as the NBS Data Encryption Standard, or DES. However, one may use RSA to "bootstrap" into the DES. Also, the speed of RSA may improve soon when VLSI hardware implementations of the RSA algorithm become available. Another reason for the lack of commercial acceptance of public-key cryptography is that different vendors' products cannot readily exchange messages, signatures, and keys, since no industry standard protocols have been defined for their data formats. The DES algorithm is widely used in the industry today because a standard exists. While industry standards may not be a sufficient condition for wide acceptance of public-key cryptosystems, I contend that they are likely to be a necessary condition. For this reason this article proposes a standard data format protocol for public-key cryptography. What is public-key cryptography? In conventional cryptosystems such as the DES, a single key is used for both encryption and decryption. This means that keys must be initially transmitted via secure channels so that both parties can know them before encrypted messages can be sent over insecure channels. This may be inconvenient. In public-key cryptosystems, there are two complementary keys: a publicly revealed encryption key, and a secret decryption key. Each key is the functional inverse of the other key, such that using one of the keys on a message produces ciphertext that can be converted back to plaintext by the other key. Further, knowing the public key does not help you deduce the corresponding secret key. This produces two useful consequences. First, secret channels such as couriers are not required to transmit keys, because the intended recipient (in this discussion, a woman) of a message has already publicly revealed her encryption key. The encryption key can even be published in a public key directory. Anyone wishing to send the recipient a message can use that …
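
A textbook-RSA toy example illustrates the complementary-key property the article describes, including its use to carry a session key that "bootstraps" into a faster single-key cipher; the tiny primes make it insecure, and it is not any particular vendor's implementation:
```python
# Textbook-RSA toy example of the complementary-key idea: the public key
# encrypts, the secret key inverts it (and vice versa for signatures). The tiny
# primes make this insecure; it only illustrates the arithmetic.

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no modular inverse")
    return x % m

# Key generation with toy primes.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = modinv(e, phi)             # secret exponent

def apply_key(m, key, modulus=n):
    return pow(m, key, modulus)

if __name__ == "__main__":
    message = 1234                        # e.g. a session key to "bootstrap" into
    c = apply_key(message, e)             # a faster single-key cipher such as DES
    print(c, apply_key(c, d))             # decrypting with d recovers 1234
    sig = apply_key(message, d)           # "signing" with the secret key
    print(apply_key(sig, e) == message)   # anyone can verify with the public key
```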

Proceedings ArticleDOI
R. Gaglianello, H. Katseff
01 Apr 1986
TL;DR: Meglos is a multiprocessor operating system that is ideal for real-time applications and provides the user with a distributed programming environment for developing, running, and debugging application programs.
Abstract: Meglos is a multiprocessor operating system that is ideal for real-time applications. It provides the user with a distributed programming environment for developing, running, and debugging application programs. Meglos allows a single application to span several processors. Further, unrelated programs may run independently of each other on the same processor. A simple, yet powerful, mechanism for communications between programs is provided. With the current hardware configuration, up to twelve Motorola 68000 Multibus computer systems and a DEC VAX host running the UNIX operating system can be connected via the S/NET interconnect. Meglos exhibits communications latencies of 750 μsec, sufficiently small for most real-time applications.

Journal ArticleDOI
TL;DR: This paper discusses the design of the database and its role in a CAE package for utility engineers concerned with transmission system protection, as well as the use of database management systems in such CAE tools.
Abstract: When applying computer aids to engineering problems, a major difficulty is data handling. We have used a database management system as the central structure for a package of computer-aided-engineering (CAE) software for utility engineers concerned with transmission system protection. This paper discusses the design of the database and its role in the CAE package. The use of database management systems in all such CAE tools is discussed.

Journal ArticleDOI
TL;DR: The Rex approach to software architecture views executives as independently programmable machines that execute conventional-language application procedures as individual instructions of a higher-level program.
Abstract: The Rex approach to software architecture views executives as independently programmable machines that execute conventional-language application procedures as individual instructions of a higher-level program.
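
The article's one-sentence summary suggests an executive that runs application procedures as the instructions of a higher-level program; a minimal sketch of that idea follows, with an invented instruction set rather than Rex's own:
```python
# Sketch of the stated idea: the executive is itself programmable, and each of
# its "instructions" is an ordinary application procedure. The instruction set
# and program below are invented for illustration.

def read_sensor(ctx):   ctx["reading"] = 42          # stand-in application procedures
def filter_data(ctx):   ctx["reading"] *= 0.9
def log_result(ctx):    print("logged:", ctx["reading"])

class Executive:
    def __init__(self):
        self.instruction_set = {}

    def define(self, name, procedure):
        self.instruction_set[name] = procedure

    def run(self, program, context=None):
        context = context if context is not None else {}
        for name in program:                      # a higher-level program, one
            self.instruction_set[name](context)   # application procedure per step
        return context

if __name__ == "__main__":
    exe = Executive()
    for name, proc in [("READ", read_sensor), ("FILTER", filter_data), ("LOG", log_result)]:
        exe.define(name, proc)
    exe.run(["READ", "FILTER", "LOG"])            # the executive-level program
```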

Journal ArticleDOI
TL;DR: Software reusability, system adaptability, and supercomputer system performance are the three important issues of modern mission-critical computing now faced by the defense department in its embedded computer system applications.
Abstract: Software reusability, system adaptability, and supercomputer system performance are the three important issues of modern mission-critical computing now faced by the defense department in its embedded computer system applications. The first issue has to do with the tremendous costs involved throughout the life cycle of a system, caused primarily by the rapid growth in the cost of software development, maintenance, and enhancement. Better life-cycle models and the use of more abstract, reusable resources and patterns (or, in general, knowledge) are needed to improve the productivity of software people. The second issue, adaptability, arises from the continual changes that occur during the long life of DOD mission-critical systems, as well as the fact that such systems must accommodate large dynamic variations of incoming data and meet different computational requirements at different stages of the system. These requirements suggest the introduction of adaptable software and hardware resources and environments to produce systems that will (1) take advantage of advances of both software and hardware over the next decade, and (2) meet continuously changing mission needs. The third issue is the fact that required system performance has advanced to a point beyond what the existing technology base can handle. The increase in military weapon sophistication and the growing complexity of mission applications impose formidable throughput requirements, demanding advanced hardware and system architectures to provide real-time supercomputing capabilities.

Proceedings ArticleDOI
02 Jul 1986
TL;DR: DOSS is a storage system designed to support CAD applications efficiently and its capabilities, some design decisions made within it and the associated tradeoffs are described.
Abstract: This report describes DOSS and its capabilities, some design decisions made within it and the associated tradeoffs. DOSS is a storage system designed to support CAD applications efficiently. We define composite objects, examine their ability to capture design data and outline our approach to distributed object naming. We believe our choice of the system/application interface is crucial for achieving acceptable performance in the CAD environment. We also describe our approach to associative search, change control and version management.

Journal ArticleDOI
TL;DR: This article describes a graphics hardware architecture that meets the criteria needed to reinforce the growth of computer graphics, including implementation in VLSI for reasons of cost and speed.
Abstract: To reinforce the growth of computer graphics, hardware architectures must be developed to support the rapid growth in display resolution that will occur in the next few years. The architectures must allow a high degree of application independence without requiring significant changes in the overhead software interface. The hardware architecture must be expandable, again avoiding significant software changes. Finally, the architecture must be implemented in VLSI for reasons of cost and speed. This article will describe a graphics hardware architecture that meets these criteria.

Journal ArticleDOI
TL;DR: This article describes the Agora project, conducted within the context of a larger project called Kayak, and its architecture; Agora's functional model was defined in parallel with the IFIP model.
Abstract: The Agora message system was designed as part of the Kayak project, employing a multisite architecture for the distributed, integrated electronic office. We consider a distributed message system that allows exchange of compound documents among workstations to be the foundation of integrated office systems in the future. Although not widely used in corporations today, experiments like the one we describe here have shown the feasibility of these concepts. In particular, they have highlighted some essentials for a technically successful electronic office system: multimedia workstations with good interface features, reliable message transfer, a flexible naming scheme, and a multisite, distributed architecture. We here describe the Agora project, conducted within the context of a larger project called Kayak, and its architecture. Agora's functional model was defined in parallel with the IFIP model. Kayak's goal was to build a complete, integrated electronic office.

Journal ArticleDOI
TL;DR: DMDOS (Direct Manipulation Disk Operating System), which was designed by applying the concepts of direct manipulation to MS-DOS on the IBM PC, is described.
Abstract: Software engineers are often called upon to design user interfaces, but strategies and guidelines are only beginning to emerge. Shneiderman (1983) introduced the term "Direct Manipulation" to describe user interfaces which have: 1) continuous representation of the objects of interest; 2) physical actions (movement and selection by mouse, joystick, touch screen, etc.) or labeled button presses instead of complex syntax; 3) rapid, incremental, reversible operations whose impact on the object of interest is immediately visible; 4) a layered or spiral approach to learning that permits usage with minimal knowledge. The concepts of direct manipulation have been applied in some distinctive systems such as the XEROX STAR and APPLE Macintosh, and in many application software products such as spreadsheets, word processors, drawing tools, desk-top managers, etc. However, the basic software of personal computers, the operating system, is still often based on command language concepts. This paper describes DMDOS (Direct Manipulation Disk Operating System), which we designed by applying the concepts of direct manipulation to MS-DOS on the IBM PC.

Journal ArticleDOI
Shuey, Wiederhold
TL;DR: This article discusses how the data requirements of information systems influence the structure of the distributed computer systems upon which they are often built.
Abstract: The computer has made it possible to mechanize much of the information interchange and processing that constitute the nervous system of our society. The computerized systems that provide this mechanization, commonly called information systems, will be responsible for computer technology's long-term impact on society. We need to understand how the information requirements of the different segments of our society should influence the design and architecture of the information systems serving those segments. We have for some time been in a distributed computer era. This article discusses how the data requirements of information systems influence the structure of the distributed computer systems upon which they are often built. Many of the concepts, carried over into centralized systems, enable those systems to evolve in a distributed world. Unfortunately, our current understanding of computer science does not include precise definitions of, or perhaps sharp boundaries among, data, information, and knowledge. In this article the orientation is on data; concepts touching on information and knowledge are introduced only as necessary. Consequently, the reader will find situations in which the term "data" is used in a context where it might be more appropriate to use the terms "information" or "knowledge." Our two principal premises are 1. that a specific information system may involve logically and physically distributed computers and databases, and 2. that the individual functional components of an information system are driven by information from companion components and in turn provide information to other components through the transfer of data. The term "component" here includes the interfaces as well as the internal parts of the systems, namely humans, displays, sensors, actuators, computers, and storage devices. Both of these premises are likely self-evident. However, not so self-evident is that this interchange of data is far more than a data communication problem. For example, we can describe the services provided by the post office from a pure data communication viewpoint. In the post office system, an envelope is delivered to the ad…

Journal ArticleDOI
TL;DR: Knowledge-based expert systems, which separate the knowledge base from the inference mechanism, offer a flexible methodology for ill-structured engineering problems such as fault diagnosis, and are of increasing interest to the engineering community.
Abstract: Developments in computer science and engineering methodologies in the 1960's and 1970's provided application program developers with a variety of hardware and software tools. These tools increased the use of computers by engineers, but use has been limited almost exclusively to algorithmic solutions such as finite element methods and circuit simulators. Many engineering problems are not amenable to purely algorithmic solutions. As Koen put it, "The engineering method is the use of heuristics to cause the best change in a poorly understood situation within the available resources." To deal with these ill-structured problems, an engineer relies on his own judgment and experience. Knowledge-based expert systems provide a programming methodology for solving ill-structured engineering problems, and since they also provide a flexible software development methodology (separating the knowledge base from the inference mechanism), they are of increasing interest to the engineering community. Typical problem classes include interpretation: the given data are analyzed to determine their meaning; the data are often unreliable, erroneous, or extraneous. Diagnosis: identification of a problem area or a fault is based on potentially noisy data; the diagnostician must be able to relate the symptoms to the appropriate fault. Repair: the faults in the system are identified, and remedial actions are suggested; fault diagnosis is the first step in this process. Monitoring: signals are continuously interpreted and alarms are set whenever required. Simulation: a model of the system is created, and the outputs for a set of inputs are observed. Most software engineering projects assume that the problem is one of implementation rather than design; the rigid specifications and modularization they impose are no longer helpful for projects using knowledge-based engineering systems, where the project should be thought of as a design problem rather than an implementation issue. Functional specifications cannot be accurately detailed with expert systems; they change as a wider body of test cases and field problems are covered.
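
A minimal sketch of the knowledge-base/inference-mechanism separation stressed above, using invented diagnosis rules, might look like this:
```python
# The knowledge base is plain data (symptom -> fault rules) that can change
# independently of the inference mechanism that applies it. The rules below are
# invented examples, not from the article.

KNOWLEDGE_BASE = [
    ({"no_output", "fuse_blown"},         "power supply failure"),
    ({"no_output", "breaker_tripped"},    "short circuit on load side"),
    ({"noisy_signal", "loose_connector"}, "bad cable termination"),
]

def diagnose(symptoms, rules=KNOWLEDGE_BASE):
    """Inference mechanism: return faults whose conditions are all observed."""
    observed = set(symptoms)
    return [fault for conditions, fault in rules if conditions <= observed]

if __name__ == "__main__":
    print(diagnose({"no_output", "fuse_blown", "hot_smell"}))
```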

Journal ArticleDOI
TL;DR: A possible approach to the "semantic" vectorization of microwave circuit design software is presented, which could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
Abstract: Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
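
The paper's vectorization is specific to microwave circuit design software, but the underlying payoff of restructuring a statistical-design loop into array operations can be shown with a toy model; the RC low-pass response used here is only a stand-in for a real circuit analysis.
```python
# Toy yield-analysis loop: evaluate a simple response over many component
# samples, once as a scalar loop and once as a vectorized array expression.
import numpy as np
import time

rng = np.random.default_rng(0)
R = rng.normal(50.0, 2.0, size=200_000)      # component samples (yield analysis)
C = rng.normal(1e-9, 5e-11, size=200_000)
w = 2 * np.pi * 1e6                          # one analysis frequency

t0 = time.perf_counter()
scalar = [1.0 / (1.0 + (w * r * c) ** 2) ** 0.5 for r, c in zip(R, C)]  # scalar loop
t1 = time.perf_counter()
vector = 1.0 / np.sqrt(1.0 + (w * R * C) ** 2)                          # vectorized
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.3f}s")
print("results agree:", np.allclose(scalar, vector))
```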

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This paper describes an environment developed for interactive computer graphic simulation of robotic applications to make the development of application programs easier and to allow the same models to be used by a variety ofApplication programs.
Abstract: This paper describes an environment developed for interactive computer graphic simulation of robotic applications. The goals of this environment are to make the development of application programs easier and to allow the same models to be used by a variety of application programs. In this environment, standard models of work cells, robots, sensors, and other entities can be created. These models can be extended to include user-defined properties of individual entities. Functions are provided to access the data in these models, to manipulate the data with computer graphic animation, and to perform other simulation tasks.
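
The environment's model facilities are only summarized above; assuming a simple property-dictionary design, the idea of standard entities that applications can extend and query might be sketched as follows (all names are invented):
```python
# Sketch of extensible standard models: entities carry standard properties,
# applications attach their own, and common access functions query both.

class Entity:
    def __init__(self, name, **standard_properties):
        self.name = name
        self.properties = dict(standard_properties)   # standard model data

    def extend(self, **user_properties):
        """Let an application attach its own, user-defined properties."""
        self.properties.update(user_properties)

    def get(self, key, default=None):
        return self.properties.get(key, default)


if __name__ == "__main__":
    robot = Entity("arm_1", joints=6, reach_mm=1200)
    robot.extend(paint_color="blue", cycle_time_s=4.2)   # application-specific data
    # Different application programs reuse the same model through get():
    print(robot.get("joints"), robot.get("cycle_time_s"))
```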

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This work presents a model for assisting in the development of an information system for software maintenance documentation, based on a Data Dictionary structure designed for the organized storage of the information needed to carry out maintenance activities.
Abstract: This work aims at presenting a model for assisting in the development of an information system for software maintenance documentation. It seeks to emphasize the optimization of the documentation-related functions by employing automatic processes. In this study the documentation system is based on a Data Dictionary structure, adequately designed for the organized storage of the information needed for the execution of maintenance activities. Forms of information input, updating, and retrieval have been analysed with the intention of utilizing the basic support software, as well as considering the possibilities of the Dictionary's interface with the Operating System files throughout the modification of the application software.

Journal ArticleDOI
TL;DR: The next generation of CBT systems will incorporate many innovative concepts in the acquisition, representation, and management of expert knowledge and common-sense knowledge.
Abstract: There is growing consensus among those involved with computer-based training that the application of knowledge-based technology can improve the quality and cost effectiveness of training personnel. Recent articles by Freedman, Kearsley and Seidel, Montague and Wulfeck, Psotka, and Woolf conclude that the next generation of CBT systems will incorporate many innovative concepts in the acquisition, representation, and management of expert knowledge and common-sense knowledge.