
Showing papers in "IEEE Computer in 1984"


Journal ArticleDOI
TL;DR: A new compression algorithm is introduced that is based on principles not found in existing commercial methods in that it dynamically adapts to the redundancy characteristics of the data being compressed, and serves to illustrate system problems inherent in using any compression scheme.
Abstract: Data stored on disks and tapes or transferred over communications links in commercial computer systems generally contains significant redundancy. A mechanism or procedure which recodes the data to lessen the redundancy could possibly double or triple the effective data densities in stored or communicated data. Moreover, if compression is automatic, it can also help counter the rise of software development costs. A transparent compression mechanism could permit the use of "sloppy" data structures, in that empty space or sparse encoding of data would not greatly expand the use of storage space or transfer time; however, that requires a good compression procedure. Several problems encountered when common compression methods are integrated into computer systems have prevented the widespread use of automatic data compression. For example, (1) poor runtime execution speeds interfere with the attainment of very high data rates; (2) most compression techniques are not flexible enough to process different types of redundancy; (3) blocks of compressed data that have unpredictable lengths present storage space management problems. Each compression strategy poses a different set of these problems and, consequently, the use of each strategy is restricted to applications where its inherent weaknesses present no critical problems. (This article was written while Welch was employed at Sperry Research Center; he is now with Digital Equipment Corporation.) This article introduces a new compression algorithm that is based on principles not found in existing commercial methods. This algorithm avoids many of the problems associated with older methods in that it dynamically adapts to the redundancy characteristics of the data being compressed. An investigation into possible application of this algorithm yields insight into the compressibility of various types of data and serves to illustrate system problems inherent in using any compression scheme. For readers interested in simple but subtle procedures, some details of this algorithm and its implementations are also described. The focus throughout this article will be on transparent compression, in which the computer programmer is not aware of the existence of compression except in system performance. This form of compression is "noiseless": the decompressed data is an exact replica of the input data, and the compression apparatus is given no special program information, such as data type or usage statistics. Transparency is perceived to be important because putting an extra burden on the application programmer would cause …
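The adaptive, dictionary-based approach Welch describes became widely known as LZW. Below is a minimal Python sketch in that spirit, not the article's exact algorithm or its code-width handling; the 256-entry seed dictionary and the helper name are illustrative assumptions.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Greedy dictionary compression: emit a code for the longest known prefix,
    then extend the dictionary with that prefix plus the next byte."""
    # Seed the dictionary with all single-byte strings (codes 0..255).
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate            # keep extending the current match
        else:
            output.append(table[current])
            table[candidate] = next_code   # dictionary adapts to the data seen so far
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(table[current])
    return output

if __name__ == "__main__":
    codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
    print(len(codes), "codes for 24 input bytes:", codes)
```

Because a decompressor can rebuild the same dictionary from the code stream alone, no table needs to be stored or transmitted, which is part of what makes such a scheme attractive for the transparent use the article targets.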

2,426 citations


Journal ArticleDOI
Lee, Smith

602 citations


Journal ArticleDOI
TL;DR: This experiment showed that, at the current level of development, a familiar procedural specification, PDL, was easier to use and understand than the more difficult, unfamiliar, abstract specification of OBJ.
Abstract: Abstraction versus procedural specifications. More experience is needed in writing and using true abstract specifications as well as "rapid prototypes." What are the consequences of introducing bias in a "rapid prototype," and are the more difficult abstract specifications worth the extra effort? This experiment showed that, at the current level of development, a familiar procedural specification, PDL, was easier to use and understand than the more difficult, unfamiliar, abstract specification of OBJ. Our study of both formal and informal specification languages has shown that none of the languages available is sufficiently automated. Formal specification languages, while showing promise for the future, thus far have been very difficult to use and understand and are severely limited in power.

446 citations


Journal ArticleDOI
Taylor
TL;DR: One of the common rules for converting remainders, or residues, into integers is referred to today as the Chinese Remainder Theorem, or CRT.
Abstract: The ancient study of the residue numbering system, or RNS, begins with a verse from a third-century book, Suan-ching, by Sun Tzu:* We have things of which we do not know the number, If we count them by threes, the remainder is 2, If we count them by fives, the remainder is 3, If we count them by sevens, the remainder is 2, How many things are there? The answer, 23. How to get the answer 23 is outlined in Sun Tzu's historic work. He presents a formula for manipulating remainders of an integer after division by 3, 5, and 7. We commemorate this contribution today by referring to one of the common rules of converting remainders, or residues, into integers as the Chinese Remainder Theorem, or CRT. This theorem, as well as the theory of residue numbers, was set forth in the 19th century by Carl Friedrich Gauss in his celebrated Disquisitiones Arithmeticae. This 1700-year-old number system has been attracting a great deal of attention recently. Digital systems structured into residue arithmetic units may play an important role in ultra-speed, dedicated, real-time systems that support pure parallel processing of integer-valued data. It is a "carry-free" system that performs addition, subtraction, and multiplication as concurrent (parallel) operations, side-stepping one of the principal arithmetic delays: managing carry information. The first attempt to use some of the unique RNS properties was made by D. H. Lehmer who, in 1932, built a special-purpose machine he called the "photo-electric sieve." This electro-mechanical device factored Mersenne numbers. (*L. Dickson, in the History of the Theory of Numbers, attributes the origin of the RNS to Sun Tsu (not Tzu) in the first century AD, but most scholars accept the Sun Tzu origin.) Then in the mid-50's, the Czech researchers Svoboda and Valach conducted experiments on a hard-wired, small-moduli RNS machine, which they used to study error codes.1 The same idea apparently occurred to Aiken and Garner.2 From the late 50's to mid-60's, the Department of Defense supported the RNS research by Szabo and Tanaka at Lockheed. They worked on a special-purpose digital correlator while a team from RCA looked into designing a general-purpose machine. Experimentally, these early efforts met with little success because winding the custom core memory required specialized residue mappings. The most tangible result of these early efforts was a comprehensive text written by Szabo and Tanaka, which survived only …
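Sun Tzu's puzzle is a direct application of the CRT. The following sketch reconstructs the standard textbook computation (the function name and the use of Python's built-in modular inverse are my choices, not the article's):

```python
from math import prod

def crt(residues, moduli):
    """Combine congruences x ≡ r_i (mod m_i) for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# "Count by threes, remainder 2; by fives, remainder 3; by sevens, remainder 2."
print(crt([2, 3, 2], [3, 5, 7]))  # -> 23
```

Because the moduli 3, 5, and 7 are pairwise coprime, the solution is unique modulo 105, and 23 is the smallest positive answer.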

340 citations


Journal ArticleDOI
TL;DR: The work on developing a programming methodology called Pict is described; Pict permits humans to use their native intelligence in programming by taking advantage of the human brain's ability to process pictures more efficiently than text.
Abstract: Many people today are finding that they are able to successfully use computers, especially personal and home computers, thanks to an abundance of well-designed "canned" software. Those who wish to progress beyond the canned software stage, however, discover that programming is painstaking work. Worse yet, learning to program is, for many, even more forbidding; indeed, the attempt is often eventually abandoned in frustration. Frustration comes largely from inadequacies in common procedural, text-based programming languages. As Smith has pointed out, "The representation of a problem in most programming languages bears little resemblance to the thought processes that occurred in its solution."1 Why do programmers, especially novices, often encounter difficulties when they attempt to transform the human mind's multidimensional, visual, and often dynamic conception of a problem's solution into the one-dimensional, textual, and static representation required by traditional programming languages? The groups of italicized antonyms in the preceding sentence may give one hint. Another reason may be that the type and variable names we use in our programs, such as "x" or "root," are signs that bear no resemblance to their values (an integer, say, or a pointer to a binary tree of records of some type). Yet another explanation may be that the von Neumann paradigm of procedural programming is inherently unsuitable; some other method of programming, perhaps more functional, might prove easier to assimilate. Because of this general dissatisfaction, repeated attempts have been made to design "new, improved" programming languages. We believe that a radical departure from current programming styles is necessary if programming is to be made more accessible. Attempts to evolve new varieties of conventional languages will not suffice. With the cost of graphics displays decreasing and their availability, most notably as integral components of personal computers, increasing, we believe it is time to take advantage of the human brain's ability to process pictures more efficiently than text. As Malone said, "Perhaps the best use of sound and graphics ... is to represent and convey information more effectively than [is possible] with words or numbers."2 In this article we describe our work on developing a programming methodology called Pict that permits humans to use their native intelligence in programming. Our results represent but an imperfect collection of beginnings in a field now in its infancy. Still, our work straddles the interface of several established disciplines, the primary ones being (1) the design and implementation of programming systems (including languages) and …

248 citations


Journal ArticleDOI

243 citations


Journal ArticleDOI
TL;DR: An overview of the field of knowledge engineering is presented, describing the major developments that have led up to the current state of knowledge systems and the capabilities that will ultimately make knowledge systems vastly more powerful than the earlier technologies for storing and transmitting knowledge, books and conventional programs.
Abstract: Knowledge-based expert systems, or knowledge systems for short, employ human knowledge to solve problems that ordinarily require human intelligence.1 Knowledge systems represent and apply knowledge electronically. In the future, these capabilities will ultimately make knowledge systems vastly more powerful than the earlier technologies for storing and transmitting knowledge: books and conventional programs. These earlier storage and transmission technologies suffer from fundamental limitations. Although books now store the largest volume of knowledge, they merely retain symbols in a passive form. Before the knowledge stored in books can be applied, a human must retrieve it, interpret it, and decide how to exploit it for problem-solving. Most computers today perform tasks according to the decision-making logic of conventional programs, but these programs do not readily accommodate significant amounts of knowledge. Programs consist of two distinct parts, algorithms and data. Algorithms determine how to solve specific kinds of problems, and data characterize parameters in the particular problem at hand. Human knowledge doesn't fit this model, however. Because much human knowledge consists of elementary fragments of know-how, applying a significant amount of knowledge requires new ways to organize decision-making fragments into useful entities. Knowledge systems collect these fragments in a knowledge base, then access the knowledge base to reason about specific problems. As a consequence, knowledge systems differ from conventional programs in the way they are organized, the way they incorporate knowledge, the way they execute, and the impression they create through their interactions. Knowledge systems simulate expert human performance and present a human-like facade to the user. Application areas have included, among others, advising about computer system use and VLSI design. In all of these areas, system developers have worked to combine the general techniques of knowledge engineering with specialized know-how in particular domains of application. In nearly every case, the demand for a knowledge engineering approach arose from the limitations perceived in the alternative technologies available. The developers wanted to incorporate a large amount of fragmentary, judgmental, and heuristic knowledge; they wanted to solve automatically problems that required the machine to follow whatever lines of reasoning seemed most appropriate to the data at hand; they wanted the systems to accommodate new knowledge as it evolved; and they wanted the systems to use their knowledge to give meaningful explanations of their behaviors when requested. This article presents an overview of the field of knowledge engineering. It describes the major developments that have led up to the current …
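To make the contrast with the algorithms-plus-data model concrete, here is a toy forward-chaining reasoner over a knowledge base of if-then fragments. It illustrates the general organization only; it is not any system described in the article, and the rule contents are invented.

```python
def forward_chain(facts: set[str], rules: list[tuple[set[str], str]]) -> set[str]:
    """Repeatedly apply fragmentary if-then rules from a knowledge base
    until no new conclusions can be drawn (a minimal forward chainer)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # a new fragment of knowledge fires
                changed = True
    return facts

# Hypothetical rule fragments, purely for illustration.
rules = [({"fever", "rash"}, "suspect-measles"),
         ({"suspect-measles"}, "recommend-isolation")]
print(forward_chain({"fever", "rash"}, rules))
```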

235 citations


Journal ArticleDOI
TL;DR: As software applications become more complex, software engineering will evolve.
Abstract: As software applications become more complex, software engineering will evolve. Specification languages, rapid prototyping, complexity metrics, and maintenance techniques will be its most significant products. Computer users first became aware of a software crisis 15 years ago. Software projects were being delivered far behind schedule, quality was poor, and maintenance was expensive. And as more complex software applications were attempted, programmers fell further behind the demand and their results were of poorer quality. The high demand and comparatively low productivity drove software costs up. In the US in 1980, software cost approximately $40 billion, or two percent of the gross national product.1 Dolotta estimates that by the year 1985, the cost of software will be approximately 8.5 percent of the GNP,2 while Steel points to 13 percent by 1990.3 And in recognition of the importance of software engineering, the Department of Defense began a software initiative and has plans for establishing a software engineering institute. Software engineering is a relatively new discipline. It seeks to devise techniques for software development.

188 citations


Journal ArticleDOI
Babb
TL;DR: The Large-Grained Data Flow (LGDF) model described in this paper is a compromise between the data flow and traditional approaches; traditional data flow takes a much finer-grained view of system execution.
Abstract: Research in data flow architectures and languages, a major effort for the past 15 years,1 has been motivated mainly by the desire for computational speeds that exceed those possible with current computer architectures. The computational speedup offered by the data flow approach is possible because all program instructions whose input values have been previously computed can be executed simultaneously. There is no notion of a program counter or of global memory. Machine instructions are linked together in a network so that the result of each instruction execution is fed automatically into appropriate inputs of other instructions. Since no side-effects can occur as a result of instruction execution, many instructions can be active simultaneously. Although data flow concepts are attractive for providing an alternative to traditional computer architecture and programming styles, to date few data flow machines have been built, and data flow programming languages are not widely accepted. This article describes a compromise between the data flow and traditional approaches. The approach is called large-grain data flow, or LGDF, to distinguish it from traditional data flow architectures and languages, which take a much finer-grained view of system execution. Data flow machine instructions are typically at the level of an arithmetic operator and two operands. The LGDF model usually deals with much larger data-activated "program chunks," corresponding to 5 to 50 (or even more) statements in a higher level programming language. Another difference in the model described here is that global memories can be shared by a specified set of programs, although access contention to shared memories is still managed in a data-flow-like manner. A fundamental concept of the LGDF model is that programs are viewed as comprising systems of data-activated processing units. Using a coherent hierarchy of data flow diagrams, complex systems are specified as compositions of simpler systems. The lowest level programs can be written in almost any language (Fortran is used here). Programs specified in this way have been implemented efficiently on sequential (single instruction, single data) and vector (single instruction, multiple data), as well as true parallel (multiple instruction, multiple data), architectures. The steps involved in modeling and implementing a Fortran program using large-grain data flow techniques are: * Draw data flow diagrams. Create a hierarchical, consistent set of system data flow diagrams that express the logical data dependencies of the program fragments modeled. * Create wirelist. Encode the data flow dependencies …
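The firing rule behind LGDF, where a program "chunk" executes once all of its input data areas are available, can be sketched with a toy scheduler. The names (Chunk, run_lgdf) and the sequential loop are my assumptions; the actual system generated wirelists for Fortran programs and ran chunks on real sequential, vector, and parallel hardware.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    """A data-activated program chunk: fires once every input datum is present."""
    name: str
    inputs: list[str]
    outputs: list[str]
    action: Callable[[dict], dict]

def run_lgdf(chunks: list[Chunk], initial: dict) -> dict:
    data = dict(initial)                       # shared "data areas"
    pending = list(chunks)
    while pending:
        ready = [c for c in pending if all(i in data for i in c.inputs)]
        if not ready:
            raise RuntimeError("deadlock: no chunk has all of its inputs")
        for c in ready:                        # ready chunks could run in parallel
            data.update(c.action({i: data[i] for i in c.inputs}))
            pending.remove(c)
    return data

# A two-chunk "wirelist": square the input, then add ten.
chunks = [
    Chunk("add10", ["sq"], ["out"], lambda d: {"out": d["sq"] + 10}),
    Chunk("square", ["x"], ["sq"], lambda d: {"sq": d["x"] ** 2}),
]
print(run_lgdf(chunks, {"x": 3})["out"])       # -> 19
```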

165 citations


Journal ArticleDOI
TL;DR: The computer field has matured to the point that most of us accept computers as a normal part of the authors' professional and personal lives, but the impact of advances in computer technology on education and the educational process is just starting to be realized.
Abstract: Preparing the specialists who sustain computer progress has long been overlooked. The Computer Society has joined with the ACM to resolve the problem of educational standards for engineers, programmers, and technicians. The explosive growth of computer technology during the last half of this century has been driven, in part, by contributions from institutions of higher education. But just as education has contributed to the growth of the computer field, computer technology has influenced all phases of education. In most cases, this intimate connection between education and computer technology has produced positive results. However, as with any new field, there have been many false starts and unfulfilled hopes, the former stemming from a lack of knowledge and the latter from results that could not be achieved. The 1980's is the first decade in which computer technology has impacted all branches of education. Before this decade, computer education was a concern of a relatively small number of students interested in employment either in computer science or engineering or in a closely related area. The resulting computer education programs were very specialized and were based on the assumption that anyone planning on entering a career or occupation requiring extensive computer use had to have an in-depth understanding of the computing process. The computer field has matured to the point that most of us accept computers as a normal part of our professional and personal lives. However, the impact of advances in computer technology on education and the educational process is just starting to be realized.

156 citations


Journal ArticleDOI
Snyder
TL;DR: The Poker Parallel Programming Environment is known to support these five mechanisms conveniently; thus the conversion is easy and the parallel programming is simple.
Abstract: Parallel programming is described as the conversion of an abstract, machine-independent algorithm to a form, called a program, suitable for execution on a particular computer. The conversion activity is simplified where the form of the abstraction is close to the form required of the programming system. Five mechanisms are identified as commonly occurring in algorithm specifications. The Poker Parallel Programming Environment is known to support these five mechanisms conveniently; thus the conversion is easy and the parallel programming is simple. The Poker environment is described and examples are provided. An analysis of the efficiency of the programming facilities provided by Poker is given, and they all seem to be very efficient.

Journal ArticleDOI
Fisher
TL;DR: The feeling is that once there are enough parallel architectures around and enough bright programmers to use them, a parallel programming methodology will develop naturally.

Journal ArticleDOI
Ihara, Mori
TL;DR: The autonomous, decentralized system described in this article is a highly reliable system that performs the functions of both the control system and the information system, and whose reliability greatly exceeds that of comparable systems with centralized architectures.
Abstract: Maximizing the reliability of a system, that is, reducing the rate at which hardware and software components fail, is of prime importance to control systems designers. This is especially the case when systems must promise nonstop operation and on-line maintenance, as well as offer possibilities for expansion. Several methods of increasing system reliability have been proposed,1-3 and, in general, researchers have preferred to consider reliability from the standpoint of a digital system as well as that of components and circuits.4,5 Until recently, however, reliable systems either performed poorly or cost too much to build, but the progress in LSI and optical transmission technologies has made it possible to reduce (rapidly) the cost of information processing, storage, and transmission. In addition, with the introduction of a decentralized approach to system design, the reliability of control systems now greatly exceeds that of comparable systems with centralized architectures. The autonomous, decentralized system described in this article is a highly reliable system;6-9 more important, however, is the fact that this system is a large-scale, multifunctional engineering system, which performs the functions of both the control system and the information system. To clarify the importance of this dual role, we offer the following descriptions:

Journal ArticleDOI
TL;DR: Like infants taking their first halting steps, expert and knowledge-based systems are slowly toddling out of basic research laboratories and making places for themselves in big business.
Abstract: More than technological wonders, knowledge systems are valuable human assistants, equalling or surpassing experts in reasoning and judgment. Since human work consists mostly of knowledge-based reasoning, future needs should increase. For years it remained little more than a bold experiment, an avant-garde technology with matchless capabilities but with painfully restricted applications. Since 1981, however, the emerging field of expert and knowledge-based systems has changed dramatically. No longer the exclusive property of diagnosticians and a few other medical specialists, the systems are finally starting to broaden their user base and wriggle their way into mainstream, commercial applications. Like infants taking their first halting steps, expert and knowledge-based systems are slowly toddling out of basic research laboratories and making places for themselves in big business. Expert and knowledge-based systems, or "knowledge systems" for short, have evolved over a 15-year period from laboratory curiosities of applied artificial intelligence into targets of significant technological and commercial development efforts.3 These systems employ computers in ways that differ markedly from conventional data processing applications, and they open up many new opportunities. The people who build these new systems have adopted the title of "knowledge engineer" and call their work "knowledge engineering." Recently, many commercial and governmental organizations have committed themselves to exploiting this technology, attempting to advance it in dramatic ways and beginning to adapt their missions and activities to it. A staggering number of events occurred during the last five years: * Schlumberger, a leading oil services firm, determined that its future growth depended on knowledge engineering and formed two groups to build expert data interpretation systems. * Japan's Ministry of International Trade and Industry determined that the country's future economic viability required leadership in knowledge system technology and launched a $500 million, 10-year program in fifth-generation computing.3 * Responding to a perceived technological and competitive threat, the UK's Alvey Commission retracted that country's long-standing disapproval of AI and urged a major push forward in knowledge systems technology, a recommendation that the Thatcher government implemented.

Journal ArticleDOI
Dutta, Basu
TL;DR: This article addresses two key issues in model-management system design: machine representation of models and development of mechanical methods for their manipulation.
Abstract: Decision Support Systems, or DSSs, are computer-based systems that can be used directly by decision makers who are not sophisticated programmers to solve semistructured or unstructured problems. Such problems are characterized by the lack of quantitative descriptions, well-defined goals, or prescribed algorithms for their solution. As a result, the distribution of effort in solving them differs significantly from that required for structured problems, as shown in Figure 1. Since many important management problems, especially strategic problems, tend to be unstructured, or semistructured at best, the development of effective DSSs is an active field of research. Generically, a DSS consists of three major subsystems: the dialog, the data, and the models, shown in Figure 2. Due to the characteristics of DSS applications, the solution process usually involves appreciable trial-and-error, and data is transformed in various ways through a diverse collection of program modules (models). It is therefore necessary to have not only a comprehensive collection of such models, but also suitable mechanisms to use and control these models to solve problems that are usually phrased as queries to the system. In other words, the design of the model-management subsystem has a major impact on the interaction between a user and the DSS. This article addresses two key issues in model-management system design: machine representation of models and development of mechanical methods for their manipulation. Machine representation of models involves development of an adequate description of model parameters as well as the valid conditions under which a model can be executed with meaningful outputs. The approach presented here features a flexible representation of models so that the same models can be used in different contexts. Mechanical (machine-executable) methods to manipulate such models will use the storage representation referred to above. The term manipulation includes several activities; among them are instantiation of models and syn- …
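One possible (hypothetical, not the authors') machine representation of a model records its parameters together with a predicate for the conditions under which execution is meaningful. A minimal sketch, with invented names and an invented forecasting model:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelDescriptor:
    """Machine representation of a DSS model: its inputs, outputs, and
    a predicate giving the conditions under which execution is meaningful."""
    name: str
    inputs: dict[str, type]
    outputs: dict[str, type]
    valid_when: Callable[[dict], bool]
    run: Callable[[dict], dict]

forecast = ModelDescriptor(
    name="linear_forecast",
    inputs={"history": list, "periods": int},
    outputs={"forecast": list},
    valid_when=lambda args: len(args["history"]) >= 2 and args["periods"] > 0,
    run=lambda args: {"forecast": [args["history"][-1] +
                                   (args["history"][-1] - args["history"][-2]) * k
                                   for k in range(1, args["periods"] + 1)]},
)

args = {"history": [100, 110, 120], "periods": 2}
if forecast.valid_when(args):                  # check validity before instantiation
    print(forecast.run(args)["forecast"])      # -> [130, 140]
```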

Journal ArticleDOI
TL;DR: This work intends to develop and integrate knowledge acquisition tools to facilitate assimilation of teaching and learning knowledge into intelligent tutors.
Abstract: Communities of experts are needed to provide a focus for articulating distributed knowledge in an intelligent tutor. The resultant machine tutor should include recent as well as historical research about thinking, teaching, and learning in the domain. Evaluating such an articulation would, in itself, contribute to education, and ultimately to communication between experts. Compiling diverse research results from environmental, teaching, cognitive, and domain experts is currently hampered by a lack of explicit tools to help authors transfer their knowledge to a system. Based on the criteria set out above, we intend to continue to develop and integrate knowledge acquisition tools to facilitate assimilation of teaching and learning knowledge into intelligent tutors.

Journal ArticleDOI
Boehm, Penedo, Stuckle, Williams, Pyster
TL;DR: The article describes the steps that led to the creation of the software productivity project and its components and summarizes the requirements analyses on which the SPS was based.
Abstract: The software productivity system (SPS) was developed to support project activities. It involves a set of strategies, including the work environment; the evaluation and procurement of hardware equipment; the provision for immediate access to computing resources through local area networks; the building of an integrated set of tools to support the software development life cycle and all project personnel; and a user support function to transfer new technology. All of these strategies are being accomplished incrementally. The current architecture is VAX-based and uses the Unix operating system, a wideband local network, and a set of software tools. The article describes the steps that led to the creation of the software productivity project and its components and summarizes the requirements analyses on which the SPS was based.

Journal ArticleDOI
Wah

Journal ArticleDOI
Keller, Lin
TL;DR: Functional languages are seen to be ideal for the programming of multiprocessors when the distinction between them and uniprocessors is undesirable; the programmer should not have to set up processes explicitly to achieve concurrent processing, nor be concerned with synchronizing such processes.
Abstract: Multiprocessing systems have the potential for increasing system speed over what is now offered by device technology. They must provide the means of generating work for the processors, getting the work to processors, and coherently collecting the results from the processors. For most applications, they should also ensure the repeatability of behavior, i.e., determinacy, speed-independence, or elimination of "critical races."1-6 Determinacy can be destroyed, for example, by permitting, in separate, concurrent processes, statements such as "v := x + 1" and "if x = 0 then ... else ...", which share a common variable. Here, there may be a critical race, in that more than one global outcome is possible, depending on execution order. But by basing a multiprocessing system on functional languages, we can avoid such dangers. Our concern is the construction of multiprocessors that can be programmed in a logically transparent fashion. In other words, the programmer should not be aware of programming a multiprocessor versus a uniprocessor, except for optimizing performance on a specific configuration. This means that the programmer should not have to set up processes explicitly to achieve concurrent processing, nor be concerned with synchronizing such processes. Programs expressed in functional languages possess a fair amount of implicit concurrency. The conceptual execution of a functional program is based purely on the evaluation of expressions, not on the assignment of values to memory cells. Accordingly, there can be no "side effects" of one function on another, which ensures determinacy; a program gives the same results regardless of the physical aspects of communication between processors or the number of processors involved in its execution. These languages seem to be ideal for the programming of multiprocessors when the distinction between them and uniprocessors is undesirable. Functional languages also have other conceptual advantages that have been discussed elsewhere.7-9 To demonstrate how a functional language provides for concurrent execution, consider an expression such as max[subexpression-1, subexpression-2], where max is the usual numeric maximum function (or any other function which requires both of its arguments). A concurrent execution model carries out three important aspects: (1) spawning of tasks to evaluate the two subexpressions concurrently; (2) synchronization to determine that both subevaluations are complete; and (3) evaluation of the maximum, once completion is established. Obviously, only the third of these aspects would be found in a sequential implementation; the first two are implicit in a concurrent functional implementation. In contrast, the specification of these mechanical …
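The max example can be mimicked with futures: spawn one task per subexpression, synchronize on their completion, then apply the function. This is only a sketch, with Python threads standing in for a functional-language runtime and a helper name of my own choosing.

```python
from concurrent.futures import ThreadPoolExecutor

def eval_concurrently(fn, *subexpressions):
    """Spawn a task per argument expression, wait for all results, then apply fn,
    mirroring the three steps named in the abstract."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(expr) for expr in subexpressions]   # (1) spawn
        values = [f.result() for f in futures]                     # (2) synchronize
    return fn(*values)                                             # (3) evaluate

# max[subexpression-1, subexpression-2]
result = eval_concurrently(max,
                           lambda: sum(range(1_000)),   # subexpression-1
                           lambda: 2 ** 20)             # subexpression-2
print(result)   # -> 1048576
```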

Journal ArticleDOI
TL;DR: This article investigates the issues involved in constructing software systems for the planning and control of activities in the job-shop, and focuses on the decision-making methodologies required for planning and control.
Abstract: The term "factory of the future" has lost much of its meaning because of excessive publicity heralding each new machine tool, robot, or computer-based controller. It has become increasingly difficult to differentiate between fact and fantasy. Our purpose in writing this article is to examine some of the issues involved in creating an "autonomous manufacturing environment" for discrete parts. We restrict the concept of autonomous manufacturing to only the activities performed on the shop floor. In particular, autonomous manufacturing pertains to the complete automation of decision making on the shop floor, whether or not the actual production is performed manually or automatically. While much of computer-aided manufacturing has been concerned with "flexible automation," we are concerned with the decision-making methodologies required for planning and control. The introduction of robotic and other flexible technologies into manufacturing increases the number of ways a product can be produced and decreases the production rate. Unfortunately, flexible technologies increase the complexity of operation and production scheduling and, because of the subsequent decrease in setup times, there is less time for decision making. Today, decisions made manually in the shop are less than satisfactory, as demonstrated by high in-process time of orders, low machine utilization, and high overheads. Such manual planning and control methods limit our ability to utilize the flexibility afforded by robotic technology. We investigate the issues involved in constructing software systems for the planning and control of activities in the job-shop. Manufacturing is composed of many activities that can be monitored and controlled at different levels of abstraction. A shop floor can be viewed as a group of work centers, a work center as composed of manufacturing cells, and a manufacturing cell as composed of individual machines, robots, and tools (see Figure 1). Activity planning in such an environment is a complex problem in which activities must be selected and resources must be assigned and scheduled at each level of abstraction to meet production goals. While much of this can be performed before production begins, the dynamics of the manufacturing environment tend to quickly invalidate predictive planning, forcing the shop to adapt to changes. In our discussion, we assume the existence of a shop with the following characteristics: * a set of predefined parts to be produced in small batches; * one or more sequences of manufacturing operations defined for each part; * one or more work centers in which an operation is …
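The hierarchy described above (shop floor, work centers, manufacturing cells, and individual machines, robots, and tools) maps naturally onto a nested data structure. A hypothetical sketch, not taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str                      # e.g., a mill, a robot, or a tool

@dataclass
class Cell:
    name: str
    machines: list[Machine] = field(default_factory=list)

@dataclass
class WorkCenter:
    name: str
    cells: list[Cell] = field(default_factory=list)

@dataclass
class ShopFloor:
    """Top level of the planning hierarchy; activities are selected and
    resources assigned and scheduled at each level below it."""
    work_centers: list[WorkCenter] = field(default_factory=list)

shop = ShopFloor([WorkCenter("WC1", [Cell("cell-1", [Machine("mill"), Machine("robot")])])])
print(len(shop.work_centers[0].cells[0].machines))   # -> 2
```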

Journal ArticleDOI
TL;DR: To achieve the goal of high availability, various techniques have been devised by AT&T Bell Laboratories for use in its family of ESS computers, of which the latest is the 3B20D processor used in the No. 5 ESS and recently released as a commercial product.
Abstract: Although fault-tolerant techniques were employed in some of the earliest digital computers, the advent of solid-state devices, first the transistor and later integrated circuits, greatly improved the overall reliability of computer systems. One notable exception to this trend, however, is the popular dynamic MOS RAM: memory systems based on such ICs are actually less reliable than the magnetic cores they replaced. In fact, the reliability of solid-state memory systems has been made to equal that of the older, magnetic-core systems only through the use of error-correcting codes; however, parity and ECC schemes are applicable only to subsystems that do not perform any data transformations, but perform only pure data transport (such as computer buses and I/O channels) or offer pure storage (such as main memory and disks). As a result of the improvement in basic component reliability on the one hand and the development of error-correcting codes on the other, the initial interest that computer scientists had in developing fault-tolerant techniques soon faded. During the 1960's and early 1970's, interest in such techniques remained limited to a number of specific application areas, including computer-controlled telephone switching systems, military and commercial real-time monitoring and control systems, commercial time-sharing systems, and airline reservation systems. Computer-controlled, electronic switching systems began to appear in central offices of the public telephone network in the US and in France around 1965. Because of the nature of the services they render, the computers incorporated in such switching complexes must provide very high availability. A typical requirement is that there be no more than two hours of system outage (downtime) in 40 years. To achieve the goal of high availability, various techniques have been devised by AT&T Bell Laboratories for use in its family of ESS computers, of which the latest is the 3B20D processor used in the No. 5 ESS and recently released as a commercial product (see Figure 1). Details differ in each ESS implementation, but the general scheme is to duplicate all critical components (such as the control unit and memory system). The running system utilizes one set of subsystems, while a duplicate set is either in a "hot backup" mode or is executing synchronously with the on-line set. The system detects errors either by matching the results produced by both sets (as in the No. 1A ESS) or by constructing each set from self-checking modules, which are themselves duplicates that match one another's …
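The duplicate-and-match idea described above can be caricatured in a few lines. This is only a toy illustration of result matching, not the 3B20D or No. 1A ESS design; the function and names are my own.

```python
def run_duplicated(task, inputs):
    """Execute the same task on two 'units' and match the results.
    A mismatch signals an error, roughly analogous to ESS match checking."""
    result_a = task(inputs)   # on-line unit
    result_b = task(inputs)   # duplicate unit (standing in for a hot backup)
    if result_a != result_b:
        raise RuntimeError("match failure: results disagree, switch to backup")
    return result_a

print(run_duplicated(lambda xs: sum(xs), [1, 2, 3]))   # -> 6
```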

Journal ArticleDOI
TL;DR: The goal of this project was to sample about 20 organizations, including IBM, and study their development practices, and it is believed that the average industry project is probably worse than what is described here.
Abstract: The term software engineering first appeared in the late 1960's to describe ways to develop, manage, and maintain software so that resulting products are reliable, correct, efficient, and flexible.1 The 15 years of software engineering study by the computer science community has created a need to assess the impact that numerous advances have had on actual software production. To address this need, IBM asked the University of Maryland to conduct a survey of different program development environments in industry to determine the state of the art in software development and to ascertain which software engineering techniques are most effective. Unlike other surveys, such as the recent one on Japanese technology,2 we were less interested in recent research topics. Journals, such as the IEEE Transactions on Software Engineering, adequately report such developments; we were more interested in discovering which methods and tools are actually being used by industry today.3 This report contains the results of that survey. The goal of this project, which began in spring 1981 and continued through summer 1983, was to sample about 20 organizations, including IBM, and study their development practices. We contacted major hardware vendors in the US, and most agreed to participate. Several other software companies and other "high-technology" companies were contacted and agreed to participate. While we acknowledge that this survey was not all inclusive, we did study each company in depth, and based on discussions with others in the field, we believe that what we found was typical. We were not interested in R&D activities in these companies. Most had individuals engaged in interesting developments, and most knew what was current in the field. Our primary concern was what the average programmers in these companies did to develop software projects. Data was collected in a two-step process. A detailed survey form was sent to each participating company. When the form was returned, a follow-up visit was made to clarify the answers given. We believe that this process, although limiting the number of places surveyed, allowed us to present more accurate information than if we had relied on the returned forms alone. Each survey form contained two parts. Section one asked for general comments on software development for the organization as a whole. The information typically represented the standards and practices document for the organization. In addition, several recently completed projects within each company were studied. Each project leader completed the second section of the survey form, which described the tools and techniques used on that project. Several companies were concerned that the projects we were looking at were not typical of them. (Interestingly, very few companies claimed to be doing typical software.) However, since the companies selected the projects they described on the form, we believe we saw the better developed projects; if there is any bias to our report, it is that the average industry project is probably worse than what we describe here. Thirty organizations in both the US and Japan participated in the study: five IBM divisions, 12 other US companies, and 13 Japanese companies. About half the Japanese companies were not interviewed, while the other half were interviewed to varying degrees of detail. All US companies were interviewed. The "Acknowledgments" section at the end of this article lists the US participants.
Some of the Japanese participants never responded to our request for permission to use their names, so only a few Japanese companies are listed. Table I characterizes the companies visited, divisions within a company, and the projects studied, arbitrarily …

Journal ArticleDOI
Machover, Myers
TL;DR: Sketchpad, as described in this paper, was the first interactive computer graphics system; it was reported at the Spring Joint Computer Conference in 1963 and is the achievement from which interactive computer graphics is often dated.
Abstract: Early graphics programmers sought algorithms to minimize computations; as more computing power became available, demands for graphic realism and interaction placed a premium on more efficient algorithms. Some have argued that the first computer graphics systems appeared with the first digital computers. Certainly MIT's Whirlwind computer, in the early 1950's, had CRT displays attached to it. In the middle 1950's, the Sage air-defense command and control system converted radar input into computer-generated pictures and provided the operator with a light pen for selecting targets. Sutherland's Sketchpad, reported to the Spring Joint Computer Conference in 1963, is the achievement from which interactive computer graphics is often dated.1 It "represented the first embodiment of a truly interactive computer graphics system," Herbert Freeman said, introducing the paper in his reprint collection.2 Sketchpad was simple by today's standards: a black-and-white, line-drawing system running slowly on a large computer. Freeman identified three major barriers to rapid development. The first was the then high cost of computing. To make graphics interactive imposed inordinate demands on the computer in terms of both processing requirements and memory size. The second barrier was that picture-generating software turned out to be more intricate than had been expected. It was necessary to have a data structure that would mirror the visual relationships, algorithms for hidden-line removal and other functions, and means for converting vectors to digitally oriented displays. Even as ostensibly simple a task as drawing a straight line segment or arc of a circle on a digitally oriented display turned out to require algorithms which were by no means trivial. Third, Freeman noted that the complexity of both system software and application software was grossly underestimated. Many of the early graphics achievements were mere "toys." They seemed impressive at the time, but were quite inadequate when compared to the demands of actual, economically sound interactive graphics design applications. Implicit in these barriers was a fourth: the necessity to generate successive frames in a time short enough for the user to consider the system interactive. This barrier, in turn, was the result of relatively slow displays, low-performance central processing units, brute-force algorithms, and inefficient software. Considerable progress has been made in overcoming these barriers.
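Freeman's point that even drawing a straight line segment on a digitally oriented display requires a non-trivial algorithm can be illustrated with Bresenham's integer line-rasterization algorithm, shown here as a general example; the article itself presents no code.

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only rasterization of a line segment (Bresenham's algorithm),
    the kind of 'non-trivial' digital line drawing early systems needed."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:        # step in x
            err += dy
            x0 += sx
        if e2 <= dx:        # step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```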


Journal ArticleDOI
Grinberg, Nudd, Etchells
TL;DR: The Hughes Aircraft Company's 3-D machine, optimized for use on two-dimensionally structured data as encountered in image analysis and two-dimensional signal processing, both provides high throughput and offers great flexibility together with a programmable structure.
Abstract: The advent of VLSI as a viable circuit technology bears a number of significant implications for systems designers. For example, design costs increase dramatically as circuitry designs become more complex while, in the absence of special design considerations, the testability of these new designs declines. In order to minimize the impact of this rise in design costs, new high-throughput machines must be made as flexible and widely applicable as possible. However, such design constraints are often considered impracticable for advanced systems. Innovative architectures and packaging technologies must be simultaneously developed to enhance each of the performance measures of the VLSI computing system. One such architectural/technological development has been made by the Hughes Aircraft Company in the form of three-dimensional microelectronics.1,2 Optimized for use on two-dimensionally structured data as encountered in image analysis and two-dimensional signal processing, it both provides high throughput and offers great flexibility together with a programmable structure. In essence, it is a cellular architecture that allocates one processor for each pixel or matrix element of the input data set. Such structures, while they are by no means common, are fairly well known; early development of these structures began with the Illiac3 and continues today with ICL's DAP machine,4 UCL's CLIP-4,5 and Goodyear's MPP.6 The most current of these machines uses a very large number of processors (64 x 64 for the DAP, 96 x 96 for the CLIP, and 128 x 128 for the MPP) to achieve high aggregate instruction rates. They are single-instruction, multiple-data machines with bit-serial arithmetic, and in this respect they are similar to Hughes' 3-D machine. However, as now implemented, they primarily use MSI TTL logic circuitry and achieve an instruction cycle equivalent to 100 to 200 ns. What is radically different about the Hughes 3-D machine is the degree of integration employed in its construction, the 3-D organization of the processing circuitry, and the highly modular structure of the architecture. In particular, this architectural modularity permits the processor to be tailored to any of a wide range of applications and still retain its basic organization. The resulting architecture combines easy programmability with instruction rates on the order of 10^4 mops.

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects, in order to achieve uniform, integrated access to the different DBMSs.
Abstract: It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMSs). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

Journal ArticleDOI
Woo
TL;DR: This article examines four data structure conversions using two types of internal representation: a CSG tree and a boundary representation graph.
Abstract: Because of their many possible uses, geometric models must be application-independent as well as informationally complete. This article examines four data structure conversions using two types of internal representation: a CSG tree and a boundary representation graph. Just as research in computer graphics in the 1960's led to the development of contemporary CAD/CAM systems, so is research in geometric modeling in the 1970's likely to be the core of a new generation of CAD/CAM systems in the 80's. One of the goals of geometric modeling is to enable the construction of a central database for the information storage, retrieval, and updating of three-dimensional mechanical components, assemblies, and systems. Because such information is intended for a wide variety of purposes, such as documentation, drafting, engineering analysis, simulation, process planning, part programming, and automatic assembly, a geometric model must be not only complete (having all the necessary information) but also application-independent. At the University of Michigan, the College of Engineering recently embarked on a long-range program (currently funded by a grant from the Air Force Office of Scientific Research) involving integrated manufacturing and robotics. Amid a wide range of research applications such as high- …
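For concreteness, a CSG tree is a binary tree whose leaves are parameterized primitives and whose interior nodes are Boolean set operations on sub-solids. The minimal sketch below is illustrative only; the types and fields are assumptions, not the article's actual data structures, and the boundary-representation counterpart (a graph of faces, edges, and vertices) is omitted.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    """A solid primitive such as a block or cylinder, with its parameters."""
    kind: str
    params: dict

@dataclass
class CSGNode:
    """Interior node of a CSG tree: a Boolean operation on two sub-solids."""
    op: str                       # "union", "difference", or "intersection"
    left: "Primitive | CSGNode"
    right: "Primitive | CSGNode"

# A block with a cylindrical hole: difference(block, cylinder).
part = CSGNode(
    op="difference",
    left=Primitive("block", {"dx": 10, "dy": 5, "dz": 2}),
    right=Primitive("cylinder", {"radius": 1, "height": 2}),
)
print(part.op, "of", part.left.kind, "and", part.right.kind)
```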

Journal ArticleDOI
TL;DR: Computer designers have to be students of reliability, as do computer system users, and the computer system user must be able to understand the advantages and limitations of the state-of-the-art in reliability design, and specify the requirements for the system's reliability so that the application or computation can be successfully completed.

Journal ArticleDOI
Larson
TL;DR: An overview of the hardware and software that support multitasking on the Cray X-MP is presented, and the programming considerations required to write multitasking programs are described.
Abstract: The need for increased computational speeds, beyond what most physical devices can deliver, has led machine designers to explore parallelism in computer architectures. New hardware may exploit the parallelism inherent in application programs at any of several levels, including single operations (data flow), loop vectorization (pipelining), subroutine tasks (multitasking), and user jobs (independent uniprocessing). The Cray X-MP-2 is a multiprocessor system capable of two types of operation: simultaneous scalar/vector processing of independent job streams and simultaneous scalar/vector processing of independent tasks within one job. Consequently, it is capable of exploiting more than one level of parallelism at the same time. This article presents an overview of the hardware and software that support multitasking on the X-MP and describes the programming considerations required to write multitasking programs. Although the Cray X-MP-2 has the same physical appearance as the Cray-1, the chassis now contains two processors, each an enhanced version of the Cray-1 CPU. Moreover, a combination of several architectural changes2 contributes to an improved performance in each processor (see Table 1 for major differences). The clock period is reduced from 12.5 to 9.5 ns, thereby quickening the issue and execution of instructions. The number of memory ports per processor is increased from one to four, allowing two memory reads, one memory write, and I/O to proceed simultaneously in each processor. Finally, the "chaining" mechanism, which allows results of previous operations to enter the computational pipeline, is improved by eliminating the fixed chain slot time (the clock period when links are put together).3 Chains of operations can now include both memory reads and writes and arithmetic computations, and intermediate results automatically enter the pipeline as soon as they become available. Figure 1 illustrates the overall system organization. The mainframe communicates with the front-end system and external data-storage devices through the I/O subsystem. An optional solid-state storage device, the SSD, provides an internal, second-level store. Both processors share a central memory. Each user's program and data area in memory are separated by individual base and limit registers. Special hardware enables the efficient and coordinated application of multiple processors to a single job. All processors assigned to a job share a unique set of binary semaphore and data registers. The semaphore registers allow the two processors to exchange signals indicating that processing should begin, should wait, or has been completed. Semaphore deadlock, a programming bug that causes each processor to wait on the other, …
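The role of the shared semaphore registers, letting one processor signal another that processing should begin, wait, or has completed, can be imitated in miniature with threads and an event flag. This is a sketch under my own naming, not Cray's multitasking library or Fortran interface.

```python
import threading

results = {}
half_done = threading.Event()    # plays the role of a shared semaphore register

def task_a(data):
    results["a"] = sum(data)     # first half of the work
    half_done.set()              # signal: my partial result is ready

def task_b(data):
    partial = sum(data)          # second half of the work
    half_done.wait()             # synchronize on the other "processor's" signal
    results["total"] = results["a"] + partial

data = list(range(100))
t1 = threading.Thread(target=task_a, args=(data[:50],))
t2 = threading.Thread(target=task_b, args=(data[50:],))
t1.start(); t2.start()
t1.join(); t2.join()
print(results["total"])          # -> 4950
```

If task_b also waited on a signal that task_a never sets, both halves would block forever, which is the semaphore-deadlock bug the abstract warns about.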