
Showing papers in "IEEE Computer in 1974"


Journal ArticleDOI
TL;DR: The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique often used for software development when a hardware base is being altered.
Abstract: The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique. It is often used for software development when a hardware base is being altered. For example, if a programmer is developing software for some new special purpose (e.g., aerospace) computer X which is under construction and as yet unavailable, he will likely begin by writing a simulator for that computer on some available general-purpose machine G. The simulator will provide a detailed simulation of the special-purpose environment X, including its processor, memory, and I/O devices. Except for possible timing dependencies, programs which run on the “simulated machine X” can later run on the “real machine X” (when it is finally built and checked out) with identical effect. The programs running on X can be arbitrary — including code to exercise simulated I/O devices, move data and instructions anywhere in simulated memory, or execute any instruction of the simulated machine. The simulator provides a layer of software filtering which protects the resources of the machine G from being misused by programs on X.
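As a rough illustration of the technique (not drawn from the paper itself), the sketch below runs an instruction-by-instruction simulation of a hypothetical accumulator machine X on a host machine; the opcodes, program, and memory layout are invented for the example. The illegal-opcode check loosely stands in for the "layer of software filtering" the abstract mentions.

```python
# Minimal sketch of an instruction-by-instruction simulator for a
# hypothetical accumulator machine "X" hosted on a general-purpose
# machine "G".  The instruction set (LOAD/ADD/STORE/HALT) is invented
# for illustration; it is not the machine discussed in the article.

def simulate(memory, pc=0):
    acc = 0
    while True:
        op, operand = memory[pc]          # fetch
        pc += 1
        if op == "LOAD":                  # decode and execute
            acc = memory[operand][1]
        elif op == "ADD":
            acc += memory[operand][1]
        elif op == "STORE":
            memory[operand] = ("DATA", acc)
        elif op == "HALT":
            return acc, memory
        else:
            # the simulator, not the host hardware, rejects bad programs
            raise ValueError(f"illegal opcode {op!r} at {pc - 1}")

# Program: load cell 4, add cell 5, store into cell 6, halt.
program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
           ("DATA", 2), ("DATA", 3), ("DATA", 0)]
result, _ = simulate(program)
print(result)   # 5
```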

963 citations


Journal ArticleDOI
TL;DR: Test patterns for testing digital circuits are usually checked with a test verification program to determine if all or most of the possible faults will be detected.

Abstract: Test patterns for testing digital circuits are usually checked with a test verification program to determine if all or most of the possible faults will be detected. Historically, such test verification would be accomplished with many simulations: one for each possible fault.
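A minimal sketch of that historical "one simulation per fault" approach, using an invented two-gate circuit and single stuck-at faults; the circuit, fault list, and test patterns are examples only, not from the article.

```python
# Illustrative sketch of fault-by-fault test verification on an invented
# two-gate circuit (out = (a AND b) OR c), checking which single stuck-at
# faults a given set of test patterns detects.

def circuit(a, b, c, fault=None):
    """Evaluate the circuit; `fault` is an optional (line, stuck_value) pair
    injected at the moment that line's value is produced."""
    def inject(line, value):
        return fault[1] if fault and fault[0] == line else value
    a, b, c = inject("a", a), inject("b", b), inject("c", c)
    d = inject("d", int(a and b))
    return inject("out", int(d or c))

faults = [(line, v) for line in ("a", "b", "c", "d", "out") for v in (0, 1)]
tests = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]

# Historically: re-simulate every test pattern once for every possible fault.
detected = {f for f in faults
            if any(circuit(*t, fault=f) != circuit(*t) for t in tests)}
print(f"coverage: {len(detected)}/{len(faults)} stuck-at faults detected")
```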

208 citations


Journal ArticleDOI
TL;DR: Multiple-valued logic is concerned with the intermediate choices between true and false that two-valued options lack, for example, when determining whether the status of a computer system is go or no-go.
Abstract: Computer scientists are familiar with options in which there are no middle choices between true and false. The lack of such choices is inconvenient — even critical — for example, when determining whether the status of a computer system is go or no-go. Multiple-valued logic is concerned with these intermediate choices.
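One concrete example of such an intermediate choice is a three-valued (Kleene-style) logic with an "unknown" value between true and false. The encoding below is an illustrative sketch, not taken from the article.

```python
# Sketch of a three-valued (Kleene-style) logic: 0 = false, 1 = unknown,
# 2 = true.  The "unknown" value is one example of the intermediate
# choices multiple-valued logic provides; the encoding is illustrative.
F, U, T = 0, 1, 2

def mv_and(x, y):
    return min(x, y)          # AND is the minimum of the truth values

def mv_or(x, y):
    return max(x, y)          # OR is the maximum

def mv_not(x):
    return 2 - x              # NOT reflects the value about "unknown"

# A system status that is neither fully "go" nor "no-go":
sensor_a, sensor_b = T, U
print(mv_and(sensor_a, sensor_b))   # 1 -> overall status is still undetermined
```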

77 citations


Journal ArticleDOI
TL;DR: This paper contains experimental measurements of a rather wide class of algorithms, which should be helpful in establishing some parameters of machine organization.
Abstract: In the folklore of computer architecture there has been much speculation about the effectiveness of various machines in performing various computations. While it is easy to design a machine (or part of a machine) and study its effectiveness on this algorithm or that, it is rather difficult to make general effectiveness statements about classes of algorithms and machines. We are attempting to move in this direction, and this paper contains experimental measurements of a rather wide class of algorithms. Such measurements should be helpful in establishing some parameters of machine organization.

76 citations


Journal ArticleDOI
TL;DR: Considering the relative success achieved in one-dimensional signal processing, it is to be expected that far greater strides could be made in the visual two-dimensional realm of signal processing.
Abstract: The state of the art in large-scale digital computers has recently opened the way for high-resolution image processing by digital techniques. With the increasing availability of digital image input/output devices it is becoming quite feasible for the average computing facility to embark upon high-quality image restoration and enhancement. The motivation for such processes becomes self-evident when one realizes the tremendous emphasis man puts on his visual senses for survival. Considering the relative success achieved in one-dimensional (usually time) signal processing, it is to be expected that far greater strides could be made in the visual two-dimensional realm of signal processing.

68 citations


Journal ArticleDOI
TL;DR: To anyone who has had the slightest connection with the design and construction of large programs for digital computers, it is obvious that this is no trivial task.
Abstract: To anyone who has had the slightest connection with the design and construction of large programs for digital computers, it is obvious that this is no trivial task. Many times the design of such programs is woefully inadequate, and the effort required to get them right is tremendous. The sometimes ruinous cost of such inadequate design is well-known.

46 citations


Journal ArticleDOI
David W. Hightower1
TL;DR: In this paper, a survey of the literature on the interconnection problem is presented, including Pin Assignment, Layering, Ordering, Wire List Determination, Spanning Trees, Rectilinear Steiner Trees, and Wire Layout.
Abstract: This paper represents a fairly extensive survey of the literature on the interconnection problem. The topics covered are Pin Assignment, Layering, Ordering, Wire List Determination, Spanning Trees, Rectilinear Steiner Trees, and Wire Layout. In addition, several new ideas are presented which could provide for better wire layout. Algorithms are presented in a way that makes them easy to understand, hence easy to discuss and apply. Formal statement of the algorithms can be found in the references cited.
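As a small illustration of one of the surveyed topics, the sketch below connects a set of pins with a minimum spanning tree under rectilinear (Manhattan) wire length using Prim's algorithm. The pin coordinates are invented; a rectilinear Steiner tree would generally use less wire but is much harder to compute.

```python
# Minimal sketch of one survey topic: connecting a set of pins with a
# minimum spanning tree under rectilinear (Manhattan) wire length.

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def spanning_tree(pins):
    """Return (total wire length, list of connecting edges) via Prim's algorithm."""
    in_tree = {0}
    edges, total = [], 0
    while len(in_tree) < len(pins):
        # cheapest wire from the partially built tree to an unconnected pin
        i, j = min(((i, j) for i in in_tree for j in range(len(pins))
                    if j not in in_tree),
                   key=lambda e: manhattan(pins[e[0]], pins[e[1]]))
        in_tree.add(j)
        edges.append((i, j))
        total += manhattan(pins[i], pins[j])
    return total, edges

pins = [(0, 0), (4, 1), (1, 3), (5, 4)]   # invented pin locations
length, wiring = spanning_tree(pins)
print(length, wiring)
```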

44 citations


Journal ArticleDOI
TL;DR: The ability to selectively control access to information in computing systems has taken on a heightened importance in recent years, and concern over the impact of improper, inadequate controls and safeguards for this increasingly concentrated, inexpensive, and pervasive information processing power is rightfully increasing.
Abstract: The ability to selectively control access to information in computing systems has taken on a heightened importance in recent years, and we can expect that concern to continue to grow. Large amounts of data are now being concentrated in readily processible form, technical developments in networking are linking those collections, and the computing power available to examine these data continues to increase. Each of these developments is being accompanied by dramatic decreases in costs. Public, institutional, governmental, and military concern over the impact of improper, inadequate controls and safeguards for this increasingly concentrated, inexpensive, and pervasive information processing power is rightfully increasing.

40 citations


Journal ArticleDOI
TL;DR: Considerable advantages may be gained by considering higher-radix systems, even if decimal schemes are presently out of reach.
Abstract: We live in a binary world of computers, accepting the inevitability of dealing with strings of 0's and 1's, simply because this is dictated by the two-valued nature of switching primitives which make up the machines. Yet there is little doubt that most of us would prefer decimal machines if they were available. Present technology is unlikely to result in such machines in the near future, at least not the kind where basic building blocks are inherently 10-valued. However, this does not mean that the binary approach must continue to be the only alternative. Considerable advantages may be gained by considering higher-radix systems, even if decimal schemes are presently out of reach.
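A quick numerical illustration of the point about radices, with arbitrarily chosen bases and an arbitrary value: the same number needs fewer digit positions as the radix grows.

```python
# Sketch of the digit-count effect of higher radices; the radices chosen
# (2, 4, 10, 16) and the value are just examples.

def to_radix(n, base):
    """Return the digits of n in the given base, most significant first."""
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1] or [0]

value = 1974
for base in (2, 4, 10, 16):
    digits = to_radix(value, base)
    print(f"radix {base:2d}: {len(digits)} digits  {digits}")
```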

40 citations


Journal ArticleDOI
TL;DR: Several recent and proposed computer systems have employed parallel and pipelined architectures to increase instruction execution rate, or throughput, and these systems generally employ single instruction stream-single data stream processing.
Abstract: Several recent and proposed computer systems have employed parallel and pipelined architectures to increase instruction execution rate, or throughput. These vary from the giant ILLIAC IV1 with its large number of processing elements constrained to perform nearly identical computations in unison (single instruction stream-multiple data stream9) to the Carnegie-Mellon C.mmp system2 employing a number of independent minicomputers with shared memory (multiple instruction stream-multiple data stream). On the other hand, pipelining has been used in numerous large computers, such as the Control Data 6600, 7600, and STAR, the IBM System 360/91 and 360/195, and the Texas Instruments ASC, to improve throughput. These systems generally employ single instruction stream-single data stream processing, although some machines in this category also have “vector” instructions that operate on multiple data streams.
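A back-of-the-envelope sketch of why pipelining raises throughput, assuming an idealized s-stage pipeline with no stalls or hazards; the stage and instruction counts are arbitrary illustrative numbers.

```python
# An s-stage pipeline completes n instructions in about (s + n - 1) stage
# times, versus s * n when each instruction runs start-to-finish alone.

def unpipelined_time(n, stages):
    return stages * n

def pipelined_time(n, stages):
    return stages + (n - 1)   # fill the pipe once, then one result per stage time

n, stages = 1000, 4
print(unpipelined_time(n, stages))                                # 4000 stage times
print(pipelined_time(n, stages))                                  # 1003 stage times
print(unpipelined_time(n, stages) / pipelined_time(n, stages))    # ~4x speedup
```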

34 citations


Journal ArticleDOI
TL;DR: The last 10 to 15 years have seen the evolution of hardware diagnosis and testing from an art to a science, but software testing has not experienced the same growth.
Abstract: The last 10 to 15 years have seen the evolution of hardware diagnosis and testing from an art to a science (see Chang).1 Well conceived and well documented hardware test strategies are now available, as well as reliability measures for hardware designs. Software testing, on the other hand, has not experienced the same growth. (This is not to say that software is not being tested; many complex software systems are up and running without significant problems.) However, unlike hardware diagnosis, software testing methodology is very primitive.

Journal ArticleDOI
TL;DR: The steady growth of modern communication requirements has resulted in a steady increase in the volume of pictorial data that must be transmitted from one location to another.
Abstract: The steady growth of modern communication requirements has resulted in a steady increase in the volume of pictorial data that must be transmitted from one location to another. In some cases, although image transmission to a remote location is not necessary, one does need to store the images for future retrieval and analysis.

Journal ArticleDOI
TL;DR: It is useful to look briefly at networks from the point of view of their goals, their possible configurations, and their level of integration.
Abstract: The connection of several computers into a network poses new problems for the operating system designer. In order to appreciate these problems fully, it is useful to look briefly at networks from the point of view of their goals, their possible configurations, and their level of integration.

Journal ArticleDOI
TL;DR: This paper considers the case of paged memory systems — i.e., systems whose physical and logical address space is partitioned into equal sized blocks of contiguous addresses, which are used by many computer systems.
Abstract: Dynamic memory management is an important advance in memory allocation, especially in virtual memory systems. In this paper we consider the case of paged memory systems — i.e., systems whose physical and logical address space is partitioned into equal sized blocks of contiguous addresses. Paged memories have been used by many computer systems. However, the relationships among page fault frequency (the frequency of those instances at which an executing program requires a page of data or instructions not in main memory), efficiency, and space-time product with various replacement algorithms and page sizes are still not sufficiently understood and are of considerable interest.
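As an illustrative sketch of the quantities involved, the code below counts page faults under FIFO and LRU replacement for a made-up reference string and two frame counts; none of the numbers come from the paper.

```python
# Minimal sketch of page fault counting under two replacement policies.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    resident, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.remove(queue.popleft())   # evict oldest resident page
            resident.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    resident, faults = OrderedDict(), 0
    for p in refs:
        if p in resident:
            resident.move_to_end(p)                # mark as most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)       # evict least recently used
            resident[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]        # invented reference string
for frames in (3, 4):
    print(frames, fifo_faults(refs, frames), lru_faults(refs, frames))
```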

Journal ArticleDOI
TL;DR: This presentation is an outgrowth of a workshop session held at Lake Arrowhead in 1973 on Distributed Software, and the participants in the session were Dave Farber of UC Irvine (Chairman), and Marty Graham of UC Berkeley.
Abstract: This presentation is an outgrowth of a workshop session held at Lake Arrowhead in 1973 on Distributed Software. The participants in the session were Dave Farber of UC Irvine (Chairman), Don Bennett of Sperry Rand, Bob Bressler of Bolt Beranek and Newman, Larry Rowe of UC Irvine, Bob Metcalfe of Xerox PARC, and Marty Graham of UC Berkeley.

Journal ArticleDOI
TL;DR: CDL, or Computer Design (description) Language, was created to bridge the gap between hardware and software designers and describes the computer elements and hardware algorithms at a level just above that of the electronics; this is the level commonly called the register transfer level.
Abstract: CDL, or Computer Design (description) Language, was first reported by the author in 1965. Since then, there have been some changes and many versions of simulators for different computer systems. The language was created to bridge the gap between hardware and software designers. As such, it describes the computer elements and hardware algorithms at a level just above that of the electronics; this is the level commonly called the register transfer level.
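To suggest what "register transfer level" means in practice, here is a rough sketch of an instruction-fetch sequence expressed as transfers between named registers. This is ordinary Python written for illustration, not CDL syntax; the registers and memory contents are invented.

```python
# Rough illustration of the register-transfer level of description:
# named registers and the transfers between them.  Not CDL syntax.

registers = {"PC": 0, "MAR": 0, "MBR": 0, "IR": 0, "ACC": 0}
memory = [0x1005, 0x2006, 0x0000, 0, 0, 7, 8]   # toy encoded instructions/data

def instruction_fetch(r, mem):
    """One transfer sequence: MAR <- PC; MBR <- M[MAR]; IR <- MBR; PC <- PC + 1."""
    r["MAR"] = r["PC"]
    r["MBR"] = mem[r["MAR"]]
    r["IR"] = r["MBR"]
    r["PC"] = r["PC"] + 1

instruction_fetch(registers, memory)
print(hex(registers["IR"]), registers["PC"])    # 0x1005 1
```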

Journal Article
TL;DR: “Supercomputer” is a generic term which could apply to various machines depending on the criteria used; however, only a few computers today have the following characteristics and/or capabilities.
Abstract: “Supercomputer” is a generic term which could apply to various machines depending on the criteria used. However, only a few computers today have the following characteristics and/or capabilities: • The machine can directly access very large amounts of memory (i.e., 10M-100M bits) and execute meaningful scalar instructions at very high speeds (i.e., 10M-100M instructions per second). • The high precision computations execute in a Floating Point Arithmetic Unit with either 60- or 64-bit data word formats. • Special hardware implementation techniques are employed to obtain better memory bandwidth and simultaneous I/O operations. • Within the CPU organization of these machines, pipelined execution and look-ahead hardware are used to achieve very fast arithmetic operation times (much less than 1 μsec for 64-bit operand multiplication). • An operational supercomputer system costs over $10 million today; the price varies depending on the peripherals and auxiliary storage devices included and on the accounting procedures used in costing the installation.

Journal ArticleDOI
TL;DR: A digital system can be described at several levels, the highest of which is the algorithmic level which specifies only the algorithm to be used for solving a design problem.
Abstract: A digital system can be described at several levels. 1) The highest level is the algorithmic level which specifies only the algorithm to be used for solving a design problem. 2) The second level is the PMS (Processor, Memory, Switch) level which describes a system by processors, memory components, peripheral units, and switching networks. 3) The instruction level describes the instructions of a computer. 4) The register transfer or microinstruction level describes operations among registers. 5) The logic level expresses networks in terms of gates and flip-flops. 6) The lowest level is the circuit level which implements gates and flip-flops by circuit elements such as transistors, resistors, etc.

Journal ArticleDOI
TL;DR: The purpose here will be to present the underlying philosophy of AHPL and to illustrate its usefulness as a design tool.
Abstract: AHPL (A Hardware Programming Language) is a hardware description language based on the notational conventions of APL. AHPL makes use of only those APL operations which can be readily interpreted as hardware primitives. A few special conventions have been added to AHPL to represent unique hardware capabilities such as parallel control sequences and asynchronous and conditional transfers. No attempt will be made in this short article to list every feature of AHPL. A complete description of the language may be found in Reference 1. Our purpose here will be to present the underlying philosophy of AHPL and to illustrate its usefulness as a design tool.

Journal ArticleDOI
TL;DR: Some of the image processing equipment at EG&G is described, along with the results obtained using this equipment, and digital holography is discussed in greater detail.
Abstract: EG&G, Inc., a leader in the field of photographic data acquisition and analysis for over 25 years, has for the past five years been placing increased emphasis on digital image processing in support of the Field Testing Division of the Los Alamos Scientific Laboratory under AEC Contract No. AT(29–1)1183. During that time, capabilities that originally involved photogrammetric and radiometric analysis have been augmented to include two-dimensional Fourier frequency analysis and spatial filtering of images. This article describes some of the image processing equipment at EG&G, along with the results obtained using this equipment, and discusses digital holography in greater detail.
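As a generic illustration of two-dimensional Fourier frequency analysis and spatial filtering (not EG&G's actual equipment or software), the sketch below low-pass filters a synthetic image with NumPy; the image and cutoff radius are arbitrary.

```python
# Sketch of 2-D Fourier analysis and spatial filtering: transform a
# synthetic image, keep only low spatial frequencies, transform back.
import numpy as np

def lowpass_filter(image, cutoff):
    """Zero out spatial frequencies farther than `cutoff` from the origin."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    distance = np.hypot(y - rows / 2, x - cols / 2)
    spectrum[distance > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

rng = np.random.default_rng(0)
image = rng.random((64, 64))             # stand-in for a digitized photograph
smoothed = lowpass_filter(image, cutoff=8)
print(image.std(), smoothed.std())       # filtering reduces high-frequency variation
```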

Journal ArticleDOI
Attila Toth1, Chip Holt1
TL;DR: The techniques and test system developed by Xerox for its large digital modules are described, beginning with a discussion of the issues peculiar to the test and repair of such modules.
Abstract: Developments in packaging reflect the fact that manufacturing economies are related to the level of integration achieved on electronic assemblies. Limiting the natural trend to build large, complex, highly-integrated assemblies is the corresponding drop in production yield. In recent years, however, yields for complex assemblies both in semiconductor chips and printed circuit boards have risen substantially. The effect has been to produce electronic assemblies whose complexities have placed new emphasis and new demands on testing. Traditional approaches to the problems of test and repair have, for the most part, been abandoned in favor of more manageable, cost-effective approaches. This paper describes the techniques and test system developed by Xerox for its large digital modules, beginning with a discussion of the issues peculiar to the test and repair of such modules.

Journal ArticleDOI
TL;DR: Design automation applied to custom MOS circuit design significantly lowers the total design cost by shortening the design cycle, reducing labor, and allowing error-free designs to be produced before being manufactured.

Abstract: Without sophisticated design automation techniques, the increasing complexity of custom MOS circuits requires long design cycles and large investments. Usually only a few parts of each type of custom MOS circuit are required, and the design cost becomes a significant portion of the cost of the manufactured parts. These facts prohibit many companies from using custom MOS circuits in their products. Design automation applied to custom MOS circuit design significantly lowers the total design cost by shortening the design cycle, reducing labor, and allowing error-free designs to be produced before being manufactured. This makes possible the use of custom MOS circuits, even when only a few parts are required.

Journal ArticleDOI
TL;DR: A simplified microcomputer architecture that offers maximum flexibility at minimum cost is described and experience with breadboard versions of this architecture has verified its usefulness over a surprisingly wide range of potential applications.
Abstract: The motivation behind this work has been the view that for 20 years computer hardware has become increasingly complex, languages more devious, and operating systems less efficient. Now, microcomputers afford some of us the opportunity to return to simpler systems. Inexpensive, LSI microcomputers could open up vast new markets. Unfortunately, development of these markets may be delayed by undue emphasis on performance levels which prohibit minimum cost. We are already promised more complex next-generation microcomputers before the initial ones have been widely applied. This paper discusses these points and describes a simplified microcomputer architecture that offers maximum flexibility at minimum cost. Design philosophy, programming considerations, and typical systems are also discussed. Experience with breadboard versions of this architecture has verified its usefulness over a surprisingly wide range of potential applications.

Journal ArticleDOI
Stanley Winkler1, Lee Danner1
TL;DR: Recognition that data in a computer system must be protected was a development brought on by three factors which caused a significant increase in the vulnerability of computer systems.
Abstract: Data security is the protection of data against unauthorized disclosure, modification, restriction, or destruction. Recognition1 that data in a computer system must be protected was a development brought on by three factors which caused a significant increase in the vulnerability of computer systems.

Journal ArticleDOI
TL;DR: Lookahead control was developed and performs these functions: • fetches instructions in advance continuously; • validates each instruction as encountered; • obtains operand addresses and operands for preprocessed instructions; • prepares for the alternatives of branching.
Abstract: Speed advances were provided in the third generation of computers. Since then, an order-of-magnitude improvement in speed has been achieved in logic circuitry. A high-speed parallel arithmetic unit can perform a complete operation in 80 nanoseconds; main memory, with the help of high speed buffers, can produce operands or instructions in 80 nanoseconds. Why should the arithmetic unit have to wait for several cycles before it gets each instruction and the operands for it? To improve this situation, lookahead control was developed and performs these functions: • fetches instructions in advance continuously; • validates each instruction as encountered; • obtains operand addresses and operands for preprocessed instructions; • prepares for the alternatives of branching.
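A schematic sketch of those lookahead functions, modeled as a small prefetch buffer: fetch ahead continuously, validate each instruction, pre-fetch its operands, and note both possible successors of a branch. The instruction format, memory contents, and buffer depth are invented for illustration.

```python
# Schematic sketch of lookahead control as a prefetch buffer.
from collections import deque

memory = {10: ("ADD", 100), 11: ("BRANCH", 40), 12: ("SUB", 101),
          40: ("LOAD", 102), 100: 7, 101: 3, 102: 9}
VALID_OPS = {"ADD", "SUB", "LOAD", "BRANCH"}

def lookahead(pc, depth=3):
    """Prefetch up to `depth` instructions ahead of the program counter."""
    buffer = deque()
    for addr in range(pc, pc + depth):
        instr = memory.get(addr)
        if instr is None or instr[0] not in VALID_OPS:    # validate as encountered
            break
        op, operand = instr
        entry = {"addr": addr, "op": op}
        if op == "BRANCH":
            # prepare for both alternatives: fall-through and branch target
            entry["next"] = (addr + 1, operand)
        else:
            entry["operand"] = memory.get(operand)         # operand prefetched
        buffer.append(entry)
    return buffer

for e in lookahead(10):
    print(e)
```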

Journal ArticleDOI
TL;DR: The potential applications for a microprocessor in an automobile can be divided roughly into the categories shown in Table 1, and only those monitoring/diagnosis functions which are directly related are mentioned.
Abstract: The potential applications for a microprocessor in an automobile can be divided roughly into the categories shown in Table 1. With existing emission regulations and possible future fuel economy legislation, a logical first application for an on-board computer is the engine control function. In this paper we shall concentrate on this application area, and will mention only those monitoring/diagnosis functions which are directly related.

Journal ArticleDOI
TL;DR: The goal is to measure the amount of DNA in each of the 46 chromosomes in normal human cells, and to use such measurements to detect any departures from normal that may be associated with disease or aging, or with exposure to radiation, drugs, pollutants, or other noxious agents.
Abstract: The biomedical image analysis system at the Lawrence Livermore Laboratory is being developed specifically to aid in the cytophotometric analysis of human chromosomes. Chromosomes carry genetic information in the form of deoxyribonucleic acid (DNA). Our goal is to measure the amount of DNA in each of the 46 chromosomes in normal human cells, and to use such measurements to detect any departures from normal that may be associated with disease or aging, or with exposure to radiation, drugs, pollutants, or other noxious agents.

Journal ArticleDOI
TL;DR: Microprocessors are very quickly reaching the stage of maturity as a design component in the field of digital systems even though they are relatively new.
Abstract: Microprocessors are very quickly reaching the stage of maturity as a design component in the field of digital systems even though they are relatively new. Not since the transistor has any component had such far-reaching implications for future developments in the digital systems area.

Journal ArticleDOI
Ann R. Ward1
TL;DR: This bibliography attempts to compile all articles, books, conference papers, seminar notes, and technical reports about LSI microprocessors which have been published in English from 1970 to April, 1974.
Abstract: LSI microprocessors and microcomputers are among the new developments on the constantly changing computer scene. Recent months have seen an explosion of publications on various aspects of the topic; but no list of references for comparison and study has appeared. This bibliography attempts to compile all articles, books, conference papers, seminar notes, and technical reports about LSI microprocessors which have been published in English from 1970 to April, 1974. Articles on processors which have been microprogrammed are outside the scope of the bibliography. Patents and specific microcomputer manuals are not included. Citations are grouped by year of publication, then listed alphabetically by author. It is interesting to note that the vast majority of the material on this topic has been published in the United States. Little has come from Europe and Japan.

Journal ArticleDOI
TL;DR: The character, scope, and economics of computer systems have dynamically evolved into a wide variety of architectures throughout the history of computing.
Abstract: The character, scope, and economics of computer systems have dynamically evolved into a wide variety of architectures throughout the history of computing. The present evolutionary movement of computer system architecture development is a rapid growth toward functionally distributing computer power. It therefore seems appropriate to review recent developments in distributed-function architecture.