Showing papers on "Systems architecture" published in 1973


Book ChapterDOI
TL;DR: In this article, the authors present a collection of theoretical and empirical arguments against the separation of testing from a theory of cognition, describe a general model of cognition, and examine its implications for individual differences.
Abstract: This chapter presents a collection of theoretical and empirical arguments against the separation of testing from a theory of cognition. It describes a general model of cognition and presents some of its implications for individual differences. The chapter presents a number of experiments that relate the model to present tests of intelligence, considers the implications of these results for both psychometrics and cognitive psychology, and indicates some directions for future research. The theoretical model used is the Distributed Memory model, representative of a class of models acceptable to the majority of experimental psychologists interested in cognition. The theoretical approach underlying the Distributed Memory model is that the brain can be thought of as a computing system and that, as such, it has a physical and implied logical construction called its system architecture. The physical structures comprising the system architecture are exercised by control processes analogous to programs in an actual computer.

285 citations


Journal ArticleDOI
E. A. Feustel
TL;DR: The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation.
Abstract: This paper proposes that all data elements in a computer memory be made to be self-identifying by means of a tag. The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation. It discusses the advantages that accrue to the hardware designer in the implementation and gives examples for large- and small-scale systems. The economic costs of such an implementation for a minicomputer system are examined. The paper concludes that such a machine architecture may well be a suitable replacement for the traditional von Neumann architecture.
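
A minimal sketch of the tagged-word idea in C, for illustration only: the tag set, the field layout, and the type-checked add below are invented here, not taken from Feustel's design.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical tagged memory word: every datum carries a type tag,
       so operations can be checked against it at run time. Tag values
       and field widths are illustrative, not from the paper. */
    enum tag { TAG_INT, TAG_FLOAT, TAG_POINTER, TAG_INSTRUCTION };

    struct tagged_word {
        enum tag tag;      /* self-identifying type field */
        uint32_t value;    /* data bits, interpreted according to tag */
    };

    /* An add that refuses to combine incompatible types -- the kind of
       check a tagged architecture could perform in hardware. */
    int tagged_add(struct tagged_word a, struct tagged_word b,
                   struct tagged_word *out)
    {
        if (a.tag != TAG_INT || b.tag != TAG_INT)
            return -1;     /* would raise a hardware type fault */
        out->tag = TAG_INT;
        out->value = a.value + b.value;
        return 0;
    }

    int main(void)
    {
        struct tagged_word x = { TAG_INT, 2 }, y = { TAG_INT, 3 }, z;
        if (tagged_add(x, y, &z) == 0)
            printf("%u\n", (unsigned) z.value);  /* prints 5 */
        return 0;
    }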

145 citations



Proceedings ArticleDOI
24 Sep 1973
TL;DR: An algorithm for the synthesis of applications-oriented microcode for a dynamically microprogrammable computer is described, and heuristic tuning is suggested as an automation of the manual tuning process.
Abstract: This paper describes an algorithm for the synthesis of applications-oriented microcode for a dynamically microprogrammable computer. The need for such an algorithm is expressed by Reigel, Faber, and Fisher as an integral step in the solution of the tuning problem, or the problem of modifying a system architecture in order to optimally solve a given problem. This modification of architecture takes place through the synthesis of microprograms that are stored in writable control storage. Writable control storage permits each class of user application programs to execute with a specialized instruction set, or architecture. A synthesis algorithm provides a method for generating these specialized architectures. The required synthesis algorithm should be autonomous, should not require a priori knowledge of the user application, and should be adaptable to day-by-day changes in user problems. Current attempts at tuning architectures can be considered manual tuning. Heuristic tuning is suggested as an automation of the manual tuning process. Several phases of heuristic tuning are summarized. The architecture synthesis phase is considered in depth, and an algorithm for microprogram synthesis is given. Several examples of the synthesis algorithm are presented, and the expected execution improvements are shown.
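
A rough sketch in C of what one heuristic-tuning step might look like: profile an instruction trace for its most frequent adjacent opcode pair and nominate that pair as a fused instruction to be synthesized into writable control storage. The trace, the two-opcode window, and all names are invented; this is not the paper's algorithm.

    #include <stdio.h>
    #include <string.h>

    #define NOPS 8  /* size of the illustrative opcode set */

    int main(void)
    {
        /* a toy execution trace over opcodes 0..NOPS-1 */
        int trace[] = { 1, 2, 1, 2, 3, 1, 2, 4, 1, 2 };
        int n = (int)(sizeof trace / sizeof trace[0]);
        int freq[NOPS][NOPS];
        memset(freq, 0, sizeof freq);

        /* count adjacent opcode pairs */
        for (int i = 0; i + 1 < n; i++)
            freq[trace[i]][trace[i + 1]]++;

        /* pick the hottest pair */
        int best_a = 0, best_b = 0;
        for (int a = 0; a < NOPS; a++)
            for (int b = 0; b < NOPS; b++)
                if (freq[a][b] > freq[best_a][best_b]) {
                    best_a = a;
                    best_b = b;
                }

        /* in a real system this pair would drive microprogram synthesis */
        printf("fuse opcodes %d,%d (seen %d times) into one microcoded op\n",
               best_a, best_b, freq[best_a][best_b]);
        return 0;
    }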

5 citations


Journal ArticleDOI
09 Dec 1973
TL;DR: A fault-tolerant multiprocessor architecture suitable for real-time control applications requiring an extremely high degree of reliability is presented, related to existing fault-tolerant systems, and the unique characteristics of the design are indicated.
Abstract: This paper presents a fault tolerant multiprocessor architecture suitable for real time control applications requiring an extremely high degree of reliability. The architecture satisfies the following requirements:
1) Ability to deal with software as well as hardware faults: The proposed architecture is based on the assignment of distinct but redundant software modules to each task.
2) Efficient use of resources: The proposed architecture is a multiprocessor using time redundancy for fault correction. Thus, redundancy (beyond that needed for fault detection) is invoked only when a fault is detected. In normal operation, this extra capacity is available as an additional computing resource.
3) No hard core: In addition to the usual replication of system components, a partitioned system executive and a unique communication facility are defined which ensure that the available redundancy will not be lost through a “domino” effect.
4) Interaction of computing units with sensors and effectors: The manner in which system architecture must be responsive to the amount and type of redundancy provided by the sensors and effectors is shown.
5) Use of current technology: The proposed architecture is based on the use of currently available hardware for the major system components.
After a detailed description of the architecture and the method of system operation, the system is related to existing fault tolerant systems, and unique characteristics of the present design are indicated.
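
A minimal sketch in C of requirements 1) and 2) above: each task carries distinct but redundant software modules, and the alternate runs only after an acceptance check detects a fault (time redundancy). The task, the acceptance test, and all names are invented for illustration.

    #include <stdio.h>

    typedef int (*task_module)(int input, int *result);

    static int primary_isqrt(int x, int *r)    /* deliberately faulty */
    {
        *r = x / 2;                            /* wrong for most inputs */
        return 0;
    }

    static int alternate_isqrt(int x, int *r)  /* independent implementation */
    {
        int s = 0;
        while ((s + 1) * (s + 1) <= x)
            s++;
        *r = s;
        return 0;
    }

    static int acceptable(int x, int r)        /* fault-detection check */
    {
        return r * r <= x && (r + 1) * (r + 1) > x;
    }

    /* Run the primary module; invoke the redundant alternate only when
       the acceptance check fails, so the extra capacity is consumed
       only after a fault is detected. */
    int run_task(int input, task_module primary, task_module alternate,
                 int *result)
    {
        if (primary(input, result) == 0 && acceptable(input, *result))
            return 0;                          /* normal case: no extra cost */
        if (alternate(input, result) == 0 && acceptable(input, *result))
            return 0;                          /* fault masked by redundancy */
        return -1;                             /* both modules failed */
    }

    int main(void)
    {
        int r;
        if (run_task(16, primary_isqrt, alternate_isqrt, &r) == 0)
            printf("isqrt(16) = %d\n", r);     /* alternate yields 4 */
        return 0;
    }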

5 citations


Book ChapterDOI
08 Oct 1973
TL;DR: The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls.
Abstract: Various authors and computer designers (computer architects) have defined computer architecture in several ways. Amdahl, Blaauw, and Brooks (1964), in the article “Architecture of the IBM System/360”, define architecture as follows: “The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.”

2 citations


Proceedings ArticleDOI
01 Jan 1973
TL;DR: A methodology is presented for designing a performance monitor that is integrated into the system architecture, along with a new implementation technique that uses microprogramming to provide a flexible and efficient interface between the monitor and the system.
Abstract: Many computer systems are sufficiently complex that it is not possible to predict or understand all of the interactions among system components. Performance monitoring is a technique for measuring system performance and recording the internal states of the system in response to various events. Current monitors employ hardware and/or software techniques. This report presents a methodology for designing a performance monitor which is integrated into the system architecture. A new technique for implementing a monitor is described which uses microprogramming to provide a flexible and efficient interface between the monitor and the system.
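
A small sketch in C of the integrated-monitor idea: every dispatched operation passes through a hook that counts events, standing in for probes that microcode could implement beneath the instruction set. The event names and the dispatch loop are invented for illustration; the paper's microprogrammed interface is not shown here.

    #include <stdio.h>

    enum event { EV_FETCH, EV_ALU, EV_MEMORY, EV_BRANCH, EV_COUNT };

    static unsigned long counters[EV_COUNT];
    static const char *names[EV_COUNT] = { "fetch", "alu", "memory", "branch" };

    static void monitor_hook(enum event e)  /* probe woven into dispatch */
    {
        counters[e]++;
    }

    static void dispatch(enum event e)
    {
        monitor_hook(e);  /* monitoring happens on every operation */
        /* ... the instruction's normal micro-operations would run here ... */
    }

    int main(void)
    {
        enum event program[] = { EV_FETCH, EV_ALU, EV_FETCH, EV_MEMORY,
                                 EV_FETCH, EV_BRANCH, EV_FETCH, EV_ALU };
        int n = (int)(sizeof program / sizeof program[0]);
        for (int i = 0; i < n; i++)
            dispatch(program[i]);
        for (int e = 0; e < EV_COUNT; e++)
            printf("%-7s %lu\n", names[e], counters[e]);
        return 0;
    }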

1 citation


Journal ArticleDOI
Harlan D. Mills, Max L. Wilson
01 Jan 1973
TL;DR: All these seemingly separate functions, which are currently separate for historical reasons as much as any other, can and should be addressed by a single integrated (but subsettable) “Kernel System”.
Abstract: We believe that the current boundaries and distinctions between operating systems, data base management systems (both “host language” and “self-contained”), programming language processors (both interpretive and compiled), programming support systems (library management, testing, and integration services), instructional systems, and certain other generalized data processing services can and should be eliminated. That is, we believe that all these seemingly separate functions, which are currently separate for historical reasons as much as any other, can and should be addressed by a single integrated (but subsettable) “Kernel System”. We also believe that with the proper system architecture and implementation techniques (e.g., top-down and structured programming) and with the wholesale elimination of duplicated functions, a Kernel System can be built that would address the sum total of these functions and, at the same time, remain comparable in size and complexity to a typical self-contained data base management system alone. Further, such a structured Kernel System would be much better able to support the development, continued evolution, and operation of both itself and its applications.

1 citation