
Showing papers in "Communications of The ACM in 1968"


Journal ArticleDOI
TL;DR: A multiprogramming system is described in which all activities are divided over a number of sequential processes, placed at hierarchical levels in each of which one or more independent abstractions have been implemented.
Abstract: A multiprogramming system is described in which all activities are divided over a number of sequential processes. These sequential processes are placed at various hierarchical levels, in each of which one or more independent abstractions have been implemented. The hierarchical structure proved to be vital for the verification of the logical soundness of the design and the correctness of its implementation.

1,136 citations


Journal ArticleDOI
TL;DR: A new model, the “working set model,” is developed: the working set of a process, defined to be the collection of its most recently used pages, provides knowledge vital to the dynamic management of paged memories.
Abstract: Probably the most basic reason behind the absence of a general treatment of resource allocation in modern computer systems is the lack of an adequate model for program behavior. In this paper a new model, the “working set model,” is developed. The working set of pages associated with a process, defined to be the collection of its most recently used pages, provides knowledge vital to the dynamic management of paged memories. “Process” and “working set” are shown to be manifestations of the same ongoing computational activity; then “processor demand” and “memory demand” are defined; and resource allocation is formulated as the problem of balancing demands against available equipment.
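The definition above lends itself to a direct sketch. In the Python fragment below, the page reference string and the window size tau are hypothetical choices for illustration, not data from the paper:

```python
def working_set(refs, t, tau):
    """W(t, tau): the set of distinct pages referenced in the window
    of the last tau references ending at time t (a sketch of Denning's
    working set over a recorded reference string)."""
    return set(refs[max(0, t - tau):t])

# Hypothetical page reference string; t counts references from the start.
refs = [1, 2, 3, 2, 1, 4, 4, 1, 5, 2]
print(working_set(refs, t=8, tau=4))
```

The size of this set, tracked as t advances, is the process's memory demand in the paper's formulation.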

995 citations


Journal ArticleDOI
TL;DR: My considerations are that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, and that his intellectual powers are rather geared to master static relations and his powers to visualize processes evolving in time are relatively poorly developed.
Abstract: For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. More recently I discovered why the use of the go to statement has such disastrous effects, and I became convinced that the go to statement should be abolished from all "higher level" programming languages (i.e. everything except, perhaps, plain machine code). At that time I did not attach too much importance to this discovery; I now submit my considerations for publication because in very recent discussions in which the subject turned up, I have been urged to do so. My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making" of the corresponding process is delegated to the machine. My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible. Let us now consider how we can characterize the progress of a process. (You may think about this question in a very concrete manner: suppose that a process, considered as a time succession of actions, is stopped after an arbitrary action; what data do we have to fix in order that we can redo the process until the very same point?)
If the program text is a pure concatenation of, say, assignment statements (for the purpose of this discussion regarded as the descriptions of single actions) it is sufficient to point in the program text to a point between two successive action descriptions. (In the absence of go to statements I can permit myself the syntactic ambiguity in the last three words of the previous sentence: if we parse …

911 citations


Journal ArticleDOI
TL;DR: A method for locating specific character strings embedded in character text is described and an implementation of this method in the form of a compiler is discussed.
Abstract: A method for locating specific character strings embedded in character text is described and an implementation of this method in the form of a compiler is discussed. The compiler accepts a regular expression as source language and produces an IBM 7094 program as object language. The object program then accepts the text to be searched as input and produces a signal every time an embedded string in the text matches the given regular expression. Examples, problems, and solutions are also presented.
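Thompson's compiler emitted IBM 7094 machine code; as a modern sketch of the underlying technique only, the fragment below simulates a hand-built NFA for the pattern (a|b)*abb over the input text, producing a signal (here, an end position) every time an embedded string matches:

```python
# NFA for (a|b)*abb as {state: {symbol: set(next_states)}}; hand-built
# here for illustration (Thompson's compiler generated equivalent
# IBM 7094 code directly from the regular expression).
NFA = {
    0: {'a': {0, 1}, 'b': {0}},
    1: {'b': {2}},
    2: {'b': {3}},
}
START, ACCEPT = {0}, 3

def scan(text):
    """Yield end positions of embedded substrings matching (a|b)*abb."""
    states = set(START)
    for i, ch in enumerate(text):
        nxt = set()
        for s in states:
            nxt |= NFA.get(s, {}).get(ch, set())
        nxt |= START          # a match may begin at any position
        states = nxt
        if ACCEPT in states:
            yield i + 1       # signal: an embedded match ends here

print(list(scan("xabbaabb")))
```

Simulating the set of live states, rather than backtracking over one state at a time, is what keeps the search linear in the length of the text.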

897 citations



Journal ArticleDOI
TL;DR: These are the first known studies measuring the performance of programers under controlled conditions for standard tasks, and statistically significant results indicated substantially faster debugging under online conditions in both studies.
Abstract: Two exploratory experiments compared debugging performance of programers working under conditions of online and offline access to a computer. These are the first known studies measuring the performance of programers under controlled conditions for standard tasks. Statistically significant results indicated substantially faster debugging under online conditions in both studies. The results were ambiguous for central processor time--one study showed less computer time for debugging, and the other showed more time in the online mode. Perhaps the most important practical finding, overshadowing online/offline differences, involves the large and striking individual differences in programer performance. Attempts were made to relate observed individual differences to objective measures of programer experience and proficiency through factorial techniques. In line with the exploratory objectives of these studies, methodological problems encountered in designing and conducting these types of experiments are described, limitations of the findings are pointed out, hypotheses are presented to account for results, and suggestions are made for further research.

325 citations


Journal ArticleDOI
Robert Morris1
TL;DR: The article gives a tutorial presentation of the known methods used by writers of assemblers and compilers to reduce search times in symbol tables.
Abstract: From time to time one encounters an article that sums up a new field of research, highlights its principal results, and makes them more evident. Morris's article is of this type. It gives a tutorial presentation of the known methods used by writers of assemblers and compilers to reduce search times in symbol tables.

218 citations


Journal ArticleDOI
TL;DR: An auction method is described for allocating computer time that allows the price of computer time to fluctuate with the demand and the relative priority of users to be controlled so that more important projects get better access.
Abstract: An auction method is described for allocating computer time that allows the price of computer time to fluctuate with the demand and the relative priority of users to be controlled so that more important projects get better access. This auction is free of the periodic fluctuation in computer use often associated with monthly schemes.

206 citations


Journal ArticleDOI
TL;DR: A critical review of recent efforts to automate the writing of translators of programming languages is presented and various approaches to automating the postsyntactic aspects of translator writing are discussed.
Abstract: A critical review of recent efforts to automate the writing of translators of programming languages is presented. The formal study of syntax and its application to translator writing are discussed in Section II. Various approaches to automating the postsyntactic (semantic) aspects of translator writing are discussed in Section III, and several related topics in Section IV.

183 citations


Journal ArticleDOI
TL;DR: The Manchester University ATLAS Operating System Part 1: The Internal Organization and Experience using a time-sharing multiprogramming system with dynamic address relocation hardware.

154 citations


Journal ArticleDOI
Glenn D. Bergland1
TL;DR: In this article, a new procedure for calculating the complex, discrete Fourier transform of real-valued time series is presented for an example where the number of points in the series is an integral power of two.
Abstract: A new procedure is presented for calculating the complex, discrete Fourier transform of real-valued time series. This procedure is described for an example where the number of points in the series is an integral power of two. This algorithm preserves the order and symmetry of the Cooley-Tukey fast Fourier transform algorithm while effecting the two-to-one reduction in computation and storage which can be achieved when the series is real. Also discussed are hardware and software implementations of the algorithm which perform only (N/4) log2 (N/2) complex multiply and add operations, and which require only N real storage locations in analyzing each N-point record.
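The two-to-one saving for real input can be illustrated with the standard pack-and-unpack trick; this is a sketch of the general technique, not Bergland's exact procedure. An N-point DFT of a real series is obtained from a single N/2-point complex FFT by packing consecutive sample pairs into complex numbers and then separating the even- and odd-indexed spectra by symmetry:

```python
import cmath

def fft(z):
    """Radix-2 Cooley-Tukey FFT; len(z) must be a power of two."""
    n = len(z)
    if n == 1:
        return z[:]
    even, odd = fft(z[0::2]), fft(z[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def real_fft(x):
    """DFT of a real series of length N (a power of two) using one
    complex FFT of length N/2. Returns X[0..N/2]; the remaining bins
    follow from conjugate symmetry."""
    n = len(x)
    m = n // 2
    z = [complex(x[2 * k], x[2 * k + 1]) for k in range(m)]  # pack pairs
    Z = fft(z)
    X = []
    for k in range(m):
        zk, zc = Z[k], Z[-k % m].conjugate()
        fe = (zk + zc) / 2                 # spectrum of even-indexed samples
        fo = (zk - zc) / (2j)              # spectrum of odd-indexed samples
        X.append(fe + cmath.exp(-2j * cmath.pi * k / n) * fo)
    X.append(Z[0].real - Z[0].imag)        # X[N/2] = Fe[0] - Fo[0]
    return X

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
print([round(abs(v), 3) for v in real_fft(x)])
```

The storage saving the abstract mentions comes from the same observation: the conjugate-symmetric half of the spectrum need never be stored.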

Journal ArticleDOI
TL;DR: The flexibility and power needed in the channel for a computer display are considered, and it is found that successive improvements to the display processor design lie on a circular path: by making improvements one can return to the original simple design plus one new general purpose computer for each trip around.
Abstract: The flexibility and power needed in the channel for a computer display are considered. To work efficiently, such a channel must have a sufficient number of instructions that it is best understood as a small processor rather than a powerful channel. It was found that successive improvements to the display processor design lie on a circular path: by making improvements one can return to the original simple design plus one new general purpose computer for each trip around. The degree of physical separation between display and parent computer is a key factor in display processor design.

Journal ArticleDOI
TL;DR: This paper focuses on the problems of protecting both user and system information during the execution of a process, and gives special attention to this problem when shared procedures and data are permitted.
Abstract: In this paper we will define and discuss a solution to some of the problems concerned with protection and security in an information processing utility. This paper is not intended to be an exhaustive study of all aspects of protection in such a system. Instead, we concentrate our attention on the problems of protecting both user and system information (procedures and data) during the execution of a process. We will give special attention to this problem when shared procedures and data are permitted.

Journal ArticleDOI
TL;DR: A collection of basic ideas is presented, which have been evolved by various workers over the past four years to provide a suitable framework for the design and analysis of multiprocessing systems.
Abstract: A collection of basic ideas is presented, which have been evolved by various workers over the past four years to provide a suitable framework for the design and analysis of multiprocessing systems. The notions of process and state vector are discussed, and the nature of basic operations on processes is considered. Some of the connections between processes and protection are analyzed. A very general approach to priority-oriented scheduling is described, and its relationship to conventional interrupt systems is explained. Some aspects of time-oriented scheduling are considered. The implementation of the scheduling mechanism is analyzed in detail and the feasibility of embodying it in hardware is established. Finally several methods for interlocking execution of independent processes are presented and compared.

Journal ArticleDOI
TL;DR: In this paper, the mathematical model and computational techniques of the authors' digital holographic process are discussed, and applications of computer holography are suggested, and a new approach based on point apertures for the image is discussed.
Abstract: Optical and digital holography are reviewed. The mathematical model and computational techniques of the authors' digital holographic process are discussed, and applications of computer holography are suggested. Computer holograms have been made of three-dimensional objects which give faithful reconstructions, even in white light. A new approach based on point apertures for the image is discussed. Photographs of the images reconstructed from digital holograms are presented.

Journal ArticleDOI
Brian Randell1, C. J. Kuehner1
TL;DR: A method of characterizing dynamic storage allocation systems--according to the functional capabilities provided and the underlying techniques used--is presented.
Abstract: In many recent computer system designs, hardware facilities have been provided for easing the problems of storage allocation. A method of characterizing dynamic storage allocation systems--according to the functional capabilities provided and the underlying techniques used--is presented. The basic purpose of the paper is to provide a useful perspective from which the utility of various hardware facilities may be assessed. A brief survey of storage allocation facilities in several representative computer systems is included as an appendix.

Journal ArticleDOI
TL;DR: The data collected from the interpretive execution of a number of paged programs are used to describe the frequency of page faults and are used also for the evaluation of page replacement algorithms and for assessing the effects on performance of changes in the amount of storage allocated to executing programs.
Abstract: Results are summarized from an empirical study directed at the measurement of program operating behavior in those multiprogramming systems in which programs are organized into fixed length pages. The data collected from the interpretive execution of a number of paged programs are used to describe the frequency of page faults, i.e. the frequency of those instants at which an executing program requires a page of data or instructions not in main (core) memory. These data are used also for the evaluation of page replacement algorithms and for assessing the effects on performance of changes in the amount of storage allocated to executing programs.
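The kind of evaluation described can be sketched as a small simulation. The reference string and the choice of LRU replacement below are illustrative only, not the study's data or the algorithms it evaluated:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement with a fixed number
    of page frames, replayed over a page reference string."""
    memory = []                      # most recently used page is last
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # hit: refresh recency
        else:
            faults += 1              # fault: page not in main memory
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)
    return faults

# Hypothetical reference string; vary the allocation to see the effect.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for frames in (3, 4):
    print(frames, lru_faults(refs, frames))
```

Replaying one trace against several frame counts, as the loop does, is exactly the "effect of storage allocation on performance" measurement the abstract describes.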

Journal ArticleDOI
TL;DR: The aim is to provide methods for incorporating random number generators directly in FORTRAN programs, by means of a few in-line instructions; the advantages are speed, convenience, and versatility.
Abstract: Some one-line random number generators, i.e. generators requiring a single FORTRAN instruction are discussed, and some short FORTRAN programs which mix several such generators are described. The aim is to provide methods for incorporating random number generators directly in FORTRAN programs, by means of a few in-line instructions. The advantages are speed (avoiding linkage to and from a subroutine), convenience, and versatility. Anyone wishing to experiment with generators, either using congruential generators by themselves or mixing several generators to provide a composite with potentially better statistical properties than the library generators currently available, may wish to consider some of the simple FORTRAN programs discussed here.
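The mixing idea can be sketched in modern terms. In the fragment below, Python stands in for the paper's FORTRAN, and the moduli, multipliers, and seeds are illustrative choices, not the paper's constants:

```python
# Two "one-line" multiplicative congruential steps, mixed into a
# composite generator. Constants are illustrative only.
M1, A1 = 2**31 - 1, 16807
M2, A2 = 2**31 - 249, 40692

def mixed(seed1, seed2, n):
    """Return n uniform variates in [0, 1) from a composite of two
    one-line multiplicative generators."""
    out = []
    x, y = seed1, seed2
    for _ in range(n):
        x = (A1 * x) % M1                # generator 1: one in-line step
        y = (A2 * y) % M2                # generator 2: one in-line step
        out.append(((x + y) % M1) / M1)  # mix and scale to [0, 1)
    return out

u = mixed(12345, 67890, 5)
print(u)
```

Each step is the kind of single in-line statement the abstract advocates; the composite is the "mixing several such generators" idea.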

Journal ArticleDOI
TL;DR: It is here empirically shown that generators of this type can produce sequences whose autocorrelation functions up to lag 50 exhibit evidence of nonrandomness for many multiplicative constants.
Abstract: Hutchinson states that the “new” (prime modulo) multiplicative congruential pseudorandom generator, attributed to D.H. Lehmer, has passed the usual statistical tests for random number generators. It is here empirically shown that generators of this type can produce sequences whose autocorrelation functions up to lag 50 exhibit evidence of nonrandomness for many multiplicative constants. An alternative generator proposed by Tausworthe, which uses irreducible polynomials over the field of characteristic two, is shown to be free from this defect.The applicability of these two generators to the IBM 360 is then discussed. Since computer word size can affect a generator's statistical behavior, the older mixed and simple congruential generators, although extensively tested on computers having 36 or more bits per word, may not be optimum generators for the IBM 360.
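The autocorrelation test the abstract applies can be reproduced in outline. The generator constants below are illustrative, not the specific multiplicative constants examined in the paper:

```python
def lehmer(seed, a, m, n):
    """Multiplicative congruential sequence x -> a*x mod m, scaled to (0, 1)."""
    xs, x = [], seed
    for _ in range(n):
        x = (a * x) % m
        xs.append(x / m)
    return xs

def autocorr(xs, lag):
    """Sample autocorrelation of the sequence at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

# Constants chosen for illustration; sweep lag 1..50 as the paper does.
xs = lehmer(seed=1, a=65539, m=2**31, n=10000)
print([round(autocorr(xs, k), 3) for k in (1, 2, 50)])
```

For a sequence of this length, sample autocorrelations far outside roughly plus or minus 2/sqrt(n) are the "evidence of nonrandomness" the test looks for.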

Journal ArticleDOI
TL;DR: The practical application of the theory of finite-state automata to automatically generate lexical processors is dealt with in this tutorial article by the use of the AED RWORD system, developed at M.I.T. as part of the AED-1 system.
Abstract: The practical application of the theory of finite-state automata to automatically generate lexical processors is dealt with in this tutorial article by the use of the AED RWORD system, developed at M.I.T. as part of the AED-1 system. This system accepts as input descriptions of the multicharacter items or of words allowable in a language given in terms of a subset of regular expressions. The output of the system is a lexical processor which reads a string of characters and combines them into the items as defined by the regular expressions. Each output item is identified by a code number together with a pointer to a block of storage containing the characters and character count in the item. The processors produced by the system are based on finite-state machines. Each state of a "machine" corresponds to a unique condition in the lexical processing of a character string. At each state a character is read, and the machine changes to a new state. At each transition appropriate actions are taken based on the particular character read. The system has been in operation since 1966, and processors generated have compared favorably in speed to carefully hand-coded programs to accomplish the same task. Lexical processors for AED-O and MAD are among the many which have been produced. The techniques employed are independent of the nature of the items being evaluated. If the word "events" is substituted for character string, these processors may be described as generalized decision-making mechanisms based upon an ordered sequence of events. This allows the system to be used in a range of applications outside the area of lexical processing. However convenient these advantages may be, speed is the most important consideration.
In designing a system for automatic generation of a lexical processor, the goal was a processor which completely eliminated backup or rereading, which was nearly as fast as hand-coded processors, which would analyze the language and detect errors, and which would be convenient and easy to use.
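The flavor of such a processor can be suggested by a hand-built sketch: each state is a lexing condition, reading a character drives a transition, and leaving an item emits a (code number, text) pair. The item classes and code numbers below are invented for illustration and are not AED's:

```python
def lex(chars):
    """A tiny finite-state lexical processor with no backup: states are
    'start', 'ident', 'num'; each item is emitted as (code, text)."""
    state, buf, items = 'start', '', []
    CLASS = {'ident': 1, 'num': 2}           # hypothetical item codes
    for ch in chars + ' ':                   # trailing blank flushes the last item
        if state == 'start':
            if ch.isalpha():   state, buf = 'ident', ch
            elif ch.isdigit(): state, buf = 'num', ch
        elif state == 'ident' and ch.isalnum():
            buf += ch
        elif state == 'num' and ch.isdigit():
            buf += ch
        else:                                # transition out: emit the item
            items.append((CLASS[state], buf))
            state, buf = 'start', ''
            if ch.isalpha():   state, buf = 'ident', ch
            elif ch.isdigit(): state, buf = 'num', ch
    return items

print(lex("x1 42 abc"))
```

Because every character is consumed exactly once and no state ever rereads input, the sketch shares the no-backup property the design goal above calls for.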

Journal ArticleDOI
TL;DR: A new hash coding method is presented that, besides being very simple and as fast as the best known methods, allows the table size to be almost any prime number.
Abstract: Although scatter storage tables are used widely in system programming, they are subject to various drawbacks. One of these is that the size of the table cannot be arbitrary, but is restricted to powers of 2 by the hash coding method. In this note we present a new hash coding method that, besides being very simple and as fast as the best known methods, allows the table size to be almost any prime number. The scatter storage techniques currently used in assemblers, compilers, and elsewhere are excellently summarized in [1]. Items are entered into a table using an index which is computed from the item by means of some hash coding method. As long as no two inserted items have the same hash code, searching and insertion are each performed in a single step, regardless of the size of the table. When two items have the same hash code, a collision is said to exist. In this case the second item must be put out of place in the table. This takes extra time; but if the hash codes are randomly distributed, the average number of steps is less than 2 even for a table which is 75 percent full. The usual hash coding methods involve the calculation of a k-bit field which is assumed to be a random integer between 0 and 2^k - 1. Thus the table size is restricted to a power of 2.
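A minimal sketch of scatter storage with a prime table size, using the division method and linear probing; this shows the general idea only, and does not reproduce the note's actual hash coding method:

```python
def make_table(size):
    """A scatter storage table; size may be (almost) any prime, e.g. 97."""
    return [None] * size

def insert(table, key):
    """Insert an integer key; return the slot used. The hash code is the
    remainder mod the (prime) table size; collisions probe linearly."""
    m = len(table)
    i = key % m                      # division-method hash code
    while table[i] is not None and table[i] != key:
        i = (i + 1) % m              # collision: the item goes out of place
    table[i] = key
    return i

table = make_table(97)
for k in (4, 101, 198):              # all congruent to 4 mod 97: forced collisions
    print(insert(table, k))
```

Because the hash is a remainder rather than a k-bit field, nothing ties the table size to a power of 2, which is the freedom the note is after.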

Journal ArticleDOI
TL;DR: The background and motivation for the adoption by the ACM Council on November 11, 1966, of a set of Guidelines for Professional Conduct in Information Processing are described, and several sections of the ACM Guidelines are analyzed.
Abstract: The background and motivation for the adoption by the ACM Council on November 11, 1966, of a set of Guidelines for Professional Conduct in Information Processing are described. A brief history is given of ethical codes in other professions. Some reasons for and against adoption of ethical rules are considered, and several sections of the ACM Guidelines are analyzed. The purpose is to inform about this important aspect of our profession, as well as to stimulate thought and interest.

Journal ArticleDOI
TL;DR: This subroutine finds all the eigenvalues and eigenvectors of a real general matrix; the eigenvalues are computed by the QR double-step method and the eigenvectors by inverse iteration.
Abstract: Purpose. This subroutine finds all the eigenvalues and eigenvectors of a real general matrix. The eigenvalues are computed by the QR double-step method and the eigenvectors by inverse iteration. Method. Firstly the following preliminary modifications are carried out to improve the accuracy of the computed results. (i) The matrix is scaled by a sequence of similarity transformations so that the absolute sums of corresponding rows and columns are roughly equal. (ii) The scaled matrix is normalized so that the value of the Euclidean norm is equal to one. The main part of the process commences with the reduction of the matrix to an upper-Hessenberg form by means of similarity transformations (Householder's method). Then the QR double-step iterative process is performed on the Hessenberg matrix until all elements of the subdiagonal that converge to zero are less in modulus than 2^(-t) * ||H||_E, where t is the number of significant digits in the mantissa of a binary floating-point number. The eigenvalues are then extracted from this reduced form. Inverse iteration is performed on the upper-Hessenberg matrix until the absolute value of the largest component of the right-hand side vector is greater than the bound 2^t/(100 N), where N is the order of the matrix. Normally after this bound is achieved, one step more is performed to obtain the computed eigenvector, but at each step the residuals are computed, and if the residuals of one particular step are greater in absolute value than the residuals of the previous step, then the vector of the previous step is accepted as the computed eigenvector. Program. The subroutine EIGENP is completely self-contained (composed of five subroutines EIGENP, SCALE, HESQR, REALVE, and COMPVE) and communication to it is solely through the argument list.
The entrance to the subroutine is achieved by CALL EIGENP (N, NM, A, T, EVR, EVI, VECR, VECI, INDIC). The meaning of the parameters is described in the comments at the beginning of the subroutine EIGENP.

Journal ArticleDOI
TL;DR: A field-proven scheme for achieving reliable duplex transmission over a half-duplex communication line is presented and to demonstrate the difficulty of the problem, another similar scheme, which is only slightly unreliable, is presented.
Abstract: A field-proven scheme for achieving reliable duplex transmission over a half-duplex communication line is presented, and to demonstrate the difficulty of the problem, another similar scheme, which is only slightly unreliable, is also presented. A flowchart for the reliable scheme and some interesting examples are given.

Journal ArticleDOI
B. L. Fox1, D. M. Landi1
TL;DR: An algorithm for identifying the ergodic subchains and transient states of a stochastic matrix is presented; applications in Markov renewal programming and in the construction of variable length codes are reviewed.
Abstract: An algorithm for identifying the ergodic subchains and transient states of a stochastic matrix is presented. Applications in Markov renewal programming and in the construction of variable length codes are reviewed, and an updating procedure for dealing with certain sequences of stochastic matrices is discussed. Computation times are investigated experimentally and compared with those of another recently proposed method.
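The classification itself can be sketched via reachability on the transition graph; this is a simple illustrative method, not the paper's algorithm. A state is transient if it can reach some state that cannot reach it back; the ergodic subchains are the closed classes of mutually reachable recurrent states:

```python
def reach(P):
    """Reachability sets of the transition graph of stochastic matrix P."""
    n = len(P)
    R = [{i} for i in range(n)]
    for i in range(n):
        stack = [i]
        while stack:
            j = stack.pop()
            for k in range(n):
                if P[j][k] > 0 and k not in R[i]:
                    R[i].add(k)
                    stack.append(k)
    return R

def classify(P):
    """Return (ergodic subchains, transient states) of P."""
    R = reach(P)
    recurrent = [i for i in range(len(P)) if all(i in R[j] for j in R[i])]
    transient = [i for i in range(len(P)) if i not in recurrent]
    chains = []
    for i in recurrent:
        cls = sorted(j for j in R[i] if i in R[j])
        if cls not in chains:
            chains.append(cls)
    return chains, transient

# Hypothetical 4-state chain: state 0 is transient; {1} and {2, 3} are ergodic.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.2, 0.8],
     [0.0, 0.0, 0.6, 0.4]]
print(classify(P))
```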

Journal ArticleDOI
James R. Bell1
TL;DR: This algorithm is one of a class of normal deviate generators, which the author calls "chi-squared projections"; it uses von Neumann rejection to generate sin(θ) and cos(θ) without generating θ explicitly, which significantly enhances speed by eliminating the calls to the sin and cos functions.
Abstract: procedure norm (D1, D2); real D1, D2; comment This procedure generates pairs of independent normal random deviates with mean zero and standard deviation one. The output parameters D1 and D2 are normally distributed on the interval (-∞, +∞). The method is exact even in the tails. This algorithm is one of a class of normal deviate generators, which we shall call "chi-squared projections" [1, 2]. An algorithm of this class has two stages. The first stage selects a random number L from a χ₂-distribution. The second stage calculates the sine and cosine of a random angle θ. The generated normal deviates are given by L sin(θ) and L cos(θ). The two stages can be altered independently. In particular, as better χ₂² random generators are developed, they can replace the first stage. (The negative exponential distribution is the same as that of χ₂².) The fastest exact method previously published is Algorithm 267 [4], which includes a comparison with earlier algorithms. It is a straight chi-squared projection. Our algorithm differs from it by using von Neumann rejection to generate sin(θ) and cos(θ) without generating θ explicitly [3]. This significantly enhances speed by eliminating the calls to the sin and cos functions. The author wishes to express his gratitude to Professor George Forsythe for his help in developing the algorithm. REFERENCES 1. Box, G. E. P., and Muller, M. E. A note on the generation of normal deviates. Ann. Math. Stat. 29 (1958), 610. 2. Muller, M. E. A comparison of methods for generating normal deviates on digital computers. J. ACM 6 (July 1959), 376-383. 3. von Neumann, J. Various techniques used in connection with random digits. In Nat. Bur. of Standards Appl. Math. Ser. 12, 1951, p. 36. 4. Pike, M. C. Algorithm 267, Random Normal Deviate. Comm. ACM 8 (Oct. 1965), 606.; comment R is any parameterless procedure returning a random number uniformly distributed on the interval from zero to one.
A suitable procedure is given by Algorithm 266, Pseudo-Random Numbers [Comm. ACM, 8 (Oct. 1965), 605] if one chooses a = 0, b = 1, and initializes y to some large odd number, such as y = 13421773.; begin real X, Y, XX, YY, S, L;
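The rejection idea can be sketched as follows. This is the familiar polar form of a chi-squared projection, given here as an illustration of the technique rather than a transcription of Bell's procedure norm:

```python
import math, random

def norm_pair(rand=random.random):
    """Pair of independent N(0, 1) deviates: a chi-squared radius is
    projected onto a random direction obtained by von Neumann rejection,
    so no sin or cos call is ever made."""
    while True:
        x = 2.0 * rand() - 1.0
        y = 2.0 * rand() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:               # accept: (x, y) is inside the unit circle
            break
    L = math.sqrt(-2.0 * math.log(s))   # radius from the chi-squared stage
    r = math.sqrt(s)
    return L * x / r, L * y / r         # L*cos(theta), L*sin(theta)

random.seed(1)
sample = [v for _ in range(5000) for v in norm_pair()]
mean = sum(sample) / len(sample)
print(round(mean, 2))
```

The two stages are visible and independently replaceable, as the comment above describes: the logarithm line is the chi-squared stage, and the rejection loop supplies the random sine and cosine.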

Journal ArticleDOI
TL;DR: The question is considered of how many significant digits are needed in the intermediate base to allow such in-and-out conversions to return the original number (when possible), or at least to cause a difference of no more than a unit in the least significant digit.
Abstract: By an in-and-out conversion we mean that a floating-point number in one base is converted into a floating-point number in another base and then converted back to a floating-point number in the original base. For all combinations of rounding and truncation conversions the question is considered of how many significant digits are needed in the intermediate base to allow such in-and-out conversions to return the original number (when possible), or at least to cause a difference of no more than a unit in the least significant digit.
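A modern instance of the same question, assuming IEEE 754 binary64 arithmetic (a format that postdates this paper): 17 significant decimal digits always suffice for a binary-decimal-binary round trip, while 15 digits can lose information.

```python
# A binary double whose shortest exact decimal is long:
x = 0.1 + 0.2                       # not exactly 0.3 in binary

via15 = float(f"{x:.15g}")          # round trip through 15 significant digits
via17 = float(f"{x:.17g}")          # round trip through 17 significant digits
print(via15 == x, via17 == x)
```

With 15 digits the intermediate decimal is "0.3", which converts back to a different double; with 17 digits the original bit pattern is recovered.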

Journal ArticleDOI
H. E. Kulsrud1
TL;DR: A system has been designed to produce a general purpose graphic language that is useful on a number of graphic devices quickly and cheaply and a model graphic language which has been developed with the system is presented.
Abstract: Interactive use of computers with graphic terminals will permit many new problems to be solved using machines. In order to handle a variety of applications, it is expedient to develop a general purpose graphic language that is useful on a number of graphic devices. A system has been designed to produce such a language quickly and cheaply. A model graphic language which has been developed with the system is presented.

Journal ArticleDOI
TL;DR: An algorithm for analyzing any context-free phrase structure grammar and for generating a program which can then parse any sentence in the language (or indicate that the given sentence is invalid) is described.
Abstract: An algorithm for analyzing any context-free phrase structure grammar and for generating a program which can then parse any sentence in the language (or indicate that the given sentence is invalid) is described. The parser is of the “top-to-bottom” type and is recursive. A number of heuristic procedures whose purpose is to shorten the basic algorithm by quickly ascertaining that certain substrings of the input sentence cannot correspond to the target nonterminal symbols are included. Both the generating algorithm and the parser have been implemented in RCA SNOBOL and have been tested successfully on a number of artificial grammars and on a subset of ALGOL. A number of the routines for extracting data about a grammar, such as minimum lengths of N-derivable strings and possible prefixes, are given and may be of interest apart from their application in this particular context.
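A toy illustration of a recursive "top-to-bottom" parser of the general kind described; the grammar and its representation here are invented for illustration, and none of the paper's generator machinery or heuristics is reproduced:

```python
# Toy context-free grammar: S -> ( S ) S | empty  (balanced parentheses).
GRAMMAR = {
    'S': [['(', 'S', ')', 'S'], []],
}

def parse(sym, s, i):
    """Try to derive a prefix of s[i:] from sym, top-down and
    recursively; return the set of possible end positions."""
    if sym not in GRAMMAR:                      # terminal symbol
        return {i + 1} if i < len(s) and s[i] == sym else set()
    ends = set()
    for production in GRAMMAR[sym]:             # try alternatives in order
        positions = {i}
        for part in production:
            positions = {e for p in positions for e in parse(part, s, p)}
        ends |= positions
    return ends

def valid(s):
    """A sentence is valid iff S derives exactly the whole string."""
    return len(s) in parse('S', s, 0)

print(valid("(()())"), valid("(()"))
```

Tracking a set of end positions lets the parser explore every alternative of every nonterminal, which is why the paper's heuristics for pruning impossible substrings matter in practice.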

Journal ArticleDOI
TL;DR: An implementation of Stiefel's exchange algorithm for determining a Chebyshev solution to an overdetermined system of linear equations is presented, that uses Gaussian LU decomposition with row interchanges.
Abstract: An implementation of Stiefel's exchange algorithm for determining a Chebyshev solution to an overdetermined system of linear equations is presented, that uses Gaussian LU decomposition with row interchanges. The implementation is computationally more stable than those usually given in the literature. A generalization of Stiefel's algorithm is developed which permits the occasional exchange of two equations simultaneously.