
Showing papers in "Communications of The ACM in 1974"


Journal ArticleDOI
TL;DR: In this paper, the synchronization task between loosely coupled cyclic sequential processes is viewed as keeping the relation "the system is in a legitimate state" invariant, and each individual process step that could possibly cause violation of that relation is preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed.
Abstract: The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.

2,118 citations
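As a concrete illustration of the pattern the abstract describes — each step that could violate the invariant is preceded by a test made under mutually exclusive access to the recorded system state — here is a minimal Python sketch. The resource counter and its invariant are invented for the example and are not taken from the paper.

```python
import threading

state_lock = threading.Lock()
state_changed = threading.Condition(state_lock)
free_units = 3                       # "the current system state" (example only)

def acquire_unit():
    # The step that could violate the invariant (free_units >= 0) is preceded
    # by a test made under mutually exclusive access to the recorded state;
    # a process that may not yet proceed is delayed.
    global free_units
    with state_changed:
        while free_units == 0:
            state_changed.wait()
        free_units -= 1              # this step keeps the relation invariant

def release_unit():
    global free_units
    with state_changed:
        free_units += 1
        state_changed.notify()       # a delayed process may now proceed
```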


Journal ArticleDOI
TL;DR: In this paper, the author develops Brinch-Hansen's concept of a monitor as a method of structuring an operating system, describes a possible method of implementation in terms of semaphores, and gives a suitable proof rule.
Abstract: This paper develops Brinch-Hansen's concept of a monitor as a method of structuring an operating system. It introduces a form of synchronization, describes a possible method of implementation in terms of semaphores and gives a suitable proof rule. Illustrative examples include a single resource scheduler, a bounded buffer, an alarm clock, a buffer pool, a disk head optimizer, and a version of the problem of readers and writers.

1,705 citations
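Hoare's construction is given in terms of semaphores; the sketch below instead uses Python's threading primitives to convey the monitor discipline (one lock guarding the state, condition variables for "wait until non-full / non-empty") with the bounded-buffer example mentioned in the abstract. It is an illustration of the idea, not the paper's implementation.

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: a single lock guards the state and
    condition variables express the scheduling conditions."""
    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = []
        self._capacity = capacity

    def append(self, item):
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()
            self._items.append(item)
            self._not_empty.notify()

    def remove(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item
```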


Journal ArticleDOI
TL;DR: The nature and implementation of the UNIX file system and of the user command interface are discussed; notable features include the ability to initiate asynchronous processes and over 100 subsystems, including a dozen languages.
Abstract: UNIX is a general-purpose, multi-user, interactive operating system for the Digital Equipment Corporation PDP-11/40 and 11/45 computers. It offers a number of features seldom found even in larger operating systems, including: (1) a hierarchical file system incorporating demountable volumes; (2) compatible file, device, and inter-process I/O; (3) the ability to initiate asynchronous processes; (4) system command language selectable on a per-user basis; and (5) over 100 subsystems including a dozen languages. This paper discusses the nature and implementation of the file system and of the user command interface.

1,140 citations


Journal ArticleDOI
TL;DR: A model of a third-generation-like computer system is developed and formal techniques are used to derive precise sufficient conditions to test whether such an architecture can support virtual machines.
Abstract: Virtual machine systems have been implemented on a limited number of third generation computer systems, e.g. CP-67 on the IBM 360/67. From previous empirical studies, it is known that certain third generation computer systems, e.g. the DEC PDP-10, cannot support a virtual machine system. In this paper, a model of a third-generation-like computer system is developed. Formal techniques are used to derive precise sufficient conditions to test whether such an architecture can support virtual machines.

1,040 citations


Journal ArticleDOI
Leslie Lamport1
TL;DR: A simple solution to the mutual exclusion problem is presented which allows the system to continue to operate despite the failure of any individual component.
Abstract: A simple solution to the mutual exclusion problem is presented which allows the system to continue to operate despite the failure of any individual component.

737 citations
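This paper's solution is widely known as the bakery algorithm. The following is a schematic rendering in Python syntax, intended only to show the structure of ticket choosing and deference; global lists with busy-waiting are not a faithful shared-memory implementation, and the process count is arbitrary.

```python
# Schematic bakery-style mutual exclusion for N processes (illustrative only).
N = 4
choosing = [False] * N
number = [0] * N

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)          # take a ticket larger than any seen
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:               # wait until j has finished choosing
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                         # defer to lower (ticket, id) pairs

def unlock(i):
    number[i] = 0                        # ticket 0 means "not competing"
```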


Journal ArticleDOI
Leslie Lamport1
TL;DR: Methods are developed for the parallel execution of different iterations of a DO loop and practical application to the design of compilers for such computers is discussed.
Abstract: Methods are developed for the parallel execution of different iterations of a DO loop. Both asynchronous multiprocessor computers and array computers are considered. Practical application to the design of compilers for such computers is discussed.

678 citations
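As a toy illustration of executing different iterations of a DO loop in parallel — assuming the body has no cross-iteration dependences, which is the situation the paper's analysis is designed to establish — one might write the following. The loop body is a placeholder, and threads are used only for brevity (a process pool avoids the interpreter lock for CPU-bound bodies).

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Stand-in for the body of DO i = 1, n, assumed free of
    # dependences between iterations.
    return i * i

n = 100
with ThreadPoolExecutor() as pool:
    results = list(pool.map(body, range(1, n + 1)))
```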


Journal ArticleDOI
TL;DR: The problem of scheduling two or more processors to minimize the execution time of a program consisting of a set of partially ordered tasks is studied, and a dynamic programming solution is presented for the case in which execution times are random variables.
Abstract: The problem of scheduling two or more processors to minimize the execution time of a program which consists of a set of partially ordered tasks is studied. Cases where task execution times are deterministic and others in which execution times are random variables are analyzed. It is shown that different algorithms suggested in the literature vary significantly in execution time and that the B-schedule of Coffman and Graham is near-optimal. A dynamic programming solution for the case in which execution times are random variables is presented.

647 citations
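The B-schedule itself is not reproduced here; the sketch below is a generic greedy list scheduler for a partially ordered task set on two processors, shown only to make the problem setting concrete. Task names, durations, and the tie-breaking rule are all invented for the example.

```python
def list_schedule(durations, preds, num_procs=2):
    """Greedy list scheduling: repeatedly start a ready task (all
    predecessors finished) on the earliest-free processor."""
    finish = {}
    proc_free = [0.0] * num_procs
    remaining = set(durations)
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in preds.get(t, ()))]
        task = min(ready)                                  # arbitrary tie-break
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[q] for q in preds.get(task, ())])
        finish[task] = start + durations[task]
        proc_free[p] = finish[task]
        remaining.remove(task)
    return finish

schedule = list_schedule({"a": 2.0, "b": 3.0, "c": 1.0}, preds={"c": ("a", "b")})
```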


Journal ArticleDOI
TL;DR: To avoid restarting a job from the beginning after a random failure, it is standard practice to periodically save sufficient information to allow the job to be restarted from the most recent such point; saving this information is called checkpointing.
Abstract: To avoid having to restart a job from the beginning in case of random failure, it is standard practice to save periodically sufficient information to enable the job to be restarted at the previous point at which information was saved. Such points are referred to as checkpoints, and the saving of such information at these points is called checkpointing [1].

643 citations
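A minimal sketch of the practice described, assuming the job's entire restartable state can be serialized; the file name, checkpoint interval, and loop body are arbitrary placeholders.

```python
import os
import pickle

CHECKPOINT = "job.ckpt"                  # hypothetical checkpoint file

def run_job(steps):
    # Resume from the last checkpoint if one exists, otherwise start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            start, total = pickle.load(f)
    else:
        start, total = 0, 0

    for i in range(start, steps):
        total += i                       # stand-in for real work
        if i % 1000 == 0:                # checkpoint interval (arbitrary)
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((i + 1, total), f)
    return total
```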


Journal ArticleDOI
TL;DR: A new family of clipping algorithms is described, able to clip polygons against irregular convex plane-faced volumes in three dimensions, removing the parts of the polygon which lie outside the volume.
Abstract: A new family of clipping algorithms is described. These algorithms are able to clip polygons against irregular convex plane-faced volumes in three dimensions, removing the parts of the polygon which lie outside the volume. In two dimensions the algorithms permit clipping against irregular convex windows. Polygons to be clipped are represented as an ordered sequence of vertices without repetition of first and last, in marked contrast to representation as a collection of edges as was heretofore the common procedure. Output polygons have an identical format, with new vertices introduced in sequence to describe any newly-cut edge or edges. The algorithms easily handle the particularly difficult problem of detecting that a new vertex may be required at a corner of the clipping window. The algorithms described achieve considerable simplicity by clipping separately against each clipping plane or window boundary. Code capable of clipping the polygon against a single boundary is reentered to clip against subsequent boundaries. Each such reentrant stage of clipping need store only two vertex values and may begin its processing as soon as the first output vertex from the preceding stage is ready. Because the same code is reentered for clipping against subsequent boundaries, clipping against very complex window shapes is practical. For perspective applications in three dimensions, a six-plane truncated pyramid is chosen as the clipping volume. The two additional planes parallel to the projection screen serve to limit the range of depth preserved through the projection. A perspective projection method which provides for arbitrary view angles and depth of field in spite of simple fixed clipping planes is described. This method is ideal for subsequent hidden-surface computations.

566 citations
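The reentrant structure described above — one piece of code that clips against a single boundary and is re-entered for each subsequent boundary — can be sketched as follows. The half-plane test and intersection routine (for x >= 0 in 2-D) are illustrative, not the paper's formulation.

```python
def clip_against_boundary(polygon, inside, intersect):
    """Clip a polygon (list of vertices, first vertex not repeated at the end)
    against one boundary. `inside(v)` tests the half-plane; `intersect(a, b)`
    returns the crossing point of edge a-b with the boundary. Re-enter this
    routine once per plane to clip against a convex window or volume."""
    output = []
    for i, current in enumerate(polygon):
        previous = polygon[i - 1]            # wraps around to the last vertex
        if inside(current):
            if not inside(previous):
                output.append(intersect(previous, current))
            output.append(current)
        elif inside(previous):
            output.append(intersect(previous, current))
    return output

# Example: clip against the half-plane x >= 0 in two dimensions.
def x_nonneg(v):
    return v[0] >= 0.0

def cross_x0(a, b):
    t = a[0] / (a[0] - b[0])
    return (0.0, a[1] + t * (b[1] - a[1]))

clipped = clip_against_boundary([(-1, 0), (2, 0), (2, 3)], x_nonneg, cross_x0)
```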


Journal ArticleDOI
TL;DR: It is shown that the most general mean-finishing-time problem for independent tasks is polynomial complete, and hence unlikely to admit of a non-enumerative solution.
Abstract: Sequencing to minimize mean finishing time (or mean time in system) is not only desirable to the user, but it also tends to minimize at each point in time the storage required to hold incomplete tasks. In this paper a deterministic model of independent tasks is introduced and new results are derived which extend and generalize the algorithms known for minimizing mean finishing time. In addition to presenting and analyzing new algorithms it is shown that the most general mean-finishing-time problem for independent tasks is polynomial complete, and hence unlikely to admit of a non-enumerative solution.

539 citations
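For the single-processor special case, the classical rule is to sequence independent tasks in order of nondecreasing processing time; the sketch below shows only that rule, as background for the multiprocessor generalizations analyzed in the paper.

```python
def spt_sequence(durations):
    """Shortest-processing-time-first on a single processor: sequencing
    independent tasks by nondecreasing duration minimizes mean finishing time."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    t = 0.0
    finish_times = []
    for i in order:
        t += durations[i]
        finish_times.append(t)
    return order, sum(finish_times) / len(finish_times)

order, mean_finish = spt_sequence([4.0, 1.0, 3.0])   # order = [1, 2, 0]
```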


Journal ArticleDOI
TL;DR: This paper describes the design philosophy of HYDRA—the kernel of an operating system for C.mmp, the Carnegie-Mellon Multi-Mini-Processor, through the introduction of a generalized notion of “resource,” both physical and virtual, called an “object.”
Abstract: This paper describes the design philosophy of HYDRA—the kernel of an operating system for C.mmp, the Carnegie-Mellon Multi-Mini-Processor. This philosophy is realized through the introduction of a generalized notion of “resource,” both physical and virtual, called an “object.” Mechanisms are presented for dealing with objects, including the creation of new types, specification of new operations applicable to a given type, sharing, and protection of any reference to a given object against improper application of any of the operations defined with respect to that type of object. The mechanisms provide a coherent basis for extension of the system in two directions: the introduction of new facilities, and the creation of highly secure systems.

Journal ArticleDOI
TL;DR: Five design principles help provide insight into the tradeoffs among different possible designs in the Multics system and several known weaknesses in the current protection mechanism design are discussed.
Abstract: The design of mechanisms to control the sharing of information in the Multics system is described. Five design principles help provide insight into the tradeoffs among different possible designs. The key mechanisms described include access control lists, hierarchical control of access specifications, identification and authentication of users, and primary memory protection. The paper ends with a discussion of several known weaknesses in the current protection mechanism design.

Journal ArticleDOI
TL;DR: A computer using capability-based addressing may be substantially superior to present systems on the basis of protection, simplicity of programming conventions, and efficient implementation.
Abstract: Various addressing schemes making use of segment tables are examined. The inadequacies of these schemes when dealing with shared addresses are explained. These inadequacies are traced to the lack of an efficient absolute address for objects in these systems. The direct use of a capability as an address is shown to overcome these difficulties because it provides the needed absolute address. Implementation of capability-based addressing is discussed. It is predicted that the use of tags to identify capabilities will dominate. A hardware address translation scheme which never requires the modification of the representation of capabilities is suggested. The scheme uses a main memory hash table for obtaining a segment's location in main memory given its unique code. The hash table is avoided for recently accessed segments by means of a set of associative registers. A computer using capability-based addressing may be substantially superior to present systems on the basis of protection, simplicity of programming conventions, and efficient implementation.
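A toy model of the translation path suggested in the abstract — a hash table mapping a segment's unique code to its main-memory location, fronted by a small set of "associative registers" for recently accessed segments — might look like the following; all names and the cache-eviction policy are invented for the illustration.

```python
class CapabilityTranslator:
    """Toy capability-based address translation."""
    def __init__(self, cache_size=8):
        self.segment_table = {}          # unique code -> base address (hash table)
        self.cache = {}                  # "associative registers" for recent lookups
        self.cache_size = cache_size

    def load_segment(self, unique_code, base_address):
        self.segment_table[unique_code] = base_address

    def translate(self, capability, offset):
        unique_code = capability         # the capability itself is the absolute address
        base = self.cache.get(unique_code)
        if base is None:
            base = self.segment_table[unique_code]        # hash-table lookup
            if len(self.cache) >= self.cache_size:
                self.cache.pop(next(iter(self.cache)))    # crude eviction
            self.cache[unique_code] = base
        return base + offset
```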

Journal ArticleDOI
TL;DR: The algorithm is a modification of the simplex method of linear programming applied to the primal formulation of the l1 problem, and computational experience indicates that it is the most efficient yet devised for solving the l1 problem.
Abstract: The algorithm calculates an l1 solution to an overdetermined system of m linear equations in n unknowns, i.e. it determines a vector x = {x_j} which minimizes the sum of the absolute values of the residuals, e(x) = Σ_{i=1..m} | b_i − Σ_{j=1..n} a_ij x_j |. A typical application of the algorithm is solving the linear l1 data-fitting problem. Suppose that data consisting of m points with coordinates (t_i, y_i) is to be approximated in the l1 norm by a linear approximating function α_1 φ_1(t) + α_2 φ_2(t) + ··· + α_n φ_n(t). This is equivalent to finding an l1 solution to the system of linear equations Σ_{j=1..n} φ_j(t_i) α_j = y_i for i = 1, 2, ..., m. If the data contain some wild points (i.e. values of the dependent variable that are very inaccurate compared to the overall accuracy of the data), it is advisable to calculate an l1 approximation rather than an l2 (least-squares) approximation or an l∞ approximation. The algorithm is a modification of the simplex method of linear programming applied to the primal formulation of the l1 problem. A feature of the routine is its ability to pass through several simplex vertices at each iteration. The algorithm does not require that the matrix {a_ij} satisfy the Haar condition, nor does it require that it be of full rank. Complete details of the method may be found in [1]. Computational experience with this and other algorithms indicates that it is the most efficient yet devised for solving the l1 problem. The parameters M and N represent the number of equations and the number of unknowns respectively. M2 and N2 should be set to M + 2 and N + 2 respectively. The simplex iterations are carried out in the two-dimensional array A of size (M2, N2). Initially the coefficients of the matrix {a_ij} should be stored in the first M rows and first N columns of A, and the …
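The Barrodale–Roberts routine itself is a specialized simplex code; for readers who just want an l1 fit for comparison, the same problem can be posed as an ordinary linear program and handed to a generic solver. This is a sketch of that equivalent formulation (assuming SciPy's linprog is available), not the published algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Least-absolute-deviations fit of A x ≈ b via a generic LP solver:
    minimize sum(t) subject to -t <= A x - b <= t, t >= 0."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```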

Journal ArticleDOI
TL;DR: The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce.
Abstract: The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce. The defining equations are presented here, together with an efficient method for determining the necessary parameters and computing the resultant spline. The standard scalar-valued curve fitting problem is discussed, as well as the fitting of open and closed curves in the plane. The use of these curves and the importance of the tension in the fitting of contour lines are mentioned as application.

Journal ArticleDOI
TL;DR: A password scheme is presented which does not require secrecy in the computer and is based on using a function H which the would-be intruder is unable to invert.
Abstract: In many computer operating systems a user authenticates himself by entering a secret password known solely to himself and the system. The system compares this password with one recorded in a Password Table which is available to only the authentication program. The integrity of the system depends on keeping the table secret. In this paper a password scheme is presented which does not require secrecy in the computer. All aspects of the system, including all relevant code and data bases, may be known by anyone attempting to intrude.The scheme is based on using a function H which the would-be intruder is unable to invert. This function is applied to the user's password and the result compared to a table entry, a match being interpreted as authentication of the user. The intruder may know all about H and have access to the table, but he can penetrate the system only if he can invert H to determine an input that produces a given output.This paper discusses issues surrounding selection of a suitable H. Two different plausible arguments are given that penetration would be exceedingly difficult, and it is then argued that more rigorous results are unlikely. Finally, some human engineering problems relating to the scheme are discussed.
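A minimal modern sketch of the scheme: only H(password) is stored, and authentication re-applies H and compares. SHA-256 stands in for the hard-to-invert function H purely for illustration (the paper of course predates it, and a salted, deliberately slow hash would be preferred today); the table contents are made up.

```python
import hashlib
import hmac

def H(password: str) -> str:
    # Stand-in for the hard-to-invert function H discussed in the paper.
    return hashlib.sha256(password.encode()).hexdigest()

password_table = {"alice": H("correct horse battery staple")}   # may be public

def authenticate(user: str, password: str) -> bool:
    stored = password_table.get(user)
    return stored is not None and hmac.compare_digest(stored, H(password))
```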

Journal ArticleDOI
TL;DR: The proposed method is an extension of the method of univariate interpolation developed earlier by the author and is likewise based on local procedures on avoiding excessive undulation between given grid points.
Abstract: A method is designed for interpolating values given at points of a rectangular grid in a plane by a smooth bivariate function z = z(x, y). The interpolating function is a bicubic polynomial in each cell of the rectangular grid. Emphasis is on avoiding excessive undulation between given grid points. The proposed method is an extension of the method of univariate interpolation developed earlier by the author and is likewise based on local procedures.

Journal ArticleDOI
TL;DR: If computer programming is to become an important part of computer research and development, a transition of programming from an art to a disciplined science must be effected.
Abstract: When Communications of the ACM began publication in 1959, the members of ACM's Editorial Board made the following remark as they described the purposes of ACM's periodicals [2]: “If computer programming is to become an important part of computer research and development, a transition of programming from an art to a disciplined science must be effected.” Such a goal has been a continually recurring theme during the ensuing years; for example, we read in 1970 of the “first steps toward transforming the art of programming into a science” [26]. Meanwhile we have actually succeeded in making our discipline a science, and in a remarkably simple way: merely by deciding to call it “computer science.”

Journal ArticleDOI
Ben Wegbreit1
TL;DR: Two classes of techniques are considered: heuristic methods which derive loop predicates from boundary conditions and/or partially specified inductive assertions, and extraction methods which use input predicates and appropriate weak interpretations to obtain certain classes of loop predicates by an evaluation on the weak interpretation.
Abstract: Current methods for mechanical program verification require a complete predicate specification on each loop. Because this is tedious and error prone, producing a program with complete, correct predicates is reasonably difficult and would be facilitated by machine assistance. This paper discusses techniques for mechanically synthesizing loop predicates. Two classes of techniques are considered: (1) heuristic methods which derive loop predicates from boundary conditions and/or partially specified inductive assertions; (2) extraction methods which use input predicates and appropriate weak interpretations to obtain certain classes of loop predicates by an evaluation on the weak interpretation.

Journal ArticleDOI
TL;DR: It is suggested that for the protection of time sharing systems from unauthorized users polynomials over a prime modulus are superior to one-way ciphers derived from Shannon codes.
Abstract: The protection of time sharing systems from unauthorized users is often achieved by the use of passwords. By using one-way ciphers to code the passwords, the risks involved with storing the passwords in the computer can be avoided. We discuss the selection of a suitable one-way cipher and suggest that for this purpose polynomials over a prime modulus are superior to one-way ciphers derived from Shannon codes.
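In the spirit of the suggestion, a toy one-way cipher that evaluates a fixed polynomial over a prime modulus might be sketched as follows; the prime and coefficients are placeholders rather than the paper's values, and no security claim is intended.

```python
def polynomial_cipher(password: str, coeffs, prime) -> int:
    """Toy one-way cipher: encode the password as an integer x and evaluate a
    fixed polynomial over GF(prime) using Horner's rule."""
    x = int.from_bytes(password.encode(), "big") % prime
    result = 0
    for a in coeffs:                       # all arithmetic mod prime
        result = (result * x + a) % prime
    return result

# Illustrative parameters: a Mersenne prime and an arbitrary sparse polynomial.
P = (1 << 127) - 1
stored = polynomial_cipher("secret", [1, 0, 0, 7, 0, 11, 3], P)
```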

Journal ArticleDOI
TL;DR: A method called multiple key hashing is proposed which attempts to minimize the cost of page accessing; since this approach is not always preferable to inversion, a combined method is also described.
Abstract: The high cost of page accessing implies a need for more careful data organization in a paged memory than is typical of most inverted file and similar approaches to multi-key retrieval. This article analyses that cost and proposes a method called multiple key hashing which attempts to minimize it. Since this approach is not always preferable to inversion, a combined method is described. The exact specification of this combination for a file with given data and traffic characteristics is formulated as a mathematical program. The proposed heuristic solution to this program can often improve on a simple inversion technique by a factor of 2 or 3.
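A toy rendering of the multiple-key-hashing idea: each key field hashes into its own small range, and the per-field values are combined into one page number, so a query specifying any subset of the keys narrows the search to a predictable set of candidate pages. Field names, page counts, and the use of Python's built-in hash (which is per-process salted) are all illustrative.

```python
def page_of(record, pages_per_field):
    """Compose a page number from per-field hash digits (mixed radix)."""
    page = 0
    for field, npages in pages_per_field:
        page = page * npages + hash(record[field]) % npages
    return page

def candidate_pages(query, pages_per_field):
    """Pages that could hold records matching a partial-match query."""
    pages = [0]
    for field, npages in pages_per_field:
        digits = [hash(query[field]) % npages] if field in query else range(npages)
        pages = [p * npages + d for p in pages for d in digits]
    return pages

layout = [("dept", 8), ("year", 4)]
page = page_of({"dept": "sales", "year": 1974}, layout)
pages_to_scan = candidate_pages({"dept": "sales"}, layout)   # 4 of 32 pages
```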

Journal ArticleDOI
TL;DR: The method presented requires time proportional to |a|, the number of characters in the input string a.
Abstract: A method is presented for calculating a string B, belonging to a given regular language L, which is “nearest” (in number of edit operations) to a given input string a. B is viewed as a reasonable “correction” for the possibly erroneous string a, where a was originally intended to be a string of L. The calculation of B by the method presented requires time proportional to |a|, the number of characters in a. The method should find applications in information retrieval, artificial intelligence, and spelling correction systems.

Journal ArticleDOI
TL;DR: The experimental results obtained by using the method to restructure an interactive text editor and the file system module of an operating system have shown its substantial superiority over the other methods proposed in the literature.
Abstract: A new approach to program locality improvement via restructuring is described. The method is particularly suited to those systems where primary memory is managed according to a working set strategy. It is based on the concept of critical working set, a working set which does not contain the next memory reference. The data the method operates upon are extracted from a trace of the program to be restructured. It is shown that, except in some special cases, the method is not optimum. However, the experimental results obtained by using the method to restructure an interactive text editor and the file system module of an operating system have shown its substantial superiority over the other methods proposed in the literature.

Journal ArticleDOI
R. H. Canaday1, R. D. Harrison1, E. L. Ivie1, J. L. Ryder1, L. A. Wehr1 
TL;DR: An experimental implementation of the eXperimental Data Management System, XDMS, is described and certain conclusions about the back-end approach are drawn from this implementation.
Abstract: It is proposed that the data base management function be placed on a dedicated back-end computer which accepts commands (in a relatively high level language such as the CODASYL Data Base Task Group, April 1971 Report) from a host computer, accesses the data base on secondary storage, and returns results. The advantages of such a configuration are discussed. An experimental implementation, called the eXperimental Data Management System, XDMS, is described and certain conclusions about the back-end approach are drawn from this implementation.

Journal ArticleDOI
TL;DR: A least-errors recognizer is developed informally using the well-known recognizer of Earley, along with elements of Bellman's dynamic programming, and takes a general class of context-free grammars as drivers and any finite string as input.
Abstract: A least-errors recognizer is developed informally using the well-known recognizer of Earley, along with elements of Bellman's dynamic programming. The analyzer takes a general class of context-free grammars as drivers, and any finite string as input. Recognition consists of a least-errors count for a corrected version of the input relative to the driver grammar. The algorithm design emphasizes practical aspects which help in programming it.

Journal ArticleDOI
Ben Wegbreit1
TL;DR: The EL1 language contains a number of features specifically designed to simultaneously satisfy both natural problem-oriented notation and efficient implementation, in a context that allows efficient compiled code and compact data representation.
Abstract: In constructing a general purpose programming language, a key issue is providing a sufficient set of data types and associated operations in a manner that permits both natural problem-oriented notation and efficient implementation. The EL1 language contains a number of features specifically designed to simultaneously satisfy both requirements. The resulting treatment of data types includes provision for programmer-defined data types and generic routines, programmer control over type conversion, and very flexible data type behavior, in a context that allows efficient compiled code and compact data representation.

Journal ArticleDOI
TL;DR: A general method of constructing a drive workload representative of a real workload is described, in which a synthetic program in which the characteristics can be varied by varying the appropriate parameters is used.
Abstract: A general method of constructing a drive workload representative of a real workload is described. The real workload is characterized by its demands on the various system resources. These characteristics of the real workload are obtained from the system accounting data. The characteristics of the drive workload are determined by matching the joint probability density of the real workload with that of the drive workload. The drive workload is realized by using a synthetic program in which the characteristics can be varied by varying the appropriate parameters. Calibration experiments are conducted to determine expressions relating the synthetic program parameters with the workload characteristics. The general method is applied to the case of two variables, cpu seconds and number of I/O activities; and a synthetic workload with 88 jobs is constructed to represent a month's workload consisting of about 6000 jobs.
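A bare-bones synthetic program in the spirit described, with the two workload characteristics used in the paper's example (CPU seconds and number of I/O activities) exposed as parameters; the busy-loop arithmetic and scratch-file details are arbitrary.

```python
import os
import time

def synthetic_job(cpu_seconds, io_count, scratch="scratch.tmp"):
    """Burn roughly `cpu_seconds` of CPU time, then issue `io_count` writes,
    so the two characteristics can be varied independently."""
    deadline = time.process_time() + cpu_seconds
    x = 0
    while time.process_time() < deadline:
        x = (x * 1103515245 + 12345) % (1 << 31)   # busy arithmetic
    with open(scratch, "wb") as f:
        for _ in range(io_count):
            f.write(b"x" * 4096)
            f.flush()
            os.fsync(f.fileno())                   # force a real I/O
    return x
```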


Journal ArticleDOI
R. A. Freiburghouse1
TL;DR: The paper compares register allocation based on usage counts to other commonly used register allocation techniques, and presents evidence which shows that the usage count technique is significantly better than these other techniques.
Abstract: This paper introduces the notion of usage counts, shows how usage counts can be developed by algorithms that eliminate redundant computations, and describes how usage counts can provide the basis for register allocation. The paper compares register allocation based on usage counts to other commonly used register allocation techniques, and presents evidence which shows that the usage count technique is significantly better than these other techniques.
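A toy of the allocation policy only (not of the redundancy-elimination algorithms that produce the counts): count references to each value and keep the most-referenced values in the available registers. The names and the flat list of uses are invented for the illustration.

```python
from collections import Counter

def allocate_registers(uses, num_registers):
    """Usage-count allocation: the most frequently referenced values get the
    registers; everything else stays in memory. `uses` is a list of value
    names in reference order."""
    counts = Counter(uses)
    in_registers = [v for v, _ in counts.most_common(num_registers)]
    return {v: f"r{i}" for i, v in enumerate(in_registers)}

assignment = allocate_registers(["a", "b", "a", "c", "a", "b"], num_registers=2)
# {'a': 'r0', 'b': 'r1'}
```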

Journal ArticleDOI
TL;DR: The algorithm calculates the exact cumulative distribution of the two-sided Kolmogorov-Smirnov statistic for samples with few observations; the need for it arises in data sampling and in discrete system simulation.
Abstract: The algorithm calculates the exact cumulative distribution of the two-sided Kolmogorov-Smirnov statistic for samples with few observations. The general problem for which the formula is needed is to assess the probability that a particular sample comes from a proposed distribution. The problem arises specifically in data sampling and in discrete system simulation. Typically, some finite number of observations are available, and some underlying distribution is being considered as characterizing the source of the observations.