
Showing papers in "Communications of The ACM in 1972"


Journal ArticleDOI
TL;DR: It is pointed out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further, and how the method can be used for more general curve fitting.
Abstract: Hough has proposed an interesting and computationally efficient procedure for detecting lines in pictures. This paper points out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further. It also shows how the method can be used for more general curve fitting, and gives alternative interpretations that explain the source of its efficiency.

6,693 citations
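
The angle-radius parameterization advocated here maps each image point (x, y) to the sinusoid rho = x*cos(theta) + y*sin(theta) in parameter space, so collinear points vote into the same accumulator cell. A minimal NumPy sketch of that accumulator idea (illustrative only, not the paper's implementation; it assumes non-negative point coordinates):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Vote for lines in angle-radius form: rho = x*cos(theta) + y*sin(theta)."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.hypot(np.abs(pts[:, 0]).max(), np.abs(pts[:, 1]).max()) + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # one sinusoid per image point
        bins = np.digitize(rhos, rho_edges) - 1
        acc[bins, np.arange(n_theta)] += 1               # one vote per (rho, theta) cell
    return acc, thetas, rho_edges

# Collinear points (here on the line y = x) pile their votes into one cell.
acc, thetas, rho_edges = hough_lines([(i, i) for i in range(50)])
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(acc[r, t], round(thetas[t], 2))   # ~50 votes near theta = 3*pi/4 (about 2.36)
```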


Journal ArticleDOI
TL;DR: In this paper, the authors discuss modularization as a mechanism for improving the flexibility and comprehensibility of a system while allowing the shortening of its development time, and the effectiveness of modularization is dependent upon the criteria used in dividing the system into modules.
Abstract: This paper discusses modularization as a mechanism for improving the flexibility and comprehensibility of a system while allowing the shortening of its development time. The effectiveness of a “modularization” is dependent upon the criteria used in dividing the system into modules. A system design problem is presented and both a conventional and unconventional decomposition are described. It is shown that the unconventional decompositions have distinct advantages for the goals outlined. The criteria used in arriving at the decompositions are discussed. The unconventional decomposition, if implemented with the conventional assumption that a module consists of one or more subroutines, will be less efficient in most cases. An alternative approach to implementation which does not have this effect is sketched.

5,028 citations
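
The decomposition criterion the paper argues for is usually summarized today as information hiding: each module hides one design decision (such as a data representation) behind a narrow interface. A small illustrative sketch of that idea, with a hypothetical LineStorage module that is not taken from the paper's own example system:

```python
class LineStorage:
    """Hides how lines are stored behind a narrow interface.

    Callers use only add_line() and word(); whether lines are kept as lists of
    words, packed strings, or something else is a hidden design decision that
    can change without touching any client module."""
    def __init__(self):
        self._lines = []                 # hidden representation: one list of words per line

    def add_line(self, text):
        self._lines.append(text.split())

    def n_lines(self):
        return len(self._lines)

    def n_words(self, line):
        return len(self._lines[line])

    def word(self, line, pos):
        return self._lines[line][pos]


# A client depends only on the interface, not on the representation.
store = LineStorage()
store.add_line("structure of the system")
print(store.word(0, 3))   # -> "system"
```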


Journal ArticleDOI
TL;DR: The algorithm is supplied as one file of BCD 80 character card images at 556 B.P.I., even parity, on seven track tape, and if the user sends a small tape (wt. less than 1 lb.) the algorithm will be copied on it and returned to him at a charge of $10.00 (U.S. and Canada) or $18.00 (elsewhere).
Abstract: and Canada) or $18.00 (elsewhere). If the user sends a small tape (wt. less than 1 lb.) the algorithm will be copied on it and returned to him at a charge of $10.00 (U.S. only). All orders are to be prepaid with checks payable to ACM Algorithms. The algorithm is recorded as one file of BCD 80 character card images at 556 B.P.I., even parity, on seven track tape. We will supply the algorithm at a density of 800 B.P.I. if requested. The cards for the algorithm are sequenced starting at 10 and incremented by 10. The sequence number is right justified in column 80. Although we will make every attempt to insure that the algorithm conforms to the description printed here, we cannot guarantee it, nor can we guarantee that the algorithm is correct. -L.D.F. Description: The following programs are a collection of Fortran IV subroutines to solve the matrix equation AX + XB = C (1), where A, B, and C are real matrices of dimensions m × m, n × n, and m × n, respectively. Additional subroutines permit the efficient solution of the equation AᵀX + XA = C (2), where C is symmetric. Equation (1) has applications to the direct solution of discrete Poisson equations [2]. It is well known that (1) has a unique solution if and only if A and -B have no eigenvalues in common. One proof of the result amounts to constructing the solution from complete systems of eigenvalues and eigenvectors of A and B, when they exist. This technique has been proposed as a computational method (e.g. see [1]); however, it is unstable when the eigensystem is ill conditioned. The method proposed here is based on the Schur reduction to triangular form by orthogonal similarity transformations. Equation (1) is solved as follows. The matrix A is reduced to lower real Schur form A' by an orthogonal similarity transformation U; that is, A is reduced to the real, block lower triangular form.

1,797 citations
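
The equation AX + XB = C is what is now usually called a Sylvester equation, and SciPy ships a solver for it (reportedly based on the same Schur-reduction approach). A minimal sketch, assuming SciPy is available, that solves a random instance and checks the residual:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# Solve AX + XB = C; random A and B almost surely share no eigenvalue with -B,
# so a unique solution exists. Then verify the residual.
X = solve_sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, C))   # True
```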


Journal ArticleDOI
TL;DR: This paper presents an approach to writing specifications for parts of software systems sufficiently precise and complete that other pieces of software can be written to interact with the piece specified without additional information.
Abstract: This paper presents an approach to writing specifications for parts of software systems. The main goal is to provide specifications sufficiently precise and complete that other pieces of software can be written to interact with the piece specified without additional information. The secondary goal is to include in the specification no more information than necessary to meet the first goal. The technique is illustrated by means of a variety of examples from a tutorial system.

747 citations


Journal ArticleDOI
TL;DR: Dijkstra's "Humble Programmer" as discussed by the authors is one of the great classics in the field, providing an educational experience for the junior programmer, and truly delightful reading for the veteran.
Abstract: In my opinion, Dijkstra's "Humble Programmer" ought to be required reading for everyone who claims that programming is his or her profession. To me, it ranks as one of the great classics in the field, providing an educational experience for the junior programmer, and truly delightful reading for the veteran. It serves as a wonderful reminder of the good old days of the computer field, and offers an excellent summary of the philosophies and guiding principles by which we try to do our jobs. The concepts expressed are eloquent and profound, sometimes controversial, and generally thought-provoking. I worry that some of the most eloquent remarks cannot be appreciated by today's programmers: For example, Dijkstra says, " . . . when we had a few weak computers, programming became a mild problem, and now that we have gigantic computers, programming has become an equally gigantic problem." Most of us who began our careers on 1K or 4K machines will smile in appreciation, but will the remarks mean anything to the programmer of the 1980s who will begin his or her career on a 16-megabyte computer? When Dijkstra states, " . . . one of the most important aspects of any computing tool is its influence on the thinking habits of those who try to use it," I wonder whether today's hobbyist programmer, with his build-it-at-home computer and subset of BASIC, has any idea of what Dijkstra is talking about. There are the controversial comments, too: For instance, Dijkstra refers to FORTRAN as an infantile disorder, and PL/I as a fatal disease. Curiously enough, even though he praises such obscure languages as LISP, he does not seem to acknowledge the existence of the two languages that account for probably 75 percent of all the computer programs written today: COBOL and RPG. It is in this paper as well that Dijkstra suggests that the primary resistance to the so-called structured revolution will come from educational institutions, and from the political backlash of an EDP organization that would prefer to maintain the status quo. I personally agree with this, having experienced such resistance first-hand in my own work as a consultant and educator. You may or may not agree, but you certainly will find Dijkstra's comments worth reading.

629 citations


Journal ArticleDOI
TL;DR: A method for generating values of continuous symmetric random variables that is relatively fast, requires essentially no computer memory, and is easy to use is developed.
Abstract: A method for generating values of continuous symmetric random variables that is relatively fast, requires essentially no computer memory, and is easy to use is developed. The method, which uses a uniform zero-one random number source, is based on the inverse function of the lambda distribution of Tukey. Since it approximates many of the continuous theoretical distributions and empirical distributions frequently used in simulations, the method should be useful to simulation practitioners.

430 citations
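
The method rests on inverting Tukey's lambda distribution: a uniform(0,1) variate u fed through the quantile function Q(u) = (u^λ - (1-u)^λ)/λ yields a symmetric variate whose shape is controlled by λ. A small sketch of that inverse-function idea; the particular λ value used below is a commonly quoted normal-like shape and is illustrative, not the paper's fitted parameter:

```python
import math, random

def tukey_lambda_variate(lam, rng=random):
    """One symmetric variate via the inverse (quantile) function of Tukey's lambda distribution."""
    u = rng.random()                                   # uniform zero-one source
    if lam == 0.0:                                     # limiting case: the logistic distribution
        return math.log(u / (1.0 - u))
    return (u ** lam - (1.0 - u) ** lam) / lam

# lam around 0.135 is often cited as approximating the normal shape (up to a scale factor).
sample = [tukey_lambda_variate(0.135) for _ in range(100_000)]
print(round(sum(sample) / len(sample), 3))             # near 0 for a symmetric distribution
```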


Journal ArticleDOI
TL;DR: It is found that the algorithm operating with the triangular array is the most sensitive to image irregularities and noise, yet it will yield a thinned image with an overall reduced number of points.
Abstract: In this report three thinning algorithms are developed: one each for use with rectangular, hexagonal, and triangular arrays. The approach to the development of each algorithm is the same. Pictorial results produced by each of the algorithms are presented and the relative performances of the algorithms are compared. It is found that the algorithm operating with the triangular array is the most sensitive to image irregularities and noise, yet it will yield a thinned image with an overall reduced number of points. It is concluded that the algorithm operating in conjunction with the hexagonal array has features which strike a balance between those of the other two arrays.

234 citations


Journal ArticleDOI
TL;DR: Five well-known scheduling policies for movable head disks are compared using the performance criteria of expected seek time (system oriented) and expected waiting time (individual I/O request oriented) to choose a utility function to measure total performance.
Abstract: Five well-known scheduling policies for movable head disks are compared using the performance criteria of expected seek time (system oriented) and expected waiting time (individual I/O request oriented). Both analytical and simulation results are obtained. The variance of waiting time is introduced as another meaningful measure of performance, showing possible discrimination against individual requests. Then the choice of a utility function to measure total performance including system oriented and individual request oriented measures is described. Such a function allows one to differentiate among the scheduling policies over a wide range of input loading conditions. The selection and implementation of a maximum performance two-policy algorithm are discussed.

232 citations
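
A toy comparison of two of the classic policies on the seek-distance criterion — first-come-first-served against shortest-seek-time-first — illustrates the kind of trade-off the paper quantifies: SSTF cuts total head movement but can discriminate against far-away requests. The request queue and cylinder numbers below are invented for illustration:

```python
def fcfs_seek(start, requests):
    """Total head movement when requests are served strictly in arrival order."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf_seek(start, requests):
    """Total head movement when the nearest pending request is always served next."""
    pending, total, pos = list(requests), 0, start
    while pending:
        cyl = min(pending, key=lambda c: abs(c - pos))
        pending.remove(cyl)
        total += abs(cyl - pos)
        pos = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests (illustrative)
print("FCFS:", fcfs_seek(53, queue))          # 640: large total seek distance
print("SSTF:", sstf_seek(53, queue))          # 236: smaller total, but far requests wait longer
```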


Journal ArticleDOI
TL;DR: The independent-reference model, in which page references are statistically independent, is used to assess the effects of interpage dependencies on working-set size observations and under general assumptions, working- set size is shown to be normally distributed.
Abstract: A program's working set W(t, T) at time t is the set of distinct pages among the T most recently referenced pages. Relations between the average working-set size, the missing-page rate, and the interreference-interval distribution may be derived both from time-average definitions and from ensemble-average (statistical) definitions. An efficient algorithm for estimating these quantities is given. The relation to LRU (least recently used) paging is characterized. The independent-reference model, in which page references are statistically independent, is used to assess the effects of interpage dependencies on working-set size observations. Under general assumptions, working-set size is shown to be normally distributed.

209 citations
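
Since W(t, T) is just the set of distinct pages among the T most recent references, both the average working-set size and the missing-page rate can be read off a reference string directly. A small sliding-window sketch of those definitions (the paper gives a more efficient one-pass estimator than this direct version):

```python
def working_set_stats(refs, T):
    """Average working-set size and missing-page rate for window size T (direct method)."""
    sizes, faults = [], 0
    for t in range(len(refs)):
        window = refs[max(0, t - T + 1): t + 1]      # the T most recent references
        sizes.append(len(set(window)))
        # A "missing page" at time t: the referenced page was not in W(t-1, T).
        prev = set(refs[max(0, t - T): t])
        if refs[t] not in prev:
            faults += 1
    return sum(sizes) / len(sizes), faults / len(refs)

refs = [1, 2, 3, 2, 1, 4, 1, 2, 5, 1, 2, 3]          # an invented page-reference string
avg_size, miss_rate = working_set_stats(refs, T=4)
print(avg_size, miss_rate)                            # 3.0 0.5 for this trace
```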


Journal ArticleDOI
TL;DR: In this article, the authors present a structured representation of multiprogramming in a high level language, which explicitly associates a data structure shared by concurrent processes with operations defined on it and permits a large class of time-dependent errors to be caught at compile time.
Abstract: This paper presents a proposal for structured representation of multiprogramming in a high level language. The notation used explicitly associates a data structure shared by concurrent processes with operations defined on it. This clarifies the meaning of programs and permits a large class of time-dependent errors to be caught at compile time. A combination of critical regions and event variables enables the programmer to control scheduling of resources among competing processes to any degree desired. These concepts are sufficiently safe to use not only within operating systems but also within user programs.

205 citations
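
The proposal associates a shared data structure with the operations defined on it and lets a process wait, inside a critical region, for a condition on that data. Python has no conditional critical regions, but a rough analogue of the discipline can be sketched with a condition variable; this is only an illustration of the idea, not the notation of the paper:

```python
import threading

class SharedSlot:
    """A one-slot buffer: the shared data and its operations are packaged together,
    and every access happens inside one critical region (the condition's lock)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None

    def put(self, v):
        with self._cond:                       # enter the critical region
            while self._value is not None:     # await "slot empty"
                self._cond.wait()
            self._value = v
            self._cond.notify_all()

    def get(self):
        with self._cond:
            while self._value is None:         # await "slot full"
                self._cond.wait()
            v, self._value = self._value, None
            self._cond.notify_all()
            return v

slot = SharedSlot()
t = threading.Thread(target=lambda: slot.put(42))
t.start()
print(slot.get())    # 42
t.join()
```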


Journal ArticleDOI
TL;DR: The need for education related to information systems in organizations is discussed, and a curriculum is proposed for graduate professional programs in universities, at the Master's level, and courses incorporating it are specified.
Abstract: The need for education related to information systems in organizations is discussed, and a curriculum is proposed for graduate professional programs in universities, at the Master's level. Material necessary for such programs is identified, and courses incorporating it are specified. Detailed course descriptions are presented, program organization discussed, and implementation questions considered.

Journal ArticleDOI
TL;DR: It is shown how the Multics software achieves the effect of a large segmented main memory through the use of the Honeywell 645 segmentation and paging hardware.
Abstract: As experience with use of on-line operating systems has grown, the need to share information among system users has become increasingly apparent. Many contemporary systems permit some degree of sharing. Usually, sharing is accomplished by allowing several users to share data via input and output of information stored in files kept in secondary storage. Through the use of segmentation, however, Multics provides direct hardware addressing by user and system programs of all information, independent of its physical storage location. Information is stored in segments each of which is potentially sharable and carries its own independent attributes of size and access privilege.Here, the design and implementation considerations of segmentation and sharing in Multics are first discussed under the assumption that all information resides in a large, segmented main memory. Since the size of main memory on contemporary systems is rather limited, it is then shown how the Multics software achieves the effect of a large segmented main memory through the use of the Honeywell 645 segmentation and paging hardware.

Journal ArticleDOI
TL;DR: Although the implementation described here required some compromise to achieve a system operational within six months of hardware checkout, TENEX has met its major goals and provided reliable service at several sites and through the ARPA network.
Abstract: TENEX is a new time sharing system implemented on a DEC PDP-10 augmented by special paging hardware developed at BBN. This report specifies a set of goals which are important for any time sharing system. It describes how the TENEX design and implementation achieve these goals. These include specifications for a powerful multiprocess large memory virtual machine, intimate terminal interaction, comprehensive uniform file and I/O capabilities, and clean flexible system structure. Although the implementation described here required some compromise to achieve a system operational within six months of hardware checkout, TENEX has met its major goals and provided reliable service at several sites and through the ARPA network.

Journal ArticleDOI
TL;DR: A parallel processing algorithm for shrinking binary patterns to obtain single isolated elements, one for each pattern, is presented and an analogy with a neural network description, in terms of McCulloch-Pitts “neurons,” is presented.
Abstract: A parallel processing algorithm for shrinking binary patterns to obtain single isolated elements, one for each pattern, is presented. This procedure may be used for counting patterns on a matrix, and a hardware implementation of the algorithm using large scale integrated technology is envisioned. The principal features of this method are the very small window employed (two-by-two elements), the parallel nature of the process, and the possibility of shrinking any pattern, regardless of the complexity of its configuration. Problems regarding merging and disconnection of patterns during the process, as well as the determination of the maximum number of steps necessary to obtain a single isolated element from a pattern, are reviewed and discussed. An analogy with a neural network description, in terms of McCulloch-Pitts “neurons,” is presented.

Journal ArticleDOI
TL;DR: The authors' primary contribution is the use of polynomial sampling (as explained in Section 2) to eliminate any dependency on standard function programs.
Abstract: distributed raw, dora m~m}~.rs into expo~e~ttaRy a=d normally dis~rib~ed q~mntilies~ W.~e most ef~kien~ ones are compared, i~ terms of memory reqairemenN a~'~d sNeed, wi#~ some ne~' a~gori~bms, A rmmber of pro

Journal ArticleDOI
TL;DR: The formal description of the synchronization mechanism makes it very easy to prove that the buffer will neither overflow nor underflow, that senders and receivers will never operate on the same message frame in the buffer nor will they run into a deadlock.
Abstract: Formalization of a well-defined synchronization mechanism can be used to prove that concurrently running processes of a system communicate correctly. This is demonstrated for a system consisting of many sending processes which deposit messages in a buffer and many receiving processes which remove messages from that buffer. The formal description of the synchronization mechanism makes it very easy to prove that the buffer will neither overflow nor underflow, that senders and receivers will never operate on the same message frame in the buffer nor will they run into a deadlock.
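
The invariants proved in the paper — no overflow, no underflow, no two processes on the same message frame, no deadlock — are exactly the properties a conventional semaphore-based bounded buffer is built to preserve. A standard sketch of such a buffer, offered as an illustration of the properties being verified rather than a rendering of the paper's formalism:

```python
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self._frames = deque()
        self._empty = threading.Semaphore(capacity)  # counts free message frames
        self._full = threading.Semaphore(0)          # counts filled message frames
        self._mutex = threading.Lock()               # one process per frame operation

    def deposit(self, msg):
        self._empty.acquire()      # blocks senders when the buffer is full (no overflow)
        with self._mutex:
            self._frames.append(msg)
        self._full.release()

    def remove(self):
        self._full.acquire()       # blocks receivers when the buffer is empty (no underflow)
        with self._mutex:
            msg = self._frames.popleft()
        self._empty.release()
        return msg

buf = BoundedBuffer(capacity=3)
threading.Thread(target=lambda: [buf.deposit(i) for i in range(5)]).start()
print([buf.remove() for _ in range(5)])    # [0, 1, 2, 3, 4]
```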

Journal ArticleDOI
TL;DR: The security of an information system may be represented by a model matrix whose elements are decision rules and whose row and column indices are users and data items respectively, which is used to explain security features of several existing systems.
Abstract: The security of an information system may be represented by a model matrix whose elements are decision rules and whose row and column indices are users and data items respectively. A set of four functions is used to access this matrix at translation and execution time. Distinguishing between data dependent and data independent decision rules enables one to perform much of the checking of security only once at translation time rather than repeatedly at execution time. The model is used to explain security features of several existing systems, and serves as a framework for a proposal for general security system implementation within today's languages and operating systems.
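
The model is essentially an access matrix indexed by user and data item whose entries are decision rules: data-independent rules can be resolved once at translation time, while data-dependent rules must be re-evaluated on every access at execution time. A toy sketch of that distinction; the user names, item, and rules are invented for illustration:

```python
# Each matrix entry is a decision rule. A rule that ignores the data value is
# data-independent (checkable once, "at translation time"); one that inspects
# the value is data-dependent (must be run on every access).
ALLOW = lambda value=None: True
DENY  = lambda value=None: False
SMALL = lambda value=None: value is not None and value < 10_000   # data-dependent

matrix = {
    ("alice", "salary"): ALLOW,
    ("bob",   "salary"): SMALL,
    ("carol", "salary"): DENY,
}

def translation_time_check(user, item):
    """Resolve data-independent rules once; defer data-dependent ones to execution time."""
    rule = matrix.get((user, item), DENY)
    return "deferred" if rule is SMALL else rule()

def execution_time_check(user, item, value):
    return matrix.get((user, item), DENY)(value)

print(translation_time_check("alice", "salary"))       # True: no runtime check needed
print(translation_time_check("bob", "salary"))         # "deferred": rule depends on the data
print(execution_time_check("bob", "salary", 5_000))    # True
print(execution_time_check("bob", "salary", 50_000))   # False
```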

Journal ArticleDOI
TL;DR: Initial findings with regard to form determiners such as voice, form, tense, and mood, some rules for embedding sentences, and some attention to pronominal substitution are reported.
Abstract: A system is described for generating English sentences from a form of semantic nets in which the nodes are word-sense meanings and the paths are primarily deep case relations. The grammar used by the system is in the form of a network that imposes an ordering on a set of syntactic transformations that are expressed as LISP functions. The generation algorithm uses the information in the semantic network to select appropriate generation paths through the grammar. The system is designed for use as a computational tool that allows a linguist to develop and study methods for generating surface strings from an underlying semantic structure. Initial findings with regard to form determiners such as voice, form, tense, and mood, some rules for embedding sentences, and some attention to pronominal substitution are reported. The system is programmed in LISP 1.5 and is available from the authors.

Journal ArticleDOI
TL;DR: This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and all three problems overcome.
Abstract: In a companion paper to this, “I Methodology and Experiences,” the automatic Clenshaw-Curtis quadrature scheme was described and how each quadrature formula used in the scheme requires a cosine transformation of the integrand values was shown. The high cost of these cosine transformations has been a serious drawback in using Clenshaw-Curtis quadrature. Two other problems related to the cosine transformation have also been troublesome. First, the conventional computation of the cosine transformation by recurrence relation is numerically unstable, particularly at the low frequencies which have the largest effect upon the integral. Second, in case the automatic scheme should require refinement of the sampling, storage is required to save the integrand values after the cosine transformation is computed.This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and all three problems overcome. The modification is also applicable in other circumstances requiring cosine or sine transformations, such as polynomial interpolation through the Chebyshev points.
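
The underlying trick is that a cosine transform of N+1 samples is the real part of the Fourier transform of their even extension of length 2N, so an FFT evaluates it stably in O(N log N) rather than by an unstable recurrence. A small NumPy sketch verifying that identity against the direct O(N²) sum; this is the standard DCT-I construction, not the paper's own code:

```python
import numpy as np

def cosine_transform_direct(f):
    """c_k = f_0 + (-1)^k f_N + 2 * sum_{n=1}^{N-1} f_n cos(pi n k / N), for k = 0..N."""
    N = len(f) - 1
    k = np.arange(N + 1)
    c = f[0] + ((-1) ** k) * f[N]
    for n in range(1, N):
        c = c + 2.0 * f[n] * np.cos(np.pi * n * k / N)
    return c

def cosine_transform_fft(f):
    """Same transform via the real FFT of the even extension [f_0..f_N, f_{N-1}..f_1]."""
    N = len(f) - 1
    even_ext = np.concatenate([f, f[-2:0:-1]])          # length 2N
    return np.fft.rfft(even_ext).real[: N + 1]

f = np.cos(np.linspace(0.0, np.pi, 17)) ** 3 + 0.5      # sample values at 17 Chebyshev points
print(np.allclose(cosine_transform_direct(f), cosine_transform_fft(f)))   # True
```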

Journal ArticleDOI
David C. Walden1
TL;DR: A system of communication between processes in a time-sharing system is described and the communication system is extended so that it may be used between processes distributed throughout a computer network.
Abstract: A system of communication between processes in a time-sharing system is described and the communication system is extended so that it may be used between processes distributed throughout a computer network. The hypothetical application of the system to an existing network is discussed.

Journal ArticleDOI
TL;DR: Following the fixpoint theory of Scott, the semantics of computer programs are defined in terms of the least fixpoints of recursive programs, which allows not only the justification of all existing verification techniques, but also their extension to handle, in a uniform manner, various properties of computer programs.
Abstract: Following the fixpoint theory of Scott, the semantics of computer programs are defined in terms of the least fixpoints of recursive programs. This allows not only the justification of all existing verification techniques, but also their extension to the handling, in a uniform manner, of various properties of computer programs, including correctness, termination, and equivalence.
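
The least fixpoint of a recursive program can be pictured as the limit of Kleene iteration: start from the everywhere-undefined function and repeatedly apply the program's defining functional, each step producing a function defined on more arguments. A small sketch of that iteration for the factorial program, with partial functions represented as dicts; this is only an illustration of the idea, not Scott's construction in full:

```python
def tau(F):
    """One application of the functional for  f(n) = 1 if n == 0 else n * f(n - 1).

    F is a partial function represented as a dict; the result is defined wherever
    the body can be evaluated using only F's defined values."""
    G = {}
    for n in range(50):               # a finite fragment of the domain, for illustration
        if n == 0:
            G[n] = 1
        elif (n - 1) in F:
            G[n] = n * F[n - 1]
    return G

approx = {}                           # f_0: the everywhere-undefined partial function
for i in range(6):
    approx = tau(approx)              # f_{i+1} = tau(f_i), defined on {0, ..., i}

print(sorted(approx.items()))         # [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
```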

Journal ArticleDOI
Sakti P. Ghosh1
TL;DR: Conditions under which the consecutive retrieval property exists and remains invariant have been established, and an outline for designing an information retrieval system based on the consecutive retrieval property is discussed.
Abstract: The consecutive retrieval property is an important relation between a query set and record set. Its existence enables the design of an information retrieval system with a minimal search time and no redundant storage. Some important theorems on the consecutive retrieval property are proved in this paper. Conditions under which the consecutive retrieval property exists and remains invariant have been established. An outline for designing an information retrieval system based on the consecutive retrieval property is also discussed.
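
The consecutive retrieval property asks whether the records can be laid out in one linear order so that every query's answer set occupies consecutive positions; when it holds, each query is answered by scanning one contiguous block with no redundant copies. A brute-force sketch of the property check for tiny instances (the example data is invented, and real designs use far better algorithms than permutation search):

```python
from itertools import permutations

def has_consecutive_retrieval(records, queries):
    """Return an ordering of records in which every query set is consecutive, or None."""
    for order in permutations(records):
        pos = {r: i for i, r in enumerate(order)}
        if all(max(pos[r] for r in q) - min(pos[r] for r in q) + 1 == len(q)
               for q in queries):
            return order
    return None

records = ["r1", "r2", "r3", "r4"]
queries = [{"r1", "r2"}, {"r2", "r3"}, {"r3", "r4"}]
print(has_consecutive_retrieval(records, queries))   # ('r1', 'r2', 'r3', 'r4')
```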

Journal ArticleDOI
Jean E. Sammet1
TL;DR: This paper discusses both the history and future of programming languages and a tree showing the chronological development of languages and their interrelationships is shown.
Abstract: This paper discusses both the history and future of programming languages ( = higher level languages). Some of the difficulties in writing such a history are indicated. A key part of the paper is a tree showing the chronological development of languages and their interrelationships. Reasons for the proliferation of languages are given. The major languages are listed with the reasons for their importance. A section on chronology indicates the happenings of the significant previous time periods and the major topics of 1972. Key concepts other than specific languages are discussed.

Journal ArticleDOI
Barbara Liskov1
TL;DR: The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer and is defined by a combination of microprograms and software.
Abstract: The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer. The system was produced to test the effect of machine architecture on complexity of software. The system is defined by a combination of microprograms and software. The microprogram defines a machine with some unusual architectural features; the software exploits these features to define the operating system as simply as possible. In this paper the development of the system is described, with particular emphasis on the principles which guided the design.

Journal ArticleDOI
TL;DR: The early origins of mathematics are discussed, emphasizing those aspects which seem to be of greatest interest from the standpoint of computer science.
Abstract: The early origins of mathematics are discussed, emphasizing those aspects which seem to be of greatest interest from the standpoint of computer science. A number of old Babylonian tablets, many of which have never before been translated into English, are quoted.

Journal ArticleDOI
TL;DR: This paper shows that this objection can be overcome by computing the cosine transformation by a modification of the fast Fourier transform algorithm.
Abstract: Clenshaw-Curtis quadrature is a particularly important automatic quadrature scheme for a variety of reasons, especially the high accuracy obtained from relatively few integrand values. However, it has received little use because it requires the computation of a cosine transformation, and the arithmetic cost of this has been prohibitive.This paper is in two parts; a companion paper, “II Computing the Cosine Transformation,” shows that this objection can be overcome by computing the cosine transformation by a modification of the fast Fourier transform algorithm. This first part discusses the strategy and various error estimates, and summarizes experience with a particular implementation of the scheme.

Journal ArticleDOI
TL;DR: The development of the research project in microprogramming and emulation at State University of New York at Buffalo consisted of the evaluation of various possible machines to support this research; the decision to purchase one such machine, which appears to be superior to the others considered; and the organization and definition of goals for each group in the project.
Abstract: The development of the research project in microprogramming and emulation at State University of New York at Buffalo consisted of three phases: the evaluation of various possible machines to support this research; the decision to purchase one such machine, which appears to be superior to the others considered; and the organization and definition of goals for each group in the project. Each of these phases is reported, with emphasis placed on the early results achieved in this research.

Journal ArticleDOI
TL;DR: The purpose of this paper is to describe a course concerned with both the effects of computers on society and the responsibilities of computer scientists to society, and the possible formats for such a course are discussed.
Abstract: The purpose of this paper is to describe a course concerned with both the effects of computers on society and the responsibilities of computer scientists to society. The impact of computers is divided into five components: political, economic, cultural, social, and moral; the main part of the paper defines each component and presents examples of the relevant issues. In the remaining portions the possible formats for such a course are discussed, a topic by topic outline is given, and a selected set of references is listed. It is hoped that the proposal will make it easier to initiate courses on this subject.

Journal ArticleDOI
TL;DR: The subroutine CPOLY is a Fortran program to find all the zeros of a complex polynomial by the three-stage complex algorithm described in Jenkins and Traub [4].
Abstract: The subroutine CPOLY is a Fortran program to find all the zeros of a complex polynomial by the three-stage complex algorithm described in Jenkins and Traub [4]. (An algorithm for real polynomials is given in [5].) The algorithm is similar in spirit to the two-stage algorithms studied by Traub [1, 2]. The program finds the zeros one at a time in roughly increasing order of modulus and deflates the polynomial to one of lower degree. The program is extremely fast and the timing is quite insensitive to the distribution of zeros. Extensive testing of an Algol version of the program, reported in Jenkins [3], has shown the program to be very reliable.
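
CPOLY's task — all zeros of a complex polynomial — can be reproduced today in a few lines of NumPy, although numpy.roots uses a companion-matrix eigenvalue method rather than the three-stage Jenkins–Traub iteration with deflation described here. A quick sketch on a made-up cubic, sorted by modulus to echo the order in which CPOLY finds its zeros:

```python
import numpy as np

# Coefficients in decreasing powers: p(z) = z^3 - (1+1j) z^2 - 2 z + 2 + 2j,
# an invented example with roots +sqrt(2), -sqrt(2), and 1+1j.
coeffs = [1.0, -(1 + 1j), -2.0, 2 + 2j]
zeros = np.roots(coeffs)

# Report zeros in roughly increasing order of modulus and verify them by evaluation.
print(sorted(zeros, key=abs))
print(np.allclose(np.polyval(coeffs, zeros), 0.0, atol=1e-8))   # True
```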

Journal ArticleDOI
TL;DR: It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.
Abstract: Microscopic level job stream data obtained in a production environment by an event-driven software probe is used to drive a model of a multiprogramming computer system. The CPU scheduling algorithm of the model is systematically varied. This technique, called trace-driven modeling, provides an accurate replica of a production environment for the testing of variations in the system. At the same time alterations in scheduling methods can be easily carried out in a controlled way with cause and effects relationships being isolated. The scheduling methods tested included the best possible and worst possible methods, the traditional methods of multiprogramming theory, round-robin, first-come-first-served, etc., and dynamic predictors. The relative and absolute performances of these scheduling methods are given. It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.
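
The paper's conclusion — that a good CPU scheduler must be preemptive and must bound how long any one job can hold the CPU — is visible even in a toy trace-driven comparison of non-preemptive first-come-first-served against preemptive round-robin. The job trace below is invented; the point is only the methodology of replaying one fixed job stream under different policies:

```python
def fcfs(trace):
    """Non-preemptive FCFS: mean turnaround time; one long job delays everything behind it."""
    t, turnaround = 0, []
    for arrival, burst in trace:
        t = max(t, arrival) + burst
        turnaround.append(t - arrival)
    return sum(turnaround) / len(turnaround)

def round_robin(trace, quantum):
    """Preemptive round-robin over the same trace (all jobs assumed present at t = 0
    for simplicity; a real trace-driven model would honor arrival times)."""
    remaining = [burst for _, burst in trace]
    finish = [0] * len(trace)
    t = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                run = min(quantum, r)
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = t
    return sum(finish) / len(finish)

trace = [(0, 20), (0, 2), (0, 2), (0, 2)]          # one CPU hog plus three short jobs
print("FCFS mean turnaround:", fcfs(trace))         # 23.0: short jobs wait behind the hog
print("RR   mean turnaround:", round_robin(trace, quantum=2))   # 11.0
```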