# Showing papers in "Communications of the ACM in 1969"

4,463 citations

••

TL;DR: An attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics.

Abstract: In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both theoretical and practical, may follow from a pursuance of these topics.
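Two of the axioms and rules of inference the paper discusses can be stated in modern notation (a sketch of the well-known forms, not a transcription of the paper's own displayed proof):

```latex
% Axiom of assignment: to establish P about x after x := E,
% P with E substituted for x must hold before the assignment.
\vdash P[E/x] \;\{x := E\}\; P

% Rule of composition: proofs of consecutive statements are
% chained through an intermediate assertion R_1.
\frac{\vdash P \;\{Q_1\}\; R_1 \qquad \vdash R_1 \;\{Q_2\}\; R}
     {\vdash P \;\{Q_1;\, Q_2\}\; R}
```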

2,638 citations

••

BBN Technologies

TL;DR: The memory structure and comprehension process of TLC allow new factual assertions and capabilities for relating text to such stored assertions to generalize automatically and provide a large increment in TLC's effective knowledge of the world and in its overall ability to comprehend text.

Abstract: The Teachable Language Comprehender (TLC) is a program designed to be capable of being taught to “comprehend” English text. When text which the program has not seen before is input to it, it comprehends that text by correctly relating each (explicit or implicit) assertion of the new text to a large memory. This memory is a “semantic network” representing factual assertions about the world. The program also creates copies of the parts of its memory which have been found to relate to the new text, adapting and combining these copies to represent the meaning of the new text. By this means, the meaning of all text the program successfully comprehends is encoded into the same format as that of the memory. In this form it can be added into the memory. Both factual assertions for the memory and the capabilities for correctly relating text to the memory's prior content are to be taught to the program as they are needed. TLC presently contains a relatively small number of examples of such assertions and capabilities, but within the system, notations for expressing either of these are provided. Thus the program now corresponds to a general process for comprehending language, and it provides a methodology for adding the additional information this process requires to actually comprehend text of any particular kind. The memory structure and comprehension process of TLC allow new factual assertions and capabilities for relating text to such stored assertions to generalize automatically. That is, once such an assertion or capability is put into the system, it becomes available to help comprehend a great many other sentences in the future. Thus the addition of a single factual assertion or linguistic capability will often provide a large increment in TLC's effective knowledge of the world and in its overall ability to comprehend text. The program's strategy is presented as a general theory of…

501 citations

••

TL;DR: A simple procedure for achieving reliable full-duplex transmission over half-duplex links is proposed and is compared with another of the same type, which has recently been described in the literature.

Abstract: A simple procedure for achieving reliable full-duplex transmission over half-duplex links is proposed. The scheme is compared with another of the same type, which has recently been described in the literature. Finally, some comments are made on another group of related transmission procedures which have been shown to be unreliable under some circumstances.

491 citations

••

TL;DR: This method might be called a “paper strip method” for analysis of variance and is similar to paper strip methods used for operations with polynomials.

Abstract: Write row one of Mn down the right edge of a strip of paper using the same spacing as for the observations. Now place this movable strip alongside the observation vector so that the top element on the paper strip is opposite the top element of the observation vector. Multiply adjacent elements and write the sum of these products at the top of a new column. Now slide the paper strip down tn spaces. Form the indicated inner product as before and write the result in the new column below the previous entry. Continue in this manner until all the observations have been used. Now write row two of Mn on a strip of paper and proceed as before. If we continue this process with all the rows of Mn we will get a new vector zn whose elements are linear transformations of the observation vector y. The dimension of zn is the same as that of y. Similarly form z(n-1) from zn and M(n-1). Continuing this process we finally obtain z1 = z, which is the desired interaction vector. In all the foregoing we used the normalized contrast matrices; thus the sums of squares are the squares of the elements of z. For hand computation, one might prefer using the unnormalized contrast matrices, since their elements are integers. But then we need a vector of divisors; it is obtained by performing the same operations on a column of ones as on y, except that we use the squares of the elements of the contrast matrices. Then the ith sum of squares equals zi^2 divided by the corresponding divisor. This method might be called a “paper strip method” for analysis of variance and is similar to paper strip methods used for operations with polynomials. For examples of this, see Lanczos [3] and Prager [4]. We require 2·t1·t2⋯tn locations for storing y and z plus sup(t1, t2, …, tn) locations for storing a row of Mi. The number of multiplications required is (Π ti)(Σ ti + 1). Acknowledgments: The author wishes to thank Dr. A. E. Brandt for initiating his interest in programming analysis of variance. He wishes to thank Dr. W. H. Carter, Jr., and the referee for helpful comments. References: 1. Good, I. J. The interaction algorithm and practical Fourier analysis. J. Roy. Statist. Soc. [B] 20, 2 (1958), 361-372. 2. Good, I. J. The interaction algorithm and practical Fourier analysis: An addendum. J. Roy. Statist. Soc. [B] 22, 3 (1960), 372-375. 3. Lanczos, C. Applied Analysis. Prentice-Hall, Englewood Cliffs, N.J., 1956. 4. Prager, W. Introduction to Basic Fortran Programming and Numerical Methods. Blaisdell, Waltham, Mass., 1965. 5. Yates, F. The Design and Analysis of Factorial Experiments. Imperial Bureau of Soil Science, Harpenden, England, 1937.

396 citations

••

TL;DR: Algorithms are presented which examine a request in the light of the current allocation of resources and determine whether or not the granting of the request will introduce the possibility of a deadlock.

Abstract: A well-known problem in the design of operating systems is the selection of a resource allocation policy that will prevent deadlock. Deadlock is the situation in which resources have been allocated to various tasks in such a way that none of the tasks can continue. The various published solutions have been somewhat restrictive: either they do not handle the problem in sufficient generality or they suggest policies which will on occasion refuse a request which could have been safely granted. Algorithms are presented which examine a request in the light of the current allocation of resources and determine whether or not the granting of the request will introduce the possibility of a deadlock. Proofs given in the appendixes show that the conditions imposed by the algorithms are both necessary and sufficient to prevent deadlock. The algorithms have been successfully used in the THE system.
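A safety check of this general kind can be sketched as a banker's-style avoidance test (the names and data layout here are hypothetical; the paper's own algorithms and proofs differ in detail):

```python
def is_safe(available, allocation, maximum):
    """Return True if some completion order lets every task finish.

    available:  free units per resource type
    allocation: allocation[t][r] = units of resource r held by task t
    maximum:    maximum[t][r]   = task t's declared maximum claim on r
    """
    work = list(available)
    done = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for t, (alloc, claim) in enumerate(zip(allocation, maximum)):
            need = [c - a for a, c in zip(alloc, claim)]
            if not done[t] and all(n <= w for n, w in zip(need, work)):
                # Task t can run to completion and release its holdings.
                work = [w + a for w, a in zip(work, alloc)]
                done[t] = True
                progressed = True
    return all(done)

def grant(request, task, available, allocation, maximum):
    """Grant the request only if the resulting state is still safe.

    (A full implementation would also check the request against the
    task's declared remaining need.)"""
    trial_avail = [a - r for a, r in zip(available, request)]
    if any(a < 0 for a in trial_avail):
        return False
    trial_alloc = [row[:] for row in allocation]
    trial_alloc[task] = [a + r for a, r in zip(trial_alloc[task], request)]
    return is_safe(trial_avail, trial_alloc, maximum)
```

The request is tentatively applied and kept only if a completion order for all tasks still exists afterwards, which is exactly the "necessary and sufficient" condition the abstract refers to in spirit.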

310 citations

••

TL;DR: A garbage-collection algorithm is described for list-processing systems which operate within very large virtual memories; its object is more the compaction of active storage than the discovery of free storage.

Abstract: In this paper a garbage-collection algorithm for list-processing systems which operate within very large virtual memories is described. The object of the algorithm is more the compaction of active storage than the discovery of free storage. Because free storage is never really exhausted, the decision to garbage collect is not easily made; therefore, various criteria of this decision are discussed.

268 citations

••

TL;DR: An implementation of Dantzig's simplex method is described that is based on the LU decomposition, computed with row interchanges, of the basic matrix.

Abstract: Standard computer implementations of Dantzig's simplex method for linear programming are based upon forming the inverse of the basic matrix and updating the inverse after every step of the method. These implementations have bad round-off error properties. This paper gives the theoretical background for an implementation which is based upon the LU decomposition, computed with row interchanges, of the basic matrix. The implementation is slow, but has good round-off error behavior. The implementation appears as CACM Algorithm 350.

232 citations

••

IBM

TL;DR: Methods of analyzing the control flow and data flow of programs during compilation are applied to transforming the program to improve object time efficiency, and implementation of these and other optimizations in OS/360 FORTRAN H is described.

Abstract: Methods of analyzing the control flow and data flow of programs during compilation are applied to transforming the program to improve object time efficiency. Dominance relationships, indicating which statements are necessarily executed before others, are used to do global common expression elimination and loop identification. Implementation of these and other optimizations in OS/360 FORTRAN H is described.

210 citations

••

TL;DR: It is shown that carefully designed matrix algorithms can lead to enormous savings in the number of page faults occurring when only a small part of the total matrix can be in main memory at one time.

Abstract: Matrix representations and operations are examined for the purpose of minimizing the page faulting occurring in a paged memory system. It is shown that carefully designed matrix algorithms can lead to enormous savings in the number of page faults occurring when only a small part of the total matrix can be in main memory at one time. Examination of addition, multiplication, and inversion algorithms shows that a partitioned matrix representation (i.e. one submatrix or partition per page) in most cases induced fewer page faults than a row-by-row representation. The number of page-pulls required by these matrix manipulation algorithms is also studied as a function of the number of pages of main memory available to the algorithm.
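The effect can be reproduced with a toy fault counter (illustrative numbers only: an 8x8 matrix, 16-element pages, 2 page frames, LRU replacement; the paper's machine model and algorithms are richer):

```python
def count_faults(pages, frames):
    """Count page faults for a page-reference sequence under LRU."""
    memory = []                      # most recently used page at the end
    faults = 0
    for p in pages:
        if p in memory:
            memory.remove(p)         # refresh recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict least recently used page
        memory.append(p)
    return faults

def page_rowmajor(i, j, n, page_elems):
    """Page holding element (i, j) when the matrix is stored row by row."""
    return (i * n + j) // page_elems

def page_blocked(i, j, n, b):
    """Page holding element (i, j) with one b-by-b submatrix per page."""
    return (i // b) * (n // b) + (j // b)

# Illustrative parameters: 8x8 matrix, 16-element pages, 2 page frames.
n, b, page_elems, frames = 8, 4, 16, 2
col_sweep = [(i, j) for j in range(n) for i in range(n)]   # column-order pass
rowmajor_faults = count_faults(
    [page_rowmajor(i, j, n, page_elems) for i, j in col_sweep], frames)
blocked_faults = count_faults(
    [page_blocked(i, j, n, b) for i, j in col_sweep], frames)
```

A column-order sweep of the row-major layout faults on almost every stripe of rows, while the partitioned layout revisits the same block page many times before moving on, which is the saving the abstract describes.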

177 citations

••

TL;DR: A fast method is presented for finding a fundamental set of cycles for an undirected finite graph; in storage the algorithm is similar to that of Gotlieb and Corneil and superior to that of Welch, while in speed it is similar to that of Welch and superior to that of Gotlieb and Corneil.

Abstract: A fast method is presented for finding a fundamental set of cycles for an undirected finite graph. A spanning tree is grown and the vertices examined in turn, unexamined vertices being stored in a pushdown list to await examination. One stage in the process is to take the top element v of the pushdown list and examine it, i.e. inspect all those edges (v, z) of the graph for which z has not yet been examined. If z is already in the tree, a fundamental cycle is added; if not, the edge (v, z) is placed in the tree. There is exactly one such stage for each of the n vertices of the graph. For large n, the store required increases as n^2 and the time as n^g where g depends on the type of graph involved. g is bounded below by 2 and above by 3, and it is shown that both bounds are attained. In terms of storage our algorithm is similar to that of Gotlieb and Corneil and superior to that of Welch; in terms of speed it is similar to that of Welch and superior to that of Gotlieb and Corneil. Tests show our algorithm to be remarkably efficient (g = 2) on random graphs.
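The procedure described can be sketched compactly (a simplified reading for a connected graph on vertices 0..n-1; the bookkeeping of the published algorithm is omitted):

```python
def fundamental_cycles(n, edges):
    """Grow a spanning tree with a pushdown list; each non-tree edge
    encountered closes exactly one fundamental cycle."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def chain(v):                    # path from v up to the tree root
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    parent = {0: None}
    stack = [0]                      # unexamined vertices await here
    examined = set()
    cycles = []
    while stack:
        v = stack.pop()
        examined.add(v)
        for z in adj[v]:
            if z in examined:
                continue
            if z in parent:          # non-tree edge: closes a cycle
                pv, pz = chain(v), chain(z)
                sv = set(pv)
                i = next(k for k, w in enumerate(pz) if w in sv)
                cycles.append(pv[:pv.index(pz[i]) + 1] + pz[:i][::-1])
            else:                    # tree edge: grow the spanning tree
                parent[z] = v
                stack.append(z)
    return cycles
```

For a connected graph with m edges, this yields m - n + 1 cycles, the size of any fundamental set.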

••

TL;DR: A high level programming language for large, complex associative structures has been designed and implemented using a hash-coding technique and the discussion includes a comparison with other work and examples of applications.

Abstract: A high level programming language for large, complex associative structures has been designed and implemented. The underlying data structure has been implemented using a hash-coding technique. The discussion includes a comparison with other work and examples of applications of the language.
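The hash-coded associative idea can be illustrated with a toy store of (attribute, object, value) triples indexed on every component (a hypothetical API; the language described in the paper offers far more):

```python
class TripleStore:
    """A minimal associative store over (attribute, object, value)
    triples, hash-indexed on each component position."""

    def __init__(self):
        self.triples = set()
        self.index = {}              # (position, key) -> set of triples

    def add(self, a, o, v):
        t = (a, o, v)
        self.triples.add(t)
        for pos, key in enumerate(t):
            self.index.setdefault((pos, key), set()).add(t)

    def query(self, a=None, o=None, v=None):
        """Return the triples matching every non-None component."""
        pattern = (a, o, v)
        candidate_sets = [self.index.get((pos, key), set())
                          for pos, key in enumerate(pattern)
                          if key is not None]
        if not candidate_sets:
            return set(self.triples)
        return set.intersection(*candidate_sets)
```

Each partially specified query resolves by intersecting the hash buckets of its bound components, so lookup cost depends on bucket sizes rather than on the whole structure.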

••

IBM

TL;DR: A formalization of relationships between space-sharing, program behavior, and processor efficiency in computer systems is presented to illustrate a possible analytic approach to the investigation of the problems of space-sharing.

Abstract: A formalization of relationships between space-sharing, program behavior, and processor efficiency in computer systems is presented. Concepts of value and cost of space allocation per task are defined and then value and cost are combined to develop a single parameter termed value per unit cost. The intent is to illustrate a possible analytic approach to the investigation of the problems of space-sharing and to demonstrate the method on sample problems.

••

Bell Labs

TL;DR: The main idea is to interleave compositions of x and n - x objects and resort to a lexicographic generation of compositions.

Abstract: procedure Ising (n, x, t, S); integer n, x, t; integer array S; comment Ising generates n-sequences (S1, …, Sn) of zeros and ones where x = Σ(i=1..n) Si and t = Σ(i=1..n-1) |S(i+1) - Si| are given. The main idea is to interleave compositions of x and n - x objects and resort to a lexicographic generation of compositions. We call these sequences Ising configurations since we believe they first appeared in the study of the so-called Ising problem (see Hill [1], Ising [2]). The number R(n, x, t) of distinct configurations with fixed n, x, t is well known [1, 2]:

••

TL;DR: The real procedure gauss computes the area under the left-hand portion of the normal curve; National Bureau of Standards formulas 26.6.4, 26.6.5, and 26.6.8 are used for computation of the statistic, and 26.6.15 for the approximation.

Abstract: The real procedure gauss computes the area under the left-hand portion of the normal curve. Algorithm 209 [3] may be used for this purpose. If f < 0 or if df1 < 1 or if df2 < 1 then exit to the label error occurs. National Bureau of Standards formulas 26.6.4, 26.6.5, and 26.6.8 are used for computation of the statistic, and 26.6.15 is used for the approximation [2]. Thanks to Mary E. Rafter for extensive testing of this procedure and to the referee for a number of suggestions. begin if df1 < 1 ∨ df2 < 1 ∨ f < 0.0 then go to error; if f = 0.0 then prob := 1.0 else begin real f1, f2, x, ft, vp; f1 := df1; f2 := df2; ft := 0.0; x := f2/(f2 + f1 × f); vp := f1 + f2 - 2.0; if 2 × (df1 ÷ 2) = df1 ∧ df1 ≤ maxn then begin real xx; xx := 1.0 - x; for f1 := f1 - 2.0 step -2.0 until 1.0 do begin vp := vp - 2.0; ft := xx × vp/f1 × (1.0 + ft) end; ft := x ↑ (0.5 × f2) × (1.0 + ft) end else if 2 × (df2 ÷ 2) = df2 ∧ df2 ≤ maxn then begin for f2 := f2 - 2.0 step -2.0 until 1.0 do begin vp := vp - 2.0; ft := x × vp/f2 × (1.0 + ft) end; ft := 1.0 - (1.0 - x) ↑ (0.5 × f1) × (1.0 + ft) end else if df1 + df2 ≤ maxn then begin real theta, sth, cth, sts, cts, a, b, xi, gamma; theta := arctan(sqrt(f1 × f/f2)); sth := sin(theta); cth := cos(theta); sts := sth ↑ 2; cts := cth ↑ 2; a := b := 0.0; if df2 > 1 then begin for f2 := f2 - 2.0 step -2.0 until 2.0 do a := cts × (f2 - 1.0)/f2 × (1.0 + a); a := sth × cth × (1.0 + a) end; a := theta + a; if df1 > 1 then begin for f1 := f1 - 2.0 step -2.0 until 2.0 do begin vp := vp - 2.0; b := sts × vp/f1 × (1.0 + b)

••

TL;DR: An algorithm and coding technique is presented for quick evaluation of the Lehmer pseudo-random number generator modulo 2 ** 31 - 1, a prime Mersenne number, on a p-bit (greater than 31) computer.

Abstract: An algorithm and coding technique is presented for quick evaluation of the Lehmer pseudo-random number generator modulo 2 ** 31 - 1, a prime Mersenne number which produces 2 ** 31 - 2 numbers, on a p-bit (greater than 31) computer. The computation method is extendible to limited problems in modular arithmetic. Prime factorization for 2 ** 61 - 2 and a primitive root for 2 ** 61 - 1, the next largest prime Mersenne number, are given for possible construction of a pseudo-random number generator of increased cycle length.
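The central trick, evaluating a·x mod (2^31 - 1) without a division by exploiting 2^31 ≡ 1 (mod 2^31 - 1), can be sketched in a few lines. The multiplier 16807 = 7^5 is a common primitive-root choice used here for illustration; the paper's coding is at machine level:

```python
M = 2**31 - 1   # prime Mersenne modulus
A = 16807       # 7**5, a primitive root of M (a common choice)

def next_lehmer(x):
    """One step of x := A*x mod M without a division: since
    2**31 ≡ 1 (mod M), the high and low 31-bit halves of the
    product can simply be added, with one conditional subtract."""
    t = A * x
    t = (t & M) + (t >> 31)   # fold the product around the modulus
    if t >= M:
        t -= M
    return t
```

Starting from seed 1 this produces the familiar sequence 16807, 282475249, 1622650073, …, and cycles through all 2^31 - 2 nonzero residues.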

••

IBM

TL;DR: The running time of programs in a paging machine generally increases as the store in which programs are constrained to run decreases, but experiments have revealed cases in which the reverse is true: a decrease in the size of the store is accompanied by a decrease in running time.

Abstract: The running time of programs in a paging machine generally increases as the store in which programs are constrained to run decreases. Experiments, however, have revealed cases in which the reverse is true: a decrease in the size of the store is accompanied by a decrease in running time. An informal discussion of the anomalous behavior is given, and for the case of the FIFO replacement algorithm a formal treatment is presented.
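The anomaly under FIFO is easy to reproduce; the reference string below is the classic textbook illustration of it, not necessarily the paper's own data:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()     # evict the page resident longest
            memory.append(page)
    return faults

# Classic reference string exhibiting the anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

With 3 frames this string faults 9 times, with 4 frames 10 times: more store, more faults.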

••

TL;DR: A clear and useful separation of structural and behavioral model description is obtained, a reduction of manual tasks in converting Boolean logic into a structural model, the elimination of manual processes in achieving exclusive simulation of activity, an event-scheduling technique which does not deteriorate in economy as the event queue grows in length, and a simulation procedure which deals effectively with any mixture of serial and simultaneous activities.

Abstract: A technique for simulating the detailed logic networks of large and active digital systems is described. Essential objectives sought are improved ease and economy in model generation, economy in execution time and space, and a facility for handling simultaneous activities. The main results obtained are a clear and useful separation of structural and behavioral model description, a reduction of manual tasks in converting Boolean logic into a structural model, the elimination of manual processes in achieving exclusive simulation of activity, an event-scheduling technique which does not deteriorate in economy as the event queue grows in length, and a simulation procedure which deals effectively with any mixture of serial and simultaneous activities. The passage of time is simulated in a precise, quantitative fashion, and systems to be simulated may be combinations of synchronous and asynchronous logic. Certain aspects of the techniques described may be used for the simulation of network structures other than digital networks.
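One way to obtain an event-scheduling cost that does not deteriorate as the queue grows is a heap-ordered event list, sketched below. The paper's technique is described only abstractly here, so this illustrates the stated property rather than the paper's method:

```python
import heapq

class EventQueue:
    """Event list kept as a binary heap: scheduling and removal cost
    grow as log n rather than linearly with queue length."""

    def __init__(self):
        self.heap = []
        self.seq = 0     # tie-breaker keeps simultaneous events in FIFO order

    def schedule(self, time, action):
        heapq.heappush(self.heap, (time, self.seq, action))
        self.seq += 1

    def run(self):
        """Drain the queue in simulated-time order, returning the trace."""
        trace = []
        while self.heap:
            time, _, action = heapq.heappop(self.heap)
            trace.append((time, action))
        return trace
```

Simultaneous activities (equal times) are handled deterministically by the sequence counter, echoing the requirement that serial and simultaneous activities mix cleanly.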

••

IBM

TL;DR: The most striking result is the apparently general rule that rounding up requests for storage, to reduce the number of different sizes of blocks coexisting in storage, causes more loss of storage by increased internal fragmentation than is saved by decreased external fragmentation.

Abstract: The main purpose of this paper is the presentation of some of the results of a series of simulation experiments investigating the phenomenon of storage fragmentation. Two different types of storage fragmentation are distinguished: (1) external fragmentation, namely the loss in storage utilization caused by the inability to make use of all available storage after it has been fragmented into a large number of separate blocks; and (2) internal fragmentation, the loss of utilization caused by rounding up a request for storage, rather than allocating only the exact number of words required. The most striking result is the apparently general rule that rounding up requests for storage, to reduce the number of different sizes of blocks coexisting in storage, causes more loss of storage by increased internal fragmentation than is saved by decreased external fragmentation. Described also are a method of segment allocation and an accompanying technique for segment addressing which take advantage of the above result. Evidence is presented of possible advantages of the method over conventional paging techniques.
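The internal-fragmentation half of the trade-off is straightforward to quantify; the sketch below measures the words lost when requests are rounded up to powers of two (an illustrative rounding rule, not the paper's allocation policy, and external fragmentation would need a full allocator simulation):

```python
def next_pow2(r):
    """Smallest power of two >= r (the rounding rule used for illustration)."""
    p = 1
    while p < r:
        p *= 2
    return p

def internal_loss(requests, round_up):
    """Total words lost to internal fragmentation when every request
    is rounded up by round_up before allocation."""
    return sum(round_up(r) - r for r in requests)
```

For requests of 5, 9, and 17 words the rounding wastes 3 + 7 + 15 = 25 words, and the experiments' striking result is that such losses typically exceed what the rounding saves externally.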

••

TL;DR: The algorithm presented causes the elimination of hidden lines in the representation of a perspective view of concave and convex plane-faced objects on the picture plane; it takes advantage of a reduced number of concave points and automatically recognizes if only one object with no concave points is considered.

Abstract: The algorithm presented causes the elimination of hidden lines in the representation of a perspective view of concave and convex plane-faced objects on the picture plane. All the edges of the objects are considered sequentially, and all planes which hide every point of an edge are found. The computing time increases roughly as the square of the number of edges. The algorithm takes advantage of a reduced number of concave points and automatically recognizes if only one object with no concave points is considered. In this last case, the result is obtained in a much simpler way.

••

TL;DR: A practical method for constructing LR(k) processors is developed; it is based on the original method described by Knuth but decreases both the effort required to construct the processor and the size of the processor produced. Using this procedure, an LR(1) parser for ALGOL has been obtained.

Abstract: A practical method for constructing LR(k) processors is developed. These processors are capable of recognizing and parsing an input during a single no-backup scan in a number of steps equal to the length of the input plus the number of steps in its derivation. The technique presented here is based on the original method described by Knuth, but decreases both the effort required to construct the processor and the size of the processor produced. This procedure involves partitioning the given grammar into a number of smaller parts. If an LR(k) processor can be constructed for each part (using Knuth's algorithm) and if certain conditions relating these individual processors are satisfied, then an LR(k) processor for the entire grammar can be constructed from them. Using this procedure, an LR(1) parser for ALGOL has been obtained.

••

TL;DR: Methods are presented for increasing the efficiency of the object code produced by first factoring the expressions, i.e. finding a set of subexpressions each of which occurs in two or more other expressions or subexpressions.

Abstract: Given a set of expressions which are to be compiled, methods are presented for increasing the efficiency of the object code produced by first factoring the expressions, i.e. finding a set of subexpressions each of which occurs in two or more other expressions or subexpressions. Once all the factors have been ascertained, a sequencing procedure is applied which orders the factors and expressions such that all information is computed in the correct sequence and factors need be retained in memory a minimal amount of time. An assignment algorithm is then executed in order to minimize the total number of temporary storage cells required to hold the results of evaluating the factors. In order to make these techniques computationally feasible, heuristic procedures are applied, and hence global optimal results are not necessarily generated. The factorization algorithms are also applicable to the problem of factoring Boolean switching expressions and of factoring polynomials encountered in symbol manipulating systems.
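The factoring step, finding subexpressions that occur in two or more expressions, can be sketched with Python's own parser as a stand-in grammar (a toy illustration; the paper's algorithms also order the factors and assign temporaries, and require Python 3.9+ here for ast.unparse):

```python
import ast
from collections import Counter

def common_factors(exprs):
    """Count every nontrivial subexpression across the given expression
    strings; any subtree occurring twice or more is a candidate factor."""
    counts = Counter()
    for src in exprs:
        for node in ast.walk(ast.parse(src, mode="eval")):
            if isinstance(node, ast.BinOp):       # nontrivial subtree
                counts[ast.unparse(node)] += 1
    return [s for s, c in counts.items() if c >= 2]
```

A factor found this way would be computed once into a temporary and substituted back, which is where the sequencing and storage-assignment phases of the paper take over.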

••

TL;DR: A discussion is given of alterations that were made to a typical university operating system to record the results of programming exercises in three different languages, including assembly language.

Abstract: A discussion is given of alterations that were made to a typical university operating system to record the results of programming exercises in three different languages, including assembly language. In this computer-controlled grading scheme provision is made for testing with programmer-supplied data and for final runs with system-supplied data. Exercises run under the scheme may be mixed with other programs, and no special recognition of exercises by the operators is necessary.

••

TL;DR: The Swym system permits a list to be chained, compact, or any combination of the two; the system garbage collector attempts to make all lists compact; it relocates and rearranges all of list storage using temporary storage.

Abstract: Compact lists are stored sequentially in memory, rather than chained with pointers. Since this is not always convenient, the Swym system permits a list to be chained, compact, or any combination of the two. A description is given of that list representation and the operators implemented (most are similar to those of LISP 1.5). The system garbage collector attempts to make all lists compact; it relocates and rearranges all of list storage using temporary storage. This unique list-compacting garbage collection algorithm is presented in detail. Several classes of the macros used to implement the system are described. Finally, consideration is given to those design factors essential to the success of a plex processing system implementation.

••

TL;DR: Generalized techniques are developed whose use can simplify the solution of problems relating to contour maps and have been applied to the problem of locating the ground track of an aircraft from elevation readings obtained during a flight.

Abstract: Generalized techniques are developed whose use can simplify the solution of problems relating to contour maps. One of these techniques makes use of the topological properties of contour maps. The topology is represented by a graphical structure in which adjacent contour lines appear as connected nodes. Another generalized technique consists of utilizing geometrical properties to determine the characteristics of straight lines drawn on the contour map. Both of these techniques have been applied to the problem of locating the ground track of an aircraft from elevation readings obtained during a flight.

••

IBM

TL;DR: An affirmative partial answer is provided to the question of whether it is possible to program parallel-processor computing systems to efficiently decrease execution time for useful problems and it is shown that, with proper programming, solution time when NP processors are applied approaches 1/NP times the solution time for a single processor, while improper programming can actually lead to an increase of solution time with the number of processors.

Abstract: An affirmative partial answer is provided to the question of whether it is possible to program parallel-processor computing systems to efficiently decrease execution time for useful problems. Parallel-processor systems are multiprocessor systems in which several of the processors can simultaneously execute separate tasks of a single job, thus cooperating to decrease the solution time of a computational problem. The processors have independent instruction counters, meaning that each processor executes its own task program relatively independently of the other processors. Communication between cooperating processors is by means of data in storage shared by all processors. A program for the determination of the distribution of current in an electrical network was written for a parallel-processor computing system, and execution of this program was simulated. The data gathered from simulation runs demonstrate the efficient solution of this problem, typical of a large class of important problems. It is shown that, with proper programming, solution time when NP processors are applied approaches 1/NP times the solution time for a single processor, while improper programming can actually lead to an increase of solution time with the number of processors. Storage interference and other measures of performance are discussed. Stability of the method of solution was also investigated.

••

TL;DR: Some methods for contour mapping by means of a digital plotter are discussed, and a new method is presented that is simple enough to be implemented by programs with a rather small number of instructions.

Abstract: Some methods for contour mapping by means of a digital plotter are discussed, and a new method is presented that is simple enough to be implemented by programs with a rather small number of instructions (about 120 FORTRAN IV instructions are required). Comparisons with some methods proposed by other authors are also performed. A FORTRAN IV program implementing the proposed method is available at the Istituto di Elettrotecnica ed Elettronica, Politecnico di Milano.

••

IBM

TL;DR: A description is given of how a tree representing the evaluation of an arithmetic expression can be drawn in such a way that the number of accumulators needed for the computation can be represented in a straightforward manner.

Abstract: A description is given of how a tree representing the evaluation of an arithmetic expression can be drawn in such a way that the number of accumulators needed for the computation can be represented in a straightforward manner. This representation reduces the choice of the best order of computation to a specific problem under the theory of graphs. An algorithm to solve this problem is presented.
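The accumulator count such a tree exposes can be computed by the classical labeling rule, shown here as a sketch; the paper's graph-theoretic formulation and its ordering algorithm go further:

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str
    left: "Node | None" = None
    right: "Node | None" = None

def accumulators_needed(node):
    """Minimum accumulators to evaluate a binary expression tree:
    a leaf needs one; an interior node needs the larger child label
    when the labels differ, otherwise one more than either."""
    if node.left is None:            # a leaf operand
        return 1
    l = accumulators_needed(node.left)
    r = accumulators_needed(node.right)
    return max(l, r) if l != r else l + 1
```

Evaluating the subtree with the larger label first is what lets the smaller-label subtree fit into the remaining accumulators, which is the "best order of computation" the abstract refers to.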

••

TL;DR: A new algorithm is presented for obtaining the linear precedence functions when given the precedence matrix; this algorithm is shown to possess several computational advantages.

Abstract: The precedence relations of a precedence grammar can be precisely described by a two-dimensional precedence matrix. Often the information in the matrix can be represented more concisely by a pair of vectors, called linear precedence functions. A new algorithm is presented for obtaining the linear precedence functions when given the precedence matrix; this algorithm is shown to possess several computational advantages.
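The flavor of such a construction can be sketched via the well-known longest-path formulation (an illustration of the problem, not the paper's new algorithm): each symbol gets an f-node and a g-node, '=' merges the pair, and the functions are read off as heights in the resulting graph.

```python
def precedence_functions(symbols, rel):
    """Derive linear precedence functions f, g from a precedence
    matrix rel[(a, b)] in {'<', '=', '>'}; pairs absent from rel are
    unrelated. Returns (f, g), or None if no such functions exist."""
    # One node per f-symbol and per g-symbol; '=' merges the two nodes.
    group = {('f', s): frozenset({('f', s)}) for s in symbols}
    group.update({('g', s): frozenset({('g', s)}) for s in symbols})
    for (a, b), r in rel.items():
        if r == '=':
            merged = group[('f', a)] | group[('g', b)]
            for member in merged:
                group[member] = merged
    # Edge u -> v means value(u) must exceed value(v).
    succ = {}
    for (a, b), r in rel.items():
        if r == '<':
            succ.setdefault(group[('g', b)], set()).add(group[('f', a)])
        elif r == '>':
            succ.setdefault(group[('f', a)], set()).add(group[('g', b)])

    memo, onpath = {}, set()

    def height(node):
        if node in memo:
            return memo[node]
        if node in onpath:           # a cycle: the matrix admits no
            raise ValueError         # linear precedence functions
        onpath.add(node)
        h = 1 + max((height(s) for s in succ.get(node, ())), default=0)
        onpath.discard(node)
        memo[node] = h
        return h

    try:
        f = {s: height(group[('f', s)]) for s in symbols}
        g = {s: height(group[('g', s)]) for s in symbols}
    except ValueError:
        return None
    return f, g
```

When the graph is acyclic the heights satisfy every relation of the matrix; a cycle certifies that no pair of vectors can represent it.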