
Showing papers in "Communications of The ACM in 1966"


Journal ArticleDOI
TL;DR: A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.
Abstract: ELIZA is a program operating within the MAC time-sharing system of MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which ELIZA is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) generation of responses in the absence of key words, and (5) the provision of an editing capability for ELIZA “scripts”. A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.
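
The keyword/decomposition/reassembly cycle described above is easy to sketch. The following is a minimal Python illustration with an invented two-keyword script; it is not Weizenbaum's actual script notation, and the rule format here is an assumption made for brevity.

```python
import re
import random

# Hypothetical, minimal ELIZA-style script: a keyword triggers a
# decomposition rule (a pattern splitting the input), and a reassembly
# rule builds the reply from the captured fragments. The script format
# below is invented for illustration only.
SCRIPT = {
    "mother": [
        (re.compile(r"(.*)my mother(.*)", re.I),
         ["Tell me more about your family.",
          "Why do you say your mother{1}?"]),
    ],
    "i am": [
        (re.compile(r"(.*)i am (.*)", re.I),
         ["How long have you been {1}?",
          "Why do you believe you are {1}?"]),
    ],
}

def respond(sentence: str) -> str:
    for keyword, rules in SCRIPT.items():
        if keyword in sentence.lower():            # (1) keyword identification
            for pattern, reassemblies in rules:
                m = pattern.match(sentence)        # (2)-(3) decomposition
                if m:
                    reply = random.choice(reassemblies)
                    return reply.format(*m.groups())  # reassembly
    return "Please go on."                         # (4) no-keyword fallback

print(respond("I am unhappy"))   # e.g. "How long have you been unhappy?"
```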

2,873 citations


Journal ArticleDOI
TL;DR: This paper is an introduction to SIMULA, a programming language designed to provide a systems analyst with unified concepts which facilitate the concise description of discrete event systems.
Abstract: This paper is an introduction to SIMULA, a programming language designed to provide a systems analyst with unified concepts which facilitate the concise description of discrete event systems. A system description also serves as a source language simulation program. SIMULA is an extension of ALGOL 60 in which the most important new concept is that of quasi-parallel processing.
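
Quasi-parallel processing is essentially cooperative coroutine scheduling driven by simulated time. A rough sketch using Python generators follows; the process bodies, event-list discipline and names are illustrative stand-ins, not SIMULA syntax.

```python
import heapq

# Each "process" yields the simulated delay until its next action; a
# heap plays the role of SIMULA's sequencing set of scheduled events.
def customer(name, arrival, service):
    yield arrival                    # suspend until the arrival event
    print(f"{name} arrives")
    yield service                    # hold for the service period
    print(f"{name} departs")

def simulate(*procs):
    events = []
    for n, p in enumerate(procs):
        heapq.heappush(events, (next(p), n, p))   # schedule first event
    while events:
        clock, n, p = heapq.heappop(events)
        print(f"t = {clock}:", end=" ")
        try:
            delay = next(p)          # resume the process; it runs to its next yield
            heapq.heappush(events, (clock + delay, n, p))
        except StopIteration:
            pass                     # the process has terminated

simulate(customer("A", 0.0, 2.5), customer("B", 1.0, 0.5))
```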

875 citations


Journal ArticleDOI
TL;DR: In the first part of the paper, flow diagrams are introduced to represent, inter alia, mappings of a set into itself; although not every diagram is decomposable into a finite number of given base diagrams, this becomes true at a semantical level, due to a suitable extension of the given set and of the basic mappings defined in it.
Abstract: In the first part of the paper, flow diagrams are introduced to represent, inter alia, mappings of a set into itself. Although not every diagram is decomposable into a finite number of given base diagrams, this becomes true at a semantical level due to a suitable extension of the given set and of the basic mappings defined in it. Two normalization methods of flow diagrams are given. The first has three base diagrams; the second, only two. In the second part of the paper, the second method is applied to the theory of Turing machines. With every Turing machine provided with a two-way half-tape, there is associated a similar machine, doing essentially the same job, but working on a tape obtained from the first one by interspersing alternate blank squares. The new machine belongs to the family, elsewhere introduced, generated by composition and iteration from the two machines X and R. That family is a proper subfamily of the whole family of Turing machines.
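
The normalization result is the ancestor of the "structured program theorem": any flow diagram can be simulated by a single loop plus an auxiliary state variable recording which box comes next. A sketch of that encoding, with an invented three-box diagram:

```python
# One loop and a state variable simulate an arbitrary flow diagram.
# The boxes below (a test and an operation) are invented for
# illustration; the paper's construction is the general version.
def run_flowchart(x):
    state = "start"
    while state != "halt":           # the single remaining loop
        if state == "start":         # box 1: a test
            state = "decrement" if x > 0 else "halt"
        elif state == "decrement":   # box 2: an operation
            x -= 1
            state = "start"          # the arrow back to the test
    return x

print(run_flowchart(3))  # 0
```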

729 citations


Journal ArticleDOI
TL;DR: The semantics are defined for a number of meta-instructions which perform operations essential to the writing of programs in multiprogrammed computer systems.
Abstract: The semantics are defined for a number of meta-instructions which perform operations essential to the writing of programs in multiprogrammed computer systems. These meta-instructions relate to parallel processing, protecting of separate computations, program debugging, and the sharing among users of memory segments and other computing objects, the names of which are hierarchically structured. The language sophistication contemplated is midway between an assembly language and an advanced algebraic language.
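
The parallel-processing flavor of such meta-instructions can be approximated with modern threads. This is a rough analogue only; the paper's actual primitives and their protection semantics are richer than the sketch below, whose names and structure are assumptions.

```python
import threading

# Fork/join-style sketch: each "fork" starts a parallel computation,
# "join" waits for the forked paths to terminate, and a lock stands in
# for the protection of a shared object.
results = {}
lock = threading.Lock()                # protecting a shared segment

def task(name, value):
    with lock:                         # guarded access to shared storage
        results[name] = value * value

threads = [threading.Thread(target=task, args=(f"t{i}", i)) for i in range(4)]
for t in threads:
    t.start()                          # "fork": begin a parallel path
for t in threads:
    t.join()                           # "join": wait for every path to quit

print(results)   # squares of 0..3, e.g. {'t0': 0, 't1': 1, 't2': 4, 't3': 9}
```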

720 citations


Journal ArticleDOI
P. J. Landin
TL;DR: A family of unimplemented computing languages is described that is intended to span differences of application area by a unified framework that dictates the rules about the uses of user-coined names, and the conventions about characterizing functional relationships.
Abstract: A family of unimplemented computing languages is described that is intended to span differences of application area by a unified framework. This framework dictates the rules about the uses of user-coined names, and the conventions about characterizing functional relationships. Within this framework the design of a specific language splits into two independent parts. One is the choice of written appearances of programs (or more generally, their physical representation). The other is the choice of the abstract entities (such as numbers, character-strings, lists of them, functional relations among them) that can be referred to in the language. The system is biased towards “expressions” rather than “statements.” It includes a nonprocedural (purely functional) subsystem that aims to expand the class of users' needs that can be met by a single print-instruction, without sacrificing the important properties that make conventional right-hand-side expressions easy to construct and understand.
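
The bias toward expressions can be shown by contrast. Python serves below as neutral notation, since the paper's languages are unimplemented and their syntax is not reproduced here: the second version names intermediate values by function application rather than by assignment, in the spirit of a purely functional subsystem.

```python
# Statement-oriented: the result is built up by successive assignments.
def hypotenuse_statements(a, b):
    s = a * a
    t = b * b
    u = s + t
    return u ** 0.5

# Expression-oriented: one expression, with "where"-like local naming
# done by applying a function instead of assigning variables.
hypotenuse_expression = lambda a, b: (lambda s, t: (s + t) ** 0.5)(a * a, b * b)

print(hypotenuse_statements(3, 4), hypotenuse_expression(3, 4))  # 5.0 5.0
```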

637 citations


Journal ArticleDOI
TL;DR: In this section the algorithmic language EULER is described first informally and then formally by its syntax and semantics, creating a language which is simpler and yet more flexible than ALGOL 60.
Abstract: In this section the algorithmic language EULER is described first informally and then formally by its syntax and semantics. An attempt has been made to generalize and extend some of the concepts of ALGOL, thus creating a language which is simpler and yet more flexible than ALGOL 60. A second objective in developing this language was to show that a useful programming language which can be processed with reasonable efficiency can be defined in rigorous formality.

233 citations


Journal ArticleDOI
TL;DR: The main changes were: (1) verbal improvements and clarifications, many of which were kindly suggested by recipients of the original draft; (2) additional or altered language features, in particular the replacement of tree structures by records as proposed by the second author.
Abstract: A programming language similar in many respects to ALGOL 60, but incorporating a large number of improvements based on six years' experience with that language, is described in detail. Part I consists of an introduction to the new language and a summary of the changes made to ALGOL 60, together with a discussion of the motives behind the revisions. Part II is a rigorous definition of the proposed language. Part III describes a set of proposed standard procedures to be used with the language, including facilities for input/output. A preliminary version of this report was originally drafted by the first author on an invitation made by IFIP Working Group 2.1 at its meeting in May 1965 at Princeton. It incorporated a number of opinions and suggestions made at that meeting and in its subcommittees, and it was distributed to members of the Working Group as "Proposal for a Report on a Successor of ALGOL 60". It was felt that the report did not represent a sufficient advance on ALGOL 60, either in its manner of language definition or in the content of the language itself. The draft therefore no longer had the status of an official Working Document of the Group, and by kind permission of the Chairman it was released for wider publication. At that time the authors agreed to collaborate on revising and supplementing the draft. The main changes were: (1) verbal improvements and clarifications, many of which were kindly suggested by recipients of the original draft; (2) additional or altered language features, in particular the replacement of tree structures by records as proposed by the second author; (3) changes which appeared desirable in the course of designing a simple and efficient implementation of the language; (4) addition of introductory and explanatory material, and further suggestions for standard procedures, in particular on input/output; (5) use of a convenient notational facility to abbreviate the description of syntax, as suggested by van Wijngaarden in "Orthogonal Design and Description of a Formal Language" (MR 76, Mathematical Centre, Amsterdam, Oct. 1965). The incorporation of the revisions is not intended to reinstate the report as a candidate for consideration as a successor to ALGOL 60. However, it is believed that its publication will serve three purposes: (1) To present to a wider public a view of the general direction in which the development of ALGOL is proceeding; (2) To provide an opportunity …
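
The "records instead of trees" change can be sketched with reference-valued fields: a record class whose fields refer to other records describes linked structures directly. The class and field names below are invented for illustration and are not the proposal's record syntax.

```python
from dataclasses import dataclass
from typing import Optional

# A record with two reference fields; linked structures (here a family
# tree) are built by letting fields point at other records.
@dataclass
class Person:
    name: str
    father: Optional["Person"] = None
    mother: Optional["Person"] = None

jane = Person("Jane")
john = Person("John")
kid = Person("Kim", father=john, mother=jane)
print(kid.father.name)   # John
```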

208 citations


Journal ArticleDOI
TL;DR: Professor Dijkstra's ingenious construction falls short on a closely related problem, and Hyman's "simplification" for the case of two computers hardly works at all; this letter aims to save people the trouble they would encounter in using either method.
Abstract: Professor Dijkstra's ingenious construction [Solution of a Problem in Concurrent Programming Control, Comm. ACM 8 (Sept. 1965), 569] is not quite a solution to a related problem almost identical to the one he posed there, and Mr. Hyman's "simplification" for the case of two computers [Comm. ACM 9 (Jan. 1966), 45] hardly works at all. I hope that by this letter I can save people some of the problems they would encounter if they were to use either of those methods.
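
Why Hyman's shortcut fails can be replayed deterministically. The sketch below follows the usual textbook rendering of Hyman's two-process proposal (variable names are ours) and walks one legal interleaving to its end state; live threads would race nondeterministically, so a fixed replay is clearer.

```python
# Hyman's proposal, per process i:
#   want[i] = True
#   while turn != i:
#       while want[1-i]: pass
#       turn = i
#   ... critical section ...; want[i] = False
# The straight-line code below is one legal interleaving of P0 and P1.
want = [False, False]
turn = 0
in_cs = [False, False]

want[1] = True                       # P1 announces interest
assert turn != 1 and not want[0]     # P1's inner loop falls through

want[0] = True                       # P0 announces interest
assert turn == 0                     # P0's outer test fails at once...
in_cs[0] = True                      # ...so P0 enters the critical section

turn = 1                             # P1 resumes and claims the turn
assert turn == 1                     # P1's outer loop now exits too
in_cs[1] = True                      # P1 also enters

print(in_cs)  # [True, True]: mutual exclusion is violated
```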

185 citations


Journal ArticleDOI
TL;DR: The structure of syntactic descriptive models is described by considering their specific application to bubble chamber pictures, and it is explained how the description generated in this phase can be embedded in a larger “conversation” program.
Abstract: A descriptive scheme for classes of pictures based on labeling techniques using parallel processing algorithms was proposed by the author some years ago. Since then much work has been done in applying this to bubble chamber pictures. The parallel processing simulator, originally written for an IBM 7094 system, has now been rewritten for a CDC 3600 system. This paper describes briefly the structure of syntactic descriptive models by considering their specific application to bubble chamber pictures. How the description generated in this phase can be embedded in a larger “conversation” program is explained by means of a certain specific example that has been worked out. A partial generative grammar for “handwritten” English letters is given, as are also a few computer-generated outputs using this grammar and the parallel processing simulator mentioned earlier.

160 citations


Journal ArticleDOI
D. McIlroy

134 citations


Journal ArticleDOI
B. M. Leavenworth
TL;DR: A translation approach is described which allows one to extend the syntax and semantics of a given high-level base language by the use of a new formalism called a syntax-macro.
Abstract: A translation approach is described which allows one to extend the syntax and semantics of a given high-level base language by the use of a new formalism called a syntax-macro. Syntax-macros define string transformations based on syntactic elements of the base language. Two types of macros are discussed, and examples are given of their use. The conditional generation of macros based on options and alternatives recognized by the scan is also described.
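
In miniature, a syntax-macro is a string transformation keyed to a syntactic pattern of the base language. The macro below, and its notation, are hypothetical and invented for illustration only.

```python
import re

# One macro: "increment X by E" rewrites to "X = X + (E)". The pattern
# stands in for a syntactic element of the base language; the template
# is the string transformation attached to it.
MACROS = [
    (re.compile(r"increment (\w+) by (.+)"), r"\1 = \1 + (\2)"),
]

def expand(source: str) -> str:
    for pattern, template in MACROS:
        source = pattern.sub(template, source)
    return source

print(expand("increment counter by step * 2"))
# counter = counter + (step * 2)
```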

Journal ArticleDOI
TL;DR: The conclusion of the paper is that the packages now available for computer simulation offer features which none of the more general-purpose packages do and that analysis of strengths and weaknesses of each suggests ways in which both current and future simulation languages and packages can be improved.
Abstract: The purpose of this paper is to present a comparison of some computer simulation languages and of some of the packages by which each is implemented. Some considerations involved in comparing software packages for digital computers are discussed in Part I. The issue is obvious: users of digital computers must choose from available languages or write their own. Substantial costs can occur, particularly in training, implementation and computer time if an inappropriate language is chosen. More and more computer simulation languages are being developed: comparisons and evaluations of existing languages are useful for designers and implementers as well as users.The second part is devoted to computer simulation and simulation languages. The computational characteristics of simulation are discussed with special attention being paid to a distinction between continuous and discrete change models. Part III presents a detailed comparison of six simulation languages and packages: SIMSCRIPT, CLP, CSL, GASP, GPSS and SOL. The characteristics of each are summarized in a series of tables. The implications of this analysis for designers of languages, for users, and for implementers are developed.The conclusion of the paper is that the packages now available for computer simulation offer features which none of the more general-purpose packages do and that analysis of strengths and weaknesses of each suggests ways in which both current and future simulation languages and packages can be improved.

Journal ArticleDOI
TL;DR: A semantic metalanguage has been developed for representing the meanings of statements in a large class of computer languages; an informal discussion of the metalanguage, based on the example of a complete translator for a small language, is presented.
Abstract: A semantic metalanguage has been developed for representing the meanings of statements in a large class of computer languages. This metalanguage has been the basis for construction of an efficient, functioning compiler-compiler. An informal discussion of the metalanguage based on the example of a complete translator for a small language is presented. One of the most significant developments in the study of computer languages has been the formalization of syntax. Besides greatly improving communications between people, formalized syntax has led to new results in the theory and practice of programming. As early as 1960, Irons [7] was able to construct a compiler whose syntax phase was independent of the source language being translated. This work and that of other early contributors such as Brooker and Morris [1] led to speculation that the entire compilation process could be automated. The problem was to develop a single program which could act as a translator for a large class of languages differing from each other in substantial ways. To solve this so-called compiler-compiler problem, one must find appropriate formalizations of the syntax and semantics of computer languages. The formalization of semantics for some language, L, will involve representing the meanings of statements in L in terms of an appropriate metalanguage. The meanings of statements in the metalanguage are assumed to be known (primitive). If L is a computer language, the semantics of L must involve a description of its translator. One example of a semantic metalanguage is the order code of a computer. An order code or assembly language can certainly describe any translation …
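
The core idea, syntax rules with semantic actions attached so that one generic driver can process any language described this way, fits in a few lines. Everything below is invented to illustrate the concept and is not the paper's metalanguage.

```python
import operator

# Semantics attached to each operator of a tiny expression grammar:
#   expr := term { "+" term } ;  term := factor { "*" factor }
OPS = {"+": operator.add, "*": operator.mul}

def evaluate(tokens):
    def term(pos):
        value, pos = int(tokens[pos]), pos + 1
        while pos < len(tokens) and tokens[pos] == "*":
            value, pos = OPS["*"](value, int(tokens[pos + 1])), pos + 2
        return value, pos
    value, pos = term(0)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = term(pos + 1)
        value = OPS["+"](value, rhs)
    return value

print(evaluate("2 + 3 * 4".split()))  # 14
```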

Journal ArticleDOI
Jean E. Sammet
TL;DR: The purpose of this talk is to make a personal plea, backed up by some practical comments, for the use of English or anyone else's natural language as a programming language.
Abstract: The purpose of this talk is to make a personal plea, backed up by some practical comments, for the use of English or anyone else's natural language as a programming language. This seems to be a suitable subject for the conference, since whatever definition of pragmatics is decided upon, it certainly seems to be tied in with the users of any programming language and what the language means to them.

Journal ArticleDOI
TL;DR: In May, 1965, System Development Corporation (SDC) proposed to do some research to study program organization with respect to dynamic program behavior, and suggested that simulation techniques might be used to study the problem of resource allocation in a multiprocessor time-sharing system.
Abstract: In May, 1965, System Development Corporation (SDC) proposed to do some research to study program organization with respect to dynamic program behavior. Further, the proposal suggested that simulation techniques might be used to study the problem of resource allocation in a multiprocessor time-sharing system. Some of the reasons for the proposal related to the prospective utilization of the time-sharing hardware features of the GE and IBM time-sharing computers. At the time, there was considerable interest in investigating the concepts of program segmentation and page turning, both at SDC and in the time-sharing community at large. The concept of fixed-size paging on demand, particularly, raised some questions of practicality. One of the early papers on the subject, by Dennis and Glaser [1], states that the concept of page turning can be either useful or disastrous, depending on the class of information to which it is applied. However, the theory appeared to be both advantageous and elegant, so that the future of time-sharing seemed to be committed to the concept.

Journal ArticleDOI
TL;DR: It is a very great pleasure for me to have been invited to this conference and to speak in this opening session, and I was sure that some historical background probably prompted the choice of the organizers.
Abstract: 1. A Very Pragmatic Introduction. It is a very great pleasure for me to have been invited to this conference and to speak in this opening session. When I accepted the invitation, I was sure that some historical background probably prompted the choice of the organizers, as this conference and its title have some specific relations to the town of Vienna and to the country of Austria, where I come from. Let me explain.

Journal ArticleDOI
TL;DR: The TRAC language was developed as the basis of a software package for the reactive typewriter as well as an extension and generalization to character strings of the programming concept of the “macro.”
Abstract: A description of the TRAC (Text Reckoning And Compiling) language and processing algorithm is given. The TRAC language was developed as the basis of a software package for the reactive typewriter. In the TRAC language, one can write procedures for accepting, naming and storing any character string from the typewriter; for modifying any string in any way; for treating any string at any time as an executable procedure, or as a name, or as text; and for printing out any string. The TRAC language is based upon an extension and generalization to character strings of the programming concept of the “macro.” Through the ability of TRAC to accept and store definitions of procedures, the capabilities of the language can be indefinitely extended. TRAC can handle iterative and recursive procedures, and can deal with character strings, integers and Boolean vector variables.
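
The define/call mechanism at TRAC's heart can be mimicked in a few lines. The `ds`/`cl` names echo TRAC's define-string and call primitives, but the `<1>` placeholder convention and everything else here is an invented illustration, not TRAC's #(...) syntax or processing algorithm.

```python
# Toy store of named strings ("forms") plus a call that substitutes
# arguments into numbered gaps of a stored form.
forms = {}

def ds(name, body):                 # "define string"
    forms[name] = body

def cl(name, *args):                # "call": substitute and return
    body = forms[name]
    for i, arg in enumerate(args, start=1):
        body = body.replace(f"<{i}>", arg)   # fill gap <i> with argument i
    return body

ds("greet", "Hello, <1>! You are <2>.")
print(cl("greet", "Ada", "welcome"))  # Hello, Ada! You are welcome.
```

Because a stored string can itself contain further calls, repeated expansion of this kind is what lets such a language extend itself indefinitely.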

Journal ArticleDOI
TL;DR: A simple extension has been found to the conventional orthogonalization method for inverting nonsingular matrices, which gives the generalized inverse with little extra effort and with no additional storage requirements.
Abstract: The generalized inverse of a matrix is important in analysis because it provides an extension of the concept of an inverse which applies to all matrices. It also has many applications in numerical analysis, but it is not widely used because the existing algorithms are fairly complicated and require considerable storage space. A simple extension has been found to the conventional orthogonalization method for inverting nonsingular matrices, which gives the generalized inverse with little extra effort and with no additional storage requirements. The algorithm gives the generalized inverse for any m × n matrix A, including the special case when m = n and A is nonsingular and the case when m > n and rank(A) = n. In the first case the algorithm gives the ordinary inverse of A. In the second case the algorithm yields the ordinary least squares transformation matrix (A^T A)^{-1} A^T and has the advantage of avoiding the loss of significance which results from forming the product A^T A explicitly.
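
For the full-column-rank case mentioned above, the orthogonalization route can be sketched with a QR factorization: with A = QR and Q orthonormal, A+ = R^{-1} Q^T equals (A^T A)^{-1} A^T without A^T A ever being formed. A NumPy sketch, with np.linalg.qr standing in for the paper's orthogonalization step:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # m > n, rank(A) = n

Q, R = np.linalg.qr(A)              # orthogonalization: A = QR
A_pinv = np.linalg.solve(R, Q.T)    # R^{-1} Q^T, the least squares transform

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```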

Journal ArticleDOI
Kenneth C. Knowlton
TL;DR: Bell Telephone Laboratories' Low-Level Linked List Language (pronounced “L-six”) is a new programming language for list structure manipulations that permits the user to get much closer to machine code in order to write faster-running programs, to use storage more efficiently and to build a wider variety of linked data structures.
Abstract: Bell Telephone Laboratories' Low-Level Linked List Language L6 (pronounced “L-six”) is a new programming language for list structure manipulations. It contains many of the facilities which underlie such list processors as IPL, LISP, COMIT and SNOBOL, but permits the user to get much closer to machine code in order to write faster-running programs, to use storage more efficiently and to build a wider variety of linked data structures.

Journal ArticleDOI
TL;DR: In this article, an extension to ALGOL is proposed for adding new data types and operators to the language, where definitions may occur in any block heading and terminate with the block.
Abstract: An extension to ALGOL is proposed for adding new data types and operators to the language. Definitions may occur in any block heading and terminate with the block. They are an integral part of the program and are not fixed in the language. Even the behavior of existing operators may be redefined. The processing of text containing defined contexts features a "replacement rule" that eliminates unnecessary iterations and temporary storage. Examples of definition sets are given for real and complex matrices, complex numbers, file processing, and list manipulation.
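
The proposal's flavor, letting a new data type carry its own meaning for existing operators, is what operator overloading now does routinely. A sketch using one of the abstract's own example domains, complex numbers (Python has these built in; the hand-rolled class just exposes the mechanism):

```python
# A user-defined type that redefines the existing + and * operators.
class Cx:
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __add__(self, other):
        return Cx(self.re + other.re, self.im + other.im)
    def __mul__(self, other):
        return Cx(self.re * other.re - self.im * other.im,
                  self.re * other.im + self.im * other.re)
    def __repr__(self):
        return f"{self.re}+{self.im}i"

print(Cx(1, 2) + Cx(3, 4))   # 4+6i
print(Cx(1, 2) * Cx(3, 4))   # -5+10i
```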

Journal ArticleDOI
TL;DR: The purpose of this paper is to give precise descriptions of certain probabilistic models for roundoff error, and to describe a series of experiments for testing the validity of these models.
Abstract: In any prolonged computation it is generally assumed that the accumulated effect of roundoff errors is in some sense statistical.The purpose of this paper is to give precise descriptions of certain probabilistic models for roundoff error, and then to describe a series of experiments for testing the validity of these models.It is concluded that the models are in general very good. Discrepancies are both rare and mild.The test techniques can also be used to experiment with various types of special arithmetic.
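
An experiment in this spirit is easy to run today: treat each float32 roundoff as a zero-mean random variable and check that the accumulated error of a long summation stays small and centered. The sample sizes and statistics below are arbitrary choices for illustration, not the paper's actual tests.

```python
import numpy as np

rng = np.random.default_rng(0)
errors = []
for _ in range(200):
    x = rng.uniform(0.0, 1.0, 10_000).astype(np.float32)
    approx = np.sum(x, dtype=np.float32)   # accumulate in single precision
    exact = np.sum(x, dtype=np.float64)    # double-precision reference
    errors.append(float(approx) - exact)

errors = np.array(errors)
print(f"mean error   = {errors.mean():+.3e}")  # near zero if the model holds
print(f"error spread = {errors.std():.3e}")    # the random-walk-like scatter
```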

Journal ArticleDOI
TL;DR: The design of computer simulation experiments for industrial systems is considered, covering variance techniques, multiple ranking procedures, sequential sampling and spectral analysis.
Abstract: The design of computer simulation experiments for industrial systems is considered, with attention to variance techniques, multiple ranking procedures, sequential sampling and spectral analysis.


Journal ArticleDOI
Joel Moses
TL;DR: The elimination procedure as described by Williams has been coded in LISP and FORMAC and used in solving systems of polynomial equations and it is found that the method is very effective in the case of small systems, where it yields all solutions without the need for initial estimates.
Abstract: The elimination procedure as described by Williams has been coded in LISP and FORMAC and used in solving systems of polynomial equations. It is found that the method is very effective in the case of small systems, where it yields all solutions without the need for initial estimates. The method, by itself, appears inappropriate, however, in the solution of large systems of equations due to the explosive growth in the intermediate equations and the hazards which arise when the coefficients are truncated. A comparison is made with difficulties found in other problems in non-numerical mathematics such as symbolic integration and simplification.
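
Elimination of the kind described can be reproduced with resultants: eliminate one variable at a time, then back-solve. SymPy stands in for the LISP/FORMAC code here, and the two-equation system is an invented small example.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + y**2 - 5           # a circle ...
g = x - y - 1                 # ... and a line

r = sp.resultant(f, g, x)     # eliminate x: a polynomial in y alone
print(sp.factor(r))           # 2*(y - 1)*(y + 2)

for y0 in sp.solve(r, y):     # back-substitute each root of the resultant
    for x0 in sp.solve(g.subs(y, y0), x):
        print((x0, y0))       # the two intersection points: (2, 1), (-1, -2)
```

The "explosive growth" the abstract warns about shows up here as the resultant's degree and coefficient size ballooning once the system has more than a handful of variables.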

Journal ArticleDOI
C. M. Reeves
TL;DR: An ALGOL procedure is given for automatically generating formulas for matrix elements arising in the variational solution of the Schrödinger equation for many-electron systems.
Abstract: An ALGOL procedure is given for automatically generating formulas for matrix elements arising in the variational solution of the Schrödinger equation for many-electron systems.

Journal ArticleDOI
George E. Collins
TL;DR: PM is an IBM 7094 program system for formal manipulation of polynomials in any number of variables, with integral coefficients unrestricted in size, which is described and compared with the LISP and SLIP systems.
Abstract: PM is an IBM 7094 program system for formal manipulation of polynomials in any number of variables, with integral coefficients unrestricted in size. Some of the formal operations which can be performed by the system are sums, differences, products, quotients, derivatives, substitutions and greatest common divisors. PM is based on the REFCO III list processing system, which is described and compared with the LISP and SLIP systems. The PM subroutines for arithmetic of large integers are described as constituting an independently useful subsystem. PM is compared with the ALPAK system in several respects, including the choices of canonical forms for polynomials. A new algorithm for polynomial greatest common divisor calculation is mentioned, and examples are included to illustrate its superiority.
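
PM's headline operation, GCDs of polynomials with unrestricted integer coefficients, is a one-liner in a modern system. A SymPy sketch with deliberately large coefficients; SymPy merely stands in for the 7094 system.

```python
import sympy as sp

x = sp.symbols("x")
# Two polynomials sharing the factor (x - 3)*(2*x + 5); the huge
# constant exercises the multiprecision integer arithmetic.
p = sp.Poly((x - 3)**2 * (2*x + 5) * (x**10 + 123456789), x)
q = sp.Poly((x - 3) * (2*x + 5)**3, x)

print(sp.gcd(p, q))   # Poly(2*x**2 - x - 15, x, domain='ZZ')
```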

Journal ArticleDOI
C. V. Ramamoorthy
TL;DR: The concept of the "look-ahead" function is broadened to the anticipation of the next "event" of a specified type, whether it is a functional subroutine, an instruction or an input-output function.
Abstract: Since the computer operation when viewed as the execution of subroutines or instructions or logical subcommands is a discrete sequential process, its action or behavior can be predicted to a degree of certainty when sufficient knowledge is available about the structure of its information flow. Conventionally, the “look-ahead” function implies fetching the next set of instructions and their associated data and preparing ahead of time for their execution. We shall, however, broaden the concept to the anticipation of the next “event” of a specified type, whether it is a functional subroutine, instruction or an input-output function. In some sense, one can consider “looking ahead” as the anticipatory simulation of important events in the execution sequence. By anticipating the most probable events, their requirements can be preplanned and furnished, if possible, without stalling the program, e.g., by alerting I/O units, by providing memory space for the next set of subroutines to be executed, etc. This is particularly important in a multiprogrammed, multiprocessor environment.
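
A toy version of anticipating "the next event of a specified type": tally which event has tended to follow each event, then preplan for the most probable successor. The trace and event names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Frequency table of successors, built from an observed event trace.
history = defaultdict(Counter)
trace = ["fetch", "decode", "io", "fetch", "decode", "exec",
         "fetch", "decode", "exec"]

for prev, nxt in zip(trace, trace[1:]):
    history[prev][nxt] += 1            # count each observed successor

def predict(event):
    followers = history[event]
    return followers.most_common(1)[0][0] if followers else None

print(predict("decode"))   # "exec": the event to preplan resources for
```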

Journal ArticleDOI
TL;DR: A new multiplicative congruential pseudorandom number generator is discussed, in which the modulus is the largest prime within accumulator capacity and the multiplier is a primitive root of that prime.
Abstract: A new multiplicative congruential pseudorandom number generator is discussed, in which the modulus is the largest prime within accumulator capacity and the multiplier is a primitive root of that prime. This generator passes the usual statistical tests, and in addition the least significant bits appear to be as random as the most significant bits, a property which generators having modulus 2^k do not possess.
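
A sketch of the generator class described: x_{n+1} = a * x_n mod m, with m prime and a a primitive root of m. The specific constants below (m = 2^31 - 1, a = 16807 = 7^5) are the later, widely used "minimal standard" pair, chosen here only to make the sketch concrete; the paper's own machine-dependent constants may differ.

```python
M = 2**31 - 1        # a Mersenne prime: the largest prime in a 31-bit word
A = 16807            # 7**5, a primitive root of M

def lehmer(seed, n):
    x = seed
    for _ in range(n):
        x = (A * x) % M          # multiplicative congruential step
        yield x / M              # uniform variate in (0, 1)

print(list(lehmer(42, 3)))
```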

Journal ArticleDOI
TL;DR: Cost functions among five System/360 models are analyzed through examinations of instruction times, program kernels and a "typical" instruction mix; sizable economies of scale are unquestionably present, and the most interesting question is that of centralization vs. decentralization in computing machinery.
Abstract: Cost functions among five System/360 models are analyzed through examinations of instruction times, program kernels and a "typical" instruction mix. Comparisons are made between the data developed here and Grosch's Law, which seems to be applicable to much of the data. Sizable economies of scale are unquestionably present in computing equipment. A most interesting question is that of centralization vs. decentralization in computing machinery. A survey of computer systems would reveal a wide variety of configurations, ranging from complete centralization, in which one large computer does most or all of the work for a firm, to complete decentralization, in which many small computers fill the computation needs. Much disagreement exists concerning this question. Some managers and specialists claim that highly centralized computing facilities are most economical while others claim the opposite. The purpose of this paper is to shed additional light on the topic. The subject of formal computer cost functions has received rather minimal attention in the past. (Some classical work has been done by Knight [1]; also, in a less serious manner, by Adams [2].) The reason is not that economists have been remiss, but that the problem has been complex. The output of computers is rather difficult to measure in any meaningful way (Knight developed an extremely elaborate function to compute the "power" of a computer [1, pp. IV-1 to IV-15]; although the function seems to hold empirical validity for the machines compared, it appears to contain some features which probably make it inapplicable to third-generation computers), and until recently each individual commercially available computer was considerably different in operating characteristics, capabilities and configuration. In addition, comparisons among computers of different manufacturers are exceedingly difficult because not only is the hardware different in kind but the other services that the manufacturers perform vary greatly in both quality and quantity. An increasingly important area of manufacturers' support is programming effort. Some manufacturers supply many elaborate programs to the user while others provide few. The area of software (or programming) support becomes increasingly important as computers become more complex; programs are more complex and thus more difficult for the customer to analyze and maintain. Therefore one cannot easily compare even similar computers manufactured by, say, IBM and CDC. Recently a new era in computing began in which manufacturers market "compatible" lines of computing equipment ranging from …

Journal ArticleDOI
TL;DR: A modification of the rule mask technique is discussed which takes into account both rule frequencies and the relative times for evaluating conditions, which can materially improve object program run time.
Abstract: The rule mask technique is one method of converting limited entry decision tables to computer programs. Recent discussion suggests that in many circumstances it is to be preferred to the technique of constructing networks or trees. A drawback of the technique as hitherto presented is its liability to produce object programs of longer run time than necessary. In this paper a modification of the technique is discussed which takes into account both rule frequencies and the relative times for evaluating conditions. This can materially improve object program run time.
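
The rule mask technique in miniature: evaluate every condition once into a bit vector, then find the rule whose mask matches. The table below is invented; the refinement the paper describes would additionally order conditions by evaluation cost and rules by frequency, which in this sketch amounts to reordering CONDITIONS and RULES.

```python
# A limited entry decision table with two conditions and three rules.
CONDITIONS = [
    lambda order: order["amount"] > 100,     # C1 (bit 0)
    lambda order: order["member"],           # C2 (bit 1)
]

# Each rule: (care-mask, required-values, action). A zero bit in the
# care-mask encodes the "immaterial" (-) entry of the table.
RULES = [
    (0b11, 0b11, "free shipping"),    # C1=Y, C2=Y
    (0b11, 0b01, "discount only"),    # C1=Y, C2=N
    (0b01, 0b00, "standard rate"),    # C1=N, C2=-
]

def decide(order):
    bits = 0
    for i, cond in enumerate(CONDITIONS):    # evaluate the condition vector
        if cond(order):
            bits |= 1 << i
    for care, required, action in RULES:
        if bits & care == required:          # the mask comparison
            return action

print(decide({"amount": 250, "member": False}))  # discount only
```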