
Showing papers in "Communications of The ACM in 1967"


Journal ArticleDOI
TL;DR: Since communications lines are the principal user-processor links, the authors suggested a complex series of protective measures, including terminal and user identification, the use of passwords, disposal of carbon papers and typewriter ribbons, physical security of the terminal, and privacy transformations, which are techniques for coding data.
Abstract: A half-dozen computer users and designers devoted two complete sessions of the Spring Joint Computer Conference in April to their attempts to protect sensitive information in multiple-access computers. Concern over this type of information developed in Congress just a year ago when the Budget Bureau proposed a National Data Center for the consolidation of government statistical work. During a session on security and privacy, a consensus developed among the speakers that this information, whether of a personal or a classified nature, can be protected in the computer, but once it begins travelling along communication lines to switching centers or to remote terminals, it is vulnerable to intrusion. The speakers said the central processor and the files can be protected against invasion by a series of countermeasures, including use of a monitor that guards the entire software; memory protect and privileged instructions; placement of the computer in a secure location; clearances for operating personnel; logging of significant events; access management; and various processing restrictions, such as a ban on copying of complete files. The speakers seemed confident that these measures are within their grasp, although some are still to be implemented. The protection of communications lines, however, seems to be far from solution. It is too simple to tap these lines. In a joint paper, Harold E. Petersen and Rein Turn, of the Rand Corporation, said that you can penetrate communications lines with a $100 tape recorder and a code conversion table. They also said that digital transmission of information provides no more privacy than Morse code, for example.
\"Nevertheless,\" they said, \"some users seem willing to entrust to digital systems vaIuable information that they would not communicate over a telephone.\" According to Petersen and Turn, information can be picked off communications lines by wiretapping, electremag-netic pickup, or the use of special terminals that can intercept information between the user and the processor, modify it, or replace it with other information. Shielding of the lines would help, of course, but this is so expensive that it would be feasible in only a few cases, such as for lines carrying highly classi-fled information. Since communications lines are the principal user-processor links, the authors suggested a complex series of protective measures, including terminal and user identification, the use of passwords, disposal of carbon papers and typewriter ribbons, physical security of the terminal, and privacy transformations, which are techniques for coding data. …

343 citations


Journal ArticleDOI
TL;DR: A new algorithm is presented which offers significant advantages of speed and storage utilization and can be written in the list language with which it is to be used, thus insuring a degree of machine independence.
Abstract: A method for returning registers to the free list is an essential part of any list processing system. In this paper, past solutions of the recovery problem are reviewed and compared. A new algorithm is presented which offers significant advantages of speed and storage utilization. The routine for implementing this algorithm can be written in the list language with which it is to be used, thus insuring a degree of machine independence. Finally, the application of the algorithm to a number of different list structures appearing in the literature is indicated.
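The abstract does not reproduce the new algorithm itself, but the recovery problem it addresses can be sketched with a generic mark-and-sweep pass: trace every cell reachable from the roots, then return the rest to the free list. The cell representation and names below are illustrative, not the paper's method.

```python
# A minimal mark-and-sweep sketch of returning list cells to the free list.
def collect(cells, roots):
    """cells: dict mapping cell id -> list of referenced cell ids.
    roots: ids reachable from the program. Returns the ids to free."""
    marked = set()
    stack = list(roots)
    while stack:                      # mark phase: trace from the roots
        cid = stack.pop()
        if cid not in marked:
            marked.add(cid)
            stack.extend(cells[cid])
    return set(cells) - marked        # sweep phase: unmarked cells are free

heap = {1: [2], 2: [3], 3: [], 4: [5], 5: [4]}   # 4 and 5 form a dead cycle
free = collect(heap, roots=[1])
```

Note that the unreachable cycle {4, 5} is recovered, which a pure reference-counting scheme would miss; speed and storage trade-offs between such schemes are exactly what the paper compares.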

253 citations


Journal ArticleDOI
TL;DR: A modified Newton method for polynomials is discussed and it is shown that under appropriate conditions, two of the variations are cubically convergent.
Abstract: A modified Newton method for polynomials is discussed. It is assumed one has approximations for all the roots of the polynomial. Three variations are described. If the roots are simple, it is shown that under appropriate conditions, two of the variations are cubically convergent. 1. Introduction. Let there be given the polynomial f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 (1), where we assume the roots are distinct. Assume x_1^(0), ..., x_n^(0) are n distinct guesses for the roots of f(x).
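The setting of the abstract, maintaining and refining n distinct guesses for all the roots at once, can be illustrated with the classical Durand-Kerner (Weierstrass) simultaneous iteration. This is a related method, not necessarily one of the paper's three variations.

```python
# Simultaneous refinement of all n root guesses of a polynomial.
# This sketches the Durand-Kerner iteration, a classical all-roots method
# in the spirit of the abstract; the paper's own variations are not shown.
def durand_kerner(coeffs, guesses, iterations=50):
    """coeffs: [a_n, ..., a_0]; guesses: n distinct complex starting values."""
    def f(x):                          # Horner evaluation of the polynomial
        y = 0
        for c in coeffs:
            y = y * x + c
        return y
    a_n = coeffs[0]
    xs = list(guesses)
    n = len(xs)
    for _ in range(iterations):
        for i in range(n):
            denom = a_n                # a_n * product of (x_i - x_j), j != i
            for j in range(n):
                if j != i:
                    denom *= xs[i] - xs[j]
            xs[i] = xs[i] - f(xs[i]) / denom
    return xs

# refine both roots of x^2 - 3x + 2 = (x - 1)(x - 2) simultaneously
roots = durand_kerner([1, -3, 2], [0.4 + 0.9j, 2.3 - 0.5j])
```

As in the paper's setup, the update of each guess uses the current values of all the others, which is what lifts the convergence order above plain Newton on a single root.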

214 citations


Journal ArticleDOI
TL;DR: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region.
Abstract: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region. The storage required to represent a region in this way is comparable to that required when it is represented by encoding its boundary. Moreover, the skeleton representation seems to have advantages when it is necessary to determine repeatedly whether points are inside or outside the region, or to perform set-theoretic operations on regions.
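The skeleton idea can be sketched on a binary grid: give each point the radius of the largest centered square neighborhood inside the region, keep only centers whose neighborhood is not covered by another center's, and rebuild the region from the surviving centers and radii. The grid, the chessboard metric, and the function names are illustrative choices, not the paper's exact formulation.

```python
# "Skeleton" sketch: represent a region by maximal-neighborhood centers
# and radii (chessboard metric), then reconstruct it from that set.
def skeleton(region):
    """region: set of (x, y) points. Returns {(x, y): radius}."""
    def radius(p):
        r = 0                          # largest r with the (2r+1)^2 square inside
        while all((p[0] + dx, p[1] + dy) in region
                  for dx in range(-r - 1, r + 2) for dy in range(-r - 1, r + 2)):
            r += 1
        return r
    rad = {p: radius(p) for p in region}
    # drop p if some other center q's neighborhood contains p's entirely
    return {p: r for p, r in rad.items()
            if not any(q != p and rq >= r + max(abs(q[0] - p[0]), abs(q[1] - p[1]))
                       for q, rq in rad.items())}

def rebuild(skel):
    """Union of the stored maximal neighborhoods recovers the region."""
    return {(x + dx, y + dy) for (x, y), r in skel.items()
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)}

square = {(x, y) for x in range(5) for y in range(5)}
skel = skeleton(square)
```

For the 5x5 square the skeleton collapses to far fewer centers than there are points, which is the storage advantage the abstract claims.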

151 citations


Journal ArticleDOI
TL;DR: The fast Fourier transform algorithm is briefly reviewed and fast difference equation methods for accurately computing the needed trigonometric function values are given and the problem of computing a large Fouriertransform on a system with virtual memory is considered, and a solution is proposed.
Abstract: The fast Fourier transform algorithm has shown major time savings in computing large transforms on a digital computer. With n a power of two, computing time for this algorithm is proportional to n log2 n, a major improvement over other methods with computing time proportional to n^2. In this paper, the fast Fourier transform algorithm is briefly reviewed and fast difference equation methods for accurately computing the needed trigonometric function values are given. The problem of computing a large Fourier transform on a system with virtual memory is considered, and a solution is proposed. This method has been used to compute complex Fourier transforms of size n = 2^16 on a computer with 2^15 words of core storage; this exceeds by a factor of eight the maximum radix-two transform size with fixed allocation of this amount of core storage. The method has also been used to compute large mixed radix transforms. A scaling plan for computing the fast Fourier transform with fixed-point arithmetic is also given.
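The n log2 n algorithm the abstract reviews can be stated compactly in its recursive radix-2 form: split the samples into even and odd halves, transform each, and combine with twiddle factors.

```python
# Recursive radix-2 fast Fourier transform (n must be a power of two).
# A textbook sketch of the algorithm the abstract reviews, not the paper's
# virtual-memory or fixed-point variants.
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])               # transform of even-indexed samples
    odd = fft(x[1::2])                # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

spectrum = fft([1, 0, 0, 0, 1, 0, 0, 0])  # a pair of unit impulses
```

Each level of recursion does O(n) combining work across log2 n levels, which is the n log2 n count cited in the abstract.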

142 citations


Journal ArticleDOI
Helmut H. Weber1
TL;DR: An experimental processing system for the algorithmic language EULER has been implemented in microprogramming on an IBM System/360 Model 30 using a second Read-Only Storage unit and results are given in terms of microprogram and main storage space required and compiler and interpreter performance.
Abstract: An experimental processing system for the algorithmic language EULER has been implemented in microprogramming on an IBM System/360 Model 30 using a second Read-Only Storage unit. The system consists of a microprogrammed compiler and a microprogrammed String Language Interpreter, and of an I/O control program written in 360 machine language. The system is described and results are given in terms of microprogram and main storage space required and compiler and interpreter performance obtained. The role of microprogramming is stressed, which opens a new dimension in the processing of interpretive code. The structure and content of a higher level language can be matched by an appropriate interpretive language which can be executed efficiently by microprograms on existing computer hardware.

120 citations


Journal ArticleDOI
TL;DR: The state of the art of system performance evaluation is reviewed and evaluation goals and problems are examined, and the central role of measurement in performance evaluation and in the development of evaluation methods is explored.
Abstract: The state of the art of system performance evaluation is reviewed and evaluation goals and problems are examined. Throughput, turnaround, and availability are defined as fundamental measures of performance; overhead and CPU speed are placed in perspective. The appropriateness of instruction mixes, kernels, simulators, and other tools is discussed, as well as pitfalls which may be encountered when using them. Analysis, simulation, and synthesis are presented as three levels of approach to evaluation, requiring successively greater amounts of information. The central role of measurement in performance evaluation and in the development of evaluation methods is explored.

101 citations


Journal ArticleDOI
TL;DR: It is argued that the adequacy of the level of understanding achieved in a particular conversation depends on the purpose of that conversation, and that absolute understanding on the part of either humans or machines is impossible.
Abstract: A further development of a computer program (ELIZA) capable of conversing in natural language is discussed. The importance of context to both human and machine understanding is stressed. It is argued that the adequacy of the level of understanding achieved in a particular conversation depends on the purpose of that conversation, and that absolute understanding on the part of either humans or machines is impossible.

101 citations


Journal ArticleDOI
TL;DR: The Free Storage Package of the AED-1 Compiler System allows blocks of available storage to be obtained and returned for reuse and performs high level functions automatically, but also allows access and control of fine internal details as well.
Abstract: The most fundamental underlying problem in sophisticated software systems involving elaborate, changing data structure is dynamic storage allocation for flexible problem modeling. The Free Storage Package of the AED-1 Compiler System allows blocks of available storage to be obtained and returned for reuse. The total available space is partitioned into a hierarchy of free storage zones, each of which has its own characteristics. Blocks may be of any size, and special provisions allow efficient handling of selected sizes, control of shattering and garbage collection, and sharing of physical space between zones. The routines of the package perform high level functions automatically, but also allow access to and control of fine internal details.

83 citations


Journal ArticleDOI
TL;DR: The problem of enumerating the number of topologies which can be formed from a finite point set is considered both theoretically and computationally, leading to an algorithm for enumerating finite topologies, and computed results are given for n ≤ 7.
Abstract: The problem of enumerating the number of topologies which can be formed from a finite point set is considered both theoretically and computationally. Certain fundamental results are established, leading to an algorithm for enumerating finite topologies, and computed results are given for n ≤ 7. An interesting side result of the computational work was the unearthing of a theoretical error which had been introduced into the literature; the use of the computer in combinatorics represents, chronologically, an early application, and this side result underscores its continuing usefulness in this area. It seems to have become an almost classic remark that there are no interesting problems concerning topologies on a finite number of points. To a topologist this may be true; however, from a combinatorial point of view, it is interesting to determine how many different topologies there are on n points. A word of explanation is in order. There are really two distinct, although related, enumeration problems: either we may consider the points as distinguished (the labeled case), or we may only count the number of homeomorphism classes of topological spaces (the unlabeled case). Our object is to enumerate the labeled topologies with n points. A finite topology is characterized axiomatically by taking a prescribed collection of the subsets of a set V with n points as open, such that the union and intersection of two open sets are open, as are the empty set and V itself. A "labeled topology" has its points labeled with the integers 1, 2, ..., n. Two labeled topologies are called homeomorphic if there is a 1-1 correspondence between their point sets which preserves open sets. By an "unlabeled topology" or just a topology is meant a homeomorphism class of labeled topologies. In this paper, we establish certain fundamental results leading to an algorithm for enumerating finite topologies and give computed results for n ≤ 7.
A side result of this computational work was to unearth an error which had previously appeared in the literature (see section on T0-Topologies), perhaps underscoring the continuing usefulness of the computer in combinatorics. The enumeration of labeled topologies will be formulated with the help of a lemma, anticipated by Krishnamurthy [6], who expressed the observation in terms of matrices. We use the terminology of directed graphs given in [4]. A labeled digraph D has its set V of n points labeled with the integers 1, …
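The axioms stated in the abstract translate directly into a brute-force check for tiny n: represent each subset of V as a bitmask and count the collections of open sets that contain the empty set and V and are closed under union and intersection. This exhaustive sketch is far slower than the paper's algorithm and is feasible only for very small n.

```python
# Brute-force count of labeled topologies on {0, ..., n-1}: a collection of
# "open" subsets (bitmasks) must contain {} and V and be closed under union
# and intersection. Feasible only for tiny n; the paper's algorithm is not
# reproduced here.
from itertools import combinations

def count_labeled_topologies(n):
    full = (1 << n) - 1                        # bitmask of V itself
    proper = list(range(1, full))              # candidate opens besides {} and V
    count = 0
    for r in range(len(proper) + 1):
        for chosen in combinations(proper, r):
            opens = set(chosen) | {0, full}
            if all((a | b) in opens and (a & b) in opens
                   for a in opens for b in opens):
                count += 1
    return count

t3 = count_labeled_topologies(3)   # the known labeled count for n = 3 is 29
```

Even n = 4 already requires checking 2^14 collections this way, which is why the paper's structural results and algorithm are needed to reach n ≤ 7.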

81 citations


Journal ArticleDOI
A. Michael Noll1
TL;DR: A digital computer and automatic plotter have been used to generate three- dimensional stereoscopic movies of the three-dimensional parallel and perspective projections of four-dimensional hyperobjects rotating in four- dimensional space.
Abstract: A digital computer and automatic plotter have been used to generate three-dimensional stereoscopic movies of the three-dimensional parallel and perspective projections of four-dimensional hyperobjects rotating in four-dimensional space. The observed projections and their motions were a direct extension of three-dimensional experience, but no profound "feeling" or insight into the fourth spatial dimension was obtained. The technique can be generalized to n-dimensions and applied to any n-dimensional hyperobject or hypersurface.

Journal ArticleDOI
Cyril N. Alberga1
TL;DR: The problem of programming a computer to determine whether or not a string of characters is a misspelling of a given word was considered and a number of algorithms were evaluated--some proposed by other writers, some by the author.
Abstract: The problem of programming a computer to determine whether or not a string of characters is a misspelling of a given word was considered. A number of algorithms were evaluated--some proposed by other writers, some by the author. These techniques were tested on a collection of misspellings made by students at various grade levels. While many of the methods were clearly unsatisfactory, some gave as few as 2.1 percent incorrect determinations.
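One widely used measure for this kind of comparison is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into the other (Levenshtein distance). The dynamic program below is an illustrative modern formulation, not necessarily one of the algorithms the paper evaluated.

```python
# Levenshtein edit distance: minimum insertions, deletions, and
# substitutions to transform string a into string b, via the standard
# dynamic program kept to two rows of the table.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))    # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]                     # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute (or match)
        prev = cur
    return prev[-1]

d = edit_distance("recieve", "receive")   # the classic ie/ei misspelling
```

A spelling checker built on such a measure would accept a candidate word when its distance to the dictionary entry falls under a small threshold.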

Journal ArticleDOI
TL;DR: A scheme for binding variables is described which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.
Abstract: In an ideal list-processing system there would be enough core memory to contain all the data and programs. Described in this paper are a number of techniques that have been used to build a LISP system utilizing a drum for its principal storage medium, with a surprisingly low time penalty for use of this slow storage device. The techniques include careful segmentation of system programs, allocation of virtual memory to allow address arithmetic for type determination, and a special algorithm for building reasonably linearized lists. A scheme for binding variables is described which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.


Journal ArticleDOI
Ikuo Nakata1
TL;DR: Algorithms concerning arithmetic expressions used in a FORTRAN IV compiler for a HITAC-5020 computer having n accumulators generate an object code which minimizes the frequency of storing and recovering the partial results of the arithmetic expressions in cases where there are several accumulators.
Abstract: This paper deals with algorithms concerning arithmetic expressions used in a FORTRAN IV compiler for a HITAC-5020 computer having n accumulators. The algorithms generate an object code which minimizes the frequency of storing and recovering the partial results of the arithmetic expressions in cases where there are several accumulators.
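The core idea of minimizing stores can be sketched by labeling an expression tree with the number of accumulators needed to evaluate it without spilling partial results to memory; evaluating the costlier subtree first lets its result be held in a single register. This is the labeling later formalized as Sethi-Ullman numbering, given here as a sketch rather than the HITAC-5020 compiler's own algorithm.

```python
# Accumulator-need labeling for a binary expression tree. A node is either
# a variable name (leaf) or a tuple (op, left, right). The label of a node
# is the number of accumulators needed to evaluate it with no stores.
def need(tree):
    if isinstance(tree, str):         # a variable: load into one accumulator
        return 1
    _, left, right = tree
    l, r = need(left), need(right)
    # Evaluate the costlier side first; its result then occupies one
    # register while the cheaper side is evaluated with the rest.
    return max(l, r) if l != r else l + 1

# (a * b) + ((c + d) * e)
expr = ("+", ("*", "a", "b"), ("*", ("+", "c", "d"), "e"))
k = need(expr)
```

A code generator compares each node's label against the n accumulators actually available and emits a store/recover pair only when the label exceeds n, which is exactly the frequency the paper's algorithms minimize.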

Journal ArticleDOI
TL;DR: The development of a comprehensive simulation model to assist in the investigation of these questions is described and has a general purpose design and can be used to study a variety of time-sharing systems.
Abstract: The development of new large scale time-sharing systems has raised a number of problems for computation center management. Not only is it necessary to develop an appropriate hardware configuration for these systems, but appropriate software adjustments must be made. Unfortunately, these systems often do not respond to changes in the manner that intuition would suggest, and there are few guides to assist in the analysis of performance characteristics. The development of a comprehensive simulation model to assist in the investigation of these questions is described in this paper. The resulting model has a general purpose design and can be used to study a variety of time-sharing systems. It can also be used to assist in the design and development of new time-sharing algorithms or techniques. For the sake of efficiency and greater applicability, the model was implemented in a limited FORTRAN subset that is compatible with most FORTRAN IV compilers. The use of the simulation is demonstrated by a study of the IBM 360/67 time-sharing system.

Journal ArticleDOI
TL;DR: DITRAN (DIagnostic FORTRAN) is an implementation of ASA Basic FORTRAN with rather extensive error checking capabilities both at compilation time and during execution of a program.
Abstract: DITRAN (DIagnostic FORTRAN) is an implementation of ASA Basic FORTRAN with rather extensive error checking capabilities both at compilation time and during execution of a program. The need for improved diagnostic capabilities and some objectives to be met by any compiler are discussed. Attention is given to the design and implementation of DITRAN and the particular techniques employed to provide the diagnostic features. The handling of error messages by a general macro approach is described. Special features which provide teaching aids for use by instructors are noted.

Journal ArticleDOI
TL;DR: The Relational Data File (RDF) project as discussed by the authors was concerned with the use of computers as assistants in the logical analysis of large collections of factual data and was developed for this purpose.
Abstract: This paper presents a RAND project concerned with the use of computers as assistants in the logical analysis of large collections of factual data. A system called the Relational Data File was developed for this purpose. The Relational Data File is briefly detailed and problems arising from its implementation are discussed.


Journal ArticleDOI
TL;DR: A general purpose macro processor called ML/I is described, intended as a tool to allow users to extend any existing programming language by incorporating new statements and other syntactic forms of their own choosing and in their own notation.
Abstract: A general purpose macro processor called ML/I is described. ML/I has been implemented on the PDP-7 and I.C.T. Atlas 2 computers and is intended as a tool to allow users to extend any existing programming language by incorporating new statements and other syntactic forms of their own choosing and in their own notation. This allows a complete user-oriented language to be built up with relative ease.

Journal ArticleDOI
TL;DR: After a review of the power of contemporary computers, computer science is defined in several ways and it is asserted that in a North American university these will be achieved only through a computer science department.
Abstract: After a review of the power of contemporary computers, computer science is defined in several ways. The objectives of computer science education are stated, and it is asserted that in a North American university these will be achieved only through a computer science department. The program at Stanford University is reviewed as an example. The appendices include syllabi of PhD qualifying examinations for Stanford's Computer Science Department.

Journal ArticleDOI
G. S. Shedler1
TL;DR: A technique is given for the development of numerical procedures which provide, at each stage, several approximations to a solution of an equation, making the methods of interest in a parallel processing environment.
Abstract: Classical iterative procedures for the numerical solution of equations provide at each stage a single new approximation to the root in question. A technique is given for the development of numerical procedures which provide, at each stage, several approximations to a solution of an equation. The several approximations obtained in any iteration are computationally independent, making the methods of interest in a parallel processing environment. Convergence is insured by extracting the "best information" at each iteration. Several families of numerical procedures which use the technique are given. Statistics for the evaluation of the performance of the procedures in a parallel processing environment are developed and measurements of these statistics are reported. These measurements are interpreted in a parallel processing environment. In such an environment the procedures obtained are superior to standard algorithms.

Journal ArticleDOI
TL;DR: It is concluded from these authors that such a distinction is not fundamental to the structure of the language, given appropriate programming or "activity subroutines."
Abstract: Authors Teichroew and Lubin [CACM 9, 10 (Oct. 66)] deserve credit for their excellent paper on Simulation Languages. As a member of "the other camp," which is concerned with "continuous systems simulation languages," I am particularly grateful for the insight gained from the analysis of discrete event simulators. The authors included a brief discussion of "continuous-change simulation languages" and gave reference to the appropriate literature on the subject. I should like here to add some thoughts on this topic, within the framework of the subject paper. First, let me comment that the Simulation Software Committee of the Simulation Councils, Inc. (an AFIPS member) was formed in 1965 for the express purpose of preparing language standards for the class of simulation languages it has chosen to call "continuous system simulation language" (CSSL). As noted by Teichroew and Lubin, there have been many such programs developed since the first one in 1957; the count is at least 23. The committee expects to publish the completed standard this spring. It is customary in casual discussion to distinguish between the two classes of languages by use of the terms "continuous" and "discrete" simulations. While it is true that these words characterize the typical models represented in the two kinds of languages, I conclude from these authors that such a distinction is not fundamental to the structure of the language, given appropriate programming or "activity subroutines." I suspect that CSL can approximate continuous simulation, and that a present-day CSSL certainly can represent discrete behavior.
The distinction that is fundamental is characterized by these excerpts: CSSL: the system simulation consists of "a continuous flow of information or material counted in the aggregate rather than individual items." "Discrete" simulators: "items flow through the system." "This type of simulation consists ... in keeping track of where individual items are." (italics mine) It is possible with CSSL to represent flow of discrete items through a system, as well as queueing and actions that are conditional upon the size of the queue. However, the flow of items must be homogeneous: individual items cannot be distinguished; core space is not required for all items of a queue, only the current size of the queue is retained. The authors have taken care in clarifying the terminology of the languages analyzed. Moreover, they have suggested a basic set of terms, in the legends of the tables. Looking at these from a different point of …

Journal ArticleDOI
TL;DR: This paper lists the ambiguities remaining in the language ALGOL 60, which have been noticed since the publication of the Revised ALGol 60 Report in 1963.
Abstract: This paper lists the ambiguities remaining in the language ALGOL 60, which have been noticed since the publication of the Revised ALGOL 60 Report in 1963.

Journal ArticleDOI
TL;DR: An invariant imbedding technique is presented which is useful in overcoming these frequently encountered instabilities, and the results of some numerical experiments are presented.
Abstract: In such diverse areas as radiative transfer in planetary atmospheres and optimal guidance and control, two-point boundary-value problems for unstable systems arise, greatly complicating the numerical solution. An invariant imbedding technique is presented which is useful in overcoming these frequently encountered instabilities, and the results of some numerical experiments are presented.


Journal ArticleDOI
TL;DR: The algorithm presented in this paper finds a spanning tree and then constructs the set of fundamental cycles and is slower than an algorithm presented by Welch by a ratio of N/3 (N is the number of nodes) but requires less storage.
Abstract: Given the adjacency matrix of the graph, the algorithm presented in this paper finds a spanning tree and then constructs the set of fundamental cycles. Our algorithm is slower than an algorithm presented by Welch by a ratio of N/3 (N is the number of nodes) but requires less storage. For graphs with a large number of nodes and edges, when storage is limited our algorithm is superior to Welch's; however, when the graphs are small, or machine storage is very large, Welch's algorithm is superior. Timing estimates and storage requirements for both methods are presented.
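The two-phase structure the abstract describes can be sketched directly: grow a spanning tree from an arbitrary root, then note that each non-tree edge (a chord) closes exactly one fundamental cycle, recoverable from the tree paths of its endpoints. The representation and names below are illustrative; the paper works from the adjacency matrix and tunes for storage.

```python
# Fundamental cycles of a connected graph: build a BFS spanning tree from
# node 0, then each non-tree edge (u, v) yields one cycle through the tree.
def fundamental_cycles(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {0: None}
    order = [0]
    for u in order:                   # BFS: the list grows as we scan it
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    def path_to_root(v):
        p = []
        while v is not None:
            p.append(v)
            v = parent[v]
        return p
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:        # a chord closes one cycle
            pu, pv = path_to_root(u), path_to_root(v)
            common = next(x for x in pu if x in pv)   # lowest shared ancestor
            cycles.append(pu[:pu.index(common)] + [common]
                          + list(reversed(pv[:pv.index(common)])))
    return cycles

# K4 minus one edge: 5 edges, 4 nodes -> 5 - (4 - 1) = 2 fundamental cycles
cycles = fundamental_cycles(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
```

For a connected graph with N nodes and E edges there are always E - N + 1 such cycles, one per chord, which is the set both this algorithm and Welch's construct.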

Journal ArticleDOI
TL;DR: An online, interactive system for text editing is described in detail, with remarks on the theoretical and experimental justification for its form.
Abstract: An online, interactive system for text editing is described in detail, with remarks on the theoretical and experimental justification for its form. Emphasis throughout the system is on providing maximum convenience and power for the user. Notable features are its ability to handle any piece of text, the content-searching facility, and the character-by-character editing operations. The editor can be programmed to a limited extent.

Journal ArticleDOI
TL;DR: In this article, the problem of finding the optimal starting value for the Newton-Raphson calculation of √x on a digital computer is considered, and it is shown that the conventionally used best uniform approximations do not provide optimal starting values.
Abstract: The problem of obtaining starting values for the Newton-Raphson calculation of √x on a digital computer is considered. It is shown that the conventionally used best uniform approximations to √x do not provide optimal starting values. The problem of obtaining optimal starting values is stated, and several basic results are proved. A table of optimal polynomial starting values is given.
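The setup the paper studies can be sketched concretely: Newton-Raphson for sqrt(x) iterates y ← (y + x/y)/2, and the quality of the starting value decides how many iterations a fixed accuracy costs. The seed below is the simple chord through the endpoints of [1/4, 1), an illustrative choice rather than one of the paper's optimal starting polynomials.

```python
# Newton-Raphson for sqrt(x) on [1/4, 1), seeded by a linear starting
# approximation. The chord y0 = (1 + 2x)/3 interpolates sqrt at the
# endpoints; it is illustrative, not the paper's optimal starting value.
import math

def newton_sqrt(x, iterations=3):
    y = (1 + 2 * x) / 3               # linear starting approximation
    for _ in range(iterations):
        y = 0.5 * (y + x / y)         # each step roughly squares the error
    return y

# worst-case error of the 3-step iteration over a sweep of [1/4, 1)
err = max(abs(newton_sqrt(x) - math.sqrt(x))
          for x in [0.25 + i * 0.01 for i in range(75)])
```

Because each Newton step roughly squares the relative error, shaving even a little off the seed's worst-case error can remove an entire iteration, which is why the paper distinguishes optimal starting values from best uniform approximations to sqrt(x).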