
Showing papers in "Journal of the ACM in 1961"


Journal ArticleDOI
TL;DR: The phrase "direct search" is used to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time together with a strategy for determining (as a function of earlier results) what the next trial solution will be.
Abstract: In dealing with numerical problems for which classical methods of solution are unfeasible, many people have tried various procedures of searching for an answer on a computer. Our efforts in this direction have produced procedures which seem to have had (for us and for others who have used them) more success than has been achieved elsewhere, so that we have been encouraged to publish this report of our studies. We use the phrase "direct search" to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time together with a strategy for determining (as a function of earlier results) what the next trial solution will be. The phrase implies our preference, based on experience, for straightforward search strategies which employ no techniques of classical analysis except where there is a demonstrable advantage in doing so. We have found it worthwhile to study direct search methods for the following reasons: (a) They have provided solutions to some problems, of importance to us, which had been unsuccessfully attacked by classical methods. (Examples are given below.) (b) They promise to provide faster solutions for some problems that are solvable by classical methods. (For example, a method for solving systems of linear equations, proposed in Section 5, seems to take an amount of time that is proportional only to the first power of the number of equations.) (c) They are well adapted to use on electronic computers, since they tend to use repeated identical arithmetic operations with a simple logic. Classical methods, developed for human use, often stress minimization of arithmetic by increased sophistication of logic, a goal which may not be desirable when a computer is to be used. (d) They provide an approximate solution, improving all the while, at all stages of the calculation. This feature can be important when a tentative solution is needed before the calculations are completed. (e) They require (or permit) different kinds of assumptions about the functions involved in various problems, and thus suggest new classifications of functions which may repay study. Direct search is described roughly in Section 2, and explained heuristically in Section 3. Section 4 describes a kind of strategy. Sections 5 and 6 describe
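As an editorial illustration of the kind of procedure the paper describes, here is a minimal compare-and-move search over one coordinate at a time; a sketch in the spirit of direct search, in which the function names, step schedule and tolerances are ours, not the authors'.

```python
# A minimal direct-search sketch: examine trial points, keep the best so far,
# and let earlier results determine the next trial (here, a step-size cut
# whenever no direction improves).
def direct_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):          # explore each coordinate direction
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                ft = f(trial)
                if ft < fx:              # compare trial with best so far
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink               # no gain anywhere: refine the step
            if step < tol:
                break
    return x, fx

# Example: minimize a simple quadratic.
xmin, fmin = direct_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [0.0, 0.0])
```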

4,184 citations


Journal ArticleDOI
M. E. Maron
TL;DR: The design, execution and evaluation of a modest experimental study aimed at testing empirically one statistical technique for automatically indexing documents according to their subject content are described.
Abstract: This inquiry examines a technique for automatically classifying (indexing) documents according to their subject content. The task, in essence, is to have a computing machine read a document and on the basis of the occurrence of selected clue words decide to which of many subject categories the document in question belongs. This paper describes the design, execution and evaluation of a modest experimental study aimed at testing empirically one statistical technique for automatic indexing.
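A hedged sketch of one plausible instantiation of such a statistical technique: score each subject category by smoothed clue-word likelihoods, Bayes-style. The training-data layout and the smoothing below are illustrative assumptions, not Maron's exact formula.

```python
from collections import defaultdict
import math

# Illustrative clue-word classifier: pick the category with the highest
# log prior plus summed log clue-word likelihoods (Laplace-smoothed).
def train(docs):  # docs: list of (set_of_clue_words, category)
    word_counts = defaultdict(lambda: defaultdict(int))
    cat_counts = defaultdict(int)
    for words, cat in docs:
        cat_counts[cat] += 1
        for w in words:
            word_counts[cat][w] += 1
    return word_counts, cat_counts

def classify(words, word_counts, cat_counts):
    total = sum(cat_counts.values())
    best, best_score = None, -math.inf
    for cat, n in cat_counts.items():
        score = math.log(n / total)                             # prior
        for w in words:                                         # clue words
            score += math.log((word_counts[cat][w] + 1) / (n + 2))
        if score > best_score:
            best, best_score = cat, score
    return best
```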

538 citations


Journal ArticleDOI
TL;DR: This paper determines error bounds for a number of the most effective direct methods of inverting a matrix by analyzing the effect of the rounding errors made in the solution of the equations.
Abstract: 1. In order to assess the relative effectiveness of methods of inverting a matrix it is useful to have a priori bounds for the errors in the computed inverses. In this paper we determine such error bounds for a number of the most effective direct methods. To illustrate fully the techniques we have used, some of the analysis has been done for floating-point computation and some for fixed-point. In all cases it has been assumed that the computation has been performed using a precision of t binary places, though it should be appreciated that on a computer which has both fixed and floating-point facilities the number of permissible digits in a fixed-point number is greater than the number of digits in the mantissa of a floating-point number. The techniques used for analyzing floating-point computation are essentially those of [8], and a familiarity with that paper is assumed. 2. The error bounds are most conveniently expressed in terms of vector and matrix norms, and throughout we have used the Euclidean vector norm and the spectral matrix norm except when explicit reference is made to the contrary. For convenience the main properties of these norms are given in Section 9. In a recent paper [7] we analyzed the effect of the rounding errors made in the solution of the equations
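The paper's bounds are a priori, but the same norms give an easy a posteriori companion check: if R = I − AX for a computed inverse X and ‖R‖ < 1, then X − A⁻¹ = −A⁻¹R, so the relative error ‖X − A⁻¹‖/‖A⁻¹‖ is at most ‖R‖. A minimal sketch, with an arbitrary test matrix:

```python
import numpy as np

# A posteriori check: if R = I - A @ X has spectral norm < 1, then the
# relative error of the computed inverse X is bounded by that norm.
def inverse_error_bound(A, X):
    R = np.eye(A.shape[0]) - A @ X
    r = np.linalg.norm(R, 2)            # spectral norm, as in the paper
    return r if r < 1 else None         # None: no guarantee from this test

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = np.linalg.inv(A) + 1e-8             # a slightly perturbed "computed" inverse
print(inverse_error_bound(A, X))        # small bound on the relative error
```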

381 citations


Journal ArticleDOI
TL;DR: The report's contents are highlighted: wanted, new ways to use computers; classification vs. coordination; topical intersections; associations and their consequences; searching the literature; and the frequency spotlight on information retrieval.
Abstract: Contents: Wanted: new ways to use computers; Classification vs. coordination; Topical intersections; Associations and their consequences; Searching the literature; The frequency spotlight on information retrieval.

146 citations


Journal ArticleDOI
TL;DR: A discussion of the process of numerical (digital) filtering is presented together with formulae for a particularly simple but effective design; the problem of trend-type errors and their elimination by additional constraints on the filter weights is also treated.
Abstract: A discussion of the process of numerical (digital) filtering is presented together with formulae for a particularly simple but effective design. The error is discussed in terms of the quantity ε(k, N), with k the normalized frequency and N the number of filter weights. Empirical design curves with ε as a parameter are given. In addition, the effect of allowing slope discontinuities in the filter transfer function is noted. Next, the problem of trend type errors and their elimination by the inclusion of additional constraints on the filter weights is included. Designs for high-pass, band-pass and quadrature filters, with a comparison of two band-pass design approaches, are given. Specific applications are given to illustrate the effect of the filtering process as well as the various ways filters can be used in processing digital data.
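A minimal sketch of a digital filter of this general kind: a truncated ideal low-pass whose weights are normalized to unit gain at zero frequency (one simple way to suppress trend-type error). The design rule, N and cutoff here are illustrative, not the paper's formulae.

```python
import numpy as np

# Illustrative low-pass weights by truncating the ideal (sinc) response.
def lowpass_weights(N, k_cut):   # N odd; k_cut = normalized cutoff in (0, 0.5)
    n = np.arange(N) - (N - 1) // 2
    w = 2 * k_cut * np.sinc(2 * k_cut * n)
    return w / w.sum()           # sum of weights = 1: unit gain at frequency 0

def apply_filter(w, x):          # filtering is convolution with the weights
    return np.convolve(x, w, mode="same")

weights = lowpass_weights(N=21, k_cut=0.1)
```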

131 citations


Journal ArticleDOI
TL;DR: An all computer document retrieval system which can find documents related to a request even though they may not be indexed by the exact terms of the request, and can present these documents in the order of their relevance to the request is described.
Abstract: This paper describes an all computer document retrieval system which can find documents related to a request even though they may not be indexed by the exact terms of the request, and can present these documents in the order of their relevance to the request. The key to this ability lies in the application of a statistical formula by which the computer calculates the degree of association between pairs of index terms. With proper manipulation of these associations (entirely within the machine) a vocabulary of synonyms, near synonyms and other words closely related to any given term or group of terms is derived. Such a vocabulary related to a group of request terms is believed to be a much more powerful tool for selecting documents from a collection than has been available heretofore. By noting the number of matching terms between this extended list of request terms and the terms used to index a document, and with due regard for their degree of association, documents are selected by the computer and arranged in the order of their relevance to the request. Like all other documentalists who are operating large coordinate indexes, we are searching for better ways to exploit this type of information system. In our library we have already eliminated the time-consuming job of posting document numbers manually by enlisting the aid of a 705 computer. (The computer periodically prepares revised posting cards to replace the outdated ones.) Now we are searching for better solutions to our retrieval problems. One obvious retrieval problem in any large system is the time required to "coordinate" heavily posted terms. We are convinced we must mechanize if we are to allow our collection to grow indefinitely. A second problem is the retrieval of so many documents related to a single request that the customer finds it difficult to decide which to examine first. Since he has no precise means of determining which document is most closely related to a request, we have been forced into assisting him to use somewhat arbitrary or subjective means. The date of the document is sometimes used as a relevance criterion with the hope that the most recent document will be the most pertinent, or the name of the author is used with the hope that a known author will answer the request better than an unknown one. The pitfalls of such criteria are apparent. The third, and …
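A hedged sketch of the retrieval idea: derive term-term associations from co-occurrence in the index, expand the request with strongly associated terms, and rank documents by weighted term overlap. The association formula and threshold below are illustrative, not the paper's statistical formula.

```python
from collections import defaultdict

# index: {doc_id: set(index_terms)}. Association here is co-occurrence count
# scaled by the rarer term's posting count (an illustrative measure only).
def associations(index):
    pair, single = defaultdict(int), defaultdict(int)
    for terms in index.values():
        for t in terms:
            single[t] += 1
            for u in terms:
                if u != t:
                    pair[(t, u)] += 1
    return {p: c / min(single[p[0]], single[p[1]]) for p, c in pair.items()}

def rank(request, index, assoc, threshold=0.5):
    expanded = dict.fromkeys(request, 1.0)     # request terms get full weight
    for (t, u), a in assoc.items():            # add associated vocabulary
        if t in request and a >= threshold:
            expanded[u] = max(expanded.get(u, 0.0), a)
    scores = {d: sum(expanded.get(t, 0.0) for t in terms)
              for d, terms in index.items()}
    return sorted(scores, key=scores.get, reverse=True)   # most relevant first
```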

100 citations


Journal ArticleDOI
TL;DR: It is time to consider methods of more uniform applicability for finding numerical approximations to the roots of a polynomial, methods that may be far too laborious to carry out by hand but which nevertheless are sufficiently easy for an automatic computer.
Abstract: The problem of finding numerical approximations to the roots of a polynomial has a long and interesting history. The various methods have always been proposed in terms of the state of the art of computation then current. Since the advent of high-speed computing systems there has been, the writer feels, an unusual lag in the development of new techniques for dealing with this time-honored problem, techniques better suited to the capabilities and inadequacies of automatic computers. A recent survey of available methods [1] indicates that no one method is desirable for automatic computers. It is true that methods such as those of Newton or Bernoulli for real roots and Graeffe or Bairstow for complex roots have been programmed for automatic computers. Each of these classic methods requires a good deal of judgment in connection with the isolation or separation of roots. These judicial decisions are relatively easy for a human being to make when operating a desk calculator but are more difficult to anticipate and furnish to the machine's program. On the other hand, the machine is prepared to undertake thousands of times more arithmetical activity than was ever contemplated by the inventors of the classical methods. Hence it is time to consider methods of more uniform applicability, with possibly slower convergence rates, that may be far too laborious to carry out by hand but which nevertheless are sufficiently easy for an automatic computer. Such a method should be applicable to polynomials with complex coefficients whose roots are therefore any arbitrary finite set of points in the complex plane, distinct or not. It thus becomes a problem of searching the complex plane for roots. One such method has already been tried by J. A. Ward [1]. It seeks to minimize
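An editorial sketch of what "searching the complex plane" can look like in practice: a uniformly applicable but slow grid search that repeatedly refines around the point minimizing |p(z)|. This is not Ward's minimization procedure, only an illustration of the search viewpoint.

```python
import numpy as np

# Scan a grid of complex points for the smallest |p(z)|, then shrink the
# grid around the best point and repeat. Slow but uniform in applicability.
def poly_root_search(coeffs, center=0 + 0j, radius=2.0, levels=8, n=40):
    z_best = center
    for _ in range(levels):
        re = np.linspace(z_best.real - radius, z_best.real + radius, n)
        im = np.linspace(z_best.imag - radius, z_best.imag + radius, n)
        grid = re[None, :] + 1j * im[:, None]
        vals = np.abs(np.polyval(coeffs, grid))
        i, j = np.unravel_index(vals.argmin(), vals.shape)
        z_best, radius = grid[i, j], radius * 2.5 / n   # refine around the best
    return z_best

# Example: a root of z**2 + 1 (expect approximately +i or -i).
print(poly_root_search([1, 0, 1]))
```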

81 citations


Journal ArticleDOI
TL;DR: The object of this paper is to expand the practical and theoretical scope of the tree circuit by the formulation of a generalized tree circuit, which in actuality is a set of circuits having basic tree circuit characteristics.
Abstract: From the very founding of switching theory by Claude E. Shannon, the tree circuit has been a useful instrument in the design of logic networks. Besides being a valuable practical addition to the designer's “tool kit”, it has been a theoretical asset in the study of circuit complexity and the establishment of general bounds on the relative costs of switching networks. The object of this paper is to expand the practical and theoretical scope of the tree circuit. The expansion is effected by the formulation of a generalized tree circuit, which in actuality is a set of circuits having basic tree circuit characteristics.
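A small illustration of why a tree structure suffices for any switching function: Shannon's expansion f = x·f|x=1 + x′·f|x=0, applied variable by variable, steers a path down a binary tree whose leaves are the truth-table values. The encoding below is ours, not the paper's generalized construction.

```python
# Evaluate a switching function through its tree: each level branches on one
# input variable; the leaves (in order) are the function's truth-table bits.
def tree_eval(truth_table, inputs):
    lo, hi = 0, len(truth_table)         # current subtree of leaves
    for bit in inputs:                   # each input steers left or right
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if bit else (lo, mid)
    return truth_table[lo]

XOR = [0, 1, 1, 0]                       # truth table for x XOR y
print(tree_eval(XOR, (1, 0)))            # 1
```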

58 citations


Journal ArticleDOI
TL;DR: It will be the purpose of this paper to present a more suitable notation for description of compilers and other complicated symbol manipulation algorithms.
Abstract: The algebraic command languages (ALGOL, IT, FORTRAN, UNICODE), although useful in preparing numerical algorithms, have not in the author's opinion proven themselves useful for symbol manipulation algorithms, particularly compilers. List processors, in fact, have been designed primarily to fill this gap. Analogously, the traditional flowchart serves well as a descriptive language for numerical algorithms, but does not lend itself to description of symbol manipulation algorithms in such a way that the intent of the process is clear. It will be the purpose of this paper to present a more suitable notation for description of compilers and other complicated symbol manipulation algorithms. The algorithms used in formula translation consist principally of the following elements: (1) a set of linguistic transformations upon the input string, together with conditions determining the applicability of each transformation; (2) a set of actions, such as the generation of machine language coding, associated with each transformation; (3) a rule for transferring the attention of the translator from one portion of the input string to another. The notation presented here greatly simplifies the representation of the first and third elements. For illustrative purposes, a compilation process for a small subset of ALGOL is described below. The subset consists of assignment statements constructed from identifiers, the five binary arithmetic operators (+, −, ×, /, ↑), the two unary arithmetic operators (+, −), the replacement operator (:=), parentheses, and the library functions of one variable (sin, exp, sqrt, etc.). The assignment statement Σ to be translated is initially taken in an augmented form in which special characters serve as termination symbols and a pointer α indicates the portion of the statement where the translator's attention is currently focused. The following productions and the associated generation rules respectively decompose the original statement in accordance with its structure, and simultaneously create coding to implement the statement. Coding will be represented by ALGOL statements with at most one operator, to avoid reference to particular computers.
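A toy of the production-rule flavor, much cruder than the paper's notation: repeatedly rewrite the leftmost "identifier operator identifier" pattern into a fresh temporary and emit one one-operator statement per rewrite. The single rule, the temporary names, and the fact that it ignores precedence are illustrative simplifications of ours.

```python
import re

# Each iteration applies one "production" (the regex) to the input string and
# generates one piece of one-operator coding, mirroring the paper's pairing of
# transformations with generation rules.
def translate(expr):
    code, counter = [], 0
    pattern = re.compile(r"(\w+)\s*([+\-*/])\s*(\w+)")
    while True:
        m = pattern.search(expr)
        if not m:
            break
        counter += 1
        temp = f"t{counter}"
        code.append(f"{temp} := {m.group(1)} {m.group(2)} {m.group(3)}")
        expr = expr[:m.start()] + temp + expr[m.end():]   # rewrite the string
    return code

for line in translate("a + b * c"):      # note: this toy ignores precedence
    print(line)
```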

54 citations


Journal ArticleDOI
TL;DR: This study investigated various techniques for systematically abbreviating English words and names; particular attention was paid to techniques that could process incoming information without prior knowledge of its existence.
Abstract: This study investigated various techniques for systematically abbreviating English words and names. Most of the attention was given to the techniques which could be mechanized with a digital device such as a general purpose digital computer. Particular attention was paid to techniques that could process incoming information without prior knowledge of its existence (i.e., no table lookups). Thirteen basic techniques and their modifications are described. In addition, most of the techniques were tested on a sample of several thousand subject words and several thousand proper names in order to provide a quantitative measure of comparison.
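One representative table-free contraction rule of the kind such a study might test (keep the first letter, drop later vowels and doubled letters, truncate); the rule and length cap are illustrative and need not match any of the paper's thirteen techniques.

```python
# A simple contraction rule requiring no table lookup, so it can process
# words it has never seen before.
def abbreviate(word, length=4):
    word = word.upper()
    out = word[0]
    for ch in word[1:]:
        if ch in "AEIOU" or ch == out[-1]:
            continue                      # skip vowels and repeated letters
        out += ch
    return out[:length]

print(abbreviate("MANAGEMENT"))           # -> MNGM
```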

47 citations


Journal ArticleDOI
TL;DR: Universal Turing machines smaller than any previously published are designed: one with a state-symbol product of 40 (five symbols and eight states) and a slightly extended one with a product of 30 (five symbols and six states), provided that symbols can be printed on the infinitely many squares of the input tape.
Abstract: It is of interest to design a universal Turing machine smaller than any ever previously published in the literature. According to Shannon's suggestion [1], the product of the number of states and the number of symbols would be an appropriate measure of the size of a universal Turing machine (U.T.M.). We present here one machine with a product of 40 (five symbols and eight states) and one with a product of 30 (five symbols and six states), provided that symbols can be printed on the infinitely many squares of the input tape. The former is properly an ordinary universal Turing machine, but the latter is a slightly extended one. According to Davis's definition [2] of the Turing machine, the input condition must be expressed as follows: the input tape is always finite but can be extended by a certain given rule (presented below). The machine must have the property that, whenever it is about to run off an end of the tape, a row of new squares in which appear certain given symbols is spliced onto the end of the tape. Some published results are listed in Table 1.

Journal ArticleDOI
TL;DR: This note describes a method of reformulating the problem of finding zeros of nonlinear functions in terms of the solution of simultaneous ordinary differential equations.
Abstract: The methods described in this note are actually applicable to a wider class of problems than that indicated in the title. However, it was this problem which originally motivated the development of the methods, and it remains the application of most interest. During the past several years powerful methods have been developed for the numerical integration of simultaneous ordinary differential equations on digital computers. The striking feature of these methods is the inclusion of algorithms for automatic modification of the interval of integration in order to preserve a specified degree of accuracy in the integrated results with nearly optimum efficiency of the entire process. This has motivated the reformulation of problems in terms of simultaneous ordinary differential equations whenever possible. For example, the computer time savings involved in automatic and continuous selection of the optimum interval in the evaluation of multiple definite integrals is apparent. This note describes a method of reformulating the problem of finding zeros of nonlinear functions in terms of the solution of simultaneous ordinary differential equations.
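One common shape such a reformulation can take (hedged: the note's exact scheme may differ) is the "continuous Newton" flow dx/dt = −f(x)/f′(x), under which f(x(t)) decays like e^(−t); an adaptive-step ODE integrator then supplies exactly the automatic interval control the note emphasizes.

```python
from scipy.integrate import solve_ivp

# Reformulation sketch: finding a zero of f becomes integrating an ODE whose
# trajectory drives f(x(t)) toward zero; step-size control is automatic.
def find_zero(f, fprime, x0, t_final=20.0):
    sol = solve_ivp(lambda t, x: -f(x[0]) / fprime(x[0]),
                    (0.0, t_final), [x0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Example: the positive zero of f(x) = x**2 - 2.
print(find_zero(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))   # ~1.41421356
```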


Journal ArticleDOI
TL;DR: It is established that any regular expression with a finite number of connectives describes a regular set of sequences that can be recognized by a finite state machine.
Abstract: Procedures are given to convert any regular expression into a state diagram description and neural net realization of a machine that recognizes the regular set of sequences described by the given expression. It is established that any regular expression with a finite number of connectives describes a regular set of sequences that can be recognized by a finite state machine. All the procedures given are guaranteed to terminate in a finite number of steps, and a generalized computer program can be written to handle the entire conversion. An incidental result of the theory is the design of multiple output sequential machines. The potential usefulness of regular expressions and a long neglected form of a state diagram are demonstrated.
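A tiny concrete instance of the expression-to-machine correspondence the paper establishes: the regular set denoted by (0|1)*1, recognized by a two-state machine in one left-to-right pass. The table encoding is ours; the paper gives the general conversion procedures.

```python
# State diagram as a transition table: a two-state machine accepting exactly
# the binary sequences that end in 1, i.e. the regular set of (0|1)*1.
TRANSITIONS = {("A", "0"): "A", ("A", "1"): "B",
               ("B", "0"): "A", ("B", "1"): "B"}
ACCEPTING = {"B"}

def accepts(sequence, start="A"):
    state = start
    for symbol in sequence:              # one left-to-right pass over the tape
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("0101"))   # True
print(accepts("0110"))   # False
```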

Journal ArticleDOI
TL;DR: This paper considers the problem of determining the state of a sequential machine, and shows that the best bound, over all complete machines, for uniform experiments is the same as that for arbitrary experiments, with the same result for complete input-independent machines.
Abstract: In this paper is considered the problem, first introduced in [1], of determining the state of a sequential machine. It is assumed that information about the state of a machine can be obtained only by applying an input and observing the machine's response (output). The application of an input generally causes a machine to change state; this terminal state may be known even if the initial state remains unknown. To deduce the terminal state after applying a sequence of inputs and observing the resulting outputs is to learn the state of the machine. A procedure for so learning the state of a machine will be called a terminal state experiment, or simply experiment. In [1] it was shown that, given any machine with n distinguishable states, an experiment can be designed which requires not more than n(n−1)/2 inputs in the worst case. Although [1] was restricted to input-independent machines (defined below, Section I), the result and the method of obtaining it are applicable with only minor changes [5] to the whole class of complete machines. In Theorem 1, below, it is shown that n(n−1)/2 is the best bound possible. In Theorem 2, below, it is shown that the bound can be lowered slightly for input-independent machines. The uniform experiment problem was first formulated in [2]. In a uniform experiment the experimenter must choose his entire sequence of inputs before beginning the experiment and may not change this choice during the performance of the experiment. Theorem 1 shows that the best bound, over all complete machines, for uniform experiments is the same as that for experiments. Theorem 2 shows the same result for complete input-independent machines. Section I is given to defining terms and explaining the notation. In the main the notation used in [3] will be followed. Sections II and III are given to proving Theorems 1 and 2, respectively. The author is indebted to Seymour Ginsburg of the System Development Corporation for an introduction to the subject of sequential machines and for many valuable criticisms of the present work.
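A sketch of a terminal state experiment on an illustrative two-state machine of our own: apply inputs, observe outputs, and keep only the current states consistent with what was seen; the experiment succeeds when a single state remains.

```python
# Mealy machine (illustrative): next-state and output tables keyed by
# (state, input). The initial state is unknown.
DELTA = {("s0", "a"): "s1", ("s1", "a"): "s0"}   # next state
LAMBDA = {("s0", "a"): 0, ("s1", "a"): 1}        # observed output

def experiment(states, inputs, observed_outputs):
    possible = set(states)                        # initial uncertainty
    for inp, out in zip(inputs, observed_outputs):
        possible = {DELTA[(s, inp)] for s in possible
                    if LAMBDA[(s, inp)] == out}   # keep consistent states only
    return possible                               # singleton => state learned

print(experiment({"s0", "s1"}, ["a"], [0]))       # {'s1'}: one input suffices
```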

Journal ArticleDOI
TL;DR: The "me thod" in question simply consists of evaluating a finite sequence of real numbers by means of a linear first-order recurrence relation and a detailed analysis of the process is thought justified.
Abstract: 1. It is well known that only very few computing methods yield the desired numerical answer in all circumstances. There are usually cases, not only artificially constructed ones, in which a particular method simply fails to work. Fortunately, suitable modifications are often at hand which may turn the method into one of general applicability. What follows may be considered an elementary example in illustration of this remark. The "method" in question simply consists of evaluating a finite sequence of real numbers by means of a linear first-order recurrence relation. Such calculations occur quite frequently, particularly in connection with the evaluation of definite integrals, so that a detailed analysis of the process is thought justified.
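A classic illustration of why such an analysis matters (the note's own example may differ): Iₙ = ∫₀¹ xⁿ e^(x−1) dx satisfies the first-order recurrence Iₙ = 1 − n·Iₙ₋₁, which is violently unstable run forward but self-correcting run backward.

```python
import math

# Forward recurrence multiplies the rounding error in I_0 by n!; backward
# recurrence from a crude guess divides the guess error by the same factor.
def forward(n):
    I = 1 - math.exp(-1)                 # I_0, carrying a rounding error
    for k in range(1, n + 1):
        I = 1 - k * I                    # error grows like n!
    return I

def backward(n, extra=15):
    I = 0.0                              # crude guess for I_(n + extra)
    for k in range(n + extra, n, -1):
        I = (1 - I) / k                  # I_(k-1) from I_k; error shrinks
    return I

print(forward(20), backward(20))         # forward is garbage; backward is fine
```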

Journal ArticleDOI
TL;DR: Two-dimensional arrays of digital information occur frequently in automatic computer systems and redundancy can be incorporated into an array of binary digits, or logical matrix, by the imposition of certain constraints upon specified groups of digits in the array.
Abstract: Two-dimensional arrays of digital information occur frequently in automatic computer systems. Sometimes the two dimensions are physical, as on a magnetic tape or in a magnetic core matrix. Often the two dimensions are conceptual, as in a numerical matrix, or when a two-dimensional structure has been imposed on information by the programmer. For purposes of error detection and correction, redundancy can be incorporated into an array of binary digits, or logical matrix, by the imposition of certain constraints upon specified groups of digits in the array. A common and practical type of constraint is the specification of the parity of the sum of the digits in the group. The systematic specification of parity check groups for the purpose of detecting and/or correcting errors with economical circuitry (or programming) has been discussed by Hamming [1], by Sacks [2], and by Bose and Ray-Chaudhuri [3]. Because access to digits within a single row or a single column of the matrix is usually easier than access to scattered digits, it is natural to require that each parity check group lie wholly within a given row or column. Lower redundancy can be achieved by selection of parity check groups both from rows and from columns than from either alone.
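A minimal sketch of the row-and-column case: one even-parity check per row and per column locates any single-bit error at the intersection of the violated row check and column check, and so corrects it.

```python
# Row and column parities of a binary matrix, and single-error correction by
# flipping the bit where the violated row and column checks intersect.
def parities(matrix):
    rows = [sum(r) % 2 for r in matrix]
    cols = [sum(c) % 2 for c in zip(*matrix)]
    return rows, cols

def correct_single_error(matrix, saved_rows, saved_cols):
    rows, cols = parities(matrix)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, saved_rows)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, saved_cols)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:        # single error located
        matrix[bad_r[0]][bad_c[0]] ^= 1            # flip it back
    return matrix

M = [[1, 0, 1], [0, 1, 1]]
r, c = parities(M)
M[1][2] ^= 1                                       # inject one error
print(correct_single_error(M, r, c))               # original M restored
```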

Journal ArticleDOI
TL;DR: The methods available for sorting information within a computer using its internal memory are described and discussed with logical flow diagrams, which break down the steps required to do the sorting into a number of fundamental types.
Abstract: The purpose of this paper is to describe and discuss the methods available for sorting information within a computer using its internal memory. Considerable detail is obtained by analyzing sorting with logical flow diagrams. These flow diagrams break down the steps required to do the sorting into a number of fundamental types and show the sequences and relations among the steps. From the flow diagrams one can prepare programs for most of the presently available computers. The report is divided into three parts. Part A describes each kind of internal sort and illustrates the discussion with appropriate flow charts. Part B derives time formulae for these sorting processes. Part C compares and contrasts the kinds of sorting. PART A. DESCRIPTION AND FLOW CHARTS OF METHODS FOR INTERNAL SORTING
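As a small concrete companion, here is one member of the family such a survey covers, straight insertion, written so that the comparisons and shifts a flow diagram would chart are visible. (The choice of this particular sort is ours.)

```python
# Straight insertion: each new item is compared leftward against the sorted
# prefix and slid into place, entirely within the working memory.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:     # compare with the sorted prefix
            a[j + 1] = a[j]              # shift larger items right
            j -= 1
        a[j + 1] = key                   # insert into its place
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))
```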

Journal ArticleDOI
E. E. Osborne
TL;DR: The conditioning of A*A is worse than that of A, and the resulting difficulties cannot be remedied once A*A has been formed; although the investigation was begun in order to improve results in the homogeneous case, the results may be applied to the nonhomogeneous case as well.
Abstract: It is well known [5] that the conditioning of A*A is worse than that of A. This frequently leads to difficulties which cannot be remedied by the application of conditioning processes to the matrix A*A once it has been formed. This last statement should be emphasized. Although the present investigation was begun in order to improve results in the homogeneous case, the results may be applied to the nonhomogeneous case. Section 2 concerns a representation of the matrix A. Section 3 is devoted to theorems dealing with the effects of computational errors. These indicate fairly clearly how the various steps of the computations should be performed. Section 4 deals with techniques for insuring good results. Included among these is one due to R. E. von Holdt [6], for which he gives analytical justification. Formal proofs that the methods work are not included. It is felt that the theorems in Section 3 are sufficient indication. The remainder of the present section is given to preliminaries.
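A quick numerical illustration of the opening fact (the matrix values are arbitrary): the spectral condition number of A*A is the square of that of A, which is why conditioning processes applied after A*A is formed come too late.

```python
import numpy as np

# cond(A.T @ A) equals cond(A)**2 in the spectral norm, so forming the
# normal-equations matrix squares whatever conditioning trouble A already has.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
print(np.linalg.cond(A))          # ~4e4
print(np.linalg.cond(A.T @ A))    # ~1.6e9, the square of the above
```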

Journal ArticleDOI
J. Heller
TL;DR: The sequencing or scheduling aspects of multiprogramming are discussed; the resulting problem has not been studied in mathematical detail in the machine shop scheduling context, but has been discussed under the name of job-lot MSS.
Abstract: The large newer computers available today and the computers of the future will all have features of simultaneous operation. Because of this simultaneity, the question of planning a program and the planning of groups of independent programs using the simultaneous operation capabilities of the computer becomes important. All these questions have been grouped under the heading of multiprogramming. In this paper we will discuss the sequencing or scheduling aspects of multiprogramming. Roughly speaking, the sequencing aspects revolve around questions of parts of a computer being idle because the data to be processed in one computer part is still being processed elsewhere in the computer; and, if we have a group of independent programs, how we can stagger the parts of the programs through the computer such that some defined objective of all the programs is optimized. These considerations have been studied partly in a different context: the so-called machine shop scheduling [cf. [7] for a general review and further references]. There are features of similarity between machine shop (MS) scheduling (S) and multiprogramming (MP) scheduling. There are also features of dissimilarity. In the simple case of MSS [3], each job has a given order of processing on the machines of the MS. This ordering, one for each job, is called a technological ordering. The only cases of MSS studied in mathematical detail have assumed that each job goes on each machine at most one time. If we liken the machine shop to the computer, the program to the job, and the computer parts (input channels, processing units and output channels) to the machines of the machine shop, there appears to be a difference between MPS and MSS as described above: whereas a job (equivalent to a program in MPS) in MSS goes on each machine (computer part) at most once, the subprograms of a program can return to a computer part more than once. The latter difference makes for a more complex problem, which has not been studied in mathematical detail in the machine shop scheduling context, but has been discussed under the name of job-lot MSS [7]. In order to gain an insight into MP and see through the maze of intricacies

Journal ArticleDOI
TL;DR: An automatic sequencing procedure for assigning sets of instructions to predesignated autonomous units of a multiprocessor is described and the solution obtained is guaranteed to be feasible although it is not necessarily unique.
Abstract: An automatic sequencing procedure for assigning sets of instructions to predesignated autonomous units of a multiprocessor is described. The procedure is based upon an assignment matrix developed from a precedence matrix. By associating a column vector whose elements are the operation time of each instruction set with the assignment matrix, numerical computation is made possible. A topological index, the precedence number, stating the position of each instruction set in relation to the last set in its path is contained in a second column vector. A transfer matrix in conjunction with an automatically derived path table is employed in a multipath program.Six generalized rules are derived for the procedure which proceeds directly to a solution without obtaining and testing all permutations of the sets of instructions, and seeks to establish a sequence of operations to minimize the total operation time while satisfying all precedence restrictions. Since the procedure does not require looking at the last instruction set before releasing the first sets for assignment to units, computer processing may start after the first assignment period is completed with processing and subsequent sequencing taking place concurrently.The procedure is readily adaptable to computer operation and automatic development of the assignment matrix is described. A flow chart of the procedure and its use for solution of a problem is presented. The solution obtained is guaranteed to be feasible although it is not necessarily unique. In the examples tested, an optimum or near optimum solution has been obtained. Computational experience with the procedure in complex problems is required to determine its effectiveness in such cases.
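A hedged sketch of the general flavor only (not the paper's assignment matrix or its six rules): instruction sets with known operation times and precedence restrictions are greedily assigned to whichever unit frees up first, with a crude proxy standing in for the paper's precedence number as the priority.

```python
import heapq

# durations: {task: operation_time}; preds: {task: set of prerequisite tasks}.
# Assumes the precedence relation is acyclic. Returns each task's finish time.
def schedule(durations, preds, n_units):
    finish = {}
    units = [(0.0, u) for u in range(n_units)]   # (time unit becomes free, unit)
    heapq.heapify(units)
    remaining = dict(preds)
    while remaining:
        ready = [t for t, p in remaining.items()
                 if all(q in finish for q in p)]
        # crude priority proxy: schedule tasks with more prerequisites first
        for t in sorted(ready, key=lambda t: -len(preds[t])):
            free, u = heapq.heappop(units)
            start = max(free, *(finish[q] for q in preds[t]), 0.0)
            finish[t] = start + durations[t]
            heapq.heappush(units, (finish[t], u))
            del remaining[t]
    return finish

print(schedule({"a": 2, "b": 3, "c": 1},
               {"a": set(), "b": set(), "c": {"a", "b"}}, 2))
```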

Journal ArticleDOI
TL;DR: A scheme which is feasible for application on a high speed computer is developed for determining the huge number of regression relations that can arise among subsets of the estimation variables and is based on common sense rather than a theoretical analysis.
Abstract: The data ordinarily available for determining the regression function of a variable to be estimated on the variables used for the estimation consist of a number of multivariate observations, where each observation contains values for the estimation variables and the corresponding value for the estimated variable. However, in many situations involving biological, medical, and other types of data, some of the values for the variables are missing. This can happen among the observations used in determining the regression function and also in the application of the regression function for estimation purposes. This paper presents a method for handling these two problems. The underlying procedure involves the estimation of the missing value for an estimation variable from its regression function on the estimation variables with known values. A scheme which is feasible for application on a high speed computer is developed for determining the huge number of regression relations that can arise among subsets of the estimation variables. This scheme consists in first establishing a basic set of regression relations among sufficiently small subsets of the estimation variables and then determining the remaining regression relations in terms of appropriately weighted sums of functions in the basic relations. Also, for cases where the forms specified for the regression functions are linear in unknown constants, a special type of least-squares curve fitting technique is developed for determining the constants on the basis of incomplete data. The method is aimed at cases where there exist reasonably high correlations among some of the estimation variables and is based on common sense rather than a theoretical analysis. In some cases, the method presented may be inefficient or not meaningful. A discussion of such cases is given at the end of the paper.
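A sketch of the core move on made-up data: a missing value of one estimation variable is estimated from its least-squares regression on the variables with known values, fitted over the complete observations. The data layout and variable roles are illustrative, not the paper's full scheme of basic and derived relations.

```python
import numpy as np

# complete_rows: observations with all values present; partial_row: one
# observation with None marking missing estimation-variable values.
def impute(complete_rows, partial_row):
    known = [j for j, v in enumerate(partial_row) if v is not None]
    missing = [j for j, v in enumerate(partial_row) if v is None]
    X = np.column_stack([np.ones(len(complete_rows)),
                         np.array(complete_rows)[:, known]])
    filled = list(partial_row)
    for j in missing:
        y = np.array(complete_rows)[:, j]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # regression on knowns
        filled[j] = float(beta @ [1.0, *[partial_row[k] for k in known]])
    return filled

rows = [[1.0, 2.0, 3.1], [2.0, 4.1, 6.0], [3.0, 5.9, 9.2]]
print(impute(rows, [2.5, 5.0, None]))   # third value estimated from the others
```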

Journal ArticleDOI
TL;DR: A natural language information processing system is discussed, with high input rate, a great variety of input, and requirements not only for interpretation, storage, and retrieval of the data, but also for the logical processing, correlation, and combination of the data to develop a different body of information for retrieval and analysis.
Abstract: The application of digital computers to the processing of natural language (i.e., non-numerical) data is being examined and attempted along many different lines. The most highly publicized practical applications are translations of text from one language to another, indexing and retrieval schemes for collections of documents, and automatic means of developing indexes or abstracts for individual documents. Several more basic investigations (general problem solvers, information processing languages, and list type memory structures) are also being conducted at our universities and elsewhere and being reported on regularly with much optimism for the future. In this paper we wish to discuss a further application: a natural language information processing system with high input rate, a great variety of input, and requirements not only for interpretation, storage, and retrieval of the data, but also for the logical processing, correlation, and combination of the data to develop a different body of information for retrieval and analysis. Moreover, this system, though designed with one particular practical operation in mind, has characteristics that would make it easily adaptable to many other activities. These basic concepts have been tested in a major operational simulation of a model of the system, which is described also in this paper. The system has been designed as part of a study nicknamed Project ACSI-MATIC. This work is being conducted by RCA to determine the potential uses of modern data-processing equipment and procedures in the activities of certain headquarters military intelligence operations of the Department of the Army. The project is sponsored by the Office of the Assistant Chief of Staff for Intelligence (OACSI), Hdqtrs., Department of the Army.

Journal ArticleDOI
TL;DR: The most practical way discovered for minimizing latency has been to let people do the job manually with a semi-systematic procedure and to use computers for the routine phases and for checking the human output.
Abstract: The need for latency minimization in many programs for certain computers is quite pronounced, as quantitative results indicate. Optimizing computer instructions includes many things besides merely assigning locations optimally. There are rigorous mathematical methods for formulating the latency problem into a series of integer programming problems, and a mechanical technique has also been developed for iterating on chosen locations in an attempt to improve the choices. The most practical way discovered for minimizing latency, however, has been to let people do the job manually with a semi-systematic procedure and to use computers for the routine phases and for checking the human output.

Journal ArticleDOI
C. Y. Lee
TL;DR: Using a simple program structure introduced in a paper by Hao Wang, it is possible to classify subclasses of Turing machines by the deletion of various types of instructions.
Abstract: Using a simple program structure introduced in a paper by Hao Wang, it is possible to classify subclasses of Turing machines by the deletion of various types of instructions. A number of algorithms are given to show how one can convert internal descriptions of machines to classes of programs.

Journal ArticleDOI
TL;DR: It is shown that sets of positive integers “accepted” by finite automata are recursive; and a strengthened form of a theorem of Kleene is proved.
Abstract: This paper compares the notions of Turing machine, finite automaton and neural net. A new notation is introduced to replace net diagrams. “Equivalence” theorems are proved for nets with receptors, and finite automata; and for nets with receptors and effectors, and Turing machines. These theorems are discussed in relation to papers of Copi, Elgot and Wright; Rabin and Scott; and McCulloch and Pitts. It is shown that sets of positive integers “accepted” by finite automata are recursive; and a strengthened form of a theorem of Kleene is proved.

Journal ArticleDOI
Kurt Spielberg
TL;DR: A method of obtaining efficient rational approximations is developed which constitutes a novel approach to this problem, well suited for computer programming; an application to the function log₂ x is given.
Abstract: The first part of this paper is devoted to a discussion of a digital computer program, developed for the IBM 704 computer, which furnishes polynomial approximations with accuracy up to 16 digits for power series or other polynomials. The method used is essentially the economization procedure proposed by C. Lanczos [1, 2], R. C. Minnick [3] and others. In the second part of the paper a method of obtaining efficient rational approximations is developed. It presupposes the existence of the program described in the first part and constitutes, we believe, a novel approach to this problem, well suited for computer programming. Finally, a computer program based on the ideas of part two is described and an application to the function log₂ x is given. The program computes double precision polynomial approximations, and rational and continued fraction approximations; gives error estimates for each approximation; and if so specified compares each approximation with the given function at up to 10^6 points in the interval (−1 ≤ x ≤ 1).
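A sketch of the economization step itself (the example series and the cut point are ours): re-express a truncated power series in the Chebyshev basis on [−1, 1], drop the small high-order terms, and convert back, trading degree for a near-minimax error.

```python
import math
import numpy as np
from numpy.polynomial import chebyshev, polynomial

# Lanczos-style economization of the degree-9 power series for exp(x).
coeffs = [1 / math.factorial(k) for k in range(10)]   # power-series coefficients
cheb = chebyshev.poly2cheb(coeffs)                    # re-express in T_k basis
cheb_trimmed = cheb[:6]                               # drop tiny high-order terms
econ = chebyshev.cheb2poly(cheb_trimmed)              # back to a degree-5 polynomial

x = np.linspace(-1, 1, 201)
err = np.max(np.abs(np.exp(x) - polynomial.polyval(x, econ)))
print(err)   # near-minimax error of a degree-5 polynomial, from a degree-9 start
```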

Journal ArticleDOI
TL;DR: The purpose of this note is to present a number of different types of one-way automata and show that the family of sets of tapes accepted by at least one automaton of a particular type is the same for all types.
Abstract: The term \"au tomaton\" as yet does not have a standard definition in the computer literature. For several different types ( that is, definitions) of automata, the family of those sets of tapes accepted by at least one automaton of a particular type is the same for all types, the so-called family of regular sets [1, 2, 3]. The purpose of this note is to present a number of different types of one-way automata ( tha t is, au tomata that read tapes from left to right only) and then show that the family of sets of tapes accepted by at least one automaton of a particular type is the same for all types. The latter will be accomplished, as in [3], by exhibiting for each automaton of each type an automaton of a previously considered type, and conversely, such tha t both sets of accepted tapes are the same. Let ~ and A be two finite, nonempty, fixed sets, with A being the set of integers h = {0,1, . . . , m l } , where m ~ 2. The typical element of ~ is denoted by I and that of A by E. The elements of ~ and A are called inputs and outputs respectively. Each sequence Ii • • • Ikel , for all k ~ 0, of elements of Z is called a tape. The two symbols K and K* refer to finite, nonempty, not necessarily fixed sets. The elements of K and K* are denoted by p, q, or s, and are called states. The two symbols ~ and ~* refer to functions from a subset of K × ~ into K and a subset of K* X ~ into K*, respectively. If I1 • • • Ik+l is a tape and k = 0, then ~ ( q , I i . . . Ik) is defined to be q for each state q. If k > 0, then ~(q,I~ . . . Ik+~) is defined (when it exists) recursively by

Journal ArticleDOI
TL;DR: The present paper considers a number system in which the two numbers entering a multiplication are transformed into indices, which are added in their own number system.
Abstract: In recent years there has been an interest in unconventional number representations for computer number systems [1, 2]. The present paper considers a system in which the two numbers entering a multiplication are transformed into indices. These indices are added in their own number system. The sum, when converted back, gives the product. As with logarithms, multiplication is thus replaced by the faster process of addition. In fact, aⁿ may be computed by the single operation of multiplying the index of a by n. Using indices that are integers the product is exact, which is not generally true for logarithms. However, the difference of two indices corresponds to the quotient only when the latter is an integer. No easy way has been found for performing division in other cases. Properties of indices corresponding to Mersenne primes are derived and used to save mechanization or storage requirements for the conversion of numbers into indices and vice versa. For a computer with a large word length, the required storage is still quite extensive.
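A worked miniature of the scheme, using the Mersenne prime p = 2⁵ − 1 = 31 with primitive root 3 (both choices are ours, for illustration): every nonzero residue is some power of the root, so multiplication becomes exact addition of indices modulo p − 1, and aⁿ needs only n times the index of a.

```python
# Index arithmetic modulo the Mersenne prime P = 31 with primitive root G = 3.
# index[a] = k means G**k == a (mod P); antilog inverts the table.
P, G = 31, 3
index = {}
a = 1
for k in range(P - 1):
    index[a] = k
    a = a * G % P
antilog = {k: a for a, k in index.items()}

def multiply(a, b):
    # product of residues = antilog of the sum of their indices mod P - 1
    return antilog[(index[a] + index[b]) % (P - 1)]

print(multiply(5, 6), 5 * 6 % P)   # both 30: the product is exact
```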

Journal ArticleDOI
TL;DR: The purpose of this note is to report on some results of actual calculational experiments which were performed to obtain the true values of these eigenvalues in several cases, and to compare them with the previously given estimates.
Abstract: \"Two-line\" iterative schemes for the biharmonic difference equation have been discussed in several recent works. This approach was first suggested by J. Heller [2] who noticed that such schemes are \"three-block schemes\" and hence many advantages are realized. Motivated by Heller's remarks, Varga [5] and the present author [3] independently developed useful methods of solving two-line equations. In that earlier work we also obtained estimates from above on ),a, hE, and hE, the spectral radii of the iteration matrices for the Richardson (simultaneous displacement), Liebmann (successive displacement), and Extrapolated Liebmann (over-relaxation, successive displacement) schemes respectively. More recently [4] we have also obtained estimates from below for these quantities. The purpose of this note is to report on some results of actual calculational experiments which we performed to obtain the true values of these eigenvalues in several cases. These results are then compared with the previously given estimates. In section 2 we discuss the general problem and present a discussion of the problem and earlier results. Our specific calculations together with the results are described in section 3. These calculations were all performed on the IBM 650 of the Research Computing Center, Indiana University. We are indebted to Mrs. Margaret Olsen and Miss Barbara Rose who programmed these calculations.