
Showing papers in "Journal of the ACM in 1962"


Journal ArticleDOI
TL;DR: Here the authors will consider only nonsingular linear integral equations of the first kind, where the known functions h(x), K(x, y) and g(x) are assumed to be bounded and usually to be continuous.
Abstract: where the known functions h(x), K(x, y) and g(x) are assumed to be bounded and usually to be continuous. If h(x) ≡ 0 the equation is of the first kind; if h(x) ≠ 0 for a ≤ x ≤ b, the equation is of the second kind; if h(x) vanishes somewhere but not identically, the equation is of the third kind. If the range of integration is infinite or if the kernel K(x, y) is not bounded, the equation is singular. Here we will consider only nonsingular linear integral equations of the first kind:
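The standard nonsingular first-kind form is g(x) = ∫_a^b K(x, y) f(y) dy. As a hedged illustration (not taken from the paper; function names and the regularization cutoff are ours), the sketch below discretizes such an equation with the trapezoidal rule, turning it into a finite linear system. First-kind equations are notoriously ill-conditioned, which is why a crude truncated-SVD regularization is used instead of a direct solve.

```python
# Minimal sketch (assumed setup, not from the paper): discretize
#   g(x) = \int_a^b K(x, y) f(y) dy
# with the trapezoidal rule, giving  sum_j K(x_i, y_j) w_j f(y_j) = g(x_i).
import numpy as np

def solve_first_kind(K, g, a, b, n, rcond=1e-8):
    """K(x, y) and g(x) are vectorized callables; returns nodes y_j and f(y_j).
    lstsq with an rcond cutoff acts as a crude truncated-SVD regularization."""
    y = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoidal weights
    w[0] = w[-1] = 0.5 * w[0]
    A = K(y[:, None], y[None, :]) * w      # A[i, j] = K(x_i, y_j) * w_j
    f, *_ = np.linalg.lstsq(A, g(y), rcond=rcond)
    return y, f

# Tiny self-test: manufacture g from a known f and try to recover it.
K = lambda x, y: np.exp(x * y)
f_true = lambda y: np.cos(y)
yy = np.linspace(0.0, 1.0, 400)
wy = np.full(yy.size, 1.0 / (yy.size - 1)); wy[0] = wy[-1] = 0.5 * wy[0]
g = lambda x: (K(np.atleast_1d(x)[:, None], yy) * f_true(yy)) @ wy
nodes, f_est = solve_first_kind(K, g, 0.0, 1.0, 40)
```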

1,879 citations


Journal ArticleDOI
Stephen Warshall1
TL;DR: The validity of an algorithm is proved whose running time grows only slightly faster than the square of d, in contrast to algorithms based on boolean matrix products, whose running times grow as the cube of d.
Abstract: Given two boolean matrices A and B, we define the boolean product A ∧ B as that matrix whose (i, j)th entry is ∨_k (a_ik ∧ b_kj). We define the boolean sum A ∨ B as that matrix whose (i, j)th entry is a_ij ∨ b_ij. The use of boolean matrices to represent program topology (Prosser [1] and Marimont [2], for example) has led to interest in algorithms for transforming the d × d boolean matrix M to the d × d boolean matrix M' given by M' = ∨_{s=1}^{d} M^s, where we define M^1 = M and M^{s+1} = M^s ∧ M. The convenience of describing the transformation as a boolean sum of boolean products has apparently suggested the corresponding algorithms, the running times of which increase, other things being equal, as the cube of d. While refraining from comment on the area of utility of such matrices, we prove the validity of an algorithm whose running time goes up slightly faster than the square of d. THEOREM. Given a square (d × d) matrix M each of whose elements m_ij is 0 or 1, define M' by m'_ij = 1 if and only if either m_ij = 1 or there exist integers … 1. Set i = 1. 2. (∀j such that m_ji = 1)(∀k) set m_jk = m_jk ∨ m_ik. … We assert M* = M'. PROOF. Trivially, m*_ij = 1 ⇒ m'_ij = 1: either m_ij was unity initially, in which case m'_ij is surely unity, or m*_ij was set to unity in step two; that is, there were, at some earlier application of step two, … = 1. [Footnotes: 1. Prosser, op. cit. In his definition of boolean sum and product, Prosser uses "∨" for product and "∧" for sum; this is apparently a typographical error, for his subsequent usage is consistent with ours. 2. This definition of M' is trivially equivalent to the previous one. 3. This definition by construction is equivalent to the recursive definition: 0. (M^0)_ij = m_ij; 1. …]
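A compact rendering, in modern notation rather than the paper's, of the procedure stated in the theorem: for each column i, every row j that already reaches i absorbs row i.

```python
# Sketch of Warshall's transitive-closure construction as described above.
# The matrix is modified in place.
def warshall(m):
    """m is a d x d list of lists of 0/1; on return m[j][k] = 1 iff
    node j can reach node k by a path of length >= 1."""
    d = len(m)
    for i in range(d):                                 # step 1: i = 1, ..., d
        for j in range(d):
            if m[j][i]:                                # step 2: all j with m_ji = 1
                for k in range(d):
                    m[j][k] = m[j][k] or m[i][k]       # m_jk := m_jk OR m_ik
    return m

# Example: edges 0->1 and 1->2 imply 0->2 in the closure.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
closure = warshall(adj)   # closure[0][2] == 1
```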

1,684 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to outline a theory of automata appropriate to the properties, requirements and questions of adaptation and to formulate some of the key hypotheses and problems from relevant parts of biology, particularly the areas concerned with molecular control and neurophysiology.
Abstract: The purpose of this paper is to outline a theory of automata appropriate to the properties, requirements and questions of adaptation. The conditions that such a theory should satisfy come from not one but several fields: It should be possible to formulate, at least in an abstract version, some of the key hypotheses and problems from relevant parts of biology, particularly the areas concerned with molecular control and neurophysiology. The work in theoretical genetics initiated by R. A. Fisher [5] and Sewall Wright [24] should find a natural place in the theory. At the same time the rigorous methods of automata theory should be brought to bear (particularly those parts concerned with growing automata [1, 2, 3, 7, 8, 12, 15, 18, 23]). Finally the theory should include among its models abstract counterparts of artificial adaptive systems currently being studied, systems such as Newell-Shaw-Simon's \"General Problem Solver\" [13], Selfridge's \"Pandemonium\" [17], von Neumann's self-reproducing automata [22] and Turing's morphogenetic systems [19, 20]. The theory outlined here (which is intended as a theory and not the theory) is presented in four main parts. Section 2 discusses the study of adaptation via generation procedures and generated populations. Section 3 defines a continuum of generation procedures realizable in a reasonably direct fashion. Section 4 discusses the realization of generation procedures as populations of interacting programs in an iterative circuit computer. Section 5 discusses the process of adaptation in the context of the earlier sections. The paper concludes with a discussion of the nature of the theorems of this theory. Before entering upon the detailed discussion, one general feature of the theory should be noted. The interpretations or models of the theory divide into two broad categories: \"complete\" models and \"incomplete\" models. The \"complete\" models comprise the artificial systems--systems with properties and specifications completely delimited at the outset (cf. the rules of a game). One set of \"complete\" models for the theory consists of various programmed parallel computers. The \"incomplete\" models encompass natural systems. Any natural system involves an unlimited number of factors and, inevitably, the theory can handle only a selected few of these. Because there will always be variables which do not have explicit counterparts in the theory, the derived statements must be approximate relative to natural systems. For this reason it helps greatly that

1,126 citations


Journal ArticleDOI
Richard Bellman1
TL;DR: The travelling salesman problem can be formulated in dynamic programming terms and resolved computationally for up to 17 cities; for larger numbers the method, combined with various simple manipulations, may be used to obtain quick approximate solutions.
Abstract: The well-known travelling salesman problem is the following: "A salesman is required to visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the total distance travelled by the salesman?" The problem has been treated by a number of different people using a variety of techniques; cf. Dantzig, Fulkerson, Johnson [1], where a combination of ingenuity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results in cases with ten cities although some success was achieved in cases of simply four cities. The purpose of this note is to show that this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computational results.
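A minimal sketch of the dynamic programming recursion (names are ours, not the paper's): C(S, j) is the minimum cost of starting at the base city, visiting every city in S exactly once, and ending at j in S. The state space is O(n·2^n), which is why roughly 17 cities is the practical limit mentioned above.

```python
from itertools import combinations

def tsp_dp(dist):
    """dist[i][j] = distance from city i to city j; returns the optimal tour cost.
    Runs in O(n^2 * 2^n) time, practical only for small n."""
    n = len(dist)
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# Example: a 4-city instance; the optimal tour 0-1-3-2-0 costs 80.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
assert tsp_dp(d) == 80
```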

711 citations


Journal ArticleDOI
TL;DR: The purpose of this investigation is to gain some insight into the syntax of POL, in particular ALGOL; the defining scheme for ALGOL turns out to be equivalent to one of the several schemes described by Chomsky in his attempt to analyze the syntax of natural languages.
Abstract: A serious drawback in the application of modern data processing systems is the cost and time consumed in programming these complexes. The user's problems and their solutions are described in a natural language such as English. To utilize the services of a data processor, it is necessary to convert this language description into machine language, to wit, program steps. Recently, attempts have arisen to bridge the gap between these two languages. The method has been to construct languages (called problem oriented languages, or POL) that are (i) rich enough to allow a description of a set of problems and their solutions; (ii) reasonably close to the user's ordinary language of description and solution; and (iii) formal enough to permit a mechanical translation into machine language. COBOL and ALGOL are two examples of POL. The purpose of this investigation is to gain some insight into the syntax of POL, in particular ALGOL [1]. Specifically, the method of defining constituent parts of ALGOL 60 is abstracted, thus giving rise to a family of sets of strings, and mathematical facts about the resulting family are deduced. Now an ALGOL-like definable language (we hesitate to use the inclusive term "POL") may be viewed either as one of these sets (the set of sentences); or else, as a finite collection of these sets, one of which is the set of sentences, and the remaining, the constituent parts of the language used to construct the sentences. This is in line with one current view of natural languages [4, 5, 6]. The defining scheme for ALGOL turns out to be equivalent to one of the several schemes described by Chomsky [6] in his attempt to analyze the syntax of natural languages. Of course, POL, as special kinds of languages, should fit into a general theory of language. However, it is reasonable to expect that POL, as artificial languages contrived so as to be capable of being mechanically translated into machine language, should have a syntax simpler than that of the natural languages. The technical results achieved in this paper are as follows. Two families of sets (of strings), the family of definable sets and the family of sequentially definable sets, are described. Definable sets are obtained from a system of simultaneous equations, all the equations being of a certain form. This system, essentially parallel in nature, is an abstraction of the ALGOL method of description. Definable sets turn out to be identical to the type 2 languages (with identity) introduced by Chomsky [6]. Sequentially definable sets are obtained from a system …
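A standard textbook illustration, not an example from the paper, of a definable set arising from a single equation of the ALGOL/BNF kind: the least solution of

```latex
X \;=\; \{ab\} \,\cup\, aXb
\qquad\text{is}\qquad
X \;=\; \{\, a^{n}b^{n} : n \ge 1 \,\},
```

a type 2 (context-free) language in Chomsky's sense.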

298 citations


Journal ArticleDOI
W. Doyle1
TL;DR: Some ideas for position- and size-invariant two-dimensional pattern recognition lead, in some cases, to easily mechanized operations and an application to detection of straight lines is proposed.
Abstract: Some ideas for position- and size-invariant two-dimensional pattern recognition are discussed. They lead, in some cases, to easily mechanized operations. An application to detection of straight lines is proposed. (Author)

237 citations


Journal ArticleDOI
TL;DR: A digital computer receives information about a plane black-and-white pattern, and is to decide on the basis of this information whether the pattern is "similar", in some sense to be specified below, to a given prototype.
Abstract: The problem which concerns us may be stated as follows: A digital computer receives information about a plane black-and-white pattern, and is to decide on the basis of this information whether the pattern is "similar", in some sense to be specified below, to a given prototype. It is usually assumed that the patterns considered are of bounded size, say each contained in a given rectangle. Thus a pattern may be defined as any subset of the points in the rectangle, namely, the subset of "black" points. The information given to the computer must be of finite length; usually the rectangle is covered by a finite number of cells, and the information to the computer amounts to signals indicating whether any given cell is white or black. Patterns used as prototypes may be, for example, a ring, the letter "A", or the like. The recognition of alphabetic and numerical characters is of particular practical importance. What constitutes "similarity" varies widely from case to case, and the methods which the machine uses in recognizing similarity must vary accordingly. To give two extreme examples, one might call two patterns similar only if they agree point for point (or rather, cell for cell); or else, one might admit as similar two patterns which are topologically equivalent (after defining a suitable topology in the space of cells), so that e.g. any simple closed curve would be called similar to a circle. Many intermediate definitions are possible. In particular, for the recognition of printed characters one will wish to admit as similar two patterns if they differ at most in the following respects: (A) Location (B) Size (C) "Stretching" and "Squeezing" in either X- or Y-direction. Mathematically, these are affine transformations preserving the X- and Y-directions, i.e., transformations of the form X* = aX + b, Y* = cY + d. The special case a = c = 1 corresponds to translations, i.e. to (A). (D) While one might consider rotation through a small angle as admissible, rotations through large angles are obviously not; e.g. the characters "6" and "9" should be recognized as different. It seems preferable to omit rotation entirely and admit in its place "slanting" such as occurs in italic type by comparison with roman. Mathematically, this is characterized as an affine transformation which leaves the points of the X-axis fixed. Combinations of the transformations (A) and (D) …
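In matrix form, the admissible transformations (A)-(D) above combine, on our reading rather than as a quotation from the paper, into a single upper-triangular affine map:

```latex
\begin{pmatrix} X^{*} \\ Y^{*} \end{pmatrix}
=
\begin{pmatrix} a & k \\ 0 & c \end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix}
+
\begin{pmatrix} b \\ d \end{pmatrix},
\qquad a, c > 0.
```

Here a = c = 1, k = 0 gives the pure translation of (A); general a, c > 0 give the stretching and squeezing of (B)-(C); and k ≠ 0 with a = c = 1, b = d = 0 gives the italic-like slant of (D), which leaves the points of the X-axis fixed.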

203 citations


Journal ArticleDOI
TL;DR: This paper introduces an abstract entity, the binary search tree, and exhibits some of its properties, which are relevant to processes occurring in stored program computers-in particular, to search processes.
Abstract: This paper introduces an abstract entity, the binary search tree, and exhibits some of its properties. The properties exhibited are relevant to processes occurring in stored program computers, in particular to search processes. The discussion of this relevance is deferred until Section 2. Section 1 constitutes the body of the paper. Section 1.1 consists of some mathematical formulations which arise in a natural way from the somewhat less formal considerations of Section 2.1. The main results are Theorem 1 (Section 1.2) and Theorem 2 (Section 1.3). The initial motivation of the paper was an actual computer programming problem. This problem was the need for a list which could be searched efficiently and also changed efficiently. Section 2.1 contains a description of this problem and explains the relevance to its solution of the results of Section 1. Section 2.2 contains an application to sorting. The reader who is interested in the programming applications of the results but not in their mathematical content can profit by reading Section 2 and making only those few references to Section 1 which he finds necessary.
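A minimal sketch (not the paper's own formulation) of the binary search tree as a list structure that can be both searched and changed efficiently; the in-order traversal at the end corresponds to the sorting application mentioned for Section 2.2.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key, returning the (possibly new) root; duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Return True iff key is in the tree; follows a single root-to-node path."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

def in_order(root):
    """Yield the keys in ascending order (the sorting application)."""
    if root is not None:
        yield from in_order(root.left)
        yield root.key
        yield from in_order(root.right)

root = None
for k in [41, 20, 65, 11, 29, 50]:
    root = insert(root, k)
assert search(root, 29) and not search(root, 30)
assert list(in_order(root)) == [11, 20, 29, 41, 50, 65]
```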

125 citations


Journal ArticleDOI
Adin D. Falkoff1
TL;DR: It is shown that there is a hierarchy of dependency among these algorithms, that they appear in pairs with each member of a pair belonging to one or the other of two distinct classes, and that every type of search can be executed within each class.
Abstract: The underlying logical structure of parallel-search memories is described; the characteristic operation of three major types is displayed in the execution of searches based on equality; and algorithms are presented for searches based on other specifications including maximum, minimum, greater than, less than, nearest to, between limits, and ordering (sorting). It is shown that there is a hierarchy of dependency among these algorithms, that they appear in pairs with each member of a pair belonging to one or the other of two distinct classes, and that every type of search can be executed within each class.

91 citations


Journal ArticleDOI
TL;DR: Backus has developed an elegant method of defining the well-formed formulas of computer languages such as ALGOL, based on a finite alphabet, predicates, and productions.
Abstract: Backus [1] has developed an elegant method of defining well-formed formulas for computer languages such as ALGOL. It consists of (our notation is slightly different from that of Backus): a finite alphabet a1, a2, …, at; predicates P1, P2, …, Pϵ; and productions, either of the form (a) aj ∈ Pi; …

77 citations



Journal ArticleDOI
TL;DR: Two aims of this paper are (1) to find the least number of comparisons needed for sorting n items within the original locations (within the "source file"), and (2) to provide an algorithm employing the least number of comparisons for obtaining a sorted file.
Abstract: An important problem in sorting is to arrange item keys in monotonic ascending order within the internal memory of a digital computer. An excellent survey of some known methods of dealing with this problem is presented in [2], and the reader's familiarity with the main concepts of that paper is presupposed. For very large files, say in the range of 10^4 to 10^5 items, it is worthwhile to consider schemes which require only a minimum of working storage space, that is, only n locations where n is the largest number of item keys which together with the program can be stored in the internal memory. Two available schemes which are workable within n locations, namely exchanging and selection (with exchange), both entail a very large number of comparisons, in fact ≈ n(n − 1)/2. Two-way merging [1, 3, 4], which requires only n[log2 n] comparisons, where [log2 n] is the next integer greater than log2 n, is vastly more economical in point of operations, but requires 2n locations at least, and thus may be ruled out for large n even on the biggest available computers. Two aims of this paper are (1) to find the least number of comparisons needed for sorting n items within the original locations (within the "source file"), and (2) to provide an algorithm employing the least number of comparisons for obtaining a sorted file. Answers, which we conjecture are complete solutions, to (1) and (2) are given in the following Sections 2-6, and comparisons with other methods are presented in Section 7.
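One scheme that works within the original n locations while keeping the comparison count near n log2 n is binary insertion; we do not claim it is the paper's algorithm, but it illustrates the trade-off the abstract describes: few comparisons, at the price of many record moves.

```python
# Hedged illustration (not necessarily the paper's method): binary insertion
# sorts in place using roughly n*log2(n) comparisons but O(n^2) record moves.
def binary_insertion_sort(a):
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        lo, hi = 0, i
        while lo < hi:                     # binary search for the insertion point
            mid = (lo + hi) // 2
            comparisons += 1
            if a[mid] <= key:
                lo = mid + 1
            else:
                hi = mid
        a[lo + 1:i + 1] = a[lo:i]          # shift to make room (the movement cost)
        a[lo] = key
    return comparisons

data = [5, 2, 9, 1, 7, 3]
n_cmp = binary_insertion_sort(data)
assert data == [1, 2, 3, 5, 7, 9]
```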


Journal ArticleDOI
TL;DR: For a certain class of automata a necessary and sufficient condition, in terms of the group of the automaton, is given for insuring that an automaton can be represented as a direct product.
Abstract: This paper pursues a discussion of certain algebraic properties of automata and their relationship to the structure (i.e., properties of the next state function) of automata. The device which is used for this study is the association of a group with each automaton. We introduce functions on automata and study the group of an automaton, a representation for the group elements and the direct product of automata. Finally, for a certain class of automata a necessary and sufficient condition, in terms of the group of the automaton, is given for insuring that an automaton can be represented as a direct product.

Journal ArticleDOI
TL;DR: It is shown that a mode of activity where identical subsets fire repeatedly and periodically may exist in either of the above cases even when the network is made up of elements quite widely different in properties.
Abstract: Networks of cells having properties similar to those of biological neurons have been demonstrated to be capable of supporting self-maintaining activity, using both theoretical and simulation techniques. Different types of steady-state and oscillatory activity are considered and related to the network parameters: connectivity, latent summation period and decay, refractiveness and threshold decay. It is shown that a mode of activity where identical subsets fire repeatedly and periodically may exist in either of the above cases even when the network is made up of elements quite widely different in properties. The correlation of these results with certain physiological studies, and their possible function, are discussed briefly in the conclusion.

Journal ArticleDOI
TL;DR: An information retrieval scheme based upon Lazarsfeld's latent class analysis, which has mathematical foundations, is proposed; the similarity of questionnaire analysis to document classification suggests that the mathematical rationale for the former could also provide a useful theoretical basis for the latter.
Abstract: The application of digital computers to the tasks of document classification, storage and retrieval holds considerable promise for solving the so-called “library problem.” Due to the high-speed and data handling characteristics of digital computers, a number of different approaches to the “library problem” have been placed in operation [4]. Although existing systems are rather rudimentary when compared with the ultimate goal of an automated library, progress towards that goal has been made in several areas: the organization of a mass of documents through automatic indexing schemes; the retrieval from a mass of documents of only those documents related to an information request made by a user of the library. A high proportion of existing document retrieval systems is based upon the author's background and skill rather than upon a mathematical model. Although allowing considerable success in the initial stages of development, the heuristic approach has a limited potential unless an underlying mathematical rationale can be found. Therefore, the present paper proposes an information retrieval based upon Lazarsfeld's latent class analysis [11], which has mathematical foundations. Although latent class analysis was developed by Lazarsfeld [11] to analyze questionnaires, the similarity of this task and document classification suggests that the mathematical rationale for the former could also provide a useful theoretical basis for the latter.Because the number of words contained in even a moderately sized report can exceed the capacity of most computers, some form of data reduction is a necessity. The reduction usually results in one of three levels of abstraction: abstracts of documents, key or topical words which convey the meaning of the document or abstract, and indices or tags based upon key words which are then assigned to the document. In general, indexing systems either assign key words to the document or use several key words to assign tags or indices to the documents. The key words or tags then serve as basic information for a retrieval system. Until a radical change in the data handling characteristics of computers is made, it would appear likely that key words or tags will continue to serve as the raw data for information retrieval systems. Although considerable uniformity exists in basic data introduced into an automated library, many different approaches exist as to the subsequent processing of the data. Several papers are reviewed below, which illustrate some of the considerations that enter into the development of an information retrieval system.Maron and Kuhns [8] have developed the “probabilistic indexing” scheme, which reduces the number of documents searched yet increases the retrieval of appropriate documents. In this approach, a large mass of source documents was read by human reviewers and key words were selected. The key words were then pooled into a few well-defined categories. However, any given key word could appear in more than one category. The resulting categories were then assigned meaningful labels or tags which constituted an index term list. The source documents were then re-inspected and the appropriate tag or tags assigned to the document.Document retrieval using the probabilistic indexing scheme is accomplished by presenting the computer with a series of tags and a value of a relevance number below which documents are not of sufficient importance to be retrieved. 
The tags locate the document, and the value of the corresponding relevance number compared to the lower bound value determines if the document should be retrieved.The high degree of dependence of the probabilistic indexing scheme upon human reviewers greatly reduces the efficiency of the method. If the number of documents, key words and tags were large, a human reviewer would not be able to maintain a consistent frame of reference when assigning tags and relevance numbers. The unique contribution of the probabilistic indexing scheme, however, is the use of relevance numbers in conjunction with the indices. The number provides a basis for determining the relevance of the stored documents to the indexed terms used by the requester of information.Stiles [10] had also reported the use of an association factor to accompany the index terms assigned to a document. The factor used expressed the discrepancy of the observed joint occurrence from the expected joint occurrence of an index pair, assuming independence. The association factor employed was the k2 value obtained from a two-fold contingency table involving the pair of index terms. A correlation coefficient, such as tetracortic r which expresses the correlation within the two-fold table, rather than a chi-square value expressing lack of independence would have been more appropriate in the present context. Stiles [10], however, reports that the use of the association factor was found to improve document retrieval.A more intensive study of the inter-relationships among words within a document was performed by Doyle [2]. The joint occurrences of word pairs in a body of 600 documents served as the basic data of the study. Two types of word correlations were found to exist within word pairs: adjacent correlations, resulting from words which appeared in pairs due to the nature of our language; and proximal correlation, due to words which are logically related but appear at non-adjacent positions within a document. The statistical effects of these two correlations were denoted by language redundancy and reality redundancy. In addition, a third type of redundancy, documentation redundancy resulted when more than one document could be classified by a given set of key words. The effect of language redundancy can be reduced by pooling adjacent key words and treating the pair as a single key word, thus eliminating the redundancy. Documentation redundancy would be reduced by pooling similar documents and assigning a single label to the batch, thus eliminating unnecessary duplication of effort. Reality redundancy, however, is the result of the author's cognitive processes, and the degree to which the literature researcher can duplicate this redundancy determines how successfully the original document can be retrieved. This study indicates that an important function in an information retrieval system is machinery for reducing the effects of language and documentation redundancy so that important relationships are not obscured.The results of the three studies reviewed above indicated document retrieval can be improved if the documents are surveyed for document redundancy and if the relationships among the key words are filtered to remove language redundancy. In addition, the use of a relevance number relating the document and key words appears to increase the efficiency of document retrieval.

Journal ArticleDOI
P. E. Chase1
TL;DR: The stability properties of predictor-corrector algorithms are investigated for an increased range of integration intervals, and it is necessary to make a clear distinction between two modes of application.
Abstract: Predictor-corrector methods furnish attractive algorithms for the numerical solution of ordinary differential equations because of the relatively small number of derivative evaluations required. For example, fourth degree predictor-corrector methods require two derivative evaluations per integration step while the corresponding Runge-Kutta fourth degree algorithm requires four derivative evaluations per integration step. In order to use these methods, however, an appropriate number of starting points must be provided in addition to the initial point and they must be obtained by another method. One of the key factors to be considered in the selection of a particular predictor-corrector method is the stability of the numerical algorithm. This is particularly crucial when the differential equations being solved correspond to a system with a forcing function whose time duration or period is relatively long compared to the transient time constants of the system. Considerable effort has been directed toward the development of algorithms having improved stability characteristics [1, 2, 3, 4]. Much of the prior work in this field relates only to the limiting properties of algorithms as the interval of integration approaches zero. In many applications, such as the one described above, one needs additional information to infer the operating characteristics of a given algorithm. Dahlquist [2, 3] defines an algorithm as strongly unstable unless all of the characteristic roots are equal to or less than unity in absolute value as the integration interval approaches zero, with the additional requirement that the roots of unit magnitude be simple. He obtains the important result that the degree of an algorithm cannot exceed its order by more than two without encountering strong instability. Both Dahlquist [3] and Henrici [5] study further the error propagation in the immediate vicinity of zero interval through the investigation of certain growth parameters. If an algorithm is strongly stable but exhibits undesirable error growth in the immediate vicinity of zero interval it is called weakly or conditionally unstable. Hamming [4], and Crane and Lambert [1], have synthesized corrector algorithms which are stable over an increased range of integration intervals. In this paper the stability properties of predictor-corrector algorithms are investigated for an increased range of integration intervals. It is necessary to make a clear distinction between two modes of application.
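A minimal sketch of the predictor-corrector structure under discussion, using a low-order Adams pair rather than the specific fourth-degree formulas analyzed in the paper; it shows the two derivative evaluations per step that make such methods cheaper than fourth-order Runge-Kutta, and the need for an extra starting value obtained by another method.

```python
# Sketch (assumed low-order pair, not the paper's formulas): PECE mode with an
# Adams-Bashforth 2 predictor and a trapezoidal (Adams-Moulton) corrector.
import math

def pece(f, t0, y0, y1, h, steps):
    """Solve y' = f(t, y).  Needs one extra starting value y1 (e.g. from RK)."""
    ts, ys = [t0, t0 + h], [y0, y1]
    fs = [f(t0, y0), f(t0 + h, y1)]
    for n in range(1, steps):
        t, y = ts[n], ys[n]
        y_pred = y + h * (1.5 * fs[n] - 0.5 * fs[n - 1])   # predict (AB2)
        f_pred = f(t + h, y_pred)                          # evaluate (1st of 2)
        y_corr = y + h / 2 * (fs[n] + f_pred)              # correct (trapezoidal)
        ts.append(t + h)
        ys.append(y_corr)
        fs.append(f(t + h, y_corr))                        # evaluate (2nd of 2)
    return ts, ys

# Example: y' = -y, y(0) = 1, exact solution exp(-t); y1 supplied exactly.
h = 0.1
ts, ys = pece(lambda t, y: -y, 0.0, 1.0, math.exp(-h), h, 50)
```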

Journal ArticleDOI
TL;DR: The main result shows that the group of operation-preserving transformations of a strongly connected automaton onto itself is isomorphic to a group of subsets of input sequences under a certain operation.
Abstract: This paper is motivated by Fleck's study [1] on certain classes of structure-preserving, nontrivial transformations of automata. In that paper the class of those transformations which preserve "strongly-connectedness" is completely characterized. An interesting subclass, the class of operation-preserving functions (which are essentially homomorphisms), is introduced there. Fleck showed that the set of all operation-preserving functions of an automaton A onto itself constitutes a group G(A). In [2] some of the properties of G(A) when A is strongly connected were studied. It was shown in the latter paper that corresponding to every finite group G of regular permutations there is a strongly connected automaton A for which G = G(A). Since, in fact, the group G(A) determines the structure of A, it would appear that the structure of G(A) and of A should be related. The present paper investigates that relationship. The main result shows that the group of operation-preserving transformations of a strongly connected automaton onto itself is isomorphic to a group of subsets of input sequences under a certain operation.

Journal ArticleDOI
TL;DR: An elaboration of the concepts of ALGOL 60 is given, mostly with the help of illustrative examples.
Abstract: ALGOL 60 is a universal, algebraic, machine-independent programming language. It was designed by a group representing computer societies from many different countries. Its primary aims are: (1) simplification of program preparation, (2) simplification of program exchange, and (3) incorporation of the important programming techniques presently known. An elaboration of the concepts of ALGOL 60 is given, mostly with the help of illustrative examples.

Journal ArticleDOI
TL;DR: An algorithm for scanning Boolean expressions that takes a complex, relational expression and transforms it into an optimal set of computing steps and is advantageous in that it fits into a general scheme for the translation of statements to machine language.
Abstract: This paper describes an algorithm for scanning Boolean expressions. Such an algorithm takes a complex, relational expression and transforms it into an optimal set of computing steps. The result is optimal in the sense that no redundant evaluations are made. The particular algorithm described is advantageous in that, as a variant of a well-known arithmetic scan, it fits into a general scheme for the translation of statements to machine language. Consistent with this arithmetic expression scan, which is included as a starting point for the development, this Boolean scan does not require the re-ordering of subexpressions.
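A sketch in the spirit of the scan described above (the representation and names are ours, not the paper's): a Boolean expression tree is flattened into a list of test-and-jump steps, so each relational subexpression is evaluated at most once and only when its outcome can still affect the result.

```python
def count_tests(expr):
    if isinstance(expr, str):
        return 1
    return count_tests(expr[1]) + (count_tests(expr[2]) if expr[0] != 'not' else 0)

def gen(expr, base, on_true, on_false):
    """Return steps (test, target_if_true, target_if_false) starting at index base.
    Targets are step indices or the final outcomes 'TRUE' / 'FALSE'."""
    if isinstance(expr, str):
        return [(expr, on_true, on_false)]
    op = expr[0]
    if op == 'not':
        return gen(expr[1], base, on_false, on_true)
    left, right = expr[1], expr[2]
    right_start = base + count_tests(left)
    right_code = gen(right, right_start, on_true, on_false)
    if op == 'and':                       # left true  -> go evaluate right operand
        left_code = gen(left, base, right_start, on_false)
    else:                                 # 'or': left false -> go evaluate right
        left_code = gen(left, base, on_true, right_start)
    return left_code + right_code

def run(code, env):
    """Evaluate compiled steps against env, a dict mapping test names to booleans."""
    target = 0
    while target not in ('TRUE', 'FALSE'):
        test, on_true, on_false = code[target]
        target = on_true if env[test] else on_false
    return target == 'TRUE'

# (a<b AND c<d) OR e=f compiles to three steps, none evaluated redundantly.
code = gen(('or', ('and', 'a<b', 'c<d'), 'e=f'), 0, 'TRUE', 'FALSE')
assert run(code, {'a<b': True, 'c<d': False, 'e=f': True}) is True
```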

Journal ArticleDOI
TL;DR: In this article, an interpretive system for automatic formal manipulation of polynomials by a digital computer is presented, which is used for the solution of certain types of problems which require formal manipulation.
Abstract: An interpretive system for automatic formal manipulation of polynomials by a digital computer is presented. Its purpose is to make practical the solution of certain types of problems which require formal manipulation of polynomials. For example, it can be used for the formal solution of systems of polynomial equations. The manipulations of the system are those producing the sum, difference, product, remainder after division, greatest common factor, and eliminant of two polynomials in any reasonable number of variables. Euclid's Algorithm is used for the greatest common factor and the eliminant. Applications are discussed and examples are given. The system has been programmed for an IBM 650.
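A much-reduced sketch of the manipulations listed above, restricted to one variable with exact rational coefficients (the paper's system handles several variables and also computes eliminants); Euclid's algorithm is used for the greatest common factor, as in the paper.

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (highest-degree end)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(p, q):
    """Quotient and remainder of p divided by q (q nonzero)."""
    p, quo = list(p), []
    while p and len(p) >= len(q):
        c = Fraction(p[-1]) / Fraction(q[-1])
        d = len(p) - len(q)
        quo.insert(0, c)
        for i, b in enumerate(q):
            p[i + d] -= c * b
        trim(p)
    return quo, p

def gcd_poly(p, q):
    """Euclid's algorithm for the greatest common factor, returned monic."""
    while q:
        _, r = divmod_poly(p, q)
        p, q = q, r
    return [c / p[-1] for c in p] if p else p

# (x^2 - 1) and (x^2 + 2x + 1) share the factor (x + 1); coefficients are
# stored lowest degree first.
p = [Fraction(-1), Fraction(0), Fraction(1)]
q = [Fraction(1), Fraction(2), Fraction(1)]
assert gcd_poly(p, q) == [Fraction(1), Fraction(1)]
```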

Journal ArticleDOI
TL;DR: The Jacobi method of diagonalization and an available program utilizing an improved technique for its execution on existing computers are described, and the predicted increases in speed due to the organization and parallelism, and then with the superimposed effect of higher speed circuitry, are evaluated.
Abstract: The design of a special purpose computer to operate in parallel with a general purpose computer to accelerate the diagonalization of real symmetric matrices is described. The entire system operates in a configuration described as the "Fixed-Plus-Variable" Structure Computer [1] such that the same elements used for the special computer may be reorganized for other problem applications. As a vehicle for this study it is assumed that problem properties dictated the choice of Jacobi's method. The Jacobi method of diagonalization and an available program utilizing an improved technique for its execution on existing computers are described. The bases of decisions leading to design of the special purpose computer are explained. The nature of the supervisory control, which coordinates the activities of the general and special purpose computers, is detailed. The predicted increases in speed due to the organization and parallelism, and then with the superimposed effect of higher speed circuitry, are evaluated.
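For orientation, a brief serial sketch of the Jacobi method being accelerated: repeatedly rotate away the largest off-diagonal element until the matrix is numerically diagonal. This is plain NumPy with none of the parallelism the paper's special-purpose machine provides.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=10000):
    """A is real symmetric; returns (eigenvalues, eigenvector matrix)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)   # largest off-diagonal
        if off[p, q] < tol:
            break
        # rotation angle chosen to annihilate A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J          # similarity transform preserves eigenvalues
        V = V @ J                # accumulate eigenvectors
    return np.diag(A), V

vals, vecs = jacobi_eigen([[4.0, 1.0, 0.0],
                           [1.0, 3.0, 1.0],
                           [0.0, 1.0, 2.0]])
```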


Journal ArticleDOI
David D. Morrison1
TL;DR: The results of P. Henrici on the asymptotic behavior of the truncation error are used in order to obtain the simpler problem, under the assumption that the functions involved are sufficiently smooth so that the results of Henrici are valid.
Abstract: In the integration of a system of ordinary differential equations, the simplest approach is to use a fixed step size. However, over some parts of the range of integration it is generally possible to take a larger step size without seriously affecting the "local truncation error." This gives rise to the currently popular "halving and doubling" method, in which one changes the step size in such a way as to keep the local truncation error more or less constant. This, however, is not necessarily optimal since a small local truncation error in some parts of the range of integration can give rise to a large total truncation error. The basic problem is then to choose the step size in an optimal way; i.e. for a fixed number of mesh points, how should one distribute the mesh points in order to achieve the smallest error at the end of the range of integration. (One might ask instead that the integral of the square of the truncation error over the range of integration should be minimized instead of the error at the end of the interval. This is reasonable when the error over the entire range is of interest instead of simply the error at the end. This problem does not seem to have a simple closed form solution and we will not discuss it here.) The problem in this generality is extremely difficult; hence we will first approximate it by a simpler problem, and we will solve the simpler problem completely. Specifically, we use the results of P. Henrici [1] on the asymptotic behavior of the truncation error in order to get the simpler problem. In order to solve the simpler problem we make the following assumptions: (1) There is only one differential equation. (Otherwise, a simple closed form solution such as given here does not seem to exist; instead one has an unpleasant integral equation to solve.) (2) The (approximate) local truncation error has one sign throughout the range of integration. (Otherwise, the solution becomes very strange; one may find that it is necessary to make as large an error as possible over some parts of the range of integration.) (3) The functions involved are sufficiently smooth so that the results of Henrici are valid. We will also make some further smoothness assumptions as we go along. For practical investigation one can sometimes weaken these assumptions. Thus if there …

Journal ArticleDOI
TL;DR: The present paper generalizes the methods of Hartmanis and Stearns to produce, efficiently and systematically, for machines of the aforementioned class, assignments in which state variable dependence is optimally reduced.
Abstract: An important step in the design of finite state sequential machines is the assignment of binary variables to represent their internal states. In [1] J. Hartmanis studied the problem of determining economical state assignments. In [2] he and R. E. Stearns generalized many of the results of [1]. Assignments obtained by the methods of [1] and [2] reduce dependence among the state variables of any sequential machine in which reduction is possible. These methods, however, will not guarantee the production of assignments with a maximal reduction of state variable dependence. The present paper generalizes these methods to produce efficiently and systematically, for machines of the aforementioned class, assignments in which state variable dependence is optimally reduced.

Journal ArticleDOI
TL;DR: It is suggested that the introduction of an extra parameter as a coefficient of an additional term in the corrector formula might lead to a more general theory of stability, at least for a certain range of the interval of integration.
Abstract: Introduction. In recent years there has been much study of the predictor-corrector difference equation method of numerically solving a differential equation of the form y' = f(x, y). Hamming [1] and others have shown that some predictor-corrector methods, of which the Milne method is best known, are always unstable in that errors introduced at some stage of the solution are not damped out in succeeding stages. Milne and Reynolds [3] have devised methods to reduce or control the magnitude of the oscillations caused by instability. By use of a generalized corrector formula, Hamming [1] has shown that it is possible to derive a stable corrector formula, at least for a certain range of the interval of integration. In Hamming's excellent paper, it is suggested that the introduction of an extra parameter as a coefficient of an additional term in the corrector formula might lead to a more general theory of stability. It is our purpose to develop this more general theory and to offer a method of analysis somewhat different from Hamming's.

Journal ArticleDOI
TL;DR: The basic elements of the theory of files are defined, the "derivatives of a file" are introduced, and the scheme of the functional relation is defined.
Abstract: In this paper are defined the basic elements of the theory of files which was presented in [23]. In particular, a system language is considered, which allows the use of mathematical methods for the description of procedures in which non-numerical information processing is of primary importance. This language is called the Algebraic Data System Language. The main features of the language are the following: The Algebraic Data System Language uses logico-mathematical techniques to control the data flow. A procedure description consists of a sequence of statements, which are not in a biunique correspondence with the boxes of any flow diagram of the machine control; in other words, the description of a procedure is asynchronous with respect to the manner in which the procedure is carried out by the machine. The schemes by which any problem is solved, and in particular the scheme of the data flow, are determined by the computer, in accordance with its internal structure and input-output equipment. Procedure descriptions are consequently independent of the hardware with which they are carried out. A problem description consists of the layout of the input data and of the output results (data description) and of a set of equations (procedure description) which relate these results to the data. The human element is eliminated not only in coding, but also in the design of flow diagrams. The Algebraic Data System Language provides for a shorthand reference to functions which are discretely defined by tables, and for the stating of recursive operations concerned with information organized in tables, which avoids any explicit setting of loops or tallies. The features which are original with the Algebraic Data System Language and not shared by other system languages are discussed thoroughly in the paper; when no need of special treatment arises, only references are made. In Section 1 the basic definitions are given, the boolean and ordinary expressions and the conditions are discussed, and the first examples of statements of the Algebraic Data System Language are exhibited. Section 2 is concerned with the table-functions, the handling of tables, and the recursive operations. Section 3 deals with the information to be processed. Files are defined, the "derivatives of a file" are introduced, and the scheme of the functional relation …

Journal ArticleDOI
TL;DR: An algorithm is given for expressing a matrix A of the convex set 𝔄(R, C) as an average of vertex matrices, together with an algorithm for the construction of the vertex matrices of that set.
Abstract: This paper is concerned with convexity properties of m × n matrices whose entries are non-negative real numbers and whose row and column sums are specified positive numbers. Such matrices make their appearance in an important linear programming problem known as the Hitchcock or transportation problem. The determination of the vertices of such sets of matrices enables one to obtain all solutions of the transportation problem when the solution is not uniquely determined. Let r1, r2, …, rm, c1, c2, …, cn be positive numbers such that Σi ri = Σj cj. Call R = (r1, r2, …, rm) and C = (c1, c2, …, cn) row and column sum vectors. Let 𝔄(R, C) be the class of all m × n matrices A with non-negative entries such that the sums of the entries in the ith row and jth column of A are ri and cj respectively. The set of matrices 𝔄(R, C) forms a convex set. A matrix A of the set 𝔄(R, C) is called a vertex matrix if there do not exist matrices B and C in 𝔄(R, C) and a number a, 0 < a < 1, such that … λi > 0 for i = 1, 2, …, r, Σi λi = 1 and A = Σi λi Bi. Our main result is an algorithm for expressing a matrix A of the set 𝔄(R, C) as an average of vertex matrices, together with an algorithm for the construction of vertex matrices. Some special results are exhibited in the case of generalized doubly stochastic matrices, i.e. matrices for which r1 = r2 = ⋯ = rm and c1 = c2 = ⋯ = cn. There is an intimate connection between matrices with non-negative entries and bipartite graphs. By a bipartite graph we mean the following structure. There is a system K consisting of three sets: two vertex sets S and T (whose elements are denoted by si and tj respectively) and a set of edges E which is a subset of the Cartesian product S × T. Each edge of E will be denoted by a pair (s, t) with s in S and t in T. The term rank of a graph K is the largest number p of edges in a set (s1, t1), (s2, t2), (s3, t3), …, (sp, tp), no two of which have a vertex in common. Any such set of p edges is called a maximal set of independent edges. In connection with a bipartite graph K, the concepts of minimal covering, reducibility, connectivity, and cycle play an important role, and we define these as follows: Let A be a subset of S, and B a subset of T. The pair [A, B] is said to cover …
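A standard construction, not necessarily the paper's, of one vertex matrix of the set 𝔄(R, C): the northwest-corner rule fills entries greedily so that each step exhausts a row sum or a column sum. The nonzero entries of the result contain no cycle in the associated bipartite graph, which is what characterizes the vertex matrices.

```python
def northwest_corner(R, C):
    """R and C are lists of positive row and column sums with equal totals."""
    assert sum(R) == sum(C), "row and column sums must balance"
    m, n = len(R), len(C)
    R, C = list(R), list(C)                 # work on copies
    A = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        x = min(R[i], C[j])
        A[i][j] = x
        R[i] -= x
        C[j] -= x
        if R[i] == 0 and i < m - 1:
            i += 1                          # row exhausted: move down
        else:
            j += 1                          # column exhausted: move right
    return A

vertex = northwest_corner([3, 5, 2], [4, 4, 2])
# [[3, 0, 0],
#  [1, 4, 0],
#  [0, 0, 2]]
```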

Journal ArticleDOI
TL;DR: Improved methods are presented, one using the stabilization idea of Milne and Reynolds, another using stable formulas as proposed by Hamming; among the basic fifth-order corrector (closed type) formulas it is clear that C2, Simpson's rule, has the simplest form and also the least truncation error.
Abstract: The term \"fifth-order methods\" is here applied to predict-correct methods using open and closed quadrature formulas having truncation errors proportional to the fifth power of the step length. The first part of the paper continues the investigation of R. W. Hamming [1] and of Milne and Reynolds [2, 3] relative to sgability. Improved methods are presented-one using the stabilization idea of Milne and Reynolds, another using stable formulas as proposed by Hamming. These methods use \"four-point\" formulas in contrast to \"three-point\" formulas derived by T. E. Hull and A. C. R. Newbery [4]. The second part of the paper treats the case where only one substitution is performed per step, The question of stability for this case is investigated and the accuracy is compared with the case of two substitutions per step. It is shown that f(~r some methods and with suitable restrictions on step length it is much better to use only one substitution per step than to double the step length and retain two substitutions per step. Backgr(~und Most methods for the numerical treatment of ordinary differential equations fall into two classes: (A) the methods of Runge and the various modifications due to Kutta, NystrSm, Gill, and others; and (B) methods such as those of Adams, Moulton, and their followers, based on some type of quadrature formulas. Before electric desk calculators became generally available, it was natural to express the formulas in terms of differences in order to avoid laborious multiplications. With the coming of desk calculators the use of quadrature formulas in terms of ordinates became practical but it was still important to select the simplest possible formulas consistent with reasonable accuracy. This consideration motivated Milne's choice of predictor and corrector formulas in his original paper [5]. For in the following list of basic fifth-order predictor (open type) formulas it is clear that P~ has the simplest form and, moreover, happens to have the least truncation error. Likewise among the basic fifth-order eorrector (closed type) formulas we note that C2, Simpson's rule, has the simplest form and also the least truncation error. These two therefore were obvious choices. BASIC FIFTH-ORDER FORMULAS [T = hSy(5)/5760, h =-step length]

Journal ArticleDOI
Sheldon Sobel1
TL;DR: Oscillating Sort, designed to gain an (N-1)-way merge with N merging tapes available, is further designed for read-backwards tapes, and if read-write overlap is available, for cross channel switching, i.e. the ability for any tape to be read while any other tape is being written.
Abstract: Oscillating Sort, designed to gain an (N−1)-way merge with N merging tapes available, is further specifically designed for read-backwards tapes and, if read-write overlap is available, for cross channel switching, i.e. the ability for any tape to be read while any other tape is being written. Furthermore, this technique is compatible with replacement sorting techniques. The Oscillating Sort Technique: Oscillating Sort begins in a conventional manner with "Phase 1" (internal sort) developing strings of sequenced records. However, after "Phase 1" has created N − 1 strings (one string on each of N − 1 tapes) the sort goes into a merging phase. The N − 1 strings are read backwards and merged onto the available tape, N. This tape is then tapemarked. The other N − 1 tapes are at load point. Control is now transferred back to the internal sort portion of the program. The next string is written onto tape N after the tapemark. The next N − 2 strings are written on any N − 2 of the N − 1 tapes that are still available. The control is again transferred to the merging portion of the program and these N − 1 strings are merged onto the remaining available merge tape. This process is continued until each of N − 1 tapes has had N − 1 sequences merged onto it. Therefore, we have created N − 1 sequences from (N − 1)^2 sequences. At this point, the N − 1 sequences are merged (each of the N − 1 tapes is at the tapemark following the sequence and the available tape is at load point) onto the available tape and then a tapemark is written on this tape. The process now begins again, and finally another tape will contain a sequence formed from (N − 1)^2 sequences. At the point that each of N − 1 tapes contains a sequence formed from (N − 1)^2 original sequences, the N − 1 sequences are again merged onto the available tape. This iterative process continues until all the input records have gone through the internal sort. At this time, a partial merging pass may be required followed by a final merge operation onto the output tape. Figure 1 describes graphically the sorting technique. Advantages of Oscillating Sort: Table 1 shows, for several different tape configurations, the power of merge of Oscillating Sort and three other sort merging techniques, namely Balanced … Balanced merging has been available for a considerable period of time and has been revised and improved by several people. Specific credit for this system is difficult to give.