# Showing papers in "Journal of the ACM in 1964"

••

TL;DR: In this paper the notion of a derivative of a regular expression is introduced and the properties of derivatives are discussed; this leads, in a very natural way, to the construction of a state diagram from a regular expression containing any number of logical operators.

Abstract: Kleene's regular expressions, which can be used for describing sequential circuits, were defined using three operators (union, concatenation and iterate) on sets of sequences. Word descriptions of problems can be more easily put in the regular expression language if the language is enriched by the inclusion of other logical operations. However, in the problem of converting the regular expression description to a state diagram, the existing methods either cannot handle expressions with additional operators, or are made quite complicated by the presence of such operators. In this paper the notion of a derivative of a regular expression is introduced and the properties of derivatives are discussed. This leads, in a very natural way, to the construction of a state diagram from a regular expression containing any number of logical operators.
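The derivative construction can be sketched in a few lines. This is a minimal toy reconstruction of the idea (nested-tuple regexes and the standard nullable/derivative rules), not the paper's own notation; states of the resulting diagram are the distinct derivatives.

```python
# Minimal sketch of regular-expression derivatives. Regexes are nested
# tuples: ('empty',) matches nothing, ('eps',) the empty word,
# ('sym', c) one symbol, plus ('alt', r, s), ('cat', r, s), ('star', r).

def nullable(r):
    """True iff r matches the empty word."""
    tag = r[0]
    if tag == 'empty': return False
    if tag == 'eps':   return True
    if tag == 'sym':   return False
    if tag == 'alt':   return nullable(r[1]) or nullable(r[2])
    if tag == 'cat':   return nullable(r[1]) and nullable(r[2])
    if tag == 'star':  return True

def deriv(r, c):
    """Derivative of r with respect to symbol c: all w with cw in L(r)."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'sym':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)

def matches(r, word):
    """Match by repeated differentiation; the finitely many derivatives
    (up to similarity) are the states of the state diagram."""
    for c in word:
        r = deriv(r, c)
    return nullable(r)

# (ab)* as a nested-tuple regex
ab_star = ('star', ('cat', ('sym', 'a'), ('sym', 'b')))
```

Because derivatives are defined compositionally, extra logical operators (intersection, complement) would only add cases to `deriv` and `nullable`, which is the point of the paper's approach.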

962 citations

••

General Electric

^{1}TL;DR: The ideas are generalized so that the resulting solution is a smooth composite of parametric surface segments, i.e. each surface piece is represented by a vector (point)-valued function.

Abstract: The problem of defining a smooth surface through an array of points in space is well known. Several methods of solution have been proposed. Generally, these restrict the set of points to be one-to-one defined over a planar rectangular grid (X, Y-plane). Then a set of functions Z = F(X, Y) is determined, each of which represents a surface segment of the composite smooth surface. In this paper, these ideas are generalized to include a much broader class of permissible point array distributions: namely (1) the point arrangement (ordering) is topologically equivalent to a planar rectangular grid, (2) the resulting solution is a smooth composite of parametric surface segments, i.e. each surface piece is represented by a vector (point)-valued function. The solution here presented is readily applicable to a variety of problems, such as closed surface body definitions and pressure envelope surface definitions. The technique has been used successfully in these areas and others, such as numerical control milling, Newtonian impact and boundary layer.
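A vector-valued surface patch of the kind described can be illustrated with a bilinearly blended patch: four boundary curves sharing corners are blended into one parametric piece S(u, v). The blending scheme and the boundary curves below are illustrative choices, not the paper's own construction.

```python
import math

# Blend four boundary curves (with consistent corners) into one
# vector (point)-valued surface piece S(u, v).

def lerp(p, q, t):
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def blended_patch(c_u0, c_u1, c_0v, c_1v):
    """Return S(u, v) interpolating the four boundary curves:
    c_u0(u) = S(u,0), c_u1(u) = S(u,1), c_0v(v) = S(0,v), c_1v(v) = S(1,v)."""
    p00, p01 = c_0v(0.0), c_0v(1.0)
    p10, p11 = c_1v(0.0), c_1v(1.0)
    def S(u, v):
        # ruled surfaces in each direction, minus the bilinear corner term
        ruled_u = lerp(c_0v(v), c_1v(v), u)
        ruled_v = lerp(c_u0(u), c_u1(u), v)
        corners = lerp(lerp(p00, p01, v), lerp(p10, p11, v), u)
        return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, corners))
    return S

# Example boundaries: a patch with one arched edge.
c_u0 = lambda u: (u, 0.0, math.sin(math.pi * u))  # arched bottom edge
c_u1 = lambda u: (u, 1.0, 0.0)
c_0v = lambda v: (0.0, v, 0.0)
c_1v = lambda v: (1.0, v, 0.0)
S = blended_patch(c_u0, c_u1, c_0v, c_1v)
```

Since each piece is a point-valued function of (u, v) rather than a height field Z = F(X, Y), nothing restricts the point array to lie one-to-one over a planar grid, which is the generalization the abstract emphasizes.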

230 citations

••

TL;DR: It is shown that the order of these generalized predictor-corrector methods is not subject to the above restrictions; stable k-step schemes with p = 2k + 2 have been constructed for k ≤ 4, and it is proved that methods of order p actually converge like h^p uniformly in a given interval of integration.

Abstract: The order p which is obtainable with a stable k-step method in the numerical solution of y′ = f(x, y) is limited to p = k + 1 by the theorems of Dahlquist. In the present paper the customary schemes are modified by including the value of the derivative at one “nonstep point;” as usual, this value is gained from an explicit predictor. It is shown that the order of these generalized predictor-corrector methods is not subject to the above restrictions; stable k-step schemes with p = 2k + 2 have been constructed for k ≤ 4. Furthermore it is proved that methods of order p actually converge like hp uniformly in a given interval of integration. Numerical examples give some first evidence of the power of the new methods.
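The predictor-corrector pattern the paper generalizes can be sketched with a classic second-order pair: an Adams-Bashforth 2-step predictor whose value feeds a trapezoidal corrector. This is only the basic PC scheme, not the paper's modified higher-order methods with an off-step derivative evaluation.

```python
import math

# Second-order predictor-corrector for y' = f(x, y):
# AB2 predictor, trapezoidal (Adams-Moulton) corrector.

def pc2(f, x0, y0, h, n):
    xs, ys = [x0], [y0]
    # one Euler step to prime the two-step predictor
    ys.append(y0 + h * f(x0, y0))
    xs.append(x0 + h)
    for i in range(1, n):
        x, y = xs[i], ys[i]
        f_prev, f_cur = f(xs[i - 1], ys[i - 1]), f(x, y)
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_prev)       # explicit predictor
        y_corr = y + 0.5 * h * (f_cur + f(x + h, y_pred))   # corrector
        xs.append(x + h)
        ys.append(y_corr)
    return xs, ys

# y' = y, y(0) = 1, integrated to x = 1 (exact answer: e)
xs, ys = pc2(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

The paper's point is that adding one derivative evaluation at a "nonstep point" breaks the Dahlquist barrier p ≤ k + 1 for stable k-step schemes; the code above merely shows where such an extra evaluation would slot into the predictor-corrector loop.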

161 citations

••

Bell Labs

^{1}TL;DR: SNOBOL is a programming language for the manipulation of strings of symbols; a statement in the SNOBOL language consists of a rule that operates on symbolically named strings.

Abstract: SNOBOL is a programming language for the manipulation of strings of symbols. A statement in the SNOBOL language consists of a rule that operates on symbolically named strings. The basic operations are string formation, pattern matching and replacement. Facilities for integer arithmetic, indirect referencing, and input-output are included. In the design of the language, emphasis has been placed on a format that is simple and intuitive. SNOBOL has been implemented for the IBM 7090.
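The three basic operations named in the abstract (string formation, pattern matching, replacement) have rough modern analogues. The sketch below is Python with the `re` module, not SNOBOL syntax; it only mirrors the style of rules acting on symbolically named strings.

```python
import re

# Symbolically named strings, as a dictionary of names to values.
strings = {"SUBJECT": "HELLO, WORLD"}

# String formation (concatenation).
strings["GREETING"] = strings["SUBJECT"] + "!"

# Pattern matching: does the subject contain the pattern "WORLD"?
matched = re.search(r"WORLD", strings["SUBJECT"]) is not None

# Replacement: substitute for the matched pattern in the named string.
strings["SUBJECT"] = re.sub(r"WORLD", "SNOBOL", strings["SUBJECT"])
```

In SNOBOL itself a single statement bundles the subject name, the pattern and the replacement into one rule; here the three steps are simply written out separately.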

117 citations

••

TL;DR: The sufficiency of these conditions is proved constructively leading to a method for the synthesis of combinational networks containing static hazards as specified, and the section on non-series-parallel contact networks also includes a brief discussion of the applicability of lattice matrix theory to hazard detection.

Abstract: This paper is concerned with the study of static hazards in combinational switching circuits by means of a suitable ternary switching algebra. Techniques for hazard detection and elimination are developed which are analogous to the Huffman-McCluskey procedures. However, gate and series-parallel contact networks are treated by algebraic methods exclusively, whereas a topological approach is applied to non-series-parallel contact networks only. Moreover, the paper derives necessary and sufficient conditions for a ternary function to adequately describe the steady-state and static hazard behavior of a combinational network. The sufficiency of these conditions is proved constructively leading to a method for the synthesis of combinational networks containing static hazards as specified. The section on non-series-parallel contact networks also includes a brief discussion of the applicability of lattice matrix theory to hazard detection. Finally, hazard prevention in contact networks by suitable contact sequencing techniques is discussed and a ternary map method for the synthesis of such networks is explained.
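The ternary-algebra idea can be made concrete with a small sketch: values 0, 1 and an "in transition" value 1/2, with AND = min, OR = max, NOT x = 1 − x. The circuit below (f = xz + yz′, the textbook static-hazard example) and the hazard test are illustrative, not taken from the paper.

```python
# Ternary switching algebra on {0, 1/2, 1}.
def t_and(a, b): return min(a, b)
def t_or(a, b):  return max(a, b)
def t_not(a):    return 1 - a

def f(x, y, z):
    # gate network f = x*z + y*z'
    return t_or(t_and(x, z), t_and(y, t_not(z)))

def static_hazard(x, y):
    """Static hazard on the z transition with x, y held fixed: both stable
    outputs agree, yet the ternary evaluation with z = 1/2 is uncertain."""
    before, after = f(x, y, 0), f(x, y, 1)
    return before == after and f(x, y, 0.5) == 0.5
```

For x = y = 1 the output is 1 on both sides of the z transition, but the ternary value during the transition is 1/2, flagging the classic static-1 hazard; adding the redundant term xy would remove it.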

104 citations

••

TL;DR: It is pointed out in several theorems that programs of finitely determined instructions are properly more powerful if address modification is permitted than when it is forbidden, thereby shedding some light on the role of address modification in digital computers.

Abstract: A new class of machine models as a framework for the rational discussion of programming languages is introduced. In particular, a basis is provided for endowing programming languages with semantics. The notion of Random-Access Stored-Program Machine (RASP) is intended to capture some of the most salient features of the central processing unit of a modern digital computer. An instruction of such a machine is understood as a mapping from states (of the machine) into states. Some classification of instructions is introduced. It is pointed out in several theorems that programs of finitely determined instructions are properly more powerful if address modification is permitted than when it is forbidden, thereby shedding some light on the role of address modification in digital computers. The relation between problem-oriented languages (POL) and machine languages (ML) is briefly considered.
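The role of address modification can be illustrated with a toy stored-address machine: with indirect addressing, one fixed program can read from and write to addresses computed at run time. The instruction set below is invented for illustration; it is not the paper's formal RASP.

```python
# A toy random-access machine with direct and indirect (address-modifying)
# instructions, operating on an accumulator and a memory list.

def run(program, memory):
    acc = 0
    for op, arg in program:
        if op == "LOAD":     acc = memory[arg]            # direct addressing
        elif op == "STORE":  memory[arg] = acc
        elif op == "LOADI":  acc = memory[memory[arg]]    # indirect: address
        elif op == "STOREI": memory[memory[arg]] = acc    # computed at run time
        elif op == "ADDC":   acc += arg                   # add a constant
    return memory

# Fetch from whatever address cell 0 points at, store the value at the
# address cell 1 points at, then bump the source pointer.
program = [("LOADI", 0), ("STOREI", 1),
           ("LOAD", 0), ("ADDC", 1), ("STORE", 0)]
memory = [4, 6, 0, 0, 42, 0, 0]   # cell 0 -> address 4, cell 1 -> address 6
memory = run(program, memory)
```

Without `LOADI`/`STOREI`, every address in the program is fixed, so a finite program touches only a bounded, predetermined set of cells; with them, the same instructions can range over memory, which is the extra power the theorems address.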

97 citations

••

TL;DR: This constitutes a major revision of the paper "Minimal Boolean Expressions with More than Two Levels of Sums and Products," presented at the Third Annual Symposium, Chicago, October 1962.

Abstract: An approach to the problem of multilevel Boolean minimization is described. The conventional prime implicant is generalized to the multilevel case, and the properties of multilevel prime implicants are investigated. A systematic procedure for computing multilevel prime implicants is described, and several examples are worked out. It is shown how "absolutely minimal" forms can be obtained by carrying out multilevel minimization to a sufficiently large number of levels.

1. Introduction. Boolean forms can be classified by the number of levels of sums and products they contain; e.g. a sum of products or a product of sums has two levels, whereas a sum of products of sums has three. There are a number of well-known methods for finding minimal two-level forms for a given function. However, there has been comparatively little progress in the development of methods for finding minimal forms with more than two levels; some references to prior work are appended to this paper. This paper describes an approach to the problem of finding minimal N-level forms and "absolutely minimal" forms. The techniques described are applicable to any and all Boolean functions, and are suitable for automatic computation. The approach generalizes two-level minimization procedures, and it is assumed that the reader is familiar with these methods.

2. Definitions and Notation. Attention is restricted to Boolean forms in which only individual letters are complemented. Definition 1. (1) Expressions composed of single literals, e.g. "x", "x̄", "y" and "ȳ", are the only zero-level forms. (2) If φ1, φ2, ..., φp (p ≥ 1) are N-level forms, then φ1 ∨ φ2 ∨ ... ∨ φp is a sum of N-level forms and (φ1) ∧ (φ2) ∧ ... ∧ (φp) is a product of N-level forms. (3) Sums of N-level forms and products of N-level forms are the only (N + 1)-level forms. For brevity, we refer to N-level forms as N-forms, and to sums and products of N-level forms as sN-forms and pN-forms. It is very important to note that, by definition, every N-form is an (N + 1)-form, being both an sN-form and a pN-form. Boolean functions are denoted by the letters f, g and h, and canonical terms of functions by their decimal equivalents; the function defined by a given Boolean form φ is denoted by |φ|. An incompletely specified Boolean function, or incomplete function, is represented by a pair (f0, f1). (1) P is a sufficient set of prime sN-implicants for (f0, f1) if, for every h ⊇ f1, all minimal sN-forms for (f0, h) are contained in P. (2) P is a necessary set of prime sN-implicants for (f0, f1) if and only if for every h ⊇ f1, at least one minimal sN-form for (f0, h) is contained in P (if such a form exists).

THEOREM 2'. Let P be a sufficient set of prime s(N − 1)-implicants for (f0, f1). Then every minimal pN-form for (f0, f1) is a product of forms contained in P.

THEOREM 3'. Let P be a necessary set of prime s(N − 1)-implicants for (f0, f1). Then there exists at least one minimal pN-form for (f0, f1) that is a product of forms contained in P.

4. Computing Necessary and Sufficient Sets of Prime Implicants. The proposed approach to minimization is recursive: 2-level minimization is the basis for 3-level minimization, 3-level minimization is the basis for 4-level minimization, and so forth. It is apparent that this recursive approach will be impracticable if it is necessary to carry out pN-minimization for all possible (f0, h) to compute either a necessary or a sufficient set of prime pN-implicants for (f0, f1). To avoid this difficulty, we shall make use of the following lemmas, which follow immediately from the definitions.

This constitutes a major revision of the paper, "Minimal Boolean Expressions with More than Two Levels of Sums and Products," presented at the Third Annual Symposium on Switching Circuit Theory and Logical Design, AIEE Fall General Meeting, Chicago, October 1962. The revision was supported by Air Force Contract 33(657)-7811. (Eugene L. Lawler, Journal of the Association for Computing Machinery, Vol. 11, No. 3, July 1964, pp. 283-295.)

88 citations

••

IBM

^{1}TL;DR: A somewhat generalized version of the notion of schema is defined, in a language similar to that used in finite automata theory, and a simple algorithm for the equivalence problem solved by Ianov is presented.

Abstract: Ianov has defined a formal abstraction of the notion of program which represents the sequential and control properties of a program but suppresses the details of the operations. For these schemata he defines a notion corresponding to computation and defines equivalence of schemata in terms of it. He then gives a decision procedure for equivalence of schemata, and a deductive formalism for generating schemata equivalent to a given one. The present paper is intended, first as an exposition of Ianov's results and simplification of his method, and second to point out certain generalizations and extensions of it. We define a somewhat generalized version of the notion of schema, in a language similar to that used in finite automata theory, and present a simple algorithm for the equivalence problem solved by Ianov. We also point out that the same problem for an extended notion of schema, considered rather briefly by Ianov, is just the equivalence problem for finite automata, which has been solved, although the decision procedure is rather long for practical use. A simple procedure for generating all schemata equivalent to a given schema is also indicated.

87 citations

••

TL;DR: The representation of the Turing machine in the present system has a lower degree of exponentiation, which may be of significance in applications, and these systems seem to be of value in establishing unsolvability of combinatorial problems.

Abstract: By a simple direct construction it is shown that computations done by Turing machines can be duplicated by a very simple symbol manipulation process. The process is described by a simple form of Post canonical system with some very strong restrictions. This system is monogenic: each formula (string of symbols) of the system can be affected by one and only one production (rule of inference) to yield a unique result. Accordingly, if we begin with a single axiom (initial string) the system generates a simply ordered sequence of formulas, and this operation of a monogenic system brings to mind the idea of a machine. The Post canonical system is further restricted to the "Tag" variety, described briefly below. It was shown in [1] that Tag systems are equivalent to Turing machines. The proof in [1] is very complicated and uses lemmas concerned with a variety of two-tape nonwriting Turing machines. The proof here avoids these otherwise interesting machines and strengthens the main result, obtaining the theorem with a best possible deletion number P = 2. Also, the representation of the Turing machine in the present system has a lower degree of exponentiation, which may be of significance in applications. These systems seem to be of value in establishing unsolvability of combinatorial problems.
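A tag system is easy to state in code: read the first letter, delete P letters from the front, append the word that letter maps to. The production rules below are an illustrative 2-tag system (a well-known Collatz-like example), not the paper's Turing-machine construction; only the deletion number P = 2 matches the theorem.

```python
# A monogenic "tag" process: one production applies at each step.
RULES = {"a": "bc", "b": "a", "c": "aaa"}
P = 2  # deletion number; the paper obtains the best possible P = 2

def step(word):
    """Apply the single applicable production once."""
    return word[P:] + RULES[word[0]]

def run(word, max_steps=10_000):
    """Iterate until the word is too short to delete P letters."""
    for _ in range(max_steps):
        if len(word) < P:
            return word
        word = step(word)
    raise RuntimeError("step budget exhausted")
```

Monogenicity is visible in `step`: the first letter alone determines the unique successor formula, so the system generates a simply ordered sequence of words, like a machine's computation.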

81 citations

••

TL;DR: The secant method for solving F(x) = 0 is a two-point iterative method; its order of convergence is shown to be (√5 + 1)/2.

Abstract: The secant method for solving F(x) = 0 is a two-point iterative method. The formula is

x_i = F(x_{i−1}, x_{i−2}) = (x_{i−1} f(x_{i−2}) − x_{i−2} f(x_{i−1})) / (f(x_{i−2}) − f(x_{i−1})).   (1.1)

Its order of convergence has been shown [1, 3, 5] to be (√5 + 1)/2. An r-point iterative method is defined by a function F for which x_i = F(x_{i−1}, ..., x_{i−r}). The definition of order of convergence does not depend on the choice of norm, since for any two norms |·|_1, |·|_2 of a finite dimensional space there exist positive constants c_1, c_2 such that c_1 |X|_2 ≥ |X|_1 ≥ c_2 |X|_2 for all X. LEMMA 1. If the inequality |Z_i| ≤ c |Z_{i−1}|^{a_1} ⋯ |Z_{i−r}|^{a_r} is satisfied with a_1 + ⋯ + a_r = a > 1 and a_j ≥ 0 (j = 1, ..., r), and if |Z_i| < min(c^{−1/(a−1)}, 1), (1.4) then Z_i converges to 0. Moreover, the order of convergence is v, where v is the positive root of x^r = a_1 x^{r−1} + ⋯ + a_r.
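The iteration (1.1) is short to implement. The sketch below uses the algebraically equivalent update form x_i = x_{i−1} − f(x_{i−1})(x_{i−1} − x_{i−2})/(f(x_{i−1}) − f(x_{i−2})), which is less prone to cancellation than the product formula; the stopping rule is an added practical detail, not from the paper.

```python
# Secant iteration for f(x) = 0 from two starting points.
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:          # flat secant line: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # sqrt(2)
```

Each step costs one new function evaluation, and successive errors satisfy roughly |e_i| ≈ C |e_{i−1}| |e_{i−2}|, from which Lemma 1 with r = 2, a_1 = a_2 = 1 yields the order (√5 + 1)/2.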

49 citations

••

IBM

^{1}TL;DR: It is shown that a finite-state completely specified automaton possesses this property if and only if there exists a finite sequence which is a universal synchronizer for the automaton.

Abstract: Some properties of automata are investigated which are capable of limiting the effect of input errors on their behavior. First, necessary and sufficient conditions are derived for an automaton to be capable of always being resynchronized within a bounded number of input letters after an error has occurred, and then the results are specialized to finite-state completely specified automata. Automata are investigated which are capable of being resynchronized with probability one, and it is shown that a finite-state completely specified automaton possesses this property if and only if there exists a finite sequence which is a universal synchronizer for the automaton. Some connections with similar problems for variable-length codes are indicated.

••

IBM

^{1}TL;DR: The problem of developing systems of logical inference for natural languages is discussed, and an example of such an analysis of a sublanguage of English is presented.

Abstract: Information Retrieval systems may be classified either as Document Retrieval systems or Fact Retrieval systems. It is contended that at least some of the latter will require the capability for performing logical deductions among natural language sentences. The problem of developing systems of logical inference for natural languages is discussed, and an example of such an analysis of a sublanguage of English is presented. An experimental Fact Retrieval system which incorporates this analysis has been programmed for the IBM 7090 computer, and its main algorithms are stated.

••

TL;DR: A procedure for computing A+ is given which consists of a variant of the gradient projection method; it is equivalent to a Hestenes conjugate directions method in the special case r = m and may be applied to any complex matrix.

Abstract: A number of methods have been proposed for computing the (Moore-Penrose-Bjerhammar) generalized inverse A+ of an arbitrary m by n complex matrix A of rank r ≤ m ≤ n. Boot, Ben-Israel and Wersan have recently published methods which require the formation either of AA* or BB*, where B is an r by n submatrix of A having rank r. In this paper a procedure for computing A+ is given which consists of a variant of the gradient projection method. The procedure, which is equivalent to a Hestenes conjugate directions method in the special case r = m, may be applied to any complex matrix. The basic procedure requires the application of the Gram-Schmidt orthogonalization process, first to the column vectors (a(i)*) of A*, then, if A is not of full row rank, to the column vectors (c(j)) of A.
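To make the Gram-Schmidt connection concrete, here is a sketch for the easy special case of a real matrix of full column rank, where A+ = R⁻¹Qᵀ from the Gram-Schmidt QR factorization. The paper's procedure handles arbitrary complex matrices of any rank; this toy deliberately does not.

```python
# Generalized inverse of a real, full-column-rank matrix via
# classical Gram-Schmidt orthogonalization.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gram_schmidt_qr(A):
    cols = transpose(A)
    q, r = [], [[0.0] * len(cols) for _ in cols]
    for j, v in enumerate(cols):
        w = v[:]
        for i in range(j):
            r[i][j] = sum(a * b for a, b in zip(q[i], v))
            w = [a - r[i][j] * b for a, b in zip(w, q[i])]
        r[j][j] = sum(a * a for a in w) ** 0.5
        q.append([a / r[j][j] for a in w])
    return transpose(q), r   # Q: orthonormal columns; R: upper triangular

def pinv_full_column_rank(A):
    Q, R = gram_schmidt_qr(A)
    n = len(R)
    # invert R column by column via back-substitution, then apply Q^T
    Rinv = [[0.0] * n for _ in range(n)]
    for j in range(n):
        e = [1.0 if i == j else 0.0 for i in range(n)]
        x = [0.0] * n
        for i in reversed(range(n)):
            x[i] = (e[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
        for i in range(n):
            Rinv[i][j] = x[i]
    return matmul(Rinv, transpose(Q))
```

For a full-column-rank A this yields the left inverse property A+A = I, one of the Moore-Penrose conditions; handling rank-deficient A is exactly where the paper's second Gram-Schmidt pass comes in.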

••

TL;DR: It is concluded that, while there is no significant difference in the predictive efficiency between the Bayesian and the Factor Score methods, automatic document classification is enhanced by the use of a factor-analytically-derived classification schedule.

Abstract: This study reports the results of a series of experiments in the techniques of automatic document classification. Two different classification schedules are compared along with two methods of automatically classifying documents into categories. It is concluded that, while there is no significant difference in the predictive efficiency between the Bayesian and the Factor Score methods, automatic document classification is enhanced by the use of a factor-analytically-derived classification schedule. Approximately 55 percent of the documents were automatically and correctly classified.

••

TL;DR: A general algorithm for solving the nonlinear programming problem is presented, consisting of a modified direct search routine coupled to a newly developed multiple gradient summation technique for handling the constraints.

Abstract: A general algorithm for solving the nonlinear programming problem is presented. Comprised of a modified direct search routine coupled to a newly developed multiple gradient summation technique for handling the constraints, the algorithm converges to either a constrained or unconstrained local optimum for fairly loose restrictions on the forms of the objective function and the constraints, and consequently will converge to a global optimum if the objective function is concave and the constraints convex. A FORTRAN code was developed from this algorithm and several small-scale test problems have been solved.
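The flavor of a direct search that handles constraints can be sketched with a simple coordinate (compass) search applied to a penalized objective. This is a generic stand-in under stated assumptions; the paper's "multiple gradient summation" technique for the constraints is not reproduced here.

```python
# Direct search on a penalized objective for
#   minimize f(x)  subject to  g(x) <= 0 for each constraint g.

def penalized(f, constraints, mu):
    """Exterior quadratic penalty: feasible region is g(x) <= 0."""
    def F(x):
        return f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return F

def coordinate_search(F, x, step=1.0, shrink=0.5, tol=1e-8):
    """Try +/- step along each coordinate; shrink the step when stuck."""
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] += d
                if F(y) < F(x):
                    x, improved = y, True
        if not improved:
            step *= shrink
    return x

# minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2
f = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
g = lambda v: v[0] + v[1] - 2
x_opt = coordinate_search(penalized(f, [g], mu=100.0), [0.0, 0.0])
```

The unconstrained optimum (1, 2) is infeasible, so the search settles near the projection (0.5, 1.5) on the constraint boundary; with a convex objective and convex constraints, as the abstract notes, such a local optimum is global.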

••

TL;DR: A pseudorandom normal number generator is proposed and its performance evaluated, indicating a close agreement with random normal theory.

Abstract: A pseudorandom normal number generator is proposed and its performance evaluated. Normal numbers are produced by first generating two uniform random numbers, each by a different mixed congruential generator, and then transforming them to two normal numbers. Eight million of these pseudorandom normal numbers are tested both for normality and randomness. The results of these tests indicate a close agreement with random normal theory.
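The scheme described, two uniform deviates from two different mixed congruential generators, transformed to two normals, can be sketched directly. The Box-Muller transform and the multiplier/increment constants below are common textbook choices; the paper's exact transformation and constants are not reproduced here.

```python
import math

M = 2 ** 32  # modulus for both mixed congruential generators

def make_mixed_lcg(a, c, seed):
    """x_{i+1} = (a*x_i + c) mod M, scaled into the open interval (0, 1)."""
    state = seed
    def nxt():
        nonlocal state
        state = (a * state + c) % M
        return (state + 1) / (M + 1)
    return nxt

# Two *different* mixed congruential generators, as in the abstract.
u1 = make_mixed_lcg(1664525, 1013904223, seed=12345)
u2 = make_mixed_lcg(22695477, 1, seed=67890)

def normal_pair():
    """Box-Muller: two uniforms in, two independent normals out."""
    r = math.sqrt(-2.0 * math.log(u1()))
    theta = 2.0 * math.pi * u2()
    return r * math.cos(theta), r * math.sin(theta)

samples = [x for _ in range(5000) for x in normal_pair()]  # 10000 deviates
```

A crude check on "close agreement with random normal theory" is that the sample mean and variance should be near 0 and 1; the paper's eight-million-sample tests are of course far more thorough.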

••

TL;DR: The theorems and the efficient algorithm are used to solve illustrative problems in the areas of computer programming, solving mathematical problems, personnel selection, and medical diagnosis.

Abstract: This paper is concerned with a class of procedures for making true-false decisions which depend on the outcome of a sequence of elementary, binary tests. Certain of these procedures, called s-procedures, are conveniently represented by Boolean expressions (whose variables represent the elementary, binary tests) and are easily carried out. If we are given the cost of applying each elementary, binary test and the (a priori) probability of its outcome, the (average) cost of a procedure may be computed. The general problem treated is to find efficiently a minimum-cost procedure equivalent to a given s-procedure with known costs and probabilities for its elementary, binary tests. Among the facts proved are two theorems showing that, under certain conditions, the s-procedure found by a specified, efficient algorithm has minimum cost. The theorems and the efficient algorithm are used to solve illustrative problems in the areas of computer programming, solving mathematical problems, personnel selection, and medical diagnosis.
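The cost question can be illustrated on the simplest s-procedure, a pure conjunction of independent tests evaluated until one fails. For this case a classic exchange argument (not necessarily the paper's algorithm) says to apply tests in increasing order of cost / P(test fails); the costs and probabilities below are invented.

```python
from itertools import permutations

def expected_cost(order, cost, p_true):
    """Average cost when tests run in 'order' until one fails (or all pass)."""
    total, p_reach = 0.0, 1.0
    for t in order:
        total += p_reach * cost[t]
        p_reach *= p_true[t]
    return total

def greedy_order(cost, p_true):
    """Ratio rule: cheap tests that are likely to fail go first."""
    return sorted(cost, key=lambda t: cost[t] / (1.0 - p_true[t]))

cost = {"t1": 4.0, "t2": 1.0, "t3": 2.0}
p_true = {"t1": 0.9, "t2": 0.5, "t3": 0.8}
best = greedy_order(cost, p_true)
```

Brute-forcing all orderings confirms the ratio rule on this instance; the paper's contribution is proving minimality of an efficient algorithm for general s-procedures, not just conjunctions.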

••


TL;DR: Some preliminary tests indicate that such a method would improve the accuracy of character recognition systems, and should be coupled with other procedures to obtain truly effective, easily implemented and reliable character recognition.

Abstract: Tests indicate that use of certain cryptanalysis techniques shows promise as a method to improve character recognition by computer. All languages exhibit certain peculiar characteristics such as letter combinations, frequency of occurrence of letters, and initial and terminal letters of words. These attributes can therefore be used to improve character recognition. Tests run on English text indicate that the use of statistics on the occurrence of two-letter combinations can appreciably improve character recognition in the presence of noise in the input channel. This fact leads one to believe that using this and other techniques could dramatically improve the ability of character recognition methods to filter out noisy input and improve accuracy. There has been considerable work done recently in the field of using computers to recognize printed or typed characters [1, 2, 3]. To the authors' knowledge there has been little, if any, use of the fact that most written languages have certain letter patterns which occur often and that certain other patterns are unlikely. In fact, one observer comments that "A certain recognition technique perhaps should be coupled with other procedures to obtain truly effective, easily implemented and reliable character recognition. The loss introduced by a mixed recognition system appears to be solely one of elegance" [4]. Cryptographers, of course, have used their knowledge of letter patterns to considerable advantage in decoding secret messages. It is well known, for instance, that in English text the most frequently used letter is E and that the letter T is most often the first letter of a word. Many other facts are also known about combinations of letters, frequency of letter occurrence, etc. Could not this knowledge be used as information to a computer to allow it to more accurately read a text consisting of alphabetic characters only?
Some preliminary tests indicate that such a method would improve the accuracy of character recognition systems. Most character recognition methods are affected by "noise" from poor printing, dirt on the paper and similar conditions. The computer is then faced with the problem of deciding if the input is due to actual data or extraneous information. The letter H might easily be distorted into an A or an R by noise in the input. In instances like these the human brain easily supplies the proper letter by using context. For instance, the word C-W must be either COW or CAW, since none of the other letters of the alphabet result in proper English words. While it would no doubt be difficult to "teach" a computer enough facts to perform this type of reasoning, there is good data available on the occurrence of digraphs or combinations of two letters in the common languages. There are available at least two tabulations which show the occurrence in English of the various letters of the alphabet taken two at a time [5, 6]. (A. W. Edwards and R. L. Chambers, Journal of the Association for Computing Machinery, Vol. 11, No. 4, October 1964, pp. 465-470.)
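The digraph idea, including the C-W example above, can be sketched as: score each candidate word by its two-letter-combination frequencies and pick the best. The frequency counts below are invented for illustration, not taken from the tabulations [5, 6].

```python
# Resolve an illegible character position using digraph statistics.
# Counts are toy values; a real system would use tabulated English data.
DIGRAPH_FREQ = {"CO": 8, "OW": 6, "CA": 9, "AW": 1, "CE": 7, "EW": 2}

def word_score(word):
    """Sum of digraph counts over adjacent letter pairs (0 if unseen)."""
    return sum(DIGRAPH_FREQ.get(word[i:i + 2], 0) for i in range(len(word) - 1))

def best_fill(pattern, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Fill the single '-' in pattern with the highest-scoring letter."""
    candidates = [pattern.replace("-", ch) for ch in alphabet]
    return max(candidates, key=word_score)

guess = best_fill("C-W")
```

With these toy counts CO + OW outscores CA + AW, so the noisy C-W is read as COW; in a recognition system this score would be combined with the optical evidence rather than replacing it.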

••

TL;DR: The limiting effect of the number of banks of memory upon the response of the computer is studied, and it is shown that this effect is diminished and almost removed by increasing the number of memory banks.

Abstract: The limiting effect of the number of banks of memory upon the response of the computer is studied. It is shown that this effect is diminished and almost removed by increasing the number of memory banks. A quantitative equation relating a waiting-time factor, δ, to the memory cycle time, the input/output time and worst-case calculation time for different numbers of banks of memory is derived. This derivation is based upon a number of simplifying assumptions, all of which are explained and justified. Two graphs convey these relations and provide a means for estimating an effective number of banks for a given computational ability.

••

RAND Corporation

^{1}TL;DR: All linear machines are shown to have finite memory no greater than their dimensions; a simple test for diagnosibility is described, and the existence of diagnosing sequences much shorter than in the general case is proved; finally, information losslessness is discussed.

Abstract: Certain properties are proved for machines whose state and output behavior are linear over a finite field. In particular, conditions in terms of the defining matrices are given for definiteness; all linear machines are shown to have finite memory no greater than their dimensions; a simple test for diagnosibility is described, and the existence of diagnosing sequences much shorter than in the general case is proved; finally, information losslessness is discussed.

I. Linear Machines. A machine is said to be linear if its states can be identified with vectors and its state and output behavior described by a pair of matrix equations over a finite field:^1

s(t + 1) = As(t) + Bx(t)   (1a)
z(t) = Cs(t) + Dx(t)   (1b)

The vectors s(t), x(t) and z(t) denote the state, input and output, respectively, of the machine at time t. No restrictions are placed on the dimensions of the input or output vectors (numbers of inputs or outputs). A machine described by Eqs. 1a and 1b will be called a linear machine [A, B, C, D]; the dimension of the machine is the dimension of its state vector. The equations represent a Moore model or a Mealy model according to whether D is or is not identically zero. Conversion between Moore and Mealy models of linear machines will be discussed in a forthcoming paper by the author and S. Even. The final-state and input-output behavior of a linear machine under an experiment of length t may be described by iterating Eqs. 1a and 1b:

s(t) = A^t s(0) + Σ_{i=0}^{t−1} A^{t−1−i} B x(i)   (2a)

z(0) = Cs(0) + Dx(0)
z(1) = CAs(0) + CBx(0) + Dx(1)
z(2) = CA²s(0) + CABx(0) + CBx(1) + Dx(2)
⋯
z(t − 1) = CA^{t−1}s(0) + CA^{t−2}Bx(0) + ⋯ + CBx(t − 2) + Dx(t − 1)   (2b)

^1 These machines have also been called linear modular sequential circuits [1], linear sequential networks [2, 3, 4] and fully linear sequential networks [5]. (Journal of the Association for Computing Machinery, Vol. 11, No. 3, July 1964, pp. 296-301.)

It will be convenient to rewrite these equations in matrix form, collecting the inputs x(0), …, x(t − 1) into a column X_t and the outputs z(0), …, z(t − 1) into a column Z_t.
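Equations (1a)-(1b) can be simulated directly with matrix arithmetic mod 2. The 2-dimensional machine below is an arbitrary LFSR-like example chosen for illustration, not one from the paper.

```python
# Simulate a linear machine [A, B, C, D] over GF(2).

def mat_vec_mod2(M, v):
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

def vec_add_mod2(u, v):
    return [(a + b) % 2 for a, b in zip(u, v)]

def linear_machine(A, B, C, D, s0, inputs):
    """Run the machine from state s0; return the output sequence."""
    s, outputs = s0, []
    for x in inputs:
        z = vec_add_mod2(mat_vec_mod2(C, s), mat_vec_mod2(D, x))  # (1b)
        s = vec_add_mod2(mat_vec_mod2(A, s), mat_vec_mod2(B, x))  # (1a)
        outputs.append(z)
    return outputs

A = [[0, 1], [1, 1]]   # LFSR-like state update
B = [[0], [1]]
C = [[1, 0]]
D = [[0]]              # D identically zero, so this is a Moore model
zs = linear_machine(A, B, C, D, [1, 0], [[0], [0], [0], [0]])
```

With zero input the outputs are determined entirely by CA^t s(0), which is exactly the first column of the iterated form (2b); the finite-memory bound in the abstract follows from the dimension of the state vector.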

••

TL;DR: Methods of mechanized indexing (subject indexing by computer) which have been proposed are systematically summarized and a comprehensive document preparation is described from which proposed methods can be derived by selection.

Abstract: Methods of mechanized indexing (subject indexing by computer) which have been proposed are systematically summarized. Every suggested method consists of some document preparation process (mostly or wholly mechanical) followed by the application of indexing rules to the prepared document. A comprehensive document preparation is described from which proposed methods can be derived by selection. It includes full text input, document place (title, abstract, etc.) marking, sentence and paragraph marking, pronoun replacement, and other syntactic marking. It also includes addition of thesaurus headings, position numbers, weighted frequencies, closely associated expressions, importance measures, and reference information. Three kinds of indexing rules are then distinguished and illustrated. Several general comments on mechanized indexing include remarks on the argument that good mechanized indexing is not feasible, and the argument that mechanized indexing has the advantage, compared to human indexing, of consistency. Some problems of testing mechanized (or any other) indexing quality by the quality of the retrieval it permits are described. Index duplication studies are suggested as an alternative kind of empirical investigation of mechanized indexing methods.

••

TL;DR: It is shown that each of the following questions is recursively unsolvable (there is no algorithm which always yields the correct answer) for arbitrary CFL U and V.

Abstract: Each of the following three problems is shown to be recursively solvable for arbitrary regular sets U and V. (1) Does there exist a complete sequential machine which maps U into V? (2) Does there exist a generalized sequential machine which maps U into V so that the image of U is infinite if U is infinite? (3) Does there exist a complete sequential machine which maps U onto V?

Introduction. In [1] it was noted that the syntactic sets defined by Backus normal form, that is, the so-called ALGOL-like languages, are identical to the context-free languages (abbreviated CFL). In [2] it was shown that if S is a generalized sequential machine (abbreviated gsm), a frequently used model for a computer, and if L is a CFL, then S(L), the output words from S with the words in L as input, is a CFL. From this result the questions arose as to whether for arbitrary CFL L1 and L2 there always exists a gsm mapping L1 onto L2, or mapping L1 into L2 in a nontrivial way. These questions are clearly relevant to machine translation of artificial languages. As noted in [3], such gsm do not always exist. It was then shown that each of the following questions is recursively unsolvable (no algorithm always yields the correct answer) for arbitrary CFL U and V: (1) Does there exist a complete sequential machine which maps U into V? (2) Does there exist a gsm which maps U into V so that the image of U is infinite if U is infinite? (3) Does there exist a complete sequential machine which maps U onto V? (4) Does there exist a gsm which maps U onto V? [Of the four questions, (2) is probably the most meaningful in relevancy to language translation.] From a data processing point of view it is desirable that the questions raised be recursively solvable, not recursively unsolvable. Thus one seeks important subclasses of CFL for which questions (1)-(4) are solvable.
If some such subclass is large enough for other data processing purposes, then we might want to restrict the syntactic sets arising from Backus normal form to be in that subclass. In other words, studies of this kind could lead to requirements upon the constituent sets used in programming languages. Among the better known subclasses of CFL are the regular sets, sometimes called "finite state …
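The abstract's central object, the generalized sequential machine, is a finite-state transducer that emits an output string on each input symbol. A minimal illustrative sketch (the state names and example machine are hypothetical, not from the paper):

```python
# A generalized sequential machine (gsm) as a finite-state transducer:
# delta maps (state, input_symbol) -> (next_state, output_string).

def gsm_run(delta, start, word):
    """Run the gsm on an input word and return the concatenated output."""
    state, out = start, []
    for sym in word:
        state, piece = delta[(state, sym)]
        out.append(piece)
    return "".join(out)

# Example gsm over {a, b}: doubles every 'a' and erases every 'b'.
delta = {
    ("q0", "a"): ("q0", "aa"),
    ("q0", "b"): ("q0", ""),
}

print(gsm_run(delta, "q0", "abab"))  # aaaa
```

Applied to every word of a language L, such a machine yields the image language S(L) discussed in the abstract.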

••

TL;DR: A method for computing an analysis of variance from an algebraic specification of the statistical model, written in a form consistent with usual statistical notation but also suitable for computer input and logical manipulation is described.

Abstract: A method for computing an analysis of variance from an algebraic specification of the statistical model is described. The model is written in a form consistent with usual statistical notation but also suitable for computer input and logical manipulation. Calculations necessary to obtain the analysis of variance are then determined by the model. An outline of the computational procedure is given.
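The paper's model-specification language is not reproduced here, but the computation such a specification ultimately drives can be illustrated with the simplest case, a one-way analysis of variance (function name and layout are illustrative assumptions):

```python
def one_way_anova(groups):
    """Compute the F statistic for a one-way ANOVA.

    groups: list of lists of observations, one inner list per treatment.
    """
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = len(groups) - 1, n - len(groups)
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within  # F statistic

print(one_way_anova([[1, 2, 3], [2, 3, 4]]))  # 1.5
```

The paper's contribution is that the partition into such sums of squares is derived automatically from the algebraic model rather than hand-coded per design.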

••

TL;DR: An algebraic representation of the flow of a computer program is described and an algorithm is presented for manipulating this representation into a form from which a flowchart can be drawn.

Abstract: An algebraic representation of the flow of a computer program is described. An algorithm is presented for manipulating this representation into a form from which a flowchart can be drawn. A procedure for forming the algebraic representation from a compiler source code is also described.

••

TL;DR: Two pseudo-random number generators are considered, the multiplicative congruential method and the mixed congruential method, and several algorithms are developed for the evaluation of these generators.

Abstract: Two pseudo-random number generators are considered, the multiplicative congruential method and the mixed congruential method. Some properties of the generated sequences are derived, and several algorithms are developed for the evaluation of xi = …

••

TL;DR: The aim of the research reported is to put the laboratory application of closed-loop digital computer systems to experimental control and analysis in biology and behavioral science on a formal basis, using special concepts of programming to quantitatively control different parameters of variation in sensory feedback of specific response systems.

Abstract: The aim of the research reported is to put the laboratory application of closed-loop digital computer systems to experimental control and analysis in biology and behavioral science on a formal basis, using special concepts of programming to quantitatively control different parameters of variation in sensory feedback of specific response systems. The theory is unconventional in that the computer and the techniques of closed-loop programming are designed to control time delays, space displacements, kinetic modulations, and informational and symbolic transformations in the sensory feedback that is generated by movements and/or physiological actions of the living subject. The present experiment illustrates application of the system and the methods of closed-loop perturbation programming to the study of delayed auditory feedback of speech. The subject's speech is transduced, amplified, converted by an analog-to-digital converter, programmed for delays, deconverted by a digital-to-analog converter, filtered, amplified, and then used to activate protected earphones on the speaker's ears. The computer is also used to introduce many special perturbations of the auditory feedback of speech in real time.
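The digitize-delay-reconvert loop described above amounts to a fixed-length delay line on the sampled signal. A minimal sketch of that one step (names and the sample values are illustrative; the paper's system is real-time hardware, not this offline function):

```python
from collections import deque

def delayed_feedback(samples, delay_samples):
    """Delay a sampled signal by a fixed number of samples using a
    FIFO buffer, as in a programmed-delay auditory feedback loop.
    The buffer is pre-filled with zeros (silence) for the initial delay."""
    buf = deque([0] * delay_samples)
    out = []
    for s in samples:
        buf.append(s)          # newest sample enters the delay line
        out.append(buf.popleft())  # oldest sample leaves it
    return out

print(delayed_feedback([1, 2, 3], 2))  # [0, 0, 1]
```

At a given sampling rate, `delay_samples` divided by that rate gives the feedback delay in seconds, the experimental parameter varied in delayed-auditory-feedback studies.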

••

TL;DR: This paper shows how it is possible to combine one algorithm from each class together into a “mixed” strategy for diagonalizing a real symmetric matrix.

Abstract: The algorithms described in this paper are essentially Jacobi-like iterative procedures employing Householder orthogonal similarity transformations and Jacobi orthogonal similarity transformations to reduce a real symmetric matrix to diagonal form. The convergence of the first class of algorithms depends upon the fact that the algebraic value of one diagonal element is increased at each step in the iteration, and the convergence of the second class of algorithms depends upon the fact that the absolute value of one off-diagonal element is increased at each step in the iteration. It is then shown how one algorithm from each class can be combined into a “mixed” strategy for diagonalizing a real symmetric matrix.
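The classical Jacobi iteration underlying the second class of algorithms can be sketched as follows; this is a plain illustration of the Jacobi rotation step, not the paper's mixed strategy, and the tolerance and sweep limit are arbitrary choices:

```python
import math

def jacobi_diagonalize(A, tol=1e-12, max_iters=100):
    """Classical Jacobi iteration: repeatedly annihilate the largest
    off-diagonal element of a real symmetric matrix (list of lists)
    with a plane rotation; return the resulting diagonal entries."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    for _ in range(max_iters):
        # Locate the largest off-diagonal element A[p][q].
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        # Rotation angle that zeros A[p][q]: tan(2t) = 2A[pq]/(A[qq]-A[pp]).
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # apply rotation to columns p and q
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
        for k in range(n):  # apply rotation to rows p and q
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
        A[p][q] = A[q][p] = 0.0  # annihilated by construction
    return [A[i][i] for i in range(n)]

print(sorted(jacobi_diagonalize([[2.0, 1.0], [1.0, 2.0]])))
```

For the 2x2 example the diagonal converges to the eigenvalues 1 and 3 in a single rotation; each rotation is an orthogonal similarity transformation, so the eigenvalues are preserved throughout.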

••

IBM

^{1}TL;DR: It is shown for a certain class of systems that the relations defined by them are exactly the same as those definable in the elementary theory of addition of natural numbers.

Abstract: Finite automata which communicate with counters or with tapes on a single letter alphabet with end mark are studied. A typical machine system studied here consists of a family of machines; the finite automaton part of each of the machines is identical; each machine has one reset counter (almost blank loop tape) and one non-reset counter (almost blank straight tape); the first counter counts up to a, say, and the second counts up to b. For each pair of natural numbers a, b there is a machine of the system with counters running up to a, b respectively. The system "accepts" those pairs (a, b) such that the (a, b)-machine eventually halts, when started in standard position. Thus the system defines a binary relation on natural numbers. Some solvability and unsolvability results are obtained concerning the emptiness of the set accepted by a system or the emptiness of the intersection of the sets accepted by two or more systems. Some of the theorems strengthen results of Rabin and Scott [8]. It is shown for a certain class of systems that the relations defined by them are exactly the same as those definable in the elementary theory of addition of natural numbers. For another class of systems it is shown that an intersection problem is equivalent to Hilbert's tenth problem.

••

TL;DR: A set of subroutines that has been developed on small computers and used in analytical evaluation of multidimensional integrals is described, which will perform algebraic, differential and substitutional operations upon polynomials in several variables.

Abstract: A set of subroutines that has been developed on small computers and used in analytical evaluation of multidimensional integrals is described. The type of integral considered may be written as a differential operator acting upon a restricted class of functions of polynomials. Together, the subroutines are referred to as a programmed polynomial calculus (PPC). PPC will perform algebraic, differential and substitutional operations upon polynomials in several variables. Flowcharts are given. Applications of PPC in theoretical physics are discussed.
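The kind of operations a programmed polynomial calculus must provide can be sketched with a standard sparse representation, mapping exponent tuples to coefficients (this representation and the function names are illustrative assumptions, not the PPC's actual data layout):

```python
from collections import defaultdict

# Polynomials in several variables represented as
# {exponent_tuple: coefficient}, e.g. x + y -> {(1, 0): 1.0, (0, 1): 1.0}.

def poly_mul(p, q):
    """Product of two multivariate polynomials."""
    r = defaultdict(float)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            # Exponents add componentwise; coefficients multiply.
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def poly_diff(p, var):
    """Partial derivative with respect to the variable at index `var`."""
    r = {}
    for e, c in p.items():
        if e[var] > 0:
            e2 = list(e)
            e2[var] -= 1
            r[tuple(e2)] = c * e[var]  # power rule: d/dx x^n = n x^(n-1)
    return r

p = {(1, 0): 1.0, (0, 1): 1.0}        # x + y
sq = poly_mul(p, p)                   # (x + y)^2 = x^2 + 2xy + y^2
print(sq)
print(poly_diff(sq, 0))               # d/dx -> 2x + 2y
```

Substitution and repeated differentiation compose from these same primitives, which is what lets a differential operator be applied symbolically to a polynomial integrand as the abstract describes.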