
Showing papers presented at "Symposium on Symbolic and Algebraic Manipulation in 1979"


Book ChapterDOI
01 Jun 1979
TL;DR: The authors demonstrate how sparse techniques can be used to increase the effectiveness of the modular algorithms of Brown and Collins, and believe this work has finally laid to rest the bad zero problem.
Abstract: In this paper we have tried to demonstrate how sparse techniques can be used to increase the effectiveness of the modular algorithms of Brown and Collins. These techniques can be used for an extremely wide class of problems and can be applied to a number of different algorithms, including Hensel's lemma. We believe this work has finally laid to rest the bad zero problem.

1,297 citations


Book ChapterDOI
01 Jun 1979
TL;DR: A new criterion is presented that may be applied in an algorithm for constructing Gröbner bases of polynomial ideals; it also makes it possible to derive a realistic upper bound for the degrees of the polynomials in the Gröbner bases computed by the algorithm in the case of polynomials in two variables.
Abstract: We present a new criterion that may be applied in an algorithm for constructing Gröbner bases of polynomial ideals. The application of the criterion may drastically reduce the number of reductions of polynomials in the course of the algorithm. Incidentally, the new criterion allows one to derive a realistic upper bound for the degrees of the polynomials in the Gröbner bases computed by the algorithm in the case of polynomials in two variables.

374 citations
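The flavour of such a pair-elimination criterion can be sketched in a few lines. The following is a hypothetical illustration of the classical coprime-leading-monomial test (not necessarily the criterion of this paper): a critical pair whose leading monomials share no variable can be skipped, since its S-polynomial reduces to zero. Monomials are represented as exponent tuples.

```python
def lcm_monomial(a, b):
    """Componentwise max of two exponent vectors: the monomial lcm."""
    return tuple(max(x, y) for x, y in zip(a, b))

def coprime_criterion(lm_f, lm_g):
    """True if the pair (f, g) can be discarded without any reduction:
    the leading monomials share no variable, i.e. their lcm equals
    their product, so the S-polynomial reduces to zero."""
    return all(min(x, y) == 0 for x, y in zip(lm_f, lm_g))

# Leading monomials x^2*y and y*z^3 share the variable y, so the pair
# must be kept; x^2 and z^3 are coprime, so that pair is skipped.
print(lcm_monomial((2, 1, 0), (0, 1, 3)))       # (2, 1, 3)
print(coprime_criterion((2, 1, 0), (0, 1, 3)))  # False: keep the pair
print(coprime_criterion((2, 0, 0), (0, 0, 3)))  # True: skip the pair
```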



Book ChapterDOI
30 Jun 1979
TL;DR: Inspired by the success of the Rabin-Strassen-Solovay primality algorithm, fast probabilistic algorithms with a-priori correctness guarantees are presented for testing polynomial identities and properties of systems of polynomials.
Abstract: The startling success of the Rabin-Strassen-Solovay primality algorithm, together with the intriguing foundational possibility that axioms of randomness may constitute a useful fundamental source of mathematical truth independent of the standard axiomatic structure of mathematics, suggests a vigorous search for probabilistic algorithms. In illustration of this observation, we present various fast probabilistic algorithms, with probability of correctness guaranteed a priori, for testing polynomial identities and properties of systems of polynomials. Ancillary fast algorithms for calculating resultants and Sturm sequences are given. Theorems of elementary geometry can be proved much more efficiently by the techniques presented than by any known artificial intelligence approach.

68 citations
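The random-evaluation idea behind such probabilistic identity tests can be sketched as follows. This is an illustrative reconstruction, not the paper's own code, and `probably_identical` is a hypothetical helper name: to decide whether two polynomial expressions agree identically, evaluate both at random points from a large set; a nonzero difference proves inequality, while agreement on every trial is wrong only with small, a-priori bounded probability.

```python
import random

def probably_identical(p, q, nvars, degree_bound, trials=20, seed=0):
    """Probabilistic test of the identity p == q for polynomial
    functions of nvars variables with total degree <= degree_bound.
    Each agreeing trial is a false positive with probability at most
    degree_bound / field_size, so errors vanish exponentially in trials."""
    rng = random.Random(seed)
    field_size = 100 * degree_bound       # sample set much larger than degree
    for _ in range(trials):
        point = [rng.randrange(field_size) for _ in range(nvars)]
        if p(*point) != q(*point):
            return False                  # witness found: definitely differ
    return True                           # identical with high probability

# (x + y)^2 versus x^2 + 2xy + y^2: a true identity.
print(probably_identical(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + 2 * x * y + y * y,
                         nvars=2, degree_bound=2))   # True
# (x + y)^2 versus x^2 + y^2: not an identity.
print(probably_identical(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + y * y,
                         nvars=2, degree_bound=2))   # False
```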


Book ChapterDOI
30 Jun 1979
TL;DR: Practical methods for computing equivalent forms of integer matrices and applications to finding the structure of finitely presented abelian groups are described.
Abstract: Practical methods for computing equivalent forms of integer matrices are presented. Both heuristic and modular techniques are used to overcome integer overflow problems, and have successfully handled matrices with hundreds of rows and columns. Applications to finding the structure of finitely presented abelian groups are described.

26 citations
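The underlying reduction can be sketched naively. The code below is a hypothetical illustration (not the paper's heuristic or modular method): it diagonalizes a small integer relation matrix by elementary row and column operations, from which the cyclic structure of a finitely presented abelian group can be read off. It makes no attempt to control the coefficient growth that the paper's techniques are designed to overcome, and it stops at a diagonal form rather than the full Smith normal form.

```python
def diagonalize(m):
    """Reduce an integer matrix to diagonal form by elementary row and
    column operations (swaps and adding integer multiples)."""
    m = [row[:] for row in m]
    rows, cols = len(m), len(m[0])
    for k in range(min(rows, cols)):
        while True:
            # move a nonzero entry of smallest magnitude to the pivot
            candidates = [(abs(m[i][j]), i, j)
                          for i in range(k, rows) for j in range(k, cols)
                          if m[i][j] != 0]
            if not candidates:
                return m
            _, i, j = min(candidates)
            m[k], m[i] = m[i], m[k]
            for row in m:
                row[k], row[j] = row[j], row[k]
            # clear the pivot column and row by division with remainder;
            # leftover remainders shrink the pivot on the next pass
            done = True
            for i in range(k + 1, rows):
                q = m[i][k] // m[k][k]
                if q:
                    m[i] = [a - q * b for a, b in zip(m[i], m[k])]
                if m[i][k]:
                    done = False
            for j in range(k + 1, cols):
                q = m[k][j] // m[k][k]
                for i in range(rows):
                    m[i][j] -= q * m[i][k]
                if m[k][j]:
                    done = False
            if done:
                break
    return m

# Relations 2x = 0, 4y = 0 present Z/2 x Z/4; the diagonal is already there.
print(diagonalize([[2, 0], [0, 4]]))
# Relations 2x + 4y = 0, 4x + 2y = 0 present Z/2 x Z/6.
print(diagonalize([[2, 4], [4, 2]]))
```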


Book ChapterDOI
01 Jun 1979
TL;DR: A proof of the fact that any semialgebraic set possesses a c.d. is provided, amounting to a description of Collins' algorithm from a theoretical point of view, and it is shown that the algorithm can be extended to determine the dimension of each cell in a c.d.
Abstract: For any r ≥ 1 and any i, 0 ≤ i ≤ r, an i-dimensional cell (in E^r) is a subset of r-dimensional Euclidean space E^r homeomorphic to the i-dimensional open unit ball. A subset of E^r is said to possess a cellular decomposition (c.d.) if it is the disjoint union of finitely many cells (of various dimensions). A semialgebraic set S (in E^r) is the set of all points of E^r satisfying some given finite boolean combination φ of polynomial equations and inequalities in r variables. φ is called a defining formula for S. A real algebraic variety, i.e. the set of zeros in E^r of a system of polynomial equations in r variables, is a particular example of a semialgebraic set. It has been known for at least fifty years that any semialgebraic set possesses a c.d., but the proofs of this fact have been nonconstructive. Recently it has been noted that G. E. Collins' 1973 quantifier elimination algorithm for the elementary theory of real closed fields contains an algorithm for determining a c.d. of a semialgebraic set S given by its defining formula, apparently the first such algorithm. Specifically, each cell c of the c.d. C of S is itself a semialgebraic set, and for every c in C, a defining formula for c and a particular point of c are produced. In the present paper we provide a proof of this fact, our proof amounting to a description of Collins' algorithm from a theoretical point of view. We then show that the algorithm can be extended to determine the dimension of each cell in a c.d. and the incidences among cells. A computer implementation of the algorithm is in progress.

25 citations
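The one-dimensional base case of such a decomposition is easy to illustrate: the real roots of the defining polynomials cut E^1 into 0-cells (the roots themselves) and 1-cells (the open intervals between them), each carrying a sample point. A minimal sketch, assuming the roots are already known:

```python
def line_cells(roots):
    """Cells of E^1 induced by a sorted list of distinct real roots,
    each paired with a sample point, as in the base case of a c.d."""
    roots = sorted(roots)
    cells = [(('interval', float('-inf'), roots[0]), roots[0] - 1)]
    for a, b in zip(roots, roots[1:]):
        cells.append((('point', a, a), a))
        cells.append((('interval', a, b), (a + b) / 2))
    cells.append((('point', roots[-1], roots[-1]), roots[-1]))
    cells.append((('interval', roots[-1], float('inf')), roots[-1] + 1))
    return cells

# x^3 - x = 0 has roots -1, 0, 1: seven cells, alternating dimensions 1 and 0.
for cell, sample in line_cells([-1, 0, 1]):
    print(cell, sample)
```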


Book ChapterDOI
01 Jun 1979

24 citations


Book ChapterDOI
01 Jun 1979
TL;DR: There are two classical classes of methods to solve systems of algebraic equations: methods which concern numerical analysis and get the roots by successive approximations and methods which eliminate the variables one after the other to get finally one equation in one variable.
Abstract: There are two classical classes of methods to solve systems of algebraic equations (i.e. to find the common zeros of a finite number of polynomials in a finite number of variables). The first class consists of methods from numerical analysis which obtain the roots by successive approximations. Such methods work quickly, but give no information on the number or the algebraic properties of the roots, and they do not work over extensions of finite fields. The second class consists of methods which eliminate the variables one after the other to obtain finally one equation in one variable. Such methods give the exact number of roots but not their multiplicity. They are very slow: if the system consists of equations of degree d in n variables, the final equation has, in general, degree d^(2^(n-1)).

24 citations
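The elimination step of the second class of methods can be sketched with a resultant. The code below is an illustrative reconstruction (not the paper's method): it eliminates y from a pair of bivariate polynomials by expanding the Sylvester determinant over the ring of polynomials in x, with polynomials represented as dense coefficient lists, lowest degree first.

```python
def padd(a, b):
    """Add two x-polynomials given as coefficient lists."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    """Multiply two x-polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def det(m):
    """Cofactor expansion of a determinant whose entries are x-polynomials."""
    if len(m) == 1:
        return m[0][0]
    total = [0]
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        term = pmul(entry, det(minor))
        if j % 2:
            term = [-c for c in term]
        total = padd(total, term)
    return total

def resultant_y(p, q):
    """Sylvester resultant in y of p, q, each a list of x-polynomial
    coefficients ordered by increasing y-degree."""
    dp, dq = len(p) - 1, len(q) - 1
    rows = []
    for i in range(dq):
        rows.append([[0]] * i + list(reversed(p)) + [[0]] * (dq - 1 - i))
    for i in range(dp):
        rows.append([[0]] * i + list(reversed(q)) + [[0]] * (dp - 1 - i))
    r = list(det(rows))
    while len(r) > 1 and r[-1] == 0:    # trim trailing zero coefficients
        r.pop()
    return r

# Circle y^2 + x^2 - 1 and line x - y: eliminating y leaves 2x^2 - 1 = 0.
circle = [[-1, 0, 1], [0], [1]]   # (x^2 - 1) + 0*y + 1*y^2
line = [[0, 1], [-1]]             # x - y
print(resultant_y(circle, line))  # [-1, 0, 2], i.e. 2x^2 - 1
```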


Book ChapterDOI
01 Jun 1979
TL;DR: The cardinality procedure results in a complete factorization algorithm for primitive univariate integral polynomials whose average computing time, in a very strong sense, is dominated by a polynomial function of its degree n.
Abstract: Let A be a primitive squarefree univariate integral polynomial of degree n. An irreducible factor of A can be found by forming products of lifted modulo p factors of A for a suitable small prime p. One can either form first the products consisting of the smallest numbers of lifted factors (cardinality procedure) or form first the products with smallest degrees (degree procedure). Let ∏ be the partition of n consisting of the degrees of the irreducible factors of A. The average number of products formed before finding an irreducible factor of A is a function of ∏, C(∏) or D(∏) respectively. Let C*(n) (D*(n)) be the maximum of C(∏) (D(∏)) for all partitions, ∏, of n. Subject to the validity of two conjectures, for which considerable evidence is presented, it is proved that C*(n) is dominated by n^2 whereas D*(n) is exponential. If the conjectures are true then the cardinality procedure results in a complete factorization algorithm for primitive univariate integral polynomials whose average computing time, in a very strong sense, is dominated by a polynomial function of its degree n.

21 citations
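The two enumeration orders compared in the abstract can be sketched directly; `cardinality_order` and `degree_order` are hypothetical helper names, and the divisibility test applied to each candidate product is omitted. Only the order in which subsets of the r lifted modular factors are tried is shown.

```python
from itertools import combinations

def cardinality_order(degs):
    """Subsets of factor indices, fewest factors first (cardinality procedure)."""
    r = range(len(degs))
    return [s for k in range(1, len(degs) + 1) for s in combinations(r, k)]

def degree_order(degs):
    """Same subsets, smallest total degree first (degree procedure)."""
    subsets = cardinality_order(degs)
    return sorted(subsets, key=lambda s: sum(degs[i] for i in s))

degs = [1, 1, 2, 3]                 # degrees of the lifted factors mod p
print(cardinality_order(degs)[:4])  # [(0,), (1,), (2,), (3,)]
print(degree_order(degs)[:4])       # [(0,), (1,), (2,), (0, 1)]
```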


Book ChapterDOI
01 Jun 1979

20 citations



Book ChapterDOI
01 Jun 1979
TL;DR: Risch's algorithms for dealing with algebraic functions required considerably more complex machinery than his earlier ones for purely transcendental functions; Moses' implementation of the earlier approach demonstrated its practicality, whereas the same has yet to be done for Risch's more recent approach.
Abstract: I. Introduction. Risch's landmark paper [Ris69] presented the first decision procedure for the integration of elementary functions. In that paper he required that the functions appearing in the integrand be algebraically independent. Shortly afterwards, in [Risalg] and [Ris70], he relaxed that restriction and outlined a complete decision procedure for the integration of elementary functions in finite terms. Unfortunately his algorithms for dealing with algebraic functions required considerably more complex machinery than his earlier ones for purely transcendental functions. Moses' implementation of the earlier approach in MACSYMA [MAC??] demonstrated its practicality, whereas the same has yet to be done for Risch's more recent approach.

Book ChapterDOI
01 Jun 1979
TL;DR: It is proved that in one iteration the number of correct coefficients is more than doubled in the case of an explicit differential equation, and is less than doubled in the most general case.
Abstract: In a typical application of the Newton iteration in a power series domain (e.g. to compute an algebraic function), the number of correct power series coefficients in the k-th iterate is exactly double the number of correct coefficients in the preceding iterate. This paper considers the application of the Newton iteration to compute the power series solution of a first-order nonlinear differential equation. It is proved that in one iteration the number of correct coefficients is more than doubled in the case of an explicit differential equation, and is less than doubled in the most general case.
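The doubling behaviour that the paper refines can be seen in the simplest setting the abstract mentions, computing an algebraic function. The following sketch (not the paper's algorithm) applies Newton's iteration y ← (y + f/y)/2 to the power-series square root of f = 1 + x, where the working precision exactly doubles at each step. Series are coefficient lists, lowest degree first, with exact Fraction coefficients.

```python
from fractions import Fraction

def mul(a, b, n):
    """Product of two power series, truncated to n terms."""
    out = [Fraction(0)] * n
    for i, x in enumerate(a[:n]):
        for j, y in enumerate(b[:n - i]):
            out[i + j] += x * y
    return out

def inv(a, n):
    """Power-series inverse of a (a[0] != 0) to n terms, itself computed
    by Newton's iteration y <- 2y - a*y^2, doubling precision each step."""
    y = [Fraction(1, 1) / a[0]]
    k = 1
    while k < n:
        k = min(2 * k, n)
        correction = [-c for c in mul(y, mul(a, y, k), k)]
        y = [2 * c for c in y] + [Fraction(0)] * (k - len(y))
        y = [u + v for u, v in zip(y, correction)]
    return y

def sqrt_series(f, n):
    """Power-series square root of f (with f[0] == 1) to n terms via
    Newton's iteration y <- (y + f/y)/2; precision doubles per step."""
    y = [Fraction(1)]                 # correct to 1 term
    k = 1
    while k < n:
        k = min(2 * k, n)
        q = mul(f, inv(y, k), k)      # f / y to k terms
        y = y + [Fraction(0)] * (k - len(y))
        y = [(u + v) / 2 for u, v in zip(y, q)]
    return y

# sqrt(1 + x) = 1 + x/2 - x^2/8 + x^3/16 - ...
print(sqrt_series([Fraction(1), Fraction(1)], 4))
```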

Book ChapterDOI
01 Jun 1979
TL;DR: The capabilities of a microcomputer algebra system intended for educational and personal use, which offers a broad range of facilities from indefinite-precision rational arithmetic through symbolic integration, symbolic summation, matrix algebra, and solution of a nonlinear algebraic equation is described.
Abstract: This paper describes the capabilities of a microcomputer algebra system intended for educational and personal use. Currently implemented for INTEL-8080 based microcomputers, the system offers a broad range of facilities from indefinite-precision rational arithmetic through symbolic integration, symbolic summation, matrix algebra, and solution of a nonlinear algebraic equation. The talk will include a filmed demonstration, and informal live demonstrations will be given afterward.

Book ChapterDOI
30 Jun 1979
TL;DR: A survey of the use of a combination of symbolic and numerical calculations is presented and it is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.
Abstract: A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where a cooperation of symbolic and numerical methods is likely to bear many fruits. These areas include such classical operations as differentiation and integration, such diverse activities as function approximations and qualitative analysis, and such contemporary topics as finite element calculations and computation complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.

Book ChapterDOI
30 Jun 1979
TL;DR: In this paper the work that has grown out of the early results on integration is surveyed, showing where the development has been smooth and where it has spurred work in seemingly unrelated fields.
Abstract: By the end of the 1960s it had been shown that a computer could find indefinite integrals with a competence exceeding that of typical undergraduates. This practical advance was backed up by algorithmic interpretations of a number of classical results on integration, and by some significant mathematical extensions to these same results. At that time it would have been possible to claim that all the major barriers in the way of a complete system for automated analysis had been breached. In this paper we survey the work that has grown out of the above-mentioned early results, showing where the development has been smooth and where it has spurred work in seemingly unrelated fields.

Book ChapterDOI
John P. Fitch
30 Jun 1979
TL;DR: A brief review of how symbolic algebra is being used in various branches of physics can be found in this article, with a focus on the traditional fields of celestial mechanics, quantum mechanics, and general relativity.
Abstract: This paper is a brief review of how symbolic algebra is being used in various branches of physics. It is hoped that the breadth of application will be apparent from the text and the references quoted. As well as the traditional fields of celestial mechanics, quantum mechanics and general relativity, there are a number of new areas where we can expect growth. While reviews of this nature have tended to concentrate on the physical sciences, uses are now being found in many numerate sciences and engineering.

Book ChapterDOI
01 Jun 1979
TL;DR: A REDUCE arbitrary precision real arithmetic package is described which will become a part of the kernel of an algebraic-numeric system being developed for REDUCE, and is as efficient as possible in both calculation speed and memory usage and highly portable and extensible.
Abstract: A REDUCE arbitrary precision real arithmetic package is described which will become a part of the kernel of an algebraic-numeric system being developed for REDUCE. The basic design principles of this package are first, it is as efficient as possible in both calculation speed and memory usage, second, even a casual user can use it, and third, it is highly portable and extensible. Our idea to attain the first property is to represent the arbitrary precision real number in as short a form as possible and to handle the precision in a much more flexible manner than any other similar system. A comparison is made of our scheme with a conventional one which uses a global precision, verifying the efficiency of our scheme. Our package contains two sets of routines for elementary arithmetic operations such as addition or multiplication. An expert user can write efficient programs using the first set of routines, while a casual user may use the second set of routines with less programming effort. Our package will become faster by rewriting only four basic and simple routines machine-dependently.
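The contrast between a single global precision and the more flexible per-computation handling described here can be illustrated with Python's decimal module (an analogy only, not the REDUCE package itself), which supports both a global context and temporary local ones.

```python
from decimal import Decimal, getcontext, localcontext

getcontext().prec = 10               # one global precision, the conventional scheme
print(Decimal(2).sqrt())             # 1.414213562

with localcontext() as ctx:          # flexible, per-computation precision
    ctx.prec = 30
    print(Decimal(2).sqrt())         # 1.41421356237309504880168872421
```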

Book ChapterDOI
Richard D. Jenks
30 Jun 1979

Book ChapterDOI
30 Jun 1979
TL;DR: The arithmetic is shown to implicitly contain an adaptive single-to-double precision natural rounding behavior that acts to recover true simple fractional results and the probability of such recovery is investigated and shown to be quite favorable.
Abstract: Closed approximate rational arithmetic systems are described and their number theoretic foundations are surveyed. The arithmetic is shown to implicitly contain an adaptive single-to-double precision natural rounding behavior that acts to recover true simple fractional results. The probability of such recovery is investigated and shown to be quite favorable.
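The recovery behaviour described in the abstract can be illustrated with Python's Fraction type (an analogy, not the paper's closed approximate rational arithmetic): a simple fraction obscured by floating-point rounding is recovered as the best rational approximation with a small denominator.

```python
from fractions import Fraction

approx = 1 / 3 + 1 / 7              # floating point: 0.47619047619047616
recovered = Fraction(approx).limit_denominator(1000)
print(recovered)                    # 10/21, the true simple fraction
```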

Book ChapterDOI
30 Jun 1979
TL;DR: In this paper, a number of recently developed recursive minor expansion algorithms are presented, and a recursion count with respect to the recursion depth shows the behaviour of the algorithms under various typical conditions.
Abstract: A number of recently developed recursive minor expansion algorithms are presented. A recursion count with respect to the recursion depth shows the behaviour of the algorithms under various typical conditions.

Book ChapterDOI
01 Jun 1979
TL;DR: The main results are: the unification problem is decidable and the set of unifiers is always finite, and the algorithm is not minimal, but improves over the naive solution.
Abstract: A complete unification algorithm for terms involving a commutative function is presented. The main results are: the unification problem is decidable and the set of unifiers is always finite. The algorithm, as presented, is not minimal, but improves over the naive solution. This paper is a short version of [21], which contains the proofs omitted here and some additional technical material.

Book ChapterDOI
P. Schmidt
01 Jun 1979
TL;DR: Test results show that a program utilizing the basic elementary solution methods and a comparatively small set of such heuristics is able to solve most of the elementarily solvable differential equations of first order and first degree, which are collected in the textbooks of Kamke and Murphy.
Abstract: Some heuristics that suggest substitutions for solving differential equations of first order and first degree are presented. The test results with these heuristics show that ‘satisfying’ heuristics can be developed for this type of differential equations. Moreover, test results show that a program utilizing the basic elementary solution methods and a comparatively small set of such heuristics is able to solve most of the elementarily solvable differential equations of first order and first degree, which are collected in the textbooks of Kamke and Murphy.


Book ChapterDOI
30 Jun 1979
TL;DR: The difficulty in the way of extending this algorithm is explained, together with some ways of solving it; the problem is solved in all cases where the algebraic expressions depend on a parameter as well as on the variable of integration.
Abstract: The problem of finding elementary integrals of algebraic functions has long been recognised as difficult, and has sometimes been thought insoluble. Risch [18] stated a theorem characterising the integrands with elementary integrals, and we can use the language of algebraic geometry and the techniques of [2] to yield an algorithm that will always produce the integral if it exists. We explain the difficulty in the way of extending this algorithm, and outline some ways of solving it. Using work of Manin [9, 10] we are able to solve the problem in all cases where the algebraic expressions depend on a parameter as well as on the variable of integration.


Book ChapterDOI
01 Jun 1979
TL;DR: The first computational work on the integration problem that made use of the mathematical structure of the problem is contained in Moses's thesis; Moses was frequently able to guess the structure of an integral.
Abstract: I. Motivation. Computing integrals has been a favorite pastime of algebraic manipulators (both human and machine) for some time. The usual problem is to determine if the integral of a function can be expressed in terms of some prespecified set of functions. This set is usually the "elementary functions": algebraic functions, the exponential function and the logarithm. Slagle's thesis [11] was the first research to draw the attention of researchers in algebraic manipulation to the integration problem. His methods were entirely heuristic and were really an application of artificial intelligence techniques to an algebraic manipulation problem. His program achieved about the same proficiency as that of a college freshman. The first computational work on the integration problem that made use of the mathematical structure of the problem is contained in Moses's thesis [4]. SIN, the program described in Moses's thesis, contained a set of special case algorithms, such as for rational functions of simple exponentials. It also tried, as a default method, the "Edge" heuristic. This heuristic is used to make an educated guess at the structure of the integral based on Liouville's theorem [2, 3]. By carefully analyzing the mathematical properties of integration, Moses was frequently able to guess the structure of an integral. Then, by filling in the unknown coefficients, differentiating and solving the resulting system of equations, he was able to solve a large class of problems. At the end of the sixties, Risch was able to produce a decision procedure for integrating expressions constructed from elementary functions [7, 8]. By sharpening Liouville's theorem he was able to deduce what the structure of an integral had to be if it could be expressed in terms of elementary functions. The next step in the integration problem is to incorporate special functions in the integral.
Moses [5] began discussing the problems and some possible solutions in the use of special functions. He emphasized the usage of a "functional" approach to the problem. Rather than studying a particular function, Sp(x), Moses suggested studying the structure of extensions via Spence functions, Sp(u(x)), for some class of functions u(x). Rothstein investigated the integration problem involving special functions in his thesis [19]. He developed an algorithm that could compute the integral of an expression given the functions that could occur in the answer. From our point of view it would have

Book ChapterDOI
01 Jun 1979
TL;DR: The problem of giving a formal definition of the representation of algebraic data structures is considered and developed in the framework of the abstract data types approach.
Abstract: The problem of giving a formal definition of the representation of algebraic data structures is considered and developed in the framework of the abstract data types approach. Such concepts as canonical form and simplification are formalized and related to properties of the abstract specification and of the associated term rewriting system.

Book ChapterDOI
30 Jun 1979
TL;DR: A new recursive method for the p-adic construction of the correction coefficients is presented and analyzed, and its efficiency is of central importance to the entire factoring algorithm.
Abstract: In a recently published paper [4] the author described a new algorithm for multivariate polynomial factorization over the integers. One important feature of this algorithm is the variable-by-variable p-adic construction of all factors in parallel. To perform such p-adic construction of the factors, a second p-adic construction for multivariate "correction coefficients" is used. This second p-adic construction is in an "inner loop" of the construction process. Therefore its efficiency is of central importance to the entire factoring algorithm. In [4], an iterative algorithm is used for this second p-adic construction. A timing analysis of this iterative algorithm is included. A new recursive method for the p-adic construction of the correction coefficients is presented and analyzed.

Book ChapterDOI
01 Jun 1979
TL;DR: The ways in which the authors' ideas influence reliability, portability, efficiency, generality and flexibility are presented, together with their view of the relative importance of these attributes.
Abstract: This report explains the aims and presents the design of a new algebra system that is being constructed in Cambridge. It discusses in particular three areas that seem to lead to complicated and often conflicting requirements — the selection of basic data-structures, the incorporation and support of the most efficient algorithms and the design of an interface between the system and its users. We present the ways in which our ideas influence reliability, portability, efficiency, generality and flexibility. Our view of the relative importance of these attributes is given.