
Showing papers in "ACM Communications in Computer Algebra in 2009"


Journal ArticleDOI
TL;DR: SINGULAR is a specialized computer algebra system for polynomial computations with emphasis on the needs of commutative algebra, algebraic geometry, and singularity theory; it features one of the fastest and most general implementations of various algorithms for computing standard resp. Gröbner bases.

1,255 citations


Journal ArticleDOI
TL;DR: This work investigates the integration of a C implementation of fast arithmetic operations into Maple, focusing on triangular decomposition algorithms; it shows substantial improvements over existing Maple implementations and outperforms Magma on many examples.
Abstract: We investigate the integration of a C implementation of fast arithmetic operations into Maple, focusing on triangular decomposition algorithms. We show substantial improvements over existing Maple implementations; our code also outperforms Magma on many examples. Profiling data show that data conversion can become a bottleneck for some algorithms, leaving room for further improvements. Since the early days of computer algebra systems, their designers have investigated many aspects of this kind of software. For systems born in the 70's and 80's, such as AXIOM and Maple, the primary concerns were probably the expressiveness of the programming language and the convenience of the user interface; the implementation of modular methods for operations such as polynomial GCD or factorization was also among these concerns. Computer algebra systems and libraries born in the 90's, such as Magma and NTL, have brought forward a new priority: the implementation of asymptotically fast arithmetic for polynomials and matrices. They have demonstrated that, even for relatively small input sizes, FFT-based polynomial operations could outperform operations based on classical quadratic algorithms or on the Karatsuba trick. This increased the range of solvable problems in a spectacular manner. Meanwhile, AXIOM and Maple remain highly attractive: the former for its programming environment and the latter for its user community. In previous work [6, 9, 11] we have investigated the integration of asymptotically fast arithmetic operations into AXIOM. Since AXIOM is based today on GNU Common Lisp (GCL), we realized optimized implementations of these fast routines in C and
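The "Karatsuba trick" mentioned above can be illustrated in a few lines. This is a generic sketch in Python (my own illustration, not the authors' C code; function names are mine): three half-size products replace the four of the schoolbook method, which is the gain that FFT-based methods in turn improve on.

```python
def poly_mul_classical(a, b):
    """Schoolbook O(n^2) product of coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_sub(a, b):
    return poly_add(a, [-x for x in b])

def poly_mul_karatsuba(a, b):
    """Karatsuba's trick: 3 half-size products instead of 4."""
    n = max(len(a), len(b))
    if n <= 4:                       # small cases: classical is faster
        return poly_mul_classical(a, b)
    m = n // 2
    a0, a1 = a[:m], a[m:]            # a = a0 + x^m * a1
    b0, b1 = b[:m], b[m:]
    z0 = poly_mul_karatsuba(a0, b0)
    z2 = poly_mul_karatsuba(a1, b1)
    # middle product: (a0+a1)(b0+b1) - z0 - z2 = a0*b1 + a1*b0
    z1 = poly_sub(poly_mul_karatsuba(poly_add(a0, a1), poly_add(b0, b1)),
                  poly_add(z0, z2))
    out = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(z0):
        out[i] += c
    for i, c in enumerate(z1):
        out[i + m] += c
    for i, c in enumerate(z2):
        out[i + 2 * m] += c
    return out
```
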

44 citations


Journal ArticleDOI
TL;DR: An algorithm for factoring a polynomial f in one variable with rational coefficients is presented. It is a variant of the Belabas [Belabas] version of the van Hoeij [van Hoeij] factoring algorithm, which contains a practical speed-up over Belabas' but also allows us to prove a new complexity result for factoring polynomials.
Abstract: We present an algorithm for factoring a polynomial f in one variable with rational coefficients. Our algorithm is a variant of the Belabas [Belabas] version of the van Hoeij [van Hoeij] factoring algorithm. Our algorithm not only contains a practical speed-up over Belabas' but also allows us to prove a new complexity result for factoring polynomials. The van Hoeij algorithm follows Zassenhaus' [Zass] approach by factoring f mod a prime number and Hensel lifting the local factors. The practical speed-up in van Hoeij's algorithm comes from using the LLL [LLL] algorithm to solve the exponential recombination problem of deciding which combinations of local factors form the true factors. However, the cost of LLL in van Hoeij's approach has been difficult to bound (in fact, [van Hoeij] only proves termination and makes no attempt at finding a complexity). Belabas found a fine-tuning [Belabas] of van Hoeij's algorithm which nearly optimizes the practical running times, but still does not provide a good bound for the LLL costs. In [BHKS] both variants were shown to have polynomial complexity, but these bounds are still unsatisfactorily large and do little to illuminate the behavior of these algorithms. Our algorithm improves Belabas' approach in the following ways: we make a practical improvement which ensures that Hensel lifting is always minimized, and we include a decision-making process which allows us to bound the total number of LLL switches in the algorithm by O(r^3), where r is the number of local factors. This is independent of both the degree and the coefficient size of f. In this poster we present an overview of the new algorithm and give a brief look at the style of our proofs. For the full details of the switch bound O(r^3) see [Novocin]. Using the floating-point LLL [L2] and some minor changes we can show the LLL costs are O(r^7). We are still writing down the details of a proof that the new total complexity of our algorithm is O(r^7).
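To make the recombination problem concrete, here is a brute-force Zassenhaus-style sketch in Python (my own toy, not the poster's algorithm: it searches subsets exhaustively, exactly the exponential step that van Hoeij's LLL-based approach replaces). It recovers the factorization of x^4 - 1 over Z from its linear factors mod 5.

```python
from itertools import combinations

p = 5
# f = x^4 - 1 over Z; its monic local factors mod 5 are x-1, x-2, x-3, x-4.
f = [-1, 0, 0, 0, 1]                       # coefficients, lowest degree first
local = [[-1, 1], [-2, 1], [-3, 1], [-4, 1]]

def mul_mod(a, b, m):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % m
    return out

def symmetric_lift(a, m):
    """Lift residues mod m to the symmetric range (-m/2, m/2]."""
    return [c - m if c > m // 2 else c for c in a]

def divides(d, f):
    """Exact division test of f by monic d over Z."""
    f = f[:]
    while len(f) >= len(d):
        c = f[-1]
        for i in range(len(d)):
            f[len(f) - len(d) + i] -= c * d[i]
        f.pop()
    return all(c == 0 for c in f)

def recombine(f, local, m):
    """Try subsets of local factors in increasing size; collect true factors.
    (Toy version: assumes the leftover set empties out, as it does here.)"""
    remaining, factors = list(range(len(local))), []
    size = 1
    while size <= len(remaining):
        found = False
        for subset in combinations(remaining, size):
            prod = [1]
            for i in subset:
                prod = mul_mod(prod, local[i], m)
            cand = symmetric_lift(prod, m)
            if divides(cand, f):
                factors.append(cand)
                remaining = [i for i in remaining if i not in subset]
                found = True
                break
        if not found:
            size += 1
    return factors
```

Running `recombine(f, local, p)` finds x - 1 and x + 1 from single local factors, then pairs the remaining two into x^2 + 1.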

18 citations


Journal ArticleDOI
TL;DR: A novel rigorous proof of a more general criterion than the one stated by Faugère is provided; it establishes when a set of polynomials is a Gröbner basis by considering the values of the module LT(Syz).
Abstract: The purpose of this work is to generalize the theory behind the "F5" algorithm presented by J.C. Faugère in [3] and its matrix variant described by M. Bardet in [1]. The F5 algorithm computes the Gröbner basis of a given polynomial ideal I from its generators F = (f1, ..., fm). Faugère's main idea is to consider the expression of an element p ∈ I in terms of the generators, p = h1 f1 + ... + hm fm, and to keep explicit track of the leading term S(p) of the vector (h1, ..., hm), taken in its normal form with respect to the module of syzygies of F. We provide a novel rigorous proof of a more general criterion than the one stated by Faugère, which establishes when a set of polynomials G is a Gröbner basis by considering the values S(g) for all g ∈ G; we further generalize our result by removing the requirement that the sequence f1, ..., fm is a regular sequence. The criterion itself is based on knowledge of the module LT(Syz F); we have, however, devised an algorithm which simultaneously computes (a subset of) LT(Syz F) and a Gröbner basis of I. We have written a first prototype implementation in C++ using CoCoALib [2] and we are currently working on a new implementation of the algorithm, again in C++, from scratch.
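For contrast with F5, the classical Buchberger algorithm that F5 refines can be sketched compactly. The following toy Python implementation (names and data layout are mine; coefficients must be `Fraction`s) computes a lex Gröbner basis by reducing S-polynomials, with none of F5's syzygy-based criteria:

```python
from fractions import Fraction

# Polynomials: dicts mapping exponent tuples to Fraction coefficients;
# tuple comparison gives the lex order with x1 > x2 > ...

def lead(p):
    return max(p)

def mul_term(p, e, c):
    return {tuple(a + b for a, b in zip(k, e)): v * c for k, v in p.items()}

def sub(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, Fraction(0)) - c
        if out[e] == 0:
            del out[e]
    return out

def spoly(f, g):
    lf, lg = lead(f), lead(g)
    lcm = tuple(max(a, b) for a, b in zip(lf, lg))
    return sub(mul_term(f, tuple(a - b for a, b in zip(lcm, lf)), 1 / f[lf]),
               mul_term(g, tuple(a - b for a, b in zip(lcm, lg)), 1 / g[lg]))

def normal_form(p, G):
    """Multivariate division: remainder of p modulo the list G."""
    r = {}
    p = dict(p)
    while p:
        lt = lead(p)
        for g in G:
            lg = lead(g)
            if all(a >= b for a, b in zip(lt, lg)):
                q = tuple(a - b for a, b in zip(lt, lg))
                p = sub(p, mul_term(g, q, p[lt] / g[lg]))
                break
        else:
            r[lt] = p.pop(lt)
    return r

def buchberger(F):
    """Naive Buchberger loop: reduce every S-polynomial, keep the nonzero ones."""
    G = [dict(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        r = normal_form(spoly(G[i], G[j]), G)
        if r:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G
```

F5's point is precisely to avoid most of the useless zero reductions this naive loop performs.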

16 citations


Journal ArticleDOI
TL;DR: A software toolbox ApaTools for approximate polynomial algebra is presented, which includes Maple and Matlab functions implementing advanced numerical algorithms for practical applications, as well as basic utility routines that can be used as building blocks for developing other numerical and symbolic methods in computational algebra.
Abstract: Approximate polynomial algebra has become an emerging area of study in recent years, with a broad spectrum of applications. In this paper, we present a software toolbox ApaTools for approximate polynomial algebra. This package includes Maple and Matlab functions implementing advanced numerical algorithms for practical applications, as well as basic utility routines that can be used as building blocks for developing other numerical and symbolic methods in computational algebra.

16 citations




Journal ArticleDOI
TL;DR: In this paper, the authors define a meaning for multivariate partial fraction expansion in the context of computer algebra systems and provide a corresponding algorithm; most existing examples in the literature focus on only one variable, with any other variables considered mere parameters.
Abstract: The Derive computer-algebra program has Expand as one of the menu choices: The user is prompted for successively less main expansion variables, which can be all of the variables or any proper subset. It is clear how to proceed when the expression is a polynomial: Fully distribute with respect to all expansion variables, but collect as coefficient polynomials all terms that share the same exponents for the expansion variables. Derive uses a partially factored form, so the collected coefficient polynomials can be fortuitously partially factored.For rational expressions the expand function does partial fraction expansion because it is the most useful kind of rational expansion. However, most other computer algebra systems and examples in the literature focus on partial fraction expansion with respect to only one variable, where any other variables are considered mere parameters. For consistency with multivariate polynomial expansion, we wanted a useful and well-defined meaning for multivariate partial fraction expansion. This paper provides such a definition and a corresponding algorithm.
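The univariate building block of any such expansion is standard. A minimal sketch in Python (my own illustration, not Derive's algorithm), for a proper fraction with distinct rational poles, computes each residue by evaluating the numerator and the product of pole differences:

```python
from fractions import Fraction

def eval_poly(coeffs, x):
    """Horner evaluation; coeffs lowest degree first."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def partial_fractions(num, poles):
    """num(x) / prod(x - a) with distinct rational poles a: return {a: residue},
    so that the fraction equals sum of residue/(x - a).
    Assumes deg(num) < number of poles (proper fraction)."""
    res = {}
    for a in poles:
        d = Fraction(1)
        for b in poles:
            if b != a:
                d *= (a - b)          # derivative of the denominator at a
        res[a] = eval_poly(num, a) / d
    return res
```

For example, 1/((x-1)(x-2)) expands as -1/(x-1) + 1/(x-2).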

9 citations


Journal ArticleDOI
TL;DR: This work focuses on how to compute the reduced form of a transfer function, which is important for several problems in nonlinear control theory, such as accessibility or construction of a minimal state-space realization.
Abstract: Let k be a commutative field, σ an automorphism of k, and δ a derivation on k with respect to σ. Ore in [7] defines a (univariate) polynomial ring k[∂;σ,δ], which is called an Ore or skew polynomial ring. An Ore polynomial ring is, in general, noncommutative. Its commutation rule is ∂r = σ(r)∂ + δ(r) for all r ∈ k. For example, C(t)[∂;1,δ], where 1 maps an element to itself and δ = d/dt, is the ring of differential operators over C(t); and C(t)[∂;σ,0], where σ maps f(t) ↦ f(t+1) for all f(t) ∈ C(t) and 0 maps everything to zero, is the ring of shift operators over C(t). For two elements a and b of k[∂;σ,δ] with a ≠ 0, one can form two skew fractions a⁻¹·b and b·a⁻¹, which are called left-hand and right-hand fractions, respectively (see [7]). We are only concerned with left-hand fractions, which we refer to simply as (skew) fractions. They form a skew field k〈∂;σ,δ〉. A fraction a⁻¹·b is said to be reduced if the greatest common left-hand divisor (gcld) of a and b is trivial. Fractions of Ore polynomials arise from the transfer function formalism in nonlinear control theory (see [8, 2, 3]). Transfer functions of nonlinear systems, like those of linear systems, are invariant with respect to any static state transformation. They provide input-output descriptions, and characterize some basic properties of input-output systems. Transfer functions for single-input single-output continuous-time systems are fractions in K〈∂;1,δ〉, while those for discrete-time systems are in K〈∂;σ,0〉. The ground field K is some functional field related to the given system. In the multiple-input multiple-output case, transfer functions are matrices whose entries are fractions. There is a Maple package for computing transfer functions (see [6]). We focus on how to compute the reduced form of a transfer function.
The reduced form is important for several problems in nonlinear control theory, such as accessibility or construction of a minimal state-space realization (see, e.g., [8, 4]). Reduced transfer functions can be manipulated more efficiently when one combines simple systems into a complex one (see [2, 3]). Given a differential equation d^n y/dt^n = f(u, du/dt, ..., d^(n-1)u/dt^(n-1), y, dy/dt, ..., d^(n-1)y/dt^(n-1)), where f is a rational function over some differential field k, we construct a differential field
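The commutation rule ∂r = σ(r)∂ + δ(r) is easy to implement in the differential case (σ = 1). A small Python sketch (representation and names are mine; operators carry polynomial coefficients in t, lowest degree first) multiplies operators via the Leibniz formula ∂^i ∘ b = Σ_k C(i,k) b^(k) ∂^(i-k):

```python
from fractions import Fraction
from math import comb

# A differential operator sum_i p_i(t) * D^i is a list over powers of D = d/dt;
# each p_i is a coefficient list in t, lowest degree first, over Q.

def p_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def p_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def p_diff(a):
    return [Fraction(i) * a[i] for i in range(1, len(a))]

def p_diff_k(a, k):
    for _ in range(k):
        a = p_diff(a)
    return a

def op_mul(A, B):
    """Product of operators using D*r = r*D + r', i.e. the Leibniz formula
    D^i o b = sum_k C(i,k) b^(k) D^(i-k)."""
    out = [[] for _ in range(len(A) + len(B) - 1)]
    for i, ai in enumerate(A):
        if not ai:
            continue
        for j, bj in enumerate(B):
            if not bj:
                continue
            for k in range(i + 1):
                term = p_mul(ai, p_mul([Fraction(comb(i, k))],
                                       p_diff_k(bj, k)))
                out[i - k + j] = p_add(out[i - k + j], term)
    return out
```

As a sanity check, ∂·t = t∂ + 1 and ∂²·t = t∂² + 2∂ both come out of `op_mul` directly.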

8 citations


Journal ArticleDOI
TL;DR: This tutorial demonstrates how to boot and use the KNOPPIX/Math system, which offers a wonderful world of mathematical software without needing to install anything yourself.
Abstract: KNOPPIX/Math offers many documents and mathematical software packages. Once you run the live system, you can enjoy a wonderful world of mathematical software without needing to install anything yourself. We will demonstrate how to boot and use this system.

7 citations


Journal ArticleDOI
TL;DR: Two new modules of the RegularChains library in Maple are presented: ConstructibleSetTools, the first distributed package dedicated to the manipulation of (parametric or not) constructible sets, and ParametricSystemTools, the first implementation of comprehensive triangular decomposition.
Abstract: We present two new modules of the RegularChains library in Maple: ConstructibleSetTools, which is the first distributed package dedicated to the manipulation of (parametric or not) constructible sets, and ParametricSystemTools, which is the first implementation of comprehensive triangular decomposition. We illustrate the functionalities of these new modules by examples and describe our software design and implementation techniques. Since several existing packages have functionalities related to those of our new modules, we include an overview of the algorithms and software for manipulating constructible sets and solving parametric systems.
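As a toy model of the objects ConstructibleSetTools manipulates (not its actual representation or API), a constructible set can be taken as a finite union of pairs (equations, inequations), with a pointwise membership test:

```python
# A constructible set as a finite union of pairs (E, N): the points where
# every polynomial in E vanishes and no polynomial in N does. Polynomials
# are plain callables here; this is only a membership-test sketch.

def member(components, point):
    return any(all(e(*point) == 0 for e in E) and
               all(n(*point) != 0 for n in N)
               for E, N in components)

# Example: the unit circle minus its two points on the y-axis,
# i.e. x^2 + y^2 - 1 = 0 with x != 0.
circle_minus = [([lambda x, y: x * x + y * y - 1],
                 [lambda x, y: x])]
```

Real implementations, of course, must decide membership and emptiness symbolically for all points at once, which is where triangular decompositions come in.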

4 citations


Journal ArticleDOI
TL;DR: The implementation, although targeted at the integro-differential applications described below, follows a generic approach that encompasses commutative/noncommutative polynomials as well as one/two-sided reduction.
Abstract: General Polynomial Reduction. We outline a prototype implementation of the algorithms for integro-differential operators/polynomials in [12]. Our approach is based on a generic implementation of noncommutative monoid rings with reduction, programmed in the functors language of the TH∃OREM∀ system. The integro-differential operators—realized by a suitable quotient of noncommutative polynomials over a given integro-differential algebra—can be used for solving and manipulating boundary problems for linear ordinary differential equations. For describing extensions of integro-differential algebras algorithmically, we use integro-differential polynomials. We use a fixed Gröbner basis for normalizing integro-differential operators. Gröbner bases were invented by Buchberger [2, 3] for commutative polynomials and reinvented in [1] for noncommutative ones. While [9] analyzes the computational aspects of the latter, it does not support two features that are important for our present setting: the usage of infinitely many indeterminates and reduction modulo an (algorithmic) infinite system of polynomials. Among the systems implementing noncommutative Gröbner bases, most address certain special classes (e.g. algebras of solvable type or homogeneous polynomials) which do not include our present case. To the best of our knowledge, none of these allow polynomials with infinitely many indeterminates and reduction modulo an infinite system of polynomials. For details, see the website http://www.ricam.oeaw.ac.at/Groebner-Bases-Implementations. Our implementation, although targeted at the integro-differential applications described below, follows a generic approach that encompasses commutative/noncommutative polynomials as well as one/two-sided reduction. Polynomial rings are formulated as monoid rings (leading to the standard commutative or noncommutative polynomials by employing the additive monoid N^n or the word monoid {x1, ..., xn}*, respectively), while polynomial reduction is realized by a noncommutative adaptation of reduction rings (rings with so-called reduction multipliers) in the sense of [4, 14]; for a noncommutative approach along different lines, we refer to [8]. The TH∃OREM∀ Functor Language. The generic implementation of monoid rings with reduction multipliers is realized through functors, whose principle and implementation in the TH∃OREM∀ version of higher-order predicate logic were introduced by B. Buchberger. The general idea—and its use for structuring those domains in which Gröbner basis computation is possible—is described in [4, 5], where one also finds references to original and early papers by B. Buchberger on the subject. For a general discussion of functor programming, see also [15]. The TH∃OREM∀ system is designed as an integrated environment for doing mathematics [6], in particular proving, computing, and solving in various domains of mathematics. Its core language is higher-order predicate logic, containing a natural programming language such that algorithms can be coded and verified in a unified formal frame. Functors are a powerful tool for realizing a modular and generic build-up of hierarchical domains in mathematics. For speeding up computations, one may also use the new Java-to-Theorema compiler described in the recent thesis [16]. Starting from the base category of rings and monoids, the monoid ring is the crucial functor that builds up polynomials. After adding reduction multipliers, the functions for enumerating Gröbner bases are added by virtue of an extension functor (a functor that leaves previous operations unchanged and adds new ones). Obviously, the coefficient rings may in turn be monoid rings, and there are also various functors for composing useful monoid rings: word monoids, cartesian products, free products.
Anticipating the explanations given in the following two sections, let us state here one interesting example of a chain of functors that can be realized in our system: integro-differential operators over exponential polynomials with an undetermined function. This proceeds in three stages: starting with, say, Q, the exponential polynomials are obtained as the monoid ring with N×Q as monoid. The second step is the functor of integro-differential polynomials, the last step the functor of integro-differential operators. Integro-Differential Operators. The notion of integro-differential operators [13] is a generalization of the "Green's polynomials" of [11]. They can be seen as an algebraic analog of differential, integral and boundary operators in the context of linear ordinary differential equations (LODEs). They are particularly useful for treating boundary problems for LODEs as they express both the problem's statement (differential equation and boundary conditions) and its solution operator (an integral
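The flavor of noncommutative reduction used here can be demonstrated with a single rewrite rule on the word monoid. A minimal Python sketch (my own illustration, unrelated to the TH∃OREM∀ code): normal-order words over {x, y} modulo the Weyl-algebra-style relation yx → xy + 1:

```python
# Noncommutative polynomials as dicts mapping words (strings over {x, y})
# to integer coefficients; one-sided reduction by the single rewrite rule
# yx -> xy + 1 (normal ordering). Each step either shortens a word or
# reduces its number of inversions, so the loop terminates.

def reduce_poly(p):
    p = dict(p)
    while True:
        target = next((w for w in p if "yx" in w), None)
        if target is None:
            return {w: c for w, c in p.items() if c != 0}
        c = p.pop(target)
        i = target.index("yx")
        swapped = target[:i] + "xy" + target[i + 2:]   # the xy part
        shorter = target[:i] + target[i + 2:]          # the +1 part
        for w in (swapped, shorter):
            p[w] = p.get(w, 0) + c
```

For instance, `reduce_poly({"yyx": 1})` yields `{"xyy": 1, "y": 2}`, matching y·y·x = xy² + 2y.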

Journal ArticleDOI
TL;DR: REDUCE is one of the longest existing software systems in Computer Algebra, initiated and developed more than 40 years ago to ease computations for SLAC, the Stanford Linear Accelerator.
Abstract: REDUCE is one of the longest existing software systems in Computer Algebra. It was initiated and developed more than 40 years ago to ease computations for SLAC, the Stanford Linear Accelerator. Prominent applications were Feynman diagrams, among others. The development has been spearheaded by Anthony C. Hearn (later at the University of Utah and the RAND Corporation) and maintained and improved until now, with the support of a large number of researchers. This has been possible because a module concept allows the developers to work in a non-centralized way and to add code to the system. This does not require the whole system to be rebuilt when a module is changed. There is no loss of performance for code located in a module vs. the 'core system' (which does not exist in the strong sense). Now it is time to make the system freely available, says Tony Hearn: Happy New Year!

Journal ArticleDOI
TL;DR: A slightly weaker notion of reduction is introduced that allows one to reduce a basis to a 'good basis' in polynomial time.
Abstract: Lattice reduction has important applications in very diverse fields of both mathematics and computer science: computer algebra, cryptology, algorithmic number theory, algorithmic group theory, MIMO communications, computer arithmetic, etc. The LLL algorithm allows one to reduce a basis to a 'good basis' (as defined by Lovász in [1]) in polynomial time. However, the quality of the reduction obtained is directly related to the parameter δ (the 3/4 factor in the original algorithm) defined below. For the purpose of this study we introduce a definition of reduction that is slightly weaker than the one described in the original article [1]:
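In dimension 2, lattice reduction is exact and simple: the Lagrange-Gauss algorithm, the special case in which LLL actually achieves a shortest vector and no δ parameter is needed. A sketch in Python using exact rationals (names are mine):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lagrange_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2D lattice basis: returns a basis
    whose first vector is a shortest nonzero lattice vector."""
    b1, b2 = list(b1), list(b2)
    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        # size-reduce b2 against b1 by rounding the Gram coefficient
        mu = Fraction(dot(b1, b2), dot(b1, b1))
        m = round(mu)            # Fraction rounds to nearest (ties to even)
        b2 = [a - m * b for a, b in zip(b2, b1)]
        if dot(b2, b2) >= dot(b1, b1):
            return b1, b2        # ordered: no further swap possible
        b1, b2 = b2, b1
```

For example, the basis (12, 2), (13, 4) reduces to (1, 2), (9, -4).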

Journal ArticleDOI
TL;DR: The aim of this paper is to produce a considerably simpler and shorter proof of this interesting theorem and to add its most important applications, partly in new form, from which it is shown that the Hilbert Basis Theorem is also a simple consequence of Macaulay's Theorem.
Abstract: F. S. Macaulay found a purely combinatorial theorem (see §2) with which he was able to derive, in a simpler way, the Hilbert characteristic function and some new results on polynomial ideals. However, his proof of this theorem is very complicated [2, p. 537: Note]. The aim of this paper is to produce a considerably simpler and shorter proof of this interesting theorem (§2-3). For the sake of completeness and for the comfort of the reader, I also add the most important applications of this theorem (§4-6), partly in new form, from which it will be shown that the Hilbert Basis Theorem is also a simple consequence of Macaulay's Theorem.

Journal ArticleDOI
TL;DR: The main difference from other approaches is that the diagonal elements are replaced not at the beginning of the algorithm but only when they are taken as pivot elements (independently of whether these elements are non-zero).
Abstract: We propose a modification of the usual algorithms for Gaussian elimination, determination of minors and computation of determinants in the case of matrices with symbolic data (polynomials, rational functions, general algebraic structures and the like). The proposed modification improves on the Sasaki-Murao algorithm. The new algorithm aims at improving efficiency for a general class of algebraic data, like polynomials with non-exact coefficients, algebraic structures with zero divisors, rational functions and general functions in several variables. The main difference from other approaches is that we replace the diagonal elements not at the beginning of the algorithm but only when they are taken as pivot elements (independently of whether these elements are non-zero). Therefore, our method abolishes the steps where pivot elements are checked to be non-zero, and even zero elements can be used as pivots.
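For reference, the classical fraction-free baseline on which the Sasaki-Murao line of work builds is Bareiss elimination, where every division is exact. The sketch below (plain Python over the integers, not the authors' modified algorithm) still checks pivots for zero, which is precisely the step the proposed method abolishes:

```python
def bareiss_det(M):
    """Fraction-free Gaussian elimination (Bareiss): exact determinant of an
    integer matrix; every // division below is exact by construction."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:             # pivot check: the step the paper removes
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0             # whole pivot column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```
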

Journal ArticleDOI
Changbo Chen, Liyun Li, Marc Moreno Maza, Wei Pan, Yuzhen Xie 
TL;DR: Different algorithms for computing an irredundant representation of a constructible set, or a family thereof, are presented; a complexity analysis is provided and an experimental comparison is reported.
Abstract: The occurrence of redundant components is a natural phenomenon when computing with constructible sets. We present different algorithms for computing an irredundant representation of a constructible set or a family thereof. We provide a complexity analysis and report on an experimental comparison.

Journal ArticleDOI
TL;DR: An algorithm for the explicit evaluation of Kloosterman sums for GL(n;R) for n ≥ 2 and an implementation in the Mathematica package GL(n)pack are described.
Abstract: An algorithm for the explicit evaluation of Kloosterman sums for GL(n;R) for n ≥ 2 and an implementation in the Mathematica package GL(n)pack are described. Plücker relations in dimensions 3, 4 and 5 are given.
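The classical n = 2 case gives the flavor of what such an evaluation algorithm computes. A direct Python sketch of the ordinary Kloosterman sum S(a,b;p) for prime p (my own illustration, not GL(n)pack; `pow(x, -1, p)` needs Python 3.8+):

```python
import cmath
import math

def kloosterman(a, b, p):
    """Classical Kloosterman sum S(a,b;p) for prime p:
    sum over x in (Z/pZ)* of exp(2*pi*i*(a*x + b*x^{-1})/p)."""
    s = 0
    for x in range(1, p):
        xinv = pow(x, -1, p)                      # modular inverse mod p
        s += cmath.exp(2j * math.pi * ((a * x + b * xinv) % p) / p)
    return s

s = kloosterman(1, 1, 7)
```

The sum is real (terms at x and -x are conjugate), and Weil's bound gives |S(a,b;p)| ≤ 2√p.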

Journal ArticleDOI
TL;DR: The CADO workshop on integer factorization was held in Nancy, France, on October 7-9, 2008; it focused on the Number Field Sieve and its implementation aspects.
Abstract: The CADO workshop on integer factorization was held in Nancy, France, on October 7-9th, 2008. The workshop was focused on the Number Field Sieve and its implementation aspects. Fifty participants attended the workshop. The workshop was organized by Pierrick Gaudry, Emmanuel Thomé and Paul Zimmermann.

Journal ArticleDOI
TL;DR: This paper presents a characteristic set method to solve polynomial equation systems in finite fields and shows that the given characteristic set methods are much more efficient and have better properties than the general characteristic set method.
Abstract: In this paper, we present a characteristic set method to solve polynomial equation systems in finite fields. Due to the special property of finite fields, the given characteristic set methods are much more efficient and have better properties than the general characteristic set method.
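For scale, the baseline such structured methods compete with is exhaustive search over all 2^n points. A Python sketch (my own toy over GF(2), not the paper's characteristic set method):

```python
from itertools import product

def solve_gf2(polys, nvars):
    """All common zeros over GF(2) of polynomials given as callables,
    by exhaustive search: the 2^n baseline that structured methods such
    as characteristic sets aim to beat."""
    return [pt for pt in product((0, 1), repeat=nvars)
            if all(f(*pt) % 2 == 0 for f in polys)]

# Example system over GF(2): x*y + z = 0 and x + y = 0.
sols = solve_gf2([lambda x, y, z: x * y + z,
                  lambda x, y, z: x + y], 3)
```

Here the solution set is {(0,0,0), (1,1,1)}; a characteristic set method would instead return a triangular description of it without enumerating points.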


Journal ArticleDOI
TL;DR: This algorithm is a generalization of the previous one by Rupprecht, Galligo and Chèze, which works after a generic change of coordinates; it relies on a general algorithmic approach based on a study, in a toric surface, of the curve defined by the polynomial to be factored.
Abstract: We describe an efficient algorithm and an implementation for computing an absolute factorization of a bivariate polynomial with a given bidegree. Results of experimentation and an illustrative example are provided. This algorithm is a generalization of the previous one by Rupprecht, Galligo and Chèze, which works after a generic change of coordinates. It relies on a general algorithmic approach based on a study, in a toric surface, of the curve defined by the polynomial to be factored.

Journal ArticleDOI
TL;DR: In his 1965 Ph.D. thesis, Bruno Buchberger invented the algebraic object known today as a Gröbner basis while solving the problem, posed by his advisor Wolfgang Gröbner, of finding a basis of the residue class ring of a zero-dimensional polynomial ideal.
Abstract: In his 1965 Ph.D. thesis [2], Bruno Buchberger invented the algebraic object known today as a Gröbner basis. The thesis problem that Buchberger was given by his advisor, Wolfgang Gröbner, was that of finding a basis of the residue class ring of a zero-dimensional polynomial ideal. It is clear that Gröbner knew generally of a method for computing this [1], and indeed the last paragraph of Gröbner’s 1950 paper [3] reads (in translation)

Journal ArticleDOI
TL;DR: An application of exact linear algebra to performance evaluation and stochastic modeling shows a case where accepting the computational costs of exact algebra to stabilize the numerical evaluation leads to massive computational savings compared to established approaches based on standard (inexact) linear algebra.
Abstract: This note illustrates a novel application of exact linear algebra to performance evaluation and stochastic modeling. We focus on queueing network models, which are high-level abstractions of Markov chains used in capacity planning of computer and communication systems [6, 7]. Until recently, it was prohibitively expensive to compute exact solutions for these models when they describe hundreds or thousands of users interacting with a network of servers, a case of great practical relevance when sizing web architectures. Here, we overview a new approach, which we have recently proposed [3, 4], that overcomes this limitation by means of a linear matrix difference equation that strictly requires exact linear algebra to be evaluated. Exact linear algebra is required in our method because of uncontrollable numerical instabilities that arise if round-off errors are introduced in the recursive evaluation of the matrix difference equation. This difficulty is also exacerbated by the "astronomical" growth of the number of digits of the operands, which can be as large as 10^1000 to 10^10000. The application presented in this paper shows a case where accepting the computational costs of exact algebra to stabilize the numerical evaluation leads to massive computational savings compared to established approaches based on standard (inexact) linear algebra. The remainder of this work is organized as follows. In Section 2, we give minimal background about queueing network models and explain how they can be solved recursively. We point to textbooks such as [7] for extensive background on queueing networks. In Section 3, we discuss the new solution approach based on a matrix difference equation. The numerical properties of the method are discussed in Section 3.1, where we argue that exact algebra is the only viable approach to prevent numerical instability. For the reader interested in experimenting with the problem, we report an example in Section 3.2.
Finally, we draw conclusions in Section 4. Additional material, including a MAPLE implementation of the matrix difference equation approach to queueing networks, can be obtained by contacting the author or from his homepage.
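To see why exact rational arithmetic is natural for these recursions, here is the classical mean-value analysis recursion for a closed single-class product-form queueing network, in exact arithmetic (a standard textbook algorithm sketched by me, not the paper's matrix difference equation):

```python
from fractions import Fraction

def mva(service_demands, n_users):
    """Exact mean-value analysis for a closed single-class product-form
    network of queueing stations: returns throughput X(N) and per-station
    mean queue lengths, all as exact rationals."""
    D = [Fraction(d) for d in service_demands]
    Q = [Fraction(0)] * len(D)
    X = Fraction(0)
    for n in range(1, n_users + 1):
        R = [d * (1 + q) for d, q in zip(D, Q)]   # residence times
        X = Fraction(n) / sum(R)                  # throughput
        Q = [X * r for r in R]                    # Little's law per station
    return X, Q
```

With floating point, the same recursion accumulates round-off as N grows; with `Fraction`, every intermediate is exact, at the cost of growing operand sizes, which is exactly the trade-off discussed above.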

Journal ArticleDOI
TL;DR: New ways of computing a least common multiple (LCM) and a greatest common divisor (GCD) of polynomials represented in Lagrange basis are explored; by considering the underlying linear system of equations, the authors show that this can be done without first converting to the standard power basis representation and back.
Abstract: We explore new ways of computing a least common multiple (LCM) and a greatest common divisor (GCD) of polynomials represented in Lagrange basis, or in other words, by their interpolation data. By considering the underlying linear system of equations, we show that this can be done without first converting to the standard power basis representation and back. There has been considerable work on the manipulation of polynomials represented in alternate bases [1, 2, 3, 4, 5, 7, 9, 10, 11]. The present work is closely related to [7], but the method presented there is not applicable to the Lagrange basis. Our goal is to compute LCM and GCD in exact arithmetic environments where coefficient growth is a concern. For example, conversion between the Lagrange basis and the standard polynomial basis can move a polynomial from a simpler computation domain (e.g. the integers) to one which is more computationally involved (e.g. the quotient field of the domain). This has a negative effect on fraction-free methods unless coefficient GCD operations are first performed to remove the contents. In addition, conversion may introduce unnecessary coefficient growth. Let f1(x) and f2(x) be two polynomials of degrees n1 and n2, respectively. Since the degree of their LCM is at most d = n1 + n2, the interpolation data at d + 1 points may be needed to represent the result. Thus, we assume that the values of f1(x) and f2(x) are known at d + 1 distinct points α0, ..., αd. Alternatively, we can write the polynomials in Lagrange basis:
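The conversion step the authors avoid is ordinary Lagrange interpolation back to the power basis. The sketch below (my own Python illustration) performs that conversion with exact rationals and shows how even integer interpolation data passes through the fraction field:

```python
from fractions import Fraction

def interpolate(points):
    """Convert interpolation data [(alpha_i, f(alpha_i))] to standard-basis
    coefficients (lowest degree first) by Lagrange interpolation. Note that
    the intermediate divisions land in the fraction field even when the
    data are integers, which is the drawback discussed above."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        # build the Lagrange basis polynomial L_i as a coefficient list
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                # multiply basis by (x - xj): shift up, subtract xj * basis
                basis = [Fraction(0)] + basis
                for k in range(len(basis) - 1):
                    basis[k] -= xj * basis[k + 1]
                denom *= (xi - xj)
        for k in range(len(basis)):
            coeffs[k] += yi * basis[k] / denom
    return coeffs
```

For example, `interpolate([(0, 1), (1, 2), (2, 5)])` recovers [1, 0, 1], i.e. x^2 + 1.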

Journal ArticleDOI
TL;DR: The mathematics of voting and choice is a good way to draw talented undergraduates into research in the mathematical sciences, particularly ones who are unsure whether they are interested in research.
Abstract: The mathematics of voting and choice is a good way to draw talented undergraduates into research in the mathematical sciences, particularly ones who are unsure whether they are interested in research. Partly this is because of the obvious attraction of "relevant" subject matter; the mathematics of choice deals with aggregation of preferences in arenas from politics to business to statistics. More practically, this is because both the foundational results (like the impossibility results of Arrow, Sen, and Gibbard/Satterthwaite) and quite recent literature are easily accessible to a student with a more modest background (e.g. linear algebra) than many other similar projects require. Traditionally, conjectures and examples have been found "by hand"; even when some sort of exhaustive enumeration by computer was used to address various paradoxes, more conceptual arguments were usually preferred.

Journal ArticleDOI
Kosaku Nagasaka
TL;DR: The conventional approximate GCD algorithms cannot be used directly for this problem, since determining leading coefficients and content scalars and rounding to integers cannot be done easily: such conversions make polynomials far from the desired nearest GCDs.
Abstract: Symbolic-numeric algorithms for polynomials are very important, especially for practical computations, since we have to operate with empirical polynomials having numerical errors on their coefficients. Recently, a number of algorithms have been introduced for such polynomials, for example approximate univariate GCD and approximate multivariate factorization. However, for polynomials over the integers having coefficients rounded from empirical data, changing their coefficients over the reals does not keep them in the polynomial ring over the integers; hence we need several approximate operations over the integers. In this paper, we discuss computing a polynomial GCD of univariate or multivariate polynomials over the integers approximately. Here, "approximately" means that we compute a polynomial GCD over the integers by changing their coefficients slightly over the integers, so that the input polynomials still remain over the integers.

Journal ArticleDOI
TL;DR: A new algorithm for reconstructing sparse multivariate polynomials in floating point arithmetic, in which neither the number of terms nor any partial degrees are required in the input.
Abstract: To reconstruct a black box multivariate sparse polynomial from its floating point evaluations, the existing algorithms need to know upper bounds for both the number of terms in the polynomial and the partial degree in each of the variables [2]. On the other hand, Rutishauser's quotient-difference algorithm [7, 3], or the qd-algorithm, is an iterative method that can be used to determine the poles of a meromorphic function from its Taylor coefficients. Combining the relation between the qd-algorithm, Prony's method and their connections to the Ben-Or/Tiwari algorithm [6], we present a new algorithm for reconstructing sparse multivariate polynomials in floating point arithmetic, in which neither the number of terms nor any partial degrees are required in the input [1]. For example, consider the bivariate black box polynomial function f(x1,x2) = (1/3)x1^6 + 4x2^4 - 2.1x1^4 - 4x2^2 + x1x2 + 4x1^2, termed the six-hump camel back function. Evaluate f(x1,x2) at the non-equidistant s-th powers of (1/3, 1/5), in which 3 and 5 are relatively prime, and proceed with the qd-algorithm on the data f(1/3, 1/5), f(1/3^2, 1/5^2), f(1/3^3, 1/5^3), .... The magnitude of the values in the first five e-columns drops from 10^-2 to machine precision, while all values in the sixth e-column are of the order of machine precision. We conclude that the number of terms is 6 and obtain the multi-indices in the support directly from the q-values, whose reciprocals tend to 9 = 3^2, 25 = 5^2, 625 = 5^4, 15 = 3^1 5^1, 81 = 3^4 and 729 = 3^6. After all six multi-indices in f are known, the coefficients in f can be recovered from a linear system formed by (at least six) evaluations of f. The convergence to zero of some e-columns in the qd-algorithm can be interpreted in an analogous way as the zero discrepancies in the early termination strategy [5, 4]. This is illustrated in Maple. Our sparse interpolation can be extended to polynomials represented in certain non-standard bases [4].
We show an extension to sparse interpolation of polynomials in the Pochhammer basis and present our current research on sparse polynomial interpolation in the Chebyshev basis, both in floating point arithmetic.
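The Prony/Ben-Or-Tiwari idea behind this approach can be illustrated in exact arithmetic for a hypothetical two-term example (the floating-point qd iteration and the early-termination detection of the actual algorithm are omitted here): the term values 3^e1 * 5^e2 appear as roots of the annihilating polynomial of the evaluation sequence, and trial division by 3 and 5 recovers the exponents.

```python
from fractions import Fraction
from math import isqrt

def prony_2_terms(v):
    """Recover a 2-term bivariate polynomial a1*x1^e1*x2^e2 + a2*x1^g1*x2^g2
    from the four evaluations v[s] = f(3**s, 5**s), s = 0..3.
    Returns a dict {(e1, e2): coefficient, ...}."""
    # Step 1: the sequence satisfies v[s+2] = c1*v[s+1] + c0*v[s];
    # solve the 2x2 Hankel system by Cramer's rule.
    det = Fraction(v[0] * v[2] - v[1] * v[1])
    c0 = Fraction(v[2] * v[2] - v[1] * v[3]) / det
    c1 = Fraction(v[0] * v[3] - v[1] * v[2]) / det
    # Step 2: the roots of z^2 - c1*z - c0 are the term values b = 3^e1 * 5^e2.
    disc = c1 * c1 + 4 * c0
    r = isqrt(int(disc))          # roots are integers here, so disc is a square
    b1, b2 = (c1 + r) / 2, (c1 - r) / 2
    # Step 3: read the multi-index off each root by trial division by 3 and 5.
    def exponents(b):
        b, e = int(b), [0, 0]
        for i, p in enumerate((3, 5)):
            while b % p == 0:
                b //= p
                e[i] += 1
        return tuple(e)
    # Step 4: transposed Vandermonde system for the two coefficients.
    a2 = (v[1] - b1 * v[0]) / (b2 - b1)
    a1 = v[0] - a2
    return {exponents(b1): a1, exponents(b2): a2}
```

For f(x1,x2) = 3*x1^2 + 5*x1*x2 the evaluations are v[s] = 3*9^s + 5*15^s; the roots come out as 15 = 3^1*5^1 and 9 = 3^2, and the coefficients 5 and 3 follow from the Vandermonde system, mirroring how the full algorithm reads the support off the q-values.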

Journal ArticleDOI
TL;DR: A formula is derived which specifies the convergence domain of φ^(∞)(u), assuming that F(x,u) is monic and square-free w.r.t. x and F(x,0,...,0) is square-free.
Abstract: Power-series expansion is a fundamental method in handling analytic functions, not only in mathematics but also in numerical analysis and computer algebra, and knowing the convergence domain is desirable in developing various algorithms. The convergence domain of the power-series expansion of a univariate analytic function is well known. For multivariate functions, however, there seems to be no general formula specifying the convergence domain explicitly. In this poster, given a generic multivariate polynomial F(x,u) := F(x, u_1, ..., u_ℓ) (ℓ ≥ 2), we consider a power-series root φ^(∞)(u) satisfying F(φ^(∞)(u), u) = 0, where the expansion is made at the origin, and we derive a formula which specifies the convergence domain of φ^(∞)(u), assuming that F(x,u) is monic and square-free w.r.t. x and F(x, 0, ..., 0) is square-free. Our derivation is based on a new formulation of Hensel construction developed recently by the present authors; see [SI08] for details. Let F(x,u) be divided as F(x,u) = F_0(x) + F_u(x,u), where F_0(x) = F(x,0). Let n = deg_x(F) and the roots of F_0(x) be α_1, ..., α_n. Let F_0(x) be factorized as F_0(x) = G_0(x)H_0(x). Our new formulation is such that 1) the Hensel factors are expressed in the roots of F_0(x) and 2) all the terms of F_u(x,u) are treated as a mass. Point 1) is realized by expressing polynomials A_i(x) and B_i(x) (i = 0, ..., n−1) in α_1, ..., α_n, where A_i(x) and B_i(x) are determined uniquely to satisfy
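As a one-parameter (ℓ = 1) toy version of computing such a power-series root, the following sketch lifts the root x = 1 of F(x,0) for F(x,u) = x^2 - (1+u) to the series root sqrt(1+u) by Newton iteration on truncated power series. This illustrates Hensel lifting in its simplest form; it is our own example, not the authors' multivariate formulation.

```python
from fractions import Fraction

N = 6  # truncation order: all series are computed modulo u^N

def mul(a, b):
    # Product of two truncated series, modulo u^N.
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def inv(a):
    # Multiplicative inverse of a series with a[0] != 0, modulo u^N.
    c = [Fraction(0)] * N
    c[0] = 1 / Fraction(a[0])
    for k in range(1, N):
        c[k] = -c[0] * sum(a[j] * c[k - j] for j in range(1, k + 1))
    return c

def series(*coeffs):
    s = [Fraction(0)] * N
    for i, v in enumerate(coeffs):
        s[i] = Fraction(v)
    return s

# F(x, u) = x^2 - (1 + u), with d/dx F = 2x.  Newton iteration
# x <- x - F(x)/F'(x) doubles the number of correct series terms
# per step, so 3 steps exceed the truncation order 6.
x = series(1)            # root of F(x, 0) = x^2 - 1 to lift
one_plus_u = series(1, 1)
for _ in range(3):
    F = sub(mul(x, x), one_plus_u)
    dF = mul(series(2), x)
    x = sub(x, mul(F, inv(dF)))

# x now holds sqrt(1+u) = 1 + u/2 - u^2/8 + u^3/16 - 5u^4/128 + 7u^5/256 + O(u^6)
```

The computed coefficients agree with the binomial series for (1+u)^(1/2), and the quadratic convergence of the lifting is exactly the behaviour the Hensel construction formalizes.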

Journal ArticleDOI
TL;DR: This work recalls that determining the integrals of a standard homogeneous linear differential equation reduces to solving the so-called characteristic equation, which is closely connected with the theory of algebraic equations.
Abstract: Introduction. It is well-known that the theory of linear differential equations with constant coefficients is closely connected with the theory of algebraic equations. In general, determining the integrals of a standard homogeneous linear differential equation u^(s) + c_1 u^(s-1) + c_2 u^(s-2) + ... + c_s u = 0 (1) reduces to solving the so-called characteristic equation x^s + c_1 x^(s-1) + c_2 x^(s-2) + ... + c_s = 0. (2)
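For a concrete instance of this correspondence with s = 2 (our own example), take u'' - 3u' + 2u = 0: substituting u = e^(rt) gives u^(k) = r^k e^(rt), so the residual is e^(rt)(r^2 - 3r + 2), which vanishes exactly at the characteristic roots.

```python
import math

# Characteristic equation of u'' - 3u' + 2u = 0:  x^2 - 3x + 2 = 0.
c = [1.0, -3.0, 2.0]                       # leading, middle, constant coefficient
disc = c[1] ** 2 - 4 * c[0] * c[2]
r1 = (-c[1] + math.sqrt(disc)) / (2 * c[0])
r2 = (-c[1] - math.sqrt(disc)) / (2 * c[0])

def residual(r, t=0.7):
    # For u = e^{rt}, u^{(k)} = r^k e^{rt}, so the ODE residual is
    # e^{rt} * (r^2 - 3r + 2); it vanishes iff r is a characteristic root.
    u = math.exp(r * t)
    return r * r * u - 3 * r * u + 2 * u
```

The roots come out as 2 and 1, and both residuals are zero up to floating-point rounding, so e^t and e^(2t) are integrals of the equation, as the reduction to (2) predicts.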

Journal ArticleDOI
TL;DR: A direct method is presented, which is applicable when we are additionally allowed to compute the floor of a polynomial expression that involves real algebraic numbers; since the real roots of polynomials with real algebraic coefficients can themselves be expressed as real algebraic numbers, this solves the problem of computing a rational separating two real algebraic numbers.
Abstract: We are interested in the following problem: given two (distinct) real algebraic numbers in isolating interval representation, that is, an isolating interval with rational endpoints and a square-free polynomial with integer coefficients, can we compute a number between them as a rational function of the coefficients of the polynomials that define these two numbers? Assume that the order of the two numbers is known (we will remove this assumption in the sequel). If we are given intervals that contain the real algebraic numbers and a procedure to refine them, we can solve our problem as follows: we refine the intervals until they become disjoint (this will happen eventually, since we assume that the algebraic numbers are not equal), and then we compute a rational between the intervals, which separates the algebraic numbers. However, this iterative approach depends on separation bounds, e.g. [7]. We present a direct method, which is applicable when we are additionally allowed to compute the floor of a polynomial expression that involves real algebraic numbers. The problem arises when we wish to compute rational numbers that isolate the roots of an integer polynomial of small degree, say ≤ 5 [2]. Also in geometry, in order to analyse the intersection of two quadrics P and Q [6], one needs to determine the real roots of the polynomial det(P + xQ) = 0, their multiplicities and a value in between each of these roots. Another motivation comes from the arrangement of quadrics [4]. In this case a rational number is needed that separates two real roots of two polynomials with real algebraic numbers as coefficients. The real roots of such polynomials can be expressed as real algebraic numbers, and so we face the problem of computing a rational separating two real algebraic numbers.
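The iterative approach described above (refine the two isolating intervals until they are disjoint, then pick a rational in between) can be sketched as follows; the representation and helper names are our own, and we assume each square-free polynomial changes sign on its isolating interval.

```python
from fractions import Fraction

def ev(p, x):
    # Horner evaluation of p (integer coefficients, low-to-high) at rational x.
    r = Fraction(0)
    for c in reversed(p):
        r = r * x + c
    return r

def refine(p, lo, hi):
    # One bisection step on an isolating interval of a root of p;
    # assumes p changes sign on [lo, hi].
    mid = (lo + hi) / 2
    if ev(p, mid) == 0:              # the midpoint is the root itself
        return mid, mid
    if ev(p, lo) * ev(p, mid) < 0:
        return lo, mid
    return mid, hi

def separating_rational(a, b):
    """Iterative sketch: refine both isolating intervals until disjoint,
    then return a rational strictly between the two (distinct) algebraic
    numbers.  a, b = (square-free integer polynomial, (lo, hi))."""
    (p, (pl, ph)), (q, (ql, qh)) = a, b
    pl, ph, ql, qh = map(Fraction, (pl, ph, ql, qh))
    while not (ph < ql or qh < pl):  # terminates because the numbers differ
        pl, ph = refine(p, pl, ph)
        ql, qh = refine(q, ql, qh)
    return (ph + ql) / 2 if ph < ql else (qh + pl) / 2
```

For example, with sqrt(2) given by x^2 - 2 on (1, 2) and sqrt(3) by x^2 - 3 on (1, 2), a few bisection steps make the intervals disjoint and yield a rational strictly between the two roots. The direct method of the paper avoids exactly this data-dependent number of refinement steps.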