
Showing papers presented at "Symbolic Numeric Computation in 2009"


Proceedings ArticleDOI
03 Aug 2009
TL;DR: An effective symbolic-numeric cylindrical algebraic decomposition (SNCAD) algorithm and a variant specially designed for QE are proposed, based on the authors' previous work, and their implementation is reported.
Abstract: Recently, quantifier elimination (QE) has attracted great interest in many fields of science and engineering. In this paper an effective symbolic-numeric cylindrical algebraic decomposition (SNCAD) algorithm and a variant specially designed for QE are proposed, based on the authors' previous work, and our implementation of them is reported. Based on analysis of experimental performance, we improve the design/synthesis of the SNCAD for practical use, combining existing efficient computational techniques with several newly introduced ones. The practicality of the SNCAD is examined on a number of experiments, including practical engineering problems, which also reveal the quality of the implementation.

54 citations
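As a toy illustration of what QE produces (not of the SNCAD algorithm itself), one can check numerically that the quantified formula ∃x ∈ R: x^2 + bx + c ≤ 0 is equivalent to the quantifier-free output b^2 − 4c ≥ 0; a minimal sketch:

```python
# Toy check of a quantifier-elimination result (not the SNCAD algorithm):
# the formula  exists x in R: x^2 + b*x + c <= 0  is equivalent to the
# quantifier-free condition  b^2 - 4*c >= 0.  We verify the equivalence
# numerically on random (b, c) samples.
import random
import numpy as np

def exists_x(b, c, grid=np.linspace(-50.0, 50.0, 20001)):
    """Brute-force check of 'exists x in R: x^2 + b*x + c <= 0' on a fine grid."""
    return bool(np.any(grid**2 + b * grid + c <= 0.0))

def quantifier_free(b, c):
    """Quantifier-free equivalent produced by QE: b^2 - 4c >= 0."""
    return b * b - 4.0 * c >= 0.0

random.seed(0)
disagreements = sum(
    exists_x(b, c) != quantifier_free(b, c)
    for b, c in ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1000))
)
print("disagreements on 1000 random samples:", disagreements)   # expected: 0
```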


Proceedings ArticleDOI
03 Aug 2009
TL;DR: This paper is focused on the comparison of black-box implementations of state-of-the-art algorithms for isolating real roots of univariate polynomials over the integers and indicates that for most instances the solvers based on Continued Fractions are among the best methods.
Abstract: Real solving of univariate polynomials is a fundamental problem with several important applications. This paper focuses on the comparison of black-box implementations of state-of-the-art algorithms for isolating real roots of univariate polynomials over the integers. We have tested 9 different implementations based on symbolic-numeric methods, Sturm sequences, Continued Fractions and Descartes' rule of signs. The methods under consideration were developed at the GALAAD group at INRIA, the VEGAS group at LORIA and the MPI Saarbrücken. We compared their sensitivity with respect to various aspects such as degree, bitsize or root separation of the input polynomials. Our datasets consist of 5,000 polynomials from many different settings, with maximum coefficient bitsize up to 8,000 bits, and the total running time of the experiments was about 50 hours. Notably, all implementations of the theoretically exact methods provided correct results throughout this extensive study. For each scenario we identify the currently most adequate method, and we point to weaknesses in each approach, which should lead to further improvements. Our results indicate that there is no "best method" overall, but for most instances the solvers based on Continued Fractions are among the best methods. To the best of our knowledge, this is the largest set of tests for univariate real solving to date.

50 citations
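Exact real-root isolation of the kind benchmarked here can be tried directly in sympy (which is not one of the nine tested implementations); a minimal sketch:

```python
# Minimal example of exact real-root isolation for an integer univariate
# polynomial, in the spirit of the solvers benchmarked in the paper
# (sympy is used here; it is not one of the nine tested implementations).
from sympy import Poly, Rational
from sympy.abc import x

p = Poly(x**5 - 3*x**4 - 2*x**3 + 7*x**2 - x - 1, x)

# Each entry is ((a, b), multiplicity) with rational endpoints a < b such
# that the interval (a, b) contains exactly one real root.
for (a, b), mult in p.intervals():
    print(f"root of multiplicity {mult} in ({a}, {b})")

# Intervals can be refined to any requested width, still with exact
# rational endpoints.
for (a, b), mult in p.intervals(eps=Rational(1, 10**6)):
    print(f"refined: ({a}, {b}), width {float(b - a):.2e}")
```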


Proceedings ArticleDOI
03 Aug 2009
TL;DR: This work describes the development of rigorous tools to determine enclosures of flows of general nonlinear differential equations based on Picard iterations, with particular emphasis on methods that have favorable long term stability, which is achieved using suitable preconditioning and other methods.
Abstract: Taylor models combine the advantages of numerical methods (efficiency, tightly controlled resources, and the ability to handle very complex problems) with the advantages of symbolic approaches (in particular, rigor and the ability to treat functional dependencies instead of mere points). The resulting differential algebraic calculus, involving an algebra with differentiation and integration, is particularly amenable to the study of ODEs and PDEs based on fixed-point problems from functional analysis. We describe the development of rigorous tools to determine enclosures of flows of general nonlinear differential equations based on Picard iterations. Particular emphasis is placed on the development of methods that have favorable long-term stability, which is achieved using suitable preconditioning and other methods. Applications of the methods are presented, including determination of rigorous enclosures of flows of ODEs in the theory of chaotic dynamical systems.

36 citations
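The Picard iteration underlying these flow enclosures is easy to reproduce symbolically; the sketch below computes only the polynomial part, with sympy, and omits the rigorous interval remainder that makes the paper's enclosures validated:

```python
# Picard iteration for the IVP x'(t) = f(t, x), x(0) = x0, carried out
# symbolically with sympy.  Each iterate is a polynomial approximation of the
# flow; the paper's Taylor models additionally carry a rigorous interval
# remainder, which is omitted in this plain sketch.
import sympy as sp

t = sp.symbols('t')
f = lambda t_, x_: x_          # x' = x, exact flow x(t) = x0 * exp(t)
x0 = sp.Integer(1)
order = 8                      # truncation order of the polynomial part

approx = x0                    # P_0(t) = x0
for _ in range(order):
    s = sp.symbols('s')
    integrand = f(s, approx.subs(t, s))
    approx = x0 + sp.integrate(integrand, (s, 0, t))
    # Truncate to the chosen order so iterates stay small.
    approx = sp.series(approx, t, 0, order + 1).removeO()

print(sp.expand(approx))       # matches the degree-8 Taylor polynomial of exp(t)
print(sp.series(sp.exp(t), t, 0, order + 1).removeO())
```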


Proceedings ArticleDOI
03 Aug 2009
TL;DR: This work shows how the methods can be used for the problem of rigorous global search based on a branch and bound approach, where Taylor models are used to prune the search space and resolve constraints to high order.
Abstract: A Taylor model of a smooth function f over a sufficiently small domain D is a pair (P, I), where P is the Taylor polynomial of f at a point d in D and I is an interval such that f differs from P by not more than I over D. As such, Taylor models are a hybrid: numerical techniques handle the interval and the coefficients of P, while algebraic techniques handle the manipulation of polynomials. A calculus including addition, multiplication and differentiation/integration is developed to compute Taylor models for code lists, resulting in a method to compute rigorous enclosures of arbitrary computer functions in terms of Taylor models. The methods combine the advantages of numeric methods, namely finite size of representation, speed, and no limitations on the objects on which operations can be carried out, with those of symbolic methods, namely the ability to treat functions instead of points and to make rigorous statements. We show how the methods can be used for the problem of rigorous global search based on a branch-and-bound approach, where Taylor models are used to prune the search space and resolve constraints to high order. Compared to other rigorous global optimizers based on intervals and linearizations, the methods allow the treatment of complicated functions with long code lists and with large amounts of dependency. Furthermore, the underlying polynomial form allows the use of other efficient bounding and pruning techniques, including the linear dominated bounder (LDB) and the quadratic fast bounder (QFB).

23 citations
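For contrast, a toy branch-and-bound global minimizer using naive interval arithmetic (not Taylor models) is sketched below; its bounds suffer from exactly the dependency problem that the Taylor-model bounders (LDB, QFB) are designed to reduce:

```python
# Toy rigorous-style global minimization by branch and bound with naive
# interval arithmetic (plain intervals, not Taylor models, so the bounds are
# much looser than what LDB/QFB-style Taylor-model bounders would give).
import heapq

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def imul(a, b):
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))
def ipow(a, k):
    r = (1.0, 1.0)
    for _ in range(k):
        r = imul(r, a)
    return r

def f_interval(x):
    # f(x) = x^4 - 4*x^2 + x evaluated in interval arithmetic
    return iadd(iadd(ipow(x, 4), imul((-4.0, -4.0), ipow(x, 2))), x)

def f_point(x):
    return x**4 - 4*x**2 + x

def minimize(lo, hi, tol=1e-8):
    best_upper = min(f_point(lo), f_point(hi), f_point(0.5 * (lo + hi)))
    heap = [(f_interval((lo, hi))[0], lo, hi)]
    while heap:
        lower, a, b = heapq.heappop(heap)
        if lower > best_upper - tol:        # box cannot contain the minimum: prune
            continue
        m = 0.5 * (a + b)
        best_upper = min(best_upper, f_point(m))
        if b - a < tol:
            continue
        for c, d in ((a, m), (m, b)):
            lb = f_interval((c, d))[0]
            if lb <= best_upper:
                heapq.heappush(heap, (lb, c, d))
    return best_upper

print(minimize(-3.0, 3.0))   # approximates the global minimum of x^4 - 4 x^2 + x on [-3, 3]
```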


Proceedings ArticleDOI
03 Aug 2009
TL;DR: Here a hybrid symbolic-numerical algorithm is applied for certifying that 4 polynomials can be written as an exact fraction of two polynomial sums-of-squares (SOS) with rational coefficients.
Abstract: For a proof of the monotone column permanent (MCP) conjecture in dimension 4 it is sufficient to show that 4 polynomials, which come from the permanents of real matrices, are nonnegative for all real values of the variables; the degrees and the numbers of variables of these polynomials are all 8. Here we apply a hybrid symbolic-numerical algorithm for certifying that each of these polynomials can be written as an exact fraction of two polynomial sums of squares (SOS) with rational coefficients.

21 citations
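The exact-certification half of such hybrid algorithms can be illustrated on a classical textbook SOS example (not one of the four MCP polynomials); the numerical SDP and rational rounding that would produce the candidate decomposition are assumed here:

```python
# Exact verification of a rational sum-of-squares certificate with sympy.
# In the hybrid symbolic-numeric workflow, a numerical SDP solver produces an
# approximate Gram matrix, its entries are rounded to nearby rationals, and
# the resulting identity is then checked exactly.  Here we only show the
# exact check, on a classical SOS example (not one of the MCP polynomials).
from sympy import Rational, expand, symbols

x, y = symbols('x y')

p = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

# Candidate rational SOS decomposition (assumed to come from SDP + rounding).
sos = Rational(1, 2)*(2*x**2 - 3*y**2 + x*y)**2 + Rational(1, 2)*(y**2 + 3*x*y)**2

# Exact check: the difference must be identically zero as a polynomial.
assert expand(p - sos) == 0
print("exact SOS certificate verified:", expand(sos) == expand(p))
```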


Proceedings ArticleDOI
03 Aug 2009
TL;DR: A new algorithm for isolating the real roots of a system of multivariate polynomials, given in the monomial basis, inspired by existing subdivision methods in the Bernstein basis is presented.
Abstract: We present a new algorithm for isolating the real roots of a system of multivariate polynomials given in the monomial basis. It is inspired by existing subdivision methods in the Bernstein basis; it can be seen as a generalization of the univariate continued fraction algorithm or, alternatively, as a full analog of Bernstein subdivision in the monomial basis. The subdivided domains are represented through homographies, which allows us to use only integer arithmetic and to treat unbounded regions efficiently. We use univariate bounding functions, projection and preconditioning techniques to reduce the domain of search. The resulting boxes have optimized rational coordinates, corresponding to the first terms of the continued fraction expansion of the real roots. An extension of Vincent's theorem to multivariate polynomials is proved and used for the termination of the algorithm. New complexity bounds are provided for a simplified version of the algorithm. Examples computed with a preliminary C++ implementation illustrate the approach.

17 citations
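The univariate continued-fraction algorithm that is being generalized can be sketched in a few lines; the version below isolates positive real roots of a squarefree integer polynomial with Descartes' rule of signs and the homographies x -> x + 1 and x -> 1/(1 + x), without the lower bounds on roots and the preconditioning used in serious implementations:

```python
# Sketch of the univariate continued-fraction (Vincent-style) algorithm that
# the paper generalizes to several variables.  It isolates the positive real
# roots of a squarefree integer polynomial using only integer arithmetic:
# Descartes' rule of signs plus the homographies x -> x + 1 and x -> 1/(1+x).
from fractions import Fraction

def sign_variations(coeffs):
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def taylor_shift_1(coeffs):
    """coeffs of p(x) (highest degree first) -> coeffs of p(x + 1)."""
    out = list(coeffs)
    n = len(out)
    for i in range(1, n):              # repeated Horner-style shifting
        for j in range(1, n - i + 1):
            out[j] += out[j - 1]
    return out

def reverse(coeffs):
    """coeffs of p(x) -> coeffs of x^deg(p) * p(1/x)."""
    return list(reversed(coeffs))

def isolate_positive(coeffs, mobius=(1, 0, 0, 1)):
    """Isolating intervals (Fractions; None means +infinity) for the positive
    roots of the polynomial, in original coordinates."""
    a, b, c, d = mobius                # current map M(x) = (a*x + b)/(c*x + d)
    v = sign_variations(coeffs)
    if v == 0:
        return []
    if v == 1:                         # exactly one positive root in M((0, oo))
        left = Fraction(b, d)
        right = Fraction(a, c) if c != 0 else None
        lo, hi = sorted([left, right], key=lambda t: (t is None, t))
        return [(lo, hi)]
    res = []
    # roots of the current polynomial in (1, oo): substitute x -> x + 1
    res += isolate_positive(taylor_shift_1(coeffs), (a, a + b, c, c + d))
    if sum(coeffs) == 0:               # exact root at x = 1, i.e. at M(1)
        res.append((Fraction(a + b, c + d), Fraction(a + b, c + d)))
    # roots in (0, 1): substitute x -> 1/(1 + x)
    res += isolate_positive(taylor_shift_1(reverse(coeffs)), (b, a + b, d, c + d))
    return res

# p(x) = (3x - 1)(2x - 3)(2x - 5) = 12x^3 - 52x^2 + 61x - 15, roots 1/3, 3/2, 5/2
print(isolate_positive([12, -52, 61, -15]))
```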


Proceedings ArticleDOI
03 Aug 2009
TL;DR: This paper introduces matrix representations of algebraic curves and surfaces for Computer Aided Geometric Design and shows how to manipulate these representations by proposing a dedicated algorithm to address the curve/surface intersection problem by means of numerical linear algebra techniques.
Abstract: In this paper, we introduce matrix representations of algebraic curves and surfaces for Computer Aided Geometric Design (CAGD). The idea of using matrix representations in CAGD is quite old. The novelty of our contribution is to allow non-square matrices, an extension motivated by recent research on this topic. We show how to manipulate these representations by proposing a dedicated algorithm that addresses the curve/surface intersection problem by means of numerical linear algebra techniques.

15 citations
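A much simplified, square-matrix variant of the idea can be demonstrated with a Sylvester matrix: the x-coordinates of the intersections of two plane curves are exactly the values where the matrix becomes rank deficient, which numerical linear algebra detects via the smallest singular value. A hedged sketch (not the paper's non-square representations):

```python
# Detecting curve/curve intersections from a matrix representation with
# numerical linear algebra.  This is a simplified, square (Sylvester-matrix)
# variant of the idea: for two plane curves f(x, y) = 0 and g(x, y) = 0, the
# Sylvester matrix of f and g with respect to y becomes rank deficient
# exactly at the x-coordinates of intersection points.
import numpy as np

def sylvester(f_coeffs, g_coeffs):
    """Sylvester matrix of two polynomials given by coefficient lists
    (highest degree first)."""
    m, n = len(f_coeffs) - 1, len(g_coeffs) - 1
    s = np.zeros((m + n, m + n))
    for i in range(n):
        s[i, i:i + m + 1] = f_coeffs
    for i in range(m):
        s[n + i, i:i + n + 1] = g_coeffs
    return s

def smallest_singular_value(x):
    # f(x, y) = y - x^2        (parabola), as a polynomial in y: [1, -x^2]
    # g(x, y) = y^2 + x^2 - 2  (circle),   as a polynomial in y: [1, 0, x^2 - 2]
    s = sylvester([1.0, -x**2], [1.0, 0.0, x**2 - 2.0])
    return np.linalg.svd(s, compute_uv=False)[-1]

xs = np.linspace(-2.0, 2.0, 4001)
sigma = np.array([smallest_singular_value(x) for x in xs])
candidates = [x for x, s0, s1, s2 in zip(xs[1:-1], sigma[:-2], sigma[1:-1], sigma[2:])
              if s1 < 1e-3 and s1 <= s0 and s1 <= s2]
print("intersection x-coordinates approximately:", candidates)
# The parabola y = x^2 and the circle x^2 + y^2 = 2 intersect at x = -1 and x = 1.
```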


Proceedings ArticleDOI
03 Aug 2009
TL;DR: In this paper, an algorithm for reconstructing an exact algebraic number from its approximate value by using an improved parameterized integer relation construction method is presented. The algorithm is applicable to finding the exact minimal polynomial of an algebraic number from an approximate root.
Abstract: We present a new algorithm for reconstructing an exact algebraic number from its approximate value by using an improved parameterized integer relation construction method. Our result is consistent with the error-controlled recovery of an exact rational number from its approximation. The algorithm is applicable to finding the exact minimal polynomial of an algebraic number from its approximate root. This also enables us to provide an efficient method for converting a rational approximation representation into the minimal polynomial representation, and to devise a simple algorithm to factor multivariate polynomials with rational coefficients. Compared with existing methods, our method combines the high efficiency of numerical computation with the exact, stable results of symbolic computation. The experimental results show that the method is more efficient than identify in Maple for obtaining an exact algebraic number from its approximation. Moreover, the working precision (Digits) required by our algorithm is, in theory, far smaller than that of the LLL lattice basis reduction technique. In this paper, we fully implement the process of obtaining exact results from approximate numerical computations.

8 citations
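mpmath's PSLQ-based integer-relation routines (relatives of the parameterized integer relation construction used here, not the authors' implementation) already allow one to experiment with this kind of reconstruction:

```python
# Recovering the exact minimal polynomial of an algebraic number from a
# floating-point approximation, using mpmath's PSLQ-based integer-relation
# routines (a relative of the parameterized integer-relation method used in
# the paper, not the authors' implementation).
from mpmath import mp, sqrt, pslq, findpoly

mp.dps = 50                      # 50 decimal digits of working precision

alpha = sqrt(2) + sqrt(3)        # algebraic of degree 4, known only numerically

# Option 1: ask directly for an integer polynomial of degree <= 4 with root alpha.
coeffs = findpoly(alpha, 4)      # i.e. x^4 - 10*x^2 + 1
print(coeffs)

# Option 2: the underlying integer relation among 1, alpha, ..., alpha^4.
relation = pslq([alpha**k for k in range(5)])
print(relation)                  # same coefficients, up to sign and ordering convention
```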


Proceedings ArticleDOI
03 Aug 2009
TL;DR: A bisection method, based on exclusion and inclusion tests, is used to address the nearest univariate gcd problem formulated as a bivariate real minimization problem of a rational fraction using Smale's α-theory.
Abstract: A bisection method, based on exclusion and inclusion tests, is used to address the nearest univariate gcd problem, formulated as a bivariate real minimization problem for a rational fraction. The paper presents an algorithm, a first implementation and a complexity analysis relying on Smale's α-theory. We report its behavior on an illustrative example.

7 citations
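One standard way to write down such a bivariate rational objective uses the distance from a polynomial to the nearest polynomial having a prescribed root z (2-norm perturbation of the coefficients); the sketch below assumes that formulation and minimizes it by naive grid refinement instead of the paper's certified bisection with exclusion and inclusion tests:

```python
# A nearest-gcd-style objective as a bivariate real minimization: for each
# candidate common root z = u + i*v, the squared distance from (f, g) to the
# nearest pair having z as a common root (2-norm perturbation of all
# coefficients) is |f(z)|^2 / sum_k |z|^(2k) + |g(z)|^2 / sum_k |z|^(2k).
# A naive refine-on-a-grid search stands in for the paper's certified
# bisection with exclusion/inclusion tests.
import numpy as np

f = np.array([1.0, -3.001, 2.0])     # ~ (x - 1)(x - 2), slightly perturbed
g = np.array([1.0, -4.0, 3.0])       #   (x - 1)(x - 3)

def objective(z, p, q):
    nf = sum(abs(z) ** (2 * k) for k in range(len(p)))
    ng = sum(abs(z) ** (2 * k) for k in range(len(q)))
    return abs(np.polyval(p, z)) ** 2 / nf + abs(np.polyval(q, z)) ** 2 / ng

def refine(center, radius, levels=12, n=41):
    best = (np.inf, center)
    for _ in range(levels):
        us = np.linspace(center.real - radius, center.real + radius, n)
        vs = np.linspace(center.imag - radius, center.imag + radius, n)
        for u in us:
            for v in vs:
                z = complex(u, v)
                val = objective(z, f, g)
                if val < best[0]:
                    best = (val, z)
        center, radius = best[1], radius / 4
    return best

val, z = refine(0.0 + 0.0j, 4.0)
print("nearest common root approx:", z, " squared distance approx:", val)
```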


Proceedings ArticleDOI
03 Aug 2009
TL;DR: In this article, a polynomial-time approximation scheme for the supremum of any honest n-variate (n+2)-nomial with a constant term, allowing real exponents as well as real coefficients, was given.
Abstract: We give a high precision polynomial-time approximation scheme for the supremum of any honest n-variate (n+2)-nomial with a constant term, allowing real exponents as well as real coefficients. Our complexity bounds count field operations and inequality checks, and are polynomial in n and the logarithm of a certain condition number. For the special case of polynomials (i.e., integer exponents), the log of our condition number is sub-quadratic in the sparse size. The best previous complexity bounds were exponential in the size, even for n fixed. Along the way, we partially extend the theory of A-discriminants to real exponents and exponential sums, and find new and natural NP_R-complete problems.

7 citations


Proceedings ArticleDOI
03 Aug 2009
TL;DR: This paper overcomes the problem arising in Hulst and Lenstra's algorithm and proposes a new polynomial-time algorithm for factoring bivariate polynomials with rational coefficients, and it proves that this algorithm saves a (log2(mn))^(2+ε) factor in bit complexity compared with the algorithm presented by Hulst and Lenstra.
Abstract: For factoring polynomials in two variables with rational coefficients, an algorithm using transcendental evaluation was presented by Hulst and Lenstra. In their algorithm, a transcendence measure is computed. However, a constant c is necessary to compute the transcendence measure, and the size of c involved in the transcendence measure can greatly influence the efficiency of the algorithm. In this paper, we overcome the problem arising in Hulst and Lenstra's algorithm and propose a new polynomial-time algorithm for factoring bivariate polynomials with rational coefficients. By substituting an approximate algebraic number of high degree for one variable of the bivariate polynomial, we obtain a univariate polynomial. A factor of the resulting univariate polynomial can then be obtained by a numerical root finder and the purely numerical LLL algorithm. The high degree of the algebraic number guarantees that this factor corresponds to a factor of the original bivariate polynomial. We prove that our algorithm saves a (log2(mn))^(2+ε) factor in bit complexity compared with the algorithm presented by Hulst and Lenstra, where (n, m) represents the bi-degree of the polynomial to be factored. We also demonstrate through many significant experiments that our algorithm is practical. Moreover, our algorithm can be generalized to polynomials with more than two variables.

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Multiplying families of "consecutive" transpositions, the authors construct permutations and then subgroups of the symmetric group, and establish and study experimentally some conjectures on the distribution of these transpositions and on the transitivity of the generated subgroups.
Abstract: Our main motivation is to analyze and improve factorization algorithms for bivariate polynomials in C[x,y] which proceed by continuation methods. We consider a Riemann surface X defined by a polynomial f(x,y) of degree d whose coefficients are chosen randomly. Hence we can suppose that X is smooth, that the discriminant δ(x) of f has d(d-1) simple roots, denoted Δ, and that δ(0) ≠ 0, i.e., the corresponding fiber has d distinct points {y1,...,yd}. When we lift a loop 0 ∈ γ ⊂ C - Δ by a continuation method, we get d paths in X connecting {y1,...,yd}, hence defining a permutation of that set. This is called monodromy. Here we present experiments in Maple to obtain statistics on the distribution of the transpositions corresponding to loops turning around each point of Δ. Multiplying families of "consecutive" transpositions, we construct permutations and then subgroups of the symmetric group. This allows us to establish and study experimentally some conjectures on the distribution of these transpositions and then on the transitivity of the generated subgroups. These results provide interesting insights into the structure of such Riemann surfaces (or their unions) and can eventually be used to develop fast algorithms.
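The basic numerical experiment is easy to reproduce: track the fiber over a loop and read off the permutation. A minimal sketch with a small hand-picked curve instead of the random dense curves studied in the paper:

```python
# Numerical monodromy: lift a loop in the x-plane to the Riemann surface of
# f(x, y) = 0 by tracking the d roots in y, and read off the permutation of
# the fiber.  A minimal sketch with a hand-picked curve rather than the random
# dense curves used in the paper.
import numpy as np

def fiber(x):
    """Roots in y of f(x, y) = y^3 - 3*y - x (discriminant points at x = +-2)."""
    return np.roots([1.0, 0.0, -3.0, -x])

def track_loop(center, radius, steps=2000):
    thetas = np.linspace(0.0, 2.0 * np.pi, steps + 1)
    current = fiber(center + radius)          # fiber over the base point
    start = current.copy()
    for theta in thetas[1:]:
        new = fiber(center + radius * np.exp(1j * theta))
        # match each tracked branch to the nearest new root (greedy matching,
        # valid because the step size is small compared to the root separation)
        matched = np.empty_like(current)
        used = set()
        for i, c in enumerate(current):
            j = min((k for k in range(len(new)) if k not in used),
                    key=lambda k: abs(new[k] - c))
            matched[i] = new[j]
            used.add(j)
        current = matched
    # permutation: branch i ends on the sheet where branch sigma(i) started
    sigma = [int(np.argmin(np.abs(start - c))) for c in current]
    return sigma

# loop around the discriminant point x = 2 (but not around x = -2): a transposition
print("permutation around x = 2:", track_loop(2.0, 0.5))
# loop enclosing no discriminant point: the identity
print("permutation around x = 5:", track_loop(5.0, 0.5))
```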

Proceedings Article
03 Aug 2009
TL;DR: The SNC 2009 Call For Papers solicited submissions in several topics including hybrid symbolic-numeric algorithms, approximate polynomial GCD and factorization, resultants and structured matrices for symbolic numerical computation, and differential equations for geometric computation as mentioned in this paper.
Abstract: The aim of SNC 2009 is to offer a forum for researchers in symbolic and numeric computation to present their work, interact, exchange ideas and identify important problems in this research area. SNC 2009 continues the tradition of previous highly successful workshops in the area of symbolic and numeric computation: SNAP 96, held July 15-17, 1996 in Sophia Antipolis, France; SNC 2005, held July 19-21, 2005 in Xi'an, China; and SNC 2007, held July 25-27, 2007 in London, Ontario, Canada. A warm thank you goes to all those who worked hard to make SNC 2009 happen, in particular the local organizers, the program committee members, the anonymous referees, the invited speakers and the participants. The SNC 2009 Call For Papers solicited submissions on several topics: hybrid symbolic-numeric algorithms; approximate polynomial GCD and factorization; symbolic-numeric methods for solving polynomial systems; resultants and structured matrices for symbolic-numeric computation; differential equations for symbolic-numeric computation; symbolic-numeric methods for geometric computation; symbolic-numeric algorithms in algebraic geometry; symbolic-numeric algorithms for nonlinear optimization; numeric computation of characteristic sets and Groebner bases; implementation of symbolic-numeric algorithms; approximate algebraic algorithms; and applications of symbolic-numeric computation. SNC 2009 is sponsored by the University of Tsukuba (http://www.tsukuba.ac.jp/english/). SNC 2009 is in cooperation with ACM SIGSAM (http://www.sigsam.org/), and we wish to thank the Chair of SIGSAM, Dr. Mark Giesbrecht, for his continuous help and support. SNC 2009 is also in cooperation with JSSAC, the Japan Society for Symbolic and Algebraic Computation (http://www.jssac.org/). We hope that the current book of proceedings of SNC 2009 will become a useful resource for researchers in symbolic-numeric computation and related research areas. We also hope that it will become another testament to the liveliness and vibrancy of this research area and a precursor of the developments that await us ahead.

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Lifting modulo powers of two is developed to implement the unified superfast algorithms for solving Toeplitz, Hankel, Vandermonde, Cauchy, and other structured linear systems of equations with integer coefficients in nearly optimal randomized Boolean time.
Abstract: Our unified superfast algorithms for solving Toeplitz, Hankel, Vandermonde, Cauchy, and other structured linear systems of equations with integer coefficients combine Hensel's symbolic lifting and numerical iterative refinement, and they run in nearly optimal randomized Boolean time for both computing the solution and verifying its correctness. The algorithms and the nearly optimal time bounds are extended to some fundamental computations with univariate polynomials that have integer coefficients. Furthermore, we develop lifting modulo powers of two to implement our algorithms in binary mode within fixed precision.
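The unstructured symbolic core of such solvers, Hensel/p-adic lifting of the solution followed by rational reconstruction, fits in a few lines; the sketch below lifts modulo powers of an odd prime (not powers of two) and uses none of the structured-matrix or numerical-refinement machinery that makes the paper's algorithms superfast:

```python
# Dixon/Hensel lifting for an integer linear system A x = b: compute
# A^{-1} mod p once, lift the solution modulo p^k, then recover the exact
# rational solution by rational reconstruction.  Only the unstructured
# symbolic core is shown; the paper combines such lifting (modulo powers of
# two) with numerical iterative refinement and structured-matrix arithmetic.
from fractions import Fraction
from math import isqrt
from sympy import Matrix

def rational_reconstruct(a, m):
    """Find n/d with n ≡ a*d (mod m) and |n|, d <= sqrt(m/2)."""
    bound = isqrt(m // 2)
    r0, t0, r1, t1 = m, 0, a % m, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("reconstruction failed; lift further")
    return Fraction(r1, t1)

def solve_by_lifting(A, b, p=65537, k=12):
    n = A.rows
    Cinv = A.inv_mod(p)                    # A^{-1} mod p; requires gcd(det A, p) = 1
    x, r, pk = Matrix.zeros(n, 1), b.copy(), 1
    for _ in range(k):
        xi = (Cinv * r).applyfunc(lambda v: v % p)   # next p-adic "digit"
        x, pk = x + pk * xi, pk * p
        r = (r - A * xi) / p               # exact integer division
    return Matrix([rational_reconstruct(int(x[i]) % pk, pk) for i in range(n)])

A = Matrix([[3, 1, 4], [1, 5, 9], [2, 6, 5]])
b = Matrix([1, 2, 3])
x = solve_by_lifting(A, b)
print(x)                 # exact rational solution
print(A * x == b)        # True
```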

Proceedings ArticleDOI
03 Aug 2009
TL;DR: A Hensel series is an expansion of a multivariate algebraic function at a singular point, computed from the defining polynomial by the Hensel construction, and it seems to be useful in various applications.
Abstract: A Hensel series is an expansion of a multivariate algebraic function at a singular point, computed from the defining polynomial by the Hensel construction. The Hensel series is well structured and tractable, hence it seems to be useful in various applications. At SNC'07, the present authors reported the following interesting properties of Hensel series, which were found numerically. 1) Convergence and divergence domains co-exist in any small neighborhood of the expansion point. 2) If we trace a Hensel series across a divergence domain, the series may jump from one branch of the original algebraic function to another. In this paper, we clarify these properties theoretically and derive stronger properties.
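At a non-singular point the Hensel construction reduces to Newton lifting of a power-series branch, which is easy to reproduce; the expansions at singular points studied in the paper, where convergence and divergence domains interleave, need the more general construction. A sketch of the regular case only:

```python
# Newton/Hensel lifting of a branch of an algebraic function y(x) defined by
# f(x, y) = 0, as a truncated power series.  This is the non-singular case;
# the Hensel series studied in the paper are expansions at singular points,
# where this simple lifting no longer applies directly.
import sympy as sp

x, y = sp.symbols('x y')
f = y**3 - y + x            # defines three branches; at x = 0, roots y = -1, 0, 1

def hensel_branch(f, y0, order):
    """Truncated series for the branch of f(x, y(x)) = 0 with y(0) = y0,
    assuming f_y(0, y0) != 0 (the non-singular, Hensel-liftable case)."""
    fy = sp.diff(f, y)
    branch = sp.Integer(y0)
    prec = 1
    while prec < order:
        prec = min(2 * prec, order)          # Newton doubles the precision
        correction = sp.series(f.subs(y, branch) / fy.subs(y, branch),
                               x, 0, prec).removeO()
        branch = sp.expand(branch - correction)
        branch = sp.series(branch, x, 0, prec).removeO()
    return branch

s = hensel_branch(f, 0, 8)
print(s)
# check: substituting the series back into f leaves only high-order terms
print(sp.series(f.subs(y, s), x, 0, 8))
```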

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Given a univariate polynomial having well-separated clusters of close roots, this work gives a method for computing the close roots in a cluster simultaneously, without computing the other roots; the method is very efficient and comes with a quite tight error-bound formula.
Abstract: Given a univariate polynomial having well-separated clusters of close roots, we give a method for computing the close roots in a cluster simultaneously, without computing the other roots. We first determine the position and the size of the cluster, as well as the number of close roots it contains. Then, we move the origin to a near-center of the cluster and perform a scale transformation so that the cluster is enlarged to size O(1). These operations transform the polynomial into a very characteristic one. We modify Durand-Kerner's method so as to compute only the close roots in the cluster. The method is very efficient because we can discard most terms of the transformed polynomial. We also give a formula for a quite tight error bound. We demonstrate the high efficiency of our method through numerical experiments.
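The unmodified Durand-Kerner (Weierstrass) iteration that is being adapted takes only a few lines; the cluster-specific shift, scaling and term-discarding refinements of the paper are not reproduced in this sketch:

```python
# Plain Durand-Kerner (Weierstrass) iteration for all roots of a monic
# polynomial.  The paper modifies this iteration so that, after shifting the
# origin to the cluster and rescaling it to size O(1), only the close roots
# inside the cluster are iterated on; that refinement is not reproduced here.
import numpy as np

def durand_kerner(coeffs, iterations=100):
    """coeffs: monic polynomial, highest degree first."""
    n = len(coeffs) - 1
    # standard initialization: points on a spiral, avoiding symmetry
    z = 0.4 + 0.9j
    roots = np.array([z**k for k in range(1, n + 1)], dtype=complex)
    for _ in range(iterations):
        for i in range(n):
            p_i = np.polyval(coeffs, roots[i])
            denom = np.prod([roots[i] - roots[j] for j in range(n) if j != i])
            roots[i] -= p_i / denom
    return roots

# A cluster of three close roots near 1 plus two well-separated roots.  In
# plain double precision the clustered roots come out with limited accuracy,
# which is exactly the difficulty the paper addresses.
true_roots = [1.0, 1.0 + 1e-3, 1.0 - 1e-3, -2.0, 3.0]
coeffs = np.poly(true_roots)          # monic polynomial with these roots
print(np.sort_complex(durand_kerner(coeffs, iterations=200)))
```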

Proceedings ArticleDOI
03 Aug 2009
TL;DR: This work proposes a new method that reduces the number of exact computational steps needed for obtaining exact results in algebraic algorithms using zero rewriting and symbols, and mostly uses floating-point computations.
Abstract: For a certain class of algebraic algorithms, we propose a new method that reduces the number of exact computational steps needed to obtain exact results. The method is a floating-point interval method using zero rewriting and symbols. Zero rewriting, which comes from stabilization techniques, rewrites an interval coefficient to the zero interval if the interval contains zero. Symbols are used to keep track of the execution path of the original algorithm with exact computations, so that the associated real coefficients can be computed by evaluating the symbols. The key point is that at each zero rewriting one checks, by exploiting the associated symbol, whether the rewriting is really correct. This method mostly uses floating-point computations; exact computations are performed only at the zero-rewriting stages and in the final evaluation to obtain the exact coefficients. Moreover, one does not need to check the correctness of the output.
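The flavor of the method can be conveyed with a tiny example: run a computation in floating-point interval arithmetic and, whenever an interval coefficient contains zero, validate the zero rewriting exactly (here by a plain rational re-evaluation; the paper instead evaluates the recorded symbols):

```python
# Flavor of the zero-rewriting idea on a toy computation: evaluate a quantity
# in floating-point interval arithmetic; whenever its interval contains zero,
# decide exactly (here simply by redoing that quantity in rational arithmetic).
# The paper's method instead records symbols along the execution path and
# evaluates them only when a zero rewriting must be checked.
from fractions import Fraction
from mpmath import iv

def det3(m):
    """3x3 determinant by cofactor expansion (works for any number type)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Row 3 is exactly row 1 + row 2, so the true determinant is exactly zero,
# but plain floating point gives a tiny value of unreliable sign.
entries = [['0.1', '0.2', '0.3'],
           ['0.4', '0.5', '0.6'],
           ['0.5', '0.7', '0.9']]

d_float = det3([[float(v) for v in row] for row in entries])
d_iv = det3([[iv.mpf(v) for v in row] for row in entries])   # interval enclosure

print("plain float determinant:", d_float)
if d_iv.a <= 0 <= d_iv.b:
    # zero rewriting: the interval cannot decide the sign, so this coefficient
    # would be rewritten to zero -- and the rewriting is validated exactly.
    d_exact = det3([[Fraction(v) for v in row] for row in entries])
    print("interval straddles zero; exact value is", d_exact)
else:
    print("sign decided purely in floating-point intervals:", d_iv)
```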

Proceedings ArticleDOI
03 Aug 2009
TL;DR: For the canonical Hénon map, a numerical method based on curve fitting is proposed to find a positively invariant set containing the strange attractor and this work can be generalized to find inequality termination conditions for loops with nonlinear assignments.
Abstract: In this paper, we study positively invariant sets of a class of nonlinear loops and discuss the relation between these sets and the attractors of the loops. For the canonical Hénon map, a numerical method based on curve fitting is proposed to find a positively invariant set containing the strange attractor. This work can be generalized to find inequality termination conditions for loops with nonlinear assignments.

Proceedings ArticleDOI
03 Aug 2009
TL;DR: The so-called characteristic equation det(λI − A) = 0 occurs when looking for solutions of the form x(t) = e^(λt)v for some vector v ∈ C^n.
Abstract: The equation ẋ(t) = Ax(t) (1), where x ∈ C^n and A ∈ C^(n×n), together with (say) initial conditions x(0) = x0, occurs often as a simple model of many applied dynamical phenomena, for instance in theoretical evolution or in the physics of lasers, to name only two out of many possibilities. The so-called characteristic equation det(λI − A) = 0 occurs when looking for solutions of the form x(t) = e^(λt)v for some vector v ∈ C^n. Understanding the exact solution x(t) = exp(At)x0 comes from the eigenvalues (spectrum) of A and, more recently, from the pseudospectrum of A, by which is meant the set of z ∈ C for which ||(zI − A)^(-1)|| ≥ 1/ε, for a given ε > 0 [5, 17].
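The pseudospectrum is straightforward to explore numerically, since z lies in the ε-pseudospectrum exactly when the smallest singular value of zI − A is at most ε; a minimal sketch:

```python
# Numerical eigenvalues and pseudospectrum of a matrix A: the epsilon-
# pseudospectrum is the set of z in C where ||(z I - A)^{-1}|| >= 1/epsilon,
# equivalently where the smallest singular value of (z I - A) is <= epsilon.
import numpy as np

# A small non-normal matrix (non-normality is what makes pseudospectra
# much larger than small disks around the eigenvalues).
A = np.array([[-1.0, 5.0, 0.0],
              [ 0.0, -1.0, 5.0],
              [ 0.0,  0.0, -2.0]])

print("eigenvalues:", np.linalg.eigvals(A))

# Sample sigma_min(z I - A) on a grid in the complex plane.
xs = np.linspace(-4.0, 2.0, 121)
ys = np.linspace(-3.0, 3.0, 121)
eps = 0.1
inside = 0
for xr in xs:
    for yi in ys:
        z = complex(xr, yi)
        smin = np.linalg.svd(z * np.eye(3) - A, compute_uv=False)[-1]
        if smin <= eps:          # z belongs to the eps-pseudospectrum
            inside += 1
print(f"fraction of the grid inside the {eps}-pseudospectrum:",
      inside / (len(xs) * len(ys)))
```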

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Novel randomized preprocessing techniques for solving linear systems of equations and eigen-solving with extensions to the solution of polynomial and secular equations and structured input matrices are proposed.
Abstract: We propose novel randomized preprocessing techniques for solving linear systems of equations and eigen-solving with extensions to the solution of polynomial and secular equations. According to our formal study and extensive experiments, the approach turns out to be effective, particularly in the case of structured input matrices.
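One of the simplest instances of the idea is easy to demonstrate: Gaussian elimination without pivoting breaks down on a perfectly well-conditioned matrix with a zero leading pivot, but succeeds after premultiplication by a random matrix. This illustrates only the general flavor of randomized preprocessing, not the authors' specific preconditioners:

```python
# Flavor of randomized preprocessing: Gaussian elimination with NO pivoting
# fails on a well-conditioned matrix whose leading pivot is zero, but after
# premultiplying the system by a random matrix the elimination goes through.
import numpy as np

def solve_no_pivoting(A, b):
    """LU solve without any pivoting; raises if a (near-)zero pivot is hit."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        if abs(A[k, k]) < 1e-14:
            raise ZeroDivisionError(f"zero pivot at step {k}")
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])

try:
    solve_no_pivoting(A, b)
except ZeroDivisionError as exc:
    print("plain elimination fails:", exc)

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))          # random multiplier (the preprocessing)
x = solve_no_pivoting(R @ A, R @ b)      # same solution, but elimination succeeds
print("solution after random preprocessing:", x)
print("residual:", np.linalg.norm(A @ x - b))
```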

Proceedings ArticleDOI
03 Aug 2009
TL;DR: A survey of the DE-Sinc numerical methods (the Sinc numerical methods developed by Stenger and his school, incorporated with double-exponential transformations), which have the feature that they enjoy the convergence rate O(exp(-κ'n/log n)) for some κ' > 0 even if the function or solution to be approximated has endpoint singularities.
Abstract: The present talk gives a survey of the DE-Sinc numerical methods, i.e., the Sinc numerical methods developed by Stenger and his school, incorporated with double-exponential transformations. The DE-Sinc numerical methods have the feature that they enjoy the convergence rate O(exp(-κ'n/log n)) for some κ' > 0 even if the function, or the solution, to be approximated has endpoint singularities, where n is the number of nodes or basis functions used in the methods.
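The double-exponential transformation itself is easy to demonstrate: substituting x = tanh((π/2) sinh t) and applying the trapezoidal rule handles endpoint singularities gracefully. A sketch of DE quadrature only, not the full family of DE-Sinc methods:

```python
# Double-exponential (tanh-sinh) quadrature on (-1, 1): substitute
# x = tanh((pi/2) sinh t) and apply the trapezoidal rule in t.  Endpoint
# singularities are tamed because the transformed integrand decays doubly
# exponentially.  Accuracy of this simple version is eventually limited by
# evaluating 1 - x*x near the endpoints in double precision.
import numpy as np

def de_quad(f, n=40, t_max=4.0):
    """Integrate f over (-1, 1) using 2n+1 double-exponential nodes."""
    h = t_max / n
    t = h * np.arange(-n, n + 1)
    x = np.tanh(0.5 * np.pi * np.sinh(t))
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    mask = np.abs(x) < 1.0       # drop nodes that round to +-1 (their weights are negligible)
    return np.sum(w[mask] * f(x[mask]))

# integrand with endpoint singularities: int_{-1}^{1} 1/sqrt(1 - x^2) dx = pi
f = lambda x: 1.0 / np.sqrt(1.0 - x * x)
for n in (5, 10, 20, 40):
    approx = de_quad(f, n)
    print(n, approx, abs(approx - np.pi))
```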

Proceedings ArticleDOI
03 Aug 2009
TL;DR: This talk focuses on the approximate parametrization algorithm, and presents an empirical analysis that shows that the input and output curves of the algorithm are close in practice.
Abstract: In this talk we deal with the problem of approximately parametrizing a perturbed rational affine plane curve given implicitly. We present some of our recent results (see [3], [4], [5], [6]) and we describe our ongoing research in this context. More precisely, we focus on our approximate parametrization algorithm in [6], and we present an empirical analysis showing that the input and output curves of the algorithm are close in practice.

Proceedings ArticleDOI
03 Aug 2009
TL;DR: A numerical eigensolver using contour integrals is applied to a polynomial eigenvalue problem that is derived from polynomial equations, and the singular value decomposition is applied to a matrix which appears in the eigensolver.
Abstract: In this paper, we present a method for finding zeros of polynomial equations in a given domain. We apply a numerical eigensolver using contour integrals to a polynomial eigenvalue problem that is derived from the polynomial equations. The Dixon resultant is used to derive a matrix polynomial whose eigenvalues contain the roots of the polynomial equations with respect to one variable. The matrix polynomial obtained by the Dixon resultant is sometimes singular. By applying the singular value decomposition to a matrix which appears in the eigensolver, we can obtain the roots of the given polynomial systems. Experimental results demonstrate the efficiency of the proposed method.
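A univariate caricature of the contour-integral eigensolver: the moments of f'/f over a circle are the power sums of the roots inside, and those roots are the eigenvalues of a small Hankel pencil. The paper applies this machinery to matrix polynomials coming from the Dixon resultant, with an SVD to handle singular pencils; the sketch below covers only the scalar case:

```python
# Univariate sketch of the contour-integral idea (Delves-Lyness / Sakurai-
# Sugiura style): the moments mu_k = (1/2 pi i) \oint z^k f'(z)/f(z) dz over a
# circle equal the power sums of the roots inside, and the roots are the
# eigenvalues of a small Hankel pencil built from these moments.
import numpy as np

def roots_in_disk(coeffs, center, radius, quad_points=512):
    dcoeffs = np.polyder(coeffs)
    theta = 2.0 * np.pi * np.arange(quad_points) / quad_points
    z = center + radius * np.exp(1j * theta)
    logderiv = np.polyval(dcoeffs, z) / np.polyval(coeffs, z)
    moment = lambda k: np.mean(radius * np.exp(1j * theta) * z**k * logderiv)
    n_inside = int(round(moment(0).real))            # number of roots inside
    if n_inside == 0:
        return np.array([])
    mu = [moment(k) for k in range(2 * n_inside)]
    H0 = np.array([[mu[i + j] for j in range(n_inside)] for i in range(n_inside)])
    H1 = np.array([[mu[i + j + 1] for j in range(n_inside)] for i in range(n_inside)])
    # roots inside the disk = generalized eigenvalues of the pencil (H1, H0)
    return np.linalg.eigvals(np.linalg.solve(H0, H1))

# polynomial with roots 1, 2, 3, 4 +- i -- find only those inside |z - 2| < 1.5
p = np.poly([1.0, 2.0, 3.0, 4.0 + 1.0j, 4.0 - 1.0j])
print(roots_in_disk(p, center=2.0, radius=1.5))      # ~ 1, 2, 3 (up to ordering/rounding)
```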

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Algorithms for multivariate GCD and approximate GCD are presented by modifying Barnett's theorem, which is based on the LU-decomposition of the Bézout matrix, and it is shown that the method is more stable and faster than many other methods.
Abstract: We present algorithms for multivariate GCD and approximate GCD by modifying Barnett's theorem, which is based on the LU-decomposition of the Bézout matrix. Our method is suited for multivariate polynomials of large degrees. We also analyze ill-conditioned cases of our method. We show that our method is more stable and faster than many other methods.
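The Bézout-matrix fact behind Barnett's theorem is easy to check numerically: the rank deficiency of the Bézout matrix of f and g equals the degree of their GCD. The sketch below builds the matrix symbolically and reads the rank off an SVD; it does not reproduce the paper's LU-based multivariate and approximate algorithms:

```python
# The Bezout-matrix fact behind Barnett's theorem: for univariate f and g the
# Bezout matrix has rank n - deg(gcd(f, g)), where n = max(deg f, deg g).
# The matrix is built symbolically and its numerical rank is read off an SVD.
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')

f = sp.expand((x - 1) * (x - 2) * (x + 3))
g = sp.expand((x - 1) * (x - 2) * (x - 5))

def bezout_matrix(f, g, x, y):
    n = int(max(sp.degree(f, x), sp.degree(g, x)))
    cayley = sp.expand(sp.cancel((f * g.subs(x, y) - f.subs(x, y) * g) / (x - y)))
    poly = sp.Poly(cayley, x, y)
    return sp.Matrix(n, n, lambda i, j: poly.nth(i, j))

B = bezout_matrix(f, g, x, y)
svals = np.linalg.svd(np.array(B.tolist(), dtype=float), compute_uv=False)
rank = int(np.sum(svals > 1e-9 * svals[0]))
print("singular values:", svals)
print("estimated gcd degree:", B.shape[0] - rank)     # 2, from the common (x-1)(x-2)
print("exact gcd:", sp.gcd(f, g))
```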

Proceedings ArticleDOI
03 Aug 2009
TL;DR: This work establishes the connection between the reciprocal of a multivariate polynomial and its Taylor expansion, and reconstructs the factors from the Taylor expansion as each irreducible factor, regardless of its multiplicity, can be separately extracted.
Abstract: We present a method to extract factors of multivariate polynomials with complex coefficients in floating point arithmetic. We establish the connection between the reciprocal of a multivariate polynomial and its Taylor expansion. Since the multivariate Taylor coefficients are determined by the irreducible factors of the given polynomial, we reconstruct the factors from the Taylor expansion. As each irreducible factor, regardless of its multiplicity, can be separately extracted, our method can lead toward the complete numerical factorization of multivariate polynomials.

Proceedings ArticleDOI
03 Aug 2009
TL;DR: This talk will survey the proposed error-free transformations for floating-point numbers and show that this new methodology is very useful for constructing efficient error-free numerical algorithms, including error-free fast computational geometric algorithms.
Abstract: We have proposed error-free transformations for floating-point numbers [1]-[3]. This talk will first briefly survey these results. Then the author will show that this new methodology is very useful for constructing efficient error-free numerical algorithms, including error-free fast computational geometric algorithms [4], [5].
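The classical building blocks (Knuth's TwoSum and Dekker/Veltkamp splitting and TwoProduct) take a few lines each, and compensated summation shows the payoff; a sketch independent of the talk's references [1]-[5]:

```python
# The classical error-free transformations: TwoSum (Knuth) returns the exact
# error of a floating-point addition, and Split/TwoProduct (Dekker/Veltkamp)
# the exact error of a multiplication.  Compensated summation built on TwoSum
# illustrates the payoff.
def two_sum(a, b):
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err                    # a + b = s + err exactly

def split(a, factor=2**27 + 1):      # Veltkamp splitting for IEEE double
    c = factor * a
    hi = c - (c - a)
    return hi, a - hi                # a = hi + lo, each with at most 26 significant bits

def two_product(a, b):
    p = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    err = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
    return p, err                    # a * b = p + err exactly

def compensated_sum(xs):
    s, comp = 0.0, 0.0
    for v in xs:
        s, err = two_sum(s, v)
        comp += err                  # accumulate the exact local errors
    return s + comp

data = [1e16, 1.0, -1e16, 1.0] * 1000
print("naive sum:      ", sum(data))              # loses the small terms
print("compensated sum:", compensated_sum(data))  # 2000.0
```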

Proceedings ArticleDOI
03 Aug 2009
TL;DR: An application of the filter diagonalization method (FDM) is studied for solving high-degree univariate algebraic equations with numerical coefficients when only a small portion of the roots is required, namely those near a specified location in the complex plane or near a specified interval.
Abstract: By the use of symbolic computation, a problem given by a set of multivariate algebraic relations is often reduced to a univariate algebraic equation of quite high degree. If the roots are required numerically, we generally have to solve this high-degree algebraic equation by some iterative method. In this paper, an application of the filter diagonalization method (FDM) [5] is studied for solving a high-degree univariate algebraic equation with numerical coefficients when only a small portion of the roots is required, namely those near a specified location in the complex plane or near a specified interval. Recently, FDM has been developed as a technique to selectively compute a small portion of the eigenpairs of a matrix, depending on their eigenvalues. By the companion method, the roots of an algebraic equation of high degree N are computed as the eigenvalues of the companion matrix A after balancing. Usually all eigenvalues can be computed by shifted QR iteration. The amount of computation of the ordinary shifted QR iteration, which does not use the special non-zero structure of the Frobenius companion matrix of degree N, is O(N^3). In [1], it was shown that the amount of computation needed to find all eigenvalues of the size-N Frobenius companion matrix by a special QR iteration which uses the structure is O(N^2). In this paper, we assume that not all but only a small portion of the roots is required. To reduce the elapsed time, inverse iteration (Rayleigh-quotient iteration) is used in parallel. In the inverse iteration, a linear system with the shifted matrix A - ρI is solved. (For a univariate algebraic equation of degree N, A is the Frobenius companion matrix or its balanced version. By using the special sparse structure of the shifted matrix in the LU-decomposition or QR-decomposition, the complexity of solving the linear system is O(N) in both arithmetic and space.) By the use of a well-tuned filter which is a linear combination of resolvents, FDM gives well-approximated eigenpairs whose eigenvalues are near the specified location in the complex plane. Starting from the approximate eigenpairs as initial values, the inverse iteration quickly improves them in a few iterations.
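The companion-matrix route with shifted inverse iteration is easy to sketch with dense linear algebra; the paper additionally exploits the sparse structure of the shifted companion matrix (O(N) per solve) and uses a resolvent-based filter to produce good starting eigenpairs, neither of which is reproduced here:

```python
# A selected root of a polynomial as an eigenvalue of its companion matrix,
# refined by shifted inverse iteration / Rayleigh-quotient iteration.  Dense
# solves are used for simplicity; the paper exploits the sparse structure of
# the shifted companion matrix and uses a resolvent-based filter (FDM) to
# generate good starting pairs near the target region.
import numpy as np

def companion(coeffs):
    """Frobenius companion matrix of a monic polynomial (highest degree first)."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]
    n = len(c) - 1
    A = np.zeros((n, n), dtype=complex)
    A[1:, :-1] = np.eye(n - 1)
    A[:, -1] = -c[:0:-1]             # last column: -c_n, ..., -c_1 (constant term on top)
    return A

def rayleigh_iteration(A, sigma, iters=20):
    """Refine an eigenvalue of A starting from the shift sigma (a root guess)."""
    n = A.shape[0]
    v = np.random.default_rng(1).standard_normal(n) + 0j
    v /= np.linalg.norm(v)
    for _ in range(iters):
        try:
            w = np.linalg.solve(A - sigma * np.eye(n), v)
        except np.linalg.LinAlgError:  # shift hit an eigenvalue exactly
            break
        v = w / np.linalg.norm(w)
        sigma = v.conj() @ A @ v       # Rayleigh quotient update
    return sigma

coeffs = np.poly(np.arange(1, 11))     # roots 1, ..., 10 (the paper targets far higher degrees)
root = rayleigh_iteration(companion(coeffs), sigma=7.3 + 0.2j)
print(root)                            # converges to the root nearest the shift (here 7)
```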

Proceedings ArticleDOI
03 Aug 2009
TL;DR: Three algorithms for approximate factorization of univariate polynomials over Z are proposed; the first one uses sums of powers of roots (SPR method), the second one utilizes factor-differentiated polynomials (FD method), and the third one is a robust but slow method.
Abstract: We propose three algorithms for approximate factorization of univariate polynomials over Z; the first one uses sums of powers of roots (SPR method), the second one utilizes factor-differentiated polynomials (FD method), and the third one is a robust but slow method. The SPR method works well for monic polynomials but is almost useless for non-monic polynomials unless their leading coefficients are sufficiently small. The FD method is applicable to both monic and non-monic polynomials, but it also becomes useless if both the leading and the trailing coefficients become large. The third method is applicable to any polynomial that is approximately factorizable over Z, but it is slow. We discuss two types of polynomials which are ill-conditioned for root-finding: Wilkinson-type polynomials and polynomials with close roots. Furthermore, we briefly consider approximate factorization of multivariate polynomials over Z.
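The simplest, and slowest, idea of this kind (grouping numerical roots and rounding the resulting coefficients to integers) conveys what approximate factorization over Z means; the paper's SPR and FD methods are far more refined than this brute-force sketch:

```python
# Brute-force flavor of approximate factorization over Z: compute the roots
# numerically, try subsets of them, round the coefficients of the corresponding
# monic factor to integers, and keep candidates that divide the polynomial
# exactly.  The paper's SPR and FD methods are far more refined than this.
from itertools import combinations
import numpy as np
import sympy as sp

x = sp.symbols('x')

p_coeffs = [1, 0, -5, 0, 6]                  # x^4 - 5x^2 + 6 = (x^2 - 2)(x^2 - 3)
roots = np.roots(p_coeffs)
p = sp.Poly(p_coeffs, x)

found = set()
for size in range(1, len(roots)):
    for subset in combinations(range(len(roots)), size):
        cand = np.poly(roots[list(subset)])          # monic factor from chosen roots
        if np.max(np.abs(cand.imag)) > 1e-8:
            continue                                  # not conjugate-closed
        rounded = [int(round(c)) for c in cand.real]
        if np.max(np.abs(cand.real - rounded)) > 1e-8:
            continue                                  # coefficients are not near integers
        q = sp.Poly(rounded, x)
        if q.degree() >= 1 and sp.rem(p.as_expr(), q.as_expr(), x) == 0:
            found.add(q.as_expr())                    # exact divisibility over Z confirmed

print(found)                                          # expected: {x**2 - 2, x**2 - 3}
```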

Proceedings ArticleDOI
03 Aug 2009
TL;DR: The problem of computing with the resulting symbolic objects, and their usage in algorithms for the automatic analysis and verification of cyber-physical systems is discussed.
Abstract: Cyber-Physical Systems (CPS) are integrations of computation and physical processes. Nowadays, virtually every new consumer device or piece of industrial machinery has some form of integrated computation. Since such systems interact not only with each other but also with humans, their malfunction can endanger human life, and hence it is essential for them to work correctly. Important examples of properties that are used for specifying system correctness are safety (the system state always stays in a certain set considered to be safe) and progress (the system state will eventually reach some set considered to be desirable). It is important to notice that here we deal with nondeterministic systems: they do not possess a single initial state but an uncountable set of initial states, and for a given state, the further evolution of the system is not fixed; in general, there are uncountably many further evolutions. So, when we want to automatically verify the correctness of such systems, due to this non-determinism we need some form of global reasoning and a way of representing the above uncountable sets. In other words, we need symbolic computation. Considering the two aspects of CPS, computation and physical processes, the first aspect is based on computer programs, which are fixed abstract objects. Hence, for analyzing pure software systems, classical symbolic computation is the natural candidate. However, the second aspect, physical processes, is prone to perturbations, whose analysis is one of the main tasks of numerical analysis. As a consequence, for analyzing cyber-physical systems, we need global reasoning in the presence of perturbations, or in other words, symbolic-numeric computation. In the talk we will discuss the problem of computing with the resulting symbolic objects and their usage in algorithms for the automatic analysis and verification of cyber-physical systems. The talk will draw on joint work with Zhikun She, Tomas Dzetkulic and many others.