
Showing papers on "Square-free polynomial published in 2015"


Journal ArticleDOI
TL;DR: In this paper, the authors show that the least squares method is quasi-optimal in expectation in the univariate case, under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space.
Abstract: Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M. A. Davenport and D. Leviatan, Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any multivariate polynomial space (including tensor product, total degree, or hyperbolic cross), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, while also pointing out a gap between the condition necessary to achieve optimality in the theory and the condition that in practice yields the optimal convergence rate.
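To make the sampling setup concrete, here is a minimal sketch of discrete least-squares polynomial approximation on a downward-closed (total-degree) index set. This is our own illustration, not the authors' code: the target function, the uniform sampling measure, and the 10x oversampling factor are arbitrary choices for the demo.

```python
import numpy as np

# Downward-closed index set: total-degree space of order 3 in two variables.
index_set = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]

def design_matrix(points, index_set):
    """Evaluate the monomial basis x**i * y**j at each sample point."""
    return np.array([[x**i * y**j for (i, j) in index_set] for (x, y) in points])

rng = np.random.default_rng(0)
f = lambda x, y: np.exp(x) * np.cos(y)          # illustrative target function

# Oversample relative to the dimension of the polynomial space.
n_basis = len(index_set)
points = rng.uniform(-1.0, 1.0, size=(10 * n_basis, 2))
A = design_matrix(points, index_set)
b = f(points[:, 0], points[:, 1])

# Least-squares fit of the polynomial coefficients.
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare the approximation with the target at a test point.
print(design_matrix([(0.3, -0.2)], index_set) @ coeffs, f(0.3, -0.2))
```

The quasi-optimality results above concern precisely how large the number of samples (rows of `A`) must be relative to `n_basis` for this fit to be comparable to the best approximation in the space.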

124 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the parameters of a Gaussian mixture with a fixed number of components can be learned using a sample whose size is polynomial in the dimension and all other parameters.
Abstract: The question of polynomial learnability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learnability of Gaussian mixture distributions still remained open. The current work resolves the question of polynomial learnability for Gaussian mixtures in high dimension with an arbitrary fixed number of components. Specifically, we show that the parameters of a Gaussian mixture with a fixed number of components can be learned using a sample whose size is polynomial in dimension and all other parameters. The result on learning Gaussian mixtures relies on an analysis of distributions belonging to what we call polynomial families in low dimension. These families are characterized by their moments being polynomial in parameters and include almost all common probability distributions as well as their mixtures and products. Using...
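The "polynomial families" idea can be made concrete with a short SymPy sketch: the raw moments of a univariate Gaussian obey the recurrence $m_k = \mu m_{k-1} + (k-1)\sigma^2 m_{k-2}$, so the moments of a two-component mixture are polynomials in the weights, means, and variances. The symbols and the component count below are our illustrative choices.

```python
import sympy as sp

w, mu1, mu2, v1, v2 = sp.symbols('w mu1 mu2 v1 v2')   # weight, means, variances

def gaussian_moments(mu, var, kmax):
    """Raw moments of N(mu, var) via m_k = mu*m_{k-1} + (k-1)*var*m_{k-2}."""
    m = [sp.Integer(1), mu]
    for k in range(2, kmax + 1):
        m.append(sp.expand(mu * m[k - 1] + (k - 1) * var * m[k - 2]))
    return m

kmax = 4
m1 = gaussian_moments(mu1, v1, kmax)
m2 = gaussian_moments(mu2, v2, kmax)

# Moments of the two-component mixture: polynomials in all five parameters.
for k in range(1, kmax + 1):
    print(k, sp.expand(w * m1[k] + (1 - w) * m2[k]))
```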

32 citations


Journal ArticleDOI
TL;DR: A polynomial partitioning method with up to d polynomials in dimension d is provided, which allows for a complete decomposition of the given point set and is applied to obtain a new algorithm for the semialgebraic range searching problem.
Abstract: The polynomial partitioning method of Guth and Katz (arXiv:1011.4105) has numerous applications in discrete and computational geometry. It partitions a given n-point set $P \subset \mathbb{R}^d$ using the zero set Z(f) of a suitable d-variate polynomial f. Applications of this result are often complicated by the problem, "What should be done with the points of P lying within Z(f)?" A natural approach is to partition these points with another polynomial and continue further in a similar manner. So far this has been pursued with limited success: several authors managed to construct and apply a second partitioning polynomial, but further progress has been prevented by technical obstacles. We provide a polynomial partitioning method with up to d polynomials in dimension d, which allows for a complete decomposition of the given point set. We apply it to obtain a new algorithm for the semialgebraic range searching problem. Our algorithm has running time bounds similar to a recent algorithm by Agarwal et al. (SIAM J Comput 42:2039–2062, 2013), but it is simpler both conceptually and technically. While this paper was in preparation, Basu and Sombra, as well as Fox, Pach, Sheffer, Suk, and Zahl, obtained results concerning polynomial partitions which overlap with ours to some extent.

29 citations


Posted Content
TL;DR: A complexity analysis of an existing algorithm due to Gurvits (J Comput Syst Sci 69(3):448–484, 2004), who proved it runs in polynomial time for certain classes of inputs; the algorithm is extended to approximate capacity to any accuracy in polynomial time.
Abstract: In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over $\mathbb{Q}$ is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time (whether or not randomization is allowed). The algorithm efficiently solves the "word problem" for the free skew field, and the identity testing problem for arithmetic formulae with division over non-commuting variables, two problems which had only exponential-time algorithms prior to this work. The main contribution of this paper is a complexity analysis of an existing algorithm due to Gurvits, who proved it was polynomial time for certain classes of inputs. We prove it always runs in polynomial time. The main component of our analysis is a simple (given the necessary known tools) lower bound on the central notion of capacity of operators (introduced by Gurvits). We extend the algorithm to actually approximate capacity to any accuracy in polynomial time, and use this analysis to give quantitative bounds on the continuity of capacity (the latter is used in a subsequent paper on Brascamp-Lieb inequalities). Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, approximation of the permanent and naturally in non-commutative algebra. We provide a detailed account of some of these sources and their interconnections.
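The algorithm in question is an operator-scaling iteration: alternately left- and right-normalize the defining matrices until the associated map is nearly "doubly stochastic". Below is a rough NumPy sketch of that iteration under simplifying assumptions (fixed iteration budget, naive convergence test, well-conditioned normalizers); it is a cartoon of the method, not the paper's algorithm with its analysis or capacity estimates.

```python
import numpy as np

def inv_sqrt(M):
    """Inverse square root of a (well-conditioned) positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

def operator_scaling(As, iters=200, tol=1e-9):
    """Alternately normalize so that sum A A^T = I and sum A^T A = I.
    Convergence of the scaling is the invertibility criterion; this sketch
    assumes the normalizers stay nonsingular along the way."""
    As = [A.astype(float) for A in As]
    n = As[0].shape[0]
    for _ in range(iters):
        L = inv_sqrt(sum(A @ A.T for A in As))
        As = [L @ A for A in As]
        R = inv_sqrt(sum(A.T @ A for A in As))
        As = [A @ R for A in As]
        err = np.linalg.norm(sum(A @ A.T for A in As) - np.eye(n))
        if err < tol:
            return True, err      # scaling converged: declare invertible
    return False, err

# Toy input: the symbolic matrix x1*A1 + x2*A2 with A1 = I, A2 = diag(1, 2).
print(operator_scaling([np.eye(2), np.diag([1.0, 2.0])]))
```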

27 citations


Proceedings ArticleDOI
04 Jan 2015
TL;DR: It is shown that if every graph of a hereditary class F satisfies the property that it is possible to delete a bounded number of vertices such that every remaining component has size at most two, then F-Subgraph Test is solvable in randomized polynomial time and it is NP-hard otherwise.
Abstract: We study two fundamental problems related to finding subgraphs: (1) given graphs G and H, Subgraph Test asks if H is isomorphic to a subgraph of G; (2) given graphs G, H, and an integer t, Packing asks if G contains t vertex-disjoint subgraphs isomorphic to H. For every graph class F, let F-Subgraph Test and F-Packing be the special cases of the two problems where H is restricted to be in F. Our goal is to study which classes F make the two problems tractable in one of the following senses: (randomized) polynomial-time solvable; admits a polynomial (many-one) kernel (that is, has a polynomial-time preprocessing procedure that creates an equivalent instance whose size is polynomially bounded by the size of the solution); or admits a polynomial Turing kernel (that is, has an adaptive polynomial-time procedure that reduces the problem to a polynomial number of instances, each of which has size bounded polynomially by the size of the solution). To obtain a more robust setting, we restrict our attention to hereditary classes F. It is known that if every component of every graph in F has at most two vertices, then F-Packing is polynomial-time solvable, and NP-hard otherwise. We identify a simple combinatorial property (every component of every graph in F either has bounded size or is a bipartite graph with one of the sides having bounded size) such that if a hereditary class F has this property, then F-Packing admits a polynomial kernel, and has no polynomial (many-one) kernel otherwise, unless the polynomial hierarchy collapses. Furthermore, if F does not have this property, then F-Packing is either WK[1]-hard, W[1]-hard, or Long Path-hard, giving evidence that it does not admit polynomial Turing kernels either. For F-Subgraph Test, we show that if every graph of a hereditary class F satisfies the property that it is possible to delete a bounded number of vertices such that every remaining component has size at most two, then F-Subgraph Test is solvable in randomized polynomial time and it is NP-hard otherwise. We introduce a combinatorial property called (a, b, c, d)-splittability and show that if every graph in a hereditary class F has this property, then F-Subgraph Test admits a polynomial Turing kernel and it is WK[1]-hard, W[1]-hard, or Long Path-hard otherwise. We do not give a complete characterization of the cases when F-Subgraph Test admits polynomial many-one kernels, but show examples indicating that this question is much more fragile than the characterization for Turing kernels.

26 citations


Journal ArticleDOI
TL;DR: An algorithm based on a Hamiltonian approach is provided for finding the solution of the state-dependent algebraic equation in the infinite-time case, and it is demonstrated that this solution leads the game to an ε- or quasi-equilibrium and provides an upper bound for this ε quantity.

24 citations


Book ChapterDOI
01 Jan 2015
TL;DR: A sequence of quasi-optimal best n-term sets is built to approximate multivariate functions that feature strong anisotropy in moderately high dimensions; the construction relies on a greedy selection of basis functions that preserves the downward closedness property of the polynomial approximation space.
Abstract: We address adaptive multivariate polynomial approximation by means of the discrete least-squares method with random evaluations, to approximate in the L2 probability sense a smooth function depending on a random variable distributed according to a given probability density. The polynomial least-squares approximation is computed using random noiseless pointwise evaluations of the target function. Here noiseless means that the pointwise evaluation of the function is not polluted by the presence of noise. Recent works Migliorati et al. (Found Comput Math 14:419–456, 2014), Cohen et al. (Found Comput Math 13:819–834, 2013), and Chkifa et al. (Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs, EPFL MATHICSE report 35/2013, submitted) have analyzed the univariate and multivariate cases, providing error estimates for (a priori) given sequences of polynomial spaces. In the present work, we apply the results developed in the aforementioned analyses to devise adaptive least-squares polynomial approximations. We build a sequence of quasi-optimal best n-term sets to approximate multivariate functions that feature strong anisotropy in moderately high dimensions. The adaptive approximation relies on a greedy selection of basis functions, which preserves the downward closedness property of the polynomial approximation space. Numerical results show that the adaptive approximation is able to capture effectively the anisotropy in the function.
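Downward closedness, the structural property the greedy selection must preserve, is simple to state in code. The following small sketch (our illustration) checks the property and computes the margin of admissible indices from which a greedy adaptive scheme would pick:

```python
def is_downward_closed(index_set):
    """A set of multi-indices is downward closed if subtracting 1 from any
    positive coordinate of any member stays inside the set."""
    S = set(index_set)
    return all(tuple(n - (i == k) for i, n in enumerate(nu)) in S
               for nu in S for k in range(len(nu)) if nu[k] > 0)

def margin(index_set):
    """Indices outside the set whose addition keeps it downward closed:
    the candidates a greedy adaptive scheme selects from."""
    S = set(index_set)
    dim = len(next(iter(S)))
    cands = {tuple(n + (i == k) for i, n in enumerate(nu))
             for nu in S for k in range(dim)} - S
    return {mu for mu in cands if is_downward_closed(S | {mu})}

Lam = {(0, 0), (1, 0), (0, 1), (2, 0)}
print(is_downward_closed(Lam))    # True
print(sorted(margin(Lam)))        # [(0, 2), (1, 1), (3, 0)]
```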

22 citations


Journal ArticleDOI
TL;DR: The factorization problem is surveyed, discussing the algorithmic ideas as well as the applications to other problems, and the challenges ahead are discussed, in particular focusing on the goal of obtaining deterministic factoring algorithms.
Abstract: Algebraic complexity theory studies the complexity of computing (multivariate) polynomials efficiently using algebraic circuits. This succinct representation leads to fundamental algorithmic challenges such as the polynomial identity testing (PIT) problem (decide nonzeroness of the computed polynomial) and the polynomial factorization problem (compute succinct representations of the factors of the circuit). While the Schwartz-Zippel-DeMillo-Lipton Lemma [Sch80, Zip79, DL78] gives an easy randomized algorithm for PIT, randomized algorithms for factorization require more ideas as given by Kaltofen [Kal89]. However, even derandomizing PIT remains a fundamental problem in understanding the power of randomness. In this column, we survey the factorization problem, discussing the algorithmic ideas as well as the applications to other problems. We then discuss the challenges ahead, in particular focusing on the goal of obtaining deterministic factoring algorithms. While deterministic PIT algorithms have been developed for various restricted circuit classes, there are very few corresponding factoring algorithms. We discuss some recent progress on the divisibility testing problem (test if a given polynomial divides another given polynomial) which captures some of the difficulty of factoring. Along the way we attempt to highlight key challenges whose solutions we hope will drive progress in the area.
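The randomized baseline for PIT that the column builds on is the Schwartz-Zippel lemma: a nonzero $n$-variate polynomial of total degree $d$ vanishes at a uniformly random point of $S^n$ with probability at most $d/|S|$. A minimal black-box sketch (our illustration, with an arbitrary trial count):

```python
import random

def is_probably_zero(poly, n_vars, degree, trials=20):
    """Black-box PIT via Schwartz-Zippel: sample points from a set S with
    |S| >= 2*degree, so each trial errs with probability < 1/2 if poly != 0."""
    S = range(2 * degree + 1)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n_vars)]
        if poly(point) != 0:
            return False           # witness found: certainly nonzero
    return True                    # identically zero with high probability

# (x + y)**2 - (x**2 + 2*x*y + y**2) is identically zero.
p = lambda v: (v[0] + v[1])**2 - (v[0]**2 + 2*v[0]*v[1] + v[1]**2)
print(is_probably_zero(p, n_vars=2, degree=2))    # True
```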

21 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the roots of the characteristic polynomials of certain finite lattices are all nonnegative integers, and that quotients of posets can be used to explain this factorization.

21 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that a non-degenerate quadratic space has an isometry with minimal polynomial f if and only if such an isometry exists over all the completions of k. This gives a partial answer to a question of Milnor.
Abstract: Let k be a global field of characteristic not 2, and let f ∈ k[X] be an irreducible polynomial. We show that a non-degenerate quadratic space has an isometry with minimal polynomial f if and only if such an isometry exists over all the completions of k. This gives a partial answer to a question of Milnor.

20 citations


Journal ArticleDOI
TL;DR: The efficiency of the algorithm is demonstrated by exhibiting a better polynomial than the one used for the factorization of RSA-768, and a polynomial for RSA-1024 that outperforms the best published one.
Abstract: The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the selected polynomials can be modelled in terms of size and root properties. We propose a new kind of polynomials for GNFS: with a new degree of freedom, we further improve the size property. We demonstrate the efficiency of our algorithm by exhibiting a better polynomial than the one used for the factorization of RSA-768, and a polynomial for RSA-1024 that outperforms the best published one.
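For orientation, polynomial selection in GNFS starts from the classical base-$m$ construction, which later methods (including the one proposed here) refine: pick $m \approx N^{1/(d+1)}$ and read the coefficients of $f$ off the base-$m$ digits of $N$, so that $f(m) = N \equiv 0 \pmod N$. The sketch below is that textbook construction, not the paper's new polynomial family:

```python
def base_m_poly(N, d):
    """Classical base-m selection: a degree-d polynomial f with f(m) = N,
    paired in GNFS with the linear polynomial g(x) = x - m."""
    m = int(N ** (1.0 / (d + 1))) + 1     # ensures m**(d+1) > N
    coeffs, n = [], N
    for _ in range(d + 1):
        n, c = divmod(n, m)
        coeffs.append(c)                  # coeffs[i] is the coefficient of x**i
    assert n == 0 and sum(c * m**i for i, c in enumerate(coeffs)) == N
    return coeffs, m

coeffs, m = base_m_poly(2**64 + 1, d=4)
print(m, coeffs)
```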

Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where f is a Lie polynomial, describe all the possible images of f in $M_2(K)$, provide an arithmetic criterion for the case where the image is the set of non-nilpotent trace zero matrices together with 0, and show that the standard polynomial $s_k$ is not a Lie polynomial for $k > 2$.
Abstract: Kaplansky asked about the possible images of a polynomial $f$ in several noncommuting variables. In this paper we consider the case of $f$ a Lie polynomial. We describe all the possible images of $f$ in $M_2(K)$ and provide an example of $f$ whose image is the set of non-nilpotent trace zero matrices, together with 0. We provide an arithmetic criterion for this case. We also show that the standard polynomial $s_k$ is not a Lie polynomial, for $k>2.$
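The simplest Lie polynomial already shows why trace-zero matrices appear: the commutator $[x, y] = xy - yx$ always has trace zero in $M_2(K)$, since $\mathrm{tr}(XY) = \mathrm{tr}(YX)$. A quick numerical illustration (ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    X, Y = rng.normal(size=(2, 2, 2))      # two random 2x2 matrices
    C = X @ Y - Y @ X                      # the commutator [X, Y]
    print(np.trace(C))                     # 0 up to rounding: tr(XY) = tr(YX)
```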

Book ChapterDOI
09 Sep 2015
TL;DR: While abstract interpretation is not theoretically restricted to specific kinds of properties, it is in practice mainly developed to compute linear over-approximations of reachable sets, and the verification of user-provided properties is not easily compatible with the usual forward fixpoint computation using numerical abstract domains.
Abstract: While abstract interpretation is not theoretically restricted to specific kinds of properties, it is, in practice, mainly developed to compute linear over-approximations of reachable sets, a.k.a. the collecting semantics of the program. The verification of user-provided properties is not easily compatible with the usual forward fixpoint computation using numerical abstract domains.

Proceedings ArticleDOI
06 Jul 2015
TL;DR: This work proposes a more model-theoretic formalism, called polynomial-time interpretation logic (PIL), that replaces the machinery of hereditarily finite sets and comprehension terms by traditional first-order interpretations, and handles counting by Härtig quantifiers.
Abstract: Choiceless Polynomial Time (CPT) is one of the candidates in the quest for a logic for polynomial time. It is a strict extension of fixed-point logic with counting, but to date the question is open whether it expresses all polynomial-time properties of finite structures. We present here alternative characterisations of Choiceless Polynomial Time (with and without counting) based on iterated first-order interpretations. The fundamental mechanism of Choiceless Polynomial Time is the manipulation of hereditarily finite sets over the input structure by means of set-theoretic operations and comprehension terms. While this is very convenient and powerful for the design of abstract computations on structures, it makes the analysis of the expressive power of CPT rather difficult. We aim to reduce this functional framework operating on higher-order objects to an approach that evaluates formulae on less complex objects. We propose a more model-theoretic formalism, called polynomial-time interpretation logic (PIL), that replaces the machinery of hereditarily finite sets and comprehension terms by traditional first-order interpretations, and handles counting by Härtig quantifiers. In our framework, computations on finite structures are captured by iterations of interpretations, and a run is a sequence of states, each of which is a finite structure of a fixed vocabulary. Our main result is that PIL has precisely the same expressive power as Choiceless Polynomial Time. We also analyse the structure of PIL and show that many of the logical formalisms or database languages that have been proposed in the quest for a logic for polynomial time reappear as fragments of PIL, obtained by restricting interpretations in a natural way (e.g., by omitting congruences or using only one-dimensional interpretations).

Journal ArticleDOI
TL;DR: This work considers the naive bottom-up concatenation scheme for a context-free language and shows that this scheme has the incremental polynomial time property, which means that all members of the language can be enumerated without duplicates so that the time between two consecutive outputs is bounded by a polynomial in the number of strings already generated.
Abstract: We consider the naive bottom-up concatenation scheme for a context-free language and show that this scheme has the incremental polynomial time property. This means that all members of the language can be enumerated without duplicates so that the time between two consecutive outputs is bounded by a polynomial in the number of strings already generated.
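A minimal sketch of the naive bottom-up scheme (our toy illustration of the idea, not the paper's construction): given terminal rules and binary concatenation rules, each round concatenates already-derived strings and emits only strings not produced before.

```python
def enumerate_cfl(terminal_rules, binary_rules, start, rounds):
    """Bottom-up enumeration without duplicates.
    terminal_rules: {nonterminal: set of terminal strings}
    binary_rules:   {nonterminal: list of (B, C) right-hand sides}"""
    lang = {A: set(ws) for A, ws in terminal_rules.items()}
    emitted = set(lang.get(start, set()))
    yield from sorted(emitted)
    for _ in range(rounds):
        new = {A: set() for A in binary_rules}
        for A, rhss in binary_rules.items():
            for B, C in rhss:
                for u in lang.get(B, ()):
                    for v in lang.get(C, ()):
                        w = u + v
                        if w not in lang.get(A, set()):
                            new[A].add(w)
        for A, ws in new.items():
            lang.setdefault(A, set()).update(ws)
        fresh = lang.get(start, set()) - emitted
        emitted |= fresh
        yield from sorted(fresh)

# Toy grammar: S -> S S with base string "ab", generating (ab)^n.
print(list(enumerate_cfl({'S': {'ab'}}, {'S': [('S', 'S')]}, 'S', rounds=3)))
```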

Book ChapterDOI
14 Sep 2015
TL;DR: In this article, the authors describe the design and implementation of a web interface and reflect on the application of polynomial homotopy continuation methods to solve polynomial systems in the cloud.
Abstract: Polynomial systems occur in many fields of science and engineering. Polynomial homotopy continuation methods apply symbolic-numeric algorithms to solve polynomial systems. We describe the design and implementation of our web interface and reflect on the application of polynomial homotopy continuation methods to solve polynomial systems in the cloud. Via the graph isomorphism problem we organize and classify the polynomial systems we solved. The classification with the canonical form of a graph identifies newly submitted systems with systems that have already been solved.

Journal ArticleDOI
TL;DR: In this article, the authors establish some inequalities concerning the polar derivative of polynomials having all their zeros inside or outside a unit circle, and thereby present some compact generalizations of certain well-known polynomial inequalities.
Abstract: In this paper we establish some inequalities concerning the polar derivative of polynomials having all their zeros inside or outside a unit circle and thereby present some compact generalizations of certain well-known polynomial inequalities.
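For context: for a polynomial $p(z)$ of degree $n$ and a point $\alpha$, the polar derivative is $D_\alpha p(z) = n\,p(z) + (\alpha - z)\,p'(z)$. It has degree at most $n - 1$, and $D_\alpha p(z)/\alpha \to p'(z)$ as $\alpha \to \infty$, which is why results on $D_\alpha p$ compactly generalize classical derivative inequalities such as Bernstein's.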

Journal ArticleDOI
TL;DR: The main contribution in this paper is proposing an optimization method based on functional decomposition of multivariate polynomials in the form $f(x) = g(h(x)) + f_0$ to obtain good building blocks, and vanishing polynomials over $Z_{2^m}$ to add/delete redundancy to/from given polynomial functions to extract further common sub-expressions.
Abstract: This paper concentrates on high-level data-flow optimization and synthesis techniques for datapath intensive designs such as those in Digital Signal Processing (DSP), computer graphics and embedded systems applications, which are modeled as polynomial computations from $Z_{2^{n_1}} \times Z_{2^{n_2}} \times \cdots \times Z_{2^{n_d}}$ to $Z_{2^m}$. Our main contribution in this paper is proposing an optimization method based on functional decomposition of multivariate polynomials in the form $f(x) = (g \circ h)(x) + f_0 = g(h(x)) + f_0$ to obtain good building blocks, and vanishing polynomials over $Z_{2^m}$ to add/delete redundancy to/from given polynomial functions to extract further common sub-expressions. Experimental results for combinational implementation of the designs have shown an average saving of 38.85 and 18.85 percent in the number of gates and critical path delay, respectively, compared with the state-of-the-art techniques. Regarding the comparison with our previous works, the area and delay are improved by 10.87 and 11.22 percent, respectively. Furthermore, experimental results of sequential implementations have shown an average saving of 39.26 and 34.70 percent in the area and the latency, respectively, compared with the state-of-the-art techniques.
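The vanishing polynomials mentioned here rest on an elementary fact: $x(x+1)\cdots(x+k-1) = k!\binom{x+k-1}{k}$, so whenever $2^m$ divides $k!$ the product vanishes identically over $Z_{2^m}$. A quick check of that fact (our illustration; the paper's constructions are more refined):

```python
from math import factorial

def vanishes_mod(k, m):
    """Check that f(x) = x(x+1)...(x+k-1) is identically 0 over Z_{2^m}.
    It equals k! * comb(x+k-1, k), so it vanishes when 2^m divides k!."""
    mod = 2 ** m
    for x in range(mod):
        prod = 1
        for i in range(k):
            prod = prod * (x + i) % mod
        if prod != 0:
            return False
    return True

m, k = 4, 6                    # 6! = 720 = 16 * 45, so 2^4 divides 6!
assert factorial(k) % 2 ** m == 0
print(vanishes_mod(k, m))      # True
```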

Book ChapterDOI
24 Aug 2015
TL;DR: This work has shown that there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items.
Abstract: Kernelization is a formalization of efficient preprocessing for \(\mathsf{NP}\)-hard problems using the framework of parameterized complexity. Among open problems in kernelization it has been asked many times whether there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items.

Proceedings ArticleDOI
17 Oct 2015
TL;DR: It is shown that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long-standing open problem in formal language theory.
Abstract: We show that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long-standing open problem in formal language theory. We also present efficient algorithms for subclasses: polynomial time for total transducers with unary output alphabet (over a given top-down regular domain language), and co-randomized polynomial time for linear transducers; these results are obtained using techniques from multi-linear algebra. For our main result, we prove that equivalence can be certified by means of inductive invariants using polynomial ideals. This allows us to construct two semi-algorithms, one searching for a proof of equivalence, one for a witness of non-equivalence.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: The algorithm can be seen as an extension of the usual rules of first-order unification and can be used to solve related problems in polynomial time, such as first- order unification of two terms that tolerates one clash.
Abstract: One Context Unification (1CU) extends first-order unification by introducing a single context variable. This problem was recently shown to be in NP, but it is not known to be solvable in polynomial time. We show that the case of 1CU where the context variable occurs at most twice in the input (1CU2r) is solvable in polynomial time. Moreover, a polynomial representation of all solutions can also be computed in polynomial time. The 1CU2r problem is important as it is used as a subroutine in polynomial time algorithms for several more general classes of 1CU problems. Our algorithm can be seen as an extension of the usual rules of first-order unification and can be used to solve related problems in polynomial time, such as first-order unification of two terms that tolerates one clash. All our results assume that the input terms are represented as Directed Acyclic Graphs.

Journal ArticleDOI
TL;DR: In this article, the class of polynomial functions which are barycentrically associative over an infinite commutative integral domain is described.
Abstract: We describe the class of polynomial functions which are barycentrically associative over an infinite commutative integral domain.


Journal ArticleDOI
01 Jan 2015
TL;DR: A new simplified version of this algorithm is described, which entails a lower computational cost; it uses linear test polynomials, which not only reduce the computational burden but can also provide good estimates and deterministic bounds on the number of operations needed for factoring.
Abstract: The paper presents a careful analysis of the Cantor-Zassenhaus polynomial factorization algorithm, thus obtaining tight bounds on the performances, and proposing useful improvements. In particular, a new simplified version of this algorithm is described, which entails a lower computational cost. The key point is to use linear test polynomials, which not only reduce the computational burden, but can also provide good estimates and deterministic bounds of the number of operations needed for factoring. Specifically, the number of attempts needed to factor a given polynomial, and the least degree of a polynomial such that a factor is found with at most a fixed number of attempts, are computed. Interestingly, the results obtained demonstrate the existence of some sort of duality relationship between these two problems.
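For context, the heart of Cantor-Zassenhaus is equal-degree splitting: for a monic squarefree $f$ that is a product of irreducibles of degree $d$ over $F_p$ ($p$ odd), a random $a$ satisfies $a^{(p^d-1)/2} \equiv \pm 1$ modulo each factor, so $\gcd(f, a^{(p^d-1)/2} - 1)$ is a proper factor with probability about $1/2$. A self-contained sketch of that classical step (not the paper's improved variant with linear test polynomials):

```python
import random

# Polynomials over F_p as coefficient lists, lowest degree first.

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def polyrem(f, h, p):
    """Remainder of f modulo h over F_p (h nonzero)."""
    f = trim(f[:])
    inv = pow(h[-1], -1, p)
    while len(f) >= len(h):
        c = f[-1] * inv % p
        shift = len(f) - len(h)
        for i, b in enumerate(h):
            f[shift + i] = (f[shift + i] - c * b) % p
        f = trim(f)
    return f

def polymulmod(f, g, h, p):
    """(f * g) mod h over F_p."""
    if not f or not g:
        return []
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = (r[i + j] + a * b) % p
    return polyrem(r, h, p)

def polypowmod(f, e, h, p):
    result = [1]
    while e:
        if e & 1:
            result = polymulmod(result, f, h, p)
        f = polymulmod(f, f, h, p)
        e >>= 1
    return result

def polygcd(f, g, p):
    """Monic gcd over F_p."""
    f, g = trim(f[:]), trim(g[:])
    while g:
        f, g = g, polyrem(f, g, p)
    inv = pow(f[-1], -1, p)
    return [c * inv % p for c in f]

def cz_split(f, d, p):
    """One equal-degree splitting attempt; returns a proper factor or None."""
    n = len(f) - 1
    a = trim([random.randrange(p) for _ in range(n)])
    if not a:
        return None
    b = polypowmod(a, (p ** d - 1) // 2, f, p)   # a^((p^d - 1)/2) mod f
    if b:
        b = b[:]
        b[0] = (b[0] - 1) % p                    # b - 1
        b = trim(b)
    if not b:
        return None
    g = polygcd(f, b, p)
    return g if 0 < len(g) - 1 < n else None

p, d = 3, 2
f = [2, 1, 0, 1, 1]          # (x^2 + 1)(x^2 + x + 2) over F_3, low degree first
g = None
while g is None:             # expected ~2 attempts per success
    g = cz_split(f, d, p)
print(g)                     # [1, 0, 1] or [2, 1, 1], i.e. x^2+1 or x^2+x+2
```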

Proceedings ArticleDOI
24 Aug 2015
TL;DR: In this paper, the authors describe a massively parallel predictor-corrector algorithm to track many solution paths of a polynomial homotopy, which combines the reverse mode of algorithmic differentiation with double double and quad double arithmetic.
Abstract: Polynomial systems occur in many areas of science and engineering. Unlike general nonlinear systems, the algebraic structure makes it possible to compute all solutions of a polynomial system. We describe our massively parallel predictor-corrector algorithms to track many solution paths of a polynomial homotopy. The data parallelism that provides the speedups stems from the evaluation and differentiation of the monomials in the same polynomial system at different data points, which are the points on the solution paths. Polynomial homotopies that have tens of thousands of solution paths can keep a sufficiently large number of threads occupied. Our accelerated code combines the reverse mode of algorithmic differentiation with double double and quad double arithmetic to compute more accurate results faster.
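A single solution path illustrates the predictor-corrector idea in its simplest form: an Euler predictor along the Davidenko ODE $H_x \dot{x} + H_t = 0$, followed by Newton corrections at the new $t$. The univariate, serial sketch below is our illustration of the principle; the paper's contribution is the massively parallel, multi-precision version.

```python
import numpy as np

def track_path(f, df, g, dg, x0, steps=100, newton_iters=5):
    """Track one solution path of the homotopy H(x, t) = (1 - t)*g(x) + t*f(x)
    from a root x0 of g at t = 0 to a root of f at t = 1."""
    x, ts = complex(x0), np.linspace(0.0, 1.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        H_x = (1 - t0) * dg(x) + t0 * df(x)
        H_t = f(x) - g(x)
        x = x - (t1 - t0) * H_t / H_x          # Euler predictor: dx/dt = -H_t/H_x
        for _ in range(newton_iters):          # Newton corrector at t = t1
            x -= ((1 - t1) * g(x) + t1 * f(x)) / ((1 - t1) * dg(x) + t1 * df(x))
    return x

# Target f(x) = x^2 - 2, start system g(x) = x^2 - 1 with known root x0 = 1.
f, df = lambda x: x**2 - 2, lambda x: 2*x
g, dg = lambda x: x**2 - 1, lambda x: 2*x
print(track_path(f, df, g, dg, x0=1.0))        # ~1.41421356 (sqrt(2))
```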

Journal ArticleDOI
TL;DR: In this article, the authors introduce the univariable and multivariable fractional polynomial model and highlight important aspects of their construction, including functional tables and functional plots.
Abstract: The fractional polynomial regression model is an emerging tool in applied research. Overcoming inherent problems associated with a polynomial expansion and splines, fractional polynomial models provide an alternate approach for modeling nonlinear relationships. In this article, we introduce the univariable and multivariable fractional polynomial model and highlight important aspects of their construction. Because of the curvilinear nature of fractional polynomial models, functional tables and functional plots are emphasized for model interpretation. We present two examples to illustrate fractional polynomial models for their selection and interpretation in applied research. WIREs Comput Stat 2015, 7:275–283. doi: 10.1002/wics.1355
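A first-degree fractional polynomial (FP1) model searches a small conventional set of powers and fits $y \approx \beta_0 + \beta_1 x^p$ for each, keeping the best fit (with $x^0$ read as $\log x$). A minimal sketch using least squares (our illustration; the data and noise level are synthetic):

```python
import numpy as np

FP_POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # the conventional FP1 power set

def fp1_basis(x, p):
    """First-degree fractional polynomial transform; power 0 means log(x)."""
    return np.log(x) if p == 0 else x ** float(p)

def fit_fp1(x, y):
    """Fit y ~ b0 + b1 * x^p for each candidate power and keep the best
    in residual sum of squares (x must be positive)."""
    best = None
    for p in FP_POWERS:
        X = np.column_stack([np.ones_like(x), fp1_basis(x, p)])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(rss[0]) if len(rss) else float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return best

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 5.0, 200)
y = 1.0 + 2.0 * np.log(x) + rng.normal(0, 0.05, 200)   # true power p = 0
rss, p, beta = fit_fp1(x, y)
print(p, beta)    # expect p = 0 and coefficients near (1, 2)
```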

Journal Article
TL;DR: In this paper, the complexity of factorization of polynomials in the free noncommutative ring was studied and it was shown that variable-disjoint factorization is polynomial-time equivalent to polynomial identity testing.
Abstract: In this paper we study the complexity of factorization of polynomials in the free noncommutative ring \(\mathbb {F}\langle x_1,x_2,\ldots ,x_n \rangle \) of polynomials over the field \(\mathbb {F}\) and noncommuting variables \(x_1,x_2,\ldots ,x_n\). Our main results are the following: Although \(\mathbb {F}\langle x_1,\ldots ,x_n \rangle \) is not a unique factorization ring, we note that variable-disjoint factorization in \(\mathbb {F}\langle x_1,\ldots ,x_n \rangle \) has the uniqueness property. Furthermore, we prove that computing the variable-disjoint factorization is polynomial-time equivalent to Polynomial Identity Testing (both when the input polynomial is given by an arithmetic circuit or an algebraic branching program). We also show that variable-disjoint factorization in the black-box setting can be efficiently computed (where the factors computed will be also given by black-boxes, analogous to the work [12] in the commutative setting). As a consequence of the previous result we show that homogeneous noncommutative polynomials and multilinear noncommutative polynomials have unique factorizations in the usual sense, which can be efficiently computed. Finally, we discuss a polynomial decomposition problem in \(\mathbb {F}\langle x_1,\ldots ,x_n \rangle \) which is a natural generalization of homogeneous polynomial factorization and prove some complexity bounds for it.

Journal ArticleDOI
TL;DR: In this article, it was shown that there is neither a universal non-compact polynomial nor a universal nonsmooth non-unconditionally converging polynomial between Banach spaces.

Proceedings ArticleDOI
17 Jun 2015
TL;DR: This work generalizes the main factorization theorem from Dvir et al.
Abstract: In [8], Kaltofen proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen's work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in [10], and the question of bounding the depth of the circuits computing the factors of a polynomial. We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if P(x1,..., xn) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor f(x1,..., xn) of P(x1,..., xn) can be computed by a formula of size poly((rn)^r, s) and depth d + 5. This partially answers the question above posed in [10], that asked if this result holds without the exponential dependence on r. Our work generalizes the main factorization theorem from Dvir et al. [2], who proved it for the special case when the factors are of the form f(x1,..., xn) = xn − g(x1,..., xn−1). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).