
Showing papers on "Average-case complexity published in 1996"


Journal ArticleDOI
TL;DR: The purpose of this paper is to give proofs for the upper and lower bounds on several versions of Kolmogorov complexity that were announced without proof in an earlier paper by the first author, which appeared in Watanabe's book [23].
Abstract: There are several sorts of Kolmogorov complexity, better to say several Kolmogorov complexities: decision complexity, simple complexity, prefix complexity, monotone complexity, a priori complexity. The last three can and the first two cannot be used for defining randomness of an infinite binary sequence. All those five versions of Kolmogorov complexity were considered, from a unified point of view, in a paper by the first author which appeared in Watanabe's book [23]. Upper and lower bounds for those complexities and also for their differences were announced in that paper without proofs. (Some of those bounds are mentioned in Section 4.4.5 of [16].) The purpose of this paper (which can be read independently of [23]) is to give proofs for the bounds from [23]. The terminology used in this paper is somewhat nonstandard: we call "Kolmogorov entropy" what is usually called "Kolmogorov complexity." This is a Moscow tradition suggested by Kolmogorov himself. By this tradition the term "complexity" relates to any mode of description and "entropy" is the complexity related to an optimal mode (i.e., to a mode that, roughly speaking, gives the shortest descriptions).
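As a reading aid, the convention described above can be written out in standard notation (our sketch; the paper's own definitions may differ in detail): complexity is relative to a description mode f, and "entropy" is complexity relative to an optimal mode.

```latex
\[
  K_f(x) \;=\; \min\{\, |p| \;:\; f(p) = x \,\},
  \qquad
  K(x) \;=\; K_{f_0}(x),
\]
\[
  \text{where the mode } f_0 \text{ is optimal: }
  \forall f\ \exists c_f\ \forall x:\; K_{f_0}(x) \,\le\, K_f(x) + c_f .
\]
```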

77 citations


Journal ArticleDOI
TL;DR: In this paper, the Kolmogorov complexity and instance complexity of recursively enumerable (r.e.) sets are studied; the well-known 2 log n upper bound on the Kolmogorov complexity of initial segments of r.e. sets is shown to be optimal, and the Turing degrees of the r.e. sets which attain this bound are characterized.
Abstract: The way in which Kolmogorov complexity and instance complexity affect properties of recursively enumerable (r.e.) sets is studied. The well-known $2\log n$ upper bound on the Kolmogorov complexity of initial segments of r.e. sets is shown to be optimal, and the Turing degrees of r.e. sets which attain this bound are characterized. The main part of the paper is concerned with instance complexity, introduced by Ko, Orponen, Schöning, and Watanabe in 1986 as a measure of the complexity of individual instances of a decision problem. They conjectured that for every r.e. nonrecursive set, the instance complexity is infinitely often at least as high as the Kolmogorov complexity. The conjecture is refuted by constructing an r.e. nonrecursive set with instance complexity logarithmic in the Kolmogorov complexity. This bound is optimal up to an additive constant. In the other extreme, the conjecture is established for many classes of complete sets, such as weak-truth-table-complete (wtt-complete) and Q-complete sets. However, there is a Turing-complete set for which it fails.
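As a reading aid, hedged textbook-style forms of the two measures being compared (our notation; the paper works in the time-unbounded, recursion-theoretic setting):

```latex
\[
  C(x) \;=\; \min\{\, |p| : U(p) = x \,\},
  \qquad
  \mathrm{ic}(x\!:\!A) \;=\; \min\{\, |M| : M \text{ never errs about membership in } A
  \text{ and } M(x) \text{ halts} \,\},
\]
\[
  \text{and the (here refuted) conjecture asserts: }
  \exists c \;\; \mathrm{ic}(x\!:\!A) \ge C(x) - c \ \text{ for infinitely many } x .
\]
```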

52 citations


Book ChapterDOI
25 Aug 1996

36 citations


Journal ArticleDOI
TL;DR: A model for the analysis of algorithms on graphs given by vertex-expansion procedures is introduced, based on previously studied concepts of "succinct representation" techniques; it allows proofs of PSPACE-completeness or EXPTIME-completeness for specific, natural problems on implicit graphs, such as those solved by A*, AO*, and other best-first search strategies.

27 citations


Journal ArticleDOI
TL;DR: A new path consistency algorithm, PC-5, is presented, which has an O(n³a²) space complexity while retaining the worst-case time complexity of PC-4 and exhibits a much better average-case time complexity.
Abstract: One of the main factors limiting the use of path consistency algorithms in real life applications is their high space complexity. Han and Lee proposed a path consistency algorithm, PC-4, with O(n³a³) space complexity, which makes it practicable only for small problems. I present a new path consistency algorithm, PC-5, which has an O(n³a²) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to Bessiere) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for a new support only if the current support is eliminated. I also show that PC-5 can be improved further to yield an algorithm, PC5++, with even better average-case performance and the same space complexity.
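The "single support" idea credited to Bessiere above is easiest to see in its original arc-consistency form (AC-6), which PC-5 lifts to path consistency. Below is a minimal, illustrative Python sketch of that principle, assuming a simple dict-based constraint representation; none of the names come from the paper. Each value keeps one current support, and a new one is sought, resuming where the previous search stopped, only when that support is deleted.

```python
from collections import defaultdict

def ac6(domains, constraints):
    """domains: {var: iterable of values}; constraints: {(x, y): pred(a, b)}.
    Returns the arc-consistent domains, seeking one support per value at a time."""
    dom = {x: sorted(vs) for x, vs in domains.items()}
    alive = {x: set(vs) for x, vs in domains.items()}
    S = defaultdict(list)        # S[(y, b)]: values whose current support is b
    queue = []                   # deleted (var, value) pairs still to propagate

    def check(x, a, y, b):       # does value b of y support value a of x?
        if (x, y) in constraints:
            return constraints[(x, y)](a, b)
        return constraints[(y, x)](b, a)

    def seek(x, a, y, start):    # resume support search in dom[y] at index start
        for i in range(start, len(dom[y])):
            b = dom[y][i]
            if b in alive[y] and check(x, a, y, b):
                S[(y, b)].append((x, a, i))   # record the one current support
                return True
        return False

    neigh = defaultdict(set)
    for (x, y) in constraints:
        neigh[x].add(y)
        neigh[y].add(x)

    for x in dom:                # initialization: one support per value and arc
        for y in neigh[x]:
            for a in list(alive[x]):
                if a in alive[x] and not seek(x, a, y, 0):
                    alive[x].discard(a)
                    queue.append((x, a))

    while queue:                 # propagation: re-seek only the lost supports
        y, b = queue.pop()
        for (x, a, i) in S.pop((y, b), []):
            if a in alive[x] and not seek(x, a, y, i + 1):
                alive[x].discard(a)
                queue.append((x, a))
    return alive

# Example: x < y prunes 3 from x and 1 from y.
print(ac6({"x": [1, 2, 3], "y": [1, 2, 3]}, {("x", "y"): lambda a, b: a < b}))
```

Because each search resumes at the stored index instead of restarting, every (value, arc) pair scans each candidate support at most once overall, which is exactly the bookkeeping that buys PC-5 its better average case.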

27 citations


Proceedings ArticleDOI
27 Jul 1996
TL;DR: A functional framework for descriptive computational complexity in which the Regular, First-order, Ptime, Pspace, k-Exptime, k-Expspace (k ≥ 1), and Elementary sets have syntactic characterizations, with typed lambda terms representing inputs and outputs as well as programs.
Abstract: We present a functional framework for descriptive computational complexity, in which the Regular, First-order, Ptime, Pspace, k-Exptime, k-Expspace (k ≥ 1), and Elementary sets have syntactic characterizations. In this framework, typed lambda terms represent inputs and outputs as well as programs. The lambda calculi describing the above computational complexity classes are simply or let-polymorphically typed with functionalities of fixed order. They consist of: order-0 atomic constants, order-1 equality among these constants, variables, application, and abstraction. Increasing functionality order by one for these languages corresponds to increasing the computational complexity by one alternation. This exact correspondence is established using a semantic evaluation of languages for each fixed order, which is the primary technical contribution of this paper.
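The notion of functionality order is used but not spelled out above; the usual inductive definition over simple types (an assumption on our part) is:

```latex
\[
  \mathrm{order}(\iota) = 0,
  \qquad
  \mathrm{order}(\sigma \to \tau)
    = \max\bigl(\mathrm{order}(\sigma) + 1,\; \mathrm{order}(\tau)\bigr).
\]
```

Under this reading, the order-0 atomic constants and the order-1 equality mentioned above sit at the bottom of the hierarchy, and each unit increase in order corresponds, per the paper, to one more alternation of computational complexity.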

25 citations


Proceedings ArticleDOI
12 Aug 1996
TL;DR: An area model is proposed for estimating the area of single-output Boolean functions given only their functional description, based on a new complexity measure called the average cube complexity; the model has been implemented, and empirical results demonstrating its feasibility and utility are presented.
Abstract: Estimation of the area complexity of a Boolean function from its functional description is an important step towards a power estimation capability at the register transfer level (RTL). This paper addresses the problem of computing the area complexity of single-output Boolean functions given only their functional description, where area complexity is measured in terms of the number of gates required for an optimal implementation of the function. We propose an area model to estimate the area based on a new complexity measure called the average cube complexity. This model has been implemented, and empirical results demonstrating its feasibility and utility are presented.

25 citations


Book
01 Jan 1996
TL;DR: Panel discussion: Does numerical analysis need a model of computation?
Abstract: Contents:
- Panel discussion: Does numerical analysis need a model of computation? by M. Shub
- On numerical solving of nonlinear Polaron equations by P. G. Akishin, I. V. Puzynin, and Y. S. Smirnov
- Symmetry reductions for the numerical solution of boundary value problems by E. L. Allgower and P. J. Aston
- The combinatorics of real algebraic splines over a simplicial complex by C. L. Bajaj
- QMR and TFQMR method for sparse nonsymmetric problems on massively parallel systems by A. Basermann
- On multigrid techniques for thin plate spline interpolation in two dimensions by R. K. Beatson, G. Goodsell, and M. D. Powell
- Sparse matrix reordering schemes for browsing hypertext by M. W. Berry, B. Hendrickson, and P. Raghavan
- Algebraic settings for the problem "P ≠ NP?" by L. Blum, F. Cucker, M. Shub, and S. Smale
- A new algorithm for computing the spectral matrix for higher-order differential equations and the location of discrete eigenvalues by B. M. Brown, M. P. Eastham, and D. R. McCormack
- An asymptotically optimal non-adaptive algorithm for minimization of Brownian motion by J. M. Calvin
- On two iterative methods for approximating the roots of a polynomial by J.-P. Cardinal
- Algebraic approach of residues and applications by J. P. Cardinal and B. Mourrain
- Nash trees and Nash complexity by F. Cucker and T. Lickteig
- Operator equations, multiscale concepts and complexity by W. Dahmen, A. Kunoth, and R. Schneider
- Approximate solutions of numerical problems, condition number analysis and condition number theorem by J.-P. Dedieu
- Computing the distance from a point to an algebraic hypersurface by J. P. Dedieu, X. Gourdon, and J. C. Yakoubsohn
- Local analysis of a Newton-type method based on partial linearization by A. L. Dontchev
- Approximations and complexity for computing algebraic curves by B. C. Eaves and U. G. Rothblum
- Numerical univariate polynomial GCD by I. Z. Emiris, A. Galligo, and H. Lombardi
- A parallel preconditioned GMRES algorithm for sparse matrices by J. Erhel
- An optimal algorithm for the local solution of integral equations by K. Frank
- Descriptive complexity theory over the real numbers by E. Gradel and K. Meer
- Complexity theory of Monte Carlo algorithms by S. Heinrich
- Qualitative numerical analysis of ordinary differential equations by A. Iserles and A. Zanna
- Tapia indicators and finite termination of infeasible-interior-point methods for degenerate LCP by J. Ji and F. A. Potra
- Quasi-Monte Carlo methods in computer graphics: The global illumination problem by A. Keller
- Numerical algorithms with automatic result verification by U. Kulisch
- Random product homotopy with minimal BKK bound by T. Y. Li, T. Wang, and X. Wang
- Computational complexity over the 2-adic numbers by M. Maller and J. Whitehead
- Optimal reconstruction of stochastic evolutions by P. Mathe
- Lagrangian globalization: Solving nonlinear equations via constrained optimization by J. L. Nazareth
- Polynomial time methods in convex programming by A. Nemirovski
- Effective parallel computations with Toeplitz and Toeplitz-like matrices filled with integers by V. Y. Pan
- An efficient discretization for solving ill-posed problems by S. V. Pereverzev and S. G. Solodky
- Survey of computational complexity with noisy information by L. Plaskota
- Lazy analysis and elementary numbers by D. Richardson
- On the average case complexity of solving Poisson equations by K. Ritter and G. W. Wasilkowski
- On the average number of real roots of certain random sparse polynomial systems by J. M. Rojas
- Computations in real algebraic geometry by M.-F. Roy
- Path following for large nonlinear equations by implicit block elimination based on recursive projections by H. Schwetlick, G. Timmermann, and R. Losche
- Computability with neural networks by H. T. Siegelmann
- Numerical algebraic geometry by A. J. Sommese and C. W. Wampler
- Wavelets from filter banks by G. Strang
- A pragmatic overview of fast multipole methods by J. H. Strickland and R. S. Baty
- Topological complexity of root-finding algorithms by V. A. Vassiliev
- On the relationship between layered least squares and affine scaling steps by S. A. Vavasis and Y. Ye
- Enclosure methods for capricious solutions of ordinary differential equations by W. Walter
- QR-like algorithms: An overview of convergence theory and practice by D. S. Watkins
- The complexity of the Poisson problem for spaces of bounded mixed derivatives by A. G. Werschulz
- Overview of information-based complexity by H. Wozniakowski

23 citations


Book ChapterDOI
22 Feb 1996
TL;DR: The complexity of generating and checking proofs of membership for sets in NP and P^NP[O(log n)] is investigated, connecting it to questions about computing satisfying assignments with parallel queries to NP, the unique optimal clique problem, and the unique satisfiability problem.
Abstract: We consider the following questions: 1. Can one compute satisfying assignments for satisfiable Boolean formulas in polynomial time with parallel queries to NP? 2. Is the unique optimal clique problem (UOCLIQUE) complete for P^NP[O(log n)]? 3. Is the unique satisfiability problem (USAT) NP-hard? We define a framework that enables us to study the complexity of generating and checking proofs of membership. We connect the above three questions to the complexity of generating and checking proofs of membership for sets in NP and P^NP[O(log n)]. We show that an affirmative answer to any of the three questions implies the existence of coNP-checkable proofs for P^NP[O(log n)] that can be generated in FP^NP_∥. Furthermore, we construct an oracle relative to which there do not exist coNP-checkable proofs for NP that are generated in FP^NP_∥. It follows that relative to this oracle all of the above questions are answered negatively.

16 citations


Journal ArticleDOI
TL;DR: This work provides lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step.
Abstract: Worst-case time complexity is a measure of the maximum time needed to solve a problem over all runs. Contention-free time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in well-designed systems, it is important to design algorithms which perform well in the absence of contention. We study the contention-free time complexity of shared memory algorithms using two measures: step complexity, which counts the number of accesses to shared registers; and register complexity, which measures the number of different registers accessed. Depending on the system architecture, one of the two measures more accurately reflects the elapsed time. We provide lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step. We also present bounds on the worst-case and contention-free step and register complexities of solving the naming problem. These bounds illustrate that the proposed complexity measures are useful in differentiating among the computational powers of different primitives.

15 citations



Journal Article
TL;DR: It is shown that languages generated by linear PCGSs can be recognized by O(log n) space-bounded Turing machines.
Abstract: The computational complexity is investigated for Parallel Communicating Grammar Systems (PCGSs) whose components are linear grammars. It is shown that languages generated by linear PCGSs can be recognized by O(log n) space-bounded Turing machines. Based on the complexity characterization, the generative power of linear PCGSs is analyzed with respect to context-free and context-sensitive grammars.

Proceedings ArticleDOI
16 Nov 1996
TL;DR: The principle of PC-8 is used to propose a new algorithm to achieve arc-consistency called AC-8; the space complexity of PC-8 is O(n²d), while its time complexity, O(n³d⁴), is worse than that of PC-{5|6}.
Abstract: Recently, efficient algorithms have been proposed to achieve arc- and path-consistency in constraint networks. The best path-consistency algorithm proposed is PC-{5|6}, which is a natural generalization of AC-6 to path-consistency, independently proposed by M. Singh (1995) as PC-5 and by A. Chmeiss and P. Jegou (1995) as PC-6. Unfortunately, we have remarked that PC-{5|6}, though it is widely better than PC-4 (Chmeiss and Jegou, 1996), was not very efficient in practice, especially for those classes of problems that require an important space to be run. So, we propose a new path-consistency algorithm called PC-8, the space complexity of which is O(n²d) but whose time complexity is O(n³d⁴), i.e. worse than that of PC-{5|6}. However, the simplicity of PC-8 as well as the data structures used for its implementation offer a higher performance than PC-{5|6}. The principle of PC-8 is also used to propose a new algorithm to achieve arc-consistency called AC-8.

Book ChapterDOI
19 Aug 1996
TL;DR: An algorithm is presented that constructs, for any problem instance of degree d and fan-out k, a communication schedule with total communication time at most qd + k^(1/q)(d − 1), for any integer q ≥ 2.
Abstract: We consider the Multi-Message Multicasting problem for the n-processor fully connected static network. We present an efficient algorithm to construct a communication schedule with total communication time at most d², where d is the maximum number of messages a processor may send (receive). We present an algorithm to construct, for any problem instance of degree d and fan-out k (the maximum number of processors that may receive a given message), a communication schedule with total communication time at most qd + k^(1/q)(d − 1), for any integer q ≥ 2. The time complexity bound for our algorithm is O(n(d(q + k^(1/q)))q). Our main result is a linear time approximation algorithm with a smaller approximation bound for small values of k (< 100). We discuss applications and show how to adapt our algorithms to dynamic networks such as the Benes network, the interconnection network used in the Meiko CS-2.
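To see the trade-off between the two bounds quoted above, a quick numeric check (the values of d, k, and the range of q are sample choices of ours, not from the paper):

```python
# Worked comparison of the two schedule bounds: the basic d^2 bound versus
# qd + k^(1/q)(d - 1) for integer q >= 2. Illustrative values only.
d, k = 16, 256                     # degree and fan-out (assumed sample values)
print("d^2 =", d * d)              # basic algorithm's bound: 256
for q in range(2, 6):
    bound = q * d + k ** (1 / q) * (d - 1)
    print(f"q = {q}: qd + k^(1/q)(d-1) = {bound:.1f}")
```

For these values the refined bound is minimized near q = 4, at 124, well below the basic bound d² = 256; making q too large brings the qd term back up, which is why the bound holds for "any integer q ≥ 2" rather than fixing one q.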

Journal Article
TL;DR: In this paper, the authors characterize the complexity of some natural and important problems in linear algebra and identify natural complexity classes for which the problems of determining if a system of linear equations is feasible and computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions.
Abstract: We characterize the complexity of some natural and important problems in linear algebra. In particular, we identify natural complexity classes for which the problems of (a) determining if a system of linear equations is feasible and (b) computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions. As an important part of presenting this classification, we show that the "exact counting logspace hierarchy" collapses to near the bottom level. We review the definition of this hierarchy below. We further show that this class is closed under NC¹-reducibility, and that it consists of exactly those languages that have logspace-uniform span programs (introduced by Karchmer and Wigderson) over the rationals. In addition, we contrast the complexity of these problems with the complexity of determining if a system of linear equations has an integer solution.

Book ChapterDOI
13 Nov 1996
TL;DR: A complexity model for discrete surfaces obtained by regular subdivisions of cells is introduced; under the assumptions that surfaces have uniform orientations in space and can be locally compared to planes, the average number of generated points is shown to be a quadratic function of the subdivision factors.
Abstract: The main result of this paper is to exhibit a complexity model for discrete surfaces obtained by regular subdivisions of cells. We use it for estimating the number of points that will be generated by the Dividing-Cubes algorithm to represent the surface of 3D medical objects. Under the assumption that surfaces have uniform orientations in the space, and can be locally compared to planes, we show that their average number of points is a quadratic function of the subdivision factors. We give analytical expressions for the coefficients of the quadratic form.

Proceedings ArticleDOI
07 May 1996
TL;DR: An approach to adaptive IIR filtering based on a pseudo-linear regression and a QR matrix decomposition is developed that has proved to be stable and has good convergence properties if the unknown system satisfies the strictly positive real condition.
Abstract: An approach to adaptive IIR filtering based on a pseudo-linear regression and a QR matrix decomposition is developed. The algorithm has proved to be stable and has good convergence properties if the unknown system satisfies the strictly positive real condition. The derivation of the algorithm is straightforward, and its computational complexity is less than that of the IIR-RPE algorithm. Simulation results of system identification with synthetic and real-world data are shown, comparing the algorithm with the IIR-RPE and IIR-LMS algorithms.

Journal Article
TL;DR: It is shown that the modular communication complexity can be characterised precisely in terms of the logarithm of a certain rigidity function of the communication matrix of this problem.
Abstract: The "log rank" conjecture involves the question of how precisely the deterministic communication complexity of a problem can be described in terms of algebraic invariants of the communication matrix of this problem. We answer this question in the context of modular communication complexity. We show that the modular communication complexity can be exactly characterised in terms of the logarithm of a certain rigidity function of the communication matrix. Thus, we are able to exactly determine the modular communication complexity of several problems, such as set disjointness, comparability, and undirected graph connectivity. From the bounds obtained for the modular communication complexity we deduce exponential lower bounds on the size of depth-two circuits having arbitrary symmetric gates at the bottom level and a MOD_m-gate at the top.

Book ChapterDOI
TL;DR: An attempt to reduce the computational complexity of the advancing front triangulation is described; it is shown that a major subtask, namely the geometric compatibility (mesh correctness) checks, can be carried out with a linear growth rate.
Abstract: The advancing front triangulation algorithm possesses a number of desirable properties: it allows for arbitrary gradation, easy connection of independently meshed domains, etc. However, the computational complexity (abbreviated to CC hereinafter) of the algorithm implementations reported so far is O(N log N) (N being the number of generated triangles), and there is still a strong need for improvement, especially with respect to the ever-growing size of finite element meshes used in practical problems. The present paper describes techniques that do not change the asymptotic CC, i.e., the theoretical CC is still of order O(N log N); however, significant improvements are achieved: a large part of the time consumed during triangulation is of order O(N) (approximately 97% for graded meshes with N ≈ 10⁵), and the rest of the triangulation time is at worst of O(N log N) CC.

Journal ArticleDOI
TL;DR: Although no one knows whether P is different from NP, showing that a problem is NP-complete provides strong evidence that the problem is computationally infeasible and justifies the use of heuristics for solving the problem.
Abstract: Computational complexity is the study of the resources, such as time and space (memory), required to solve computational problems. By quantifying these resources, complexity theory has profoundly affected our thinking about computation. Computability theory establishes the existence of undecidable problems that cannot be solved in principle, regardless of the amount of time invested. In contrast, complexity theory establishes the existence of decidable problems that, although solvable in principle, cannot be solved in practice, because the time and space required would be larger than the age and size of the known universe [Stockmeyer and Chandra 1979]. The quest for the boundaries of the set of feasible problems, those solvable in practice, has led to one of the most important unresolved questions in computer science: Is P different from NP? Here P comprises the problems that can be solved feasibly in polynomial time and NP comprises the problems whose solutions can be verified in polynomial time. Hundreds of fundamental problems, including many ubiquitous optimization problems of operations research, are NP-complete—they are the hardest problems in NP. If there is a polynomial-time algorithm for any one NP-complete problem, then there would be polynomial-time algorithms for all of them. Despite the efforts of many scientists over several decades, no polynomial-time algorithm has been found for any NP-complete problem. Although no one knows whether P is different from NP, showing that a problem is NP-complete provides strong evidence that the problem is computationally infeasible and justifies the use of heuristics for solving the problem.
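For reference, textbook forms of the two classes discussed above (a standard sketch, not taken from the article itself):

```latex
\[
  \mathrm{P} \;=\; \bigcup_{k \ge 1} \mathrm{TIME}\!\left(n^{k}\right),
  \qquad
  \mathrm{NP} \;=\; \bigl\{\, L \;:\; \exists\, \text{poly-time verifier } V
  \text{ with } x \in L \iff \exists\, w,\ |w| \le \mathrm{poly}(|x|),\ V(x,w)=1 \,\bigr\}.
\]
```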

Proceedings ArticleDOI
05 Jun 1996
TL;DR: The results of the study provide the minimum requirement for a hardware solution to perform real-time MPEG-2 encoding, and using a state-of-the-art video DSP, a mapping of the MPEG-2 encoding algorithm is proposed.
Abstract: The complexity of the MPEG-2 encoding algorithm is investigated in this paper. A single constrained syntax, i.e., the main profile & main level, is considered. The results of the study provide the minimum requirement for a hardware solution to perform real-time MPEG-2 encoding. Using a state-of-the-art video DSP, a mapping of the MPEG-2 algorithm is proposed. Execution times for each functional block are computed.

Journal ArticleDOI
TL;DR: The chi-square test is applied to decide whether the distribution of the time complexity of quicksort can be approximated by the normal distribution, and a method is proposed in which the probability R(x) that the time complexity exceeds a given value x is determined accurately by approximating the distribution by a three-parameter Weibull distribution.
Abstract: Quicksort is a well-known sorting algorithm based on divide-and-conquer control. The array to be sorted is divided into two sets as follows: an element in the array is specified, and the set of values larger than the value of that element and the set of values smaller than that value are constructed. Each of those two sets is sorted independently, and the procedure is iterated for the divided sets; in other words, the algorithm has a recursive structure. The average time complexity of quicksort (the average number of comparisons) is O(n log n). Depending on the data to be sorted, however, the performance may deteriorate drastically; in the worst case, the time complexity is O(n²). In this paper, the generating function of the time complexity of quicksort is introduced, and the mean and the variance of the time complexity are determined analytically using the generating function. Based on the derived mean and variance, the chi-square test is applied as to whether or not the distribution of the time complexity can be approximated by the normal distribution. Then, a method is proposed in which the probability R(x) that the time complexity exceeds a certain value x is determined accurately by approximating the distribution of the time complexity of quicksort by a three-parameter Weibull distribution. Finally, the selection of the sorting algorithm is discussed.
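Since the analysis above rests on the distribution of quicksort's comparison count, a small Monte Carlo sketch makes it concrete. The script below is our illustration, not the paper's method (it only samples, whereas the paper works with generating functions and a Weibull fit); it checks the sample mean against the classical closed form 2(n+1)H_n − 4n for the expected number of comparisons on a random permutation of n distinct keys:

```python
import random
import sys

sys.setrecursionlimit(100000)

def comparisons(a):
    """Comparisons made by plain quicksort (first element as pivot, distinct keys)."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    greater = [x for x in a[1:] if x >= pivot]
    return len(a) - 1 + comparisons(less) + comparisons(greater)

n, trials = 200, 2000
samples = []
for _ in range(trials):
    a = list(range(n))
    random.shuffle(a)
    samples.append(comparisons(a))

H = sum(1.0 / i for i in range(1, n + 1))        # harmonic number H_n
mean_theory = 2 * (n + 1) * H - 4 * n            # exact expected comparisons
mean_sample = sum(samples) / trials
print(f"theoretical mean {mean_theory:.1f}, sample mean {mean_sample:.1f}")
```

The sample histogram produced this way is visibly right-skewed, which is consistent with the paper's finding that a skewed three-parameter Weibull fits the tail probability R(x) better than a normal approximation.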

Proceedings ArticleDOI
18 Aug 1996
TL;DR: A radix-a algorithm of this family of discrete Fourier transform algorithms is derived using the matrix factorization approach, known to be useful in mapping algorithms to architectures.
Abstract: Recently, a new family of discrete Fourier transform algorithms has been reported in which the structural complexity is significantly reduced without affecting the arithmetic complexity. In the present paper, a radix-a algorithm of this family is derived using the matrix factorization approach. This approach is known to be useful in mapping algorithms to architectures. An analysis of the computational complexity of this algorithm is carried out. A run-time comparison of the proposed algorithm is made with the Cooley-Tukey radix-2 algorithm.

Journal ArticleDOI
TL;DR: Regions are identified, with respect to the structural parameters of the input network, where the algorithm takes much more time than it does over other regions, and the data are analyzed statistically to develop a model with which one can calculate the expected time consumed by the algorithm for a given input network.
Abstract: The 3-consistency algorithm for temporal constraint propagation over interval-based networks, proposed by James Allen, is finding use in many practical temporal reasoning systems. Apart from the polynomial behavior of this algorithm with respect to the number of nodes in the network, very little is known about its time complexity with respect to other properties of the initially given temporal constraints. In this article we report some of our results analyzing the complexity with respect to some structural parameters of the input constraint network. We have identified some regions, with respect to the structural parameters of the input network, where the algorithm takes much more time than it does over other regions. Similar features have been observed in recent studies on NP-hard problems. The average-case complexity of Allen's algorithm is also studied empirically, over a hundred thousand randomly generated networks, and the growth rate is observed to be of the order of quadratic with respect to the problem size (at least up to 40 nodes, and expected to be lower above that). We have analyzed our data statistically to develop a model with which one can calculate the expected time to be consumed by the algorithm for a given input network.


Journal Article
TL;DR: The size of circuits that perfectly hash an arbitrary subset S ⊆ {0,1}^n of cardinality 2^k into {0,1}^m is considered; it is observed that, in general, the size of such circuits is exponential in 2k − m, and a matching upper bound is provided.
Abstract: We consider the size of circuits that perfectly hash an arbitrary subset S ⊆ {0,1}^n of cardinality 2^k into {0,1}^m. We observe that, in general, the size of such circuits is exponential in 2k − m, and provide a matching upper bound.

Journal ArticleDOI
01 Jan 1996
TL;DR: The limiting distribution of the suitably normalized approximation error is derived for both random and deterministic non-adaptive approximation methods, and the form of the asymptotically optimal random non-adaptive methods is identified.
Abstract: This paper is concerned with the analysis of the average error in approximating the global minimum of a 1-dimensional, time-homogeneous diffusion by non-adaptive methods. We derive the limiting distribution of the suitably normalized approximation error for both random and deterministic non-adaptive approximation methods. We identify the form of the asymptotically optimal random non-adaptive approximation methods.

Journal ArticleDOI
TL;DR: It is shown that, basically, only the ranking of the inputs by decreasing probabilities is of importance, which allows meaningful average distributional complexity classes to be defined by a time bound and a bound on the complexity of possible distributions.
Abstract: A new definition is given for the average growth of a function f: Σ* → N with respect to a probability measure μ on Σ*. This allows us to define meaningful average distributional complexity classes for arbitrary time bounds (previously, one could not guarantee arbitrarily good precision). It is shown that, basically, only the ranking of the inputs by decreasing probabilities is of importance.

Proceedings ArticleDOI
01 Oct 1996
TL;DR: A task-parallel algorithm for distinct degree factorization (DDF) is considered, and one of the major findings is that, from the viewpoint of time complexity, the fine DDF steps can be concealed by the coarse ones.
Abstract: A task-parallel algorithm for distinct degree factorization (DDF) is considered. The recent DDF algorithms consist of coarse and fine DDF steps, and utilize the asymptotically fast algorithms for various po~ynomial manipulations, including binary-tree multiplication, Chinese remaindering, multipoint evaluation and modular composition. Some basic techniques for parallelization are summarized and applied to these component algorithms, with no arithmetic operation of univariate polynomials parallelized. More significantly considered is the scheduling of the computation steps, and one of the major findings is that, from the viewpoint of time complexity, the fine DDF steps can be concealed by the coarse ones. Finally, we present a complete description of our new parallel algorithm of practical use, and show that it can perform DDF of a univariate polynomial of degree n over a finite field of q elements in time O(Af(n)logq + (114(n3\2) + nl/2M14(nlf2))l ogn) on n112 processors using 0(n3i2) space, where Al(n) and &f&f(k) denote the costs for multiplications of polynomials of degree n and of k x k matrices, respectively.