Topic
Average-case complexity
About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have appeared within this topic, receiving 44,972 citations.
Papers published on a yearly basis
Papers
25 Oct 1976
TL;DR: There were consequences not only for computational complexity but also for number theory (new polynomial definitions of primality and exponentiation) and logic (a syntactical characterization of the elementary-computable functions).
Abstract: In the 1930's Gödel, together with Church, Kleene, and Turing, established a relationship between computation and elementary number theory. Using techniques developed by Robinson, Putnam, Davis, and Matijasevič in their celebrated solution to Hilbert's 10th problem, we began in [2] a detailed analysis to determine what consequences this relationship might have for computational complexity. We found that there were consequences not only for computational complexity (nontrivial lower bounds on decision procedures for polynomials, the polynomial compression theorem) but also for number theory (new polynomial definitions of primality and exponentiation) and logic (a syntactical characterization of the elementary-computable functions).
41 citations
01 Jan 2010
TL;DR: A modular framework is introduced which allows one to infer upper bounds on the derivational complexity of term rewrite systems by combining different criteria, and it is proved that this framework is strictly more powerful than the conventional setting.
Abstract: In this paper we introduce a modular framework which allows one to infer (feasible) upper bounds on the (derivational) complexity of term rewrite systems by combining different criteria. All current investigations into derivational complexity are based on a single termination proof, possibly preceded by transformations. We prove that the modular framework is strictly more powerful than the conventional setting. Furthermore, the results have been implemented, and experiments show significant gains in power.
41 citations
TL;DR: A new model of computation is used that accepts vectors of real numbers as inputs and allows the transfer of the structural approach to computability and complexity to computations with real numbers; a proof of the existence of NPR-complete problems is given.
Abstract: The aim of this paper is to survey certain theoretical aspects of the complexity of quantifier elimination in the elementary theory of the real numbers with real constants, and to present some new results on the subject. We use the new model of computation introduced by L. Blum, M. Shub and S. Smale, which accepts vectors of real numbers as inputs and allows the transfer of the structural approach to computability and complexity to computations with real numbers. More concretely, we give a proof of the existence of NPR-complete problems. We also introduce a new complexity class, PATR, which describes the complexity of deciding quantified formulae, and, in order to study its relationships with the already existing complexity classes, a model for parallel computations is also introduced.
41 citations
09 Jan 2013
TL;DR: In this article, the authors propose a new complexity model to account for the energy used by an algorithm: a weighted sum of the time complexity of the algorithm and the number of parallel I/O accesses it makes.
Abstract: Energy consumption has emerged as a first-class computing resource for both server systems and personal computing devices. The growing importance of energy has led to a rethinking of hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy, and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of 'parallel' I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth, and we present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allow us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy-optimal algorithms for sorting (and its special cases, such as permutation), matrix transpose and (sparse) matrix-vector multiplication.
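The weighted-sum structure of the model described above can be sketched as a one-line cost function. The weights below are purely hypothetical placeholders; the paper's actual constants are hardware-dependent and not given in this abstract:

```python
def energy_cost(time_steps: int, parallel_ios: int,
                w_time: float = 1.0, w_io: float = 10.0) -> float:
    """Energy as a weighted sum of time complexity and the number of
    'parallel' I/O accesses, in the spirit of the model sketched above.
    w_time and w_io are illustrative weights, not values from the paper."""
    return w_time * time_steps + w_io * parallel_ios

# An algorithm taking 100 RAM steps and 5 parallel I/Os under these weights:
print(energy_cost(100, 5))  # 150.0
```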
41 citations
TL;DR: Two generalized discrete versions of the Arimoto-Blahut algorithm for continuous channels require only the computation of a sequence of finite sums, which significantly reduces numerical computational complexity.
Abstract: A version of the Arimoto-Blahut algorithm for continuous channels involves evaluating integrals over the entire input space and is thus not tractable. Two generalized discrete versions of the Arimoto-Blahut algorithm are presented for this purpose. Instead of calculating integrals, both algorithms require only the computation of a sequence of finite sums, which significantly reduces numerical computational complexity.
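For context on the finite-sum iteration the abstract refers to, here is a minimal sketch of the standard discrete Blahut-Arimoto capacity iteration (not the paper's generalized versions). The function name, iteration count, and NumPy formulation are illustrative choices:

```python
import numpy as np

def blahut_arimoto(W: np.ndarray, iters: int = 200):
    """Approximate the capacity (in nats) of a discrete memoryless channel.
    W[x, y] = P(y | x); each row of W must sum to 1.
    Returns (capacity, optimal input distribution). Illustrative sketch."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)          # start from the uniform input distribution
    for _ in range(iters):
        q = p @ W                      # induced output distribution q(y)
        # d[x] = D( W(.|x) || q ), with the convention 0 * log 0 = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        c = np.exp(d)
        p = p * c / (p @ c)            # multiplicative update of p(x)
    q = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
    return float(p @ d), p

# Binary symmetric channel with crossover probability 0.1;
# capacity is ln 2 - H_b(0.1) ~ 0.368 nats at the uniform input.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
cap, p_opt = blahut_arimoto(W)
print(round(cap, 4))  # 0.3681
```

Each step of the loop evaluates only finite sums over the input and output alphabets, which is exactly the property the discrete variants above exploit in place of integrals.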
41 citations