
Showing papers on "Average-case complexity published in 2000"


Journal ArticleDOI
TL;DR: A practical measure for the complexity of sequences of symbols (“strings”) is introduced that is rooted in automata theory but avoids the problems of Kolmogorov‐Chaitin complexity, and is applied to tRNA sequence data.

153 citations


Journal ArticleDOI
TL;DR: It is shown that the planning problem in the presence of incompleteness is indeed harder: it belongs to the next level of the complexity hierarchy (in precise terms, it is Σ2P-complete), and under certain conditions, one of these approximations makes the problem NP-complete.

145 citations


Journal ArticleDOI
01 Oct 2000
TL;DR: It is shown to be very unlikely that a polynomial-time algorithm can be found when either (1) the plant is composed of m components running concurrently or (2) the set of legal behaviors is given by the intersection of n legal specifications.
Abstract: The time complexity of supervisory control design for a general class of problems is studied. It is shown to be very unlikely that a polynomial-time algorithm can be found when either (1) the plant is composed of m components running concurrently or (2) the set of legal behaviors is given by the intersection of n legal specifications. That is to say, in general, there is no way to avoid constructing a state space which has size exponential in m+n. It is suggested that, rather than discouraging future work in the field, this result should point researchers to more fruitful directions, namely, studying special cases of the problem, where certain structural properties possessed by the plant or specification lend themselves to more efficient algorithms for designing supervisory controls. As no background on the subject of computational complexity is assumed, we have tried to explain all the borrowed material in basic terms, so that this paper may serve as a tutorial for a system engineer not familiar with the subject.
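To make the source of the blow-up concrete, here is a small Python sketch (not from the paper) of explicit synchronous composition of plant components; with m two-state components over disjoint alphabets, the reachable product already has 2^m states. The tuple encoding of an automaton used here is an assumption made for this illustration.

```python
from collections import deque

def sync_product(automata):
    """Explicit synchronous composition of component automata, each given as
    (states, alphabet, delta, q0) with delta[(q, e)] -> q'.  Events shared by
    several components synchronize; the reachable product can have up to
    prod_i |states_i| states, i.e. exponential in the number of components."""
    alphabets = [a[1] for a in automata]
    events = set().union(*alphabets)
    start = tuple(a[3] for a in automata)
    seen, queue, transitions = {start}, deque([start]), {}
    while queue:
        q = queue.popleft()
        for e in events:
            nxt = []
            for (_states, alpha, delta, _q0), qi in zip(automata, q):
                if e in alpha:
                    if (qi, e) in delta:
                        nxt.append(delta[(qi, e)])
                    else:
                        nxt = None          # some component blocks event e
                        break
                else:
                    nxt.append(qi)          # component not involved in e
            if nxt is None:
                continue
            nq = tuple(nxt)
            transitions[(q, e)] = nq
            if nq not in seen:
                seen.add(nq)
                queue.append(nq)
    return seen, transitions

# m two-state toggling components with disjoint alphabets -> 2^m product states.
m = 4
comps = [({0, 1}, {f"e{i}"}, {(0, f"e{i}"): 1, (1, f"e{i}"): 0}, 0) for i in range(m)]
states, _ = sync_product(comps)
print(len(states))  # 16 = 2**4
```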

118 citations


Journal ArticleDOI
TL;DR: This article shows a relationship between the linear complexity and the minimum value k for which the k-error linear complexity is strictly less than thelinear complexity.
Abstract: Linear complexity is an important cryptographic criterion of stream ciphers. The k-error linear complexity of a periodic sequence of period N is defined as the smallest linear complexity that can be obtained by changing k or fewer bits of the sequence per period. This article shows a relationship between the linear complexity and the minimum value k for which the k-error linear complexity is strictly less than the linear complexity.
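As a small illustration of the two quantities involved (linear complexity and k-error linear complexity), the Python sketch below computes the former with the standard Berlekamp-Massey algorithm over GF(2) and the latter by brute force over all patterns of at most k changed bits per period. This is a didactic aid restricted to binary sequences, not the article's construction; the function names are ours.

```python
from itertools import combinations

def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]                      # discrepancy
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(shift, n):
                c[j] ^= b[j - shift]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def k_error_linear_complexity(period, k):
    """Smallest linear complexity reachable by changing at most k bits of one
    period (brute force, exponential in k: small examples only).  Two periods
    are fed to Berlekamp-Massey, which suffices since LC <= period length."""
    n = len(period)
    best = linear_complexity(period * 2)
    for t in range(1, k + 1):
        for positions in combinations(range(n), t):
            s = list(period)
            for p in positions:
                s[p] ^= 1
            best = min(best, linear_complexity(s * 2))
    return best

print(k_error_linear_complexity([1, 0, 1, 1, 0, 1, 0, 0], k=1))
```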

82 citations


Journal ArticleDOI
TL;DR: In this article, a fast algorithm is presented for determining the linear complexity of a sequence with period p^n over GF(q), where p is an odd prime and q is a prime that is a primitive root mod p^2.
Abstract: A fast algorithm is presented for determining the linear complexity of a sequence with period p^n over GF(q), where p is an odd prime, and where q is a prime and a primitive root (mod p^2).
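For intuition, the well-known Games-Chan algorithm handles the binary special case (period 2^n over GF(2)) that algorithms of this kind generalize to period p^n over GF(q); the sketch below is that classical special case, not the paper's algorithm.

```python
def games_chan(period):
    """Games-Chan algorithm: linear complexity of a binary sequence whose
    period length (one full period given as `period`) is a power of two."""
    n = len(period)
    assert n and n & (n - 1) == 0, "period length must be a power of two"
    s, c = list(period), 0
    while n > 1:
        half = n // 2
        left, right = s[:half], s[half:]
        diff = [x ^ y for x, y in zip(left, right)]
        if any(diff):
            c += half        # complexity gains `half`; recurse on the difference
            s = diff
        else:
            s = left         # the two halves agree: recurse on one of them
        n = half
    return c + (s[0] & 1)

print(games_chan([1, 0, 1, 0]))  # -> 2
```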

67 citations


Proceedings Article
10 Jul 2000
TL;DR: A worst-case analysis of the time and space complexity of the parameter-less genetic algorithm versus the genetic algorithm with an optimal population size is provided, and the results of the analysis are discussed.
Abstract: In this paper, the worst-case analysis of the time and space complexity of the parameter-less genetic algorithm versus the genetic algorithm with an optimal population size is provided, and the results of the analysis are discussed. Since the assumptions required for the analysis to be correct are very weak, the result is applicable to a wide range of problems. Various configurations of the parameter-less genetic algorithm are considered and the results of their time and space complexity are compared.

34 citations


Proceedings ArticleDOI
16 Jul 2000
TL;DR: This paper presents an adaptive algorithm for mutual exclusion using only read and write operations, and presents a technique that reduces the space complexity of this algorithm to a function of n, while preserving the other performance measures of the algorithm.
Abstract: A distributed algorithm is adaptive if its performance depends on k, the number of processes that are concurrently active during the algorithm execution (rather than on n, the total number of processes). This paper presents an adaptive algorithm for mutual exclusion using only read and write operations. The worst-case step complexity cannot be a measure for the performance of mutual exclusion algorithms, because it is always unbounded in the presence of contention. Therefore, a number of different parameters are used to measure the algorithm's performance: the remote step complexity is the maximal number of steps performed by a process, where a wait is counted as one step; the system response time is the time interval between subsequent entries to the critical section, where one time unit is the minimal interval in which every active process performs at least one step. The algorithm presented here has O(k) remote step complexity and O(log k) system response time, where k is the point contention. The space complexity of this algorithm is O(nN), where N is the range of processes' names. The space complexity of all previously known adaptive algorithms for various long-lived problems depends on N. We present a technique that reduces the space complexity of our algorithm to a function of n, while preserving the other performance measures of the algorithm.

27 citations


Proceedings ArticleDOI
04 Jul 2000
TL;DR: In this article, it was shown that the non-deterministic quantum query complexity of a total Boolean function f is linearly related to the degree of a "non-deterministic" polynomial for f, and that it can be exponentially smaller than its classical counterpart.
Abstract: It is known that the classical and quantum query complexities of a total Boolean function f are polynomially related to the degree of its representing polynomial, but the optimal exponents in these relations are unknown. We show that the non-deterministic quantum query complexity of f is linearly related to the degree of a "non-deterministic" polynomial for f. We also prove a quantum-classical gap of 1 vs. N for non-deterministic query complexity for a total f. In the case of quantum communication complexity there is a (partly undetermined) relation between the complexity of f and the logarithm of the rank of its communication matrix. We show that the non-deterministic quantum communication complexity of f is linearly related to the logarithm of the rank of a non-deterministic version of the communication matrix and that it can be exponentially smaller than its classical counterpart.

26 citations


Journal Article
TL;DR: It is shown that for average-case complexity under the uniform distribution, quantum algorithms can be exponentially faster than classical algorithms and under non-uniform distributions the gap can even be super-exponential.
Abstract: We compare classical and quantum query complexities of total Boolean functions. It is known that for worst-case complexity, the gap between quantum and classical can be at most polynomial [3]. We show that for average-case complexity under the uniform distribution, quantum algorithms can be exponentially faster than classical algorithms. Under non-uniform distributions the gap can even be super-exponential. We also prove some general bounds for average-case complexity and show that the average-case quantum complexity of MAJORITY under the uniform distribution is nearly quadratically better than the classical complexity.

25 citations


Posted Content
TL;DR: Nondeterministic quantum algorithms for a Boolean function f are those with positive acceptance probability on input x iff f(x)=1; the results imply that the quantum communication complexities of the equality and disjointness functions are n+1 if no error probability is allowed.
Abstract: We study nondeterministic quantum algorithms for Boolean functions f. Such algorithms have positive acceptance probability on input x iff f(x)=1. In the setting of query complexity, we show that the nondeterministic quantum complexity of a Boolean function is equal to its ``nondeterministic polynomial'' degree. We also prove a quantum-vs-classical gap of 1 vs n for nondeterministic query complexity for a total function. In the setting of communication complexity, we show that the nondeterministic quantum complexity of a two-party function is equal to the logarithm of the rank of a nondeterministic version of the communication matrix. This implies that the quantum communication complexities of the equality and disjointness functions are n+1 if we do not allow any error probability. We also exhibit a total function in which the nondeterministic quantum communication complexity is exponentially smaller than its classical counterpart.
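As a worked illustration of the 1 vs n gap for nondeterministic query complexity claimed above (a standard example, which may or may not be the exact witness function used in the paper):

```latex
\[
  f(x) = 1 \;\iff\; |x| \neq 1 , \qquad x \in \{0,1\}^n ,
\]
where $|x|$ denotes the Hamming weight. A nondeterministic polynomial for $f$
is one that is nonzero exactly on the $1$-inputs of $f$; the degree-$1$ polynomial
\[
  p(x) \;=\; \Bigl(\sum_{i=1}^{n} x_i\Bigr) - 1
\]
vanishes precisely when $|x| = 1$, so the nondeterministic degree of $f$ is $1$.
Classically, certifying the all-zeros input (a $1$-input) requires reading all
$n$ bits, so the classical nondeterministic query complexity of $f$ is $n$.
```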

24 citations


Journal ArticleDOI
TL;DR: The basic Quicksort algorithm is introduced, a flavor of the richness of its complexity analysis is given, and some of its generalizations to parallel algorithms and computational geometry are provided.
Abstract: This article introduces the basic Quicksort algorithm and gives a flavor of the richness of its complexity analysis. The author also provides a glimpse of some of its generalizations to parallel algorithms and computational geometry.
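For readers who want the algorithm itself in front of them, here is a textbook randomized Quicksort in Python (not code from the article); the random pivot is what makes the expected running time O(n log n) on every input, while the worst case remains O(n^2).

```python
import random

def quicksort(a):
    """In-place randomized Quicksort.
    Expected O(n log n) comparisons on any input; worst case O(n^2)."""
    def partition(lo, hi):
        k = random.randint(lo, hi)          # random pivot
        a[k], a[hi] = a[hi], a[k]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def sort(lo, hi):
        while lo < hi:
            p = partition(lo, hi)
            # Recurse on the smaller side first to keep the stack O(log n).
            if p - lo < hi - p:
                sort(lo, p - 1)
                lo = p + 1
            else:
                sort(p + 1, hi)
                hi = p - 1

    sort(0, len(a) - 1)
    return a

print(quicksort([5, 3, 8, 1, 9, 2, 7]))
```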

Proceedings Article
28 Jun 2000
TL;DR: A formal learning model for this task is introduced that uses a hypothesis class as its "anti-overfitting" mechanism, and it is shown that, for some constants depending on the hypothesis class, the resulting problems are NP-hard to approximate to within these constant factors.
Abstract: We investigate the computational complexity of the task of detecting dense regions of an unknown distribution from unlabeled samples of this distribution. We introduce a formal learning model for this task that uses a hypothesis class as its "anti-overfitting" mechanism. The learning task in our model can be reduced to a combinatorial optimization problem. We show that for some constants, depending on the hypothesis class, these problems are NP-hard to approximate to within these constant factors. We go on to introduce a new criterion for the success of approximate optimization of geometric problems. The new criterion requires that the algorithm competes with hypotheses only on the points that are separated by some margin from their boundaries. Quite surprisingly, we discover that for each of the two hypothesis classes that we investigate, there is a "critical value" of the margin parameter. For any value below the critical value the problems are NP-hard to approximate, while, once this value is exceeded, the problems become poly-time solvable.

Journal ArticleDOI
TL;DR: This paper exploits the notion of “unfinished site”, introduced by Katajainen and Koppinen (1998) in the analysis of a two-dimensional Delaunay triangulation algorithm, based on a regular grid, and generalizes it to any dimension k⩾2, which allows the algorithm to adapt efficiently to irregular distributions.
Abstract: This paper exploits the notion of "unfinished site", introduced by Katajainen and Koppinen (1998) in the analysis of a two-dimensional Delaunay triangulation algorithm based on a regular grid. We generalize the notion and its properties to any dimension k ≥ 2: in the case of uniform distributions, the expected number of unfinished sites in a k-rectangle is O(N^(1−1/k)). This implies, under some specific assumptions, the linearity of a class of divide-and-conquer schemes based on balanced k-d trees. This general result is then applied to the analysis of a new algorithm for constructing Delaunay triangulations in the plane. According to Su and Drysdale (1995, 1997), the best known algorithms for this problem run in linear expected time, thanks in particular to the use of bucketing techniques to partition the domain. In our algorithm, the partitioning is based on a 2-d tree instead, the construction of which takes Θ(N log N) time, and we show that the rest of the algorithm runs in linear expected time. This "preprocessing" allows the algorithm to adapt efficiently to irregular distributions, as the domain is partitioned using point coordinates, as opposed to a fixed, regular basis (buckets or grid). We checked that even for the largest data sets that could fit in internal memory (over 10 million points), constructing the 2-d tree takes noticeably less CPU time than triangulating the data. With this in mind, our algorithm is only slightly slower than the reputedly best algorithms on uniform distributions, and is even the most efficient for data sets of up to several millions of points distributed in clusters.
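The partitioning structure used above is an ordinary balanced 2-d tree (a k-d tree with k = 2). The following Python sketch shows the median-split construction; it is illustrative rather than the authors' implementation, and it sorts at every node, so it runs in O(N log^2 N) — replacing the sort by linear-time median selection (or pre-sorting once) gives the Θ(N log N) construction mentioned in the abstract.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right

def build_2d_tree(points: List[Point], depth: int = 0) -> Optional[KDNode]:
    """Balanced 2-d tree via median splits on alternating coordinates."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])   # sorting here: O(N log^2 N) overall;
    mid = len(pts) // 2                           # linear-time median selection would
    return KDNode(pts[mid], axis,                 # give the Theta(N log N) construction
                  build_2d_tree(pts[:mid], depth + 1),
                  build_2d_tree(pts[mid + 1:], depth + 1))

root = build_2d_tree([(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0)])
print(root.point)  # the median point on the x-axis becomes the root
```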

Journal ArticleDOI
TL;DR: The first nontrivial general lower bound for average-case Shellsort is shown in this paper: the average-case running time of p-pass Shellsort is Ω(p·n^(1+1/p)).
Abstract: We demonstrate an Ω(p·n^(1+1/p)) lower bound on the average-case running time (uniform distribution) of p-pass Shellsort. This is the first nontrivial general lower bound for average-case Shellsort.
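For concreteness, a generic p-pass Shellsort — the family of algorithms the lower bound above applies to — looks like this in Python; the gap sequence is a free parameter, with a final gap of 1 so the output is sorted.

```python
def shellsort(a, gaps):
    """p-pass Shellsort: one gapped insertion-sort pass per gap (gaps must end
    with 1 so the final pass is a plain insertion sort).  The bound discussed
    above says any p-pass variant needs Omega(p * n^(1+1/p)) time on average."""
    for h in gaps:                       # p = len(gaps) passes
        for i in range(h, len(a)):
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]
                j -= h
            a[j] = x
    return a

# Example: a 3-pass run with gaps 4, 2, 1 on a small array.
print(shellsort([5, 3, 8, 1, 9, 2, 7], [4, 2, 1]))
```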

Journal ArticleDOI
TL;DR: A unified derivation of the bounds of the linear complexity is given for a sequence obtained from a periodic sequence over GF(q) by either substituting, inserting, or deleting k symbols within one period.
Abstract: A unified derivation of the bounds of the linear complexity is given for a sequence obtained from a periodic sequence over GF(q) by either substituting, inserting, or deleting k symbols within one period. The lower bounds are useful in case of n

Journal ArticleDOI
TL;DR: Bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Stacksort (Queuesort) are proved using the incompressibility method.
Abstract: Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms; we have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Stacksort (Queuesort).
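A concrete companion to the queue bounds: when sorting with parallel FIFO queues, each queue must receive its elements in increasing order, so the number of queues needed for a permutation equals the minimum number of increasing subsequences covering it, i.e. (by Dilworth's theorem, cf. Tarjan 1972) the length of its longest decreasing subsequence. The greedy sketch below computes this quantity; it illustrates the combinatorial parameter whose average-case value the paper bounds, not the incompressibility argument itself.

```python
from bisect import bisect_left

def parallel_queues_needed(perm):
    """Minimum number of parallel FIFO queues needed to sort the permutation
    `perm` (distinct values assumed).  Greedy 'patience'-style assignment:
    put each element behind the largest queue tail smaller than it, opening a
    new queue when no such tail exists.  O(n log n)."""
    tails = []                        # current tail of each queue, kept sorted
    for x in perm:
        i = bisect_left(tails, x)     # queues whose tail is >= x start at index i
        if i == 0:
            tails.insert(0, x)        # no queue can take x: open a new one
        else:
            tails[i - 1] = x          # append x to the queue with the largest tail < x
    return len(tails)

print(parallel_queues_needed([4, 2, 6, 1, 5, 3]))  # -> 3 (longest decreasing subseq: 4, 2, 1)
```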

Proceedings ArticleDOI
03 Sep 2000
TL;DR: Improvements to the Perez and Vidal algorithm, based on the A* algorithm, are proposed to reduce its complexity; comparative results show that the method is systematically faster, in particular for a large number of segments.
Abstract: Perez and Vidal proposed (1994) an optimal algorithm for the decomposition of digitized curves into line segments. The number of segments is fixed a priori, and the error criterion is the sum of the squared Euclidean distances from each point of the contour to its orthogonal projection onto the corresponding line segment. The complexity of the Perez and Vidal algorithm is O(n^2·m), where n is the number of points and m is the number of segments. We propose improvements of the algorithm to reduce the complexity, using the A* algorithm. The optimality of the algorithm is preserved and its complexity is lower. Some comparative results are presented, showing that our method is systematically faster, in particular for a large number of segments.
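For context, here is a Python sketch of the baseline dynamic program that A*-style methods accelerate (illustrative, not the authors' code). The segment-error table is filled naively in O(n^3) here, whereas Perez and Vidal obtain each segment error in O(1) from running sums, which is what gives the O(n^2·m) bound; also, this sketch clamps projections to the segment, a detail that may differ from the original formulation.

```python
import math

def seg_sq_dist(p, a, b):
    """Squared distance from point p to its orthogonal projection on segment a-b
    (projection clamped to the segment)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return (px - ax) ** 2 + (py - ay) ** 2
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    qx, qy = ax + t * dx, ay + t * dy
    return (px - qx) ** 2 + (py - qy) ** 2

def polygonal_approx(points, m):
    """Baseline dynamic program for the optimal m-segment approximation of an
    open digitized curve.  Returns (minimal total error, breakpoint indices)."""
    n = len(points)
    # cost[i][j]: error of approximating points[i..j] by the single segment i-j
    # (filled naively in O(n^3); running sums would make each entry O(1)).
    cost = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            cost[i][j] = sum(seg_sq_dist(points[k], points[i], points[j])
                             for k in range(i + 1, j))
    E = [[math.inf] * (m + 1) for _ in range(n)]
    back = [[-1] * (m + 1) for _ in range(n)]
    E[0][0] = 0.0
    for k in range(1, m + 1):
        for j in range(1, n):
            for i in range(j):
                if E[i][k - 1] + cost[i][j] < E[j][k]:
                    E[j][k] = E[i][k - 1] + cost[i][j]
                    back[j][k] = i
    bp, j, k = [n - 1], n - 1, m           # recover the breakpoints
    while k > 0:
        j = back[j][k]
        bp.append(j)
        k -= 1
    return E[n - 1][m], bp[::-1]

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 5.2), (5, 4.9)]
print(polygonal_approx(pts, 2))
```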

Proceedings ArticleDOI
31 Oct 2000
TL;DR: A symbolic-numeric version of the silhouette algorithm is presented that does not require the symbolic computation of the determinants of resultant matrices and can work in floating-point arithmetic.
Abstract: The silhouette algorithm developed by Canny (1988, 1993) is a general motion planning algorithm which is known to have the best complexity of all general and complete algorithms. The authors present a symbolic-numeric version of the algorithm. This version does not require the symbolic computation of the determinants of resultant matrices, and can work in floating-point arithmetic. Though its combinatorial complexity remains the same, its algebraic complexity is improved significantly, which is very important for a practical implementation. Several numerical examples are also presented.


Proceedings ArticleDOI
01 May 2000
TL;DR: This paper defines several variants of metaquerying that encompass, as far as the authors know, all variants defined in the literature, and shows that, under the combined complexity measure, metaquerying is generally intractable (unless P=NP), although some interesting tractable cases can be singled out.
Abstract: Metaquerying is a datamining technology by which hidden dependencies among several database relations can be discovered. This tool has already been successfully applied to several real-world applications. Recent papers provide only very preliminary results about the complexity of metaquerying. In this paper we define several variants of metaquerying that encompass, as far as we know, all variants defined in the literature. We study both the combined complexity and the data complexity of these variants. We show that, under the combined complexity measure, metaquerying is generally intractable (unless P=NP), but we are able to single out some tractable interesting metaquerying cases (whose combined complexity is LOGCFL-complete). As for the data complexity of metaquerying, we prove that, in general, this is in P, but lies within AC0 in some interesting cases. Finally, we discuss the issue of equivalence between metaqueries, which is useful for optimization purposes.

Journal ArticleDOI
TL;DR: This issue's column, Part II of the article started in the preceding issue, is about progress on the question of whether NP has sparse hard sets with respect to weak reductions as discussed by the authors.
Abstract: This issue's column, Part II of the article started in the preceding issue, is about progress on the question of whether NP has sparse hard sets with respect to weak reductions.Upcoming Complexity Theory Column articles include A. Werschulz on information-based complexity; J. Castro, R. Gavalda, and D. Guijarro on what complexity theorists can learn from learning theory; S. Ravi Kumar and D. Sivakumar on a to-be-announced topic; M. Holzer and P. McKenzie on alternating stack machines; and R. Paturi on the complexity of k-SAT.

Journal Article
TL;DR: Let Q(n, A) = {S: S can be decided with n sequential queries to A}. It is shown that if A is semirecursive or recursively enumerable, but not recursive, then these classes form a non-collapsing hierarchy.

01 Jan 2000
TL;DR: A new sequence pair algorithm is introduced with complexity lower than O(M^1.25), a significant improvement over the original O(M^2) algorithm.
Abstract: This paper introduces a new sequence pair algorithm which has complexity lower than O(M^1.25), which is a significant improvement compared to the original O(M^2) algorithm [1]. Furthermore, the new algorithm has complexity close to the theoretical lower bound. Experimental results, obtained with a straightforward implementation, confirm this improvement in complexity.
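As background, the original O(M^2) evaluation of a sequence pair — the baseline the faster algorithm improves on — can be sketched as follows; the block names, widths, and heights in the example are invented for illustration.

```python
def evaluate_sequence_pair(gamma_plus, gamma_minus, width, height):
    """Classic O(M^2) evaluation of a sequence pair: block a is left of block b
    iff a precedes b in both sequences, and below b iff a precedes b only in
    gamma_minus.  Returns block coordinates and the bounding-box size."""
    pos_p = {blk: i for i, blk in enumerate(gamma_plus)}
    pos_m = {blk: i for i, blk in enumerate(gamma_minus)}
    x = {blk: 0 for blk in gamma_plus}
    y = {blk: 0 for blk in gamma_plus}
    for b in gamma_minus:                      # gamma_minus order is topological
        for a in gamma_minus:
            if pos_m[a] < pos_m[b]:
                if pos_p[a] < pos_p[b]:        # a is left of b
                    x[b] = max(x[b], x[a] + width[a])
                else:                          # a is below b
                    y[b] = max(y[b], y[a] + height[a])
    W = max(x[b] + width[b] for b in gamma_plus)
    H = max(y[b] + height[b] for b in gamma_plus)
    return x, y, W, H

# Tiny made-up example with three blocks.
w = {"a": 2, "b": 3, "c": 1}
h = {"a": 2, "b": 1, "c": 3}
print(evaluate_sequence_pair(["a", "b", "c"], ["a", "c", "b"], w, h))
```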

01 Jan 2000
TL;DR: This thesis investigates variable complexity algorithms and proposes two fast algorithms based on fast distance metric computation or fast matching approaches that allow computational scalability in distance computation with graceful degradation in the overall image quality.
Abstract: In this thesis we investigate variable complexity algorithms. The complexities of these algorithms are input-dependent, i.e., the type of input determines the complexity required to complete the operation. The key idea is to enable the algorithm to classify the inputs so that unnecessary operations can be pruned. The goal of the design of a variable complexity algorithm is to minimize the average complexity over all possible input types, including the cost of classifying the inputs. We study two of the fundamental operations in standard image/video compression, namely, the discrete cosine transform (DCT) and motion estimation (ME). We first explore variable complexity in the inverse DCT by testing for zero inputs. The test structure can also be optimized for minimal total complexity for given input statistics. In this case, the larger the number of zero coefficients, i.e., the coarser the quantization stepsize, the greater the complexity reduction. As a consequence, tradeoffs between complexity and distortion can be achieved. For the direct DCT we propose a variable complexity fast approximation algorithm. The variable complexity part computes only the DCT coefficients that will not be quantized to zero according to the classification results (in addition, the quantizer can benefit from this information by bypassing its operations for zero coefficients). The classification structure can also be optimized for given input statistics. On the other hand, the fast approximation part approximates the DCT coefficients with much less complexity. The complexity can be scaled, i.e., it allows more complexity reduction at lower quality coding, and can be made quantization-dependent to keep the distortion degradation at a certain level. In video coding, ME is the part of the encoder that requires the most complexity, and therefore achieving significant complexity reduction in ME has always been a goal in video coding research. We propose two fast algorithms based on fast distance metric computation or fast matching approaches. Both of our algorithms allow computational scalability in distance computation with graceful degradation in the overall image quality. The first algorithm exploits hypothesis testing in fast metric computation, whereas the second algorithm uses thresholds obtained from partial distances in hierarchical candidate elimination. (Abstract shortened by UMI.)
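The "thresholds obtained from partial distances" idea for motion estimation can be illustrated with a generic early-termination SAD computation (a simplified sketch, not the thesis's exact thresholding scheme; the 8x8 blocks, NumPy usage, and function name are assumptions made here).

```python
import numpy as np

def sad_with_early_termination(block, candidate, best_so_far):
    """Row-by-row sum of absolute differences with a partial-distance test:
    stop as soon as the accumulated sum already exceeds the best match found
    so far, since this candidate can no longer win."""
    total = 0
    for r in range(block.shape[0]):
        total += int(np.abs(block[r].astype(np.int32) -
                            candidate[r].astype(np.int32)).sum())
        if total >= best_so_far:          # prune: candidate cannot improve the best
            return None
    return total

# Minimal usage sketch with random 8x8 blocks (hypothetical data).
rng = np.random.default_rng(0)
blk = rng.integers(0, 256, (8, 8), dtype=np.uint8)
cand = rng.integers(0, 256, (8, 8), dtype=np.uint8)
print(sad_with_early_termination(blk, cand, best_so_far=5000))
```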

Book ChapterDOI
01 Feb 2000
TL;DR: In this article, the average-case quantum complexity of total Boolean functions is compared to the classical complexity of Boolean functions under uniform and non-uniform distributions, and it is shown that quantum algorithms can be exponentially faster than classical algorithms.
Abstract: We compare classical and quantum query complexities of total Boolean functions. It is known that for worst-case complexity, the gap between quantum and classical can be at most polynomial [3]. We show that for average-case complexity under the uniform distribution, quantum algorithms can be exponentially faster than classical algorithms. Under non-uniform distributions the gap can even be super-exponential. We also prove some general bounds for average-case complexity and show that the average-case quantum complexity of MAJORITY under the uniform distribution is nearly quadratically better than the classical complexity.

Proceedings ArticleDOI
08 Aug 2000
TL;DR: The binary algorithm invented by J. Stein is extended to two iterative division algorithms in the finite field GF(2^m): algorithm EBg exhibits faster convergence, while algorithm EBd has reduced complexity in each iteration.
Abstract: This paper extends the binary algorithm invented by J. Stein [1967] and proposes two iterative division algorithms in the finite field GF(2^m). Algorithm EBg exhibits faster convergence while algorithm EBd has reduced complexity in each iteration. A (semi-)systolic array is designed for algorithm EBd, resulting in an area-time complexity better than the best result known to date based on the extended Euclid algorithm.
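For readers unfamiliar with Stein's method, the integer binary GCD algorithm that the GF(2^m) division algorithms above extend is shown below (the textbook version, not the paper's EBg/EBd algorithms).

```python
def binary_gcd(a, b):
    """Stein's binary GCD (1967): computes gcd(a, b) for non-negative integers
    using only shifts, subtraction, and parity tests."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while ((a | b) & 1) == 0:       # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while (a & 1) == 0:             # make a odd
        a >>= 1
    while b:
        while (b & 1) == 0:         # strip factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a             # keep a <= b, both odd
        b -= a                      # b - a is even, so the loop above shifts it
    return a << shift

print(binary_gcd(48, 36))  # -> 12
```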

Journal ArticleDOI
TL;DR: It is found that the different problems are complete for different complexity classes, and finding bounded solutions of a Diophantine equation is shown to be intractable.
Abstract: We consider the computation of eigenvectors over the integers, where each component x_i satisfies a bound given by an integer b. We address various problems in this context and analyze their computational complexity. We find that the different problems are complete for different complexity classes. Applying the results, finding bounded solutions of a Diophantine equation is shown to be intractable.

Journal ArticleDOI
TL;DR: Cultural and dual-inheritance models of evolution present ambiguities, not typically present in biological evolution, in identifying the adaptive value of a trait or cultural practice; adding niche construction raises further challenges.
Abstract: Cultural and dual-inheritance models of evolution present ambiguities not typically present in biological evolution. Criteria and the ability to specify the adaptive value of a trait or cultural practice become less clear. When niche construction is added, additional challenges and ambiguities arise. Its dynamic nature increases the difficulty of identifying adaptations, tracing the causal path between a trait and its function, and identifying the links between environmental demands and the development of adaptations.

Posted Content
TL;DR: It is argued that the Kolmogorov complexity calculus can be more useful if it is refined to include the important practical case of simple binary strings, and it is shown that under such restrictions some error terms can disappear from the standard complexity calculus.
Abstract: Given a reference computer, Kolmogorov complexity is a well-defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a natural way, modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus. Keywords: Kolmogorov complexity; Algorithmic information theory.

Journal Article
TL;DR: This paper analyzes different implementations of an algorithm and predicts the relative performance differences among them by combining memory complexity analysis with an analysis of the data-movement/floating-point-operation ratio.
Abstract: Memory systems have become more and more complicated as a result of the many efforts to bridge the large speed gap between processors and main memory. It is now difficult to obtain high performance from a processor or a large parallel processing system without considering the specific features of the memory system. Time and space complexity alone are therefore no longer enough to explain why different implementations of the same algorithm exhibit such different performance on the same platform; the complexity of the memory system must be incorporated into the analysis of algorithms. In 1996, Sun Jiachang first presented the concept of memory complexity. In this view, the complexity of an algorithm consists of its computational complexity and its memory complexity: computational complexity (time and space complexity) is a basic characteristic of the algorithm, while memory complexity is a varying characteristic that changes with different implementations of the same algorithm and with different platforms. The purpose of algorithmic optimization is to reduce the memory complexity, whereas reducing the computational complexity requires new algorithmic research. In this paper, we analyze different implementations of an algorithm and predict the relative performance differences among them by combining memory complexity analysis with an analysis of the data-movement/floating-point-operation ratio. Extending the analysis to remote communication in parallel processing is left as future work.
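As a rough illustration of the point (not taken from the paper), the sketch below compares back-of-the-envelope data-movement estimates for a naive versus a cache-blocked matrix multiplication: the floating-point operation count is identical, but the words moved per flop — the "memory complexity" of the implementation — differs by roughly the blocking factor. The traffic formulas are standard rough estimates under simple caching assumptions, not measurements.

```python
def matmul_traffic_estimates(n, block):
    """Back-of-the-envelope estimates for n x n matrix multiply, illustrating
    memory complexity as a property of the implementation: both versions do
    the same 2*n^3 flops, but the blocked version moves far fewer words
    between main memory and cache (constants are rough)."""
    flops = 2 * n ** 3
    naive_words = n ** 3 + 2 * n ** 2          # B effectively streamed n times
    blocked_words = 2 * n ** 3 // block + n ** 2   # three b x b tiles reused in cache
    return {
        "flops": flops,
        "naive words moved": naive_words,
        "blocked words moved": blocked_words,
        "naive words/flop": naive_words / flops,
        "blocked words/flop": blocked_words / flops,
    }

print(matmul_traffic_estimates(n=1024, block=64))
```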