Topic

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime, 1749 publications have been published within this topic receiving 44972 citations.


Papers
Journal ArticleDOI
TL;DR: For pq^n-periodic binary sequences, where p and q are two odd primes such that 2 is a primitive root modulo p and modulo q^2 and gcd(p−1, q−1) = 2, the relationship between the linear complexity and the minimum value of k for which the k-error linear complexity is strictly less than the linear complexity is analyzed.
Abstract: The k-error linear complexity and the linear complexity of the keystream of a stream cipher are two important measures of the randomness of the key stream. For pq^n-periodic binary sequences, where p and q are two odd primes such that 2 is a primitive root modulo p and modulo q^2 and gcd(p−1, q−1) = 2, we analyze the relationship between the linear complexity and the minimum value of k for which the k-error linear complexity is strictly less than the linear complexity.
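For readers unfamiliar with the two measures, the sketch below (my own illustration, not code from the paper) computes the linear complexity of a periodic binary sequence with the Berlekamp-Massey algorithm and the k-error linear complexity by brute force over all error patterns of weight at most k; the brute-force search is only feasible for short periods and small k.

```python
from itertools import combinations

def berlekamp_massey(bits):
    """Linear complexity of a finite binary sequence over GF(2)."""
    n = len(bits)
    c, b = [0] * n, [0] * n           # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                      # current LFSR length, position of last length change
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]  # c(x) += x^shift * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def linear_complexity(period):
    """Linear complexity of the sequence obtained by repeating `period`."""
    # Two periods suffice, since the linear complexity is at most len(period).
    return berlekamp_massey(period * 2)

def k_error_linear_complexity(period, k):
    """Minimum linear complexity after changing at most k bits within one period."""
    best = linear_complexity(period)
    N = len(period)
    for w in range(1, k + 1):
        for positions in combinations(range(N), w):
            flipped = period[:]
            for p in positions:
                flipped[p] ^= 1
            best = min(best, linear_complexity(flipped))
    return best

if __name__ == "__main__":
    s = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1]   # one period of a toy sequence
    print(linear_complexity(s), k_error_linear_complexity(s, 2))
```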
Journal ArticleDOI
TL;DR: The value of help bits in the settings of randomized and average-case complexity is studied, and it is shown that in the case where k is super-logarithmic, assuming k-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a "relativizing" proof technique.
Abstract: "Help bits" are some limited trusted information about an instance or instances of a computational problem that may reduce the computational complexity of solving that instance or instances. In this paper, we study the value of help bits in the settings of randomized and average-case complexity. If k instances of a decision problem can be efficiently solved using $${\ell < k}$$l Amir, Beigel, and Gasarch (1990) show that for constant k, all k-membership comparable languages are in P/poly. We extend this result to the setting of randomized computation: We show that for k at most logarithmic, the decision problem is k-membership comparable if using $${\ell}$$l help bits, k instances of the problem can be efficiently solved with probability greater than $${2^{\ell-k}}$$2l-k. The same conclusion holds if using less than $${k(1 - h(\alpha))}$$k(1-h(ź)) help bits (where $${h(\cdot)}$$h(·) is the binary entropy function), we can efficiently solve $${1-\alpha}$$1-ź fraction of the instances correctly with non-vanishing probability. We note that when k is constant, k-membership comparability implies being in P/poly. Next we consider the setting of average-case complexity: Assume that we can solve k instances of a decision problem using some help bits whose entropy is less than k when the k instances are drawn independently from a particular distribution. Then we can efficiently solve an instance drawn from that distribution with probability better than 1/2. Finally, we show that in the case where k is super-logarithmic, assuming k-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a "relativizing" proof technique. All previous known proofs in this area have been relativizing.
Proceedings ArticleDOI
07 Apr 2000
TL;DR: The background and justification for a new approach to studying computation and computational complexity are presented, focusing on categories of problems and categories of solutions, which provide the logical definition on which to base an algorithm.
Abstract: We present the background and justification for a new approach to studying computation and computational complexity. We focus on categories of problems and categories of solutions which provide the logical definition on which to base an algorithm. Computational capability is introduced via a formalization of computation termed a model of computation. The concept of algorithm is formalized using the methods of Traub, Wasilkowski and Wozniakowski, from which we can formalize the differences between deterministic, non-deterministic, and heuristic algorithms. Finally, we introduce our measure of complexity: the Hartley entropy measure. We provide many examples to amplify the concepts introduced.
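The Hartley (max-entropy) measure referred to above is the base-2 logarithm of the number of admissible possibilities. A minimal sketch, assuming only that standard definition and a toy solution category of my own choosing (not the authors' formalization):

```python
from math import log2

def hartley_entropy(num_possibilities: int) -> float:
    """Hartley measure H0 = log2(N) for a set of N equally admissible outcomes."""
    if num_possibilities < 1:
        raise ValueError("the set of possibilities must be non-empty")
    return log2(num_possibilities)

# Toy example: a search problem whose solution category contains every
# assignment of n binary decision variables.
n = 20
print(hartley_entropy(2 ** n))   # 20.0 bits of initial uncertainty
```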
Proceedings Article
07 May 2012
TL;DR: A method is presented for constructing a table of indices in which the multiplication operation is performed on short residues, which can reduce the time and computational complexity.
Abstract: In the present study, a method is presented for constructing a table of indices in which the multiplication operation is performed on short residues, which can reduce the time and computational complexity.
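The abstract gives no implementation details, but the classical way to multiply via a table of indices is to replace multiplication modulo a prime p by addition of discrete-logarithm indices modulo p − 1. A minimal sketch of that standard construction, with hypothetical function names and an example prime of my own choosing (not necessarily the authors' method):

```python
def build_index_tables(p: int, g: int):
    """Index (discrete log) and antilog tables for Z_p with primitive root g."""
    index = {}          # a -> ind_g(a)
    antilog = {}        # e -> g^e mod p
    value = 1
    for e in range(p - 1):
        antilog[e] = value
        index[value] = e
        value = (value * g) % p
    return index, antilog

def mul_via_indices(a: int, b: int, p: int, index, antilog) -> int:
    """Compute a*b mod p by adding the (short) indices instead of multiplying."""
    if a % p == 0 or b % p == 0:
        return 0
    e = (index[a % p] + index[b % p]) % (p - 1)
    return antilog[e]

# Example: 3 is a primitive root modulo the prime 257.
p, g = 257, 3
index, antilog = build_index_tables(p, g)
assert mul_via_indices(123, 211, p, index, antilog) == (123 * 211) % p
```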
Journal Article
TL;DR: The SAT problem is solved with the same DNA computing strategy using alternative biotechnical operations, and some new results are reported on the universality and space complexity of this DNA computing algorithm.
Abstract: Adleman and Lipton adopted a brute-force search strategy to solve NP-complete problems by DNA computing: a DNA data pool containing the full solution space must first be constructed in the initial test tube (t0), and then correct answers are extracted and/or false ones are eliminated from the data pool step by step. Thus, the number of distinct DNA strands contained in the initial test tube (t0) grows exponentially with the size of the problem. The number of DNA strands required for large problems eventually swamps the DNA data storage, which makes molecular computation impractical from the outset. Lipton's brute-force search DNA algorithm is limited to about 60 to 70 variables, and it is therefore believed that DNA computers using a brute-force search algorithm cannot exceed the performance of electronic computers. Since then, studies on DNA computing have focused on reducing the size of the data pool. A few new algorithms, such as the breadth-first search algorithm, genetic algorithms, and the random walking algorithm, have been proposed and tested. With the breadth-first search algorithm, the capacity of a DNA computer can theoretically be increased to about 120 variables, but even so, DNA computers are still not capable of competing with electronic computers. Previously, we solved the SAT problem using a DNA computing algorithm based on the ligase chain reaction. In the present study, we solve the SAT problem with the same DNA computing strategy using alternative biotechnical operations. Here we report some new results on the universality and space complexity of this DNA computing algorithm.

Keywords: DNA computing, NP-complete, space complexity, time complexity

I. DNA COMPUTING ALGORITHM

Without becoming too specific, we can assume that none of the clauses of F has both the positive form and the negative form of the same variable, and that F does not have two or more clauses consisting of the same three literals. The program for solving a 3-SAT problem with n variables and m clauses is shown in Program 1. In the computing process, tj contains all of the sequences that satisfy clauses C1 to Cj. Strings that do not satisfy C1 to Cj cannot be produced, because the corresponding variable DNA is absent in tjk, or cannot be amplified by PCR, because they are broken by a restriction enzyme in tjk. After m steps of such operations guided by the SAT formula, all correct strings that satisfy all of the clauses will be generated. The computation time is O(9m + 3n), because the Split, U-ligate, Cut, Amplify, Merge and Detect commands are executed at most m, 3n, 3m, 3m, m and m times in the program, respectively. Therefore, the NP-complete problem can be solved in an amount of time that is proportional to the size of the problem.

II. IMPLEMENTATION OF THE ALGORITHM

Biotechnological implementation of the DNA algorithm is shown in Fig. (1).
The commands are described in detail below:

(1) PCR amplification of x0 v-xi v was performed in a total volume of 50 μL, using 100 nmol/L of each primer P0 and Pi, 10 ng of ligation product x0 v-xi v, 200 mmol/L of each of the 4 dNTPs, and 2.5 U Taq DNA polymerase in 1X PCR buffer supplemented with MgCl2 at a final concentration of 1.5 mM (all from Promega). Amplification was carried out on a Biometre T1 thermal controller as follows: predenaturing at 94°C for 1 min, followed by 20 cycles of denaturing at 94°C for 20 s, annealing at 62°C for 20 s, and extension at 72°C for 20 s, and a final extension at 72°C for 1 min.

(2) U-ligation of variables xj v to x0 v-xi v was performed in a volume of 20 μL containing 100 ng PCR product x0 v-xi v, 1X PCR buffer and 2 U USER enzyme (NEB). This mixture was incubated at 37°C for 30 min to cut the uracil base. Next, 1 μmol xj v, 1X Taq DNA ligase buffer and 80 U Taq DNA ligase (NEB) were added, and the mixture was heated to 95°C for 5 min, gradually cooled to 55°C, and incubated at 55°C for 30 min to ligate xj v and x0 v-xi v.

(3) Restriction cutting of x0 v-xi v... was performed in a volume of 20 μL containing 100 ng PCR product x0 v-xi v..., 1X restriction buffer and 20 U restriction enzyme (NEB) selected according to Table 1, and this mixture was incubated at the temperature recommended by the manufacturer for 60 min to cut strings containing xi v.

/* Program 1: Solve 3-SAT on a DNA computer */
Function DNA3SAT (F, xi, m, n)
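The logical core of the protocol can be simulated without a wet lab: tj corresponds to the set of assignments satisfying clauses C1 to Cj, and each round of operations discards the assignments violated by the next clause. Below is a small in-silico sketch of that clause-by-clause filtering (an illustration of the algorithm's logic only, using a toy formula of my own; it does not model the biochemical steps or the O(9m + 3n) operation counts):

```python
from itertools import product

# A 3-SAT formula as a list of clauses; each literal is (variable_index, is_positive).
# Toy instance: (x0 or not x1 or x2) and (not x0 or x1 or not x2)
formula = [
    [(0, True), (1, False), (2, True)],
    [(0, False), (1, True), (2, False)],
]
n = 3  # number of variables

def satisfies(assignment, clause):
    """True if the assignment (tuple of bools) satisfies the clause."""
    return any(assignment[var] == positive for var, positive in clause)

# t0: the initial data pool holding every possible assignment,
# mirroring the full solution space synthesized in the first test tube.
pool = set(product([False, True], repeat=n))

# One filtering round per clause: after round j, `pool` plays the role of tj,
# the tube holding exactly the strands that satisfy clauses C1..Cj.
for clause in formula:
    pool = {a for a in pool if satisfies(a, clause)}

# Detect: the formula is satisfiable iff the final tube is non-empty.
print("satisfiable:", bool(pool))
print("models:", sorted(pool))
```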

Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:
2022: 2
2021: 6
2020: 10
2019: 9
2018: 10
2017: 32