Topic
Average-case complexity
About: Average-case complexity is a research topic. Over its lifetime, 1749 publications have been published within this topic, receiving 44972 citations.
Papers
TL;DR: The proposed algorithm is a low-complexity version of the FJZD algorithm with computational complexity $O(KN^2)$, where K and N are the number and dimension of the target matrices, respectively.
Abstract: In this letter, we introduce a fast and computationally efficient iterative algorithm for joint zero diagonalization of a set of complex-valued target matrices. The proposed algorithm is a low-complexity version of the FJZD algorithm; it has a computational complexity of $O(KN^2)$, where K and N are the number and dimension of the target matrices, respectively. Moreover, the proposed algorithm is superior to FJZD in terms of interference-to-signal ratio. Simulation results demonstrate the good performance of the proposed algorithm.
2 citations
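As an illustration of the objective behind joint zero diagonalization (a minimal sketch of the criterion, not the paper's FJZD algorithm): a joint zero diagonalizer V drives the diagonal of every transformed target matrix V A_k V^H toward zero, so a natural cost is the summed squared magnitude of those diagonals. The helper name `jzd_cost` is hypothetical.

```python
import numpy as np

def jzd_cost(V, matrices):
    """Off-zero-diagonality cost: sum over all target matrices A_k of
    the squared magnitudes of the diagonal entries of V @ A_k @ V^H."""
    total = 0.0
    for A in matrices:
        T = V @ A @ V.conj().T
        total += float(np.sum(np.abs(np.diag(T)) ** 2))
    return total

# A zero-diagonal target is already jointly zero-diagonalized by V = I.
A = np.array([[0, 1 + 1j], [2j, 0]])
print(jzd_cost(np.eye(2), [A]))  # → 0.0
```

An iterative algorithm such as FJZD would update V to decrease this cost; the sketch only evaluates the criterion.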
••
TL;DR: It is shown that the problem remains NP-hard even for binary matrices using only 1×1 and 2×2 squares as tiles, and insight is provided into how naturally occurring parameters influence the problem's complexity.
2 citations
01 Jan 2012
TL;DR: If the best-first search strategy breaks ties among nodes with equal lower bound by selecting a node of greater depth, then all nodes in the subtree of u whose lower bounds do not exceed the global minimum are visited before node v; consequently, node v is never visited.
Abstract: Proof. Assume there are two nodes u and v at depth d for which the calculated lower bound is optimal, and that u is visited first among all nodes at depth d with optimal lower bound. This implies that the lower bound of v is not smaller than the lower bound of u. A best-first search branch-and-bound algorithm only visits nodes whose lower bounds do not exceed the global minimum. It follows that the lower bound of u must equal the global minimum, and at least one leaf node in the subtree of u attains this minimum value. If the best-first search strategy breaks ties among nodes with equal lower bound by selecting a node of greater depth, all nodes in the subtree of u with lower bound values smaller than or equal to the global minimum are visited before node v, and therefore the leaf node with the global minimum value is also visited before node v. It follows that node v will not be visited, since a leaf node with the global minimum value has already been found. ∎
2 citations
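The tie-breaking rule in the proof can be sketched with a priority queue: pushing entries keyed by (lower bound, −depth) makes a deeper node pop before a shallower node with the same bound. This is an illustrative toy over precomputed bounds, not code from the text; `best_first_order` is a hypothetical helper.

```python
import heapq

def best_first_order(nodes):
    """nodes: list of (lower_bound, depth, name) tuples.
    Returns the best-first visit order, breaking ties on the
    lower bound in favor of the deeper node (via -depth)."""
    heap = [(bound, -depth, name) for bound, depth, name in nodes]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Two nodes share bound 5: the deeper one ('u', depth 3) is visited
# before the shallower one ('v', depth 1), matching the proof's rule.
print(best_first_order([(5, 1, 'v'), (5, 3, 'u'), (4, 2, 'w')]))
# → ['w', 'u', 'v']
```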
TL;DR: It is proved that a binary sequence with period $2^n$ can be decomposed into disjoint cubes, and that the maximum $k$-error linear complexity over all $2^n$-periodic binary sequences is $2^n-(2^l-1)$, where $2^{l-1}\le k<2^{l}$.
Abstract: The linear complexity of a sequence has been used as an important measure of keystream strength; hence designing a sequence that possesses high linear complexity and $k$-error linear complexity is a hot topic in cryptography and communication. Niederreiter first observed that many periodic sequences over GF($q$) possess high $k$-error linear complexity. In this paper, the concept of stable $k$-error linear complexity is presented to study sequences with high $k$-error linear complexity. By studying the linear complexity of binary sequences with period $2^n$, a method using cube theory to construct sequences with maximum stable $k$-error linear complexity is presented. It is proved that a binary sequence with period $2^n$ can be decomposed into some disjoint cubes. Cube theory is a new tool for studying $k$-error linear complexity. Finally, it is proved that the maximum $k$-error linear complexity is $2^n-(2^l-1)$ over all $2^n$-periodic binary sequences, where $2^{l-1}\le k<2^{l}$.
2 citations
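The linear complexity of a $2^n$-periodic binary sequence, the quantity the abstract's bounds refer to, can be computed from one period with the standard Games–Chan algorithm. A minimal sketch (the helper name is hypothetical, and this computes plain linear complexity, not the $k$-error variant):

```python
def games_chan(s):
    """Linear complexity of a 2^n-periodic binary sequence,
    given one period s (a list of 0/1 bits of length 2^n),
    via the Games-Chan algorithm."""
    n = len(s)
    c = 0
    while n > 1:
        half = n // 2
        left, right = s[:half], s[half:]
        if left == right:
            s = left                          # period effectively halves
        else:
            c += half                         # complexity gains 2^(n-1)
            s = [a ^ b for a, b in zip(left, right)]
        n = half
    return c + s[0]

# A period with a single 1 attains the maximum complexity 2^n;
# the all-ones sequence has complexity 1.
print(games_chan([1, 0, 0, 0]))  # → 4
print(games_chan([1, 1, 1, 1]))  # → 1
```

The $k$-error linear complexity is the minimum of this value over all sequences obtained by changing at most $k$ bits per period.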