
Showing papers by "Center for Discrete Mathematics and Theoretical Computer Science published in 2016"


Journal ArticleDOI
TL;DR: This paper designs an effective hybrid memetic algorithm (HMA) for the minimum weight-dominating set problem, which contains a greedy randomized adaptive construction procedure, a tabu local search procedure, a crossover operator, a population-updating method, and a path-relinking procedure.
Abstract: The minimum weight-dominating set (MWDS) problem is NP-hard and has many real-world applications. Several metaheuristic methods have been developed for solving the problem effectively, but they suffer from high CPU times on large-scale instances. In this paper, we design an effective hybrid memetic algorithm (HMA) for the MWDS problem. First, the MWDS problem is formulated as a constrained 0–1 programming problem and is converted to an equivalent unconstrained 0–1 problem using an adaptive penalty function. Then, we develop a memetic algorithm for the resulting problem, which contains a greedy randomized adaptive construction procedure, a tabu local search procedure, a crossover operator, a population-updating method, and a path-relinking procedure. These strategies make a good tradeoff between intensification and diversification. A number of experiments were carried out on three types of instances from the literature. Compared with existing algorithms, HMA is able to find high-quality solutions in much less CPU time. Specifically, HMA is at least six times faster than existing algorithms on the tested instances. With increasing instance size, the CPU time required by HMA increases much more slowly than that required by existing algorithms.
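The greedy randomized adaptive construction stage of such a memetic algorithm can be sketched as follows. This is a generic GRASP-style construction for weighted dominating sets; the scoring rule (newly dominated vertices per unit weight) and the parameter names are illustrative assumptions, not the paper's exact procedure.

```python
import random

def grasp_construct(adj, weights, alpha=0.3, rng=None):
    """Greedy randomized construction of a weight-aware dominating set.

    adj: dict mapping each vertex to the set of its neighbours.
    weights: dict of positive vertex weights.
    alpha: greediness parameter (0 = pure greedy, 1 = pure random).
    """
    rng = rng or random.Random(0)
    undominated = set(adj)
    solution = set()
    while undominated:
        # Score = newly dominated vertices per unit weight.
        scores = {v: len((adj[v] | {v}) & undominated) / weights[v]
                  for v in adj if v not in solution}
        best, worst = max(scores.values()), min(scores.values())
        threshold = best - alpha * (best - worst)
        # Restricted candidate list: near-greedy choices only.
        rcl = [v for v, s in scores.items() if s >= threshold]
        pick = rng.choice(rcl)
        solution.add(pick)
        undominated -= adj[pick] | {pick}
    return solution
```

In a full memetic algorithm, each call produces one diversified starting solution that the tabu local search then improves.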

42 citations


Journal ArticleDOI
TL;DR: This paper proposes a fused lasso model to identify significant features in the spectroscopic signals obtained from a semiconductor manufacturing process, and to construct a reliable virtual metrology (VM) model that yields more accurate and robust predictions than the lasso- and elastic net-based VM models.
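The fused lasso objective referenced above combines a squared-error fit with a sparsity penalty and a fusion penalty on adjacent coefficients, which suits spectroscopic signals where neighbouring wavelengths should share coefficients. A minimal sketch of the cost (array names are illustrative, not the paper's notation):

```python
import numpy as np

def fused_lasso_objective(beta, X, y, lam1, lam2):
    """Fused lasso cost: squared error + sparsity + smoothness.

    lam1 weights the lasso penalty |beta|_1 (feature selection);
    lam2 weights the fusion penalty sum |beta_j - beta_{j-1}|,
    which encourages adjacent spectral features to fuse.
    """
    residual = y - X @ beta
    return (residual @ residual
            + lam1 * np.abs(beta).sum()
            + lam2 * np.abs(np.diff(beta)).sum())
```

Minimizing this over beta (e.g. with a proximal solver) yields the piecewise-constant, sparse coefficient profiles that make the selected features interpretable.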

32 citations


Journal ArticleDOI
TL;DR: This paper proposes two fast complex-valued optimization algorithms for solving complex quadratic programming problems: 1) with linear equality constraints and 2) with both an l1-norm constraint and linear equality constraints.
Abstract: In this paper, we propose two fast complex-valued optimization algorithms for solving complex quadratic programming problems: 1) with linear equality constraints and 2) with both an l1-norm constraint and linear equality constraints. By using Brandwood’s analytic theory, we prove the convergence of the two proposed algorithms under mild assumptions. The two proposed algorithms significantly generalize the existing complex-valued optimization algorithms for solving complex quadratic programming problems with an l1-norm constraint only and unconstrained complex quadratic programming problems, respectively. Numerical simulations are presented to show that the two proposed algorithms run faster than conventional real-valued optimization algorithms.
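For the first problem class, the equality-constrained complex quadratic program has a textbook closed-form KKT solution when the Hessian is Hermitian positive definite. The sketch below only illustrates that problem class, not the paper's iterative algorithms, and the function name is my own:

```python
import numpy as np

def complex_eq_qp(Q, b, A, c):
    """Solve min_x 0.5 x^H Q x + Re(b^H x)  s.t.  A x = c,
    with Q Hermitian positive definite, via the complex KKT system
    [Q  A^H; A  0] [x; lam] = [-b; c].
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.conj().T],
                  [A, np.zeros((m, m), dtype=complex)]])
    rhs = np.concatenate([-b, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers
```

Adding the l1-norm constraint of problem 2) destroys this closed form, which is why iterative complex-valued algorithms such as the paper's are needed there.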

22 citations


Journal ArticleDOI
TL;DR: This work proves a lower bound of the form p^{Ω(√d)} on the length of linear 2-query LCCs over F_p that encode messages of length d, improving over the known bound of 2^{Ω(√d)} [8,10,6], which is tight for LDCs.
Abstract: A Locally Correctable Code (LCC) is an error-correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a δ fraction of the coordinates is corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes), which have received a lot of attention recently due to their many applications and surprising constructions. In this work, we show a separation between linear 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form p^{Ω(√d)} on the length of linear 2-query LCCs over F_p that encode messages of length d. Our bound improves over the known bound of 2^{Ω(√d)} [8,10,6], which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in theoretical computer science. We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields [14] and the second is a new analog of Beck's theorem over finite fields. The paper also contains an appendix, written by Sergey Yekhanin, showing that there do exist nonlinear LCCs of size 2^{O(d)} over F_p, thus highlighting the importance of the linearity assumption for our result.
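The canonical example of a linear 2-query locally decodable code is the Hadamard code, which may help make the definitions above concrete. A minimal sketch, assuming the message length d is small enough that the full 2^d-bit codeword fits in a dict:

```python
import random
from itertools import product

def hadamard_encode(m):
    """Hadamard code: one bit <m, x> mod 2 for every x in F_2^d
    (codeword length 2^d for a length-d message)."""
    return {x: sum(a & b for a, b in zip(m, x)) % 2
            for x in product((0, 1), repeat=len(m))}

def decode_bit(codeword, d, i, rng):
    """2-query local decoding of message bit i via a random pair
    x and x + e_i, since <m, x> + <m, x + e_i> = <m, e_i> = m_i
    over F_2."""
    x = tuple(rng.randrange(2) for _ in range(d))
    y = tuple(b ^ (1 if j == i else 0) for j, b in enumerate(x))
    return codeword[x] ^ codeword[y]
```

With a δ fraction of corrupted positions, each random query pair is wrong with probability at most 2δ, which is the local-decoding guarantee; the paper's separation shows linear 2-query *correction* of codeword coordinates is strictly harder over F_p.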

15 citations


Journal ArticleDOI
TL;DR: This paper proposes a density-based geodesic distance, calculated on a neighborhood graph to handle nonlinearity, that can identify clusters in nonlinear and noisy situations.
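A plain (non-density-weighted) neighborhood-graph geodesic distance can be sketched as follows; the density rescaling of edge weights that the paper adds is omitted here, and the parameter names are illustrative:

```python
import numpy as np
from heapq import heappush, heappop

def knn_geodesic(points, k=3):
    """Approximate geodesic distances: Euclidean edges on a symmetric
    kNN graph, then all-pairs shortest paths via Dijkstra. A density-based
    variant would rescale each edge by a local-density estimate."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:  # skip self at index 0
            adj[i].append((j, d[i, j]))
            adj[j].append((i, d[i, j]))
    geo = np.full((n, n), np.inf)
    for s in range(n):  # Dijkstra from every source
        geo[s, s] = 0.0
        heap = [(0.0, s)]
        while heap:
            dist, u = heappop(heap)
            if dist > geo[s, u]:
                continue
            for v, w in adj[u]:
                if dist + w < geo[s, v]:
                    geo[s, v] = dist + w
                    heappush(heap, (dist + w, v))
    return geo
```

Because distances accumulate along the graph rather than cutting across ambient space, points on the same nonlinear manifold end up closer than points on different clusters.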

12 citations


Journal ArticleDOI
TL;DR: Simulation results illustrate that the proposed method for Poissonian hyperspectral image superresolution outperforms several well-known methods in terms of both quality indexes and visual reconstruction quality.
Abstract: The reconstruction of Poissonian images has been an active research area in recent years. This paper proposes a novel method for Poissonian hyperspectral image superresolution by fusing a low-spatial-resolution hyperspectral image and a low-spectral-resolution multispectral image. The fusion scheme is designed as an optimization problem whose cost function consists of two data-fidelity terms derived from the Poisson distribution, a sparse representation term, and a nonlocal regularization term. The two data-fidelity terms capture the statistical characteristics of Poisson noise. The sparse representation term enhances the quality of sparsity-based signal reconstruction, and the nonlocal regularization term exploits the spatial similarity of the hyperspectral image. As a result, the hyperspectral image and multispectral image are well fused. Finally, the designed optimization problem is effectively solved by an alternating direction optimization algorithm. Simulation results illustrate that the proposed method outperforms several well-known methods in terms of both quality indexes and visual reconstruction quality.
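A Poisson data-fidelity term of the kind described above is a negative log-likelihood rather than a squared error; a minimal sketch of one such term (array names are illustrative, not the paper's notation):

```python
import numpy as np

def poisson_data_fidelity(x, y, eps=1e-12):
    """Poisson negative log-likelihood (up to constants) of observed
    counts y given nonnegative intensities x: sum(x - y * log(x)).

    Unlike a squared-error term, this matches the signal-dependent
    variance of Poisson noise in photon-limited imaging data."""
    x = np.maximum(x, eps)  # guard the log against zero intensities
    return float(np.sum(x - y * np.log(x)))
```

The term is convex in x and minimized at x = y, so it slots directly into an alternating-direction scheme alongside the sparsity and nonlocal regularizers.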

10 citations


Journal ArticleDOI
TL;DR: This paper proposes a discrete dynamic convexized method for the NP-hard winner determination problem, using an adaptive penalty function to convert the WDP into an equivalent unconstrained integer programming problem.
Abstract: The winner determination problem (WDP) arises in combinatorial auctions and is known to be NP-hard. In this paper, we propose a discrete dynamic convexized method for solving this problem. We first propose an adaptive penalty function to convert the WDP into an equivalent unconstrained integer programming problem. Based on the structure of the WDP, we construct an unconstrained auxiliary function, which is maximized iteratively using a local search and is updated whenever a better maximizer is found. By increasing the value of a parameter in the auxiliary function, the maximization of the auxiliary function can escape from previously converged local maximizers. To evaluate the performance of the dynamic convexized method, extensive experiments were carried out on realistic test sets from the literature. Computational results and comparisons show that the proposed algorithm improved the best known solutions on a number of benchmark instances.
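A penalized objective of the kind described above can be sketched as follows; the fixed penalty weight and the helper name are illustrative simplifications (the paper's penalty is adaptive):

```python
def penalized_wdp_value(x, values, bid_items, penalty):
    """Unconstrained surrogate objective for the winner determination
    problem: revenue of accepted bids minus a penalty for every item
    allocated more than once.

    x: 0/1 acceptance vector over bids; values: bid prices;
    bid_items: list of item sets, one per bid. A large enough penalty
    makes every infeasible selection worse than some feasible one."""
    revenue = sum(v for v, xi in zip(values, x) if xi)
    counts = {}
    for items, xi in zip(bid_items, x):
        if xi:
            for it in items:
                counts[it] = counts.get(it, 0) + 1
    violation = sum(c - 1 for c in counts.values() if c > 1)
    return revenue - penalty * violation
```

Maximizing this surrogate with local search then plays the role of the inner step of the convexized method, with the auxiliary-function parameter providing the escape mechanism from local maximizers.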

8 citations


Proceedings ArticleDOI
01 Jan 2016
TL;DR: Assuming the Exponential Time Hypothesis, this paper shows that for some constant c_0 > 0 there is no poly(n)-time algorithm for Gap-k-VectorSum when k = ω((log log n)^{c_0}).
Abstract: This work investigates the hardness of computing sparse solutions to systems of linear equations over F_2. Consider the k-EvenSet problem: given a homogeneous system of linear equations over F_2 on n variables, decide if there exists a nonzero solution of Hamming weight at most k (i.e., a k-sparse solution). While there is a simple O(n^{k/2})-time algorithm for it, establishing fixed-parameter intractability of k-EvenSet has been a notorious open problem. Towards this goal, we show that unless k-Clique can be solved in n^{o(k)} time, k-EvenSet has no polynomial-time algorithm when k = ω(log^2 n). Our work also shows that the non-homogeneous generalization of the problem, which we call k-VectorSum, is W[1]-hard on instances where the number of equations is O(k log n), improving on previous reductions which produced Ω(n) equations. We use the hardness of k-VectorSum as a starting point to prove the result for k-EvenSet, and additionally strengthen the former to show the hardness of approximately learning k-juntas. In particular, we prove that given a system of 2^{O(k)} log n linear equations, it is W[1]-hard to decide if there is a k-sparse linear form satisfying all the equations or whether every function on at most k variables (a k-junta) satisfies at most a (1/2 + ε)-fraction of the equations, for any constant ε > 0. In the setting of computational learning, this shows hardness of approximate non-proper learning of k-parities.
In a similar vein, we use the hardness of k-EvenSet to show that for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m,n)·2^{o(√k)}-time algorithm to decide whether a given set of m points in F_2^n satisfies: (i) there exists a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the points, or (ii) any non-trivial degree-d polynomial P supported on at most k variables evaluates to zero on approximately a Pr_{z ∈ F_2^n}[P(z) = 0] fraction of the points, i.e., P is fooled by the set of points. Lastly, we study approximation in the sparsity of the solution. Let the Gap-k-VectorSum problem be: given an instance of k-VectorSum of size n, decide if there exists a k-sparse solution, or every solution has sparsity at least k' = (1 + δ_0)k. Assuming the Exponential Time Hypothesis, we show that for some constants c_0, δ_0 > 0 there is no poly(n)-time algorithm for Gap-k-VectorSum when k = ω((log log n)^{c_0}).
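The simple O(n^{k/2})-time algorithm mentioned in the abstract is a meet-in-the-middle enumeration. A rough sketch, with the function name and the bitmask representation of the columns being my own choices:

```python
from itertools import combinations

def sparse_solution_exists(columns, k):
    """Meet-in-the-middle test for a nonzero x of Hamming weight <= k
    solving A x = 0 over F_2; `columns` lists A's columns as int
    bitmasks. Enumerates subsets of size about k/2 on each side, so
    the cost is roughly O(n^{k/2}) subset XORs."""
    n = len(columns)
    half = k // 2

    def xors(max_size):
        for r in range(max_size + 1):
            for subset in combinations(range(n), r):
                acc = 0
                for i in subset:
                    acc ^= columns[i]
                yield frozenset(subset), acc

    left = {}
    for subset, acc in xors(half):
        left.setdefault(acc, subset)
    for subset, acc in xors(k - half):
        other = left.get(acc)
        # Two different index sets with equal column-XORs give the
        # solution x = indicator of their symmetric difference,
        # which is nonzero and has weight <= k.
        if other is not None and other != subset:
            return True
    return False
```

Any weight-≤k solution splits into two halves with equal column-XORs, so one of its halves collides with some stored left-side subset; the symmetric-difference argument in the comment keeps the check sound even when the stored subset overlaps the right-side one.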

4 citations


Journal ArticleDOI
TL;DR: This paper proposes a Steiner point candidate-based heuristic algorithm framework (SPCF) for solving the Steiner tree problem in graphs and shows that the proposed algorithms achieve better solution quality and speed.
Abstract: The underlying models of many practical problems in various engineering fields are equivalent to the Steiner tree problem in graphs, which is a typical NP-hard combinatorial optimization problem. Thus, developing a fast and effective heuristic for the Steiner tree problem in graphs is of universal significance. By analyzing the advantages and disadvantages of the fast classic heuristics, we find that shortest paths and Steiner points play important roles in solving the Steiner tree problem in graphs. Based on these analyses, we propose a Steiner point candidate-based heuristic algorithm framework (SPCF) for solving the Steiner tree problem in graphs. SPCF consists of four stages: marking SPCI points, constructing the Steiner tree, eliminating detour paths, and an SPCII-based refining stage. For each stage of SPCF, we present several alternative strategies that trade off the effectiveness and efficiency of the algorithm. By finding the shortest path clusters between vertex sets, sever...
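The role that shortest paths play in the fast classic heuristics is visible in the well-known distance-network heuristic, sketched here for context. This is the classic 2-approximation, not the paper's SPCF:

```python
def steiner_2approx_weight(n, edges, terminals):
    """Distance-network heuristic for the Steiner tree problem:
    take the metric closure (all-pairs shortest paths, here via
    Floyd-Warshall), then build an MST over the terminals. The
    resulting weight is within a factor 2 of the optimum."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    # Prim's MST on the terminal-induced metric.
    rest = set(terminals)
    start = rest.pop()
    cost = {t: d[start][t] for t in rest}
    total = 0.0
    while rest:
        t = min(rest, key=cost.get)
        rest.remove(t)
        total += cost[t]
        for s in rest:
            cost[s] = min(cost[s], d[t][s])
    return total
```

Expanding each MST edge back into its shortest path and pruning duplicate edges can only lower the weight further; frameworks like SPCF aim to beat this baseline by choosing Steiner point candidates more carefully.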

2 citations


Journal ArticleDOI
TL;DR: The isomorphism problem of Cayley graphs has been well studied in the literature, including characterizations of CI (DCI)-graphs and CI (DCI)-groups.
Abstract: The isomorphism problem of Cayley graphs has been well studied in the literature, including characterizations of CI (DCI)-graphs and CI (DCI)-groups. In this paper, we generalize these to vertex-transitive graphs and establish parallel results. Some interesting vertex-transitive graphs are given, including a first example of a connected symmetric non-Cayley non-GI-graph. Also, we initiate the study of GI- and DGI-groups, defined analogously to the concepts of CI- and DCI-groups.

1 citation


Journal ArticleDOI
TL;DR: This paper shows that, in terms of both convergence theory and numerical computational cost, the robust constant is a valuable tool for analyzing random global search methods for unconstrained global optimization.
Abstract: Robust analysis is important for designing and analyzing algorithms for global optimization. In this paper, we introduce a new concept, the robust constant, to quantitatively characterize the robustness of measurable sets and functions. The new concept is consistent with the theoretical notion of robustness presented in the literature. This paper shows that, in terms of both convergence theory and numerical computational cost, the robust constant is a significantly valuable tool for analyzing random global search methods for unconstrained global optimization.
