
Showing papers by "Center for Discrete Mathematics and Theoretical Computer Science published in 2017"


Proceedings ArticleDOI
18 Jun 2017
TL;DR: This paper presents a fast and near-optimal algorithm to solve the mixed-cell-height legalization problem, and provides new generic solutions and research directions for various optimization problems that require solving large-scale quadratic programs efficiently.
Abstract: Modern circuits often contain standard cells of different row heights to meet various design requirements. Taller cells give larger drive strengths at the cost of larger area and power. Multi-row-height standard cells pose challenging issues for layout design, especially the mixed-cell-height legalization problem, due to the heterogeneous cell structures. Honoring the good cell positions from global placement, we present in this paper a fast and near-optimal algorithm to solve the legalization problem. Fixing the cell ordering from global placement and relaxing the right-boundary constraints, we first convert the problem into a linear complementarity problem (LCP). With the converted LCP, we split its matrices to meet the convergence requirement of a modulus-based matrix splitting iteration method (MMSIM), and then apply the MMSIM to solve the LCP. The MMSIM guarantees optimality if no cells are placed beyond the right boundary of the chip. Finally, a Tetris-like allocation approach is used to align cells to placement sites on rows and to fix the placement of out-of-right-boundary cells, if any. Experimental results show that our proposed algorithm achieves the best cell displacement and wirelength among all published methods in reasonable runtimes. The MMSIM optimality is theoretically proven and empirically validated. In particular, our formulation provides new generic solutions and research directions for various optimization problems that require solving large-scale quadratic programs efficiently.
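For readers unfamiliar with the MMSIM machinery, the following minimal Python sketch illustrates a modulus-based matrix splitting iteration on a generic LCP, assuming the trivial splitting and illustrative names (A, q, Omega, gamma); the paper's actual splitting, boundary handling, and legalization model are not reproduced here.

```python
import numpy as np

def mmsim_lcp(A, q, omega=None, gamma=2.0, tol=1e-10, max_iter=500):
    """Modulus-based matrix splitting iteration (sketch) for the LCP:
       find z >= 0 with w = A z + q >= 0 and z^T w = 0.
       Uses the trivial splitting M = A, N = 0; Omega is a positive
       diagonal matrix (here diag(A) by default)."""
    n = len(q)
    Omega = np.diag(np.diag(A)) if omega is None else np.diag(omega)
    lhs = A + Omega                      # solved repeatedly; factor once in practice
    x = np.zeros(n)
    for _ in range(max_iter):
        rhs = (Omega - A) @ np.abs(x) - gamma * q
        x_new = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma          # recovered LCP solution
    return z

# tiny usage example with a symmetric positive definite A
A = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = mmsim_lcp(A, q)
print(z, A @ z + q)                      # expect z >= 0, w >= 0, z*w ~ 0
```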

42 citations


Proceedings ArticleDOI
01 Jan 2017
TL;DR: Dead spaces become a critical issue in mixed-cell-height legalization and cannot be handled well with an Abacus variant alone; a dead-space-aware objective function and an optimization scheme are derived to handle this issue.
Abstract: For circuit designs in advanced technologies, standard-cell libraries consist of cells with different heights; for example, the number of fins determines the height of cells in FinFET technology. Cells of larger heights give higher drive strengths but consume larger areas and power. Such mixed cell heights incur new, complicated challenges for layout design, due mainly to the heterogeneity in cell dimensions and thus the larger solution spaces. There is not much published work on layout design with mixed-height standard cells. This paper addresses the legalization problem of mixed-height standard cells, which aims to place cells without any overlap and with minimized displacement. We first study the properties of Abacus, generally considered the best legalization method for traditional single-row-height standard cells but criticized as unsuitable for handling the new challenge; we analyze the capabilities and insufficiencies of Abacus for tackling the new problem, then remedy its insufficiencies and extend its advantages to develop an effective and efficient algorithm for the addressed problem. For example, dead spaces become a critical issue in mixed-cell-height legalization and cannot be handled well with an Abacus variant alone; we thus derive a dead-space-aware objective function and an optimization scheme to handle this issue. Experimental results show that our algorithm achieves the best wirelength among all published methods in reasonable running time, e.g., about 50% smaller wirelength increase than a state-of-the-art work.
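For context, here is a compact sketch of the classical single-row Abacus routine (quadratic-displacement cluster collapse) that the paper builds on; the Cell and Cluster containers below are hypothetical, and the paper's dead-space-aware objective and multi-row extensions are not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float        # desired (global-placement) position
    w: float        # width
    e: float = 1.0  # weight

@dataclass
class Cluster:
    xc: float = 0.0  # optimal left position
    e: float = 0.0   # total weight
    q: float = 0.0   # sum of e_i * (x_i - offset_i)
    w: float = 0.0   # total width
    cells: list = field(default_factory=list)

def add_cell(c: Cluster, cell: Cell):
    c.cells.append(cell)
    c.e += cell.e
    c.q += cell.e * (cell.x - c.w)
    c.w += cell.w

def merge(pred: Cluster, c: Cluster):
    pred.cells += c.cells
    pred.e += c.e
    pred.q += c.q - c.e * pred.w
    pred.w += c.w

def place_row(cells, x_min=0.0, x_max=1e9):
    """Abacus PlaceRow: cells must already be sorted by desired x."""
    clusters = []
    for cell in cells:
        c = Cluster()
        add_cell(c, cell)
        clusters.append(c)
        # collapse while the newest cluster overlaps its predecessor
        while True:
            c = clusters[-1]
            c.xc = min(max(c.q / c.e, x_min), x_max - c.w)
            if len(clusters) >= 2 and clusters[-2].xc + clusters[-2].w > c.xc:
                merge(clusters[-2], c)
                clusters.pop()
            else:
                break
    # read out legalized positions in cell order
    pos = []
    for c in clusters:
        x = c.xc
        for cell in c.cells:
            pos.append(x)
            x += cell.w
    return pos

# tiny usage example: three unit-width cells that overlap around x = 1.0
print(place_row([Cell(0.9, 1.0), Cell(1.0, 1.0), Cell(1.1, 1.0)]))  # -> [0.0, 1.0, 2.0]
```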

42 citations


Journal ArticleDOI
TL;DR: To handle a multi-objective thermal-aware non-slicing floorplanning optimization problem efficiently, an adaptive hybrid memetic algorithm is presented to optimize the area, the total wirelength, the maximum temperature and the average temperature of a chip.
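Only the TL;DR is available for this entry; as a rough, assumed illustration of how a memetic optimizer of this kind is typically structured (not the paper's actual operators or cost model), a generic skeleton with a hypothetical scalarized cost follows.

```python
import random

def memetic_floorplan(init_pop, cost, crossover, mutate, local_search,
                      generations=100, elite=0.2):
    """Generic memetic loop: evolutionary search plus local refinement.
    `cost` is assumed to be a scalarized objective, e.g.
    w1*area + w2*wirelength + w3*max_temp + w4*avg_temp."""
    pop = sorted(init_pop, key=cost)
    for _ in range(generations):
        k = max(2, int(elite * len(pop)))          # elite parents kept as-is
        parents = pop[:k]
        children = []
        while len(children) + k < len(pop):
            a, b = random.sample(parents, 2)
            child = mutate(crossover(a, b))
            children.append(local_search(child, cost))  # memetic (local) step
        pop = sorted(parents + children, key=cost)
    return pop[0]
```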

31 citations


Journal ArticleDOI
TL;DR: In this article, the analytic connectivity of a k-uniform hypergraph H is studied, and several bounds on the analytic connectivity are presented that relate it to other graph invariants, such as degree, vertex connectivity, diameter and the isoperimetric number.
Abstract: In this paper, we study the analytic connectivity of a k-uniform hypergraph H. In addition to computing the analytic connectivity of a complete k-graph, we present several bounds on the analytic connectivity that relate it to other graph invariants, such as degree, vertex connectivity, diameter and the isoperimetric number.
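For reference, the definition of analytic connectivity commonly used in this line of work, with notation (α(H) and the Laplacian tensor L) assumed here rather than quoted from the paper: for a k-uniform hypergraph H = (V, E) on n vertices,

\[
\alpha(H) \;=\; \min_{j \in [n]} \; \min\Big\{ \mathcal{L}x^{k} \;:\; x \in \mathbb{R}^{n}_{+},\ \sum_{i=1}^{n} x_i^{k} = 1,\ x_j = 0 \Big\},
\qquad
\mathcal{L}x^{k} \;=\; \sum_{e \in E} \Big( \sum_{i \in e} x_i^{k} \;-\; k \prod_{i \in e} x_i \Big).
\]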

24 citations


Journal ArticleDOI
TL;DR: In this article, Wang et al. propose a discrete-relaxation-based decomposition framework that reduces the conflict graph to small subgraphs by vertex removals and then relaxes the problem to a 0-1 program, which is solved by a branch-and-bound method.
Abstract: In this paper, we consider the triple patterning lithography layout decomposition problem. To address it, we build a discrete relaxation theory. To design a discrete-relaxation-based decomposition framework, we propose a surface projection method for identifying native conflicts in a layout and then constructing the conflict graph. Guided by the theory, the conflict graph is reduced to small subgraphs by vertex removals, which is a discrete relaxation. Furthermore, by ignoring stitch insertions and assigning weights to features, the layout decomposition problem on the small subgraphs is further relaxed to a 0-1 program, which is solved by a branch-and-bound method. To obtain a feasible solution of the original problem, legalization methods are introduced to legalize a relaxation solution. At the legalization stage, we preferentially use one-stitch insertion to eliminate conflicts and employ a backtracking coloring algorithm to obtain a better solution. We test our decomposition approach on the ISCAS-85 & 89 benchmarks. Comparisons of experimental results show that, on some benchmarks, our approach finds better solutions than state-of-the-art decomposers. In particular, by our discrete relaxation theory, some of the obtained decompositions are provably optimal.
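To make the 0-1 program and branch-and-bound step concrete, here is a toy Python sketch of branch-and-bound mask assignment (3-coloring) on a small weighted conflict subgraph; the surface-projection construction, stitch handling, and the paper's exact 0-1 model are not reproduced, and the graph representation is an assumption for illustration.

```python
def bb_three_color(n, edges):
    """Branch-and-bound mask assignment on a small conflict subgraph.
    `edges` is a list of (u, v, weight); the cost of a coloring is the
    total weight of edges whose endpoints share a mask (conflicts)."""
    best = {"cost": float("inf"), "colors": None}

    def cost_so_far(colors, upto):
        return sum(w for u, v, w in edges
                   if u < upto and v < upto and colors[u] == colors[v])

    def branch(colors, v):
        partial = cost_so_far(colors, v)
        if partial >= best["cost"]:          # bound: prune dominated branches
            return
        if v == n:
            best["cost"], best["colors"] = partial, colors[:]
            return
        for c in range(3):                   # three masks in triple patterning
            colors.append(c)
            branch(colors, v + 1)
            colors.pop()

    branch([], 0)
    return best["cost"], best["colors"]

# tiny example: a triangle plus a pendant vertex, unit conflict weights
cost, coloring = bb_three_color(4, [(0, 1, 1), (1, 2, 1), (0, 2, 1), (2, 3, 1)])
print(cost, coloring)   # expect cost 0 (the graph is 3-colorable)
```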

20 citations


Journal ArticleDOI
TL;DR: Simulation results illustrate that the proposed double regularization unmixing-based method for hyperspectral image (HSI) superresolution has a better performance than the state-of-the-art methods in terms of both visual effectiveness and quality indices.
Abstract: This letter proposes a novel double-regularization unmixing-based method for hyperspectral image (HSI) superresolution. The proposed cost function contains two data-fidelity terms, an endmember regularization term, and an abundance regularization term. Since the double regularization terms exploit the spatial structure information of the endmembers and abundances, respectively, the nonnegative factorization (spectral unmixing) error is minimized. As a result, the performance of the proposed HSI superresolution method is enhanced in terms of noise suppression and preservation of the spatial structure of the reconstructed images. Finally, the associated optimization problem is effectively solved by an alternating direction optimization algorithm. Simulation results illustrate that the proposed method performs better than state-of-the-art methods in terms of both visual effectiveness and quality indices.
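As a point of reference, unmixing-based superresolution costs of this kind are often written in the following generic form; the symbols below are assumptions for illustration, not the letter's actual notation. With a low-resolution HSI Y_h, a high-resolution auxiliary observation Y_m, endmember matrix E, abundance matrix A, spatial degradation operator S, and spectral response R,

\[
\min_{E \ge 0,\; A \ge 0}\ \|Y_h - E A S\|_F^{2} \;+\; \|Y_m - R E A\|_F^{2} \;+\; \lambda_E\,\phi(E) \;+\; \lambda_A\,\psi(A),
\]

where φ and ψ stand for the endmember and abundance regularizers, respectively.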

8 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: Experimental results show that the dynamic convexized method outperforms the greedy randomized adaptive search procedure for the traveling salesman problem.
Abstract: The greedy randomized adaptive search procedure (GRASP) and the dynamic convexized method are two state-of-the-art tour-improvement methods for the traveling salesman problem. To compare the performance of the two methods, we give implementation details and test both methods on the standard TSPLIB instances. Experimental results show that the dynamic convexized method outperforms the greedy randomized adaptive search procedure for the traveling salesman problem.
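For concreteness, a minimal Python sketch of a GRASP-style loop for the TSP (randomized greedy construction plus 2-opt local search); parameters and helper names are illustrative, and neither method's actual implementation details from the paper are reproduced.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def grasp_tsp(dist, iters=50, rcl_size=3):
    """GRASP skeleton: build a tour greedily with randomization,
    improve it with 2-opt, and keep the best tour found."""
    n = len(dist)
    best = None
    for _ in range(iters):
        # greedy randomized construction: pick among the rcl_size nearest cities
        tour = [random.randrange(n)]
        while len(tour) < n:
            last = tour[-1]
            cand = sorted((c for c in range(n) if c not in tour),
                          key=lambda c: dist[last][c])[:rcl_size]
            tour.append(random.choice(cand))
        # 2-opt local search (first improvement)
        improved = True
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    if tour_length(new, dist) < tour_length(tour, dist):
                        tour, improved = new, True
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best, tour_length(best, dist)
```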

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the complexity of graph planarization via edge contraction and gave an FPT algorithm running in O(n^2 f(k)) time, where n is the number of vertices of the graph and k is the parameter.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper considers the cut redistribution and DSA template assignment problem, and proposes a dynamic programming algorithm that guarantees an optimal number of conflicts.
Abstract: With shrinking transistors in advanced circuit designs, directed self-assembly (DSA) is considered one of the most promising techniques for cut patterning in 1-D gridded designs, due to its low cost and high manufacturing throughput. In this paper, we consider the cut redistribution and DSA template assignment problem. For a given layout, we first convert the problem to a weighted gap conflict graph. Then, a minimum-weight vertex-disjoint-path cover algorithm is proposed to divide the graph into a set of paths. Finally, for each path, a dynamic programming algorithm is used to minimize the number of conflicts and the total wire cost; in particular, the dynamic programming algorithm guarantees an optimal number of conflicts. Experimental results show that our proposed method obtains conflict-free results for all benchmarks and achieves the best wire cost, i.e., 34.7% less than a state-of-the-art work.
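As an illustration of the per-path dynamic program, here is a minimal Viterbi-style sketch in Python with hypothetical per-node template options, wire costs, and pairwise conflict indicators; the paper's actual gap/template model and cost terms are not reproduced.

```python
def dp_on_path(options, node_cost, pair_conflict, conflict_penalty=10**6):
    """Viterbi-style DP along one path of the gap conflict graph.
    options[i]             - candidate assignments for node i
    node_cost(i, a)        - wire cost of choosing assignment a at node i
    pair_conflict(i, a, b) - 1 if assignments a (node i) and b (node i+1) conflict
    Conflicts dominate wire cost via a large penalty, so the DP first minimizes
    the number of conflicts and then the wire cost (valid while the total wire
    cost stays below the penalty)."""
    n = len(options)
    dp = {a: node_cost(0, a) for a in options[0]}   # best cost ending in assignment a
    back = []
    for i in range(1, n):
        new_dp, choice = {}, {}
        for b in options[i]:
            best_a = min(dp, key=lambda a: dp[a] + conflict_penalty * pair_conflict(i - 1, a, b))
            new_dp[b] = dp[best_a] + conflict_penalty * pair_conflict(i - 1, best_a, b) + node_cost(i, b)
            choice[b] = best_a
        dp, back = new_dp, back + [choice]
    # backtrack the optimal assignment sequence
    last = min(dp, key=dp.get)
    seq = [last]
    for choice in reversed(back):
        seq.append(choice[seq[-1]])
    return list(reversed(seq)), dp[last]
```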

Journal ArticleDOI
TL;DR: This paper considers the contact-layer mask and template assignment problem of DSA with triple patterning lithography for general layouts using discrete relaxation theory, and transforms the obtained discrete relaxation solution into a legal solution of the original problem.