
Proceedings ArticleDOI

Decoupling network optimization in high speed systems by mixed-integer programming

01 Jun 2014, pp. 1010-1013

TL;DR: This paper provides a generic formulation for the decoupling capacitor selection and placement problem, which is solved by mixed-integer programming.

Abstract: Power integrity is maintained in a high speed system by designing an efficient decoupling network. This paper provides a generic formulation for the decoupling capacitor selection and placement problem, which is solved by mixed-integer programming. A real-world example is presented. The minimum number of capacitors that can achieve the target impedance over the desired frequency range is found, along with their optimal locations. To address an industrial problem, the s-parameter data of the power plane geometry and of the capacitors are used for accurate analysis, including the bulk capacitors and the VRM.
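The paper's own formulation is not reproduced on this page, but the decision structure of such a model can be sketched. The snippet below is a hypothetical, heavily simplified illustration assuming a PuLP-style MILP in which selected capacitors act in parallel and their admittance magnitudes simply add; the capacitor library, target impedance, and frequency samples are invented for the example. The actual paper derives its impedance constraints from s-parameter data of the plane, bulk capacitors, and VRM, and also decides placement, neither of which is shown here.

```python
# Hypothetical sketch of the decision structure only -- NOT the paper's formulation.
# Assumptions: capacitors act in parallel, their admittance magnitudes simply add,
# and plane parasitics are ignored; library values, target impedance and frequency
# samples are invented. Requires the PuLP package.
import math
import pulp

Z_TARGET = 0.05                       # ohms -- assumed target impedance
FREQS = [1e6, 5e6, 1e7, 5e7, 1e8]     # Hz -- assumed frequency samples

# Capacitor library: (C [F], ESL [H], ESR [ohm]) -- illustrative values
LIBRARY = {
    "100nF": (100e-9, 0.6e-9, 0.020),
    "1uF":   (1e-6,   0.8e-9, 0.010),
    "10uF":  (10e-6,  1.2e-9, 0.005),
}

def cap_impedance(c, esl, esr, f):
    """Series-RLC model of one mounted decoupling capacitor."""
    w = 2.0 * math.pi * f
    return complex(esr, w * esl - 1.0 / (w * c))

model = pulp.LpProblem("decap_selection", pulp.LpMinimize)

# n[k]: how many capacitors of type k are used (integer count)
n = {k: pulp.LpVariable(f"n_{k}", lowBound=0, upBound=30, cat="Integer")
     for k in LIBRARY}

# Objective: use as few capacitors as possible
model += pulp.lpSum(n.values())

# At every sampled frequency, the summed admittance magnitude must reach 1/Z_target
for i, f in enumerate(FREQS):
    model += (
        pulp.lpSum(abs(1.0 / cap_impedance(*LIBRARY[k], f)) * n[k] for k in LIBRARY)
        >= 1.0 / Z_TARGET
    ), f"target_{i}"

model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], {k: int(v.value()) for k, v in n.items()})
```

In the full problem each count variable would additionally be indexed by mounting location, and the frequency-domain constraints would come from the measured s-parameters rather than a lumped series-RLC model.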



Citations
Journal ArticleDOI
Abstract: This article proposes an optimization algorithm using the Hessian minimization method, based on the Newton iteration, to evaluate the effectiveness of the placement of multiple decoupling capacitors on a power/ground plane pair. The exact effective decoupling regions are obtained using the Newton iteration method for each decoupling capacitor. The impedance of the IC port is lower than the target impedance no matter where the decoupling capacitor is placed in this region. To optimize specific capacitor placements in this region, the Newton iteration, based on the Hessian matrix, is used to determine the location where the impedance of the IC port is minimized at the antiresonant frequency of the plane pair. This placement optimization algorithm allows for a decoupling design method that can also be applied to a PDN with multiple decoupling capacitors for multiple IC ports. Compared with the method of random selection from within the effective decoupling area, the method proposed here requires fewer decoupling capacitors and less computational time.
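For orientation, the Hessian-based Newton step that such a placement search iterates has the generic form below, with $\mathbf{p}_k$ the candidate capacitor coordinates and $f$ the IC-port impedance magnitude at the antiresonant frequency; this is the textbook update, not necessarily the exact variant implemented in that article.

$$\mathbf{p}_{k+1} = \mathbf{p}_k - \left[\nabla^2 f(\mathbf{p}_k)\right]^{-1} \nabla f(\mathbf{p}_k)$$
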
Proceedings ArticleDOI
26 Jul 2021
Abstract: In VLSI circuits and systems, it is common practice to reduce power supply noise in power delivery networks with decoupling capacitors. The optimal selection and placement of decoupling capacitors is crucial for maintaining power integrity efficiently. This paper presents a generic framework, based on metaheuristic techniques, for decoupling capacitor optimization in a practical power delivery network. The cumulative impedance of a power delivery network is minimized below the target impedance by optimal selection and placement of decoupling capacitors using state-of-the-art metaheuristic algorithms. A comparative analysis of the performance of these algorithms is presented, with insights into practical implementation.
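The abstract does not name a specific algorithm, but the shape of such a metaheuristic search can be sketched as follows. Everything in the snippet (the site list, capacitor library, and the placeholder cost function standing in for a worst-case |Z_PDN| evaluation from s-parameters) is assumed for illustration and is not that paper's framework.

```python
# Hypothetical simulated-annealing sketch over which capacitor type (if any)
# sits at each candidate mounting site; evaluate_cost() is a placeholder, not
# a real PDN impedance evaluator.
import math
import random

CANDIDATE_SITES = list(range(20))        # assumed mounting locations
LIBRARY = ["100nF", "1uF", "10uF"]       # assumed capacitor types

def evaluate_cost(placement):
    # Placeholder objective: deterministic pseudo-impedance plus a small
    # penalty per capacitor, so the search prefers sparse placements.
    rng = random.Random(hash(frozenset(placement.items())))
    return rng.uniform(0.01, 0.20) + 0.002 * len(placement)

def neighbour(placement):
    # Local move: add, remove, or change the capacitor at one random site.
    new = dict(placement)
    site = random.choice(CANDIDATE_SITES)
    if site in new and random.random() < 0.5:
        del new[site]
    else:
        new[site] = random.choice(LIBRARY)
    return new

def anneal(iterations=5000, t0=0.05):
    current = {}
    current_cost = evaluate_cost(current)
    best, best_cost = current, current_cost
    for k in range(iterations):
        temperature = max(t0 * (1.0 - k / iterations), 1e-9)
        candidate = neighbour(current)
        cost = evaluate_cost(candidate)
        if cost < current_cost or \
                random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = candidate, cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best, best_cost

print(anneal())
```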

References
Book
01 Jan 1988
TL;DR: This book covers the scope of integer and combinatorial optimization, applications of special-purpose algorithms, and matching.
Abstract: Contents: FOUNDATIONS: The Scope of Integer and Combinatorial Optimization; Linear Programming; Graphs and Networks; Polyhedral Theory; Computational Complexity; Polynomial-Time Algorithms for Linear Programming; Integer Lattices. GENERAL INTEGER PROGRAMMING: The Theory of Valid Inequalities; Strong Valid Inequalities and Facets for Structured Integer Programs; Duality and Relaxation; General Algorithms; Special-Purpose Algorithms; Applications of Special-Purpose Algorithms. COMBINATORIAL OPTIMIZATION: Integral Polyhedra; Matching; Matroid and Submodular Function Optimization. References. Indexes.

6,283 citations

Book
01 Jan 1987
TL;DR: This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity.
Abstract: Algorithmic, or automatic, differentiation (AD) is a growing area of theoretical research and software development concerned with the accurate and efficient evaluation of derivatives for function evaluations given as computer programs. The resulting derivative values are useful for all scientific computations that are based on linear, quadratic, or higher order approximations to nonlinear scalar or vector functions. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. Apart from quantifying sensitivities numerically, AD also yields structural dependence information, such as the sparsity pattern and generic rank of Jacobian matrices. The field opens up an exciting opportunity to develop new algorithms that reflect the true cost of accurate derivatives and to use them for improvements in speed and reliability. This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity. There is also added material on checkpointing and iterative differentiation. To improve readability the more detailed analysis of memory and complexity bounds has been relegated to separate, optional chapters. The book consists of three parts: a stand-alone introduction to the fundamentals of AD and its software; a thorough treatment of methods for sparse problems; and final chapters on program-reversal schedules, higher derivatives, nonsmooth problems and iterative processes. Each of the 15 chapters concludes with examples and exercises. Audience: This volume will be valuable to designers of algorithms and software for nonlinear computational problems. Current numerical software users should gain the insight necessary to choose and deploy existing AD software tools to the best advantage. Contents: Rules; Preface; Prologue; Mathematical Symbols; Chapter 1: Introduction; Chapter 2: A Framework for Evaluating Functions; Chapter 3: Fundamentals of Forward and Reverse; Chapter 4: Memory Issues and Complexity Bounds; Chapter 5: Repeating and Extending Reverse; Chapter 6: Implementation and Software; Chapter 7: Sparse Forward and Reverse; Chapter 8: Exploiting Sparsity by Compression; Chapter 9: Going beyond Forward and Reverse; Chapter 10: Jacobian and Hessian Accumulation; Chapter 11: Observations on Efficiency; Chapter 12: Reversal Schedules and Checkpointing; Chapter 13: Taylor and Tensor Coefficients; Chapter 14: Differentiation without Differentiability; Chapter 15: Implicit and Iterative Differentiation; Epilogue; List of Figures; List of Tables; Assumptions and Definitions; Propositions, Corollaries, and Lemmas; Bibliography; Index
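As a quick pointer to the idea the book develops, forward-mode AD can be illustrated in a few lines with dual numbers; the snippet below is an illustrative sketch, not material from the book.

```python
# Minimal forward-mode AD with dual numbers (illustrative only).
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # primal value
    dot: float   # derivative carried alongside it

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # Product rule: (u*v)' = u'*v + u*v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):
    # f(x) = x*x + x, so f'(x) = 2x + 1
    return x * x + x

print(f(Dual(3.0, 1.0)))   # Dual(val=12.0, dot=7.0): f(3) = 12, f'(3) = 7
```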

2,739 citations

Journal ArticleDOI
TL;DR: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class and a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.
Abstract: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class. Linearity of the integer (or discrete) variables, and convexity of the nonlinear functions involving continuous variables are the main features in the underlying mathematical structure. Based on principles of decomposition, outer-approximation and relaxation, the proposed algorithm effectively exploits the structure of the problems, and consists of solving an alternating finite sequence of nonlinear programming subproblems and relaxed versions of a mixed-integer linear master program. Convergence and optimality properties of the algorithm are presented, as well as a general discussion on its implementation. Numerical results are reported for several example problems to illustrate the potential of the proposed algorithm for programs in the class addressed in this paper. Finally, a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.
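For readers unfamiliar with outer approximation, the relaxed master program referred to above can be written schematically as follows, accumulating linearizations of the (convex) objective f and constraints g at the previously obtained points (x^k, y^k); the notation is generic rather than copied from the paper.

$$
\begin{aligned}
\min_{x,\,y,\,\eta}\quad & \eta \\
\text{s.t.}\quad & \eta \;\ge\; f(x^{k},y^{k}) + \nabla f(x^{k},y^{k})^{\mathsf T}\begin{bmatrix} x - x^{k}\\ y - y^{k}\end{bmatrix}, && k = 1,\dots,K,\\
& g(x^{k},y^{k}) + \nabla g(x^{k},y^{k})^{\mathsf T}\begin{bmatrix} x - x^{k}\\ y - y^{k}\end{bmatrix} \;\le\; 0, && k = 1,\dots,K,\\
& y \in \mathbb{Z}^{m},\quad x \in \mathbb{R}^{n}.
\end{aligned}
$$

The algorithm alternates between this master, which supplies the next integer assignment and a non-decreasing lower bound, and an NLP subproblem with y fixed, which supplies an upper bound and a new linearization point, stopping when the two bounds meet.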

1,157 citations

01 Jan 2013
TL;DR: In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.
Abstract: Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.

952 citations


"Decoupling network optimization in ..." refers methods in this paper

  • ...This paper attempts to solve the same problem using a different paradigm: deterministic optimization techniques that are mathematically more rigorous and ensure (under mild assumptions) optimality in a finite number of steps....

    [...]