Proceedings ArticleDOI

Optimization of Analog RF Circuit parameters using randomness in particle swarm optimization

TL;DR: A stochastic convergence analysis of the particle swarm optimization algorithm involving randomness is presented and applied to analog RF circuits to optimize circuit parameters; the results show that randomness in defining each particle's new position leads to better convergence properties.
Abstract: This paper presents a stochastic convergence analysis of the particle swarm optimization (PSO) algorithm involving randomness and applies the results to analog RF circuits to optimize the circuit parameters. In every iteration, each particle's position is represented as a vector, and the standard particle swarm algorithm is determined by the positive real triple {w, c1, c2}. Convergence is compared for a fixed triple {w, c1, c2} and a random triple {w, c1, c2}. Various results show that randomness in defining a particle's new position leads to better convergence properties. The exploration-exploitation trade-off is also discussed with examples, and it is demonstrated that each particle undergoes both exploration and exploitation during the convergence process if randomness in particle generation is considered. The RF circuit parameters considered are cutoff frequency, phase noise, and signal-to-noise ratio (SNR). Results of the convergence analysis of PSO are compared between fixed and random parameter values.
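The comparison between a fixed and a randomly redrawn triple can be made concrete with a small sketch. The following is a minimal PSO loop, not the authors' implementation: the parameter ranges, the sphere benchmark standing in for an RF cost function, and all function names are illustrative assumptions.

```python
# Minimal PSO sketch (illustrative only): compares a fixed triple {w, c1, c2}
# with a triple redrawn uniformly at random at every iteration.
import numpy as np

def pso(cost, dim=2, n_particles=20, iters=200, random_triple=False, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        if random_triple:
            # redraw the triple each iteration (these ranges are assumptions)
            w, c1, c2 = rng.uniform(0.4, 0.9), rng.uniform(1.0, 2.5), rng.uniform(1.0, 2.5)
        else:
            w, c1, c2 = 0.729, 1.494, 1.494          # a common fixed choice
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(cost, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))             # stand-in for an RF cost function
print(pso(sphere, random_triple=False)[1], pso(sphere, random_triple=True)[1])
```

In a circuit-sizing flow, `cost` would instead wrap a simulator or analytical model that penalizes deviations from cutoff-frequency, phase-noise, and SNR targets.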
Citations
Journal ArticleDOI
TL;DR: An adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed; it effectively avoids the premature convergence problem and converges faster.
Abstract: The disadvantages of the particle swarm optimization (PSO) algorithm are that it easily falls into local optima in high-dimensional spaces and has a low convergence rate in the iterative process. To deal with these problems, an adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed. Particles are scattered uniformly over the search space by using the topology of a small-world network to initialize the particles' positions. At the same time, an evolutionary mechanism of the directed dynamic network is employed to make the particles evolve into a scale-free network whose in-degree obeys a power-law distribution. The proposed method not only improves the diversity of the algorithm but also helps particles avoid falling into local optima. The simulation results indicate that the proposed algorithm can effectively avoid the premature convergence problem. Compared with other algorithms, its convergence rate is faster.
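As a rough illustration of topology-driven information flow in PSO (a sketch only; it does not reproduce DWCNPSO's directed weighted network or its scale-free evolution mechanism), a neighborhood-best variant can use a small-world graph so that each particle follows the best position among its neighbors rather than a single global best. The helper name `lbest_pso` and all parameter values are assumptions.

```python
# Sketch of a neighborhood-best (lbest) PSO over a small-world topology.
# Illustrates topology-based information flow only; not the DWCNPSO algorithm.
import numpy as np
import networkx as nx

def lbest_pso(cost, dim=2, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    g = nx.watts_strogatz_graph(n, k=4, p=0.3, seed=seed)   # small-world neighborhoods
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    w, c1, c2 = 0.729, 1.494, 1.494
    for _ in range(iters):
        for i in range(n):
            hood = list(g.neighbors(i)) + [i]                # particle i's neighborhood
            lbest = pbest[hood][pbest_val[hood].argmin()]    # best position in the neighborhood
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (lbest - x[i])
            x[i] = x[i] + v[i]
            f = cost(x[i])
            if f < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), f
    return pbest[pbest_val.argmin()], pbest_val.min()
```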

74 citations

Journal ArticleDOI
TL;DR: A general form of PSO algorithms is considered and the asymptotic properties of the algorithms are analyzed using stochastic approximation methods, proving that a suitably scaled sequence of swarms converges to the solution of an ordinary differential equation.
Abstract: Recently, much progress has been made on particle swarm optimization (PSO). A number of works have been devoted to analyzing the convergence of the underlying algorithms. Nevertheless, in most cases, rather simplified hypotheses are used. For example, it is often assumed that the swarm has only one particle. In addition, more often than not, the variables and the points of attraction are assumed to remain constant throughout the optimization process. In reality, such assumptions are often violated. Moreover, to the best of our knowledge, there are no rigorous rates-of-convergence results available to date for the particle swarm. In this paper, we consider a general form of PSO algorithms and analyze the asymptotic properties of the algorithms using stochastic approximation methods. We introduce four coefficients and rewrite the PSO procedure as a stochastic approximation type iterative algorithm. Then we analyze its convergence using weak convergence methods. It is proved that a suitably scaled sequence of swarms converges to the solution of an ordinary differential equation. We also establish certain stability results. Moreover, convergence rates are ascertained by using weak convergence methods. A centered and scaled sequence of the estimation errors is shown to have a diffusion limit.
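The flavor of this rewriting can be sketched as follows (an illustrative form only; the paper's exact four-coefficient parameterization is not reproduced here): with a small step size, the swarm update is cast as a stochastic-approximation recursion whose piecewise-constant interpolation converges weakly to the solution of an ODE.

```latex
% Schematic stochastic-approximation view of a PSO-type update (illustrative,
% not the cited paper's exact four-coefficient form).
\begin{aligned}
X_{n+1} &= X_n + \varepsilon \,\bigl(\bar b(X_n) + \xi_n\bigr),
  && \xi_n \text{ a zero-mean noise sequence},\\
x^{\varepsilon}(t) &= X_{\lfloor t/\varepsilon \rfloor}
  \;\xrightarrow[\varepsilon \to 0]{\ \text{weakly}\ }\; x(t),
  && \text{where } \dot{x} = \bar b(x).
\end{aligned}
```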

45 citations


Cites background from "Optimization of Analog RF Circuit p..."

  • ...Some recent work such as [8], [12], [14], [23], [37], [49] provides guidelines for selecting PSO parameters leading to convergence, divergence, or oscillation of the swarm’s particles....


Journal ArticleDOI
TL;DR: It is shown that the yield for the rest of the population can be estimated based on the FCM membership degrees and the RIs' yield values alone; the new method was applied to two real circuit-sizing optimization problems and the obtained results were compared to the exhaustive approach.
Abstract: This paper presents fuzzy c-means-based yield estimation (FUZYE), a methodology that reduces the time impact caused by Monte Carlo (MC) simulations in the context of analog integrated circuit (IC) yield estimation, enabling it for yield optimization with population-based algorithms, e.g., the genetic algorithm (GA). MC analysis is the most general and reliable technique for yield estimation, yet the considerable amount of time it requires has discouraged its adoption in population-based optimization tools. The proposed methodology reduces the total number of MC simulations that are required, since, at each GA generation, the population is clustered using a fuzzy c-means (FCM) technique, and only the representative individual (RI) from each cluster is subject to MC simulations. This paper shows that the yield for the rest of the population can be estimated based on the FCM membership degrees and the RIs' yield values alone. The new method was applied to two real circuit-sizing optimization problems and the obtained results were compared to the exhaustive approach, where all individuals of the population are subject to MC analysis. The FCM approach presents a reduction of 89% in the total number of MC simulations when compared to the exhaustive MC analysis over the full population. Moreover, a k-means-based clustering algorithm was also tested and compared with the proposed FUZYE, with FUZYE showing an improvement of up to 13% in yield estimation accuracy.
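A sketch of the weighting idea follows. The exact FUZYE estimator is not given in this summary, so the membership-weighted average below, the fuzzifier exponent `m`, and the function name `estimate_yields` are assumptions used for illustration: once FCM produces membership degrees of each individual in each cluster, and only the cluster representatives have Monte Carlo yields, the remaining yields can be approximated by combining the representatives' yields according to those memberships.

```python
# Illustrative membership-weighted yield estimate (an assumed combination rule,
# not the exact FUZYE formula).
import numpy as np

def estimate_yields(memberships, ri_yields, m=2.0):
    """memberships: (n_individuals, n_clusters) FCM membership degrees in [0, 1].
    ri_yields: (n_clusters,) Monte Carlo yield of each cluster's representative."""
    u = memberships ** m                    # fuzzifier exponent (assumed value)
    return (u @ ri_yields) / u.sum(axis=1)  # membership-weighted average of RI yields

# Example: 4 individuals, 2 clusters whose representatives have 92% and 80% yield
u = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])
print(estimate_yields(u, np.array([0.92, 0.80])))
```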

22 citations

Proceedings ArticleDOI
19 Mar 2018
TL;DR: An innovative combination of principal component analysis (PCA) and evolutionary computation is used to increase the optimizer's efficiency, reaching wider solution sets and, in some cases, solution sets that are almost 3 times better in terms of hypervolume.
Abstract: State-of-the-art design of analog and radio frequency integrated circuits is often accomplished using sizing optimization. In this paper, an innovative combination of principal component analysis (PCA) and evolutionary computation is used to increase the optimizer's efficiency. The adopted NSGA-II optimization kernel is improved by applying the genetic operators of mutation and crossover in a transformed design space, obtained from the latest set of solutions (the parents) using PCA. By applying crossover and mutation to variables that are projections onto the principal components, the optimization moves more effectively, finding solutions with better performance in the same amount of time than the standard NSGA-II optimization kernel. The proposed method was validated in the optimization of two widely used analog circuits, an amplifier and a voltage-controlled oscillator, reaching wider solution sets and, in some cases, solution sets that are almost 3 times better in terms of hypervolume.
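A minimal sketch of the transformed-space variation step is given below. Arithmetic crossover and Gaussian mutation are stand-ins chosen for brevity (the cited work plugs the idea into an NSGA-II kernel with its own operators), and the function name `pca_variation` and the mutation scaling are assumptions.

```python
# Sketch of PCA-transformed crossover and mutation for a real-coded population.
import numpy as np
from sklearn.decomposition import PCA

def pca_variation(parents, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pca = PCA()                              # fitted on the latest parent set
    z = pca.fit_transform(parents)           # parents in principal-component coordinates
    rng.shuffle(z)                           # random pairing of parents
    z = z[: (len(z) // 2) * 2]               # keep an even number of rows
    p1, p2 = z[0::2], z[1::2]
    a = rng.random((len(p1), 1))
    kids = np.vstack([a * p1 + (1 - a) * p2,            # arithmetic crossover
                      (1 - a) * p1 + a * p2])
    # Gaussian mutation, scaled per component by the spread observed in the parents
    kids += rng.normal(0.0, sigma, kids.shape) * np.sqrt(pca.explained_variance_)
    return pca.inverse_transform(kids)       # map offspring back to design variables
```

Because the components are decorrelated directions of the parent set, perturbations in this space tend to follow the population's current spread, which is the intuition the cited paper exploits.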

9 citations


Cites methods from "Optimization of Analog RF Circuit p..."

  • ...Kuo-Hsuan [25] (2011): Convex optimization + stochastic fine tuning; Posynomial, Simulator. Kamisetty et al. [44] (2011): PSO; Equation...


  • ...A different method can be found in [44][45][46], where particle swarm optimization (PSO) is performed....


  • ...Table excerpt (Tool/Author, Year, Design Plan / Optimization Method, Evaluation):
      ANACONDA [38] (2000): Stochastic pattern search; Simulator
      Sripramong [39] (2002): GA; Simulator
      Alpaydin [35] (2003): Evolutionary strategies + SA; Fuzzy + NN trained with Simulator
      Barros [34] (2006): GA; Simulator
      Castro-Lopez [33] (2008): SA + Powell's method; Simulator
      Santos-Tavares [37] (2008): GA; Simulator
      MOJITO [49] (2009): NSGA-II; Simulator
      Pradhan [50] (2009): Multi-objective SA; Layout-aware MNA models
      Matsukawa [23] (2009): Convex optimization; Convex functions
      Cheng [31] (2009): SA; Equations
      Hongying [40] (2010): GA with VDE; Simulator
      Fakhfakh [46] (2010): Multi-objective PSO; Equations
      Kuo-Hsuan [25] (2011): Convex optimization + stochastic fine tuning; Posynomial, Simulator
      Kamisetty et al. [44] (2011): PSO; Equation
      Fernandez and Gielen [52] (2011): ORDE; Equations and Simulator
      Benhala et al. [43] (2012): ACO; Equation
      Rocha et al. [36] (2012): NSGA-II; Simulator
      Gupta & Gosh [42] (2012): ACO; Simulator
      Kumar & Duraiswamy [45] (2012): PSO; Simulator
      Genom-POF [47][48] (2012): NSGA-II; Simulator
      AIDA [53] (2015): NSGA-II, MOPSO, MOSA; Simulator
      Afacan & Dündar [54] (2016): Evolutionary strategies + SA; Equations and Simulator
      González-Echevarría et al. [55] (2017): NSGA-II; Simulator
      Canelas et. al....


References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations

Journal ArticleDOI
TL;DR: This paper analyzes a particle's trajectory as it moves in discrete time, then progresses to the view of it in continuous time, leading to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
Abstract: The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
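The kind of model analyzed there can be sketched in simplified form (a deterministic one-particle system with a fixed attractor p and combined acceleration coefficient phi; the paper's five-dimensional treatment generalizes this):

```latex
% Simplified deterministic one-particle trajectory model with fixed attractor p
\begin{aligned}
v_{t+1} &= v_t + \varphi\,(p - x_t), \qquad x_{t+1} = x_t + v_{t+1},\\
\Rightarrow\quad & x_{t+1} - (2 - \varphi)\,x_t + x_{t-1} = \varphi\,p .
\end{aligned}
```

The roots of the characteristic equation $\lambda^2 - (2 - \varphi)\lambda + 1 = 0$ then determine whether the trajectory oscillates, converges toward the attractor, or diverges, which is what motivates adding coefficients that control the convergence tendencies.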

8,287 citations

Proceedings ArticleDOI
16 Jul 2000
TL;DR: It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension.
Abstract: The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.
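For reference, a sketch of the recommended setup: the constriction factor is the standard Clerc form, phi = 4.1 is a commonly used value (giving chi of roughly 0.7298), and velocities are clamped to the dynamic range Xmax of each variable. The function name `update` is illustrative.

```python
# Constriction-factor velocity update with Vmax clamped to the dynamic range Xmax.
import numpy as np

phi = 4.1                                                   # c1 + c2, chosen > 4
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))    # constriction factor ~0.7298
c1 = c2 = phi / 2.0

def update(x, v, pbest, gbest, xmax, rng):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v_new = np.clip(v_new, -xmax, xmax)                     # Vmax = Xmax on each dimension
    return x + v_new, v_new
```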

2,922 citations

Journal ArticleDOI
TL;DR: The particle swarm optimization algorithm is analyzed using standard results from dynamic system theory, and graphical parameter selection guidelines are derived, yielding performance superior to previously published results.
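The kind of condition such analyses produce can be sketched under the usual simplifications (deterministic model, random coefficients replaced by their expected values, fixed attractor p); the region below is the commonly cited form and not necessarily the exact guideline derived in the cited paper:

```latex
% Deterministic inertia-weight model and its commonly cited stability region
\begin{aligned}
v_{t+1} &= w\,v_t + \varphi\,(p - x_t), \qquad x_{t+1} = x_t + v_{t+1},\\
\text{stable} \;&\Longleftrightarrow\; |w| < 1 \ \text{ and } \ 0 < \varphi < 2\,(1 + w),
\end{aligned}
```

where phi stands for the combined (expected) acceleration coefficient.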

2,554 citations

Journal ArticleDOI
TL;DR: This letter presents a formal stochastic convergence analysis of the standard particle swarm optimization (PSO) algorithm, which involves randomness.

433 citations