Journal Article

EM-based optimization of microwave circuits using artificial neural networks: the state-of-the-art

Abstract
This paper reviews the current state of the art in electromagnetic (EM)-based design and optimization of microwave circuits using artificial neural networks (ANNs). Measurement-based design of microwave circuits using ANNs is also reviewed. The conventional microwave neural optimization approach is surveyed, along with typical enhancing techniques such as segmentation, decomposition, hierarchy, design of experiments, and clusterization. Innovative strategies for ANN-based design exploiting microwave knowledge are reviewed, including neural space-mapping methods. The problem of developing synthesis neural networks is treated. EM-based statistical analysis and yield optimization using neural networks are reviewed. The key issues in transient EM-based design using neural networks are summarized. The use of ANNs to speed up "global modeling" for EM-based design of monolithic microwave integrated circuits is briefly described. Future directions for ANN techniques in microwave design are suggested.
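To make the conventional approach concrete: an ANN is trained offline on sampled EM simulations and then stands in for the expensive solver inside the optimizer. A minimal Python sketch of that loop, assuming a hypothetical em_simulate stand-in for the EM solver and an arbitrary single-response specification:

```python
# Sketch of the conventional ANN-based EM optimization loop (toy stand-ins):
# 1) sample the design space, 2) run the (expensive) EM solver for training
# data, 3) train an ANN surrogate, 4) optimize the surrogate, not the solver.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def em_simulate(x):
    """Placeholder for an expensive EM simulation: design params -> response."""
    return np.array([np.sin(3 * x[0]) + x[1] ** 2])  # toy stand-in

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))          # design-of-experiments sample
y_train = np.vstack([em_simulate(x) for x in X_train])   # EM "measurements"

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
ann.fit(X_train, y_train.ravel())                        # offline training

target = 0.3                                             # desired response (spec)
objective = lambda x: (ann.predict(x.reshape(1, -1))[0] - target) ** 2
result = minimize(objective, x0=np.zeros(2))             # fast surrogate optimization
print("optimal design (per surrogate):", result.x)
```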


Citations
Journal Article

Simulated Annealing Particle Swarm Optimization for High-Efficiency Power Amplifier Design

TL;DR: The perfectly inelastic collision used in conventional PSO is replaced by a perfectly elastic collision to improve the search ability by reducing kinetic-energy consumption and to resolve the nonconvergence problem observed in simulation.
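The elastic-collision mechanism is specific to this paper; as a generic illustration of how simulated annealing is commonly hybridized with PSO, here is a minimal sketch with a standard velocity update plus a Metropolis acceptance rule and cooling schedule (all constants and the toy objective are arbitrary, and this is not the paper's exact algorithm):

```python
# Generic SA-PSO hybrid sketch: standard PSO velocity updates, plus a
# Metropolis rule that occasionally accepts worse personal bests while the
# temperature T is high, helping particles escape local optima.
import numpy as np

def sa_pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
           T=1.0, alpha=0.95, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        # Metropolis acceptance: always keep improvements, sometimes keep worse.
        prob = np.exp(np.clip(-(fx - pbest_f) / T, None, 0.0))
        accept = (fx < pbest_f) | (rng.random(n_particles) < prob)
        pbest[accept], pbest_f[accept] = x[accept], fx[accept]
        g = pbest[np.argmin(pbest_f)].copy()
        T *= alpha  # cooling schedule
    return g, f(g)

best_x, best_f = sa_pso(lambda p: np.sum(p ** 2))  # toy objective
print(best_x, best_f)
```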
Journal Article

Multi-fidelity modeling with different input domain definitions using deep Gaussian processes

TL;DR: Deep Gaussian processes for multi-fidelity modeling (MF-DGP) are extended to the case where a different parametrization is used for each fidelity, and the performance of the proposed multi-fidelity modeling technique is assessed.
Journal Article

Nonlinear Electronic/Photonic Component Modeling Using Adjoint State-Space Dynamic Neural Network Technique

TL;DR: The proposed technique extends the existing state-space dynamic neural network technique by simultaneously adding derivative information to the training patterns of nonlinear components, which allows training with less data without sacrificing model accuracy and makes training faster and more efficient.
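The adjoint state-space formulation itself is beyond a short snippet, but the core idea of training on derivative data can be sketched generically: a loss that penalizes both the output error and the mismatch of dy/dx obtained via automatic differentiation. A minimal PyTorch sketch (the toy component model f, its derivative, and all sizes are hypothetical stand-ins):

```python
# Sketch of derivative-augmented training: fit y = f(x) while also matching
# dy/dx, so fewer samples are needed for the same accuracy. Toy 1-D example.
import torch

f = lambda x: torch.sin(3 * x)            # "measured" nonlinear component
df = lambda x: 3 * torch.cos(3 * x)       # its known derivative (sensitivity data)

x = torch.linspace(-1, 1, 20).reshape(-1, 1)
y, dy = f(x), df(x)

net = torch.nn.Sequential(torch.nn.Linear(1, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    xr = x.clone().requires_grad_(True)
    pred = net(xr)
    # derivative of the network output w.r.t. its input, via autograd
    dpred = torch.autograd.grad(pred.sum(), xr, create_graph=True)[0]
    loss = ((pred - y) ** 2).mean() + ((dpred - dy) ** 2).mean()
    loss.backward()
    opt.step()
print(float(loss))
```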
Journal Article

Implicit space mapping with adaptive selection of preassigned parameters

TL;DR: The authors apply assessment techniques that help automatically make the right selection of the model and, consequently, its associated preassigned parameters, and present a modified version of the adaptive space-mapping algorithm to improve performance.
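For context, implicit space mapping alternates two steps: extract the preassigned parameters so the coarse model matches the fine model at the current design, then re-optimize the design on the calibrated coarse model. A toy sketch of that loop (the fine/coarse models and specification below are hypothetical stand-ins, not the authors' assessment techniques):

```python
# Toy sketch of the implicit space-mapping loop. fine() is the expensive
# model, coarse(x, p) a cheap model with preassigned parameters p; the models
# and target are hypothetical stand-ins.
import numpy as np
from scipy.optimize import minimize

fine   = lambda x: (x - 1.3) ** 2 + 0.2            # expensive "EM" model
coarse = lambda x, p: (x - p) ** 2 + 0.2           # cheap model, p preassigned
target = 0.2                                        # design specification

x = np.array([0.0])                                 # initial design
p = np.array([1.0])                                 # initial preassigned parameter
for _ in range(5):
    # 1) parameter extraction: align coarse to fine at the current design
    p = minimize(lambda p_: ((coarse(x, p_) - fine(x)) ** 2).sum(), p).x
    # 2) design optimization on the calibrated coarse model
    x = minimize(lambda x_: ((coarse(x_, p) - target) ** 2).sum(), x).x
print("design:", x, "fine response:", fine(x))
```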
Journal Article

Small-signal and noise modeling of class of HEMTs using knowledge-based artificial neural networks

TL;DR: Introducing the gate width as an input to the proposed neural networks, together with the scattering and noise parameters of a device from the same class as the modeled device (the prior knowledge), leads to very accurate modeling of scattering and noise parameters.
References
Book

Neural Networks: A Comprehensive Foundation

Simon Haykin
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Journal Article

Backpropagation through time: what it does and how to do it

TL;DR: This paper first reviews basic backpropagation, a simple method now widely used in areas such as pattern recognition and fault diagnosis, and then describes further extensions of the method: to systems other than neural networks, to systems involving simultaneous equations or true recurrent networks, and to other practical issues that arise with the method.
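A minimal NumPy sketch of the basic mechanics: unroll a toy vanilla RNN forward over the sequence, then accumulate gradients backward through every time step (sizes, data, and the single-output loss are arbitrary):

```python
# Minimal backpropagation-through-time (BPTT) for a toy vanilla RNN:
# the forward pass unrolls the recurrence and caches hidden states; the
# backward pass pushes the error back through every time step.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4
xs = rng.normal(size=(T, n_in, 1))                 # input sequence
target = 1.0
Wx = rng.normal(size=(n_h, n_in)) * 0.5
Wh = rng.normal(size=(n_h, n_h)) * 0.5
Wo = rng.normal(size=(1, n_h)) * 0.5
b = np.zeros((n_h, 1))

# forward: unroll the recurrence, caching hidden states for the backward pass
hs = [np.zeros((n_h, 1))]
for t in range(T):
    hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1] + b))
y = (Wo @ hs[-1]).item()
loss = (y - target) ** 2

# backward: accumulate gradients through time
dWx, dWh, db = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(b)
dWo = 2 * (y - target) * hs[-1].T
dh = Wo.T * (2 * (y - target))                     # gradient w.r.t. h_T
for t in reversed(range(T)):
    da = dh * (1 - hs[t + 1] ** 2)                 # through tanh at step t
    dWx += da @ xs[t].T
    dWh += da @ hs[t].T
    db += da
    dh = Wh.T @ da                                 # pass gradient to h_{t-1}
print("loss:", loss, "||dWh||:", np.linalg.norm(dWh))
```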
Journal Article

A Class of Methods for Solving Nonlinear Simultaneous Equations

TL;DR: In this article, the authors discuss modifications to Newton's method designed to reduce the number of function evaluations required during the iterative solution of nonlinear simultaneous equations, the premise being that the most efficient process is the one requiring the fewest function evaluations.
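This class is the Broyden quasi-Newton family: rather than recomputing the Jacobian at each iterate, a rank-one secant update maintains an approximation using only function values already in hand. A minimal sketch of Broyden's "good" update on a toy 2x2 system (no line search or safeguards):

```python
# Minimal Broyden's method ("good" rank-one update): the Jacobian
# approximation B is corrected from function values alone, so each iteration
# costs one function evaluation instead of a full Jacobian.
import numpy as np

def F(x):  # toy nonlinear system: roots at (1, 1) and (-1, -1)
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     x[0] - x[1]])

x = np.array([1.3, 0.9])
B = np.eye(2)                                  # initial Jacobian approximation
Fx = F(x)
for _ in range(50):
    dx = np.linalg.solve(B, -Fx)               # Newton-like step using B
    x_new = x + dx
    F_new = F(x_new)
    dF = F_new - Fx
    # rank-one secant update, enforcing B_new @ dx = dF
    B = B + np.outer(dF - B @ dx, dx) / (dx @ dx)
    x, Fx = x_new, F_new
    if np.linalg.norm(Fx) < 1e-10:
        break
print("root:", x)
```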
Journal Article

Optimal Global Rates of Convergence for Nonparametric Regression

TL;DR: In this article, it is shown that, under appropriate regularity conditions, the optimal rate of convergence for estimating the m-th derivative of a p-times differentiable regression function with a d-dimensional predictor is n^(-r), where r = (p - m)/(2p + d), and that this rate is achievable in L_q norms for 0 < q < infinity.
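For example, estimating the regression function itself (m = 0) when it is twice differentiable (p = 2) with a single predictor (d = 1) gives r = (2 - 0)/(4 + 1) = 2/5, i.e. the familiar n^(-2/5) optimal rate.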