Author

Jose E. Rayas-Sanchez

Bio: Jose E. Rayas-Sanchez is an academic researcher from the University of Guadalajara. The author has contributed to research on topics including space mapping and artificial neural networks. The author has an h-index of 19 and has co-authored 120 publications receiving 1,911 citations. Previous affiliations of Jose E. Rayas-Sanchez include the Western Institute of Technology and Higher Education and the University of Texas at Austin.


Papers
Journal ArticleDOI
Abstract: This paper reviews the current state-of-the-art in electromagnetic (EM)-based design and optimization of microwave circuits using artificial neural networks (ANNs). Measurement-based design of microwave circuits using ANNs is also reviewed. The conventional microwave neural optimization approach is surveyed, along with typical enhancing techniques, such as segmentation, decomposition, hierarchy, design of experiments, and clusterization. Innovative strategies for ANN-based design exploiting microwave knowledge are reviewed, including neural space-mapping methods. The problem of developing synthesis neural networks is treated. EM-based statistical analysis and yield optimization using neural networks are reviewed. The key issues in transient EM-based design using neural networks are summarized. The use of ANNs to speed up "global modeling" for EM-based design of monolithic microwave integrated circuits is briefly described. Future directions for ANN techniques in microwave design are suggested.
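
As a rough illustration of the conventional neural optimization approach surveyed above, the sketch below trains a small neural network on samples of an expensive simulation and then searches the trained surrogate for a design meeting a target value. The analytic em_response placeholder, the design ranges, the target value, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch of ANN-based surrogate optimization (illustrative only).
# A cheap analytic function stands in for a full-wave EM simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def em_response(w, gap):
    """Placeholder for an expensive EM simulation: returns a scalar figure of merit."""
    return np.sin(3.0 * w) * np.exp(-gap) + 0.1 * w * gap

# 1) Sample the "expensive" model over the design space (offline training data).
X_train = rng.uniform([0.2, 0.1], [1.0, 0.8], size=(200, 2))
y_train = np.array([em_response(w, g) for w, g in X_train])

# 2) Train the ANN surrogate.
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)

# 3) Optimize over the cheap surrogate instead of the EM simulator
#    (brute-force grid search here; any optimizer could be substituted).
ww, gg = np.meshgrid(np.linspace(0.2, 1.0, 80), np.linspace(0.1, 0.8, 80))
cand = np.column_stack([ww.ravel(), gg.ravel()])
target = 0.5                                   # assumed design goal
best = cand[np.argmin(np.abs(ann.predict(cand) - target))]

# 4) Verify the surrogate-optimal design with one final "EM" run.
print("surrogate optimum:", best, "verified response:", em_response(*best))
```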

321 citations

Journal ArticleDOI
TL;DR: In this paper, the authors presented modeling of microwave circuits using artificial neural networks (ANNs) based on space-mapping (SM) technology; SM-based neuromodels decrease the cost of training, improve generalization ability, and reduce the complexity of the ANN topology with respect to the classical neuromodeling approach.
Abstract: For the first time, we present modeling of microwave circuits using artificial neural networks (ANNs) based on space-mapping (SM) technology. SM-based neuromodels decrease the cost of training, improve generalization ability, and reduce the complexity of the ANN topology with respect to the classical neuromodeling approach. Five creative techniques are proposed to generate SM-based neuromodels. A frequency-sensitive neuromapping is applied to overcome the limitations of empirical models developed under quasi-static conditions. Huber optimization is used to train the ANNs. We contrast SM-based neuromodeling with the classical neuromodeling approach as well as with other state-of-the-art neuromodeling techniques. The SM-based neuromodeling techniques are illustrated by a microstrip bend and a high-temperature superconducting filter.
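
The core idea, a neural mapping from fine-model to coarse-model input space so that the cheap coarse model reproduces fine-model responses, can be sketched as follows. The toy Lorentzian fine/coarse models, the grid-based parameter extraction, and the scikit-learn network are illustrative assumptions; the paper trains the mapping directly with Huber optimization rather than via per-sample extraction.

```python
# Sketch of a space-mapping-based neuromodel (illustrative toy models).
import numpy as np
from sklearn.neural_network import MLPRegressor

freq = np.linspace(0.5, 1.5, 41)               # normalized frequency sweep

def coarse(xc, f):                              # cheap empirical model
    return 1.0 / (1.0 + ((f - xc) / 0.05) ** 2)

def fine(xf, f):                                # stand-in for the EM-simulated response
    return 1.0 / (1.0 + ((f - (1.05 * xf + 0.02)) / 0.05) ** 2)

# Parameter extraction: for each fine-model point, find the coarse-model
# input whose response best matches the fine response (grid search here).
xf_train = np.linspace(0.7, 1.2, 15)
xc_grid = np.linspace(0.5, 1.5, 400)
xc_train = np.array([
    xc_grid[np.argmin([np.sum((coarse(xc, freq) - fine(xf, freq)) ** 2)
                       for xc in xc_grid])]
    for xf in xf_train
])

# Train the neural mapping xf -> xc (this is the "neuromapping").
net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=10000, random_state=0)
net.fit(xf_train.reshape(-1, 1), xc_train)

def neuromodel(xf, f):
    """SM-based neuromodel: coarse model evaluated at the mapped parameters."""
    xc = net.predict(np.array([[xf]]))[0]
    return coarse(xc, f)

xf_test = 1.0
err = np.max(np.abs(neuromodel(xf_test, freq) - fine(xf_test, freq)))
print(f"max |neuromodel - fine| at xf={xf_test}: {err:.3f}")
```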

216 citations

Journal ArticleDOI
04 Dec 2006
TL;DR: The accuracy and efficiency of the technique are illustrated through the design and statistical analysis of a classical synthetic problem and a microstrip notch filter with mitered bends.
Abstract: A computationally efficient method for highly accurate electromagnetics-based statistical analysis and yield estimation of RF and microwave circuits is described in this paper. The statistical analysis is realized around a space-mapped nominal solution. Our method consists of applying a constrained Broyden-based linear input space-mapping approach to design, followed by an output neural space-mapping modeling process in which not only the responses but also the design parameters and the independent variable are used as inputs to the output neural network. The output neural network is trained using reduced sets of training and testing data generated around the space-mapped nominal solution. We illustrate the accuracy and efficiency of our technique through the design and statistical analysis of a classical synthetic problem and a microstrip notch filter with mitered bends.
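
The statistical-analysis step this paper accelerates is, at its core, Monte Carlo yield estimation run on a fast surrogate instead of the EM simulator. The sketch below shows that step under assumed tolerances, a toy notch-shaped surrogate_response, and an assumed rejection spec; none of these values come from the paper.

```python
# Sketch of Monte Carlo yield estimation around a nominal design point,
# using a fast surrogate in place of repeated EM simulations (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def surrogate_response(x, f):
    """Assumed fast surrogate (e.g., a space-mapped/neural model) of |S21| in dB."""
    return -30.0 * np.exp(-((f - x[0]) / x[1]) ** 2)   # toy notch shape

x_nominal = np.array([2.0, 0.15])        # nominal design (center frequency, width factor)
sigma = np.array([0.02, 0.005])          # assumed manufacturing tolerances (1-sigma)
freq = np.linspace(1.9, 2.1, 101)

def meets_spec(resp):
    """Assumed spec: at least 28 dB rejection over 1.98-2.02 GHz."""
    band = (freq >= 1.98) & (freq <= 2.02)
    return np.all(resp[band] <= -28.0)

# Monte Carlo over the statistically perturbed designs.
N = 5000
outcomes = [
    meets_spec(surrogate_response(x_nominal + sigma * rng.standard_normal(2), freq))
    for _ in range(N)
]
print(f"estimated yield: {np.mean(outcomes):.1%}")
```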

147 citations

Journal ArticleDOI
TL;DR: In this article, a neural space-mapping (NSM) optimization for electromagnetic-based design is proposed, where the initial mapping is established by performing upfront fine-model analyses at a reduced number of base points.
Abstract: We propose, for the first time, neural space-mapping (NSM) optimization for electromagnetic-based design. NSM optimization exploits our space-mapping (SM)-based neuromodeling techniques to efficiently approximate the mapping. A novel procedure that does not require troublesome parameter extraction to predict the next point is proposed. The initial mapping is established by performing upfront fine-model analyses at a reduced number of base points. Coarse-model sensitivities are exploited to select those base points. Huber optimization is used to train, without testing points, simple SM-based neuromodels at each NSM iteration. The technique is illustrated by a high-temperature superconducting quarter-wave parallel coupled-line microstrip filter and a bandstop microstrip filter with quarter-wave resonant open stubs.
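
A much-simplified version of this kind of iteration can be sketched as below: a simple input mapping is refit from the accumulated fine-model responses at each step (without per-point parameter extraction), and the cheap coarse model is optimized through that mapping to propose the next fine-model analysis. The toy one-parameter resonator models, the linear map standing in for the SM-based neuromodel, and the brute-force fitting and search are illustrative assumptions, not the paper's Huber-trained procedure.

```python
# Sketch of a neural-space-mapping-style optimization loop (illustrative).
import numpy as np

freq = np.linspace(0.5, 1.5, 41)

def coarse(xc, f):                               # cheap circuit model
    return 1.0 / (1.0 + ((f - xc) / 0.05) ** 2)

def fine(xf, f):                                 # stand-in for the EM simulator
    return 1.0 / (1.0 + ((f - (1.05 * xf + 0.02)) / 0.05) ** 2)

target = coarse(1.0, freq)                        # design goal: resonance at 1.0

# Upfront fine-model analyses at a reduced number of base points (assumed choice).
xf_pts = [0.8, 1.1]
R_fine = [fine(x, freq) for x in xf_pts]

a_grid = np.linspace(0.8, 1.3, 51)                # mapping xc = a*xf + b;
b_grid = np.linspace(-0.2, 0.2, 41)               # a linear map stands in for the ANN

for it in range(5):
    # "Train" the mapping on the accumulated fine responses (brute-force fit).
    best, (a, b) = np.inf, (1.0, 0.0)
    for ai in a_grid:
        for bi in b_grid:
            e = sum(np.sum((coarse(ai * x + bi, freq) - r) ** 2)
                    for x, r in zip(xf_pts, R_fine))
            if e < best:
                best, (a, b) = e, (ai, bi)
    # Optimize the surrogate coarse(a*xf + b, f) against the target response.
    xf_cand = np.linspace(0.5, 1.5, 400)
    obj = [np.sum((coarse(a * x + b, freq) - target) ** 2) for x in xf_cand]
    xf_new = float(xf_cand[int(np.argmin(obj))])
    r_new = fine(xf_new, freq)                     # one new fine-model analysis
    err = np.max(np.abs(r_new - target))
    print(f"iter {it}: xf = {xf_new:.4f}, max |R_fine - target| = {err:.4f}")
    if err < 1e-2:
        break
    xf_pts.append(xf_new)
    R_fine.append(r_new)
```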

108 citations

01 Nov 2000
TL;DR: This work proposes, for the first time, neural space-mapping (NSM) optimization for electromagnetic based design and proposes a novel procedure that does not require troublesome parameter extraction to predict the next point.
Abstract: We review Neural Space Mapping (NSM) optimization for electromagnetic-based design of RF and microwave circuits. NSM optimization exploits our Space Mapping-based neuromodeling techniques to efficiently approximate a suitable mapping at each iteration. Coarse model sensitivities are exploited to select suitable fine model base points for the initial mapping.

106 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: For the first time, a mathematical motivation for space mapping is presented and SM is placed into the context of classical optimization; the aim of SM is to achieve a satisfactory solution with a minimal number of computationally expensive "fine" model evaluations.
Abstract: We review the space-mapping (SM) technique and the SM-based surrogate (modeling) concept and their applications in engineering design optimization. For the first time, we present a mathematical motivation and place SM into the context of classical optimization. The aim of SM is to achieve a satisfactory solution with a minimal number of computationally expensive "fine" model evaluations. SM procedures iteratively update and optimize surrogates based on a fast physically based "coarse" model. Proposed approaches to SM-based optimization include the original algorithm, the Broyden-based aggressive SM algorithm, various trust-region approaches, neural SM, and implicit SM. Parameter extraction is an essential SM subproblem. It is used to align the surrogate (enhanced coarse model) with the fine model. Different approaches to enhance uniqueness are suggested, including the recent gradient parameter-extraction approach. Novel physical illustrations are presented, including the cheese-cutting and wedge-cutting problems. Significant practical applications are reviewed.
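
The Broyden-based aggressive SM algorithm mentioned in this review can be sketched compactly: parameter extraction aligns the coarse model with each new fine-model response, and a Broyden rank-one update refines the mapping approximation. The linear toy models below, and the least-squares extraction they permit, are illustrative assumptions standing in for real EM and circuit models.

```python
# Sketch of Broyden-based aggressive space mapping (illustrative toy models).
import numpy as np

freq = np.linspace(0.0, 1.0, 21)
PHI = np.column_stack([np.ones_like(freq), freq])    # coarse-model basis [1, f]

def coarse(xc):                                       # cheap model: xc[0] + xc[1]*f
    return PHI @ xc

# "Fine" model: same structure, but seen through a hidden parameter distortion.
M = np.array([[1.10, 0.00], [0.05, 0.95]])
t = np.array([0.10, -0.05])
def fine(xf):
    return coarse(M @ xf + t)

xc_star = np.array([1.0, 2.0])                        # coarse-model optimal design
B = np.eye(2)                                         # initial mapping Jacobian estimate
xf = xc_star.copy()                                   # start at the coarse optimum

def extract(r_fine):
    """Parameter extraction: coarse parameters that best reproduce r_fine (least squares)."""
    return np.linalg.lstsq(PHI, r_fine, rcond=None)[0]

f_res = extract(fine(xf)) - xc_star                   # residual in coarse-parameter space
for it in range(10):
    if np.linalg.norm(f_res) < 1e-8:
        break
    h = np.linalg.solve(B, -f_res)                    # quasi-Newton step
    xf = xf + h
    f_new = extract(fine(xf)) - xc_star
    B = B + np.outer(f_new - f_res - B @ h, h) / (h @ h)   # Broyden rank-one update
    f_res = f_new
    print(f"iter {it}: xf = {xf}, ||f|| = {np.linalg.norm(f_res):.2e}")

print("fine-model design aligned with coarse optimum:", xf)
```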

1,044 citations

Journal ArticleDOI
TL;DR: Fundamental concepts in the emerging area of neural-network computational modules are described, aimed at teaching RF/microwave engineers what neural networks are, why they are useful, when they can be used, and how to use them.
Abstract: Neural-network computational modules have recently gained recognition as an unconventional and useful tool for RF and microwave modeling and design. Neural networks can be trained to learn the behavior of passive/active components/circuits. A trained neural network can be used for high-level design, providing fast and accurate answers to the task it has learned. Neural networks are attractive alternatives to conventional methods such as numerical modeling methods, which can be computationally expensive; analytical methods, which can be difficult to obtain for new devices; or empirical modeling solutions, whose range and accuracy may be limited. This tutorial describes fundamental concepts in this emerging area aimed at teaching RF/microwave engineers what neural networks are, why they are useful, when they can be used, and how to use them. Neural-network structures and their training methods are described from the RF/microwave designer's perspective. Electromagnetics-based training for passive component models and physics-based training for active device models are illustrated. Circuit design and yield optimization using passive/active neural models are also presented. A multimedia slide presentation along with narrative audio clips is included in the electronic version of this paper. A hyperlink to the NeuroModeler demonstration software is provided to allow readers to practice neural-network-based design concepts.

608 citations

Journal ArticleDOI
TL;DR: A survey of modeling and optimization strategies that may help to solve High-dimensional, Expensive (computationally), Black-box (HEB) problems; two promising approaches for solving HEB problems are identified.
Abstract: The integration of optimization methodologies with computational analyses/simulations has a profound impact on product design. Such integration, however, faces multiple challenges. The most eminent challenges arise from high dimensionality of problems, computationally expensive analysis/simulation, and unknown function properties (i.e., black-box functions). The combination of these three challenges severely aggravates the difficulty and becomes a major hurdle for design optimization. This paper provides a survey of related modeling and optimization strategies that may help to solve High-dimensional, Expensive (computationally), Black-box (HEB) problems. The survey selects 207 references, including multiple historical reviews on relevant subjects, from more than 1,000 papers in a variety of disciplines. The survey covers three areas: strategies tackling the high dimensionality of problems, model approximation techniques, and direct optimization strategies for computationally expensive black-box functions along with promising ideas behind non-gradient optimization algorithms. Major contributions in each area are discussed and presented in an organized manner. The survey shows that direct modeling and optimization strategies to address HEB problems are scarce and sporadic, partially due to the difficulty of the problem itself. Moreover, it reveals that current modeling research tends to focus on sampling and modeling techniques themselves and neglects studying and taking advantage of the characteristics of the underlying expensive functions. Based on the survey results, two promising approaches are identified to solve HEB problems. Directions for future research are also discussed.

535 citations

Journal ArticleDOI
TL;DR: In the proposed extension of the sequential kriging optimization method, surrogate systems are exploited to reduce the total evaluation cost; the method manifests sensible search patterns, robust performance, and an appreciable reduction in total evaluation cost compared to the original method.
Abstract: When cost per evaluation on a system of interest is high, surrogate systems can provide cheaper but lower-fidelity information. In the proposed extension of the sequential kriging optimization method, surrogate systems are exploited to reduce the total evaluation cost. The method utilizes data on all systems to build a kriging metamodel that provides a global prediction of the objective function and a measure of prediction uncertainty. The location and fidelity level of the next evaluation are selected by maximizing an augmented expected improvement function, which is connected with the evaluation costs. The proposed method was applied to test functions from the literature and a metal-forming process design problem via finite element simulations. The method manifests sensible search patterns, robust performance, and appreciable reduction in total evaluation cost as compared to the original method.
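
The building block behind this method is a kriging (Gaussian-process) prediction of the objective with an uncertainty estimate, from which an expected-improvement criterion selects the next evaluation. The sketch below shows the standard expected-improvement calculation on a minimal kriging predictor; the toy objective, kernel length scale, and sample points are assumptions, and the paper's augmented expected improvement additionally weights this criterion by fidelity-correlation and evaluation-cost terms not reproduced here.

```python
# Minimal sketch of kriging-based expected improvement (illustrative).
import numpy as np
from scipy.stats import norm

def objective(x):                       # expensive black-box stand-in
    return np.sin(3 * x) + 0.5 * x

# A few expensive evaluations so far.
X = np.array([0.1, 0.4, 0.9, 1.6])
y = objective(X)

def gp_predict(xq, X, y, length=0.3, noise=1e-8):
    """Simple zero-mean GP (kriging) predictor with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(xq, X)
    alpha = np.linalg.solve(K, y)
    mu = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

xq = np.linspace(0.0, 2.0, 201)
mu, sd = gp_predict(xq, X, y)

# Expected improvement over the best observed value (minimization).
y_best = y.min()
z = (y_best - mu) / sd
ei = (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

x_next = xq[np.argmax(ei)]
print(f"next expensive evaluation suggested at x = {x_next:.3f}")
```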

407 citations