Author

Robert F. Stengel

Bio: Robert F. Stengel is an academic researcher at Princeton University. His research focuses on control systems and robustness (computer science). He has an h-index of 40 and has co-authored 213 publications receiving 8,369 citations. Previous affiliations include the Massachusetts Institute of Technology and the Charles Stark Draper Laboratory.


Papers
Book
20 Sep 1994

1,720 citations

Journal ArticleDOI
TL;DR: Robust flight control systems with a nonlinear dynamic inversion structure are synthesized for the longitudinal motion of a hypersonic aircraft containing twenty-eight uncertain inertial and aerodynamic parameters; system robustness is characterized by the probability of instability and the probabilities of violating thirty-eight performance criteria under variations of the uncertain parameters.
Abstract: For the longitudinal motion of a hypersonic aircraft containing twenty-eight uncertain inertial and aerodynamic parameters, robust flight control systems with a nonlinear dynamic inversion structure are synthesized. System robustness is characterized by the probability of instability and the probabilities of violating thirty-eight performance criteria under variations of the uncertain system parameters. The design cost function is defined as a weighted quadratic sum of these probabilities. The control system is designed using a genetic algorithm to search the design parameter space of the nonlinear dynamic inversion structure; during the search, Monte Carlo evaluation estimates the system robustness and the cost function. This approach explicitly accounts for the design requirements and makes full use of engineering knowledge in the design process to produce practical and efficient control systems. Nomenclature: speed of sound, ft/s; drag coefficient; lift coefficient; moment coefficients due to pitch rate, angle of attack, and elevator deflection; thrust coefficient; reference length, 80 ft; drag, lbf; altitude, ft; moment of inertia, 7 × 10⁶ slug-ft²; lift, lbf; Mach number; pitching moment, lbf-ft; mass, 9,375 slugs; pitch rate, rad/s; radius of the Earth, 20,903,500 ft; radial distance from Earth's center, ft; reference area, 3,603 ft²; thrust, lbf; velocity, ft/s; angle of attack, rad; throttle setting; flight-path angle, rad; elevator deflection, rad; gravitational constant, 1.39 × 10¹⁶ ft³/s²; density of air, slugs/ft³.
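To make the design loop concrete, here is a minimal, self-contained sketch of the stochastic-robustness approach the abstract describes: Monte Carlo sampling estimates violation probabilities for a candidate controller, and a simple genetic algorithm searches the gain space. The toy second-order plant, the two criteria, the weights, and the GA settings are all illustrative assumptions, not the paper's model or code.

```python
# Sketch of stochastic-robustness design: GA search scored by Monte Carlo
# estimates of violation probabilities (toy plant, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)

def violation_flags(gains, theta):
    """Boolean flags per criterion: [unstable, damping ratio below 0.4]."""
    k1, k2 = gains
    a, b = theta                            # uncertain plant parameters
    # Closed loop x'' = (a - b*k2) x' + (-1 - b*k1) x in state-space form
    A = np.array([[0.0, 1.0],
                  [-1.0 - b * k1, a - b * k2]])
    eig = np.linalg.eigvals(A)
    unstable = np.any(eig.real >= 0.0)
    wn = np.sqrt(np.abs(np.prod(eig)))      # natural frequency from det(A)
    zeta = -np.sum(eig.real) / (2.0 * wn) if wn > 0 else 0.0
    return np.array([unstable, zeta < 0.4])

def cost(gains, n_mc=300, weights=(10.0, 1.0)):
    """Weighted quadratic sum of Monte Carlo violation probabilities."""
    thetas = 1.0 + 0.3 * rng.uniform(-1, 1, size=(n_mc, 2))  # +/-30% spread
    p_hat = np.mean([violation_flags(gains, th) for th in thetas], axis=0)
    return float(np.dot(weights, p_hat ** 2))

def genetic_search(pop=30, gens=40):
    """Crude GA: truncation selection plus Gaussian mutation."""
    x = rng.uniform(0.0, 5.0, size=(pop, 2))          # candidate gains
    for _ in range(gens):
        order = np.argsort([cost(g) for g in x])
        elite = x[order[: pop // 2]]                  # keep the best half
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        x = np.vstack([elite, kids + 0.2 * rng.normal(size=kids.shape)])
    return x[np.argmin([cost(g) for g in x])]

print("robust gains:", genetic_search())
```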

544 citations

Journal ArticleDOI
TL;DR: Inverse dynamics are generated for specific command variable sets of a 12-state nonlinear aircraft model to develop a control system that provides satisfactory response over the entire flight envelope.
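A toy sketch of the nonlinear-dynamic-inversion idea behind this paper, using a pendulum model rather than the 12-state aircraft (the dynamics, gains, and command here are illustrative assumptions): the control law inverts the rate dynamics so the commanded variable follows chosen second-order error dynamics.

```python
# Nonlinear dynamic inversion on a toy pendulum: x' = f(x) + g(x) u,
# command variable y = x1 (angle). Illustrative stand-in, not the paper's model.
import numpy as np

def f(x):
    th, om = x
    return np.array([om, -9.81 * np.sin(th) - 0.1 * om])  # pendulum dynamics

def g(x):
    return np.array([0.0, 1.0])                           # torque enters rate eq.

def ndi_control(x, th_cmd, kp=9.0, kd=6.0):
    th, om = x
    v = kp * (th_cmd - th) - kd * om      # desired angular acceleration
    return (v - f(x)[1]) / g(x)[1]        # invert the rate dynamics

# Forward-Euler simulation of a 1 rad step command
x, dt = np.array([0.0, 0.0]), 0.01
for _ in range(500):
    u = ndi_control(x, th_cmd=1.0)
    x = x + dt * (f(x) + g(x) * u)
print("final angle (rad):", round(x[0], 3))   # settles near the command
```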

500 citations

Book
01 Aug 1986
Abstract: Contents: The Mathematics of Control and Estimation; Optimal Trajectories and Neighboring-Optimal Solutions; Optimal State Estimation; Stochastic Optimal Control; Linear Multivariable Control; Epilogue; Index.

407 citations

Journal ArticleDOI
TL;DR: An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented; algebraic training shows faster execution speeds and better generalization properties than contemporary optimization techniques.
Abstract: An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
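A minimal sketch of the exact input-output matching case described above, under assumed details (fixed random input-side weights, one hidden node per sample, tanh units): the weight equations for the output layer become a square linear system, so training reduces to a single linear solve.

```python
# Exact matching of a batch training set via linear algebra: with one hidden
# node per sample and fixed input weights, output weights solve H v = y.
import numpy as np

rng = np.random.default_rng(1)

def train_exact(X, y):
    n = len(X)                             # one hidden node per sample
    W = rng.normal(size=(X.shape[1], n))   # fixed random input weights
    b = rng.normal(size=n)
    H = np.tanh(X @ W + b)                 # n x n hidden-response matrix
    v = np.linalg.solve(H, y)              # linear "weight equations"
    return W, b, v

def predict(X, W, b, v):
    return np.tanh(X @ W + b) @ v

# Fit a smooth 1-D function exactly at the training points
X = np.linspace(-1, 1, 8).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0])
W, b, v = train_exact(X, y)
print(np.max(np.abs(predict(X, W, b, v) - y)))   # ~0 at training inputs
```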

295 citations


Cited by
Journal ArticleDOI
TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes, analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
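A compact sketch of the ELM recipe as summarized in the TL;DR (the network size, activation, and data below are illustrative assumptions): hidden-node parameters are drawn at random and never trained, and the linear output weights are determined analytically, here via the Moore-Penrose pseudoinverse.

```python
# Extreme Learning Machine sketch: random untrained hidden layer,
# analytic least-squares solution for the output weights.
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Regress a smooth 2-D target function
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```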

10,217 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal ArticleDOI
TL;DR: Central issues of reinforcement learning are discussed, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
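Two of the survey's central issues, the exploration/exploitation trade-off and learning from delayed reinforcement, show up even in a tiny tabular Q-learning example; the chain MDP, epsilon-greedy policy, and learning parameters below are illustrative assumptions, not drawn from the paper.

```python
# Tabular Q-learning on a chain MDP: reward arrives only at the far end,
# and epsilon-greedy action selection balances exploration and exploitation.
import numpy as np

rng = np.random.default_rng(3)
n_states = 6                            # states 0..5; action 1 moves right
Q = np.zeros((n_states, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    s = 0
    while s != n_states - 1:            # episode ends at the rewarding state
        # Epsilon-greedy: explore with probability eps, else exploit
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0   # delayed reinforcement
        # TD update bootstraps value back along the chain
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

print("greedy policy:", np.argmax(Q, axis=1))    # should prefer action 1
```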

6,895 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and extends the treatment to planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
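As a taste of the sampling-based motion planning at the book's center, here is a bare-bones rapidly exploring random tree (RRT) in the unit square; the single disc obstacle, step size, goal bias, and goal test are illustrative assumptions rather than the book's pseudocode.

```python
# Minimal RRT: grow a tree from start by steering toward random samples,
# with occasional goal-biased samples, until a node lands near the goal.
import numpy as np

rng = np.random.default_rng(4)
start, goal, step = np.array([0.1, 0.1]), np.array([0.9, 0.9]), 0.05

def collision_free(p):
    return np.linalg.norm(p - [0.5, 0.5]) > 0.2     # one disc obstacle

nodes, parent = [start], {0: None}
for _ in range(5000):
    q = goal if rng.random() < 0.05 else rng.uniform(0, 1, 2)   # goal bias
    i = int(np.argmin([np.linalg.norm(q - n) for n in nodes]))  # nearest node
    new = nodes[i] + step * (q - nodes[i]) / np.linalg.norm(q - nodes[i])
    if collision_free(new):
        parent[len(nodes)] = i
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:        # close enough to goal
            break

# Recover the path by walking parent pointers back from the last node
k, path = len(nodes) - 1, []
while k is not None:
    path.append(nodes[k])
    k = parent[k]
print("path length (nodes):", len(path))
```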

6,340 citations

Posted Content
TL;DR: A survey of reinforcement learning from a computer science perspective can be found in this article, where the authors discuss the central issues of RL, including trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.

5,970 citations