Author
Richard H. Middleton
Other affiliations: Hamilton Institute, University of California, Maynooth University
Bio: Richard H. Middleton is an academic researcher from the University of Newcastle. He has contributed to research in the topics of control theory and linear systems, has an h-index of 48, and has co-authored 393 publications receiving 12,037 citations. His previous affiliations include the Hamilton Institute and the University of California.
Papers published on a yearly basis
Papers
01 Jan 2002
TL;DR: In this article, a new approach was proposed for adaptive output feedback control of uncertain nonlinear systems using neural networks, with a simple linear observer introduced to estimate the derivatives of the tracking error.
Abstract: A new approach has been proposed for adaptive output feedback control of uncertain nonlinear systems using neural networks. A simple linear observer was introduced to estimate the derivatives of the tracking error. These estimates are used as inputs to the neural network and in the adaptation laws as an error signal. Ultimate boundedness of all error signals was proven by Lyapunov’s direct method. Simulations of a second-order system illustrated the theoretical results.
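The linear-observer idea in this summary can be sketched as a high-gain derivative estimator; the gains, test signal, and sampling below are illustrative choices, not the paper's design:

```python
import numpy as np

def derivative_observer(e_meas, dt, l1=200.0, l2=1.0e4):
    """Linear observer estimating a signal and its first derivative from
    samples of the signal alone; observer poles at s = -100 (double)."""
    e1, e2 = e_meas[0], 0.0          # state estimates: signal, derivative
    est = []
    for e in e_meas:
        innov = e - e1               # measurement innovation
        e1, e2 = e1 + dt * (e2 + l1 * innov), e2 + dt * (l2 * innov)
        est.append((e1, e2))
    return np.array(est)

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
e = np.sin(t)                        # stand-in "tracking error" signal
est = derivative_observer(e, dt)
# after the initial transient, est[:, 1] approximates de/dt = cos(t)
err = np.abs(est[5000:, 1] - np.cos(t[5000:])).max()
```

The derivative estimate comes only from the innovation term, so no differentiation of the noisy measurement is needed; higher gains shrink the lag at the cost of noise amplification.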
01 Aug 2019
TL;DR: To study robust local stability of a controller design based on feedback linearisation applied to a wind turbine with a permanent magnet synchronous generator, an analysis of the eigenvalues of the small-signal model is employed.
Abstract: The objective of this paper is to assess parameter robustness of a controller design based on feedback linearisation applied to a wind turbine with a permanent magnet synchronous generator (PMSG). To study robust local stability, we employ an analysis of the eigenvalues of the small signal model. Some controller parameters are deduced based on measured system variables. It is therefore of interest to know bounds on measurement errors that are guaranteed to preserve local stability. Finally, simulations of a 2MW direct drive PMSG wind turbine system illustrate the effectiveness of the method.
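The eigenvalue-based robustness check described above can be sketched as follows; the two-state closed loop, nominal gain, and error range are hypothetical stand-ins, not the paper's PMSG small-signal model:

```python
import numpy as np

def locally_stable(A):
    """Local (small-signal) stability: all eigenvalues strictly in the left half-plane."""
    return np.real(np.linalg.eigvals(A)).max() < 0.0

def A_closed_loop(k_err):
    # Hypothetical 2-state closed loop; k_err models a relative error in a
    # measured quantity that enters the feedback-linearising gain.
    k = 5.0 * (1.0 + k_err)
    return np.array([[0.0,      1.0],
                     [k - 6.0, -2.0]])

# Scan candidate measurement-error levels and report whether local
# stability is preserved (here it is lost once k_err reaches +0.2).
for k_err in (-0.5, -0.2, 0.0, 0.1, 0.2, 0.5):
    print(f"k_err = {k_err:+.1f}: stable = {locally_stable(A_closed_loop(k_err))}")
```

In this toy model the characteristic polynomial is s^2 + 2s + (6 - k), so local stability requires k < 6, i.e. k_err < 0.2; a sweep like this yields the guaranteed bound on measurement error.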
TL;DR: Algorithms to connect and disconnect nodes to/from an existing graph are proposed, which are able to maintain the regularity of a graph while requiring minimal reconfiguration of the network.
Abstract: In this letter we propose algorithms to connect and disconnect nodes to/from an existing graph, which are able to maintain the regularity of a graph while requiring minimal reconfiguration of the network. Further, simulations suggest that the algorithms maintain a minimum bound on the algebraic connectivity of the graph. These properties of the algorithms suit their use in plug-and-play networks.
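The algebraic connectivity referred to above is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value); a minimal check of both properties, using a 4-cycle as a stand-in regular graph, might look like:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

def is_regular(adj):
    """A graph is regular when every node has the same degree."""
    deg = np.asarray(adj).sum(axis=1)
    return bool(np.all(deg == deg[0]))

# 4-cycle: 2-regular, with algebraic connectivity 2 - 2*cos(2*pi/4) = 2
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(is_regular(C4), algebraic_connectivity(C4))
```

A connect/disconnect algorithm of the kind proposed would recompute both quantities after each reconfiguration to verify that regularity holds and that the Fiedler value has not dropped below the desired bound.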
24 Jun 2014
TL;DR: It is shown that if a constant inter-vehicle spacing policy is used, the interconnection becomes unstable once the string size surpasses a critical value, but that with a constant time headway spacing policy stability can be recovered for any string size.
Abstract: In this paper, we study a formation control scheme for a 1D string of vehicles. Each member tracks the movement of its immediate predecessor but also the first vehicle tracks the position of the last member of the string. We discuss conditions for the stability of the full interconnected system and show that if a constant inter-vehicle spacing policy is used, the interconnection becomes unstable after the string size surpasses a critical value. Moreover, we show that if constant time headway is used in the spacing policy, stability can be recovered for any string size. String stability is also achieved as a consequence.
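A simplified predecessor-following platoon with a constant-time-headway spacing policy can illustrate the attenuation property; the double-integrator vehicles, gains, and leader manoeuvre below are illustrative assumptions, not the paper's circular scheme (which also has the first vehicle tracking the last):

```python
import numpy as np

# Predecessor-following string with constant time headway h.
N, dt, T = 8, 0.01, 40.0              # followers, step size, horizon
kp, h, L0, v0 = 2.0, 1.2, 5.0, 10.0   # h >= sqrt(2/kp) avoids error amplification

x = np.array([-i * (L0 + h * v0) for i in range(N + 1)])  # equilibrium spacing
v = np.full(N + 1, v0)
peak = np.zeros(N)                    # peak |spacing error| per follower

for k in range(int(T / dt)):
    t = k * dt
    v[0] = v0 if t < 5.0 else v0 - 2.0          # leader velocity step at t = 5 s
    err = x[:-1] - x[1:] - L0 - h * v[1:]       # spacing errors, followers 1..N
    a = np.concatenate(([0.0], kp * err))       # headway feedback law
    x += dt * v
    v += dt * a
    peak = np.maximum(peak, np.abs(err))

print(peak)   # peak spacing errors decay along the string
```

For this law the spacing error propagates between neighbours through Γ(s) = kp / (s² + kp·h·s + kp), and |Γ(jω)| ≤ 1 for all ω exactly when h² ≥ 2/kp; that is why a large enough time headway restores string stability where a constant spacing policy fails.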
TL;DR: It is shown that the simple inclusion of implicit constraints in a controller formulation results in a controller that achieves maximal controllability for a class of open-loop unstable systems.
Cited by
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality.
Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
33,785 citations
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
13,246 citations
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones which are characterised by planar laminated and unlaminated mud-dominated facies; and 3) Calcidebrites which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These
9,929 citations
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or
7,563 citations
15 Oct 1995
TL;DR: In this book, the authors present a comprehensive treatment of adaptive control, covering models for dynamic systems, stability, on-line parameter estimation, adaptive observers, model reference adaptive control, adaptive pole placement, and robust adaptive control schemes.
Abstract: Contents:
1. Introduction. Control System Design Steps. Adaptive Control. A Brief History.
2. Models for Dynamic Systems. Introduction. State-Space Models. Input/Output Models. Plant Parametric Models. Problems.
3. Stability. Introduction. Preliminaries. Input/Output Stability. Lyapunov Stability. Positive Real Functions and Stability. Stability of LTI Feedback System. Problems.
4. On-Line Parameter Estimation. Introduction. Simple Examples. Adaptive Laws with Normalization. Adaptive Laws with Projection. Bilinear Parametric Model. Hybrid Adaptive Laws. Summary of Adaptive Laws. Parameter Convergence Proofs. Problems.
5. Parameter Identifiers and Adaptive Observers. Introduction. Parameter Identifiers. Adaptive Observers. Adaptive Observer with Auxiliary Input. Adaptive Observers for Nonminimal Plant Models. Parameter Convergence Proofs. Problems.
6. Model Reference Adaptive Control. Introduction. Simple Direct MRAC Schemes. MRC for SISO Plants. Direct MRAC with Unnormalized Adaptive Laws. Direct MRAC with Normalized Adaptive Laws. Indirect MRAC. Relaxation of Assumptions in MRAC. Stability Proofs in MRAC Schemes. Problems.
7. Adaptive Pole Placement Control. Introduction. Simple APPC Schemes. PPC: Known Plant Parameters. Indirect APPC Schemes. Hybrid APPC Schemes. Stabilizability Issues and Modified APPC. Stability Proofs. Problems.
8. Robust Adaptive Laws. Introduction. Plant Uncertainties and Robust Control. Instability Phenomena in Adaptive Systems. Modifications for Robustness: Simple Examples. Robust Adaptive Laws. Summary of Robust Adaptive Laws. Problems.
9. Robust Adaptive Control Schemes. Introduction. Robust Identifiers and Adaptive Observers. Robust MRAC. Performance Improvement of MRAC. Robust APPC Schemes. Adaptive Control of LTV Plants. Adaptive Control for Multivariable Plants. Stability Proofs of Robust MRAC Schemes. Stability Proofs of Robust APPC Schemes. Problems.
Appendices. Swapping Lemmas. Optimization Techniques. Bibliography. Index.
4,378 citations
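A minimal instance of the direct MRAC machinery catalogued above, for a scalar plant; the plant, gains, and reference below are illustrative choices, not an example from the book:

```python
import numpy as np

# Direct MRAC for a scalar plant.
#   Plant:      dy/dt   = a*y + u            (a = 2, unknown to the controller)
#   Model:      dym/dt  = -ym + r
#   Control:    u = -theta*y + r             (matching value theta* = a + 1 = 3)
#   Adaptation: dtheta/dt = gamma*e*y, e = y - ym   (Lyapunov-derived law)
a, gamma, dt, T = 2.0, 5.0, 1e-3, 30.0
y, ym, theta = 0.0, 0.0, 0.0
e_tail = []                                  # |e| over the final 5 seconds

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if np.sin(0.5 * t) >= 0.0 else -1.0   # square-wave reference
    e = y - ym
    u = -theta * y + r
    dy, dym, dtheta = a * y + u, -ym + r, gamma * e * y
    y, ym, theta = y + dt * dy, ym + dt * dym, theta + dt * dtheta
    if t > T - 5.0:
        e_tail.append(abs(e))

print(theta, max(e_tail))                    # theta settles near 3, |e| small
```

With V = e²/2 + (theta - theta*)²/(2·gamma), this adaptive law gives dV/dt = -e², so all signals stay bounded and the tracking error converges; parameter convergence additionally requires the reference to be sufficiently exciting, which the square wave provides here.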