Author

Richard H. Middleton

Bio: Richard H. Middleton is an academic researcher from the University of Newcastle. The author has contributed to research in topics including control theory and linear systems. The author has an h-index of 48 and has co-authored 393 publications receiving 12,037 citations. Previous affiliations of Richard H. Middleton include the Hamilton Institute and the University of California.


Papers
Proceedings ArticleDOI
17 Jul 2013
TL;DR: It is shown that if there is at most one open loop unstable plant pole, then the transient response will remain bounded as the control horizon tends to infinity, and will approach a value determined by the solution to a certain algebraic Riccati equation.
Abstract: Recently, a finite horizon minimum variance control problem was proposed using feedback over a Gaussian communication channel. Because only the terminal state is penalized, it was shown that linear communication and control strategies are optimal and achieve the information theoretic minimum cost. However, because the transient state is not penalized, the transient behavior can be poor. In the present paper, we show that if there is at most one open loop unstable plant pole, then the transient response will remain bounded as the control horizon tends to infinity, and will approach a value determined by the solution to a certain algebraic Riccati equation.
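As a rough sketch of the role the Riccati equation plays here, the Python snippet below solves a discrete algebraic Riccati equation for a scalar plant with a single open loop unstable pole and reads off the noise-driven steady-state cost; the plant, weights, and noise variance are assumptions for illustration, not values from the paper.

# Minimal sketch (assumed scalar example, not the paper's setup): the limiting
# transient cost for a plant with one unstable pole is tied to the stabilizing
# solution of a discrete algebraic Riccati equation (DARE).
import numpy as np
from scipy.linalg import solve_discrete_are

a = np.array([[1.5]])   # single open loop unstable pole at z = 1.5 (assumed)
b = np.array([[1.0]])   # input gain (assumed)
q = np.array([[1.0]])   # state weight (assumed)
r = np.array([[0.1]])   # control weight (assumed)
w_var = 1.0             # process-noise variance (assumed)

# Stabilizing DARE solution P of A'PA - P - A'PB(R + B'PB)^{-1}B'PA + Q = 0
P = solve_discrete_are(a, b, q, r)

# Associated optimal feedback gain and the per-step cost contributed by the noise
K = np.linalg.solve(r + b.T @ P @ b, b.T @ P @ a)
steady_state_cost = float(np.trace(P) * w_var)

print("DARE solution P =", P.ravel())
print("feedback gain K =", K.ravel())
print("noise-driven steady-state cost approx.", steady_state_cost)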

6 citations

Journal ArticleDOI
TL;DR: A more thorough investigation is presented of the sufficient conditions for robustness of one particular class of robust adaptive control algorithms, namely those employing a relative dead zone, and it is shown that the robustness properties obtained in the adaptive case are of the same order of magnitude as those achievable by a non-adaptive controller when the model is known.
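The relative dead-zone idea can be sketched in a few lines of Python: the parameter estimate is updated only when the prediction error exceeds a threshold scaled by a normalizing signal, so disturbances that are bounded relative to that signal cannot cause parameter drift. The scalar plant, disturbance bound, and gains below are illustrative assumptions, not taken from the paper.

# Minimal sketch of a relative dead-zone adaptive law (illustrative scalar
# example; the plant, disturbance model, and gains are assumptions). The
# estimate is updated only when the prediction error exceeds a threshold
# proportional to the normalizing signal.
import numpy as np

rng = np.random.default_rng(0)
theta_true, delta, gamma = 2.0, 0.1, 0.5   # true gain, disturbance level, step size
theta_hat = 0.0

for k in range(200):
    u = np.sin(0.1 * k)                            # excitation input
    d = delta * (1 + abs(u)) * rng.uniform(-1, 1)  # "relatively" bounded disturbance
    y = theta_true * u + d                         # measured output
    m2 = 1.0 + u**2                                # normalization
    e = y - theta_hat * u                          # prediction error
    if abs(e) > 2.0 * delta * np.sqrt(m2):         # relative dead zone: ignore small errors
        theta_hat += gamma * u * e / m2            # normalized gradient update

print("estimate after 200 steps:", theta_hat)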

6 citations

Proceedings ArticleDOI
04 Jun 2003
TL;DR: It is shown that, except in special cases, such a system will exhibit a tradeoff between disturbance response and stability robustness, and the severity of this tradeoff is governed by a dimensionless plant parameter.
Abstract: We consider a feedback system whose performance output differs from its measured output and show that, except in special cases, such a system will exhibit a tradeoff between disturbance response and stability robustness. The severity of this tradeoff is governed by a dimensionless plant parameter, and tends to be significant for systems with lightly damped zeros. The results are illustrated with the problem of noise cancellation in an acoustic duct.
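A toy illustration of why lightly damped zeros matter: if the path from the disturbance to the measured output has a deep notch at some frequency while the path to the performance output does not, feedback based on that measurement has little to work with there. The transfer functions below are invented for illustration; they are not the paper's duct model or its dimensionless parameter.

# Toy frequency-response comparison (assumed numbers, not the paper's model):
# a measured-output path with lightly damped zeros has a deep notch, so the
# controller sees almost nothing of the disturbance at that frequency even
# though the performance output is strongly affected.
import numpy as np
from scipy import signal

# Measured-output path G_yd: lightly damped zeros near 1 rad/s (zeta = 0.005)
zeros_poly = [1.0, 2 * 0.005 * 1.0, 1.0**2]
poles_poly = np.polymul([1.0, 2 * 0.1 * 0.7, 0.7**2], [1.0, 2 * 0.1 * 1.6, 1.6**2])
G_yd = signal.TransferFunction(zeros_poly, poles_poly)

# Performance-output path G_zd: same poles, no lightly damped zeros
G_zd = signal.TransferFunction([1.0], poles_poly)

w = np.logspace(-1, 1, 400)
_, mag_y, _ = signal.bode(G_yd, w)
_, mag_z, _ = signal.bode(G_zd, w)

i0 = np.argmin(np.abs(w - 1.0))
print("disturbance-to-measured-output gain near the zero: %.1f dB" % mag_y[i0])
print("disturbance-to-performance-output gain there:      %.1f dB" % mag_z[i0])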

6 citations

Journal ArticleDOI
TL;DR: In this paper, a mathematical model is presented which is able to predict the entire trajectory of HIV/AIDS dynamics, and a possible explanation for this progression is examined; a dynamical analysis of the model reveals a set of parameters which may produce two real equilibria.
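For a sense of how two equilibria can arise in such models, the sketch below simulates a generic within-host HIV model (target cells, infected cells, free virus) and computes its basic reproduction number; the equations and parameter values are standard textbook choices, not necessarily the model or parameters used in the paper.

# Generic within-host HIV model (standard target-cell/infected-cell/virus
# equations; illustrative parameter values, not the paper's). When R0 > 1 the
# model has both an infection-free equilibrium and an infected equilibrium.
from scipy.integrate import solve_ivp

s, d, beta, delta, p, c = 10.0, 0.01, 2e-4, 0.5, 100.0, 5.0  # assumed rates

def hiv(t, x):
    T, I, V = x                       # target cells, infected cells, free virus
    return [s - d * T - beta * T * V,
            beta * T * V - delta * I,
            p * I - c * V]

R0 = beta * p * s / (d * delta * c)   # basic reproduction number (here R0 > 1)
T_free = s / d                        # infection-free equilibrium target-cell level
T_inf = delta * c / (beta * p)        # infected-equilibrium target-cell level
print("R0 =", R0, "  equilibria at T =", T_free, "and T =", T_inf)

sol = solve_ivp(hiv, (0, 500), [T_free, 0.0, 1e-3])
print("state at t = 500:", sol.y[:, -1])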

6 citations

Posted Content
TL;DR: The requirement that the system's Hamiltonian is strictly convex and separable is relaxed, which allows the controller to be applied to a large class of mechanical systems, including underactuated systems with a non-constant mass matrix.
Abstract: In this paper we present a method to robustify energy-shaping controllers for port-Hamiltonian (pH) systems by adding an integral action that rejects unknown additive disturbances. The proposed controller preserves the pH structure and, by adding to the new energy function a suitable cross term between the plant and the controller coordinates, it avoids the unnatural coordinate transformation used in the past. This paper extends our previous work by relaxing the requirement that the system's Hamiltonian is strictly convex and separable, which allows the controller to be applied to a large class of mechanical systems, including underactuated systems with a non-constant mass matrix. Furthermore, it is shown that the proposed integral action control is robust against unknown damping in the case of fully-actuated systems.
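The disturbance-rejection idea can be illustrated with a much simpler linear example than the paper's structure-preserving port-Hamiltonian construction: an integral state driven by the position error settles at a value that cancels an unknown constant force. The mass, gains, and disturbance below are assumptions for illustration only.

# Much simpler linear sketch of the disturbance-rejection idea only; it does
# NOT reproduce the pH-structure-preserving design of the paper. Plant and
# gains are assumed (stability needs kv * kp > m * ki here).
from scipy.integrate import solve_ivp

m, kp, kv, ki, w = 1.0, 4.0, 2.0, 1.0, 0.7   # mass, gains, unknown constant disturbance

def closed_loop(t, x):
    q, p, xc = x                        # position, momentum, integral state
    u = -kp * q - kv * p / m - ki * xc  # PD-style shaping plus integral action
    return [p / m, u + w, q]            # xc integrates the position error

sol = solve_ivp(closed_loop, (0, 60), [1.0, 0.0, 0.0])
q_end, _, xc_end = sol.y[:, -1]
print("final position:", q_end)             # approximately 0
print("ki * integral state:", ki * xc_end)  # approximately w: the disturbance is cancelled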

6 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed at first an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
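As a toy illustration of the fourth category, the snippet below learns a per-user mail filter from a handful of labelled examples rather than hand-coded rules; the data and the scikit-learn pipeline are illustrative choices, not anything prescribed by the article.

# Tiny example of learning a mail filter from labelled messages (toy data;
# scikit-learn chosen for brevity, not prescribed by the article).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting moved to 3pm",
            "cheap loans, act now", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]   # 1 = the user rejected the message

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, rejected)
print(filter_model.predict(["free prize meeting"]))  # predicted reject/keep label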

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These …

9,929 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Proceedings ArticleDOI
15 Oct 1995
TL;DR: This text covers models for dynamic systems, stability, on-line parameter estimation, parameter identifiers and adaptive observers, model reference adaptive control, adaptive pole placement control, robust adaptive laws, and robust adaptive control schemes.
Abstract:
1. Introduction. Control System Design Steps. Adaptive Control. A Brief History.
2. Models for Dynamic Systems. Introduction. State-Space Models. Input/Output Models. Plant Parametric Models. Problems.
3. Stability. Introduction. Preliminaries. Input/Output Stability. Lyapunov Stability. Positive Real Functions and Stability. Stability of LTI Feedback System. Problems.
4. On-Line Parameter Estimation. Introduction. Simple Examples. Adaptive Laws with Normalization. Adaptive Laws with Projection. Bilinear Parametric Model. Hybrid Adaptive Laws. Summary of Adaptive Laws. Parameter Convergence Proofs. Problems.
5. Parameter Identifiers and Adaptive Observers. Introduction. Parameter Identifiers. Adaptive Observers. Adaptive Observer with Auxiliary Input. Adaptive Observers for Nonminimal Plant Models. Parameter Convergence Proofs. Problems.
6. Model Reference Adaptive Control. Introduction. Simple Direct MRAC Schemes. MRC for SISO Plants. Direct MRAC with Unnormalized Adaptive Laws. Direct MRAC with Normalized Adaptive Laws. Indirect MRAC. Relaxation of Assumptions in MRAC. Stability Proofs in MRAC Schemes. Problems.
7. Adaptive Pole Placement Control. Introduction. Simple APPC Schemes. PPC: Known Plant Parameters. Indirect APPC Schemes. Hybrid APPC Schemes. Stabilizability Issues and Modified APPC. Stability Proofs. Problems.
8. Robust Adaptive Laws. Introduction. Plant Uncertainties and Robust Control. Instability Phenomena in Adaptive Systems. Modifications for Robustness: Simple Examples. Robust Adaptive Laws. Summary of Robust Adaptive Laws. Problems.
9. Robust Adaptive Control Schemes. Introduction. Robust Identifiers and Adaptive Observers. Robust MRAC. Performance Improvement of MRAC. Robust APPC Schemes. Adaptive Control of LTV Plants. Adaptive Control for Multivariable Plants. Stability Proofs of Robust MRAC Schemes. Stability Proofs of Robust APPC Schemes. Problems.
Appendices. Swapping Lemmas. Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty.
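The on-line parameter estimation material listed above (item 4) can be illustrated with a normalized gradient adaptive law; the regressor, true parameters, and gain below are made up for illustration and are not an example taken from the text.

# Minimal normalized-gradient adaptive law (illustrative values; not an
# example from the text). The estimate converges toward theta_true when the
# regressor is persistently exciting.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.5, -0.7])   # unknown parameters of y = theta' * phi
theta_hat = np.zeros(2)
gamma = 1.0                          # adaptation gain

for k in range(500):
    phi = rng.normal(size=2)         # regressor (persistently exciting here)
    y = theta_true @ phi             # measured output, noise-free for simplicity
    e = y - theta_hat @ phi          # estimation error
    theta_hat += gamma * phi * e / (1.0 + phi @ phi)   # normalized gradient update

print("estimated parameters:", theta_hat)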

4,378 citations