Author

Richard H. Middleton

Bio: Richard H. Middleton is an academic researcher from the University of Newcastle. The author has contributed to research on topics including control theory and linear systems. The author has an h-index of 48 and has co-authored 393 publications receiving 12,037 citations. Previous affiliations of Richard H. Middleton include the Hamilton Institute and the University of California.


Papers
Journal ArticleDOI
TL;DR: New, stronger bounds are provided that simultaneously account for the effect of two real nonminimum phase zeros, which imply undershoot in the step response of linear systems (an illustrative simulation sketch follows this entry).
Abstract: It has been known for some time that real nonminimum phase zeros imply undershoot in the step response of linear systems. Bounds on such undershoot depend on the settling time demanded and the zero locations. In this note, we review such constraints for linear time invariant systems and provide new stronger bounds that consider simultaneously the effect of two real nonminimum phase zeros. Using the concept of zero dynamics, we extend these results to a class of nonlinear systems.

50 citations
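
The undershoot described in this entry is easy to see numerically. The following is a minimal sketch, assuming an illustrative plant with two real nonminimum phase zeros at s = 1 and s = 3 and arbitrarily chosen stable poles (none of these values are taken from the paper); it simulates the step response and reports the size of the dip.

# Minimal sketch: two real right-half-plane zeros force undershoot in the step
# response. The zero and pole locations below are illustrative assumptions, not
# the paper's examples.
import numpy as np
from scipy import signal

z1, z2 = 1.0, 3.0                                        # real nonminimum phase zeros
num = np.polymul([-1.0 / z1, 1.0], [-1.0 / z2, 1.0])     # (1 - s/z1)(1 - s/z2)
den = np.polymul([1.0, 1.0], np.polymul([0.5, 1.0], [0.25, 1.0]))  # stable poles at -1, -2, -4
sys = signal.TransferFunction(num, den)

t, y = signal.step(sys, T=np.linspace(0.0, 15.0, 1500))
print(f"peak undershoot: {y.min():.3f}")                 # negative dip caused by the zeros
print(f"final value    : {y[-1]:.3f}")                   # unit DC gain

Moving either zero closer to the imaginary axis, or demanding a shorter settling time, deepens the dip, which is the trade-off the note quantifies.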

Journal ArticleDOI
TL;DR: This paper investigates 2D mixed continuous-discrete-time systems whose coefficients are polynomial functions of an uncertain vector constrained to a semialgebraic set. It shows that a nonconservative linear matrix inequality (LMI) condition for robust stability can be obtained by introducing complex Lyapunov functions that depend polynomially on the uncertain vector and on a frequency variable (a simplified Lyapunov-based sketch follows this entry).

49 citations
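
The paper's LMI condition itself is involved. As a flavour of Lyapunov-certificate-based robust stability testing, the sketch below uses a much simpler and more conservative textbook substitute: a single common quadratic Lyapunov function for a two-vertex uncertain 1D continuous-time family. The vertex matrices are illustrative assumptions, and this is not the paper's method.

# Minimal sketch (conservative textbook substitute, not the paper's nonconservative
# condition): find a quadratic Lyapunov function for one vertex of an uncertain
# family and test whether it also certifies the other vertex.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])    # vertex 1 (illustrative)
A2 = np.array([[-1.5, 1.0], [-0.2, -1.0]])   # vertex 2 (illustrative)

# Solve A1^T P + P A1 = -I for P.
P = solve_continuous_lyapunov(A1.T, -np.eye(2))

def is_negative_definite(M):
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2.0) < 0.0))

print("P positive definite   :", bool(np.all(np.linalg.eigvalsh(P) > 0.0)))
print("vertex 1 certified    :", is_negative_definite(A1.T @ P + P @ A1))
print("vertex 2 certified too:", is_negative_definite(A2.T @ P + P @ A2))

When a single P works for every vertex, the whole polytope is robustly stable; parameter-dependent Lyapunov functions of the kind used in the paper remove the conservatism of insisting on one common P.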

Journal ArticleDOI
TL;DR: This note considers the particular case of a minimum phase plant with relative degree one and a single unstable pole at z = phi over a first-order moving-average Gaussian channel, and shows that there exist linear encoding and decoding schemes that achieve stabilization within the SNR constraint precisely when C_FB ≥ log2|phi| (a small numerical check of the threshold follows this entry).
Abstract: Recent developments in information theory by Y.-H. Kim have established the feedback capacity of a first-order moving-average additive Gaussian noise channel. Separate developments in control theory have examined linear time-invariant feedback stabilization under signal-to-noise ratio (SNR) constraints, including colored noise channels. This note considers the particular case of a minimum phase plant with relative degree one and a single unstable pole at z = phi (with |phi| > 1) over a first-order moving-average Gaussian channel. SNR constrained stabilization in this case is possible precisely when the feedback capacity of the channel satisfies C_FB ≥ log2|phi|. Furthermore, using the results of Kim we show that there exist linear encoding and decoding schemes that achieve stabilization within the SNR constraint precisely when C_FB ≥ log2|phi|.

49 citations
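
In the memoryless special case, that is, with the moving-average coefficient set to zero, the feedback capacity reduces to the ordinary Shannon capacity 0.5*log2(1 + SNR), so C_FB ≥ log2|phi| becomes SNR ≥ phi^2 - 1. The sketch below simply evaluates that threshold for an assumed pole location; the genuine MA(1) case requires Kim's feedback-capacity expression instead.

# Minimal sketch, memoryless special case only (assumption: moving-average
# coefficient is zero, so feedback capacity = 0.5*log2(1 + SNR)); phi is an
# illustrative value, not taken from the note.
import numpy as np

def min_snr_for_stabilization(phi: float) -> float:
    """Smallest SNR such that 0.5*log2(1 + SNR) >= log2(|phi|)."""
    return phi ** 2 - 1.0

phi = 1.5                                           # single unstable pole at z = phi, |phi| > 1
snr = min_snr_for_stabilization(phi)
print(f"minimal SNR     : {snr:.3f}")               # phi^2 - 1
print(f"capacity at SNR : {0.5 * np.log2(1.0 + snr):.3f}")
print(f"log2|phi|       : {np.log2(abs(phi)):.3f}")  # matches the capacity above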

Proceedings ArticleDOI
01 Dec 2007
TL;DR: This work proposes a linear control and communication scheme for stabilization and disturbance attenuation when a discrete Gaussian channel is present in the feedback loop, and shows how the transmit gain and receive filter may be chosen to minimize the variance of the plant output (a simplified simulation sketch follows this entry).
Abstract: We propose a linear control and communication scheme for the purposes of stabilization and disturbance attenuation when a discrete Gaussian channel is present in the feedback loop. Specifically, the channel input is amplified by a constant gain before transmission and the channel output is processed through a linear time invariant filter to produce the control signal. We show how the gain and filter may be chosen to minimize the variance of the plant output. For an order one plant, our scheme achieves the theoretical minimum taken over a much broader class of compensators.

48 citations
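
As a rough illustration of the gain-plus-filter idea, the sketch below simulates a scalar plant whose state is scaled by a constant gain before an additive white Gaussian noise channel, with a purely static receiver in place of the LTI filter. The plant, noise levels, power budget and the static receiver are all assumptions made here for illustration; the paper's scheme works from a measured output, uses a genuine LTI receive filter, and attains the true minimum variance for a first order plant.

# Minimal sketch (all parameters and the static receiver are illustrative
# assumptions, not the paper's design): scalar plant x_{k+1} = a*x_k + u_k + w_k,
# transmit s_k = g*x_k over an AWGN channel with power budget P, control
# u_k = -(a/g)*r_k from the channel output r_k.
import numpy as np

rng = np.random.default_rng(0)
a, sigma_w, sigma_n, P = 1.3, 1.0, 1.0, 4.0       # unstable pole, noise std devs, channel power

# Largest transmit gain g with stationary transmit power g^2 * E[x^2] <= P.
g = np.sqrt((P - a**2 * sigma_n**2) / sigma_w**2)

x, xs = 0.0, []
for _ in range(200_000):
    r = g * x + sigma_n * rng.standard_normal()   # channel output: scaled state plus noise
    u = -(a / g) * r                              # static receiver in place of the LTI filter
    x = a * x + u + sigma_w * rng.standard_normal()
    xs.append(x)

print(f"simulated output variance : {np.var(xs):.3f}")
print(f"predicted variance        : {sigma_w**2 * P / (P - a**2 * sigma_n**2):.3f}")

The simulated variance should match the closed-form value sigma_w^2 * P / (P - a^2 * sigma_n^2) for this toy scheme, which also shows why the channel power budget P must exceed a^2 * sigma_n^2.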

Journal ArticleDOI
TL;DR: It is shown that the lowest SNR required for closed-loop stability increases by a factor that may grow exponentially with the time delay and the unstable open-loop poles of the system, contributing to the quantification of performance trade-offs in integrated control and communication environments (an illustrative calculation follows this entry).

48 citations
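
To give a feel for that exponential growth, the sketch below scales a delay-free SNR requirement by the factor exp(2*p*tau) for a single real unstable pole p and loop delay tau. Both the delay-free value 2*p used here and the choice of exp(2*p*tau) as the scaling factor are illustrative assumptions rather than the paper's exact expressions.

# Minimal sketch (assumptions: delay-free SNR requirement taken as 2*p and an
# illustrative exponential delay factor exp(2*p*tau); the paper derives the exact
# expressions).
import numpy as np

p = 1.0                                    # single real unstable open-loop pole (rad/s)
snr_delay_free = 2.0 * p
for tau in (0.0, 0.5, 1.0, 2.0):           # loop delay in seconds
    factor = np.exp(2.0 * p * tau)         # grows exponentially in p*tau
    print(f"tau = {tau:3.1f} s  factor = {factor:7.2f}  scaled SNR requirement = {snr_delay_free * factor:8.2f}")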


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) Low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) Calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or

7,563 citations

Proceedings ArticleDOI
15 Oct 1995
TL;DR: This work presents the foundations of adaptive control, covering models for dynamic systems, stability, on-line parameter estimation, parameter identifiers and adaptive observers, model reference adaptive control, adaptive pole placement control, and robust adaptive laws and schemes.
Abstract: 1. Introduction: Control System Design Steps; Adaptive Control; A Brief History.
2. Models for Dynamic Systems: Introduction; State-Space Models; Input/Output Models; Plant Parametric Models; Problems.
3. Stability: Introduction; Preliminaries; Input/Output Stability; Lyapunov Stability; Positive Real Functions and Stability; Stability of LTI Feedback System; Problems.
4. On-Line Parameter Estimation: Introduction; Simple Examples; Adaptive Laws with Normalization; Adaptive Laws with Projection; Bilinear Parametric Model; Hybrid Adaptive Laws; Summary of Adaptive Laws; Parameter Convergence Proofs; Problems.
5. Parameter Identifiers and Adaptive Observers: Introduction; Parameter Identifiers; Adaptive Observers; Adaptive Observer with Auxiliary Input; Adaptive Observers for Nonminimal Plant Models; Parameter Convergence Proofs; Problems.
6. Model Reference Adaptive Control: Introduction; Simple Direct MRAC Schemes; MRC for SISO Plants; Direct MRAC with Unnormalized Adaptive Laws; Direct MRAC with Normalized Adaptive Laws; Indirect MRAC; Relaxation of Assumptions in MRAC; Stability Proofs in MRAC Schemes; Problems.
7. Adaptive Pole Placement Control: Introduction; Simple APPC Schemes; PPC: Known Plant Parameters; Indirect APPC Schemes; Hybrid APPC Schemes; Stabilizability Issues and Modified APPC; Stability Proofs; Problems.
8. Robust Adaptive Laws: Introduction; Plant Uncertainties and Robust Control; Instability Phenomena in Adaptive Systems; Modifications for Robustness: Simple Examples; Robust Adaptive Laws; Summary of Robust Adaptive Laws; Problems.
9. Robust Adaptive Control Schemes: Introduction; Robust Identifiers and Adaptive Observers; Robust MRAC; Performance Improvement of MRAC; Robust APPC Schemes; Adaptive Control of LTV Plants; Adaptive Control for Multivariable Plants; Stability Proofs of Robust MRAC Schemes; Stability Proofs of Robust APPC Schemes; Problems.
Appendices: Swapping Lemmas; Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty.

4,378 citations