Author

Richard H. Middleton

Bio: Richard H. Middleton is an academic researcher from the University of Newcastle. The author has contributed to research in the topics of control theory and linear systems. The author has an h-index of 48 and has co-authored 393 publications receiving 12,037 citations. Previous affiliations of Richard H. Middleton include the Hamilton Institute and the University of California.


Papers
Posted Content
TL;DR: In this paper, the authors investigate the performance of linear consensus algorithms subject to a scaling of the underlying network size and show that consensus over expander families is fragile to a grounding of the network (resulting in leader-follower consensus), which may deteriorate system performance by orders of magnitude in large networks, or cause instability in high-order consensus.
Abstract: We investigate the performance of linear consensus algorithms subject to a scaling of the underlying network size. Specifically, we model networked systems with $n^{\text{th}}$ order integrator dynamics over families of undirected, weighted graphs with bounded nodal degrees. In such networks, the algebraic connectivity affects convergence rates, sensitivity, and, for high-order consensus ($n \ge 3$), stability properties. This connectivity scales unfavorably in network size, except in expander families, where consensus performs well regardless of network size. We show, however, that consensus over expander families is fragile to a grounding of the network (resulting in leader-follower consensus). We show that grounding may deteriorate system performance by orders of magnitude in large networks, or cause instability in high-order consensus. Our results, which we illustrate through simulations, also point to a fundamental limitation to the scalability of consensus networks with leaders, which does not apply to leaderless networks.
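The role played by the algebraic connectivity in this abstract can be made concrete with a small simulation. The sketch below (an illustration only, not the authors' code) runs first-order consensus, x_dot = -L x, over unweighted path graphs and reports the second-smallest Laplacian eigenvalue, which shrinks as the path grows and slows convergence accordingly.

```python
import numpy as np

# Minimal sketch (not the authors' code): first-order consensus x_dot = -L x
# over an unweighted path graph. The algebraic connectivity lambda_2 (the
# second-smallest eigenvalue of the Laplacian L) sets the convergence rate,
# and it shrinks as the path grows -- the unfavorable scaling discussed above.

def path_laplacian(n):
    """Graph Laplacian of an unweighted path on n nodes."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def simulate_consensus(L, x0, dt=0.01, steps=5000):
    """Forward-Euler simulation of x_dot = -L x."""
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x

for n in (10, 100):
    L = path_laplacian(n)
    lam2 = np.sort(np.linalg.eigvalsh(L))[1]          # algebraic connectivity
    x = simulate_consensus(L, np.random.randn(n))
    spread = x.max() - x.min()                        # disagreement remaining
    print(f"n={n:4d}  lambda_2={lam2:.4f}  spread after T=50: {spread:.3e}")
```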
Journal ArticleDOI
TL;DR: The original paper (X. Ding, L. Guo and P.M. Frank, IEEE Trans. Automat. Contr., vol. 39, no. 8, 1994) uses a factorization approach to parameterize all stable observers.
Abstract: The original paper (X. Ding, L. Guo and P.M. Frank, IEEE Trans. Automat. Contr., vol. 39, no. 8, p. 1648-52, 1994) uses a factorization approach to parameterize all stable observers. The authors state that the results are interesting, but draw attention to their own parallel results (Syst. Contr. Lett., vol. 13, p. 161-3, 1989, and Digital Control and Estimation, Englewood Cliffs, NJ: Prentice-Hall, 1990).
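For orientation, the sketch below shows the simplest member of the family of stable observers discussed above: a full-order Luenberger observer with its gain chosen by pole placement (an illustration of the standard construction, not the factorization parameterization of the paper). Any gain L that makes A - LC Hurwitz yields a stable observer.

```python
import numpy as np
from scipy.signal import place_poles

# Minimal sketch (illustrative only): a full-order Luenberger observer
#   xhat_dot = A xhat + B u + L (y - C xhat),
# with L obtained by pole placement on the dual pair (A^T, C^T), so that the
# error dynamics e_dot = (A - L C) e are stable.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

desired_error_poles = [-5.0, -6.0]                     # arbitrary stable choices
L = place_poles(A.T, C.T, desired_error_poles).gain_matrix.T

print("observer gain L =", L.ravel())
print("eig(A - L C)    =", np.linalg.eigvals(A - L @ C))  # should match the chosen poles
```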
01 Jan 2001
TL;DR: In this paper, the optimal tracking performance for linear time-invariant SIMO systems responding to a step reference signal is studied, and an integral square error criterion is used as the measure of the tracking performance.
Abstract: This paper studies the optimal tracking performance for linear time-invariant SIMO systems responding to a step reference signal. An integral square error criterion is used as the measure of the tracking performance. First, a formula for the tracking error is derived for stable multivariable systems, which is applicable to both right-invertible and non-right-invertible cases. Then, explicit expressions of the tracking error for SIMO systems are developed. The results show that, together with the nonminimum phase zeros and unstable poles of the plant, the variation of the plant direction with frequency also contributes to the tracking difficulty in SIMO systems.
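For concreteness, the integral square error criterion mentioned above takes the following standard form (the symbols are illustrative and may not match the paper's exact notation):

```latex
% Tracking cost for a step reference r(t) applied at t = 0, with plant output y(t):
J = \int_{0}^{\infty} \lVert r(t) - y(t) \rVert^{2} \, dt ,
\qquad
J^{*} = \inf_{\text{stabilizing controllers}} J .
```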
Journal ArticleDOI
TL;DR: In this paper, a Kalman filter is used to combine channel soundings made with pilot tones that are deployed at different times and in different frequency bands according to a pre-selected pattern.
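The basic fusion idea is the standard Kalman predict/update cycle. The sketch below is a minimal scalar illustration (not the paper's channel model): noisy soundings of a slowly drifting channel gain are combined sequentially, with the process and measurement noise variances assumed for the example.

```python
import numpy as np

# Minimal sketch (illustrative only, not the paper's channel model): a scalar
# Kalman filter that fuses noisy soundings z_k of a slowly varying channel
# gain h_k, modeled as a random walk  h_{k+1} = h_k + w_k.

def kalman_fuse(measurements, q=1e-3, r=1e-1):
    """Return filtered estimates of h given noisy soundings.

    q: assumed process-noise variance (how fast the channel drifts)
    r: assumed measurement-noise variance of each sounding
    """
    h_est, p = 0.0, 1.0                  # initial estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                        # predict: random-walk model
        k = p / (p + r)                  # Kalman gain
        h_est = h_est + k * (z - h_est)  # update with the new sounding
        p = (1.0 - k) * p
        estimates.append(h_est)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_h = 1.0 + np.cumsum(rng.normal(0.0, 0.03, size=200))
soundings = true_h + rng.normal(0.0, 0.3, size=200)
est = kalman_fuse(soundings)
print("final true h: %.3f, estimate: %.3f" % (true_h[-1], est[-1]))
```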
01 Jan 1999
TL;DR: An integral operator that arises in the context of linear dynamical systems and describes the time evolution of the state probability density function is presented, together with a finite rank approximation that is shown to converge in norm to the integral operator.
Abstract: In this paper we present an integral operator that arises in the context of linear dynamical systems, which describes the time evolution of the state probability density function. We propose a finite rank approximation to this integral operator and show that this finite rank operator converges in norm to the integral operator. We discuss Markov chains arising from this finite rank approximation, and show that the eigenvalues of the transition matrices of these Markov chains converge to the eigenvalues of the integral operator as the number of divisions in the state discretization is increased. AMS subject classification. 93C30.
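The finite rank construction can be illustrated on an assumed scalar example (not the paper's construction): for a linear system driven by Gaussian noise, the density-evolution operator has a Gaussian kernel, and discretizing the state space into cells yields a Markov transition matrix whose eigenvalues approximate those of the operator.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch under assumed scalar dynamics x_{k+1} = a x_k + w_k with
# w_k ~ N(0, sigma^2): the density evolves through an integral operator with
# Gaussian kernel k(y, x) = N(y; a x, sigma^2). Discretizing the state into m
# cells gives a finite-rank approximation: an m x m Markov transition matrix.

a, sigma = 0.8, 0.5
m = 200                                      # number of state-space cells
edges = np.linspace(-5.0, 5.0, m + 1)        # cell edges
centers = 0.5 * (edges[:-1] + edges[1:])

# P[i, j] = Prob(x_{k+1} in cell j | x_k at the center of cell i)
P = np.empty((m, m))
for i, x in enumerate(centers):
    cdf = norm.cdf(edges, loc=a * x, scale=sigma)
    P[i, :] = np.diff(cdf)
P /= P.sum(axis=1, keepdims=True)            # renormalize truncated tails

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("leading |eigenvalues|:", np.round(eigvals[:5], 4))
# For this linear-Gaussian example the operator's eigenvalues are known to be
# a^k (1, 0.8, 0.64, ...), and the Markov-chain eigenvalues approach them as m grows.
```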

Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
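The fourth category above, learning a per-user mail filter from examples of kept and rejected messages, is easy to illustrate. The sketch below is a toy example with made-up messages, using a bag-of-words naive Bayes classifier from scikit-learn.

```python
# Minimal sketch (toy data, illustrative only): learning a per-user mail filter
# from examples of messages the user kept or rejected.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "limited time offer, claim your free prize now",
    "meeting moved to 3pm, agenda attached",
    "cheap loans, act now, no credit check",
    "draft of the consensus paper, comments welcome",
]
labels = ["reject", "keep", "reject", "keep"]   # supplied by the user

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free prize waiting, act now"]))   # likely 'reject'
print(filter_model.predict(["agenda for tomorrow's meeting"])) # likely 'keep'
```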

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) Calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or

7,563 citations

Proceedings ArticleDOI
15 Oct 1995
TL;DR: The authors present a comprehensive treatment of adaptive control, covering models for dynamic systems, stability, on-line parameter estimation, parameter identifiers and adaptive observers, model reference adaptive control, adaptive pole placement control, and robust adaptive laws and control schemes.
Abstract: Table of contents:
1. Introduction: Control System Design Steps; Adaptive Control; A Brief History.
2. Models for Dynamic Systems: Introduction; State-Space Models; Input/Output Models; Plant Parametric Models; Problems.
3. Stability: Introduction; Preliminaries; Input/Output Stability; Lyapunov Stability; Positive Real Functions and Stability; Stability of LTI Feedback System; Problems.
4. On-Line Parameter Estimation: Introduction; Simple Examples; Adaptive Laws with Normalization; Adaptive Laws with Projection; Bilinear Parametric Model; Hybrid Adaptive Laws; Summary of Adaptive Laws; Parameter Convergence Proofs; Problems.
5. Parameter Identifiers and Adaptive Observers: Introduction; Parameter Identifiers; Adaptive Observers; Adaptive Observer with Auxiliary Input; Adaptive Observers for Nonminimal Plant Models; Parameter Convergence Proofs; Problems.
6. Model Reference Adaptive Control: Introduction; Simple Direct MRAC Schemes; MRC for SISO Plants; Direct MRAC with Unnormalized Adaptive Laws; Direct MRAC with Normalized Adaptive Laws; Indirect MRAC; Relaxation of Assumptions in MRAC; Stability Proofs in MRAC Schemes; Problems.
7. Adaptive Pole Placement Control: Introduction; Simple APPC Schemes; PPC: Known Plant Parameters; Indirect APPC Schemes; Hybrid APPC Schemes; Stabilizability Issues and Modified APPC; Stability Proofs; Problems.
8. Robust Adaptive Laws: Introduction; Plant Uncertainties and Robust Control; Instability Phenomena in Adaptive Systems; Modifications for Robustness: Simple Examples; Robust Adaptive Laws; Summary of Robust Adaptive Laws; Problems.
9. Robust Adaptive Control Schemes: Introduction; Robust Identifiers and Adaptive Observers; Robust MRAC; Performance Improvement of MRAC; Robust APPC Schemes; Adaptive Control of LTV Plants; Adaptive Control for Multivariable Plants; Stability Proofs of Robust MRAC Schemes; Stability Proofs of Robust APPC Schemes; Problems.
Appendices: Swapping Lemmas; Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty.
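To give the flavour of the adaptive laws listed above, here is a minimal sketch (a standard textbook-style scalar example, not code from the book) of direct model reference adaptive control with a Lyapunov-based adaptive law.

```python
import numpy as np

# Minimal sketch: direct MRAC for a scalar plant  x_dot = a x + u  with unknown a,
# reference model  xm_dot = -am xm + am r, control  u = -theta_hat x + am r,
# and the Lyapunov-based adaptive law  theta_hat_dot = gamma e x  with e = x - xm.

a_true, am, gamma = 2.0, 3.0, 5.0      # unknown plant pole, model pole, adaptation gain
dt, T = 1e-3, 20.0
steps = int(T / dt)

x, xm = 0.0, 0.0
theta_hat = 0.0                        # ideal value is theta* = a_true + am = 5
for k in range(steps):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0      # square-wave reference for excitation
    u = -theta_hat * x + am * r
    e = x - xm
    # forward-Euler integration of plant, reference model, and adaptive law
    x += dt * (a_true * x + u)
    xm += dt * (-am * xm + am * r)
    theta_hat += dt * (gamma * e * x)

print("theta_hat after adaptation: %.2f (ideal %.2f)" % (theta_hat, a_true + am))
print("final tracking error: %.3e" % (x - xm))
```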

4,378 citations