scispace - formally typeset
Author

John B. Moore

Bio: John B. Moore is an academic researcher at the Australian National University. He has contributed to research on adaptive control and linear-quadratic-Gaussian control, has an h-index of 50, and has co-authored 352 publications receiving 18,573 citations. His previous affiliations include Akita University and the University of Hong Kong.


Papers
Journal Article
TL;DR: This book helps fill a void in the market, and does so in a superb manner, covering standard topics such as Kalman filtering, innovations processes, smoothing, and adaptive and nonlinear estimation.
Abstract: Estimation theory has had a tremendous impact on many problem areas over the past two decades. Beginning with its original use in the aerospace industry, its applications can now be found in many different areas such as control and communications, power systems, transportation systems, bioengineering, and image processing. Along with linear system theory and optimal control, a course in estimation theory can be found in the graduate systems and control curriculum of most schools in the country. In fact, it is probably one of the most salable courses as far as employment is concerned. However, despite its economic value and the amount of activity in the field, very few books on estimation theory have appeared recently. This book helps to fill the void in the market and does that in a superb manner. Although the book is called Optimal Filtering, the coverage is restricted to discrete-time filtering. A more appropriate title would thus be Optimal Discrete-Time Filtering. The authors' decision to concentrate on discrete-time filters is due to "recent technological developments as well as the easier path offered students and instructors." This is probably a wise move, since a thorough treatment of continuous-time filtering would require a better knowledge of stochastic processes than most graduate students or engineers have. As it stands now, the text requires little background beyond linear system theory and probability theory. Written by active researchers in the area, the book covers the standard topics such as Kalman filtering, innovations processes, smoothing, and adaptive and nonlinear estimation. Much of the material in the book has been around for a long time and has been widely used by practitioners in the area; some results are more recent. However, it has been difficult to locate all of them presented in an organized manner within a single text.
This is especially true of the chapters dealing with the computational aspects and nonlinear and adaptive estimation. After a short introductory chapter, Chapter 2 introduces the mathematical model to be used throughout most of the book. The discrete-time Kalman filter is then presented in Chapter 3, along with some applications. Chapter 4 contains a treatment
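The discrete-time Kalman filter at the heart of the book can be illustrated with a minimal scalar sketch. This is not code from the book; the model (state transition `a`, observation gain `h`, noise variances `q` and `r`) and all numbers below are illustrative assumptions following the standard discrete-time formulation.

```python
import numpy as np

def kalman_filter(zs, a=1.0, h=1.0, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar discrete-time Kalman filter for the assumed model
    x[k+1] = a*x[k] + w[k] (var q), z[k] = h*x[k] + v[k] (var r)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Time update (predict)
        x = a * x
        p = a * p * a + q
        # Measurement update (correct)
        k = p * h / (h * p * h + r)   # Kalman gain
        x = x + k * (z - h * x)
        p = (1 - k * h) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy measurements of a constant state x = 5
rng = np.random.default_rng(0)
zs = 5.0 + rng.normal(0.0, 0.3, size=200)
est = kalman_filter(zs, r=0.3**2)
```

The filter's estimate converges toward the true state while smoothing out the measurement noise; the gain `k` shrinks as the error variance `p` settles to its steady-state Riccati value.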

4,836 citations

Book
01 Jun 1979
TL;DR: This augmented edition of a respected text teaches the reader how to use linear-quadratic-Gaussian methods effectively for the design of control systems, with step-by-step explanations that show clearly how to make practical use of the material.
Abstract: This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material. The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the engineering properties of the regulator. Topics include degree of stability, phase and gain margin, tolerance of time delay, effect of nonlinearities, asymptotic properties, and various sensitivity problems. The third section explores state estimation and robust controller design using state-estimate feedback. Numerous examples emphasize the issues related to consistent and accurate system design. Key topics include loop-recovery techniques, frequency shaping, and controller reduction, for both scalar and multivariable systems. Self-contained appendixes cover matrix theory, linear systems, the Pontryagin minimum principle, Lyapunov stability, and the Riccati equation. Newly added to this Dover edition is a complete solutions manual for the problems appearing at the conclusion of each section.

3,254 citations

Book
16 Dec 1994
TL;DR: This book treats hidden Markov model (HMM) estimation and control, covering discrete- and continuous-time HMM estimation, recursive filtering, and optimal and risk-sensitive control of HMMs.
Abstract: Contents: Hidden Markov Model Processing; Discrete-Time HMM Estimation; Discrete States and Discrete Observations; Continuous-Range Observations; Continuous-Range States and Observations; A General Recursive Filter; Practical Recursive Filters; Continuous-Time HMM Estimation; Discrete-Range States and Observations; Markov Chains in Brownian Motion; Two-Dimensional HMM Estimation; Hidden Markov Random Fields; HMM Optimal Control; Discrete-Time HMM Control; Risk-Sensitive Control of HMM; Continuous-Time HMM Control.

1,415 citations

Book
01 Feb 1994
TL;DR: This book covers matrix eigenvalue methods, double bracket isospectral flows, and singular value decomposition from a dynamical-systems perspective.
Abstract: Contents: Matrix Eigenvalue Methods; Double Bracket Isospectral Flows; Singular Value Decomposition; Linear Programming; Approximation and Control; Balanced Matrix Factorizations; Invariant Theory and System Balancing; Balancing via Gradient Flows; Sensitivity Optimization; Linear Algebra; Dynamical Systems; Global Analysis.

800 citations

Journal ArticleDOI
TL;DR: In this article, the concepts of detectability and stabilizability are explored for time-varying systems, including invariance under feedback, an extended version of the lemma of Lyapunov, existence of stabilizing feedback laws, linear quadratic filtering and control, and the existence of approximate canonical forms.
Abstract: The concepts of detectability and stabilizability are explored for time-varying systems. We study duality, invariance under feedback, an extended version of the lemma of Lyapunov, existence of stabilizing feedback laws, linear quadratic filtering and control, and the existence of approximate canonical forms.

405 citations


Cited by
Book
01 Jan 1994
TL;DR: In this book, the authors present a brief history of LMIs in control theory and discuss standard problems involving linear matrix inequalities, linear differential inclusions, and matrix problems with analytic solutions.
Abstract: Preface 1. Introduction Overview A Brief History of LMIs in Control Theory Notes on the Style of the Book Origin of the Book 2. Some Standard Problems Involving LMIs. Linear Matrix Inequalities Some Standard Problems Ellipsoid Algorithm Interior-Point Methods Strict and Nonstrict LMIs Miscellaneous Results on Matrix Inequalities Some LMI Problems with Analytic Solutions 3. Some Matrix Problems. Minimizing Condition Number by Scaling Minimizing Condition Number of a Positive-Definite Matrix Minimizing Norm by Scaling Rescaling a Matrix Positive-Definite Matrix Completion Problems Quadratic Approximation of a Polytopic Norm Ellipsoidal Approximation 4. Linear Differential Inclusions. Differential Inclusions Some Specific LDIs Nonlinear System Analysis via LDIs 5. Analysis of LDIs: State Properties. Quadratic Stability Invariant Ellipsoids 6. Analysis of LDIs: Input/Output Properties. Input-to-State Properties State-to-Output Properties Input-to-Output Properties 7. State-Feedback Synthesis for LDIs. Static State-Feedback Controllers State Properties Input-to-State Properties State-to-Output Properties Input-to-Output Properties Observer-Based Controllers for Nonlinear Systems 8. Lure and Multiplier Methods. Analysis of Lure Systems Integral Quadratic Constraints Multipliers for Systems with Unknown Parameters 9. Systems with Multiplicative Noise. Analysis of Systems with Multiplicative Noise State-Feedback Synthesis 10. Miscellaneous Problems. Optimization over an Affine Family of Linear Systems Analysis of Systems with LTI Perturbations Positive Orthant Stabilizability Linear Systems with Delays Interpolation Problems The Inverse Problem of Optimal Control System Realization Problems Multi-Criterion LQG Nonconvex Multi-Criterion Quadratic Problems Notation List of Acronyms Bibliography Index.

11,085 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: A generic message-passing algorithm, the sum-product algorithm, operates on a factor graph and computes, either exactly or approximately, various marginal functions derived from the global function.
Abstract: Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of "local" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes, either exactly or approximately, various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms.
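The factorization idea can be sketched on a tiny chain-structured factor graph with binary variables x1 - f12 - x2 - f23 - x3. The factor tables below are invented for illustration, not taken from the paper; on a tree, sum-product messages from the leaves reproduce the brute-force marginal exactly.

```python
import numpy as np

# Made-up pairwise factor tables for the chain x1 - f12 - x2 - f23 - x3.
f12 = np.array([[1.0, 0.5],
                [0.2, 2.0]])   # f12[x1, x2]
f23 = np.array([[0.7, 1.2],
                [1.5, 0.3]])   # f23[x2, x3]

def marginal_x2_brute():
    """Sum the global product f12*f23 over all assignments of x1 and x3."""
    m = np.zeros(2)
    for x1 in range(2):
        for x2 in range(2):
            for x3 in range(2):
                m[x2] += f12[x1, x2] * f23[x2, x3]
    return m / m.sum()

def marginal_x2_sum_product():
    """Sum-product: each factor sends x2 a message by summing out its
    other variable; the marginal is the product of incoming messages."""
    msg_f12_to_x2 = f12.sum(axis=0)   # sum over x1
    msg_f23_to_x2 = f23.sum(axis=1)   # sum over x3
    m = msg_f12_to_x2 * msg_f23_to_x2
    return m / m.sum()
```

Because the graph is a tree, the two routes agree, but the message-passing route replaces a sum over all joint assignments with small local sums, which is the source of the algorithm's efficiency on larger graphs.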

6,637 citations

BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993.
He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.

6,574 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations