scispace - formally typeset
Author

Prashant G. Mehta

Other affiliations: Bosch, University of Massachusetts Amherst, Cornell University
Bio: Prashant G. Mehta is an academic researcher at the University of Illinois at Urbana–Champaign. His research focuses on topics including particle filters and optimal control. He has an h-index of 30 and has co-authored 189 publications receiving 3,398 citations. Previous affiliations of Prashant G. Mehta include Bosch and the University of Massachusetts Amherst.


Papers
Journal ArticleDOI
TL;DR: It is proved that with arbitrarily small amounts of mistuning, the asymptotic behavior of the least stable closed-loop eigenvalue can be improved from O(1/N²) to O(1/N) in the limit of a large number of vehicles.
Abstract: We consider a decentralized bidirectional control of a platoon of N identical vehicles moving in a straight line. The control objective is for each vehicle to maintain a constant velocity and inter-vehicular separation using only the local information from itself and its two nearest neighbors. Each vehicle is modeled as a double integrator. To aid the analysis, we use continuous approximation to derive a partial differential equation (PDE) approximation of the discrete platoon dynamics. The PDE model is used to explain the progressive loss of closed-loop stability with increasing number of vehicles, and to devise ways to combat this loss of stability. If every vehicle uses the same controller, we show that the least stable closed-loop eigenvalue approaches zero as O(1/N²) in the limit of a large number (N) of vehicles. We then show how to ameliorate this loss of stability by small amounts of "mistuning", i.e., changing the controller gains from their nominal values. We prove that with arbitrarily small amounts of mistuning, the asymptotic behavior of the least stable closed-loop eigenvalue can be improved to O(1/N). All the conclusions drawn from analysis of the PDE model are corroborated via numerical calculations of the state-space platoon model.

281 citations
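The O(1/N²) eigenvalue scaling can be checked numerically with a small state-space sketch. The snippet below models each vehicle as a double integrator with symmetric ("tuned") bidirectional position/velocity feedback, encoded through a path-graph Laplacian grounded at the lead end; the gains kp and kv are illustrative choices, not taken from the paper.

```python
import numpy as np

def least_stable_real_part(N, kp=1.0, kv=0.5):
    """Least-stable closed-loop eigenvalue for an N-vehicle platoon.

    Symmetric nearest-neighbour position/velocity feedback; the lead
    boundary is grounded (vehicle 1 tracks a fixed reference).  The
    gains kp, kv are hypothetical, for illustration only.
    """
    # Path-graph Laplacian, grounded at the front end
    L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    L[-1, -1] = 1.0  # free (unattached) trailing end
    # State: [positions; velocities], dynamics xdd = -kp*L*x - kv*L*xd
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-kp * L,          -kv * L]])
    return np.max(np.linalg.eigvals(A).real)

for N in (10, 20, 40, 80):
    print(N, least_stable_real_part(N))
```

Doubling N should shrink the magnitude of the least stable real part by roughly a factor of four, consistent with the O(1/N²) estimate for the tuned (uniform-gain) design.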

Journal ArticleDOI
05 Apr 2011
TL;DR: Considers a class of convex Nash games where strategy sets are coupled across agents through a common constraint and payoff functions are linked via a scaled congestion cost metric, and shows that the equilibrium is locally unique both in the primal space and in the larger primal-dual space.
Abstract: We consider a class of convex Nash games where strategy sets are coupled across agents through a common constraint and payoff functions are linked via a scaled congestion cost metric. A solution to a related variational inequality problem provides a set of Nash equilibria characterized by common Lagrange multipliers for shared constraints. While this variational problem may be characterized by a non-monotone map, it is shown to admit solutions, even in the absence of restrictive compactness assumptions on strategy sets. Additionally, we show that the equilibrium is locally unique both in the primal space as well as in the larger primal-dual space. The existence statements can be generalized to accommodate a piecewise-smooth congestion metric while affine restrictions, surprisingly, lead to both existence and global uniqueness guarantees. In the second part of the technical note, we discuss distributed computation of such equilibria in monotone regimes via a distributed iterative Tikhonov regularization (ITR) scheme. Application to a class of networked rate allocation games suggests that the ITR schemes perform better than their two-timescale counterparts.

179 citations
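The Tikhonov-regularization idea can be illustrated on a toy monotone (but not strongly monotone) map, where a plain gradient scheme diverges while adding a small regularization term stabilizes it. This is a simplified sketch, not the paper's ITR scheme: the example map, step size, and the fixed regularization weight are illustrative (the actual ITR drives the regularization parameter to zero along the iterates, coordinated with the step sizes).

```python
import numpy as np

# Monotone (but not strongly monotone) map: a 90-degree rotation plus offset.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 1.0])
F = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)   # unique zero of F

def iterate(gamma, eps, steps=20000):
    """Gradient iteration with Tikhonov term eps*x.

    eps = 0 recovers the plain gradient scheme, which diverges here;
    a fixed small eps is used purely for illustration.
    """
    x = np.zeros(2)
    for _ in range(steps):
        x = x - gamma * (F(x) + eps * x)
        if np.linalg.norm(x) > 1e6:   # plain scheme spirals outward
            break
    return x

x_plain = iterate(gamma=0.05, eps=0.0)
x_reg = iterate(gamma=0.05, eps=0.05)
print("plain error:      ", np.linalg.norm(x_plain - x_star))
print("regularized error:", np.linalg.norm(x_reg - x_star))
```

The regularized iterate converges to the solution of the perturbed problem, within O(eps) of the true solution; letting the regularization decay, as in the ITR scheme, removes that bias.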

Journal ArticleDOI
TL;DR: The feedback particle filter introduced in this paper is a new approach to approximate nonlinear filtering, motivated by techniques from mean-field game theory; numerical algorithms are introduced and implemented in two general examples and a neuroscience application involving coupled oscillators.
Abstract: The feedback particle filter introduced in this paper is a new approach to approximate nonlinear filtering, motivated by techniques from mean-field game theory. The filter is defined by an ensemble of controlled stochastic systems (the particles). Each particle evolves under feedback control based on its own state, and features of the empirical distribution of the ensemble. The feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior, and the common posterior of any particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: The two posteriors match exactly, provided they are initialized with identical priors. 2) The optimal filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via a solution of an Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and a neuroscience application involving coupled oscillators. In some cases it is found that the filter exhibits significantly lower variance when compared to the bootstrap particle filter.

169 citations
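The claim that the feedback gain equals the Kalman gain in the linear Gaussian case can be checked with a scalar simulation: the particle ensemble should reproduce the Kalman-Bucy filter. The sketch below uses the constant-gain form with the ensemble variance standing in for the posterior variance; all parameter values are illustrative.

```python
import numpy as np

# Linear-Gaussian model: dX = a X dt + sb dB,  dZ = h X dt + sw dW.
a, h, sb, sw = -1.0, 1.0, 0.5, 0.3
dt, steps, Np = 0.01, 500, 500
rng = np.random.default_rng(0)

X = 1.0                                   # hidden state
parts = rng.normal(1.0, 0.3, Np)          # FPF particle ensemble
m, P = 1.0, 0.09                          # Kalman-Bucy mean / variance

for _ in range(steps):
    X += a * X * dt + sb * np.sqrt(dt) * rng.normal()
    dZ = h * X * dt + sw * np.sqrt(dt) * rng.normal()

    # Kalman-Bucy filter (reference)
    K = P * h / sw**2
    m += a * m * dt + K * (dZ - h * m * dt)
    P += (2 * a * P + sb**2 - K**2 * sw**2) * dt

    # Feedback particle filter, constant-gain form: each particle is
    # steered by the innovation dZ - h*(X_i + mean)/2 dt
    mean = parts.mean()
    Kfpf = parts.var() * h / sw**2        # ensemble gain ~ Kalman gain
    parts += (a * parts * dt
              + sb * np.sqrt(dt) * rng.normal(size=Np)
              + Kfpf * (dZ - h * (parts + mean) / 2 * dt))

print("Kalman mean/var:", m, P)
print("FPF    mean/var:", parts.mean(), parts.var())
```

Unlike importance-sampling particle filters, no weights or resampling appear: every particle carries equal weight and is moved by feedback control.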

Proceedings ArticleDOI
29 Jul 2010
TL;DR: An aggregation-based model reduction method for thermal models of large buildings is proposed: the baseline thermal model is represented as an RC network, and the proposed methodology yields a simpler (fewer-state) multi-scale representation of this network.
Abstract: This paper proposes an aggregation-based model reduction method for thermal models of large buildings. Using an electric analogy, the baseline thermal model is represented as an RC-network. The proposed model reduction methodology is used to obtain a simpler (with fewer states) multi-scale representation of this network. The methodology preserves the electrical analogy and retains the physical intuition during the model reduction process. The theoretical results are illustrated with the aid of examples.

140 citations
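The aggregation idea can be sketched on a toy RC chain: lumping nodes with a 0/1 partition matrix sums capacitances within each group and conductances between groups, so the reduced model is again an RC network and the electrical analogy survives. The network, partition, and parameter values below are illustrative, not from the paper.

```python
import numpy as np

# 6-node RC chain: strong conductance inside each half, weak link between
# halves -- the natural two-super-node aggregation.
g = np.array([50.0, 50.0, 1.0, 50.0, 50.0])   # branch conductances
n = 6
G = np.zeros((n, n))
for i, gi in enumerate(g):                     # assemble the Laplacian
    G[i, i] += gi
    G[i + 1, i + 1] += gi
    G[i, i + 1] -= gi
    G[i + 1, i] -= gi
C = np.ones(n)                                 # unit capacitances

# Aggregation: partition matrix P lumps nodes {0,1,2} and {3,4,5}
P = np.zeros((n, 2))
P[:3, 0] = 1.0
P[3:, 1] = 1.0
Gr = P.T @ G @ P                               # reduced conductance network
Cr = P.T @ C                                   # summed capacitances

def simulate(T0, Cvec, Gmat, dt=1e-3, steps=1000):
    """Forward-Euler integration of C dT/dt = -G T."""
    T = T0.copy()
    for _ in range(steps):
        T = T - dt * (Gmat @ T) / Cvec
    return T

T_full = simulate(np.array([1.0] * 3 + [0.0] * 3), C, G)
T_red = simulate(np.array([1.0, 0.0]), Cr, Gr)
print("full group means:", T_full[:3].mean(), T_full[3:].mean())
print("reduced states:  ", T_red)
```

When intra-group coupling is much stronger than the cross link, the two reduced states closely track the group-mean temperatures of the full network.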

Journal ArticleDOI
TL;DR: A Lyapunov measure is proposed and shown to be a stochastic counterpart of stability, just as an invariant measure is a counterpart of the attractor (recurrence); it is useful for the study of more general (weaker and set-wise) notions of stability.
Abstract: This paper is concerned with the analysis and computational methods for verifying global stability of an attractor set of a nonlinear dynamical system. Based upon a stochastic representation of deterministic dynamics, a Lyapunov measure is proposed for these purposes. This measure is shown to be a stochastic counterpart of stability (transience) just as an invariant measure is a counterpart of the attractor (recurrence). It is a dual of the Lyapunov function and is useful for the study of more general (weaker and set-wise) notions of stability. In addition to the theoretical framework, constructive methods for computing approximations to the Lyapunov measures are presented. These methods are based upon set-oriented numerical approaches. Several equivalent descriptions, including a series formula and a system of linear inequalities, are provided for computational purposes. These descriptions allow one to carry over the intuition from the linear case with stable equilibrium to nonlinear systems with globally stable attractor sets. Finally, in certain cases, the exact relationship between Lyapunov functions and Lyapunov measures is also given.

135 citations
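The set-oriented computation can be sketched for a simple one-dimensional map: partition the state space into cells, form the induced transition matrix, restrict it to the complement of a neighbourhood of the attractor, and certify stability by the finiteness of the resulting series. The map, cell count, and neighbourhood size below are arbitrary illustrative choices.

```python
import numpy as np

# Certify stability of the attractor x = 0 for the map f(x) = 0.5 x on [-1, 1].
f = lambda x: 0.5 * x
N = 50
edges = np.linspace(-1.0, 1.0, N + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

# Transition matrix of the induced finite Markov chain (Ulam-type,
# one sample point per cell)
P = np.zeros((N, N))
for i, c in enumerate(centers):
    j = np.searchsorted(edges, f(c)) - 1
    P[i, min(max(j, 0), N - 1)] = 1.0

# Restrict to the complement of a neighbourhood of the attractor
keep = np.abs(centers) > 0.1
P1 = P[np.ix_(keep, keep)]          # sub-Markov: mass leaks into the attractor

# Stability certificate: spectral radius < 1, so the series
# mu = m (I + P1 + P1^2 + ...) = m (I - P1)^(-1) is finite
rho = np.max(np.abs(np.linalg.eigvals(P1)))
m = np.ones(keep.sum())             # uniform measure off the attractor
mu = m @ np.linalg.inv(np.eye(keep.sum()) - P1)
print("spectral radius:", rho)
print("Lyapunov measure finite and nonnegative:", np.all(mu >= 0))
```

Finiteness of mu plays the role of transience: every cell's mass eventually leaks into the attractor neighbourhood, which is the measure-theoretic counterpart of a decaying Lyapunov function.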


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: A personal reflection on i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality, whose sense of the surreal only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook treatment spanning probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Apr 2003
TL;DR: A comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation, reviewing important results from its large body of applications and presenting new ideas and alternative interpretations that further explain the EnKF's success.
Abstract: The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.

2,975 citations
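The EnKF analysis step, in its perturbed-observation form, can be sketched for a scalar state observed directly with Gaussian noise; as the ensemble grows, the updated ensemble statistics should reproduce the exact Kalman update. All numerical values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Ne = 100000                         # large ensemble to expose the limit
prior_mean, prior_var = 0.0, 4.0
obs, obs_var = 2.0, 1.0

# Prior ensemble
ens = rng.normal(prior_mean, np.sqrt(prior_var), Ne)

# EnKF analysis: gain from the ensemble variance, perturbed observations
K = ens.var() / (ens.var() + obs_var)
perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), Ne)
ens_post = ens + K * (perturbed - ens)

# Exact Kalman update for reference
K_exact = prior_var / (prior_var + obs_var)
post_mean = prior_mean + K_exact * (obs - prior_mean)
post_var = (1 - K_exact) * prior_var

print("EnKF  mean/var:", ens_post.mean(), ens_post.var())
print("exact mean/var:", post_mean, post_var)
```

Perturbing the observation for each member keeps the posterior ensemble spread consistent with the Kalman posterior variance; updating all members with the same unperturbed observation would underestimate it.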