
Showing papers by "Michel Gevers published in 2009"


Journal ArticleDOI
TL;DR: This work provides a complete analysis, for arbitrary model structures, of the minimum degree of richness required to guarantee the nonsingularity of the information matrix, and particularize these results to all commonly used model structures.
Abstract: In prediction error identification, the information matrix plays a central role. Specifically, when the system is in the model set, the covariance matrix of the parameter estimates converges asymptotically, up to a scaling factor, to the inverse of the information matrix. The existence of a finite covariance matrix thus depends on the positive definiteness of the information matrix, and the rate of convergence of the parameter estimate depends on its "size". The information matrix is also the key tool in the solution of optimal experiment design procedures, which have become a focus of recent attention. Introducing a geometric framework, we provide a complete analysis, for arbitrary model structures, of the minimum degree of richness required to guarantee the nonsingularity of the information matrix. We then particularize these results to all commonly used model structures, both in open loop and in closed loop. In a closed-loop setup, our results provide an unexpected and precisely quantifiable trade-off between controller degree and required degree of external excitation.
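The richness requirement described in this abstract can be illustrated numerically. The sketch below is not from the paper; the 2-tap FIR model and the input signals are illustrative assumptions. It forms the sample information matrix (up to noise-variance scaling) from the prediction gradients and shows that a constant input leaves it singular, while a sufficiently rich (white-noise) input makes it nonsingular.

```python
import random

def info_matrix(u):
    # Sample information matrix (up to 1/sigma^2 scaling) for the
    # illustrative 2-tap FIR model y(t) = b1*u(t-1) + b2*u(t-2) + e(t).
    # The prediction gradient is psi(t) = [u(t-1), u(t-2)].
    N = len(u)
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in range(2, N):
        psi = [u[t - 1], u[t - 2]]
        for i in range(2):
            for j in range(2):
                m[i][j] += psi[i] * psi[j] / N
    return m

def det2(m):
    # Determinant of a 2x2 matrix: zero means a singular information matrix.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

random.seed(0)
N = 10000
u_const = [1.0] * N                                # constant input: not rich enough
u_rich = [random.gauss(0, 1) for _ in range(N)]    # white noise: rich

print(det2(info_matrix(u_const)))   # essentially zero: singular
print(det2(info_matrix(u_rich)))    # bounded away from zero: nonsingular
```

A constant input excites only one frequency, so the two gradient components are perfectly correlated and the determinant vanishes; richer inputs decorrelate them, matching the minimum-richness analysis the abstract describes.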

133 citations


Proceedings ArticleDOI
01 Aug 2009
TL;DR: A simple two-step procedure is presented that can cope with the situation where the unknown plant may or may not have non-minimum-phase zeros, which can otherwise lead to an unstable closed loop.
Abstract: Model Reference control design methods fail when the plant has one or more non-minimum-phase zeros that are not included in the reference model, possibly leading to an unstable closed loop. This is a very serious problem for data-based control design methods, where the plant is typically unknown. For Iterative Feedback Tuning, a procedure was proposed in [1] to overcome this difficulty. In this paper we extend this idea to Virtual Reference Feedback Tuning, another data-based control design method. We present a simple two-step procedure that can cope with the situation where the unknown plant may or may not have non-minimum-phase zeros.

56 citations


Journal ArticleDOI
TL;DR: By introducing the concept of informative data at a particular parameter value, this paper is able to establish a number of equivalences and connections between these four ingredients of the identification problem, for both open-loop and closed-loop identification.

11 citations


Proceedings ArticleDOI
12 Aug 2009
TL;DR: Building on a 1994 result of De Bruyne et al. showing that knowledge of the poles of a transfer function yields upper bounds on the H∞ norm as a constant multiple of its H2 norm, this work provides tight upper bounds also for the case where the transfer functions are restricted to those having a real-valued impulse response.
Abstract: Various optimal control strategies exist in the literature. Prominent approaches are Robust Control and Linear Quadratic Regulators, the first being related to the H∞ norm of a system, the second to the H2 norm. In 1994, F. De Bruyne et al. [1] showed that, assuming knowledge of the poles of a transfer function, one can derive upper bounds on the H∞ norm as a constant multiple of its H2 norm. We strengthen these results by providing tight upper bounds also for the case where the transfer functions are restricted to those having a real-valued impulse response. Moreover, the results are extended by studying spaces consisting of transfer functions with a common denominator polynomial. These spaces, called rational modules, have the feature that their analytic properties, captured in the integral kernel reproducing them, are accessible by means of purely algebraic techniques.
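The H∞-vs-H2 relationship the abstract refers to can be checked numerically. The sketch below is illustrative and does not reproduce the paper's bounds: for the simple first-order system G(z) = 1/(z - a) with a single real pole a, it computes the H2 norm from the impulse response and the H∞ norm as the peak gain on the unit circle, showing that their ratio is finite but grows as the pole approaches the unit circle — which is why any such bound must depend on pole locations.

```python
import cmath
import math

def h2_norm(a, K=20000):
    # H2 norm of G(z) = 1/(z - a): impulse response g_k = a**(k-1), k >= 1,
    # so ||G||_2^2 = sum_k g_k^2 (truncated at K terms).
    return math.sqrt(sum((a ** (k - 1)) ** 2 for k in range(1, K)))

def hinf_norm(a, grid=20000):
    # H-infinity norm: peak gain of G over a dense grid on the unit circle.
    return max(abs(1.0 / (cmath.exp(2j * math.pi * n / grid) - a))
               for n in range(grid))

for a in (0.5, 0.9, 0.99):
    h2, hinf = h2_norm(a), hinf_norm(a)
    print(a, hinf / h2)   # ratio grows as the pole approaches the unit circle
```

For a = 0.5 the ratio is sqrt(3) ≈ 1.73; the closer the pole sits to the unit circle, the larger the constant relating the two norms must be.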

2 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: The role of input and model class selection for the auto-covariance of the estimated transfer function is explained without reference to any particular parametrization and the Fisher information metric is shown to provide an asymptotically tight lower bound for the positive kernel representing the covariance at the system which generated the input-output data.
Abstract: This paper addresses the variance quantification problem for system identification based on the prediction error framework. The role of input and model class selection for the auto-covariance of the estimated transfer function is explained without reference to any particular parametrization. This is achieved by lifting the concept of covariance from the parameter space to the system manifold, where it is represented by a positive kernel instead of a positive definite matrix. The Fisher information metric as defined in information geometry allows an interpretation as a signal-to-noise-ratio-weighted standard metric after embedding the system manifold in the Hardy space of square integrable analytic functions. The reproducing kernel of the tangent space with respect to this metric is shown to provide an asymptotically tight lower bound for the positive kernel representing the covariance at the system which generated the input-output data.
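For context, the parametrization-free kernel bound described above generalizes a classical result from the prediction-error literature (Ljung's asymptotic variance approximation), which is not restated in the abstract. As a standard textbook formula, for a model of order $n$ identified from $N$ data points,

$$\operatorname{var}\,\hat G(e^{i\omega}) \;\approx\; \frac{n}{N}\,\frac{\Phi_v(\omega)}{\Phi_u(\omega)},$$

where $\Phi_v$ and $\Phi_u$ are the noise and input spectra. The signal-to-noise-ratio weighting in the Fisher information metric mentioned in the abstract plays the role of the ratio $\Phi_u/\Phi_v$ in this classical expression.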