Journal ArticleDOI

Control of large-scale dynamic systems by aggregation

01 Jun 1968-IEEE Transactions on Automatic Control (IEEE)-Vol. 13, Iss: 3, pp 246-253
TL;DR: Using the quantitative definition of weak coupling proposed by Milne, a suboptimal control policy for the weakly coupled system is derived and questions of performance degradation and of stability of such suboptimally controlled systems are answered.
Abstract: A method is proposed to obtain a model of a dynamic system with a state vector of high dimension. The model is derived by "aggregating" the original system state vector into a lower-dimensional vector. Some properties of the aggregation method are investigated in the paper. The concept of aggregation, a generalization of that of projection, is related to that of state vector partition and is useful not only in building a model of reduced dimension, but also in unifying several topics in control theory, such as regulators with incomplete state feedback, characteristic value computations, modal controls, and bounds on the solution of the matrix Riccati equation. Using the quantitative definition of weak coupling proposed by Milne, a suboptimal control policy for the weakly coupled system is derived. Questions of performance degradation and of stability of such suboptimally controlled systems are also answered in the paper.
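The aggregation z = Cx described in the abstract yields a reduced model ż = Fz, and the aggregation is exact when FC = CA; for C of full row rank one may take F = CAC⁺ with C⁺ the pseudo-inverse. A minimal NumPy sketch (the 4-state symmetric test system and the choice of retained modes are illustrative assumptions, not from the paper):

```python
import numpy as np

# Illustrative stable symmetric system x_dot = A x (symmetric, so its
# eigenvectors are real and serve as both left and right eigenvectors)
A = np.array([[-2.0,  0.5,  0.0,  0.1],
              [ 0.5, -3.0,  0.2,  0.0],
              [ 0.0,  0.2, -1.0,  0.3],
              [ 0.1,  0.0,  0.3, -4.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order

# Aggregation matrix C: rows span an A-invariant subspace
# (here the two slowest modes, i.e. eigenvalues closest to zero)
C = eigvecs[:, -2:].T

# Aggregated dynamics z_dot = F z with z = C x
F = C @ A @ np.linalg.pinv(C)

# Exact-aggregation condition: F C = C A
print(np.allclose(F @ C, C @ A))   # True
```

Because the rows of C span an invariant subspace, the 2-state model reproduces z(t) = Cx(t) exactly; for a non-invariant C the same formula gives only an approximation.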
Citations
Book ChapterDOI
01 Jan 1997
TL;DR: It would be difficult today to find a field in which researchers or engineers do not benefit from the efficiency of computer codes in their everyday work.
Abstract: For the past 25 years, there has been ever-increasing interest in computerized analysis, and it would be difficult today to find a field in which researchers or engineers do not benefit from the efficiency of computer codes in their everyday work.

5 citations

Proceedings ArticleDOI
28 Jul 2014
TL;DR: A new approach to hierarchical control for nonlinear systems is discussed, based on the recently introduced notion of approximate simulation relation, together with a new technique for designing interface functions that lift the controller of the abstract (approximate) system to the concrete (complex) system.
Abstract: Hierarchical control, which imposes an (at least) two-layer hierarchical structure on the control system architecture, is a recently developed and efficient method for investigating system properties and designing control laws. The method is worth studying because of its power in handling complex dynamics, especially large-scale systems. In this paper, a new approach to hierarchical control for nonlinear systems is discussed, based on the recently introduced notion of approximate simulation relation. An approximate simulation allows us to synthesize a hierarchical control law and guarantees that an approximation bound is easily obtained. We then present a new technique for the design of interface functions, which lift the controller of the abstract (approximate) system to the concrete (complex) system. Finally, an example illustrates the correctness and effectiveness of our design technique.

5 citations

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter provides an overview of five project contributions – performance monitoring based on the DiSL instrumentation framework, measurement evaluation using the SPL formalism, performance modeling with fluid semantics, adaptation with DEECo and design with IRM-SA – all in the context of the cloud case study.
Abstract: The ASCENS project works with systems of self-aware, self-adaptive and self-expressive ensembles. Performance awareness represents a concern that cuts across multiple aspects of such systems, from the techniques to acquire performance information by monitoring, to the methods of incorporating such information into the design and decision-making processes. This chapter provides an overview of five project contributions – performance monitoring based on the DiSL instrumentation framework, measurement evaluation using the SPL formalism, performance modeling with fluid semantics, adaptation with DEECo and design with IRM-SA – all in the context of the cloud case study.

5 citations

Posted Content
TL;DR: In this article, the authors present a simple stochastic multi-sector growth model for macroeconomic systems, based on the fractal Lévy exponent of stock market index fluctuations and the Pareto exponent of the investors' wealth distribution.
Abstract: Masanao Aoki developed a new methodology for a basic problem of economics: rigorously deducing macroeconomic dynamics as emerging from the interactions of many individual agents. This includes deduction of the fractal/intermittent fluctuations of macroeconomic quantities from the granularity of the mezo-economic collective objects (large individual wealth, highly productive geographical locations, emergent technologies, emergent economic sectors) in which the micro-economic agents self-organize. In particular, we present some theoretical predictions, which have also met extensive validation from empirical data in a wide range of systems:
- The fractal Lévy exponent of stock market index fluctuations equals the Pareto exponent of the investors' wealth distribution. The origin of the macroeconomic dynamics is therefore found in the granularity induced by the wealth/capital of the wealthiest investors.
- Economic cycles consist of a Schumpeter "creative destruction" pattern whereby the maxima are cusp-shaped while the minima are smooth. In between the cusps, the cycle consists of the sum of two "crossing exponentials": one decaying and the other increasing.
This unification within the same theoretical framework of short-term market fluctuations and long-term economic cycles offers the perspective of a genuine conceptual synthesis between micro- and macroeconomics. Joining another giant of contemporary science, Phil Anderson, Aoki emphasized the role of rare, large fluctuations in the emergence of macroeconomic phenomena out of microscopic interactions and, in particular, their non-self-averaging character, in the language of statistical physics. In this light, we present a simple stochastic multi-sector growth model.

5 citations

Journal ArticleDOI
TL;DR: In this paper, a state space method for building time series models without detrending each component of the data vectors is presented; it uses a recent algorithm based on the singular value decomposition of the Hankel matrix and a two-step sequential procedure suggested by the notion of dynamic aggregation.
Abstract: A state space method for building time series models without detrending each component of the data vectors is presented. The method uses a recent algorithm based on the singular value decomposition of the Hankel matrix and a two-step sequential procedure suggested by the notion of dynamic aggregation.
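The two ingredients the abstract names — an SVD of a Hankel matrix and recovery of a state-space realization — can be illustrated with a minimal Ho-Kalman/ERA-style sketch in NumPy (the 2-state test system is an assumption for illustration; this is not the paper's two-step procedure for undetrended data):

```python
import numpy as np

# Hidden 2-state discrete-time system, used only to generate data
A = np.array([[0.9, 0.1], [0.0, 0.5]])
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 1.0]])

# Impulse-response (Markov) parameters m_k = c A^k b
m = [(c @ np.linalg.matrix_power(A, k) @ b).item() for k in range(8)]

# Hankel matrix of the Markov parameters and its one-step shift
r = 3
H0 = np.array([[m[i + j] for j in range(r)] for i in range(r)])
H1 = np.array([[m[i + j + 1] for j in range(r)] for i in range(r)])

# SVD and rank-2 truncation split H0 into observability and
# controllability factors: H0 = O @ R
U, S, Vt = np.linalg.svd(H0)
n = 2
U, S, Vt = U[:, :n], S[:n], Vt[:n, :]
O = U * np.sqrt(S)
R = np.sqrt(S)[:, None] * Vt

# Realization: shifted Hankel gives the state matrix
Ahat = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)
chat, bhat = O[:1, :], R[:, :1]

# The identified model reproduces the Markov parameters
m_hat = [(chat @ np.linalg.matrix_power(Ahat, k) @ bhat).item()
         for k in range(8)]
print(np.allclose(m, m_hat))   # True
```

Since rank(H0) equals the true order, the realization is exact up to a similarity transformation, which is why the Markov parameters match even beyond the Hankel window.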

5 citations

References
Journal ArticleDOI
TL;DR: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations.
Abstract: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations. The coordinating program generates at each cycle new objective forms for each part, and each part generates in turn, from its optimal basic feasible solutions, new activity columns for the interconnecting program. Viewed as an instance of a "generalized programming problem" whose columns are drawn freely from given convex sets, such a problem can be studied by an appropriate generalization of the duality theorem for linear programming, which permits a sharp distinction to be made between those constraints that pertain only to a part of the problem and those that connect its parts. This leads to a generalization of the Simplex Algorithm, for which the decomposition procedure becomes a special case. Besides holding promise for the efficient computation of large-scale systems, the principle yields a certain rationale for the "decentralized decision process" in the theory of the firm. Formally, the prices generated by the coordinating program cause the manager of each part to look for a "pure" sub-program, the analogue of a pure strategy in game theory, which he proposes to the coordinator as the best he can do. The coordinator finds the optimum "mix" of pure sub-programs, using new proposals and earlier ones consistent with over-all demands and supply, and thereby generates new prices that again generate new proposals by each of the parts, etc. The iterative process is finite.
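The decomposition rests on representing each part's feasible set by convex combinations of its extreme points, so the coordinating ("master") program works only with columns drawn from the parts. A minimal SciPy sketch of that representation (the tiny box-constrained part and single coupling constraint are illustrative assumptions; the full method generates columns iteratively from the sub-programs rather than enumerating them):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Full problem: min c.x  s.t. coupling row x1 + x2 <= 1.5, part 0 <= x <= 1
c = np.array([-1.0, -2.0])
A_cpl, b_cpl = np.array([[1.0, 1.0]]), np.array([1.5])
full = linprog(c, A_ub=A_cpl, b_ub=b_cpl, bounds=[(0, 1), (0, 1)])

# Decomposition: x = sum_j lambda_j v_j over the extreme points v_j of the
# part's polytope (here the 4 corners of the unit box, enumerated up front)
V = np.array(list(itertools.product([0.0, 1.0], repeat=2)))
cost = V @ c                   # objective coefficient of each column
coupling = (V @ A_cpl.T).T     # coupling-row coefficient of each column

# Coordinating program over the weights lambda (convexity row sums to 1)
master = linprog(cost,
                 A_ub=coupling, b_ub=b_cpl,
                 A_eq=np.ones((1, 4)), b_eq=[1.0],
                 bounds=[(0, None)] * 4)

print(full.fun, master.fun)    # both -2.5: the master reproduces the optimum
```

In the actual algorithm the master's dual prices are sent to each part, which returns its best column (a "pure" sub-program) only when the reduced cost warrants it; the enumeration above simply shows why the reformulation is equivalent.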

2,281 citations

01 Jan 1960
TL;DR: In this article, the authors considered the problem of least-squares feedback control in a linear time-invariant system with n states, and proposed a solution based on the concept of controllability.
Abstract: THIS is one of the two ground-breaking papers by Kalman that appeared in 1960, with the other one (discussed next) being the filtering and prediction paper. This first paper, which deals with linear-quadratic feedback control, set the stage for what came to be known as LQR (Linear-Quadratic-Regulator) control, while the combination of the two papers formed the basis for LQG (Linear-Quadratic-Gaussian) control. Both LQR and LQG control had major influence on researchers, teachers, and practitioners of control in the decades that followed. The idea of designing a feedback controller such that the integral of the square of tracking error is minimized was first proposed by Wiener [17] and Hall [8], and further developed in the influential book by Newton, Gould and Kaiser [12]. However, the problem formulation in this book remained unsatisfactory from a mathematical point of view, but, more importantly, the algorithms obtained allowed application only to rather low-order systems and were thus of limited value. This is not surprising, since it basically took until the H2-interpretation of LQG control in the 1980s before a satisfactory formulation of least-squares feedback control design was obtained. Kalman's formulation in terms of finding the least-squares control that evolves from an arbitrary initial state is a precise formulation of the optimal least-squares transient control problem. The paper introduced the very important notion of controllability, as the possibility of transferring any initial state to zero by a suitable control action. It includes the necessary and sufficient condition for controllability in terms of the positive definiteness of the controllability Gramian, and the fact that the linear time-invariant system with n states,
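The machinery this commentary describes — a controllability test followed by the least-squares-optimal state feedback obtained from an algebraic Riccati equation — can be sketched in a few lines of SciPy (the double-integrator plant and unit weights are assumptions chosen for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator x_dot = A x + B u with quadratic cost weights Q, R
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# Kalman's controllability test: rank [B, AB] = n
assert np.linalg.matrix_rank(np.hstack([B, A @ B])) == 2

# LQR gain K = R^{-1} B^T P from the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The optimal closed loop A - B K is asymptotically stable
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))   # True
```

For this plant the Riccati equation can be solved by hand, giving K = [1, √3], which is a convenient sanity check on the numerical solution.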

1,451 citations

Journal ArticleDOI
TL;DR: A method is proposed for reducing large matrices by constructing a matrix of lower order which has the same dominant eigenvalues and eigenvectors as the original system.
Abstract: Often it is possible to represent physical systems by a number of simultaneous linear differential equations with constant coefficients, \dot{x} = Ax + r but for many processes (e.g., chemical plants, nuclear reactors), the order of the matrix A may be quite large, say 50×50, 100×100, or even 500×500. It is difficult to work with these large matrices and a means of approximating the system matrix by one of lower order is needed. A method is proposed for reducing such matrices by constructing a matrix of lower order which has the same dominant eigenvalues and eigenvectors as the original system.
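The idea of a lower-order matrix that keeps the dominant eigenvalues can be sketched by projecting onto the dominant eigenvectors (a simplified modal-projection stand-in for the paper's construction; the 4th-order test system with well-separated time scales is an illustrative assumption):

```python
import numpy as np

# Illustrative 4th-order system with two dominant (slow) modes -1, -2 and
# two fast modes -50, -60, built from a known modal matrix V
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
A = V @ np.diag([-1.0, -2.0, -50.0, -60.0]) @ np.linalg.inv(V)

# Keep the k = 2 eigenvalues closest to the imaginary axis (dominant modes)
w, X = np.linalg.eig(A)
idx = np.argsort(-w.real)[:2]
Vk = X[:, idx].real                # retained right eigenvectors
Ar = np.linalg.pinv(Vk) @ A @ Vk   # 2x2 reduced system matrix

# The reduced model keeps exactly the dominant eigenvalues of A
print(sorted(np.linalg.eigvals(Ar).real))   # approx [-2.0, -1.0]
```

The fast modes decay almost immediately, so the 2nd-order model reproduces the slow transient of the 500×500-scale systems the abstract has in mind at a fraction of the cost.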

614 citations

Journal ArticleDOI
TL;DR: In this article, a constructive design procedure for the problem of estimating the state vector of a discrete-time linear stochastic system with time-invariant dynamics when certain constraints are imposed on the number of memory elements of the estimator is presented.
Abstract: The paper presents a constructive design procedure for the problem of estimating the state vector of a discrete-time linear stochastic system with time-invariant dynamics when certain constraints are imposed on the number of memory elements of the estimator. The estimator reconstructs the state vector exactly for deterministic systems while the steady-state performance in the stochastic case may be comparable to that obtained by the optimal (unconstrained) Wiener-Kalman filter.
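The deterministic half of the claim — exact state reconstruction from a fixed, finite number of stored outputs — can be illustrated with a minimal observability-based sketch in NumPy (the 2-state system is an assumption for illustration; this is not the paper's constrained-memory design procedure):

```python
import numpy as np

# Deterministic discrete-time system x_{k+1} = A x_k, y_k = c x_k
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
c = np.array([[1.0, 0.0]])

# Observability matrix for a window of n = 2 stored outputs
O = np.vstack([c, c @ A])
assert np.linalg.matrix_rank(O) == 2   # observable: the window fixes the state

# Simulate two outputs from an unknown initial state, then invert the map
x0 = np.array([[3.0], [-1.0]])
y = np.vstack([c @ x0, c @ A @ x0])    # the only data the estimator stores
x0_hat = np.linalg.solve(O, y)

print(np.allclose(x0_hat, x0))   # True: exact recovery from finite memory
```

With process or measurement noise the same finite-window map is no longer exact, which is where a design such as the paper's must trade memory against steady-state accuracy relative to the unconstrained Wiener-Kalman filter.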

68 citations