
Showing papers on "Optimal control published in 2011"


Book
10 Apr 2011
TL;DR: In this article, nonlinear model predictive control (NMPC) is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner.
Abstract: Nonlinear Model Predictive Control is a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. An introduction to nonlinear optimal control algorithms gives insight into how the nonlinear optimisation routine, the core of any NMPC controller, works. An appendix covering NMPC software and accompanying software in MATLAB and C++ (downloadable from www.springer.com/ISBN) enables readers to perform computer experiments exploring the possibilities and limitations of NMPC.
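To illustrate the receding-horizon idea the book is built around, here is a minimal NMPC sketch in Python; the pendulum-like model, weights, horizon, and use of SciPy's SLSQP solver are illustrative assumptions and are not taken from the book's accompanying MATLAB/C++ software.

```python
# Minimal receding-horizon NMPC sketch: solve a finite-horizon OCP at every step and
# apply only the first input. Model, weights, horizon, and solver are assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x, u, dt=0.1):
    """Assumed discrete-time model (undamped pendulum with torque input)."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-np.sin(theta) + u)])

def horizon_cost(u_seq, x0, N, Q=1.0, R=0.1):
    """Quadratic stage costs summed over the prediction horizon plus a terminal penalty."""
    x, J = np.array(x0, dtype=float), 0.0
    for k in range(N):
        J += Q * (x @ x) + R * u_seq[k] ** 2
        x = f(x, u_seq[k])
    return J + 10.0 * (x @ x)   # terminal penalty used here instead of a terminal constraint

def nmpc_step(x0, N=15, u_max=2.0):
    """Solve the finite-horizon problem and return the first control move."""
    res = minimize(horizon_cost, np.zeros(N), args=(x0, N),
                   method="SLSQP", bounds=[(-u_max, u_max)] * N)
    return res.x[0]

x = np.array([1.0, 0.0])
for _ in range(50):            # closed loop: re-optimize at every sampling instant
    u = nmpc_step(x)
    x = f(x, u)
print("final state:", x)
```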

1,234 citations


Journal ArticleDOI
TL;DR: It is shown that unbounded synchronization regions that achieve synchronization on arbitrary digraphs containing a spanning tree can be guaranteed by using linear quadratic regulator based optimal control and observer design methods at each node.
Abstract: This technical note studies synchronization of identical general linear systems on a digraph containing a spanning tree. A leader node or command generator is considered, which generates the desired tracking trajectory. A framework for cooperative tracking control is proposed, including full state feedback control, observer design and dynamic output feedback control. The classical system theory notion of duality is extended to networked systems. It is shown that unbounded synchronization regions that achieve synchronization on arbitrary digraphs containing a spanning tree can be guaranteed by using linear quadratic regulator based optimal control and observer design methods at each node.
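A sketch of the local LQR design step referred to above; the agent dynamics (a double integrator) and the weights are placeholder assumptions, and the digraph-dependent coupling gain of the full synchronization protocol is only indicated in a comment.

```python
# Local LQR design step used at each node: identical agent dynamics (A, B), gain from the
# continuous-time algebraic Riccati equation. Matrices and weights are illustrative; the
# digraph-dependent coupling gain of the full cooperative tracking protocol is not computed here.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # assumed agent dynamics: a double integrator
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

# A cooperative tracking law of the form u_i = c K [ sum_j a_ij (x_j - x_i) + g_i (x_0 - x_i) ]
# would then use a coupling gain c chosen from the graph; here we only check the local loop.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```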

870 citations


Proceedings ArticleDOI
09 May 2011
TL;DR: It is experimentally shown that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.
Abstract: We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produce an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.
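A hedged numpy sketch of the core update in the spirit of the paper: sample noisy perturbations of the current trajectory, weight them by exponentiated cost, and average. The cost terms, noise model, and temperature below are simplified stand-ins rather than the exact STOMP formulation.

```python
# Gradient-free trajectory update in the spirit of STOMP: sample noisy perturbations of the
# current trajectory, weight them by exponentiated cost, and average. Cost terms, noise model,
# and the temperature h are simplified stand-ins for the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def trajectory_cost(traj, obstacle=2.0):
    """Assumed cost: smoothness (squared second differences) plus a soft obstacle penalty."""
    smooth = np.sum(np.diff(traj, n=2) ** 2)
    obs = np.sum(np.exp(-(traj - obstacle) ** 2))   # penalize passing near the obstacle
    return smooth + obs

def stomp_like_update(traj, n_samples=20, sigma=0.1, h=10.0):
    """One iteration: perturb, evaluate, and combine with cost-weighted (softmax) weights."""
    noise = sigma * rng.standard_normal((n_samples, traj.size))
    noise[:, [0, -1]] = 0.0                          # keep start and goal fixed
    costs = np.array([trajectory_cost(traj + eps) for eps in noise])
    w = np.exp(-h * (costs - costs.min()) / (costs.max() - costs.min() + 1e-9))
    w /= w.sum()
    return traj + w @ noise                          # cost-weighted average of perturbations

traj = np.linspace(0.0, 4.0, 30)                     # initial straight-line guess
for _ in range(100):
    traj = stomp_like_update(traj)
print("final cost:", trajectory_cost(traj))
```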

817 citations


Journal ArticleDOI
TL;DR: In static simulation for a power-split hybrid vehicle, the fuel economy of the vehicle using the control algorithm proposed in this brief is found to be very close (typically within 1%) to the fuel economy obtained through global optimal control based on dynamic programming (DP).
Abstract: A number of strategies for the power management of hybrid electric vehicles (HEVs) are proposed in the literature. A key challenge is to achieve near-optimality while keeping the methodology simple. Pontryagin's minimum principle (PMP) is suggested as a viable real-time strategy. In this brief, the global optimality of the principle under reasonable assumptions is described from a mathematical viewpoint. Instantaneous optimal control with an appropriate equivalent parameter for battery usage is shown to be possibly a global optimal solution under the assumption that the internal resistance and open-circuit voltage of a battery are independent of the state-of-charge (SOC). This brief also demonstrates that the optimality of the equivalent consumption minimization strategy (ECMS) results from the close relation of ECMS to the optimal-control-theoretic concept of PMP. In static simulation for a power-split hybrid vehicle, the fuel economy of the vehicle using the control algorithm proposed in this brief is found to be very close (typically within 1%) to the fuel economy obtained through global optimal control based on dynamic programming (DP).
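To make the PMP-ECMS connection concrete, a hedged sketch in generic notation (the symbols and the simplified SOC dynamics are assumptions, not the brief's exact model):

```latex
% Hamiltonian of the power-management problem; P_b is battery power, \lambda the costate
% (generic notation, assumed here).
H(\mathrm{SOC},P_b,\lambda) = \dot m_f\!\left(P_{\mathrm{dem}}-P_b\right)
      + \lambda\,\dot{\mathrm{SOC}}(P_b),
\qquad
\dot{\mathrm{SOC}}(P_b) \approx -\frac{P_b}{V_{oc}\,Q_{\mathrm{batt}}} .

% If V_{oc} and the internal resistance do not depend on SOC (the brief's assumption), then
% \dot\lambda = -\partial H/\partial \mathrm{SOC} = 0, so \lambda is constant and PMP reduces to
% the instantaneous ECMS minimization with a constant equivalence factor s:
u^{*}(t) = \arg\min_{P_b}\Big[\dot m_f\!\left(P_{\mathrm{dem}}-P_b\right)
      + s\,\frac{P_b}{Q_{\mathrm{lhv}}}\Big],
\qquad
s = -\frac{\lambda\,Q_{\mathrm{lhv}}}{V_{oc}\,Q_{\mathrm{batt}}} .
```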

768 citations


Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition for consensusability under a common control protocol is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph jointly affect consensusability.
Abstract: This paper investigates the joint effect of agent dynamics, network topology and communication data rate on consensusability of linear discrete-time multi-agent systems. Neglecting the finite communication data rate constraint and under undirected graphs, a necessary and sufficient condition for consensusability under a common control protocol is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph jointly affect consensusability. The result is established by solving a discrete-time simultaneous stabilization problem. A lower bound of the optimal convergence rate to consensus, which is shown to be tight for some special cases, is provided as well. Moreover, a necessary and sufficient condition for formationability of multi-agent systems is obtained. As a special case, the discrete-time second-order consensus is discussed, where an optimal control gain is designed to achieve the fastest convergence. The effects of undirected graphs on consensusability/formationability and optimal convergence rate are exactly quantified by the ratio of the second smallest to the largest eigenvalues of the graph Laplacian matrix. An extension to directed graphs is also made. The consensus problem under a finite communication data rate is finally investigated.
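A small numpy check of the graph quantity highlighted above, the ratio of the second smallest to the largest Laplacian eigenvalue, computed for an assumed undirected example graph:

```python
# Ratio lambda_2 / lambda_N of the graph Laplacian eigenvalues, the quantity the abstract uses
# to quantify the effect of an undirected graph; the adjacency matrix is an assumed example.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes

L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
eig = np.sort(np.linalg.eigvalsh(L))        # real eigenvalues in ascending order
print("lambda_2 / lambda_N =", eig[1] / eig[-1])
```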

537 citations


Journal ArticleDOI
TL;DR: A novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method and a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method.
Abstract: In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on the Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees that the system state asymptotically tracks the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

530 citations


Journal ArticleDOI
TL;DR: Detailed simulations with a heavy-duty truck show that the developed ACC system provides significant benefits in terms of fuel economy and tracking capability while at the same time satisfying driver-desired car-following characteristics.
Abstract: This paper presents a novel vehicular adaptive cruise control (ACC) system that can comprehensively address issues of tracking capability, fuel economy and driver-desired response. A hierarchical control architecture is utilized in which a lower controller compensates for nonlinear vehicle dynamics and enables tracking of desired acceleration. The upper controller is synthesized under the framework of model predictive control (MPC) theory. A quadratic cost function is developed that considers the contradictions between minimal tracking error, low fuel consumption and accordance with driver dynamic car-following characteristics, while driver longitudinal ride comfort, driver permissible tracking range and rear-end safety are formulated as linear constraints. Employing a constraint softening method to avoid computational infeasibility, an optimal control law is numerically calculated using a quadratic programming algorithm. Detailed simulations with a heavy-duty truck show that the developed ACC system provides significant benefits in terms of fuel economy and tracking capability while at the same time satisfying driver-desired car-following characteristics.
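A compact sketch of the kind of softened quadratic program solved at each MPC step; the one-dimensional spacing/speed model, weights, and limits are assumptions rather than the paper's calibrated values.

```python
# Softened quadratic program of the type solved at each ACC-MPC step: quadratic
# tracking/comfort cost, acceleration bounds, a spacing constraint, and a slack variable
# so the problem stays feasible. Model, weights, and limits are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.5, 10
d0, v0, v_lead, d_des = 20.0, 22.0, 20.0, 25.0   # spacing, ego speed, leader speed, desired spacing

def rollout(a_seq):
    """Propagate spacing d and ego speed v assuming a constant leader speed."""
    d, v, traj = d0, v0, []
    for a in a_seq:
        v = v + dt * a
        d = d + dt * (v_lead - v)
        traj.append((d, v))
    return np.array(traj)

def cost(z, w_d=1.0, w_v=0.5, w_a=0.1, w_slack=1e3):
    a_seq, slack = z[:N], z[N]
    traj = rollout(a_seq)
    return (w_d * np.sum((traj[:, 0] - d_des) ** 2)       # spacing tracking error
            + w_v * np.sum((traj[:, 1] - v_lead) ** 2)    # relative speed
            + w_a * np.sum(a_seq ** 2)                    # ride comfort
            + w_slack * slack ** 2)                       # penalize use of the slack

def min_spacing(z, d_min=10.0):
    """Softened rear-end safety constraint: d_k + slack >= d_min for all k."""
    return rollout(z[:N])[:, 0] + z[N] - d_min

res = minimize(cost, np.zeros(N + 1), method="SLSQP",
               bounds=[(-3.0, 2.0)] * N + [(0.0, None)],
               constraints=[{"type": "ineq", "fun": min_spacing}])
print("first acceleration command:", res.x[0], " slack:", res.x[N])
```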

471 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the behavior of a small-scale organic Rankine cycle (ORC) used to recover energy from a waste heat source with variable flow rate and temperature, and compare three different control strategies.

437 citations


Journal ArticleDOI
TL;DR: An automatic C-code generation strategy for real-time nonlinear model predictive control (NMPC) is presented, which is designed for applications with kilohertz sample rates and shows a promising performance being able to provide feedback in much less than a millisecond.

414 citations


Journal ArticleDOI
01 Feb 2011
TL;DR: It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control.
Abstract: Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.
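The OPFB algorithms above build on policy iteration; the model-based policy iteration for discrete-time LQR that they reproduce from input/output data can be sketched as follows (matrices are illustrative, and the actual OPFB scheme replaces the Lyapunov-equation step with least squares on measured data).

```python
# Model-based policy iteration for discrete-time LQR, the baseline that the paper's
# output-feedback ADP algorithms reproduce from measured input/output data without (A, B).
# The matrices and the initial stabilizing gain are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[0.5, 1.0]])                  # any stabilizing initial policy
for _ in range(30):
    Ac = A - B @ K
    # Policy evaluation: P solves Ac^T P Ac - P + Q + K^T R K = 0
    P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
    # Policy improvement: K = (R + B^T P B)^{-1} B^T P A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print("converged gain:", K)
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(A - B @ K))))
```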

406 citations


Journal ArticleDOI
TL;DR: An hp‐adaptive pseudospectral method that iteratively determines the number of segments, the width of each segment, and the polynomial degree required in each segment in order to obtain a solution to a user‐specified accuracy.
Abstract: An hp-adaptive pseudospectral method is presented for numerically solving optimal control problems. The method presented in this paper iteratively determines the number of segments, the width of each segment, and the polynomial degree required in each segment in order to obtain a solution to a user-specified accuracy. Starting with a global pseudospectral approximation for the state, on each iteration the method determines locations for the segment breaks and the polynomial degree in each segment for use on the next iteration. The number of segments and the degree of the polynomial on each segment continue to be updated until a user-specified tolerance is met. The terminology 'hp' is used because the segment widths (denoted h) and the polynomial degree (denoted p) in each segment are determined simultaneously. It is found that the method developed in this paper leads to higher accuracy solutions with less computational effort and memory than is required in a global pseudospectral method. Consequently, the method makes it possible to solve complex optimal control problems using pseudospectral methods in cases where a global pseudospectral method would be computationally intractable. Finally, the utility of the method is demonstrated on a variety of problems of varying complexity. Copyright 2010 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This work designs sparse and block sparse feedback gains that minimize the variance amplification of distributed systems and takes advantage of the separability of the sparsity-promoting penalty functions to decompose the minimization problem into sub-problems that can be solved analytically.
Abstract: We design sparse and block sparse feedback gains that minimize the variance amplification (i.e., the $H_2$ norm) of distributed systems. Our approach consists of two steps. First, we identify sparsity patterns of feedback gains by incorporating sparsity-promoting penalty functions into the optimal control problem, where the added terms penalize the number of communication links in the distributed controller. Second, we optimize feedback gains subject to structural constraints determined by the identified sparsity patterns. In the first step, the sparsity structure of feedback gains is identified using the alternating direction method of multipliers, which is a powerful algorithm well-suited to large optimization problems. This method alternates between promoting the sparsity of the controller and optimizing the closed-loop performance, which allows us to exploit the structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-promoting penalty functions to decompose the minimization problem into sub-problems that can be solved analytically. Several examples are provided to illustrate the effectiveness of the developed approach.
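Assuming a weighted l1 penalty as the concrete sparsity-promoting function (the paper also treats other penalties), the design problem and the analytical ADMM sub-problem solution can be sketched as follows, with notation assumed here:

```latex
% Sparsity-promoting optimal control with a weighted l1 surrogate for the number of links
% (gamma sets the performance/sparsity trade-off); notation assumed here.
\min_{F}\; J(F) + \gamma \sum_{i,j} w_{ij}\,\lvert F_{ij}\rvert,
\qquad J(F) = \text{closed-loop } H_2 \text{ norm under } u = -Fx .

% ADMM splits F from a copy G (constraint F = G, multiplier \Lambda, penalty \rho); by
% separability, the G-minimization step has the analytical soft-thresholding solution
G_{ij} = \operatorname{sign}(V_{ij})\,
         \max\!\Big(\lvert V_{ij}\rvert - \tfrac{\gamma\,w_{ij}}{\rho},\, 0\Big),
\qquad V = F + \tfrac{1}{\rho}\Lambda .
```

The second step described in the abstract (optimizing the gains subject to the identified sparsity structure) is unchanged by this particular choice of penalty.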

Journal ArticleDOI
TL;DR: A new iterative adaptive dynamic programming (ADP) method is proposed to solve a class of continuous-time nonlinear two-person zero-sum differential games and the convergence property of the performance index function is proved.

Journal ArticleDOI
TL;DR: Using linear co-positive Lyapunov functions, results for the synthesis of stabilizing, guaranteed performance and optimal control laws for switched linear systems are presented and applied to a simplified human immunodeficiency viral mutation model.
Abstract: This paper has been motivated by the problem of viral mutation in HIV infection. Under simplifying assumptions, viral mutation treatment dynamics can be viewed as a positive switched linear system. Using linear co-positive Lyapunov functions, results for the synthesis of stabilizing, guaranteed performance and optimal control laws for switched linear systems are presented. These results are then applied to a simplified human immunodeficiency viral mutation model. The optimal switching control law is compared with the law obtained through an easily computable guaranteed cost function. Simulation results show the effectiveness of these methods. Copyright © 2010 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Numerical results show that the use of LGR collocation as described in this paper leads to the ability to determine accurate primal and dual solutions for both finite and infinite-horizon optimal control problems.
Abstract: A method is presented for direct trajectory optimization and costate estimation of finite-horizon and infinite-horizon optimal control problems using global collocation at Legendre-Gauss-Radau (LGR) points. A key feature of the method is that it provides an accurate way to map the KKT multipliers of the nonlinear programming problem to the costates of the optimal control problem. More precisely, it is shown that the dual multipliers for the discrete scheme correspond to a pseudospectral approximation of the adjoint equation using polynomials one degree smaller than that used for the state equation. The relationship between the coefficients of the pseudospectral scheme for the state equation and for the adjoint equation is established. Also, it is shown that the inverse of the pseudospectral LGR differentiation matrix is precisely the matrix associated with an implicit LGR integration scheme. Hence, the method presented in this paper can be thought of as either a global implicit integration method or a pseudospectral method. Numerical results show that the use of LGR collocation as described in this paper leads to the ability to determine accurate primal and dual solutions for both finite and infinite-horizon optimal control problems.

Journal ArticleDOI
TL;DR: In this article, tube-based model predictive control, originally developed for linear systems, is extended to achieve robust control of nonlinear systems subject to additive disturbances; the local linear controller is replaced by an ancillary model predictive controller that forces the trajectories of the disturbed system to lie in a tube whose center is the reference trajectory.
Abstract: This paper extends tube-based model predictive control of linear systems to achieve robust control of nonlinear systems subject to additive disturbances. A central or reference trajectory is determined by solving a nominal optimal control problem. The local linear controller, employed in tube-based robust control of linear systems, is replaced by an ancillary model predictive controller that forces the trajectories of the disturbed system to lie in a tube whose center is the reference trajectory, thereby enabling robust control of uncertain nonlinear systems to be achieved. Copyright © 2011 John Wiley & Sons, Ltd.
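The two-layer structure described above can be summarized in generic notation (the stage cost, weights, and horizons below are placeholders, not the paper's exact formulation):

```latex
% Layer 1: nominal optimal control problem defining the central (reference) trajectory.
\min_{\bar u(\cdot)}\ \int_0^{T}\ell\big(\bar x(t),\bar u(t)\big)\,dt
\quad\text{s.t.}\quad \dot{\bar x}=f(\bar x,\bar u),\qquad \bar x(0)=\bar x_0 .

% Layer 2: ancillary MPC for the disturbed system \dot x = f(x,u)+w, keeping x inside a tube
% centered on the reference by penalizing deviation from (\bar x,\bar u).
\min_{u(\cdot)}\ \int_{t}^{t+T_a}\Big(\lVert x(s)-\bar x(s)\rVert_Q^2
      + \lVert u(s)-\bar u(s)\rVert_R^2\Big)\,ds .
```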

Journal ArticleDOI
TL;DR: In this paper, the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process, is studied.
Abstract: We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

Book
28 Jul 2011
TL;DR: The author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods.
Abstract: Semismooth Newton methods are a modern class of remarkably powerful and versatile algorithms for solving constrained optimization problems with partial differential equations (PDEs), variational inequalities, and related problems. This book provides a comprehensive presentation of these methods in function spaces, striking a balance between thoroughly developed theory and numerical applications. Although largely self-contained, the book also covers recent developments in the field, such as state-constrained problems and offers new material on topics such as improved mesh independence results. The theory and methods are applied to a range of practically important problems, including optimal control of semilinear elliptic differential equations, obstacle problems, and flow control of instationary Navier-Stokes fluids. In addition, the author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods. Audience: This book is appropriate for researchers and practitioners in PDE-constrained optimization, nonlinear optimization, and numerical analysis, as well as engineers interested in the current theory and methods for solving variational inequalities. It is also suitable as a text for an advanced graduate-level course in the aforementioned topics or applied functional analysis. Contents: Notation; Preface; Chapter One: Introduction; Chapter Two: Elements of Finite-Dimensional Nonsmooth Analysis; Chapter Three: Newton Methods for Semismooth Operator Equations; Chapter Four: Smoothing Steps and Regularity Conditions; Chapter Five: Variational Inequalities and Mixed Problems; Chapter Six: Mesh Independence; Chapter Seven: Trust-Region Globalization; Chapter Eight: State-Constrained and Related Problems; Chapter Nine: Several Applications; Chapter Ten: Optimal Control of Incompressible Navier-Stokes Flow; Chapter Eleven: Optimal Control of Compressible Navier-Stokes Flow; Appendix; Bibliography; Index.
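A minimal finite-dimensional illustration of the semismooth Newton idea, solving the nonsmooth reformulation min(x, F(x)) = 0 of a linear complementarity problem; the problem data are assumptions for illustration and do not come from the book.

```python
# Finite-dimensional semismooth Newton sketch: solve the complementarity system
#   x >= 0,  F(x) >= 0,  x_i F_i(x) = 0   via the nonsmooth equation Phi(x) = min(x, F(x)) = 0,
# using one element of the generalized Jacobian per iteration. Problem data are illustrative.
import numpy as np

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])
q = np.array([-1.0, 2.0])

def F(x):
    return M @ x + q                        # affine map of a linear complementarity problem

def Phi(x):
    return np.minimum(x, F(x))

def generalized_jacobian(x):
    """Row i is e_i^T where x_i < F_i(x); otherwise it is the i-th row of F'(x) = M."""
    return np.where((x < F(x))[:, None], np.eye(len(x)), M)

x = np.zeros(2)
for _ in range(20):
    residual = Phi(x)
    if np.linalg.norm(residual) < 1e-10:
        break
    x = x - np.linalg.solve(generalized_jacobian(x), residual)

print("solution:", x, " residual norm:", np.linalg.norm(Phi(x)))
```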

Journal ArticleDOI
TL;DR: The efficiency of the chopped random basis optimal control technique in optimizing different quantum processes is studied, and it is shown that in the considered cases it obtains results equivalent to those obtained via different optimal control methods while using fewer resources.
Abstract: In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
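A hedged classical analogue of the CRAB idea: restrict the control pulse to a chopped, randomized Fourier basis and optimize only the few expansion coefficients with a direct-search routine. The two-level toy system, basis size, and use of Nelder-Mead are illustrative assumptions.

```python
# CRAB-style optimization sketch: the control pulse is restricted to a chopped, randomized
# Fourier basis and only the expansion coefficients are optimized (here with Nelder-Mead).
# The two-level toy system, basis size, and optimizer are illustrative assumptions.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, n_steps, n_modes = 1.0, 100, 4
t = np.linspace(0.0, T, n_steps)
freqs = (np.arange(1, n_modes + 1) + rng.uniform(-0.5, 0.5, n_modes)) * np.pi / T  # randomized frequencies

sx = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)            # start in |0>
target = np.array([0.0, 1.0], dtype=complex)          # steer to |1>

def pulse(coeffs):
    """Control field as a truncated, randomized Fourier series."""
    return sum(c * np.sin(w * t) for c, w in zip(coeffs, freqs))

def infidelity(coeffs):
    """Propagate i dpsi/dt = (sz + u(t) sx) psi with a piecewise-constant exponential stepper."""
    u, psi, dt = pulse(coeffs), psi0.copy(), t[1] - t[0]
    for uk in u:
        psi = expm(-1j * dt * (sz + uk * sx)) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

res = minimize(infidelity, x0=np.zeros(n_modes), method="Nelder-Mead",
               options={"maxiter": 500, "xatol": 1e-4, "fatol": 1e-6})
print("final infidelity:", res.fun)
```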

Journal ArticleDOI
TL;DR: A nonlinear model predictive control method with a fast optimization algorithm is implemented to derive the vehicle control inputs based on road gradient conditions obtained from digital road maps and reveals the ability of the eco-driving system in significantly reducing fuel consumption of a vehicle.
Abstract: This paper presents a novel development of an ecological (eco) driving system for running a vehicle on roads with up-down slopes. Fuel consumed in a vehicle is greatly influenced by road gradients, aside from its velocity and acceleration characteristics. Therefore, optimum control inputs can only be computed through anticipated rigorous reasoning using information concerning road terrain, model of the vehicle dynamics, and fuel consumption characteristics. In this development, a nonlinear model predictive control method with a fast optimization algorithm is implemented to derive the vehicle control inputs based on road gradient conditions obtained from digital road maps. The fuel consumption model of a typical vehicle is formulated using engine efficiency characteristics and used in the objective function to ensure fuel economy driving. The proposed eco-driving system is simulated on a typical road with various shapes of up-down slopes. Simulation results reveal the ability of the eco-driving system in significantly reducing fuel consumption of a vehicle. The fuel saving behavior is graphically illustrated, compared, and analyzed to focus on the significance of this development.

Journal ArticleDOI
TL;DR: These pseudospectral methods can be written equivalently in either a differential or an implicit integral form, and it is shown that the map $\phi: [-1,+1) \rightarrow [0,+\infty)$ can be tuned to improve the quality of the discrete approximation.

Journal ArticleDOI
TL;DR: This paper studies the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach and uses an iterative ADP algorithm to obtain the optimal control law.
Abstract: In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.

Journal ArticleDOI
TL;DR: In this paper, the optimal design of LC filter, controller parameters, and damping resistance is carried out in case of grid-connected mode, while controller parameters and power sharing coefficients are optimized in case for autonomous mode.
Abstract: The dynamic nature of the distribution network challenges the stability and control effectiveness of the microgrids in both grid-connected and autonomous modes. In this paper, linear and nonlinear models of microgrids operating in different modes are presented. Optimal design of LC filter, controller parameters, and damping resistance is carried out in case of grid-connected mode. On the other hand, controller parameters and power sharing coefficients are optimized in case of autonomous mode. The control problem has been formulated as an optimization problem where particle swarm optimization is employed to search for optimal settings of the optimized parameters in each mode. In addition, nonlinear time-domain-based as well as eigenvalue-based objective functions are proposed to minimize the error in the measured power and to enhance the damping characteristics, respectively. Finally, the nonlinear time-domain simulation has been carried out to assess the effectiveness of the proposed controllers under different disturbances and loading conditions. The results show satisfactory performance with efficient damping characteristics of the microgrid considered in this study. Additionally, the effectiveness of the proposed approach for optimizing different parameters and its robustness have been confirmed through the eigenvalue analysis and nonlinear time-domain simulations.

Journal ArticleDOI
TL;DR: This paper considers the infinite horizon optimal control of logical control networks, including Boolean control networks as a special case and proves that the optimization technique developed for conventional logical control Networks is also applicable to this μ-memory case.
Abstract: This paper considers the infinite horizon optimal control of logical control networks, including Boolean control networks as a special case. Using the framework of game theory, the optimal control problem is formulated. In light of the algebraic form of a logical control network, its cycles can be calculated algebraically. Then the optimal control is revealed over a certain cycle. When games with memory μ > 1 are considered (meaning the players consider only the previous μ steps' actions at each step), the higher order logical control network is introduced and its algebraic form is also presented, which corresponds to a conventional logical control network (i.e., μ = 1). Then it is proved that the optimization technique developed for conventional logical control networks is also applicable to this μ-memory case.

Journal ArticleDOI
TL;DR: In this paper, the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as of its expected value, was studied.
Abstract: We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as of its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng’s-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 2(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng’s stochastic maximum principle.

Journal ArticleDOI
TL;DR: A novel gain-scheduling Proportional-plus-Integral (PI) control strategy is suggested for automatic generation control of a two-area thermal power system with governor dead-band nonlinearity, and the obtained optimal PI controller improves the dynamic performance of the power system as expected.

Journal ArticleDOI
TL;DR: A model-based control approach for PHEV energy management, based on minimizing the overall CO2 emissions produced, directly and indirectly, by vehicle utilization, is proposed, implemented in an energy-based simulator of a prototype PHEV, and validated on experimental data.
Abstract: Plug-in hybrid electric vehicles (PHEVs) are currently recognized as a promising solution for reducing fuel consumption and emissions due to the ability of storing energy through direct connection to the electric grid. Such benefits can be achieved only with a supervisory energy management strategy that optimizes the energy utilization of the vehicle. This control problem is particularly challenging for PHEVs due to the possibility of depleting the battery during usage and the vehicle-to-grid interaction during recharge. This paper proposes a model-based control approach for PHEV energy management that is based on minimizing the overall CO2 emissions produced, directly and indirectly, by vehicle utilization. A supervisory energy manager is formulated as a global optimal control problem and then cast into a local problem by applying Pontryagin's minimum principle. The proposed controller is implemented in an energy-based simulator of a prototype PHEV and validated on experimental data. A simulation study is conducted to calibrate the control parameters and to investigate the influence of vehicle usage conditions, environmental factors, and geographic scenarios on the PHEV performance using a large database of regulatory and "real-world" driving profiles.

Journal ArticleDOI
TL;DR: The paper shows that random variation in network delay can be handled efficiently with fuzzy logic based PID controllers over conventional PID controllers.
Abstract: An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and squared controller output for a networked control system (NCS). The tuning is attempted for a higher order and a time delay system using two stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed loop performances are compared. The paper shows that random variation in network delay can be handled efficiently with fuzzy logic based PID controllers over conventional PID controllers.
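A minimal sketch of the objective being minimized (ITAE plus weighted squared controller output) for a PID loop on an assumed first-order-plus-dead-time plant; the stochastic search itself (GA or PSO) is abstracted away, and any optimizer can be applied to this cost.

```python
# Objective of the tuning described above: ITAE plus weighted squared controller output,
# evaluated by simulating a PID loop on an assumed first-order-plus-dead-time plant.
# Any stochastic optimizer (GA, PSO, ...) can then minimize `cost` over (Kp, Ki, Kd).
import numpy as np

def cost(gains, K=1.0, tau=5.0, delay=1.0, dt=0.01, T=60.0, w=0.01):
    Kp, Ki, Kd = gains
    n, nd = int(T / dt), int(delay / dt)
    u_hist = np.zeros(n + nd)                    # delay line modeling the dead time
    y, integ, e_prev, J = 0.0, 0.0, 1.0, 0.0
    for k in range(n):
        e = 1.0 - y                              # unit step set-point
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
        e_prev = e
        u_hist[k + nd] = u
        y += dt * (-y + K * u_hist[k]) / tau     # first-order plant driven by the delayed input
        J += dt * (k * dt * abs(e) + w * u ** 2) # ITAE + weighted squared control effort
    return J

# Example: compare two candidate PID settings (a GA/PSO search would explore this space).
print(cost((2.0, 0.5, 0.5)), cost((1.0, 0.2, 0.1)))
```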

Journal ArticleDOI
TL;DR: It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control, and explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.
Abstract: The paradigm of continuous control using internal models has advanced understanding of human motor control. However, this paradigm ignores some aspects of human control, including intermittent feedback, serial ballistic control, triggered responses and refractory periods. It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control. Continuous control is included as a special case, but sampling, system matched hold, an intermittent predictor and an event trigger allow serial open-loop trajectories using intermittent feedback. The implementation here may be described as “continuous observation, intermittent action”. Beyond explaining unimodal regulation distributions in common with continuous control, these features naturally explain refractoriness and bimodal stabilisation distributions observed in double stimulus tracking experiments and quiet standing, respectively. Moreover, given that human control systems contain significant time delays, a biological-cybernetic rationale favours intermittent over continuous control: intermittent predictive control is computationally less demanding than continuous predictive control. A standard continuous-time predictive control model of the human operator is used as the underlying design method for an event-driven intermittent controller. It is shown that when event thresholds are small and sampling is regular, the intermittent controller can masquerade as the underlying continuous-time controller and thus, under these conditions, the continuous-time and intermittent controller cannot be distinguished. This explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.

Proceedings ArticleDOI
Anders Rantzer
01 Dec 2011
TL;DR: It is shown that a stabilizing distributed feedback controller, when it exists, can be computed using linear programming and the same methods are used to minimize the closed loop input-output gain.
Abstract: Stabilization and optimal control is studied for state space systems with nonnegative coefficients (positive systems). In particular, we show that a stabilizing distributed feedback controller, when it exists, can be computed using linear programming. The same methods are also used to minimize the closed loop input-output gain. An example devoted to distributed control of a vehicle platoon is examined.