Journal ArticleDOI

Robust control of dynamically interacting systems

01 Jul 1988-International Journal of Control (Taylor & Francis Group)-Vol. 48, Iss: 1, pp 65-88
TL;DR: Dynamic interaction with the environment is fundamental to the process of manipulation; this paper describes an approach to the design of ‘interaction controllers’ and contrasts it with an alternative approach.
Abstract: Dynamic interaction with the environment is fundamental to the process of manipulation. This paper describes an approach to the design of ‘interaction controllers’ and contrasts this with an approa...
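
The abstract is truncated above; the result most often cited from this paper (see, for example, the excerpt quoted under the 1992 citing article below) is a coupled-stability condition. A minimal sketch of that condition for the linear time-invariant case, in standard notation not taken from the paper itself: a manipulator whose interaction port has driving-point impedance \(Z(s)\) remains stable when coupled to an arbitrary passive environment if and only if the port itself is passive, i.e. \(Z(s)\) is positive real:

\[
Z(s)\ \text{analytic for}\ \operatorname{Re}(s) > 0, \qquad \operatorname{Re}\{Z(j\omega)\} \ge 0 \ \ \text{for all}\ \omega .
\]

Equivalently, the energy that can be extracted from the port is bounded: \(\int_0^t F(\tau)\,\dot{x}(\tau)\,d\tau \ge -E_0\) for some finite \(E_0 \ge 0\).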
Citations
Journal ArticleDOI
01 Oct 1993
TL;DR: It is shown that proper use of all four channels is of critical importance in achieving high-performance telepresence in the sense of accurate transmission of task impedances to the operator.
Abstract: Tools for quantifying teleoperation system performance and stability when communication delays are present are provided. A general multivariable system architecture is utilized which includes all four types of data transmission between master and slave: force and velocity in both directions. It is shown that proper use of all four channels is of critical importance in achieving high-performance telepresence in the sense of accurate transmission of task impedances to the operator. It is also shown that transparency and robust stability (passivity) are conflicting design goals in teleoperation systems. The analysis is illustrated by comparing transparency and stability in two common architectures, as well as a recent passivated approach and a new transparency-optimized architecture, using simplified one-degree-of-freedom examples.
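
The transparency-versus-passivity trade-off described above has a compact formalization that may help the reader; the two-port hybrid-matrix notation below is the standard one in this literature (sign conventions vary) and is not quoted from the paper. Modeling the teleoperator as a two-port relating hand force/velocity \((F_h, v_m)\) to environment force/velocity \((F_e, v_s)\),

\[
\begin{bmatrix} F_h \\ -v_s \end{bmatrix} =
\begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
\begin{bmatrix} v_m \\ F_e \end{bmatrix},
\]

and taking the environment to be an impedance \(F_e = Z_e v_s\), the impedance transmitted to the operator is

\[
Z_t \;=\; \frac{F_h}{v_m} \;=\; h_{11} \;-\; \frac{h_{12}\,h_{21}\,Z_e}{1 + h_{22}\,Z_e}.
\]

Perfect transparency, \(Z_t = Z_e\) for every \(Z_e\), requires \(h_{11} = h_{22} = 0\) and \(h_{12}h_{21} = -1\), which in general needs all four data channels (force and velocity in both directions).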

2,083 citations

Journal ArticleDOI
01 Mar 1998
TL;DR: Evidence is presented that robot-aided therapy does not have adverse effects, that patients tolerate the procedure, and that peripheral manipulation of the impaired limb may influence brain recovery; an approach to using kinematic data in a robot-aided assessment procedure is also presented.
Abstract: The authors' goal is to apply robotics and automation technology to assist, enhance, quantify, and document neurorehabilitation. This paper reviews a clinical trial involving 20 stroke patients with a prototype robot-aided rehabilitation facility developed at the Massachusetts Institute of Technology (MIT), Cambridge, and tested at Burke Rehabilitation Hospital, White Plains, NY. It also presents the authors' approach to analyzing kinematic data collected in the robot-aided assessment procedure. In particular, they present evidence (1) that robot-aided therapy does not have adverse effects, (2) that patients tolerate the procedure, and (3) that peripheral manipulation of the impaired limb may influence brain recovery. These results are based on standard clinical assessment procedures. The authors also present one approach to using kinematic data in a robot-aided assessment procedure.

1,346 citations

Journal ArticleDOI
TL;DR: In this article, a physically motivated, passivity-based formalism is used to provide energy conservation and stability guarantees in the presence of transmission delays, and an adaptive tracking controller is incorporated for the control of the remote robotic system and can be used to simplify, transform or enhance the remote dynamics perceived by the operator.
Abstract: A study is made of how the existence of transmission time delays affects the application of advanced robot control schemes to effective force-reflecting telerobotic systems. This application best exploits the presence of the human operator while making full use of available robot control technology and computing power. A physically motivated, passivity-based formalism is used to provide energy conservation and stability guarantees in the presence of transmission delays. The notion of wave variables is utilized to characterize time-delay systems and leads to a configuration for force-reflecting teleoperation. The effectiveness of the approach is demonstrated experimentally. Within the same framework, an adaptive tracking controller is incorporated for the control of the remote robotic system and can be used to simplify, transform, or enhance the remote dynamics perceived by the operator.
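
The wave-variable transformation mentioned above has a simple closed form that is worth recording; the scaling constant \(b\) and the symbols below follow common usage in the wave-variable literature rather than anything quoted from this abstract. With force \(F\) and velocity \(\dot{x}\) at one side of the communication channel,

\[
u = \frac{b\,\dot{x} + F}{\sqrt{2b}}, \qquad
v = \frac{b\,\dot{x} - F}{\sqrt{2b}}, \qquad b > 0,
\]

so the power entering the channel satisfies

\[
P = F\,\dot{x} = \tfrac{1}{2}\bigl(u^{2} - v^{2}\bigr).
\]

Transmitting the waves \(u, v\) instead of \(F, \dot{x}\) through a constant delay only stores energy in transit, so the delayed channel remains passive for any delay value.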

1,286 citations

Journal ArticleDOI
22 Nov 2001-Nature
TL;DR: The results show that humans learn to stabilize unstable dynamics using the skilful and energy-efficient strategy of selective control of impedance geometry.
Abstract: To manipulate objects or to use tools we must compensate for any forces arising from interaction with the physical environment. Recent studies indicate that this compensation is achieved by learning an internal model of the dynamics [1-6], that is, a neural representation of the relation between motor command and movement [5, 7]. In these studies interaction with the physical environment was stable, but many common tasks are intrinsically unstable [8, 9]. For example, keeping a screwdriver in the slot of a screw is unstable because excessive force parallel to the slot can cause the screwdriver to slip and because misdirected force can cause loss of contact between the screwdriver and the screw. Stability may be dependent on the control of mechanical impedance in the human arm because mechanical impedance can generate forces which resist destabilizing motion. Here we examined arm movements in an unstable dynamic environment created by a robotic interface. Our results show that humans learn to stabilize unstable dynamics using the skilful and energy-efficient strategy of selective control of impedance geometry.
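
As an illustration of why the geometry of impedance matters here, consider a divergent force field of the kind such a robotic interface can render; the specific form below is an assumption for illustration, not a quotation of the paper's protocol. If the field pushes the hand away from the nominal path with force

\[
F_x = \beta\,x, \qquad \beta > 0,
\]

where \(x\) is the lateral deviation from the path, then the hand is locally stable only if the arm's restoring stiffness in that direction exceeds the destabilizing gain, \(K_{xx} > \beta\), while stiffness along the movement direction need not increase. Selectively raising \(K_{xx}\), i.e. reshaping the endpoint-stiffness ellipse rather than co-contracting uniformly, achieves stability at lower effort, which is the 'energy-efficient strategy of selective control of impedance geometry' referred to above.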

1,027 citations


Cites background from "Robust control of dynamically interacting systems"

  • ...In these studies interaction with the physical environment was stable, but many common tasks are intrinsically unstable...


Journal ArticleDOI
12 May 1992
TL;DR: New control schemes of master-slave manipulators are proposed that provide the ideal kinesthetic coupling such that the operator can maneuver the system as though he/she were directly manipulating the remote object himself/herself.
Abstract: In this paper, the analysis and design of master-slave teleoperation systems are discussed. The goal of this paper is to build a superior master-slave system that can provide good maneuverability. We first analyze a one-degree-of-freedom system including operator and object dynamics. Second, some ideal responses of master-slave systems are defined and a quantitative index of maneuverability is given, based on the concept of ideal responses. Third, we propose new control schemes for master-slave manipulators that provide the ideal kinesthetic coupling such that the operator can maneuver the system as though he/she were directly manipulating the remote object. The proposed control scheme requires accurate dynamic models of the master and slave arms, but neither the parameters of the remote object nor the operator dynamics are needed. Finally, the proposed control scheme is applied to a prototype master-slave system and the experimental results show the validity of the proposed scheme.
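
The 'ideal responses' invoked above can be summarized compactly; the notation below (\(x_m, x_s\) for master and slave positions, \(f_{op}, f_s\) for the force applied by the operator and the force exerted on the remote object) is a hedged paraphrase rather than the paper's exact definitions. Ideal kinesthetic coupling requires, for arbitrary operator and object dynamics,

\[
x_s(t) = x_m(t) \quad \text{and} \quad f_{op}(t) = f_s(t),
\]

so that the slave tracks the master exactly and the operator feels exactly the force transmitted to the object, as if manipulating it directly; the maneuverability index described in the abstract quantifies deviation from this ideal.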

953 citations


Cites background from "Robust control of dynamically interacting systems"

  • ...Colgate et al. [5] showed that the necessary and sufficient condition for a system that may interact with any passive environment to be stable is that the system itself must be passive....


References
Book
01 Oct 1972
TL;DR: The authors provide an introduction to feedback control system design, offering a theoretical approach that captures the essential issues and can be applied to a wide range of practical problems.
Abstract: Linear Optimal Control Systems provides an introduction to feedback control system design, offering a theoretical approach that captures the essential issues and can be applied to a wide range of practical problems.

4,294 citations

Book
01 Jan 1983
TL;DR: The SYSKIT is a linear system software toolkit that contains a highly integrated set of programs with applications to system dynamics, controls and vibrations.
Abstract: From the Publisher: SYSKIT is a linear system software toolkit that contains a highly integrated set of programs with applications to system dynamics, controls, and vibrations. It includes file management, a time processor for eigenvalue and time response analysis, a conversion module for vibrations, a frequency-domain module for frequency response and root locus, and output display modules for time, frequency, and root-locus data.
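
SYSKIT itself is a 1983 package, but the analyses the blurb enumerates (eigenvalues, time response, frequency response, root locus) are easy to illustrate with modern tools. The sketch below uses scipy.signal purely as an illustration of the kind of computation such a toolkit automates; none of the names or modules shown belong to SYSKIT, and the example system is arbitrary.

import numpy as np
from scipy import signal

# Example state-space model: a lightly damped second-order system (illustrative only).
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
sys = signal.StateSpace(A, B, C, D)

# "Time processor": eigenvalues (poles) and step response.
poles = np.linalg.eigvals(A)
t, y = signal.step(sys)

# "Frequency domain module": Bode magnitude and phase data.
w, mag, phase = signal.bode(sys)

print("poles:", poles)
print("steady-state step value:", y[-1])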

435 citations

Dissertation
01 May 1981
TL;DR: A very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived; tests that explicitly exploit the structure of the model errors are able to guarantee feedback system stability in the face of model errors of larger magnitude.
Abstract: The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency-domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
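
The 'minimum singular value of the return difference' criterion mentioned above can be stated in one line; the notation (loop transfer matrix \(L(j\omega)\), return difference \(I + L\)) is standard and not quoted from the thesis. The margin

\[
\alpha \;=\; \min_{\omega}\ \sigma_{\min}\!\bigl(I + L(j\omega)\bigr)
\]

generalizes the distance from a SISO Nyquist locus to the critical point \(-1\): closed-loop stability is preserved for any additive loop perturbation \(E(j\omega)\) satisfying \(\bar{\sigma}\bigl(E(j\omega)\bigr) < \sigma_{\min}\bigl(I + L(j\omega)\bigr)\) at every frequency, provided the perturbation does not change the number of unstable open-loop poles. Tests that exploit the structure of \(E\) can certify larger perturbations, which is the point made in the abstract above.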

68 citations

Dissertation
01 Jan 1987
TL;DR: M.S. thesis, Massachusetts Institute of Technology, Department of Mechanical Engineering, 1987.
Abstract: Thesis. (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1987.

12 citations

01 May 1981
TL;DR: In this paper, the robustness of linear time invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria, and robustness tests are unified under a common framework based on the nature and structure of model errors.
Abstract: The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency-domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is analyzed.

9 citations