scispace - formally typeset

Showing papers on "Sliding mode control published in 1980"


Journal ArticleDOI
TL;DR: In this article, the authors present a technique which adopts the idea of "inverse problem" and extends the results of "resolved-motion-rate" controls, which deals directly with the position and orientation of the hand.
Abstract: Position control of a manipulator involves the practical problem of solving for the correct input torques to apply to the joints for a set of specified positions, velocities, and accelerations. Since the manipulator is a nonlinear system whose joints are highly coupled, it is very difficult to control. This paper presents a technique which adopts the idea of "inverse problem" and extends the results of "resolved-motion-rate" controls. The method deals directly with the position and orientation of the hand. It differs from others in that accelerations are specified and that all the feedback control is done at the hand level. The control algorithm is shown to be asymptotically convergent. A PDP 11/45 computer is used as part of a controller which computes the input torques/forces at each sampling period for the control system using the Newton-Euler formulation of equations of motion. The program is written in floating point assembly language, and has an average execution time of less than 11.5 ms for a Stanford manipulator. This makes a sampling frequency of 87 Hz possible. The controller is verified by an example which includes a simulated manipulator.
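The hand-level, resolved-acceleration idea above can be sketched for a single hypothetical degree of freedom (the dynamics, gains, and function names below are illustrative assumptions, not the paper's Stanford-manipulator implementation):

```python
import numpy as np

# Minimal sketch of resolved-acceleration control: feedback is closed at the
# hand level by specifying a desired acceleration, which inverse dynamics
# (Newton-Euler in the paper) maps to an input torque. The 1-DOF dynamics and
# the gains kp, kv are assumptions for illustration only.

def inverse_dynamics(q, qd, qdd_cmd):
    """Hypothetical 1-DOF dynamics: tau = M(q)*qdd + h(q, qd)."""
    M = 2.0 + np.cos(q)                  # configuration-dependent inertia
    h = 0.5 * qd + 9.81 * np.sin(q)      # damping and gravity terms
    return M * qdd_cmd + h

def resolved_accel_torque(q, qd, q_des, qd_des, qdd_des, kp=100.0, kv=20.0):
    # Commanded acceleration: feedforward plus PD correction on the error,
    # so the closed-loop error obeys e'' + kv*e' + kp*e = 0 (convergent).
    qdd_cmd = qdd_des + kv * (qd_des - qd) + kp * (q_des - q)
    return inverse_dynamics(q, qd, qdd_cmd)

# One control step, at a sampling period comparable to the paper's 11.5 ms.
tau = resolved_accel_torque(q=0.1, qd=0.0, q_des=0.0, qd_des=0.0, qdd_des=0.0)
```

With an exact dynamic model, the computed torque cancels the nonlinear terms and the hand error converges asymptotically, which is the convergence property the paper proves.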

1,231 citations


Journal ArticleDOI
TL;DR: The control laws of this paper are perhaps the easiest way to stabilize a linear system with delay in the control.
Abstract: Feedback controls based on the receding horizon method have proven to be a useful and easy tool in stabilizing linear ordinary differential systems. In this paper the receding horizon method is applied to linear systems with delay in the control. An open-loop optimal control which minimizes control energy subject to certain side constraints is first derived and then transformed to a closed-loop control via the receding horizon concept. The resulting feedback system is shown to be asymptotically stable under a complete controllability condition. It is also shown how the receding horizon control suggests a more general class of stabilizing feedback control laws. The control laws of this paper are perhaps the easiest way to stabilize a linear system with delay in the control.
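The step from an open-loop control to a stabilizing closed-loop law for a delayed input can be illustrated with a discrete-time, predictor-style feedback (a sketch only: the scalar system, deadbeat gain, and delay length are assumptions, not the paper's receding-horizon construction):

```python
from collections import deque

# Stabilizing x[k+1] = a*x[k] + b*u[k-d] by feeding back a prediction of the
# state d steps ahead, computed from the current state and the d inputs
# already in transit. All numerical values are illustrative.

a, b, d = 1.2, 1.0, 3          # unstable pole, input delay of d steps
K = a / b                      # deadbeat gain for the delay-free system

def predictor_feedback(x, u_hist):
    """u_hist holds the last d inputs u[k-d], ..., u[k-1]."""
    x_pred = (a ** d) * x
    for j, u in enumerate(u_hist):             # u = u[k-d+j]
        x_pred += (a ** (d - 1 - j)) * b * u
    return -K * x_pred

x = 1.0
u_hist = deque([0.0] * d, maxlen=d)
for k in range(40):
    u = predictor_feedback(x, u_hist)
    x = a * x + b * u_hist[0]  # the input applied now was chosen d steps ago
    u_hist.append(u)           # maxlen drops the oldest input automatically
```

Because the prediction is exact for this linear model, the feedback places the delayed loop's dynamics at the delay-free design, and the state is driven to zero despite the unstable open-loop pole.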

463 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of stabilizing a nonlinear control system by means of a feedback control law, in cases where the entire state of the system is not available for measurement.
Abstract: In this paper, we study the problem of stabilizing a nonlinear control system by means of a feedback control law, in cases where the entire state of the system is not available for measurement. The proposed method of stabilization consists of three parts: 1) determine a stabilizing control law based on state feedback, assuming the state vector x(t) can be measured; 2) construct a state detection mechanism, which generates a vector z(t) such that z(t) − x(t) → 0 as t → ∞; and 3) apply the previously determined control law to z(t). This scheme is well established for linear time-invariant systems, and its global convergence has previously been studied in the case of nonlinear systems. Hence, the contribution of this paper is in showing that such a scheme works in the absence of any linearity assumptions, and in studying both local asymptotic stability and global asymptotic stability.
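For the linear time-invariant case that the paper generalizes, the three-part scheme can be sketched directly (the double-integrator plant, gains, and step size below are illustrative assumptions):

```python
import numpy as np

# Sketch of the three-part scheme in the LTI case: (1) a state-feedback gain
# K designed as if x were measurable, (2) an observer producing z(t) with
# z - x -> 0, and (3) the control law applied to z instead of x.

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # only position is measured
K = np.array([[1.0, 2.0]])               # A - B K has eigenvalues -1, -1
L = np.array([[4.0], [4.0]])             # A - L C has eigenvalues -2, -2

dt = 0.001
x = np.array([[1.0], [0.0]])             # true state (not measurable)
z = np.zeros((2, 1))                     # observer state
for _ in range(20000):                   # 20 s of simulated time
    u = -K @ z                           # certainty-equivalence control
    y = C @ x                            # available measurement
    x = x + dt * (A @ x + B @ u)         # forward-Euler plant update
    z = z + dt * (A @ z + B @ u + L @ (y - C @ z))  # observer update
```

The observation error obeys its own stable dynamics, so z − x → 0 and the certainty-equivalence loop inherits stability, mirroring the scheme's logic in the nonlinear setting studied here.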

203 citations


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this paper, it was shown that it is impossible to stabilize a controllable system by means of a continuous feedback, even if memory is allowed, and that continuous stabilization with memory is always possible.
Abstract: We show that, in general, it is impossible to stabilize a controllable system by means of a continuous feedback, even if memory is allowed. No optimality considerations are involved. All state spaces are Euclidean spaces, so no obstructions arising from the state space topology are involved either. For one dimensional state and input, we prove that continuous stabilization with memory is always possible.

157 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm for the solution of optimal control problems with constraints on the control, but without constraints on the trajectory or the terminal state, is presented, where reduction of the cost at each iteration is guaranteed.
Abstract: This paper presents an algorithm for the solution of optimal control problems with constraints on the control, but without constraints on the trajectory or the terminal state. In this algorithm, reduction of the cost at each iteration is guaranteed. Global convergence conditions for the algorithm are investigated, and an example is worked out.
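A cost-reducing iteration of this flavor, for a control-constrained problem with free trajectory and terminal state, can be sketched with projected gradient steps (the toy scalar dynamics, horizon, weights, and step size are assumptions; this is not the paper's algorithm):

```python
import numpy as np

# Minimize sum_k x[k+1]^2 + r*u[k]^2 subject to x[k+1] = x[k] + u[k] and the
# control constraint |u[k]| <= 1. Each iteration takes a gradient step and
# projects back onto the constraint set; with a small enough step size, the
# cost is non-increasing at every iteration.

N, r, x0 = 10, 0.1, 5.0

def rollout(u):
    x, xs = x0, []
    for uk in u:
        x = x + uk                 # scalar dynamics x[k+1] = x[k] + u[k]
        xs.append(x)
    return np.array(xs)

def cost(u):
    xs = rollout(u)
    return float(np.sum(xs ** 2) + r * np.sum(u ** 2))

def gradient(u):
    xs = rollout(u)
    # dJ/du_k = 2*r*u_k + 2 * sum_{j >= k} x[j+1], since u_k shifts every
    # later state by the same amount in this chain.
    tail = np.cumsum(xs[::-1])[::-1]
    return 2 * r * u + 2 * tail

u = np.zeros(N)
costs = [cost(u)]
for _ in range(400):
    u = np.clip(u - 0.005 * gradient(u), -1.0, 1.0)  # project onto |u| <= 1
    costs.append(cost(u))
```

The projection handles the control constraint exactly, while the absence of state and terminal constraints is what makes the plain gradient step sufficient, which matches the problem class of the paper.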

122 citations


Journal ArticleDOI
TL;DR: In this paper, an algebraic separation property of linear state feedback control and adaptive state observation is established based on a novel adaptive observer, which does not require signal boundedness in its stability proof.
Abstract: Based on a novel adaptive observer, which does not require signal boundedness in its stability proof, an algebraic separation property of linear state feedback control and adaptive state observation is established. This means that whenever a linear, stabilizing state feedback control law is realized with the state replaced by the state estimate of the given stable adaptive observer, the resulting nonlinear control system is also globally asymptotically Lyapunov stable with respect to the initial state and parameter observation error of the adaptive observer. In particular, no assumptions are made on the system dynamics or on the speed of the adaptation.

20 citations


Journal ArticleDOI
TL;DR: An unstable digital simulation of a closed-loop control system employing a deterministic observer that is stable in continuous time is presented, and the two approaches to the design of a digital control system are compared by using a second-order dynamic system.
Abstract: Digital control of a continuous-time system implies discretization in time of the system. The discretization is likely to bring about changes in the dynamic characteristics of the continuous-time system. Two approaches to the design of a digital control system can be considered: 1) discretization of a previously designed continuous-time control system and 2) direct design in the discrete-time domain. The difference between these two approaches and their effects on a closed-loop control system employing a Luenberger-type observer is demonstrated. The first approach consists of simulating a continuous-time control system on a digital computer. Because the possibility exists that a given set of parameters of the continuous-time system, such as gains and observer poles, will produce a change in the dynamic characteristics of the system during digital simulation, the question is how large the sampling time can be without adversely affecting the discrete simulation. An example of an unstable digital simulation of a closed-loop control system employing a deterministic observer that is stable in continuous time is presented. The second approach inherently guards against changes in the dynamic characteristics that may be caused by discretization. The two approaches are compared by using a second-order dynamic system.
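The core effect, a stable continuous-time design becoming unstable once discretized with too large a sampling time, can be shown with a first-order toy example (the paper demonstrates it on an observer-based second-order loop; the system and sampling times below are assumptions):

```python
# Forward-Euler discretization of the stable system dx/dt = -a*x gives
# x[k+1] = (1 - a*T)*x[k], which is stable only when the sampling time
# satisfies T < 2/a. Beyond that bound the discrete simulation diverges
# even though the continuous-time system it approximates is stable.

def simulate(a, T, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        x = (1.0 - a * T) * x    # forward-Euler update
    return x

a = 10.0                          # continuous pole at s = -10 (stable)
small_T = simulate(a, T=0.05, steps=100)   # T < 2/a = 0.2: decays
large_T = simulate(a, T=0.25, steps=100)   # T > 2/a: simulation diverges
```

This is the first design approach's hazard in miniature: the same gains that are fine in continuous time impose a hard upper limit on the admissible sampling time, which a direct discrete-time design avoids by construction.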

7 citations


Journal ArticleDOI
J. Casti1
TL;DR: In this paper, an approach to the problem of determining bifurcation-free optimal control laws using the theory of catastrophes is presented, under the assumption that the linearized system dynamics in the neighborhood of the equilibrium are of gradient type.
Abstract: An approach to the problem of determining bifurcation-free optimal control laws is presented using the theory of catastrophes. Under the assumption that the linearized system dynamics in the neighborhood of the equilibrium are of gradient type, conditions are given to ensure that a linear feedback law simultaneously minimizes a quadratic objective and generates a bifurcation-free trajectory. Explicit results are presented for the case of two system inputs (the cusp catastrophe). Extensions to the case of nongradient dynamics and/or nonquadratic costs are also discussed.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the parameter adaptive control problem of linear stochastic distributed-parameter systems is considered and treated by invoking the partitioning/non-linear separation approach, both continuous-time and discrete-time systems are studied, with volume and/or boundary control and disturbance inputs.
Abstract: The parameter adaptive control problem of linear stochastic distributed-parameter systems is considered and treated by invoking the partitioning/non-linear separation approach. Both continuous-time and discrete-time systems are studied, with volume and/or boundary control and disturbance inputs. The following cases are considered: spatially-continuous and point-wise measurements/control actions, continuous and quantized parameter space, non-gaussian initial state, uncertainty in continuous and point-wise measurements, cost-function with spatial-derivative penalties, systems with pure time delays, composite distributed- and lumped-parameter systems, scanning-type control, and simultaneous adaptive system and measurement control. The uncertain parameters are assumed to be time-and-space independent, but the spatial independence can be alleviated as will be shown elsewhere.

7 citations


Journal ArticleDOI
TL;DR: An algorithm for adjusting the real system outputs to given desired values, where the system is assumed to be described by an approximate mathematical model, intended for the optimizing control layer of the hierarchical (multilayer) control structure of the system.

6 citations


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this article, the authors considered the class of nonlinear, nonautonomous control systems and gave a sufficient condition for this property to be preserved under small perturbations of the control system.
Abstract: In the class of nonlinear, nonautonomous control systems we consider the property of controllability to a compact set on a fixed time interval, and we give a sufficient condition for this property to be preserved under small perturbations of the control system.

Journal ArticleDOI
TL;DR: In this article, it was shown that the so-called optimal-aim control strategy might destabilize a controllable linear time-invariant system and raise a serious question about the efficacy of this strategy when applied to a more complicated nonlinear power system.
Abstract: It is shown that the so-called optimal-aim control strategy [1] might destabilize a controllable linear time-invariant system. This raises a serious question about the efficacy of this strategy when applied to a more complicated nonlinear power system.

Journal ArticleDOI
TL;DR: In this paper, the authors describe how variation in the control effort affects the topological structure of the switching curves for the time-optimal control problem and show that continuous variation of α causes a sequence of abrupt changes in the connectivity of the switch locus.
Abstract: This paper describes how variation in the control effort affects the topological structure of the switching curves for the time-optimal control problem $$\ddot x + a^2 x - x^3 r(x) = u, \quad \left| u \right| \leqslant \alpha,$$ where r is nonzero and satisfies r(x) ≥ 0, and where α is the variable control restraint. It is shown that continuous variation of α causes a sequence of abrupt changes in the connectivity of the switching locus. When r is convex, an upper bound is found for the control threshold above which the structure of the optimal feedback control synthesis remains stable, and this bound is related to a physical constant of the system: the maximum amplitude of the restoring force.

Journal ArticleDOI
TL;DR: In this article, it is shown that the set of two-dimensional linear control systems with a convex polyhedron as control domain, which exhibit such paradoxical behavior (completely characterized by Brunovský), has a nonempty interior.
Abstract: When dealing with the time-optimal problem for linear control systems, there may be a difference between optimal open-loop control and corresponding synthesized feedback control, since in the latter case one is led to allow for generalized (Filippov) solutions. In this note, it is shown that the set of two-dimensional linear control systems with a convex polyhedron as control domain, which exhibit such paradoxical behavior (completely characterized by Brunovský), has a nonempty interior, in a natural and appropriately defined topology on the space of all such linear control systems.