Author

G.L. Amicucci

Bio: G.L. Amicucci is an academic researcher from Sapienza University of Rome. The author has contributed to research in the topics of Lyapunov optimization and the Lyapunov exponent. The author has an h-index of 2 and has co-authored 2 publications receiving 43 citations.

Papers
Proceedings ArticleDOI
10 Dec 1997
TL;DR: Based on an integral Lyapunov inequality associated with discrete-time dynamics, some preliminary results on control Lyapunov design are established.
Abstract: Based on an integral Lyapunov inequality associated with discrete-time dynamics, some preliminary results on control Lyapunov design are established.

35 citations
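As background for the abstract above, the sketch below states the standard discrete-time control Lyapunov decrease condition. It is a generic formulation assumed here for illustration, not the specific integral Lyapunov inequality developed in the paper.

\[
x_{k+1} = f(x_k, u_k), \qquad \exists\, u_k : \; V\big(f(x_k, u_k)\big) - V(x_k) \le -\alpha\big(\|x_k\|\big),
\]

where \(V\) is positive definite and \(\alpha\) is a class-\(\mathcal{K}\) function; a control Lyapunov design then selects a feedback \(u_k = \kappa(x_k)\) that enforces this decrease at every step.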

Journal ArticleDOI
TL;DR: In this paper, it was shown that the steady-state property of the unobservable part of a given nonlinear dynamics is equivalent to the existence of a state detector (detectability).
Abstract: It is shown that the steady-state property of the unobservable part of a given nonlinear dynamics is equivalent, under some boundedness assumptions, to the existence of a state detector (detectability). This property is illustrated, discussed and related to the existence of partial observers; local and global aspects are considered.

8 citations
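For readers unfamiliar with the terminology, the following is a common textbook-style formulation of a state detector, assumed here for illustration; the paper's precise conditions (including the boundedness assumptions) are not reproduced.

\[
\dot{x} = f(x), \quad y = h(x), \qquad \dot{\hat{x}} = g(\hat{x}, y) \;\text{ with }\; \lim_{t \to \infty} \|x(t) - \hat{x}(t)\| = 0,
\]

i.e., detectability only requires an auxiliary system that asymptotically reconstructs the state from the measured output, even when the system is not observable.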


Cited by
Journal ArticleDOI
TL;DR: The authors propose a particle swarm optimization (PSO) algorithm for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG) and compare it against a trial-and-error selection of the control Lyapunov function matrix.
Abstract: In this paper, the authors propose a particle swarm optimization (PSO) algorithm for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law that achieves trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLF depends on a matrix selection in order to achieve the control objectives; this matrix is determined by two mechanisms: first, fixed parameters are proposed by a trial-and-error method, and then the matrix is tuned using the PSO algorithm. The inverse optimal control scheme is illustrated via simulations for the DFIG, including a comparison between both mechanisms.

77 citations
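The abstract describes tuning the CLF matrix with PSO. The sketch below shows the general shape of such a tuning loop under stated assumptions: the objective tracking_cost, the search bounds, and all hyper-parameters are hypothetical placeholders and are not taken from the paper.

```python
# Minimal PSO sketch for tuning the diagonal entries of a CLF matrix P = diag(p_diag).
# Illustrative reconstruction only; the plant model and cost are hypothetical.
import numpy as np

def tracking_cost(p_diag: np.ndarray) -> float:
    """Hypothetical objective: simulate the closed loop with P = diag(p_diag)
    and return the accumulated tracking error (toy stand-in below)."""
    return float(np.sum((p_diag - 3.0) ** 2))

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=0.1, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

if __name__ == "__main__":
    best_diag = pso(tracking_cost, dim=4)
    print("best CLF diagonal:", best_diag)
```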

Proceedings ArticleDOI
01 Dec 2010
TL;DR: An inverse optimal control approach for output tracking of discrete-time nonlinear systems is presented that avoids solving the associated Hamilton-Jacobi-Bellman (HJB) equation while minimizing a meaningful cost function.
Abstract: This paper presents an inverse optimal control approach for output tracking of discrete-time nonlinear systems that avoids solving the associated Hamilton-Jacobi-Bellman (HJB) equation while minimizing a meaningful cost function. This stabilizing optimal controller is based on discrete-time passivity theory. The applicability of the proposed approach is illustrated via simulations of trajectory tracking control of a planar robot.

45 citations
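To make the passivity-based idea concrete, here is a minimal sketch of the general inverse-optimality argument, assuming a discrete-time passive system with storage function V; this is a generic outline, not the paper's exact construction.

\[
V(x_{k+1}) - V(x_k) \le y_k^{\top} u_k
\quad\Longrightarrow\quad
u_k = -K y_k, \quad K \succ 0,
\]

stabilizes the origin under a zero-state detectability assumption, and the same feedback can be shown a posteriori to minimize a meaningful cost of the form

\[
J = \sum_{k=0}^{\infty} \big( l(x_k) + u_k^{\top} R\, u_k \big), \qquad l(x) \ge 0,\; R \succ 0,
\]

without solving the HJB equation.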

Journal ArticleDOI
TL;DR: Simulation results show that control-based techniques can reduce the amount of medication while simultaneously reaching the efficacy levels of the treatment schedules recommended by the Food and Drug Administration.
Abstract: Influenza A virus infections are a cause of severe illness and can result in high mortality. Neuraminidase inhibitors such as zanamivir and oseltamivir are used to treat influenza; however, treatment recommendations remain debatable. In this paper, a discrete-time inverse optimal impulsive control scheme based on passivation is proposed to address the antiviral treatment scheduling problem. We adapt results regarding stability, passivity, and optimality for the impulsive action. The study is founded on mathematical models whose parameters are adjusted to data from clinical trials in which participants were experimentally infected with influenza H1N1 and treated with either zanamivir or oseltamivir. Simulation results show that control-based techniques can reduce the amount of medication while simultaneously reaching the efficacy levels of the treatment schedules recommended by the Food and Drug Administration. Monte Carlo simulations demonstrate the robustness of the proposed control-based techniques.

40 citations
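To illustrate the kind of model such a scheme acts on, the sketch below simulates a standard target-cell-limited influenza model with impulsive antiviral doses that reduce viral production. All parameter values, the efficacy map, and the dosing schedule are hypothetical placeholders, not the clinical-trial fits or the optimal schedule reported in the paper.

```python
# Toy within-host influenza model (target cells T, infected cells I, virus V)
# with impulsive drug doses; forward-Euler integration, illustrative numbers only.
import numpy as np

beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0   # infection, death, production, clearance rates
k_elim = 0.5                                    # first-order drug elimination rate
dt, days = 1e-3, 10.0

def efficacy(drug):
    """Hypothetical saturating map from drug level to inhibition efficacy in [0, 1)."""
    return drug / (drug + 1.0)

T, I, V, drug = 4e8, 0.0, 10.0, 0.0
dose_steps = {round(tau / dt) for tau in (1.0, 1.5, 2.0, 2.5, 3.0)}  # toy dosing instants (days)

for k in range(round(days / dt)):
    if k in dose_steps:                          # impulsive control action: administer a dose
        drug += 1.0
    eps = efficacy(drug)
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = (1.0 - eps) * p * I - c * V             # inhibitor scales down viral production
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    drug += dt * (-k_elim * drug)

print(f"viral load after {days:.0f} days: {V:.3g}")
```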

Proceedings ArticleDOI
09 Dec 2003
TL;DR: It is shown how standard iterative methods for solving linear and nonlinear equations can be approached from the point of view of control, leading to both continuous and discrete-time versions of the well-known Newton-Raphson and conjugate gradient algorithms as well as their common variants.
Abstract: It is shown how standard iterative methods for solving linear and nonlinear equations can be approached from the point of view of control. Appropriate choices of control Lyapunov functions lead to both continuous and discrete-time versions of the well-known Newton-Raphson and conjugate gradient algorithms as well as their common variants. Insights into these algorithms that result from the control approach are discussed.

39 citations
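A small sketch of the viewpoint summarized above: treat the update x_{k+1} = x_k + u_k as a discrete-time control system, use V(x) = ||F(x)||^2 / 2 as a control Lyapunov function, and choose the (damped) Newton step as the control that decreases V. The nonlinear system F below is a hypothetical example, not one from the paper.

```python
# Newton-Raphson seen as control Lyapunov design: the damped Newton step is the
# control input that makes V(x) = 0.5 * ||F(x)||^2 decrease along the iteration.
import numpy as np

def F(x):
    # Hypothetical test system: unit circle intersected with the line x0 = x1.
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])

def jacobian(x):
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

def V(x):
    r = F(x)
    return 0.5 * float(r @ r)

x = np.array([2.0, 0.5])
for k in range(50):
    u = -np.linalg.solve(jacobian(x), F(x))          # Newton direction as control input
    step = 1.0
    while V(x + step * u) >= V(x) and step > 1e-8:   # damping keeps the CLF decreasing
        step *= 0.5
    x = x + step * u
    if V(x) < 1e-16:
        break

print("solution:", x, "residual:", F(x))
```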

Proceedings ArticleDOI
13 Oct 2011
TL;DR: An inverse optimal control approach for exponential stabilization of discrete-time nonlinear systems is presented that avoids solving the associated Hamilton-Jacobi-Bellman (HJB) equation while minimizing a meaningful cost function.
Abstract: This paper presents an inverse optimal control approach for exponential stabilization of discrete-time nonlinear systems that avoids solving the associated Hamilton-Jacobi-Bellman (HJB) equation while minimizing a meaningful cost function. This stabilizing optimal controller is based on a discrete-time control Lyapunov function. The applicability of the proposed approach is illustrated via simulations by stabilizing an example system.

34 citations
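For context, the following is one widely used discrete-time inverse optimal control construction based on a quadratic control Lyapunov function candidate; it is given as a generic sketch and may differ in detail from the paper's formulation. For x_{k+1} = f(x_k) + g(x_k) u_k and the candidate V(x) = x^T P x / 2 with P positive definite,

\[
u_k^{*} = -\tfrac{1}{2}\Big( R + \tfrac{1}{2}\, g(x_k)^{\top} P\, g(x_k) \Big)^{-1} g(x_k)^{\top} P\, f(x_k),
\]

and, for a suitable choice of P, this feedback stabilizes the origin and can be shown to be inverse optimal with respect to a meaningful cost, again without solving the HJB equation.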