Author

Maarten Steinbuch

Bio: Maarten Steinbuch is an academic researcher from Eindhoven University of Technology. The author has contributed to research in the topics Control theory & Feed forward. The author has an h-index of 51 and has co-authored 630 publications receiving 11,892 citations. Previous affiliations of Maarten Steinbuch include Nanyang Technological University & Delft University of Technology.


Papers
Proceedings ArticleDOI
09 Jul 2007
TL;DR: Cogging-compensating piecewise ILC (CCPILC) is proposed, which enables one learned feedforward signal to be used for different setpoints and start positions without loss of performance.
Abstract: Iterative learning control (ILC) is an effective control technique for motion systems that repetitively perform the same trajectory (setpoint). The result of the learning procedure is a feedforward signal that perfectly compensates all deterministic dynamics in the system for the learned setpoint performed at a specific start position. For other setpoints and start positions, the learned feedforward signal will not be perfect, because the learned deterministic dynamics are setpoint- and position-dependent. In this paper, cogging-compensating piecewise ILC (CCPILC) is proposed, which enables one learned feedforward signal to be used for different setpoints and start positions without losing performance. The learned feedforward signal is therefore decomposed into a setpoint-dependent and a position-dependent part, such that both parts can be adapted individually according to the desired change in setpoint and/or start position.

11 citations
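
As a rough illustration of the learning step that schemes such as CCPILC build on, below is a minimal sketch of a plain (non-piecewise) ILC trial loop; the first-order plant, learning gain, and setpoint are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

# Minimal (non-piecewise) ILC trial loop: after each repetition of the same
# setpoint, the measured tracking error updates a feedforward signal.
# The first-order plant, learning gain, and setpoint are hypothetical.

def tracking_error(u, r):
    """Simulate a simple discrete first-order plant driven by feedforward u
    and return the tracking error e = r - y."""
    y = np.zeros_like(r)
    for k in range(1, len(r)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
    return r - y

r = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))   # repeated setpoint (trajectory)
u = np.zeros_like(r)                              # learned feedforward signal
GAMMA = 0.5                                       # learning gain (assumed)

for trial in range(50):
    e = tracking_error(u, r)
    # ILC update with a one-sample shift to account for the unit plant delay:
    # u_{j+1}[k] = u_j[k] + gamma * e_j[k+1]
    u[:-1] += GAMMA * e[1:]

print("RMS tracking error after learning:", np.sqrt(np.mean(tracking_error(u, r) ** 2)))
```

In CCPILC the learned signal u would additionally be split into a setpoint-dependent and a position-dependent (cogging) part, so that each part can be shifted or rescaled when the setpoint or start position changes.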

Proceedings ArticleDOI
27 Nov 2007
TL;DR: A new drive principle was developed that results in overlapping tip trajectories of the drive legs and a continuous and smooth motion of the stage, improving the performance of the piezoelectric actuator.
Abstract: Piezoelectric actuators are commonly used for positioning high-precision stages. Increasing demands on speed and accuracy call for new actuators and drive principles. The walking piezo actuator used here is a non-resonant piezoelectric actuator that drives a stage through four piezoelectric drive legs. In this paper, the piezo actuator is used to drive a stage with one degree of freedom. A model of the stage and the piezo actuator was made to develop and test a feedback control strategy. A new drive principle was developed, which results in overlapping tip trajectories of the drive legs and a continuous and smooth motion of the stage. Feedforward control and gain scheduling further improve the performance. Constant motion up to 2 mm/s is performed with a tracking error below 400 nm. Point-to-point movements from 5 nm up to the complete stroke of the stage are performed with a final static error below the encoder resolution.

11 citations
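
A minimal sketch of the general feedback-plus-feedforward idea for such a positioning stage is given below; the mass-damper stage model, PI gains, feedforward term, and gain-scheduling rule are illustrative assumptions, not the controller of the paper.

```python
import numpy as np

# Hypothetical 1-DOF stage modeled as a mass-damper, controlled by discrete PI
# feedback on the position error plus a velocity feedforward term; a crude gain
# schedule raises the feedback gain at (near-)zero commanded velocity, e.g.
# during point-to-point settling. All numbers are illustrative.

DT, MASS, DAMPING = 1e-4, 0.2, 5.0            # s, kg, N*s/m (assumed)
KP_BASE, KI, KFF = 400.0, 2000.0, DAMPING     # feedback gains and feedforward gain

def track(ref_pos, ref_vel):
    pos = vel = integ = 0.0
    errors = []
    for r, rdot in zip(ref_pos, ref_vel):
        e = r - pos
        integ += e * DT
        kp = KP_BASE * (2.0 if abs(rdot) < 1e-6 else 1.0)   # gain scheduling
        force = kp * e + KI * integ + KFF * rdot            # PI feedback + feedforward
        acc = (force - DAMPING * vel) / MASS
        vel += acc * DT
        pos += vel * DT
        errors.append(e)
    return np.array(errors)

t = np.arange(0.0, 1.0, DT)
ref_vel = np.full_like(t, 2e-3)               # constant 2 mm/s scanning motion
ref_pos = np.cumsum(ref_vel) * DT
print("max tracking error [m]:", np.abs(track(ref_pos, ref_vel)).max())
```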

Journal ArticleDOI
TL;DR: The design of an energy controller for a mechanical hybrid powertrain that is suitable for implementation in real-time hardware and is transparent, causal, and robust, the latter shown by simulations for various driving cycles and start conditions.
Abstract: This brief presents the design of an energy controller for a mechanical hybrid powertrain, which is suitable for implementation in real-time hardware. The mechanical hybrid powertrain uses a compact flywheel module to add hybrid functionalities to a conventional powertrain that consists of an internal combustion engine and a continuously variable transmission. The control objective is to minimize the overall fuel consumption for a given driving cycle. The design approach follows a generic framework to: 1) solve the optimization problem using optimal control; 2) make the optimal controller causal using a prediction of the future driving conditions; and 3) make the causal controller robust by tuning of one key calibration parameter. The highly constrained optimization problem is solved with dynamic programming. The future driving conditions are predicted using a model that smoothly approximates statistical data, and implemented in the receding model predictive control framework. The controller is made tunable by rule extraction from the model predictive controller, based on physical understanding of the system. The resulting real-time controller is transparent, causal, and robust, where the latter is shown by simulations for various driving cycles and start conditions.

11 citations
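
A toy sketch of the backward dynamic-programming step used in such energy-management designs is given below; the fuel-rate map, power-request profile, grids, and limits are hypothetical placeholders, not the models of the brief.

```python
import numpy as np

# Toy backward dynamic program for an energy-management problem: at each time
# step, choose how much of the requested drive power the flywheel supplies (or
# absorbs) so that total fuel over the cycle is minimized. The fuel model,
# power request profile, grids, and limits below are hypothetical placeholders.

DT = 1.0                                                           # s
P_req = np.concatenate([np.full(30, 20e3), np.full(20, -15e3), np.full(30, 10e3)])  # W
E_grid = np.linspace(0.0, 400e3, 81)       # flywheel energy grid [J]
P_fly = np.linspace(-30e3, 30e3, 61)       # candidate flywheel powers [W], + = charging

def fuel_rate(p_engine):
    """Hypothetical convex fuel-rate map [g/s]; negative engine power means idling."""
    p = np.maximum(p_engine, 0.0)
    return 0.2 + 6e-5 * p + 4e-10 * p ** 2

# Backward recursion: J[k, i] = min over u of stage fuel + interpolated cost-to-go
J = np.zeros((len(P_req) + 1, len(E_grid)))
for k in range(len(P_req) - 1, -1, -1):
    for i, E in enumerate(E_grid):
        E_next = E + P_fly * DT                        # flywheel charges (+) / discharges (-)
        feasible = (E_next >= E_grid[0]) & (E_next <= E_grid[-1])
        p_engine = P_req[k] + P_fly                    # power not covered by the flywheel
        cost = fuel_rate(p_engine) * DT
        cost += np.interp(E_next, E_grid, J[k + 1])    # linearly interpolated cost-to-go
        cost[~feasible] = np.inf
        J[k, i] = cost.min()

print("minimum fuel over the cycle, starting with an empty flywheel [g]:", J[0, 0])
```

In the brief, the solution of such a highly constrained optimization is only the first step; prediction of future driving conditions and rule extraction then turn it into a causal, tunable real-time controller.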

01 Jan 2010
TL;DR: The approach includes the design of a control strategy for the variator, which enables regenerative braking, and the experiments show that the CVT can be used very efficiently in the proposed hybrid drive train.
Abstract: This paper describes the approach and the results of efficiency experiments on a push-belt Continuously Variable Transmission (CVT) in a new hybrid drive train. The hybrid drive train uses the push-belt CVT to “charge” a flywheel with the kinetic energy of the vehicle during regenerative braking and to “discharge” it during flywheel driving. The experiments are performed on a test rig with two electric machines following prescribed speed and torque trajectories, representing the flywheel and vehicle load. The approach includes the design of a control strategy for the variator, which enables regenerative braking. The experiments show that the CVT can be used very efficiently in the proposed hybrid drive train.

10 citations
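
A back-of-the-envelope sketch of the energy bookkeeping behind such a flywheel hybrid is given below; the vehicle mass, speeds, flywheel inertia, and CVT efficiency are illustrative assumptions, not values from the paper.

```python
import math

# Back-of-the-envelope energy bookkeeping for flywheel regenerative braking.
# Vehicle mass, speeds, flywheel inertia, and CVT efficiency are assumed,
# illustrative values, not figures from the paper.

M_VEHICLE = 1200.0                   # kg
V_START, V_END = 50 / 3.6, 20 / 3.6  # braking from 50 km/h to 20 km/h [m/s]
ETA_CVT = 0.90                       # assumed push-belt CVT efficiency
J_FLY = 0.03                         # flywheel inertia [kg*m^2]
W_FLY0 = 2000.0                      # initial flywheel speed [rad/s]

# Kinetic energy released by the vehicle during the braking event
dE_vehicle = 0.5 * M_VEHICLE * (V_START ** 2 - V_END ** 2)

# Energy actually stored in the flywheel after CVT losses
dE_fly = ETA_CVT * dE_vehicle
w_fly1 = math.sqrt(W_FLY0 ** 2 + 2.0 * dE_fly / J_FLY)

print(f"vehicle kinetic energy released: {dE_vehicle / 1e3:.1f} kJ")
print(f"stored in the flywheel:          {dE_fly / 1e3:.1f} kJ")
print(f"flywheel speed: {W_FLY0 * 60 / (2 * math.pi):.0f} -> {w_fly1 * 60 / (2 * math.pi):.0f} rpm")
```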

Proceedings ArticleDOI
28 Oct 2010
TL;DR: A data-based frequency-domain optimal control synthesis method is proposed, in which the underlying cost function is selected from a data-based symmetric root locus that gives insight into the closed-loop pole locations that will be achieved by the controller.
Abstract: This paper describes a data-based frequency-domain optimal control synthesis method. Plant frequency response data is used to compute the frequency response of the controller using a spectral decomposition of the optimal return difference. The underlying cost function is selected from a data-based symmetric root locus, which gives insight into the closed-loop pole locations that will be achieved by the controller. A simulation study demonstrates the capabilities of the proposed method.

10 citations
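
For context, the classical (model-based) symmetric root locus places the optimal closed-loop poles at the left-half-plane roots of 1 + q*G(s)*G(-s) = 0 as the weight q varies; a minimal sketch for a hypothetical second-order plant is given below, whereas the paper constructs this locus directly from measured frequency response data.

```python
import numpy as np

# Symmetric root locus sketch: in LQ optimal control with a scalar weight q,
# the optimal closed-loop poles are the left-half-plane roots of
#   1 + q * G(s) * G(-s) = 0.
# The plant G(s) = 1 / (s^2 + 0.4 s + 1) is a hypothetical model-based example.

num = np.array([1.0])               # numerator coefficients of G(s)
den = np.array([1.0, 0.4, 1.0])     # denominator coefficients of G(s)

def mirror(p):
    """Coefficients of p(-s) given the coefficients of p(s) (highest power first)."""
    powers = np.arange(len(p) - 1, -1, -1)       # power of s for each coefficient
    return p * (-1.0) ** powers

dd = np.polymul(den, mirror(den))   # den(s) * den(-s)
nn = np.polymul(num, mirror(num))   # num(s) * num(-s)

for q in (0.1, 1.0, 10.0, 100.0):
    char = np.polyadd(dd, q * nn)                # den(s)den(-s) + q num(s)num(-s)
    lhp = [r for r in np.roots(char) if r.real < 0]
    print(f"q = {q:6.1f}  optimal closed-loop poles:", np.round(sorted(lhp, key=lambda z: z.imag), 3))
```

Sweeping q shows how the optimal pole pair moves deeper into the left-half plane as control effort is weighted less, which is exactly the kind of insight the data-based locus in the paper is meant to provide.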


Cited by
Book
05 Oct 1997
TL;DR: In this book, the authors cover linear systems, H2 and H∞ spaces, algebraic Riccati equations, balanced model reduction, H∞ loop shaping, and controller reduction.
Abstract: 1. Introduction. 2. Linear Algebra. 3. Linear Systems. 4. H2 and H∞ Spaces. 5. Internal Stability. 6. Performance Specifications and Limitations. 7. Balanced Model Reduction. 8. Uncertainty and Robustness. 9. Linear Fractional Transformation. 10. μ and μ-Synthesis. 11. Controller Parameterization. 12. Algebraic Riccati Equations. 13. H2 Optimal Control. 14. H∞ Control. 15. Controller Reduction. 16. H∞ Loop Shaping. 17. Gap Metric and ν-Gap Metric. 18. Miscellaneous Topics. Bibliography. Index.

3,471 citations
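
The algebraic Riccati equation machinery of chapters 12-13 above is what standard H2/LQR state-feedback synthesis reduces to numerically; a minimal sketch using SciPy is given below, where the double-integrator plant and the weights are a hypothetical example, not taken from the book.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Minimal LQR/H2 state-feedback sketch built on the continuous-time algebraic
# Riccati equation A'X + XA - XBR^{-1}B'X + Q = 0. The double-integrator plant
# and the weighting matrices below are hypothetical.

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[1.0]])        # input weighting

X = solve_continuous_are(A, B, Q, R)       # stabilizing ARE solution
K = np.linalg.solve(R, B.T @ X)            # optimal gain, u = -K x
closed_loop_poles = np.linalg.eigvals(A - B @ K)

print("Riccati solution X =\n", X)
print("state-feedback gain K =", K)
print("closed-loop poles =", closed_loop_poles)
```

SciPy returns the stabilizing solution, so the resulting A - BK is guaranteed to be Hurwitz.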

Journal ArticleDOI
TL;DR: In this paper, a review of electrical energy storage technologies for stationary applications is presented, with particular attention paid to pumped hydroelectric storage, compressed air energy storage, battery, flow battery, fuel cell, solar fuel, superconducting magnetic energy storage and thermal energy storage.
Abstract: Electrical energy storage technologies for stationary applications are reviewed. Particular attention is paid to pumped hydroelectric storage, compressed air energy storage, battery, flow battery, fuel cell, solar fuel, superconducting magnetic energy storage, flywheel, capacitor/supercapacitor, and thermal energy storage. Comparison is made among these technologies in terms of technical characteristics, applications and deployment status.

3,031 citations

Journal ArticleDOI
TL;DR: Though beginning its third decade of active research, the field of ILC shows no sign of slowing down and includes many results and learning algorithms beyond the scope of this survey.
Abstract: This article surveyed the major results in iterative learning control (ILC) analysis and design over the past two decades. Problems in stability, performance, learning transient behavior, and robustness were discussed along with four design techniques that have emerged as among the most popular. The content of this survey was selected to provide the reader with a broad perspective of the important ideas, potential, and limitations of ILC. Indeed, the maturing field of ILC includes many results and learning algorithms beyond the scope of this survey. Though beginning its third decade of active research, the field of ILC shows no sign of slowing down.

2,645 citations
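
One of the standard analysis results covered by such ILC surveys is the frequency-domain monotonic-convergence condition |Q(e^jw)(1 - L(e^jw)P(e^jw))| < 1 for the update u_{j+1} = Q(u_j + L e_j). A minimal numerical check is sketched below; the plant, learning filter L, and Q-filter are hypothetical examples, not taken from the survey.

```python
import numpy as np
from scipy import signal

# Frequency-domain check of the ILC monotonic-convergence condition
# |Q(e^jw) * (1 - L(e^jw) * P(e^jw))| < 1 for u_{j+1} = Q(u_j + L*e_j).
# Plant, learning filter, and Q-filter below are hypothetical examples.

dt = 1e-3
P = signal.TransferFunction([0.05], [1.0, -0.95], dt=dt)              # simple discrete plant
L = signal.TransferFunction([2.0, -1.8], [1.0, 0.0], dt=dt)           # lead-like learning filter
Q = signal.TransferFunction([0.1, 0.8, 0.1], [1.0, 0.0, 0.0], dt=dt)  # low-pass Q-filter

w = np.linspace(1e-3, np.pi, 2000)          # normalized frequency grid [rad/sample]
_, Pw = signal.dfreqresp(P, w)
_, Lw = signal.dfreqresp(L, w)
_, Qw = signal.dfreqresp(Q, w)

rho = np.abs(Qw * (1.0 - Lw * Pw))
print("max |Q(1 - L*P)| over frequency:", rho.max())
print("monotonically convergent" if rho.max() < 1.0 else "condition violated")
```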

Proceedings ArticleDOI
27 Jun 2016
TL;DR: An LSTM model is proposed that can learn general human movement and predict future pedestrian trajectories, outperforming state-of-the-art methods on some public datasets.
Abstract: Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.

2,587 citations
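
As a rough illustration of the sequence-prediction framing described above, the sketch below feeds the past (x, y) positions of a single pedestrian into an LSTM and regresses the next position; the hidden size, horizons, and the omission of the paper's social pooling of neighbouring pedestrians are simplifying assumptions.

```python
import torch
import torch.nn as nn

# Stripped-down trajectory predictor in the spirit of the paper: an LSTM reads
# a pedestrian's past (x, y) positions and regresses the next position. The
# architecture and dimensions are illustrative; the paper's social pooling of
# neighbouring pedestrians is omitted here.

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, past_xy: torch.Tensor) -> torch.Tensor:
        # past_xy: (batch, T_obs, 2) observed positions
        out, _ = self.lstm(past_xy)
        return self.head(out[:, -1])                  # predicted next (x, y)

    @torch.no_grad()
    def rollout(self, past_xy: torch.Tensor, steps: int) -> torch.Tensor:
        """Autoregressively predict `steps` future positions."""
        seq, preds = past_xy, []
        for _ in range(steps):
            nxt = self.forward(seq).unsqueeze(1)
            preds.append(nxt)
            seq = torch.cat([seq, nxt], dim=1)        # feed prediction back in
        return torch.cat(preds, dim=1)

model = TrajectoryLSTM()
observed = torch.randn(4, 8, 2)                       # 4 pedestrians, 8 observed steps
future = model.rollout(observed, steps=12)            # predict 12 future steps
print(future.shape)                                   # torch.Size([4, 12, 2])
```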

Journal ArticleDOI
TL;DR: This article attempts to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

2,391 citations
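
To make the "model-free, value-function-based" category discussed above concrete, a minimal tabular Q-learning sketch on a toy corridor environment follows; the environment, reward, and hyperparameters are purely illustrative and not taken from the article.

```python
import numpy as np

# Minimal tabular Q-learning on a toy 1-D corridor: an example of the
# model-free, value-function-based family of methods discussed in the survey.
# The environment, reward, and hyperparameters are illustrative.

N_STATES = 10                     # corridor cells 0..9; the goal is the rightmost cell
ACTIONS = (-1, +1)                # move left or move right
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 500
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def env_step(s, a_idx):
    """Toy environment: reward 1 for reaching the goal cell, 0 otherwise."""
    s_next = int(np.clip(s + ACTIONS[a_idx], 0, N_STATES - 1))
    done = (s_next == N_STATES - 1)
    return s_next, (1.0 if done else 0.0), done

for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection with random tie-breaking
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next, r, done = env_step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + GAMMA * np.max(Q[s_next]) * (not done)
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s_next

print("greedy policy (0 = left, 1 = right):", np.argmax(Q, axis=1))
```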