Author

Robert Babuska

Bio: Robert Babuska is an academic researcher at Delft University of Technology. His research focuses on fuzzy logic and reinforcement learning. He has an h-index of 56 and has co-authored 371 publications receiving 15,388 citations. His previous affiliations include Carnegie Mellon University and the Czech Technical University in Prague.


Papers
Posted Content
TL;DR: This article provides a new approximate computational scheme for the reach-avoid specification based on the Fitted Value Iteration algorithm, and gives a priori computable formal probabilistic bounds on the error made by the approximation algorithm; the output of the numerical scheme is thus quantitatively assessed and meaningful for safety-critical applications.
Abstract: This article deals with stochastic processes endowed with the Markov (memoryless) property and evolving over general (uncountable) state spaces. The models further depend on a non-deterministic quantity in the form of a control input, which can be selected to affect the probabilistic dynamics. We address the computation of maximal reach-avoid specifications, together with the synthesis of the corresponding optimal controllers. The reach-avoid specification deals with assessing the likelihood that any finite-horizon trajectory of the model enters a given goal set, while avoiding a given set of undesired states. This article provides a new approximate computational scheme for the reach-avoid specification based on the Fitted Value Iteration algorithm, which hinges on random sample extractions, and gives a priori computable formal probabilistic bounds on the error made by the approximation algorithm: as such, the output of the numerical scheme is quantitatively assessed and thus meaningful for safety-critical applications. Furthermore, we provide tighter probabilistic error bounds that are sample-based. The overall computational scheme is put in relationship with alternative approximation algorithms in the literature, and finally its performance is practically assessed over a benchmark case study.
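The fitted-value-iteration idea described above can be sketched as follows. The 1-D dynamics, the goal/avoid intervals, the finite control grid, and the polynomial regressor are all illustrative assumptions, not the paper's benchmark or exact construction; the sketch only shows the backward recursion over sampled states with a Monte Carlo estimate of the expected next-step value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D stochastic system x' = x + 0.1*u + noise (NOT the paper's model).
def step(x, u):
    return x + 0.1 * u + 0.05 * rng.standard_normal(x.shape)

GOAL = (0.8, 1.0)                 # goal set to reach
AVOID = (-2.0, -1.0)              # unsafe set to avoid
U = np.array([-1.0, 0.0, 1.0])    # finite grid of control inputs

def in_goal(x):  return (x >= GOAL[0]) & (x <= GOAL[1])
def in_avoid(x): return (x >= AVOID[0]) & (x <= AVOID[1])

def features(x):
    # simple polynomial features for the value-function regressor
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

def fitted_value_iteration(horizon=10, n_samples=400, n_mc=30):
    theta = np.zeros(4)           # V_{k+1}(x) ~ features(x) @ theta
    for _ in range(horizon):
        xs = rng.uniform(-2.0, 2.0, n_samples)   # random sample extraction
        targets = np.empty(n_samples)
        for i, x in enumerate(xs):
            if in_goal(np.array([x]))[0]:
                targets[i] = 1.0                 # goal reached: probability 1
            elif in_avoid(np.array([x]))[0]:
                targets[i] = 0.0                 # unsafe: probability 0
            else:
                # Monte Carlo estimate of E[V_{k+1}(x')], maximized over u
                best = 0.0
                for u in U:
                    xn = step(np.full(n_mc, x), u)
                    v = np.clip(features(xn) @ theta, 0.0, 1.0)
                    v = np.where(in_goal(xn), 1.0,
                                 np.where(in_avoid(xn), 0.0, v))
                    best = max(best, v.mean())
                targets[i] = best
        # fit the regressor to the backed-up values (least squares)
        theta, *_ = np.linalg.lstsq(features(xs), targets, rcond=None)
    return theta

theta = fitted_value_iteration()
```

The a priori error bounds in the paper quantify how far such a regression-based approximation can deviate from the true reach-avoid probability; this sketch omits those bounds entirely.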

7 citations

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper uses an adaptive fuzzy observer to estimate the uncertainties in the state matrices of a two-degrees-of-freedom robot arm model and analyzes the improvement in the achievable controller performance when using the adaptive observer.
Abstract: Recently, adaptive fuzzy observers have been introduced that are capable of estimating uncertainties along with the states of a nonlinear system represented by an uncertain Takagi-Sugeno (TS) model. In this paper, we use such an adaptive observer to estimate the uncertainties in the state matrices of a two-degrees-of-freedom robot arm model. The TS model of the robot arm is constructed using the sector nonlinearity approach. The estimates are used in updating the model, and the updated model is used to design a controller for the robot arm. We analyze the improvement in the achievable controller performance when using the adaptive observer.
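The sector nonlinearity approach mentioned above can be illustrated on a scalar toy system rather than the robot-arm model: a bounded nonlinear term is rewritten exactly as a convex blend of linear local models. The system, sector bounds, and membership functions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sector-nonlinearity construction (NOT the robot-arm TS model):
# rewrite xdot = -sin(x) as a Takagi-Sugeno model on x in [-pi, pi].
# With z(x) = sin(x)/x bounded in [z1, z2], we have xdot = -z(x) * x.
z1, z2 = 0.0, 1.0        # sector bounds of sin(x)/x on [-pi, pi]
A = [-z1, -z2]           # local linear models xdot = A_i * x

def memberships(x):
    z = np.sinc(x / np.pi)            # numpy's sinc(t) = sin(pi t)/(pi t) = sin(x)/x
    h2 = (z - z1) / (z2 - z1)         # normalized membership functions
    return np.array([1.0 - h2, h2])   # h1 + h2 = 1

def ts_model(x):
    h = memberships(x)
    return sum(hi * Ai * x for hi, Ai in zip(h, A))

# Inside the sector the TS model reproduces the nonlinear dynamics exactly:
xs = np.linspace(-np.pi, np.pi, 9)
err = max(abs(ts_model(x) - (-np.sin(x))) for x in xs)
```

Exactness inside the chosen sector is the key property the approach relies on: the uncertainty the adaptive observer then estimates enters through unknown entries of the local matrices, not through the blending itself.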

7 citations

Journal ArticleDOI
TL;DR: In this paper, a decomposition of the nonlinear process model into two simpler subsystems is proposed, and a different type of observer is considered for each subsystem, namely a particle filter and an unscented Kalman filter.
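To give a flavor of one of the two observers mentioned, here is a minimal bootstrap particle filter on a scalar subsystem. The random-walk process model, the noise levels, and the constant true state are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal bootstrap particle filter for one scalar subsystem (illustrative only).
N = 500
particles = rng.normal(0.0, 1.0, N)      # prior samples of the state
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, y, q=0.1, r=0.2):
    # predict: propagate particles through the (here: random-walk) dynamics
    particles = particles + q * rng.standard_normal(N)
    # update: reweight by the Gaussian measurement likelihood p(y | x)
    weights = weights * np.exp(-0.5 * ((y - particles) / r) ** 2)
    weights /= weights.sum()
    # resample (multinomial) to counter weight degeneracy
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# Track a constant true state x = 1.5 from noisy measurements
for _ in range(30):
    y = 1.5 + 0.2 * rng.standard_normal()
    particles, weights = pf_step(particles, weights, y)
estimate = particles.mean()
```

An unscented Kalman filter would replace the sampled particles with deterministically chosen sigma points, which is cheaper when the posterior is close to Gaussian; the decomposition in the paper lets each subsystem get the observer suited to it.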

7 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper presents an observer-based balancing control strategy that is robust to persistent perturbations of the ground slope, based on estimating the disturbance without the need for an inertial measurement unit.
Abstract: Maintaining postural balance is a fundamental requirement for humanoid robots faced with unknown terrain or external perturbations. In this paper we present an observer-based balancing control strategy that is robust to persistent perturbations of the ground slope; it is based on estimating the disturbance without the need for an inertial measurement unit. We use the workspace control method in combination with an observer to perform specific maneuvers while maintaining postural balance. We employ a linear observer, a gain-scheduling observer and an extended Kalman filter and compare the results obtained. For the control task considered, the linear observer falls short in this setting, while both the gain-scheduling observer and the extended Kalman filter perform well, which is confirmed by experimental results on a planar biped robot.
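The core idea of estimating a persistent disturbance with an observer can be sketched on a scalar plant: augment the state with the unknown constant disturbance and let a linear observer estimate both. The plant, gains, and numbers below are hypothetical, not the biped model from the paper.

```python
import numpy as np

# Illustrative disturbance observer via state augmentation (NOT the biped model).
# Plant: x' = a*x + b*u + d,  y = x;  augmented state s = [x, d] with d' = 0.
a, b = -1.0, 1.0
A = np.array([[a, 1.0],
              [0.0, 0.0]])        # d' = 0 models a persistent disturbance
B = np.array([b, 0.0])
C = np.array([1.0, 0.0])
L = np.array([4.0, 3.0])          # observer gain, chosen for stable error dynamics

dt = 0.01
x, d = 0.0, 0.7                   # true state and (unknown) disturbance
s_hat = np.zeros(2)               # observer estimate [x_hat, d_hat]

for _ in range(3000):             # 30 s of forward-Euler simulation
    u = 0.5                       # arbitrary fixed input
    x += dt * (a * x + b * u + d)                            # true plant
    y = x
    s_hat += dt * (A @ s_hat + B * u + L * (y - C @ s_hat))  # Luenberger observer

d_hat = s_hat[1]
```

The estimated disturbance can then be fed back to the balance controller; in the paper this role is played by the linear, gain-scheduling, and EKF observers on the ground-slope disturbance.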

7 citations

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This paper designs and experimentally evaluates two nonlinear controllers for a magnetic manipulation (Magman) system, which consists of four electromagnetic coils arranged linearly, and benchmarks two nonlinear control methods, namely feedback linearization and constrained state-dependent Riccati equation (SDRE) control.
Abstract: Precise magnetic manipulation has numerous applications, ranging from manufacturing to the medical field. Owing to the nonlinear nature of the electromagnetic force, magnetic manipulation requires advanced nonlinear control. In this paper, we design and experimentally evaluate two nonlinear controllers for a magnetic manipulation (Magman) system, which consists of four electromagnetic coils arranged linearly. The current through the coils is controlled in order to accurately position a steel ball, rolling freely in a track above the coils. We benchmark two nonlinear control methods, namely feedback linearization and a constrained state-dependent Riccati equation (SDRE) control. These methods are chosen due to their widespread use in academia as well as industrial applications. On the actual setup, constrained SDRE has performed considerably better in terms of the settling time, overshoot, and the amount of control effort when compared to feedback linearization.
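The SDRE method benchmarked above can be sketched generically: factor the dynamics as xdot = A(x)x + Bu and solve a continuous-time algebraic Riccati equation at every state. The inverted-pendulum factorization, weights, and saturation level below are illustrative assumptions, not the Magman coil model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative SDRE controller on a pendulum-like system (NOT the Magman model).
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

def A_of(x):
    # State-dependent coefficient form: sin(x1) = (sin(x1)/x1) * x1
    return np.array([[0.0, 1.0],
                     [np.sinc(x[0] / np.pi), 0.0]])

def sdre_control(x):
    # Solve the Riccati equation at the current state, then apply LQR-like feedback
    P = solve_continuous_are(A_of(x), B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P(x)
    return float(-(K @ x))

# Closed-loop simulation from a 1 rad initial angle; the input saturation is a
# crude stand-in for the constraint handling of the constrained SDRE in the paper.
x = np.array([1.0, 0.0])
dt = 0.005
for _ in range(4000):
    u = max(-5.0, min(5.0, sdre_control(x)))
    xdot = np.array([x[1], np.sin(x[0]) + u])
    x = x + dt * xdot
```

Feedback linearization would instead cancel the nonlinearity exactly and impose linear error dynamics; the experimental comparison in the paper favors constrained SDRE on settling time, overshoot, and control effort.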

7 citations


Cited by
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Apr 2003
TL;DR: The EnKF has a large user group, and numerous publications have discussed its applications and theoretical aspects; this paper reviews those results and also presents new ideas and alternative interpretations that further explain the success of the EnKF.
Abstract: The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.
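The EnKF analysis step the paper formalizes can be sketched in a few lines: the forecast covariance is never formed explicitly but represented by ensemble anomalies, and each member is updated with perturbed observations. The toy two-state system and noise levels are illustrative assumptions, not from the paper (which also gives Fortran-style subroutine listings).

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal EnKF analysis step with perturbed observations (illustrative model).
def enkf_analysis(E, y, H, R):
    """E: (n, N) ensemble; y: (m,) observation; H: (m, n); R: (m, m)."""
    n, N = E.shape
    Ebar = E.mean(axis=1, keepdims=True)
    X = (E - Ebar) / np.sqrt(N - 1)          # ensemble anomalies
    Pf_Ht = X @ (H @ X).T                    # Pf H^T estimated from the ensemble
    S = (H @ X) @ (H @ X).T + R              # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)             # Kalman gain
    # perturbed observations: one noisy copy of y per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)

# Toy example: two-state system, only the first component is observed
N = 200
truth = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
E = rng.normal(0.0, 1.0, size=(2, N))        # prior ensemble
y = H @ truth + rng.normal(0.0, 0.1, 1)
Ea = enkf_analysis(E, y, H, R)
```

The observed component is pulled toward the measurement while the unobserved one moves only through the sampled cross-covariance; the paper's EnOI variant replaces the evolving ensemble with a static one to cut cost.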

2,975 citations

Journal ArticleDOI
TL;DR: This article attempts to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.
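The value-function-based branch of the dichotomy discussed above can be illustrated with tabular Q-learning on a toy corridor task; the environment, action ordering, and learning parameters are illustrative assumptions, not the simple problem analyzed in the survey.

```python
import random

random.seed(0)

# Minimal value-function-based RL: tabular Q-learning on a 1-D corridor
# with a reward at the rightmost state (illustrative toy problem).
N_STATES = 6            # states 0..5, reward on reaching state 5
ACTIONS = [1, -1]       # move right / left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

for _ in range(500):                       # episodes
    s, t = 0, 0
    while s != N_STATES - 1 and t < 100:   # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)  # TD target
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s, t = s2, t + 1

# greedy policy extracted from the learned value function
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)}
```

A policy-search method would instead parameterize and optimize the policy directly without maintaining Q; the survey argues that this choice, along with model-based versus model-free, largely determines which robot problems are tractable.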

2,391 citations