Author

Robert Babuska

Bio: Robert Babuska is an academic researcher from Delft University of Technology. The author has contributed to research in the topics of fuzzy logic and reinforcement learning. He has an h-index of 56 and has co-authored 371 publications receiving 15,388 citations. Previous affiliations of Robert Babuska include Carnegie Mellon University and the Czech Technical University in Prague.


Papers
Proceedings ArticleDOI
26 Jun 1994
TL;DR: It is shown in this paper that the interpolation between neighboring linear models depends on the relation between model parameters.
Abstract: In Sugeno-Takagi reasoning, rule premises describe fuzzy subspaces of the inputs and rule consequents are linear input-output relations. Fuzzy models with this structure approximate global nonlinearities by a weighted average of local linear functions. It is shown in this paper that the interpolation between neighboring linear models depends on the relation between the model parameters. In certain cases undesirable interpolation is obtained, or, when the model parameters are estimated from system input-output data, the linear functions in the rule conclusions may not represent the local behavior of the system.

25 citations
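The weighted-average interpolation described in the abstract can be sketched in a few lines. The following is a minimal, illustrative one-dimensional Takagi-Sugeno model with Gaussian membership functions; the rule parameters and the function names (`gaussian_mf`, `ts_output`) are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function for a rule premise."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def ts_output(x, rules):
    """Takagi-Sugeno output: membership-weighted average of local
    linear consequents.  rules: list of (center, width, a, b) with
    local model y_i = a*x + b."""
    weights = np.array([gaussian_mf(x, c, w) for c, w, _, _ in rules])
    local_outputs = np.array([a * x + b for _, _, a, b in rules])
    return np.sum(weights * local_outputs) / np.sum(weights)

# Two neighboring local models; the interpolation between them depends
# on the relation between the consequent parameters (a_i, b_i).
rules = [(0.0, 1.0, 1.0, 0.0),   # around x=0: y ~ x
         (4.0, 1.0, -1.0, 8.0)]  # around x=4: y ~ -x + 8
print(ts_output(2.0, rules))  # halfway between the two rule centers
```

At the midpoint both memberships are equal, so the output is the plain average of the two local models; the paper's point is that between rule centers this blend can behave in ways the individual local models do not suggest.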

Proceedings ArticleDOI
18 Aug 2011
TL;DR: In this article, the authors consider the problem of maximizing the algebraic connectivity of the communication graph in a network of mobile robots by moving them into appropriate positions and formulate an approximate problem as a Semi-Definite Program (SDP).
Abstract: We consider the problem of maximizing the algebraic connectivity of the communication graph in a network of mobile robots by moving them into appropriate positions. We describe the Laplacian of the graph as dependent on the pairwise distances between the robots and formulate an approximate problem as a Semi-Definite Program (SDP). We propose a consistent, non-iterative distributed solution obtained by solving local SDPs that use information only from nearby neighboring robots. Numerical simulations show the performance of the algorithm with respect to the centralized solution.

25 citations
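The quantity being maximized, the algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian. The sketch below only computes that quantity for a distance-dependent communication graph; it does not implement the paper's SDP formulation, and the connection radius and exponential weight model are assumptions for illustration.

```python
import numpy as np

def laplacian(positions, radius=2.0):
    """Weighted graph Laplacian with edge weights decaying in the
    pairwise distance between robots (illustrative weight model)."""
    n = len(positions)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < radius:                      # connected if within range
                W[i, j] = W[j, i] = np.exp(-d)  # weight decays with distance
    return np.diag(W.sum(axis=1)) - W

def algebraic_connectivity(positions):
    """Second-smallest Laplacian eigenvalue (the Fiedler value);
    it is positive iff the communication graph is connected."""
    eigvals = np.linalg.eigvalsh(laplacian(positions))
    return eigvals[1]

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # a connected triangle
print(algebraic_connectivity(pos))
```

Moving the robots closer together increases the edge weights and hence this eigenvalue, which is what the paper's SDP optimizes in a distributed fashion.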

Proceedings ArticleDOI
01 Dec 2016
TL;DR: Experimental results and evaluation of a compensation method that improves the tracking performance of a nominal feedback controller by means of reinforcement learning (RL) show that the proposed RL-based compensation significantly improves the performance of the nominal feedback controller.
Abstract: In this article we provide experimental results and an evaluation of a compensation method which improves the tracking performance of a nominal feedback controller by means of reinforcement learning (RL). The compensator is based on the actor-critic scheme and adds a correction signal to the nominal control input with the goal of improving the tracking performance through on-line learning. The algorithm has been evaluated on a 6-DOF industrial robot manipulator with the objective of accurately tracking different types of reference trajectories. An extensive experimental study has shown that the proposed RL-based compensation method significantly improves the performance of the nominal feedback controller.

25 citations
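The structure of the scheme, a fixed nominal feedback law plus a learned correction, can be illustrated with a generic linear actor-critic on a toy second-order plant. Everything below (the PD gains, the plant model, the feature vector, the learning rates) is an assumption for the sketch; it is a textbook actor-critic TD update, not the authors' exact algorithm.

```python
import numpy as np

np.random.seed(0)
w_actor = np.zeros(2)    # linear actor:  correction = w_actor . phi(state)
w_critic = np.zeros(2)   # linear critic: value      = w_critic . phi(state)
alpha_a, alpha_c, gamma = 0.01, 0.1, 0.95

def phi(state):
    """Hypothetical feature vector: tracking error and its rate."""
    return np.array(state)

def nominal_control(state):
    """Nominal feedback (an assumed PD law)."""
    return -1.0 * state[0] - 0.5 * state[1]

def step(state, u, dt=0.1):
    """Assumed plant: double integrator with spring, Euler-discretized."""
    e, de = state
    dde = -e + u
    return [e + dt * de, de + dt * dde]

state = [1.0, 0.0]                # start with a unit tracking error
for _ in range(200):
    f = phi(state)
    exploration = 0.1 * np.random.randn()
    correction = w_actor @ f + exploration
    u = nominal_control(state) + correction      # nominal + RL correction
    next_state = step(state, u)
    reward = -next_state[0] ** 2                 # penalize tracking error
    td = reward + gamma * (w_critic @ phi(next_state)) - (w_critic @ f)
    w_critic += alpha_c * td * f                 # critic: TD(0) update
    w_actor += alpha_a * td * exploration * f    # actor: reinforce exploration
    state = next_state
print(state[0])  # tracking error after learning
```

The key design point mirrored from the paper is that the learner only shapes an additive correction, so the nominal controller keeps the system stable while the RL component learns on-line.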

Proceedings ArticleDOI
02 Dec 2001
TL;DR: The paper describes an algorithm that trains the Takagi-Sugeno (TS) type neuro-fuzzy network very efficiently, using the modified error index extension of the sum squared error as the network's performance index.
Abstract: The paper describes an algorithm that can be used to train the Takagi-Sugeno (TS) type neuro-fuzzy network very efficiently. The training algorithm is efficient in the sense that it can bring the performance index of the network, such as the sum squared error (SSE), down to the desired error goal much faster than the classical backpropagation algorithm (BPA). The proposed training algorithm is based on a slight modification of the Levenberg-Marquardt algorithm (LMA) which takes into account the modified error index extension of the sum squared error as the new performance index of the network. The Levenberg-Marquardt algorithm uses the Jacobian matrix to approximate the Hessian matrix, which is the most important and difficult step in implementing the LMA. Therefore, a simple technique is described that first computes the transpose of the Jacobian matrix by comparing two equations; the actual Jacobian matrix is then obtained by transposing the result, and this construction is found to be robust against the modified error index extension. Furthermore, care has been taken to suppress or control the oscillation magnitude during the training of the neuro-fuzzy network. Finally, the training algorithm is tested on neuro-fuzzy modeling and prediction of time series and of a nonlinear plant.

25 citations
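The Hessian approximation the abstract refers to is the standard Gauss-Newton step at the heart of Levenberg-Marquardt: H ≈ JᵀJ, damped by a parameter that blends between Gauss-Newton and gradient descent. The sketch below is a generic textbook LM loop on a trivial linear fit, not the paper's neuro-fuzzy-specific Jacobian construction; all names and the damping schedule are assumptions.

```python
import numpy as np

def lm_fit(residual, jacobian, p0, n_iter=50, mu=1e-2):
    """Generic Levenberg-Marquardt minimization of sum(residual(p)**2)."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + mu * np.eye(len(p)), J.T @ r)
        p_new = p - step
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, mu = p_new, mu * 0.5      # accept: move toward Gauss-Newton
        else:
            mu *= 2.0                    # reject: move toward gradient descent
    return p

# Toy problem: fit y = a*x + b to noise-free data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
residual = lambda p: p[0] * x + p[1] - y
jacobian = lambda p: np.stack([x, np.ones_like(x)], axis=1)
print(lm_fit(residual, jacobian, [0.0, 0.0]))  # converges to a=2, b=1
```

For a neuro-fuzzy network the residual would be the (modified-error-index) output error and the Jacobian would hold derivatives with respect to the membership and consequent parameters; the loop structure is unchanged.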


Cited by
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Apr 2003
TL;DR: This paper reviews the Ensemble Kalman Filter (EnKF), which has a large user group and numerous publications discussing its applications and theoretical aspects, and also presents new ideas and alternative interpretations that further explain the success of the EnKF.
Abstract: The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.

2,975 citations
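The core of the scheme the paper reviews is the ensemble analysis step: the forecast error covariance is estimated from the ensemble spread and used to form a Kalman gain. Below is a didactic sketch for a scalar state observed directly (H = 1), using the stochastic (perturbed-observation) variant; all numbers and names are illustrative, and a real EnKF would apply this to a high-dimensional model state.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar, directly observed state."""
    P = np.var(ensemble, ddof=1)        # forecast covariance from the ensemble
    K = P / (P + obs_var)               # Kalman gain
    # Perturb the observation independently for each member
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    return ensemble + K * (perturbed - ensemble)

rng = np.random.default_rng(1)
forecast = rng.normal(2.0, 1.0, size=500)        # forecast ensemble, var ~ 1
analysis = enkf_analysis(forecast, obs=0.0, obs_var=1.0, rng=rng)
print(analysis.mean())  # pulled roughly halfway toward the observation
```

With forecast and observation variances equal, the gain is about 0.5, so the analysis mean lands near the midpoint and the analysis spread shrinks, which is the behavior the full EnKF reproduces member-by-member in high dimensions.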

Journal ArticleDOI
TL;DR: This article attempts to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in reinforcement learning. The relationship between the disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

2,391 citations