scispace - formally typeset
Author

Eric Berger

Bio: Eric Berger is an academic researcher from Stanford University. His research topics include mobile robots and articulated robots. He has an h-index of 4 and has co-authored 5 publications receiving 856 citations.

Papers
Book ChapterDOI
01 Jan 2006
TL;DR: A successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter, using a learned stochastic, nonlinear model of the helicopter's dynamics.
Abstract: Helicopters have highly stochastic, nonlinear, dynamics, and autonomous helicopter flight is widely regarded to be a challenging control problem. As helicopters are highly unstable at low speeds, it is particularly difficult to design controllers for low speed aerobatic maneuvers. In this paper, we describe a successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter. Using data collected from the helicopter in flight, we began by learning a stochastic, nonlinear model of the helicopter’s dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. Finally, the resulting controller was successfully tested on our autonomous helicopter platform.

587 citations
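The learn-a-model-then-optimize pipeline the abstract describes can be illustrated in miniature. The toy below is a hypothetical 1-D linear system, not the helicopter model from the paper: it collects data under random excitation, fits a dynamics model by least squares, and then searches for a feedback gain purely inside the learned model.

```python
import random

random.seed(0)

# Hypothetical 1-D stand-in for the unknown dynamics; the learner
# only sees this system through collected (x, u, x') samples.
def true_step(x, u):
    return 0.9 * x + 0.5 * u + random.gauss(0.0, 0.01)

# 1) Collect data under random excitation.
data = []
x = 1.0
for _ in range(500):
    u = random.uniform(-1.0, 1.0)
    x_next = true_step(x, u)
    data.append((x, u, x_next))
    x = x_next

# 2) Fit a (here linear) model x' ~ a*x + b*u by least squares.
sxx = sum(x * x for x, u, _ in data)
sxu = sum(x * u for x, u, _ in data)
suu = sum(u * u for x, u, _ in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a = (sxy * suu - suy * sxu) / det
b = (suy * sxx - sxy * sxu) / det

# 3) Policy search inside the learned model: pick the feedback gain k
# (u = -k*x) that best drives the simulated state to zero.
def rollout_cost(k):
    x, cost = 1.0, 0.0
    for _ in range(50):
        u = -k * x
        x = a * x + b * u        # learned model, noise-free rollout
        cost += x * x
    return cost

best_k = min((0.05 * i for i in range(80)), key=rollout_cost)
```

The best gain is found without touching the true system again, which mirrors the paper's motivation: real flight data is expensive and risky, so the controller is optimized against the learned model.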

Proceedings ArticleDOI
19 May 2008
TL;DR: A novel concept for a mobile, 2-armed, 25-degree-of-freedom system with backdrivable joints, low mechanical impedance, and a 5 kg payload per arm is described.
Abstract: The most critical challenge for Personal Robotics is to manage the issue of human safety and yet provide the physical capability to perform useful work. This paper describes a novel concept for a mobile, 2-armed, 25-degree-of-freedom system with backdrivable joints, low mechanical impedance, and a 5 kg payload per arm. System identification, design safety calculations and performance evaluation studies of the first prototype are included, as well as plans for future development.

196 citations

Proceedings Article
01 Jan 2007
TL;DR: The hardware and software integration frameworks used to facilitate the development of these components and to bring them together for the demonstration of the STAIR 1 robot responding to a verbal command to fetch an item are described.
Abstract: The STanford Artificial Intelligence Robot (STAIR) project is a long-term group effort aimed at producing a viable home and office assistant robot. As a small concrete step towards this goal, we showed a demonstration video at the 2007 AAAI Mobile Robot Exhibition of the STAIR 1 robot responding to a verbal command to fetch an item. Carrying out this task involved the integration of multiple components, including spoken dialog, navigation, visual object detection, and robotic grasping. This paper describes the hardware and software integration frameworks used to facilitate the development of these components and to bring them together for the demonstration.

73 citations

Patent
25 Nov 2009
TL;DR: A humanoid robotic system that may include a mobile base, a spine structure, a body structure, and at least one robotic arm, each movably configured to have significant human-scale capabilities in prescribed environments.
Abstract: Systems and methods related to construction, configuration, and utilization of humanoid robotic systems and aspects thereof are described. A system may include a mobile base, a spine structure, a body structure, and at least one robotic arm, each of which is movably configured to have significant human-scale capabilities in prescribed environments. The one or more robotic arms may be rotatably coupled to the body structure, which may be mechanically associated with the mobile base and spine such that it may be deflectably elevated and rolled relative to the base simultaneously and independently. Aspects of the one or more arms may be counterbalanced with one or more spring-based counterbalancing mechanisms which facilitate backdriveability and payload features.

53 citations

Patent
19 May 2008
TL;DR: An actuation system for a multi-segmented robot linkage that combines a gravity counterbalancing mechanism for the linkage and its payload with high-bandwidth, backdrivable actuators at the joints.
Abstract: An actuation system for a multi-segmented robot linkage is provided. The system includes (i) a gravity counterbalancing mechanism for the multi-segmented robot linkage and a payload in contact with the multi-segmented robot linkage and (ii) a plurality of actuators acting on the joints of the multi-segmented robot linkage, whereby the actuators are high-bandwidth back-drivable actuators.

1 citation
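The gravity-compensation idea above can be grounded with the standard zero-free-length-spring counterbalance relation from textbook mechanism design (the numbers and geometry below are illustrative, not taken from the patent): a spring of stiffness k, attached at distance a along the link and at height b above the pivot, cancels the gravity torque m·g·r·sin(θ) at every link angle θ exactly when k·a·b = m·g·r.

```python
import math

# Illustrative parameters, not from the patent.
m, g, r = 2.0, 9.81, 0.4     # payload mass (kg), gravity (m/s^2), COM distance (m)
a, b = 0.1, 0.2              # spring attachment geometry (m)
k = m * g * r / (a * b)      # zero-free-length spring stiffness for exact balance

def net_torque(theta):
    """Gravity torque minus spring torque about the pivot (N*m)."""
    tau_gravity = m * g * r * math.sin(theta)  # from V_g = m*g*r*cos(theta)
    tau_spring = k * a * b * math.sin(theta)   # from V_s = const - k*a*b*cos(theta)
    return tau_gravity - tau_spring

# Residual torque sampled over the link's range of motion.
residual = max(abs(net_torque(0.1 * i)) for i in range(32))
```

Because the actuators never have to hold the arm against gravity, they can be sized for dynamics rather than static load, which is what makes the low-impedance, backdrivable design in the claims practical.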


Cited by
Proceedings Article
01 Jan 2009
TL;DR: This paper discusses how ROS relates to existing robot software frameworks, and briefly overviews some of the available application software which uses ROS.
Abstract: This paper gives an overview of ROS, an open-source robot operating system. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogeneous compute cluster. In this paper, we discuss how ROS relates to existing robot software frameworks, and briefly overview some of the available application software which uses ROS.

8,387 citations
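The "structured communications layer" the ROS abstract refers to is built around anonymous, topic-based publish/subscribe. The sketch below is a minimal in-process stand-in for that idea, not the actual ROS (rospy) API: nodes register callbacks on named topics, and publishers never address subscribers directly.

```python
from collections import defaultdict

class TopicBus:
    """Toy topic-based publish/subscribe bus in the spirit of ROS."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Nodes register interest in a named topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Publishers are decoupled from subscribers: the bus fans the
        # message out to every callback registered on the topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)   # e.g. a mapping node
bus.subscribe("/scan", lambda m: None)    # e.g. a logger, independently
bus.publish("/scan", {"ranges": [1.2, 0.8]})
```

This decoupling is why heterogeneous processes (drivers, planners, visualizers) can be developed and restarted independently, which is the design point the paper emphasizes.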

Journal ArticleDOI
TL;DR: A comprehensive survey of robot Learning from Demonstration (LfD), a technique that derives policies from example state-to-action mappings; the survey analyzes and categorizes the multiple ways in which examples are gathered, as well as the various techniques for policy derivation.

3,343 citations

Journal ArticleDOI
TL;DR: This article attempts to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in reinforcement learning. The relationship between the disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

2,391 citations
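The value-function-based family this survey contrasts with policy search can be shown on a toy problem. Below is a minimal tabular Q-learning sketch on a hypothetical 5-state chain (all names and numbers are illustrative): the agent bootstraps state-action values from experience, and the policy is read off greedily at the end.

```python
import random

random.seed(1)

# Toy 5-state chain MDP: actions move left/right, reward 1.0 only on
# reaching the last state. A stand-in for the value-function-based
# methods the survey discusses, not any specific robot task.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning backup: bootstrap from the best next action.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS)
                              - Q[(s, a)])
        s = s2
        if r:
            break

# The learned greedy policy at each non-terminal state.
greedy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)]
```

A policy-search method would instead parameterize the policy directly and optimize its return without ever representing Q, which is exactly the design choice the survey analyzes.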

Journal Article
TL;DR: In this article, a guided policy search method is used to map raw image observations directly to torques at the robot's motors, with supervision provided by a simple trajectory-centric reinforcement learning method.
Abstract: Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.

1,934 citations

Journal ArticleDOI
TL;DR: Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world.
Abstract: Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.

1,743 citations