Author

Benjamin Pitzer

Bio: Benjamin Pitzer is an academic researcher from Bosch. The author has contributed to research in topics including Middleware (distributed applications) and Mobile robot. The author has an h-index of 18 and has co-authored 29 publications receiving 1,098 citations. Previous affiliations of Benjamin Pitzer include Google and the Karlsruhe Institute of Technology.

Papers
Book Chapter
01 Jan 2017
TL;DR: Rosbridge provides simple, socket-based programmatic access to robot interfaces and algorithms provided by ROS, the open-source “Robot Operating System” and the current state of the art in robot middleware.

Abstract: We present rosbridge, a middleware abstraction layer which provides robotics technology with a standard, minimalist application development framework accessible to application programmers who are not themselves roboticists. Rosbridge provides simple, socket-based programmatic access to robot interfaces and algorithms provided (for now) by ROS, the open-source “Robot Operating System” and the current state of the art in robot middleware. In particular, it facilitates the use of web technologies such as JavaScript for the purpose of broadening the use and usefulness of robotic technology. We demonstrate potential applications in interface design, education, human-robot interaction, and remote laboratory environments.
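Since the abstract centers on socket-based, JSON-friendly access to ROS, a brief sketch may help illustrate what a non-ROS client talking to rosbridge can look like. This is an illustrative sketch only, not code from the paper: it assumes a rosbridge_server is listening on its default port 9090, uses the third-party websocket-client Python package, and the topic name is hypothetical.

```python
import json
import websocket  # third-party package: websocket-client

# Connect to a rosbridge_server assumed to be running on the default port 9090.
ws = websocket.create_connection("ws://localhost:9090")

# Ask rosbridge to forward messages from a ROS topic over this socket.
ws.send(json.dumps({
    "op": "subscribe",
    "topic": "/robot/status",      # hypothetical topic name
    "type": "std_msgs/String",
}))

# Each incoming frame is a JSON document wrapping one ROS message.
for _ in range(5):
    frame = json.loads(ws.recv())
    print(frame["msg"]["data"])

ws.close()
```

The same small set of JSON operations ("subscribe", "publish", "advertise", "call_service") can equally be sent from browser-side JavaScript, which is what enables the web-technology use cases the abstract mentions.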

176 citations

Journal Issue
TL;DR: This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that successfully entered the finals of the 2007 DARPA Urban Challenge competition.
Abstract: This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that successfully entered the finals of the 2007 DARPA Urban Challenge competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. Environmental perception mainly relies on a recent laser scanner that delivers both range and reflectivity measurements. Whereas range measurements are used to provide three-dimensional scene geometry, measuring reflectivity allows for robust lane marker detection. Mission and maneuver planning is conducted using a hierarchical state machine that generates behavior in accordance with California traffic laws. We conclude with a report of the results achieved during the competition. © 2008 Wiley Periodicals, Inc.
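The abstract's mention of a hierarchical state machine for mission and maneuver planning can be made concrete with a small sketch. The following is illustrative only and not the AnnieWAY implementation; all maneuver names, sub-states, and events are hypothetical.

```python
# Hierarchy: top-level maneuver -> ordered sub-states executed within it.
HIERARCHY = {
    "follow_lane": [],
    "intersection": ["approach_stop_line", "yield_to_traffic", "proceed"],
    "overtake": ["change_lane_left", "pass_vehicle", "change_lane_right"],
}

# Transitions between top-level maneuvers, keyed by a perceived event.
TRANSITIONS = {
    ("follow_lane", "stop_line_ahead"): "intersection",
    ("intersection", "intersection_cleared"): "follow_lane",
    ("follow_lane", "slow_vehicle_ahead"): "overtake",
    ("overtake", "lane_clear"): "follow_lane",
}

def step(maneuver: str, event: str) -> str:
    """Return the next top-level maneuver for a perceived event."""
    return TRANSITIONS.get((maneuver, event), maneuver)

state = "follow_lane"
for event in ["stop_line_ahead", "intersection_cleared", "slow_vehicle_ahead"]:
    state = step(state, event)
    print(event, "->", state, HIERARCHY[state])
```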

157 citations

Proceedings Article
04 Jun 2008
TL;DR: In this article, a real-time lane marker detection system based on current sensor technology was developed and implemented, allowing the robust estimation of deviations between a digital map and the real world.

Abstract: The detection of lane markers is a prerequisite for many driver assistance systems as well as for autonomous vehicles. In this paper, the lane marker detection approach developed by Team AnnieWAY for the 2007 DARPA Urban Challenge is described. Based on current sensor technology, a robust real-time lane marker detection system was developed and implemented. The system allows the robust estimation of deviations between a digital map and the real world.
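Because the companion AnnieWAY paper above emphasizes that reflectivity measurements enable robust lane marker detection, a toy sketch may clarify the idea: painted markers return higher reflectivity than asphalt, so thresholding that channel yields candidate marker points whose lateral position can be compared against the digital map. This is illustrative only, not Team AnnieWAY's algorithm; the threshold, frame conventions, and numbers are hypothetical.

```python
import numpy as np

def lane_marker_offset(points_xy, reflectivity, mapped_lane_y, threshold=0.7):
    """Estimate the lateral deviation between detected markers and the map.

    points_xy    : (N, 2) scan points in the vehicle frame (x forward, y left)
    reflectivity : (N,) normalized reflectivity per point, in [0, 1]
    mapped_lane_y: lateral position of the lane marker according to the map
    """
    candidates = points_xy[reflectivity > threshold]   # bright returns = paint
    if len(candidates) == 0:
        return None                                    # no markers detected
    measured_y = np.median(candidates[:, 1])           # robust lateral estimate
    return measured_y - mapped_lane_y                  # map-vs-world deviation

# Toy example: four returns, two of them from painted markers.
pts  = np.array([[5.0, 1.6], [5.0, 0.1], [6.0, 1.7], [6.0, -0.2]])
refl = np.array([0.9, 0.2, 0.85, 0.15])
print(lane_marker_offset(pts, refl, mapped_lane_y=1.5))   # ~0.15 m
```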

117 citations

Proceedings Article
24 Dec 2012
TL;DR: The semantic object maps presented in this article, which are called SOM+, extend the first generation of SOMs presented by Rusu et al. in that the representation of SOM+ is designed more thoroughly and that SOM+ also include knowledge about the appearance and articulation of furniture objects.

Abstract: In this article we investigate the representation and acquisition of Semantic Object Maps (SOMs) that can serve as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments. These maps provide the robot with information about its operating environment that enables it to perform fetch-and-place tasks more efficiently and reliably. To this end, the semantic object maps can answer queries such as the following: “What do parts of the kitchen look like?”, “How can a container be opened and closed?”, “Where do objects of daily use belong?”, “What is inside of cupboards/drawers?”, etc. The semantic object maps presented in this article, which we call SOM+, extend the first generation of SOMs presented by Rusu et al. [1] in that the representation of SOM+ is designed more thoroughly and that SOM+ also include knowledge about the appearance and articulation of furniture objects. Also, the acquisition methods for SOM+ substantially advance those developed in [1] in that SOM+ are acquired autonomously and with low-cost (Kinect) rather than very accurate (laser-based) 3D sensors. In addition, the perception methods are more general and are demonstrated to work in different kitchen environments.
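To make the kind of knowledge such a map holds more tangible, here is a small illustrative data structure. It is not the SOM+ representation from the article; all field names, values, and the query below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Articulation:
    joint_type: str   # e.g. "prismatic" (drawer) or "revolute" (hinged door)
    axis: tuple       # joint axis in the map frame
    limits: tuple     # (min, max) joint value, in meters or radians

@dataclass
class FurnitureObject:
    name: str
    pose: tuple                       # (x, y, z, yaw) in the map frame
    appearance_model: str             # reference to an appearance/feature model
    articulation: Optional[Articulation] = None
    contains: list = field(default_factory=list)  # objects of daily use inside

kitchen_map = [
    FurnitureObject(
        name="cupboard_1",
        pose=(1.2, 0.4, 0.0, 1.57),
        appearance_model="models/cupboard_1_front.png",
        articulation=Articulation("revolute", (0, 0, 1), (0.0, 1.5)),
        contains=["mug", "plate"],
    ),
]

# A fetch-and-place style query: where does a mug belong, and how is it reached?
target = next(o for o in kitchen_map if "mug" in o.contains)
print(target.name, target.articulation.joint_type)   # cupboard_1 revolute
```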

102 citations

Proceedings Article
05 Dec 2011
TL;DR: A system that is capable of fully autonomously transforming a clothing item from a random crumpled configuration into a folded state is presented, and a method to compute valid grasp poses on the cloth that accounts for deformability is described.

Abstract: The physical properties of highly deformable objects such as clothing pose a challenging problem for autonomously acting systems. In particular, grasping and manipulation require new approaches that can accommodate an object's variable and changing appearance. In this paper, we present a system that is capable of fully autonomously transforming a clothing item from a random crumpled configuration into a folded state. We describe a method to compute valid grasp poses on the cloth which accounts for deformability. Our algorithm includes a novel fold detection and grasp generation strategy, which suggests grasp poses on cloth folds. Machine learning techniques are used to evaluate these grasp poses. In our experiments, we use a stock PR2 robot whose two arms alternately perform grasps on a T-shirt equipped with fiducial markers. The goal of this grasp sequence is to bring the T-shirt into a configuration from which the robot can fold it. In several experiments, we demonstrate the performance of our approach.
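The fold-based grasp generation and learned evaluation described above can be pictured with a small toy example: candidate grasp poses on detected folds are described by a few geometric features and ranked by a scoring function. The sketch below is illustrative only and is not the paper's detector or classifier; the features, weights, and numbers are made up.

```python
import numpy as np

# Per-candidate features: [fold height (m), fold curvature, distance to cloth edge (m)]
candidates = np.array([
    [0.04, 0.8, 0.10],
    [0.01, 0.2, 0.02],
    [0.06, 0.9, 0.25],
])

# A linear scoring function standing in for a trained model; the weights are
# hypothetical and simply prefer taller, sharper folds.
weights = np.array([10.0, 2.0, -1.0])
bias = -1.0

scores = candidates @ weights + bias           # one score per grasp candidate
best = int(np.argmax(scores))
print("best grasp candidate:", best, "score:", round(float(scores[best]), 2))
```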

76 citations


Cited by
Journal Article
TL;DR: This paper presents what is now the de-facto standard formulation for SLAM, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers.

Abstract: Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
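For readers unfamiliar with the "de-facto standard formulation" the survey refers to, it is maximum a posteriori (MAP) estimation over a factor graph, which under Gaussian noise assumptions reduces to a nonlinear least-squares problem. A compact statement of that formulation is given below; the notation is generic and not quoted from the paper.

```latex
% X collects the variables to estimate (robot poses, landmarks),
% Z = {z_k} the measurements, h_k the measurement model of factor k,
% and \Sigma_k its noise covariance (Mahalanobis norm).
X^{*} = \arg\max_{X} \, p(X \mid Z)
      = \arg\max_{X} \, p(X) \prod_{k} p(z_k \mid X_k)
\quad\Longrightarrow\quad
X^{*} = \arg\min_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Sigma_k}
```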

1,828 citations

Journal Article
TL;DR: This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature, with a description of the techniques used by research teams, their contributions to motion planning, and a comparison among these techniques.

Abstract: Intelligent vehicles have increased their capabilities for highly, and even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to drive autonomously in complex environments. The main goal is to execute strategies that improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still require further effort before real-world implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the techniques used by research teams, their contributions to motion planning, and a comparison among these techniques are also presented. Relevant works on overtaking and obstacle avoidance maneuvers are presented, allowing an understanding of the gaps and challenges to be addressed in the coming years. Finally, an overview of future research directions and applications is given.

1,162 citations

Journal Article
TL;DR: An overview of the autonomous vehicle is given, and details are presented on vision- and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.

Abstract: 125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in a fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages, and major cities (e.g. downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision- and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.

783 citations

Journal Article
01 Apr 2014
TL;DR: This paper presents a generic breakdown of the problem of road or lane perception into its functional building blocks and elaborates the wide range of proposed methods within this scheme.

Abstract: The problem of road or lane perception is a crucial enabler for advanced driver assistance systems. As such, it has been an active field of research for the past two decades, with considerable progress made in the past few years. The problem has been confronted under various scenarios and with different task definitions, leading to the use of diverse sensing modalities and approaches. In this paper we survey the approaches and the algorithmic techniques devised for the various modalities over the last five years. We present a generic breakdown of the problem into its functional building blocks and elaborate the wide range of proposed methods within this scheme. For each functional block, we describe the possible implementations suggested and analyze their underlying assumptions. While impressive advancements have been demonstrated in limited scenarios, inspection of the needs of next-generation systems reveals significant gaps. We identify these gaps and suggest research directions that may bridge them.

735 citations

Journal Article
Abstract: Currently, autonomous or self-driving vehicles are at the heart of academic and industry research because of their multi-faceted advantages, which include improved safety, reduced congestion, lower emissions, and greater mobility. Software is the key driving factor underpinning autonomy, within which planning algorithms responsible for mission-critical decision making hold a significant position. While transporting passengers or goods from a given origin to a given destination, motion planning methods incorporate searching for a path to follow, avoiding obstacles, and generating the best trajectory that ensures safety, comfort, and efficiency. A range of different planning approaches have been proposed in the literature. The purpose of this paper is to review existing approaches and then compare and contrast the different methods employed for the motion planning of autonomous on-road driving, which consists of (1) finding a path, (2) searching for the safest manoeuvre, and (3) determining the most feasible trajectory. Methods developed by researchers at each of these three levels exhibit varying levels of complexity and performance accuracy. This paper presents a critical evaluation of each of these methods in terms of their advantages/disadvantages, inherent limitations, feasibility, optimality, handling of obstacles, and testing operational environments. Based on a critical review of existing methods, research challenges to address current limitations are identified and future research directions are suggested so as to enhance the performance of planning algorithms at all three levels. Some promising areas of future focus have been identified, such as the use of vehicular communications (V2V and V2I) and the incorporation of transport engineering aspects, in order to improve the look-ahead horizon of current sensing technologies that are essential for planning, with the aim of reducing the total cost of driverless vehicles. This critical review of planning techniques, along with the associated discussions on their constraints and limitations, seeks to assist researchers in accelerating development in the emerging field of autonomous vehicle research.
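The three planning levels distinguished in this abstract can be pictured as a pipeline. The sketch below is purely illustrative (not from the paper), with placeholder bodies and hypothetical names; it only shows how the levels hand results to one another.

```python
def find_path(road_network, origin, destination):
    """Level 1: route through the road network (e.g. by graph search)."""
    return ["segment_a", "segment_b", "segment_c"]            # placeholder route

def select_maneuver(path_segment, perceived_obstacles):
    """Level 2: choose the safest high-level manoeuvre on the current segment."""
    return "overtake" if perceived_obstacles else "keep_lane"

def generate_trajectory(maneuver, speed, horizon_steps=10, dt=0.1):
    """Level 3: produce a time-parameterized trajectory for the chosen manoeuvre."""
    return [(k * dt, speed * k * dt) for k in range(horizon_steps)]  # (time, distance)

path = find_path(road_network=None, origin="A", destination="B")
maneuver = select_maneuver(path[0], perceived_obstacles=["parked_car"])
trajectory = generate_trajectory(maneuver, speed=8.0)
print(maneuver, "trajectory points:", len(trajectory))
```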

599 citations