Proceedings Article

STAIR: Hardware and Software Architecture

01 Jan 2007, pp. 31-37
TL;DR: This paper describes the hardware and software integration frameworks used to develop the individual components and to bring them together for the demonstration of the STAIR 1 robot responding to a verbal command to fetch an item.
Abstract: The STanford Artificial Intelligence Robot (STAIR) project is a long-term group effort aimed at producing a viable home and office assistant robot. As a small concrete step towards this goal, we showed a demonstration video at the 2007 AAAI Mobile Robot Exhibition of the STAIR 1 robot responding to a verbal command to fetch an item. Carrying out this task involved the integration of multiple components, including spoken dialog, navigation, computer visual object detection, and robotic grasping. This paper describes the hardware and software integration frameworks used to facilitate the development of these components and to bring them together for the demonstration.
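
The fetch demonstration chains the components listed above into one pipeline: a spoken command is interpreted, the robot navigates to a likely location, looks for the object, grasps it, and returns. A minimal sketch of that kind of integration is shown below; every function name and return value is an invented stand-in, not the actual STAIR code.

    # Hypothetical sketch of the fetch-an-item pipeline; the functions below are
    # invented stand-ins for STAIR's spoken-dialog, navigation, vision, and
    # grasping modules, not the project's real interfaces.

    def listen_for_command():            # spoken dialog: utterance -> requested item
        return "stapler"

    def navigate_to(location):           # navigation: drive the base to a named place
        print(f"navigating to {location}")

    def detect_object(item):             # vision: return a detection with a pose, or None
        return {"item": item, "pose": (0.6, 0.1, 0.8)}

    def grasp(pose):                     # grasping: attempt a pick at the given pose
        print(f"grasping at {pose}")
        return True

    def fetch(requester_location, known_locations):
        item = listen_for_command()
        for location in known_locations.get(item, []):
            navigate_to(location)
            detection = detect_object(item)
            if detection and grasp(detection["pose"]):
                navigate_to(requester_location)   # bring the item back to the requester
                return True
        return False

    print(fetch("office_121", {"stapler": ["printer_room", "supply_shelf"]}))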


Citations
Proceedings Article
01 Jan 2009
TL;DR: This paper discusses how ROS relates to existing robot software frameworks and briefly overviews some of the available application software that uses ROS.
Abstract: This paper gives an overview of ROS, an open-source robot operating system. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogeneous compute cluster. In this paper, we discuss how ROS relates to existing robot software frameworks and briefly overview some of the available application software that uses ROS.
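
The "structured communications layer" mentioned in the abstract is organized around anonymous publish/subscribe over named, typed topics. The snippet below is a minimal sketch of that pattern using the rospy client library; the node name, topic name, and 1 Hz rate are arbitrary choices for illustration.

    #!/usr/bin/env python
    # Minimal ROS publish/subscribe sketch using rospy; node, topic, and rate are arbitrary.
    import rospy
    from std_msgs.msg import String

    def callback(msg):
        # Invoked whenever any node publishes on the "chatter" topic.
        rospy.loginfo("heard: %s", msg.data)

    def main():
        rospy.init_node('chatter_demo')
        pub = rospy.Publisher('chatter', String, queue_size=10)
        rospy.Subscriber('chatter', String, callback)

        rate = rospy.Rate(1)  # publish once per second
        while not rospy.is_shutdown():
            pub.publish(String(data='hello from ROS'))
            rate.sleep()

    if __name__ == '__main__':
        main()

Because publishers and subscribers only agree on a topic name and message type, the two halves above could just as well live in separate processes on different machines, which is the point of the communications layer.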

8,387 citations


Cites background from "STAIR: Hardware and Software Archit..."

  • ...ROS was designed to meet a specific set of challenges encountered when developing large-scale service robots as part of the STAIR project [2] at Stanford University1 and the Personal Robots Program [3] at Willow Garage,2 but the resulting architecture is far more general than the service-robot and mobile-manipulation domains....


Journal ArticleDOI
TL;DR: The dissertation presented in this article proposes Semantic 3D Object Models as a novel representation of the robot’s operating environment that satisfies the requirements of manipulation tasks, and shows how these models can be automatically acquired from dense 3D range data.
Abstract: Environment models serve as important resources for an autonomous robot by providing it with the necessary task-relevant information about its habitat. Their use enables robots to perform their tasks more reliably, flexibly, and efficiently. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models: for manipulation purposes their models have to include the objects present in the world, together with their position, form, and other aspects, as well as an interpretation of these objects with respect to the robot tasks. The dissertation presented in this article (Rusu, PhD thesis, 2009) proposes Semantic 3D Object Models as a novel representation of the robot’s operating environment that satisfies these requirements and shows how these models can be automatically acquired from dense 3D range data.
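
Acquiring object models from dense 3D range data typically begins by segmenting large planar supports (tables, shelves) out of the point cloud and clustering the remaining points into object candidates. The sketch below shows a simplified RANSAC plane fit in that spirit; it illustrates the general idea only and is not the processing pipeline of the dissertation.

    # Illustrative RANSAC plane segmentation for an N x 3 point cloud (numpy).
    # Simplified sketch of the general idea, not the dissertation's pipeline.
    import numpy as np

    def ransac_plane(points, iterations=200, threshold=0.01, rng=None):
        """Return (normal, d, inlier_mask) for the dominant plane n.x + d = 0."""
        rng = rng or np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(iterations):
            # Fit a candidate plane through three randomly sampled points.
            p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                 # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal.dot(p0)
            # Count points within `threshold` metres of the candidate plane.
            inliers = np.abs(points @ normal + d) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        normal, d = best_model
        return normal, d, best_inliers

    # Usage on a synthetic scene: a flat "table" at z = 0.7 m plus scattered clutter.
    rng = np.random.default_rng(1)
    table = np.c_[rng.uniform(0, 1, (500, 2)), np.full(500, 0.7)]
    clutter = rng.uniform(0, 1, (100, 3))
    cloud = np.vstack([table, clutter])
    normal, d, on_plane = ransac_plane(cloud)
    print(on_plane.sum(), "of", len(cloud), "points lie on the dominant plane")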

908 citations

Journal ArticleDOI
TL;DR: This paper describes HSR’s development background since 2006 and the technical details of its hardware design and software architecture, including the omnidirectional mobile base built on a dual-wheel caster-drive mechanism that underlies HSR’s movement, and a novel whole-body motion control system.
Abstract: There has been increasing worldwide interest in mobile manipulators that can perform physical work in living spaces, driven by aging populations with declining birth rates and the expectation of improving quality of life (QoL). We expect that overall research and development will accelerate if many researchers share a common robot platform, since that enables them to share their research results. We have therefore developed a compact and safe research platform, the Human Support Robot (HSR), which can be operated in an actual home environment, and we have provided it to various research institutes to establish a developer community. The number of HSR users has grown to 44 sites in 12 countries worldwide (as of November 30th, 2018). To energize this community, we expect robot competitions to be effective. Following an international open call, HSR has been adopted as a standard platform for international robot competitions such as RoboCup@Home and the World Robot Summit (WRS), and it is provided to participants in those competitions. In this paper, we describe HSR’s development background since 2006 and the technical details of its hardware design and software architecture. Specifically, we describe its omnidirectional mobile base, which uses a dual-wheel caster-drive mechanism and is the basis of HSR’s movement, and a novel whole-body motion control system. Finally, we describe the verification of its autonomous task capability and the results of its use in RoboCup@Home, in order to demonstrate the effect of introducing the platform.
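
An omnidirectional base such as HSR's accepts independent forward, lateral, and rotational velocity commands. As a generic illustration of what that enables (standard planar kinematics only; this is not HSR's caster-drive controller), a desired world-frame velocity can be rotated into the base frame before being sent to the low-level controller:

    # Map a desired world-frame planar velocity into the robot's base frame.
    # Generic holonomic-base math for illustration; not HSR's caster-drive controller.
    import math

    def world_to_base_velocity(vx_w, vy_w, wz, theta):
        """theta is the robot's heading in the world frame (radians)."""
        c, s = math.cos(theta), math.sin(theta)
        vx_b = c * vx_w + s * vy_w     # forward component in the base frame
        vy_b = -s * vx_w + c * vy_w    # lateral component in the base frame
        return vx_b, vy_b, wz          # rotation rate is unchanged by the frame change

    # Example: robot facing +90 degrees, asked to move along world +x at 0.2 m/s.
    print(world_to_base_velocity(0.2, 0.0, 0.0, math.pi / 2))
    # -> approximately (0.0, -0.2, 0.0): a pure sideways motion for the base.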

139 citations

Journal ArticleDOI
21 Jun 2012
TL;DR: This paper reveals some of the structure present in everyday environments that Herb 2.0 has been able to harness for manipulation and interaction, and describes some of the lessons learned from extensively testing the integrated platform in kitchen and office environments.
Abstract: We present the hardware design, software architecture, and core algorithms of Herb 2.0, a bimanual mobile manipulator developed at the Personal Robotics Lab at Carnegie Mellon University, Pittsburgh, PA. We have developed Herb 2.0 to perform useful tasks for and with people in human environments. We exploit two key paradigms in human environments: that they have structure a robot can learn, adapt to, and exploit, and that they demand general-purpose capability in robotic systems. In this paper, we reveal some of the structure present in everyday environments that we have been able to harness for manipulation and interaction, comment on the particular challenges of working in human spaces, and describe some of the lessons we learned from extensively testing our integrated platform in kitchen and office environments.

125 citations

Proceedings ArticleDOI
10 May 2010
TL;DR: The primary target of this work is human-robot collaboration, especially for service robots in complicated application scenarios; a series of case studies conducted on Ke Jia produced positive results, verifying its ability to acquire knowledge through spoken dialog with users, to solve problems autonomously using the acquired causal knowledge, and to plan complex tasks autonomously.
Abstract: The primary target of this work is human-robot collaboration, especially for service robots in complicated application scenarios. Three assumptions and four requirements are identified. State-of-the-art, general-purpose Natural Language Processing (NLP), Commonsense Reasoning (in particular, ASP), and Robotics techniques are integrated in a layered architecture. The architecture and mechanisms have been implemented on a service robot, Ke Jia. Instead of command languages, small limited segments of natural language are employed in spoken dialog between Ke Jia and its users. The information in the dialog is extracted, classified, and transferred into an inner representation by Ke Jia's NLP mechanism, and then used autonomously in problem-solving and planning. A series of case studies was conducted on Ke Jia with positive results, verifying its ability to acquire knowledge through spoken dialog with users, to solve problems autonomously using the acquired causal knowledge, and to plan complex tasks autonomously.
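
The layered flow described here (restricted natural language in, an inner symbolic representation, then autonomous planning) can be caricatured in a few lines. The pattern, the fact format, and the plan() stub below are all hypothetical stand-ins, not Ke Jia's NLP or ASP machinery.

    # Rough sketch of a dialog -> inner representation -> planning pipeline.
    # The regular expression, fact tuples, and plan() stub are hypothetical.
    import re

    def parse_utterance(text):
        """Turn a restricted-English command into symbolic facts."""
        m = re.match(r"bring me the (\w+) from the (\w+)", text.lower())
        if m:
            obj, place = m.groups()
            return [("goal", "deliver", obj, "user"), ("located_at", obj, place)]
        return []

    def plan(facts):
        """Stand-in for a reasoning/planning call: derive actions from facts."""
        goals = [f for f in facts if f[0] == "goal"]
        locations = {f[1]: f[2] for f in facts if f[0] == "located_at"}
        actions = []
        for _, _, obj, recipient in goals:
            actions += [("goto", locations[obj]), ("pickup", obj),
                        ("goto", recipient), ("handover", obj)]
        return actions

    print(plan(parse_utterance("Bring me the cup from the kitchen")))
    # -> [('goto', 'kitchen'), ('pickup', 'cup'), ('goto', 'user'), ('handover', 'cup')]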

93 citations


Cites methods from "STAIR: Hardware and Software Archit..."

  • ...Categories and Subject Descriptors I.2 [Computing Methodologies]: Artificial Intelligence General Terms Design, Experimentation Keywords Human-robot interaction, Cognitive robotics, Modeling natural language, Knowledge representation...


References
Proceedings Article
01 Jul 2003
TL;DR: Current usage of Player and Stage is reviewed, and some interesting research opportunities opened up by this infrastructure are identified.
Abstract: This paper describes the Player/Stage software tools applied to multi-robot, distributed-robot and sensor network systems. Player is a robot device server that provides network transparent robot control. Player seeks to constrain controller design as little as possible; it is device independent, non-locking and language- and style-neutral. Stage is a lightweight, highly configurable robot simulator that supports large populations. Player/Stage is a community Free Software project. Current usage of Player and Stage is reviewed, and some interesting research opportunities opened up by this infrastructure are identified.
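
Player's key idea is that robot hardware is exposed by a network server and programs talk to it through device proxies. The sketch below illustrates that client/proxy pattern only; the class and method names are loosely inspired by Player's interfaces, but this is not the real Player client library or wire protocol.

    # Generic client/device-proxy sketch in the spirit of a Player-style server.
    # Not the real Player API or protocol; the classes here are placeholders.
    class RobotClient:
        def __init__(self, host="localhost", port=6665):
            # A real client would open a TCP connection to the device server here
            # (6665 is Player's default port).
            self.host, self.port = host, port

    class Position2dProxy:
        """Proxy for a mobile base exposed by the device server."""
        def __init__(self, client, index=0):
            self.client, self.index = client, index
            self.x = self.y = self.yaw = 0.0   # would be updated from server telemetry

        def set_speed(self, forward, turn):
            # A real proxy would serialize the command and send it to the server.
            print(f"cmd -> position2d:{self.index} v={forward:.2f} m/s w={turn:.2f} rad/s")

    base = Position2dProxy(RobotClient())
    base.set_speed(0.3, 0.0)   # drive forward at 0.3 m/s

Because the controller only sees proxies, it does not care whether the server is driving real hardware or a Stage simulation, which is what makes the control network-transparent and device-independent.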

1,628 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: The approach exhibits excellent recognition performance, outperforming several state-of-the-art systems on a variety of image datasets covering many different object categories, and its performance constitutes a suggestive plausibility proof for a class of feedforward models of object recognition in cortex.
Abstract: We introduce a novel set of features for robust object recognition. Each element of this set is a complex feature obtained by combining position- and scale-tolerant edge-detectors over neighboring positions and multiple orientations. Our system's architecture is motivated by a quantitative model of visual cortex. We show that our approach exhibits excellent recognition performance and outperforms several state-of-the-art systems on a variety of image datasets including many different object categories. We also demonstrate that our system is able to learn from very few examples. The performance of the approach constitutes a suggestive plausibility proof for a class of feedforward models of object recognition in cortex.
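
The features described here amount to oriented, edge-like filter responses followed by local max-pooling over position and neighboring scales (the S1/C1 stages of an HMAX-style model). The code below is a deliberately simplified numpy/scipy sketch of that pooling idea, not the authors' implementation.

    # Simplified sketch of HMAX-style S1/C1 features: oriented filtering followed by
    # max-pooling over neighboring scales and local positions. Illustrative only.
    import numpy as np
    from scipy.ndimage import convolve, maximum_filter

    def gabor_kernel(size, wavelength, theta, sigma):
        """Real part of a Gabor filter at orientation theta (radians)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
        return g - g.mean()   # zero mean, so flat image regions respond weakly

    def c1_features(image, orientations=4, scales=(7, 9), pool=8):
        """Return one pooled response map per orientation."""
        maps = []
        for i in range(orientations):
            theta = i * np.pi / orientations
            # S1: filter at two neighboring scales, keep the response magnitude.
            s1 = [np.abs(convolve(image, gabor_kernel(s, s / 2.0, theta, s / 3.0)))
                  for s in scales]
            # C1: max over the two scales, then max-pool over local positions.
            c1 = maximum_filter(np.maximum(*s1), size=pool)[::pool, ::pool]
            maps.append(c1)
        return np.stack(maps)

    # Usage on a random stand-in for a 64 x 64 grayscale image:
    print(c1_features(np.random.rand(64, 64)).shape)   # -> (4, 8, 8)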

969 citations

Proceedings ArticleDOI
16 May 1998
TL;DR: This paper presents VFH+, an enhanced version of the vector field histogram (VFH) method developed by Borenstein and Koren (1991) for real-time mobile robot obstacle avoidance, offering several improvements that result in smoother robot trajectories and greater reliability.
Abstract: This paper presents further improvements on the earlier vector field histogram (VFH) method developed by Borenstein-Koren (1991) for real-time mobile robot obstacle avoidance. The enhanced method, called VFH+, offers several improvements that result in smoother robot trajectories and greater reliability. VFH+ reduces some of the parameter tuning of the original VFH method by explicitly compensating for the robot width. Also added in VFH+ is a better approximation of the mobile robot trajectory, which results in higher reliability.
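
The core of the VFH family is a polar histogram of obstacle density around the robot: sectors whose density exceeds a threshold are treated as blocked, and the robot steers toward the admissible sector closest to the goal direction. The sketch below shows that selection step in a simplified form; the actual VFH+ additionally compensates for robot width, applies hysteresis, and models the trajectory.

    # Simplified VFH-style steering: build a polar obstacle histogram from range
    # readings and pick the free sector whose center is closest to the goal direction.
    import numpy as np

    def choose_heading(ranges, angles, goal_angle, sectors=36, max_range=2.0,
                       density_threshold=0.5):
        """ranges/angles: 1-D arrays of laser readings (m, rad). Returns a heading in rad."""
        hist = np.zeros(sectors)
        width = 2 * np.pi / sectors
        for r, a in zip(ranges, angles):
            if r < max_range:
                # Nearer obstacles contribute more density to their sector.
                sector = int(((a + np.pi) % (2 * np.pi)) / width)
                hist[sector] += (max_range - r) / max_range

        centers = -np.pi + width * (np.arange(sectors) + 0.5)
        free = hist < density_threshold
        if not free.any():
            return None                      # no admissible direction: stop
        # Among free sectors, pick the one closest (in wrapped angle) to the goal.
        diff = np.abs(np.angle(np.exp(1j * (centers - goal_angle))))
        diff[~free] = np.inf
        return centers[int(np.argmin(diff))]

    # Example: an obstacle straight ahead pushes the chosen heading off to one side.
    angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
    ranges = np.full(360, 5.0)
    ranges[175:185] = 0.5                    # obstacle spanning roughly -5 to +4 degrees
    print(choose_heading(ranges, angles, goal_angle=0.0))   # -> about -0.26 rad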

693 citations


"STAIR: Hardware and Software Archit..." refers methods in this paper

  • ...Our implementation used a Voronoi-based global planner and VFH+ (Ulrich & Borenstein 1998) to avoid local obstacles....


Proceedings ArticleDOI
27 Oct 2003
TL;DR: This paper describes the authors' open-source robot control software, the Carnegie Mellon Navigation (CARMEN) Toolkit, whose developers chose not to adopt strict software standards but to focus instead on good design practices.
Abstract: In this paper we describe our open-source robot control software, the Carnegie Mellon Navigation (CARMEN) Toolkit. The ultimate goals of CARMEN are to lower the barrier to implementing new algorithms on real and simulated robots and to facilitate sharing of research and algorithms between different institutions. In order for CARMEN to be as inclusive of various research approaches as possible, we have chosen not to adopt strict software standards, but to instead focus on good design practices. This paper outlines the lessons we have learned in developing these practices.

401 citations

Proceedings Article
04 Dec 2006
TL;DR: This work presents a learning algorithm that neither requires, nor tries to build, a 3-d model of the object; instead, it predicts, directly as a function of the images, a point at which to grasp the object.
Abstract: We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.
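
The idea of predicting a grasp point directly from images can be illustrated with a toy sliding-window scorer: a learned model rates small patches on "graspability" and the best-scoring patch gives the grasp point. The hand-picked features and logistic-regression stand-in below are placeholders for illustration, not the features or model of Saxena et al.

    # Toy illustration of image-based grasp-point prediction: score patches with a
    # learned model and return the center of the best patch. The features and the
    # random weights are placeholders, not the trained model from the paper.
    import numpy as np

    def patch_features(patch):
        # Stand-in features: mean intensity, variance, edge energy, and a bias term.
        gy, gx = np.gradient(patch.astype(float))
        return np.array([patch.mean(), patch.var(), np.mean(gx**2 + gy**2), 1.0])

    def predict_grasp_point(image, weights, patch=16, stride=8):
        """Slide a window over the image; return ((row, col), score) of the best patch."""
        best_score, best_rc = -np.inf, None
        for r in range(0, image.shape[0] - patch + 1, stride):
            for c in range(0, image.shape[1] - patch + 1, stride):
                feats = patch_features(image[r:r + patch, c:c + patch])
                score = 1.0 / (1.0 + np.exp(-weights @ feats))   # logistic score
                if score > best_score:
                    best_score, best_rc = score, (r + patch // 2, c + patch // 2)
        return best_rc, best_score

    # Usage with random placeholder weights; real weights would come from supervised
    # training on labelled synthetic images, as the abstract describes.
    rng = np.random.default_rng(0)
    print(predict_grasp_point(rng.random((64, 64)), rng.normal(size=4)))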

159 citations


"STAIR: Hardware and Software Archit..." refers methods in this paper

  • ...Grasping To pick up the object, the robot used the grasping algorithm developed by (Saxena et al. 2006a; 2006b)....
