TL;DR: The general system architecture is introduced, some results regarding the hybrid reasoning and planning used in RACE are sketched in detail, and instances of learning from the experiences of real robot task execution are presented.
Abstract: This paper reports on the aims, the approach, and the results of the European project RACE. The project aim was to enhance the behavior of an autonomous robot by having the robot learn from conceptualized experiences of previous performance, based on initial models of the domain and its own actions in it. This paper introduces the general system architecture; it then sketches some results in detail regarding hybrid reasoning and planning used in RACE, and instances of learning from the experiences of real robot task execution. Enhancement of robot competence is operationalized in terms of performance quality and description length of the robot instructions, and such enhancement is shown to result from the RACE system.
01 Jan 2014
TL;DR: This paper proposes Execution Knowledge that encodes the connection between planning models and the actual actions and observations of a given physical system, and presents an execution monitoring framework in which this knowledge captures the expectations about physical plan execution.
Abstract: Despite the progress made in planning and robotics, autonomous plan execution on a robot remains challenging. One of the problems is that (classical) planners use abstract models which are disconnected from the sensor and actuation information available during execution. This connection is typically created in a non-systematic way by some system-specific execution software. In this paper we propose to explicitly represent Execution Knowledge that encodes the connection between planning models and the actual actions and observations for a given physical system. We present an execution monitoring framework in which Execution Knowledge captures the expectations about physical plan execution. A violation of these expectations indicates an execution failure.
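The expectation-violation idea above can be sketched in a few lines. This is a minimal illustration, not the paper's framework: the `Expectation` and `Action` structures and the `gripper_force` sensor key are hypothetical names chosen for the example.

```python
# Minimal sketch of expectation-based execution monitoring (illustrative
# structures only; not the paper's Execution Knowledge representation).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Expectation:
    """Links an abstract planning action to a check over sensed state."""
    description: str
    check: Callable[[Dict[str, float]], bool]

@dataclass
class Action:
    name: str
    expectations: List[Expectation] = field(default_factory=list)

def monitor_step(action: Action, sensed_state: Dict[str, float]) -> List[str]:
    """Return descriptions of violated expectations (empty list = success)."""
    return [e.description for e in action.expectations
            if not e.check(sensed_state)]

# Usage: a hypothetical 'grasp' action expects the gripper to report contact.
grasp = Action("grasp_cup", [
    Expectation("gripper closed on object",
                lambda s: s.get("gripper_force", 0.0) > 0.5),
])
violations = monitor_step(grasp, {"gripper_force": 0.1})
# A non-empty violation list flags an execution failure to the system.
```

The point of the sketch is the separation of concerns: the planner only knows the abstract action, while the expectations carry the sensor-level knowledge needed to judge whether execution actually succeeded.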
TL;DR: It is shown that one's acquaintances, one's immediate neighbors in the acquaintance network, are far from being a random sample of the population, and that this biases the numbers of neighbors two and more steps away.
Abstract: Recent work has demonstrated that many social networks, and indeed many networks of other types also, have broad distributions of vertex degree. Here we show that this has a substantial impact on the shape of ego-centered networks, i.e., sets of network vertices that are within a given distance of a specified central vertex, the ego. This in turn affects concepts and methods based on ego-centered networks, such as snowball sampling and the "ripple effect". In particular, we argue that one's acquaintances, one's immediate neighbors in the acquaintance network, are far from being a random sample of the population, and that this biases the numbers of neighbors two and more steps away. We demonstrate this concept using data drawn from academic collaboration networks, for which, as we show, current simple theories for the typical size of ego-centered networks give numbers that differ greatly from those measured in reality. We present an improved theoretical model which gives significantly better results.
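The bias described above can be demonstrated numerically: averaged over edges, a neighbor's degree exceeds the overall mean degree whenever degrees vary, because a random edge preferentially lands on high-degree vertices. The toy graph below is an illustration, not the paper's collaboration data.

```python
# Illustrative sketch: mean neighbor degree exceeds mean degree in a
# network with unequal degrees (toy "hub" graph, not the paper's data).
from collections import defaultdict

edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]  # vertex 0 is a hub
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

degrees = {v: len(ns) for v, ns in adj.items()}
mean_degree = sum(degrees.values()) / len(degrees)

# Mean degree of a neighbor, averaged over all (vertex, neighbor) pairs:
pairs = [(v, n) for v in adj for n in adj[v]]
mean_neighbor_degree = sum(degrees[n] for _, n in pairs) / len(pairs)

print(mean_degree, mean_neighbor_degree)  # 2.0 vs 2.6
```

Because the number of second neighbors depends on the degrees of first neighbors, this bias is exactly what inflates naive estimates of ego-centered network sizes.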
TL;DR: Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, as well as perform complex tasks.
Abstract: This paper presents an artificial cognitive system tightly integrating object perception and manipulation for assistive robotics. This is necessary for assistive robots, not only to perform manipulation tasks in a reasonable amount of time and in an appropriate manner, but also to robustly adapt to new environments by handling new objects. In particular, this system includes perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. To achieve these goals, it is critical to detect, track and recognize objects in the environment as well as to conceptualize experiences and learn novel object categories in an open-ended manner, based on human–robot interaction. Interaction capabilities were developed to enable human users to teach new object categories and instruct the robot to perform complex tasks. A naive Bayes learning approach with a Bag-of-Words object representation is used to acquire and refine object category models. Perceptual memory is used to store object experiences, the feature dictionary and object category models. Working memory is employed to support communication between the different modules of the architecture. A reactive planning approach is used to carry out complex tasks. To examine the performance of the proposed architecture, a quantitative evaluation and a qualitative analysis are carried out. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, as well as perform complex tasks.
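The combination of naive Bayes learning with a Bag-of-Words representation can be sketched as follows. This is a generic, toy implementation under stated assumptions (precomputed visual-word histograms, Laplace smoothing); the class name, vocabulary size, and category labels are all hypothetical, not taken from the paper.

```python
# Minimal sketch: naive Bayes over Bag-of-Words histograms, with
# incremental updates to support open-ended category learning.
import math
from collections import defaultdict

class NaiveBayesBoW:
    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size
        self.word_counts = defaultdict(lambda: [0] * vocab_size)
        self.doc_counts = defaultdict(int)
        self.total_docs = 0

    def learn(self, category: str, histogram: list):
        """Incrementally update counts for one labeled object view."""
        counts = self.word_counts[category]
        for i, c in enumerate(histogram):
            counts[i] += c
        self.doc_counts[category] += 1
        self.total_docs += 1

    def predict(self, histogram: list) -> str:
        """Return the category with the highest posterior log-probability."""
        best, best_lp = None, -math.inf
        for cat, counts in self.word_counts.items():
            total = sum(counts)
            lp = math.log(self.doc_counts[cat] / self.total_docs)  # prior
            for i, c in enumerate(histogram):
                if c:  # Laplace-smoothed word likelihood
                    p = (counts[i] + 1) / (total + self.vocab_size)
                    lp += c * math.log(p)
            if lp > best_lp:
                best, best_lp = cat, lp
        return best

nb = NaiveBayesBoW(vocab_size=4)
nb.learn("mug",   [5, 1, 0, 0])   # labels supplied by a human teacher
nb.learn("plate", [0, 0, 4, 2])
print(nb.predict([3, 1, 0, 0]))   # -> "mug"
```

Because `learn` only accumulates counts, new categories can be introduced at any time simply by calling it with a previously unseen label, which matches the incremental, user-mediated learning style described above.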
TL;DR: An object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario that integrates detection, tracking, learning and recognition of tabletop objects and the Point Cloud Library is used in nearly all modules.
Abstract: This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such a system. (This paper is a revised and extended version of Oliveira et al. (2014).) Highlights: We describe an object perception and perceptual learning system. The system is able to detect, track and recognize tabletop objects. The system learns novel object categories in an open-ended fashion. The Point Cloud Library is used in nearly all modules of the system. The system was developed and used in the European project RACE.
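The dual-memory split can be sketched with two toy data structures. These are hypothetical stand-ins for illustration only (including the prototype-averaging rule and the `drinking-vessel` fact), not the RACE implementation: the perceptual memory accumulates object experiences and derived category models, while the semantic memory holds conceptual knowledge.

```python
# Illustrative dual-memory sketch: perceptual memory stores experiences
# and category models; semantic memory stores conceptual facts.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PerceptualMemory:
    """Central store for object experiences and learned category models."""
    experiences: List[Tuple[str, list]] = field(default_factory=list)
    category_models: Dict[str, list] = field(default_factory=dict)

    def store_experience(self, label: str, descriptor: list):
        self.experiences.append((label, descriptor))
        # Toy category model: mean of all descriptors seen for this label.
        views = [d for l, d in self.experiences if l == label]
        self.category_models[label] = [sum(c) / len(views)
                                       for c in zip(*views)]

@dataclass
class SemanticMemory:
    """Conceptual knowledge about categories (hypothetical example fact)."""
    facts: Dict[str, str] = field(default_factory=dict)

pm = PerceptualMemory()
pm.store_experience("mug", [1.0, 0.0])
pm.store_experience("mug", [0.8, 0.2])
sm = SemanticMemory()
sm.facts["mug"] = "drinking-vessel"
print(pm.category_models["mug"])  # prototype updated after each experience
```

Keeping the two stores separate reflects the memory-requirements analysis mentioned above: perceptual data is bulky and updated per experience, while semantic facts are compact and change rarely.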
TL;DR: An efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner is presented; the system is able to interact with human users and learn new object categories continuously over time.
Abstract: 3D object detection and recognition is increasingly used for manipulation and navigation tasks in service robots. It involves segmenting the objects present in a scene, estimating a feature descriptor for the object view and, finally, recognizing the object view by comparing it to the known object categories. This paper presents an efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner. In this paper, “open-ended” implies that the set of object categories to be learned is not known in advance. The training instances are extracted from on-line experiences of a robot, and thus become gradually available over time, rather than at the beginning of the learning process. This paper focuses on two state-of-the-art questions: (1) How to automatically detect, conceptualize and recognize objects in 3D scenes in an open-ended manner? (2) How to acquire and use high-level knowledge obtained from the interaction with human users, namely when they provide category labels, in order to improve the system performance? This approach starts with a pre-processing step to remove irrelevant data and prepare a suitable point cloud for the subsequent processing. Clustering is then applied to detect object candidates, and object views are described based on a 3D shape descriptor called spin-image. Finally, a nearest-neighbor classification rule is used to predict the categories of the detected objects. A leave-one-out cross validation algorithm is used to compute precision and recall, in a classical off-line evaluation setting, for different system parameters. Also, an on-line evaluation protocol is used to assess the performance of the system in an open-ended setting. Results show that the proposed system is able to interact with human users, learning new object categories continuously over time.
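The final classification step above can be sketched as a nearest-neighbor rule over view descriptors. The toy two-dimensional vectors below merely stand in for spin-image histograms, and the function and label names are hypothetical.

```python
# Sketch of the nearest-neighbor classification rule over object-view
# descriptors (toy vectors standing in for spin-image histograms).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_category(view, memory):
    """memory: list of (descriptor, category_label) from prior experiences."""
    _, label = min(((euclidean(view, d), cat) for d, cat in memory),
                   key=lambda t: t[0])
    return label

memory = [([1.0, 0.0], "bottle"), ([0.0, 1.0], "bowl")]
# New labels can be appended at any time -> open-ended category set.
memory.append(([0.5, 0.5], "cup"))
print(predict_category([0.9, 0.2], memory))  # -> "bottle"
```

Because the "model" is just the stored instances, adding a category requires no retraining, which is what makes the instance-based rule a natural fit for the open-ended setting described above.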
TL;DR: Problems in different research areas related to mobile manipulation are outlined from the cognitive perspective, recently published works and state-of-the-art approaches to address these problems are reviewed, and open problems to be solved are discussed.
Abstract: Service robots are expected to play an important role in our daily lives as our companions in home and work environments in the near future. An important requirement for fulfilling this expectation is to equip robots with skills to perform everyday manipulation tasks, the success of which is crucial for most home chores, such as cooking, cleaning, and shopping. Robots have been used successfully for manipulation tasks in well-structured and controlled factory environments for decades. Designing skills for robots working in uncontrolled human environments raises many potential challenges in various subdisciplines, such as computer vision, automated planning, and human-robot interaction. In spite of the recent progress in these fields, there are still challenges to tackle. This article outlines problems in different research areas related to mobile manipulation from the cognitive perspective, reviews recently published works and the state-of-the-art approaches to address these problems, and discusses open problems to be solved to realize robot assistants that can be used in manipulation tasks in unstructured human environments.