Author

Stefan Konecny

Bio: Stefan Konecny is an academic researcher from Örebro University. The author has contributed to research on the topics Robot and Hierarchical task network. The author has an h-index of 3 and has co-authored 3 publications receiving 65 citations.

Papers
Proceedings Article
15 Mar 2013
TL;DR: The architecture and knowledge-representation framework for a service robot being developed in the EU project RACE is described, and examples illustrating how learning from experiences will be achieved are presented.
Abstract: One way to improve the robustness and flexibility of robot performance is to let the robot learn from its experiences. In this paper, we describe the architecture and knowledge-representation framework for a service robot being developed in the EU project RACE, and present examples illustrating how learning from experiences will be achieved. As a unique innovative feature, the framework combines memory records of low-level robot activities with ontology-based high-level semantic descriptions.

37 citations
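
The framework's central idea, pairing low-level activity records with ontology-based semantics, can be pictured with a small data structure. The following is a minimal illustrative sketch; the names (ActivityRecord, Experience, annotate) are hypothetical and not the RACE framework's actual API.

from dataclasses import dataclass, field

@dataclass
class ActivityRecord:
    """A low-level memory record of one robot activity."""
    start: float        # timestamp [s]
    end: float
    action: str         # e.g. "move_base", "grasp"
    sensor_data: dict   # raw readings logged during execution

@dataclass
class Experience:
    """Links low-level activity records to high-level ontology concepts."""
    records: list = field(default_factory=list)
    concepts: list = field(default_factory=list)  # e.g. "ServeCoffeeTask"

    def annotate(self, concept: str) -> None:
        # Attach an ontology-based semantic description, so later
        # learning can generalize over similar recorded episodes.
        self.concepts.append(concept)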

Proceedings Article
01 Jan 2014
TL;DR: It is shown that the combined use of causal, temporal and categorical knowledge allows the robot to detect failures even when the effects of actions are not directly observable.
Abstract: Robots are expected to carry out complex plans in real world environments. This requires the robot to track the progress of plan execution and detect failures which may occur. Planners use very abstract world models to generate plans. Additional causal, temporal, and categorical knowledge about the execution, which is not included in the planner's model, is often available. Can we use this knowledge to increase robustness of execution and provide early failure detection? We propose to use a dedicated Execution Model to monitor the executed plan based on runtime observations and rich execution knowledge. We show that the combined use of causal, temporal and categorical knowledge allows the robot to detect failures even when the effects of actions are not directly observable. A dedicated Execution Model also introduces a degree of modularity, since the platform- and execution-specific knowledge does not need to be encoded into the planner.

18 citations
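
How causal and temporal knowledge combine for early failure detection can be sketched in a few lines. This is an illustrative toy, not the paper's Execution Model; all names and interfaces here are assumptions.

import time

class ExecutionMonitor:
    """Toy monitor: combines causal knowledge (expected effects) with
    temporal knowledge (deadlines) to detect failures, even when a
    failure itself is never directly observed."""

    def __init__(self, expected_effects, deadlines):
        self.expected_effects = expected_effects  # action -> observable predicate
        self.deadlines = deadlines                # action -> max duration [s]

    def check(self, action, started_at, observations):
        if self.expected_effects[action] in observations:
            return "ok"       # causal: the expected effect holds
        if time.time() - started_at > self.deadlines[action]:
            return "failure"  # temporal: effect overdue -> inferred failure
        return "pending"

# Example: success of "open_door" is only visible through a door sensor.
monitor = ExecutionMonitor({"open_door": "door_open"}, {"open_door": 10.0})
status = monitor.check("open_door", started_at=time.time(), observations=set())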

Proceedings ArticleDOI
17 Dec 2015
TL;DR: An integrated system uses a physics-based simulation to predict robot action results and durations, combined with a Hierarchical Task Network (HTN) planner and semantic execution monitoring; it improves on state-of-the-art AI plan-based systems by feeding simulated prediction results back into the execution system.
Abstract: Real-world robotic systems have to perform reliably in uncertain and dynamic environments. State-of-the-art cognitive robotic systems use an abstract symbolic representation of the real world for high-level reasoning. Some aspects of the world, such as object dynamics, are inherently difficult to capture in an abstract symbolic form, yet they influence whether the executed action will succeed or fail. This paper presents an integrated system that uses a physics-based simulation to predict robot action results and durations, combined with a Hierarchical Task Network (HTN) planner and semantic execution monitoring. We describe a fully integrated system in which a Semantic Execution Monitor (SEM) uses information from the planning domain to perform functional imagination. Based on information obtained from functional imagination, the robot control system decides whether it is necessary to adapt the plan currently being executed. As a proof of concept, we demonstrate a PR2 able to carry tall objects on a tray without the objects toppling. Our approach achieves this by simulating robot and object dynamics. A validation shows that robot action results in simulation can be transferred to the real world. The system improves on state-of-the-art AI plan-based systems by feeding simulated prediction results back into the execution system.

11 citations
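
The simulate-then-act loop ("functional imagination") reduces to: predict each action in physics simulation, and repair the plan before execution if the prediction fails. A minimal sketch under assumed interfaces; simulate_action, execute_action and adapt_plan are placeholder callables, not the paper's API.

def execute_with_imagination(plan, simulate_action, execute_action, adapt_plan):
    """Run a plan, 'imagining' each action in simulation before acting.

    simulate_action(action) -> (predicted_success, predicted_duration)
    """
    while plan:
        action = plan[0]
        ok, duration = simulate_action(action)  # physics-based prediction
        if not ok:
            # Predicted failure (e.g. a carried object would topple):
            # adapt the plan before committing to the real world.
            plan = adapt_plan(plan, action)
            continue
        execute_action(action)
        plan = plan[1:]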


Cited by
Journal ArticleDOI
TL;DR: A global overview of deliberation functions in robotics is presented and the main characteristics, design choices and constraints of these functions are discussed.
Abstract: Autonomous robots facing a diversity of open environments and performing a variety of tasks and interactions need explicit deliberation in order to fulfill their missions. Deliberation is meant to endow a robotic system with extended, more adaptable and robust functionalities, as well as reduce its deployment cost. The ambition of this survey is to present a global overview of deliberation functions in robotics and to discuss the state of the art in this area. The following five deliberation functions are identified and analyzed: planning, acting, monitoring, observing, and learning. The paper introduces a global perspective on these deliberation functions and discusses their main characteristics, design choices and constraints. The reviewed contributions are discussed with respect to this perspective. The survey focuses as much as possible on papers with a clear robotics content and with a concern on integrating several deliberation functions.

229 citations

Journal ArticleDOI
TL;DR: This paper advocates a change in focus to actors as the primary topic of investigation, and discusses open problems and research directions toward that objective in knowledge representations, model acquisition and verification, synthesis and refinement, monitoring, goal reasoning, and integration.
Abstract: Planning is motivated by acting. Most of the existing work on automated planning underestimates the reasoning and deliberation needed for acting; it is instead biased towards path-finding methods in a compactly specified state-transition system. Researchers in this AI field have developed many planners, but very few actors. We believe this is one of the main causes of the relatively low deployment of automated planning applications. In this paper, we advocate a change in focus to actors as the primary topic of investigation. Actors are not mere plan executors: they may use planning and other deliberation tools, before and during acting. This change in focus entails two interconnected principles: a hierarchical structure to integrate the actor's deliberation functions, and continual online planning and reasoning throughout the acting process. In the paper, we discuss open problems and research directions toward that objective in knowledge representations, model acquisition and verification, synthesis and refinement, monitoring, goal reasoning, and integration.

128 citations

Journal ArticleDOI
TL;DR: This paper presents the design guidelines of the cognitive architecture and its main functionalities, and outlines the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting.
Abstract: This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture where the visual representation of the objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture, its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores the objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.

69 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, as well as perform complex tasks.
Abstract: This paper presents an artificial cognitive system tightly integrating object perception and manipulation for assistive robotics. This is necessary for assistive robots, not only to perform manipulation tasks in a reasonable amount of time and in an appropriate manner, but also to robustly adapt to new environments by handling new objects. In particular, this system includes perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. To achieve these goals, it is critical to detect, track and recognize objects in the environment as well as to conceptualize experiences and learn novel object categories in an open-ended manner, based on human–robot interaction. Interaction capabilities were developed to enable human users to teach new object categories and instruct the robot to perform complex tasks. A naive Bayes learning approach with a Bag-of-Words object representation is used to acquire and refine object category models. Perceptual memory is used to store object experiences, the feature dictionary and object category models. Working memory is employed to support communication between the different modules of the architecture. A reactive planning approach is used to carry out complex tasks. To examine the performance of the proposed architecture, a quantitative evaluation and a qualitative analysis are carried out. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, as well as perform complex tasks.

58 citations
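
The naive Bayes over Bag-of-Words combination mentioned in the abstract is a standard formulation; it suits open-ended learning because only count statistics must be stored and updated. A minimal illustrative sketch (not the paper's implementation):

import math
from collections import defaultdict

class NaiveBayesBoW:
    """Incremental naive Bayes classifier over Bag-of-Words object views."""

    def __init__(self, vocab_size):
        self.vocab_size = vocab_size
        self.word_counts = defaultdict(lambda: defaultdict(int))  # category -> word -> count
        self.totals = defaultdict(int)    # category -> total word count
        self.examples = defaultdict(int)  # category -> number of views seen

    def learn(self, category, bow):
        # bow: dict mapping visual-word id -> frequency in one object view.
        self.examples[category] += 1
        for word, n in bow.items():
            self.word_counts[category][word] += n
            self.totals[category] += n

    def classify(self, bow):
        def log_posterior(c):
            # Unnormalized log P(c) + log P(view | c), Laplace-smoothed.
            lp = math.log(self.examples[c])
            for word, n in bow.items():
                p = (self.word_counts[c][word] + 1) / (self.totals[c] + self.vocab_size)
                lp += n * math.log(p)
            return lp
        return max(self.examples, key=log_posterior)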

Journal ArticleDOI
TL;DR: An efficient approach is presented that learns and recognizes object categories in an interactive and open-ended manner, interacting with human users and learning new object categories continuously over time.
Abstract: 3D object detection and recognition is increasingly used for manipulation and navigation tasks in service robots. It involves segmenting the objects present in a scene, estimating a feature descriptor for the object view and, finally, recognizing the object view by comparing it to the known object categories. This paper presents an efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner. In this paper, “open-ended” implies that the set of object categories to be learned is not known in advance. The training instances are extracted from on-line experiences of a robot, and thus become gradually available over time, rather than at the beginning of the learning process. This paper focuses on two state-of-the-art questions: (1) How to automatically detect, conceptualize and recognize objects in 3D scenes in an open-ended manner? (2) How to acquire and use high-level knowledge obtained from the interaction with human users, namely when they provide category labels, in order to improve the system performance? This approach starts with a pre-processing step to remove irrelevant data and prepare a suitable point cloud for the subsequent processing. Clustering is then applied to detect object candidates, and object views are described based on a 3D shape descriptor called spin-image. Finally, a nearest-neighbor classification rule is used to predict the categories of the detected objects. A leave-one-out cross validation algorithm is used to compute precision and recall, in a classical off-line evaluation setting, for different system parameters. Also, an on-line evaluation protocol is used to assess the performance of the system in an open-ended setting. Results show that the proposed system is able to interact with human users, learning new object categories continuously over time.

44 citations
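
The nearest-neighbor rule over view descriptors is equally compact. The sketch below assumes each object view is already encoded as a fixed-length vector (e.g. a flattened spin-image); the descriptor computation and the paper's actual pipeline are out of scope.

import numpy as np

class OpenEndedNN:
    """Open-ended nearest-neighbor recognizer over object-view descriptors.
    A human teacher can introduce new categories at any time."""

    def __init__(self):
        self.views = []  # list of (descriptor, category label) pairs

    def teach(self, descriptor, label):
        # A human-provided label adds a new instance (and possibly a brand
        # new category) to the model -- no batch retraining needed.
        self.views.append((np.asarray(descriptor, dtype=float), label))

    def recognize(self, descriptor):
        if not self.views:
            return None  # nothing learned yet
        q = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(q - d) for d, _ in self.views]
        return self.views[int(np.argmin(dists))][1]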