
Showing papers by "Daniele Nardi published in 2016"


Journal ArticleDOI
TL;DR: A fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented, using Delaunay Triangulation to extract a binary mask of the lesion region, without the need of any training stage.

170 citations


Book ChapterDOI
03 Jul 2016
TL;DR: A novel unsupervised dataset summarization algorithm automatically selects from a large dataset the most informative subsets that best describe the original one; this streamlines and speeds up the otherwise extremely time-consuming manual dataset labeling process, while preserving good classification performance.
Abstract: In this paper we present a perception system for agriculture robotics that enables an unmanned ground vehicle (UGV) equipped with a multispectral camera to automatically perform the crop/weed detection and classification tasks in real-time. Our approach exploits a pipeline that includes two different convolutional neural networks (CNNs) applied to the input RGB + near-infrared (NIR) images. A lightweight CNN is used to perform a fast and robust, pixel-wise, binary image segmentation, in order to extract the pixels that represent projections of 3D points belonging to green vegetation. A deeper CNN is then used to classify the extracted pixels between the crop and weed classes. A further important contribution of this work is a novel unsupervised dataset summarization algorithm that automatically selects from a large dataset the most informative subsets that best describe the original one. This streamlines and speeds up the otherwise extremely time-consuming manual dataset labeling process, while preserving good classification performance. Experiments performed on different datasets taken from a real farm robot confirm the effectiveness of our approach.
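The two-stage structure of the pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a simple NDVI threshold stands in for the lightweight segmentation CNN, a toy per-pixel rule stands in for the deeper classification CNN, and all function names are hypothetical.

```python
def segment_vegetation(image, ndvi_threshold=0.4):
    """Stage 1 stand-in for the lightweight CNN: pixel-wise binary
    segmentation of green vegetation via an NDVI threshold,
    NDVI = (NIR - R) / (NIR + R)."""
    mask = []
    for row in image:
        mask.append([])
        for (r, g, b, nir) in row:
            ndvi = (nir - r) / (nir + r) if (nir + r) else 0.0
            mask[-1].append(ndvi > ndvi_threshold)
    return mask

def classify_pixel(pixel):
    """Stage 2 stand-in for the deeper CNN: a toy crop/weed rule."""
    r, g, b, nir = pixel
    return "crop" if g >= 0.5 * nir else "weed"

def crop_weed_pipeline(image):
    """Segment first, then classify only the vegetation pixels."""
    mask = segment_vegetation(image)
    return {(i, j): classify_pixel(px)
            for i, row in enumerate(image)
            for j, px in enumerate(row) if mask[i][j]}
```

A small RGB+NIR image then yields labels only for its vegetation pixels, mirroring how the first stage prunes the search space of the second.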

125 citations


Proceedings Article
09 Jul 2016
TL;DR: A standard linguistic pipeline for semantic parsing is extended toward a form of perceptually informed natural language processing that combines discriminative learning and distributional semantics.
Abstract: Spoken Language Understanding in Interactive Robotics provides computational models of human-machine communication based on the vocal input. However, robots operate in specific environments and the correct interpretation of the spoken sentences depends on the physical, cognitive and linguistic aspects triggered by the operational environment. Grounded language processing should exploit both the physical constraints of the context as well as knowledge assumptions of the robot. These include the subjective perception of the environment that explicitly affects linguistic reasoning. In this work, a standard linguistic pipeline for semantic parsing is extended toward a form of perceptually informed natural language processing that combines discriminative learning and distributional semantics. Empirical results achieve up to a 40% relative error reduction.

53 citations


Book ChapterDOI
30 Jun 2016
TL;DR: A novel approach for object detection and classification based on Convolutional Neural Networks (CNN) designed to be used by NAO robots and is made of two stages: image region segmentation, for reducing the search space, and Deep Learning, for validation.
Abstract: The use of identical robots in the RoboCup Standard Platform League (SPL) made software development the key aspect to achieve good results in competitions. In particular, the visual detection process is crucial for extracting information about the environment. In this paper, we present a novel approach for object detection and classification based on Convolutional Neural Networks (CNN). The approach is designed to be used by NAO robots and is made of two stages: image region segmentation, for reducing the search space, and Deep Learning, for validation. The proposed method can be easily extended to deal with different objects and adapted to be used in other RoboCup leagues. Quantitative experiments have been conducted on a data set of annotated images captured in real conditions from NAO robots in action. The used data set is made available for the community.

47 citations


Book ChapterDOI
03 Oct 2016
TL;DR: This paper presents an extension to Data As Demonstrator for handling controlled dynamics in order to improve the multiple-step prediction capabilities of the learned dynamics models.
Abstract: Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model systems dynamics with analytic models. While data-driven tools offer an alternative to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models with small amount of examples. In this paper we present an extension to Data As Demonstrator for handling controlled dynamics in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
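The Data As Demonstrator idea with controlled dynamics can be sketched in a toy one-dimensional setting. This is an illustrative sketch, not the paper's algorithm: a least-squares linear model x_{t+1} ≈ a·x_t + b·u_t stands in for the learned dynamics model, and corrective (predicted state, control, true next state) pairs are aggregated each epoch; all names are hypothetical.

```python
def fit_linear(data):
    """Least squares for x_next ≈ a*x + b*u (2x2 normal equations)."""
    Sxx = sum(x * x for x, u, y in data)
    Sxu = sum(x * u for x, u, y in data)
    Suu = sum(u * u for x, u, y in data)
    Sxy = sum(x * y for x, u, y in data)
    Suy = sum(u * y for x, u, y in data)
    det = Sxx * Suu - Sxu * Sxu
    return ((Sxy * Suu - Suy * Sxu) / det,
            (Suy * Sxx - Sxy * Sxu) / det)

def dad_with_controls(trajectory, controls, epochs=5):
    """trajectory: observed states x_0..x_T; controls: inputs u_0..u_{T-1}."""
    data = [(trajectory[t], controls[t], trajectory[t + 1])
            for t in range(len(controls))]
    a, b = fit_linear(data)
    for _ in range(epochs):
        x = trajectory[0]
        for t in range(len(controls)):
            # DaD-style correction: from the state the model actually visits,
            # it should have produced the ground-truth next state.
            data.append((x, controls[t], trajectory[t + 1]))
            x = a * x + b * controls[t]   # advance along the model's rollout
        a, b = fit_linear(data)
    return a, b
```

On data generated by a true linear system the loop recovers its coefficients; the point of the aggregation step is to train the model at the states its own multi-step rollouts visit.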

42 citations


Journal ArticleDOI
TL;DR: This paper presents a fully operational prototype system that is able to incrementally and on-line build a rich and specific representation of the environment, and proposes a shift in perspective, allowing non-expert users to shape robot knowledge through human-robot interaction.

30 citations


Book ChapterDOI
24 Oct 2016
TL;DR: This paper describes a procedure based on color segmentation, Histogram of Oriented Gradients, and Convolutional Neural Networks for detecting and classifying road signs and demonstrates the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
Abstract: The use of Computer Vision techniques for the automatic recognition of road signs is fundamental for the development of intelligent vehicles and advanced driver assistance systems. In this paper, we describe a procedure based on color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional Neural Networks (CNN) for detecting and classifying road signs. Detection is sped up by a preprocessing step to reduce the search space, while classification is carried out by using a Deep Learning technique. A quantitative evaluation of the proposed approach has been conducted on the well-known German Traffic Sign data set and on the novel Data set of Italian Traffic Signs (DITS), which is publicly available and contains challenging sequences captured in adverse weather conditions and in an urban scenario at night-time. Experimental results demonstrate the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
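The role of the color-segmentation preprocessing step can be illustrated with a crude sketch: a red-dominance test plus a bounding box that delimits the reduced search region handed to the HOG and CNN stages. This is a hypothetical stand-in, not the procedure used in the paper.

```python
def red_mask(image, ratio=1.5, min_val=60):
    """Keep pixels whose red channel clearly dominates (crude stand-in
    for the paper's color segmentation of red-rimmed signs)."""
    return [[(r >= min_val and r >= ratio * g and r >= ratio * b)
             for (r, g, b) in row] for row in image]

def candidate_region(mask):
    """Bounding box of the masked pixels: the reduced search space
    that the detection stage would scan instead of the full frame."""
    coords = [(i, j) for i, row in enumerate(mask)
              for j, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [i for i, _ in coords]
    cols = [j for _, j in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

Scanning only the returned box rather than the whole image is what "reducing the search space" buys in computational speed.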

26 citations


Book ChapterDOI
30 Jun 2016
TL;DR: This paper describes activities that promote robot competitions in Europe, using and expanding RoboCup concepts and best practices, through two projects funded by the European Commission under its FP7 and Horizon2020 programmes.
Abstract: This paper describes activities that promote robot competitions in Europe, using and expanding RoboCup concepts and best practices, through two projects funded by the European Commission under its FP7 and Horizon2020 programmes. The RoCKIn project ended in December 2015 and its goal was to speed up the progress towards smarter robots through scientific competitions. Two challenges have been selected for the competitions due to their high relevance and impact on Europe's societal and industrial needs: domestic service robots (RoCKIn@Home) and innovative robot applications in industry (RoCKIn@Work). RoCKIn extended the corresponding RoboCup leagues by introducing new and prevailing research topics, such as networking mobile robots with sensors and actuators spread over the environment, in addition to specifying objective scoring and benchmark criteria and methods to assess progress. The European Robotics League (ERL) started recently and includes indoor competitions related to domestic and industrial robots, extending RoCKIn’s rulebooks. Teams participating in the ERL must compete in at least two tournaments per year, which can take place either in a certified test bed (i.e., based on the rulebooks) located in a European laboratory, or as part of a major robot competition event. The scores accumulated by the teams in their best two participations are used to rank them over a year.

21 citations


Book ChapterDOI
01 Jan 2016
TL;DR: This work deals with the problem of effectively binding together the high-level semantic information with the low-level knowledge represented in the metric map by introducing an intermediate grid-based representation.
Abstract: Robots need a suitable representation of the surrounding world to operate in a structured but dynamic environment. State-of-the-art approaches usually rely on a combination of metric and topological maps and require an expert to provide the knowledge to the robot in a suitable format. Therefore, additional symbolic knowledge cannot be easily added to the representation in an incremental manner. This work deals with the problem of effectively binding together the high-level semantic information with the low-level knowledge represented in the metric map by introducing an intermediate grid-based representation. In order to demonstrate its effectiveness, the proposed approach has been experimentally validated on different kinds of environments.

18 citations


Journal ArticleDOI
TL;DR: This paper presents a design methodology together with a support tool aiming to streamline and improve the implementation of dedicated vocal interfaces for robots, and extends the existing vocal interface development framework to target robotic applications.
Abstract: The currently available speech technologies on mobile devices achieve effective performance in terms of both reliability and the language they are able to capture. The availability of performant speech recognition engines may also support the deployment of vocal interfaces in consumer robots. However, the design and implementation of such interfaces still requires significant work. The language processing chain and the domain knowledge must be built for the specific features of the robotic platform, the deployment environment and the tasks to be performed. Hence, such interfaces are currently built in a completely ad hoc way. In this paper, we present a design methodology together with a support tool aiming to streamline and improve the implementation of dedicated vocal interfaces for robots. This work was developed within an experimental project called Speaky for Robots. We extend the existing vocal interface development framework to target robotic applications. The proposed solution is built using a bottom-up approach by refining the language processing chain through the development of vocal interfaces for different robotic platforms and domains. The proposed approach is validated both in experiments involving several research prototypes and in tests involving end-users.

16 citations


Book ChapterDOI
01 Jan 2016
TL;DR: The results of the experiments not only provide the basis for a discussion of the features of the proposed approach, but also highlight the manifold issues that arise in the evaluation of semantic mapping.
Abstract: Robots that are launched in the consumer market need to provide more effective human-robot interaction and, in particular, spoken language interfaces. However, in order to support the execution of high-level commands as they are specified in natural language, a semantic map is required. Such a map is a representation that enables the robot to ground the commands into the actual places and objects located in the environment. In this paper, we present the experimental evaluation of a system specifically designed to build semantically rich maps, through the interaction with the user. The results of the experiments not only provide the basis for a discussion of the features of the proposed approach, but also highlight the manifold issues that arise in the evaluation of semantic mapping.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: A novel approach to coordination within a team of cooperative autonomous robots that need to accomplish a common goal is contributed; it dynamically adapts the underlying task assignment and distributed world representation based on the current state of the environment.
Abstract: In this paper, we address coordination within a team of cooperative autonomous robots that need to accomplish a common goal. Our survey of the vast literature on the subject highlights two directions to further improve the performance of a multi-robot team. In particular, in a dynamic environment, coordination needs to be adapted to the different situations at hand (for example, when there is a dramatic loss of performance due to an unreliable communication network). To this end, we contribute a novel approach for coordinating robots. Such an approach allows a robotic team to exploit environmental knowledge to adapt to various circumstances encountered, enhancing its overall performance. This result is achieved by dynamically adapting the underlying task assignment and distributed world representation, based on the current state of the environment. We demonstrate the effectiveness of our coordination system by applying it to the problem of locating a moving, non-adversarial target. In particular, we report on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment.

Book ChapterDOI
01 Jan 2016
TL;DR: An architectural framework is sketched which enables effective engineering of systems that use contextual knowledge, by including the acquisition, representation, and use of contextual information into a framework for information fusion.
Abstract: Robotics systems need to be robust and adaptable to multiple operational conditions, in order to be deployable in different application domains. Contextual knowledge can be used for achieving greater flexibility and robustness in tackling the main tasks of a robot, namely mission execution, adaptability to environmental conditions, and self-assessment of performance. In this chapter, we review the research work focusing on the acquisition, management, and deployment of contextual information in robotic systems. Our aim is to show that several uses of contextual knowledge (at different representational levels) have been proposed in the literature, regarding many tasks that are typically required for mobile robots. As a result of this survey, we analyze which notions and approaches are applicable to the design and implementation of architectures for information fusion. More specifically, we sketch an architectural framework which enables effective engineering of systems that use contextual knowledge, by including the acquisition, representation, and use of contextual information into a framework for information fusion.

Journal ArticleDOI
31 Jul 2016
TL;DR: In this paper, a semantic grammar with semantic actions is proposed to model typical commands expressed in scenarios that are specific to human service robotics; the resulting re-ranking strategy improves the quality of ASR systems in situated scenarios, i.e., the transcription of robotic commands.
Abstract: Service robotics has been growing significantly in the last years, leading to several research results and to a number of consumer products. One of the essential features of these robotic platforms is the ability to interact with users through natural language. Spoken commands can be processed by a Spoken Language Understanding chain, in order to obtain the desired behavior of the robot. The entry point of such a process is an Automatic Speech Recognition (ASR) module, which provides a list of transcriptions for a given spoken utterance. Although several well-performing ASR engines are available off-the-shelf, they operate in a general-purpose setting. Hence, they may not be well suited to the recognition of utterances given to robots in specific domains. In this work, we propose a practical yet robust strategy to re-rank lists of transcriptions. This approach improves the quality of ASR systems in situated scenarios, i.e., the transcription of robotic commands. The proposed method relies upon evidence derived from a semantic grammar with semantic actions, designed to model typical commands expressed in scenarios that are specific to human service robotics. The outcomes obtained through an experimental evaluation show that the approach is able to effectively outperform the ASR baseline, obtained by selecting the first transcription suggested by the ASR.
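The re-ranking strategy can be sketched as follows. This is an illustrative toy, not the paper's method: a lexicon-coverage score stands in for the semantic grammar with semantic actions, and a fixed linear mixture combines it with the ASR confidence; all names and weights are hypothetical.

```python
VERBS = {"go", "take", "bring"}
OBJECTS = {"kitchen", "book", "cup"}

def grammar_score(text):
    """Toy stand-in for the semantic grammar: fraction of the utterance
    covered by the command lexicon."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in VERBS | OBJECTS)
    return hits / max(len(words), 1)

def rerank(hypotheses, alpha=0.7):
    """Re-rank the ASR n-best list by mixing grammar evidence with the
    recognizer's own confidence; each hypothesis is {'text', 'conf'}."""
    return sorted(hypotheses,
                  key=lambda h: alpha * grammar_score(h["text"])
                                + (1 - alpha) * h["conf"],
                  reverse=True)
```

A grammatical robot command can thus overtake an acoustically higher-scoring but nonsensical transcription, which is exactly the situated-scenario effect the abstract describes.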

Book ChapterDOI
01 Nov 2016
TL;DR: This work analyzes a subset of such variables as possible influencing factors of humans’ Collaboration Attitude in a Symbiotic Autonomy framework, namely: Proxemics setting, Activity Context, and Gender and Height as valuable features of the users.
Abstract: The presence of robots in everyday environments is increasing day by day, and their deployment spans various applications: industrial and working scenarios, health care assistance in public areas or at home. However, robots are not yet comparable to humans in terms of capabilities; hence, in the so-called Symbiotic Autonomy, robots and humans help each other to complete tasks. Therefore, it is interesting to identify the factors that help maximize human-robot collaboration, a new point of view with respect to the HRI literature, one leaning toward social behavior. In this work, we analyze a subset of such variables as possible influencing factors of humans’ Collaboration Attitude in a Symbiotic Autonomy framework, namely: Proxemics setting, Activity Context, and Gender and Height as valuable features of the users. We performed a user study that takes place in everyday environments expressed as activity contexts, such as relaxing and working ones. A statistical analysis of the collected results shows a high dependence of the Collaboration Attitude on different Proxemics settings and Gender.

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter explores the use of competitions to accelerate robotics research and promote science, technology, engineering, and mathematics (STEM) education by arguing that the field of robotics is particularly well suited to innovation through competitions.
Abstract: This chapter explores the use of competitions to accelerate robotics research and promote science, technology, engineering, and mathematics (STEM) education. We argue that the field of robotics is particularly well suited to innovation through competitions. Two broad categories of robot competition are used to frame the discussion: human-inspired competitions and task-based challenges. Human-inspired robot competitions, of which the majority are sports contests, quickly move through platform development to focus on problem solving and test through game play. Task-based challenges attempt to attract participants by presenting a high aim for a robotic system. The contest can then be tuned, as required, to maintain motivation and ensure that progress is made. Three case studies of robot competitions are presented, namely robot soccer, the UAV challenge, and the DARPA (Defense Advanced Research Projects Agency) grand challenges. The case studies serve to explore from the point of view of organizers and participants, the benefits and limitations of competitions, and what makes a good robot competition.

Proceedings ArticleDOI
09 Oct 2016
TL;DR: In this paper, a policy improvement with spatio-temporal affordance maps (π-STAM) algorithm is proposed to learn spatial affordances and generate robot behaviors for human-robot handovers.
Abstract: Human-robot handovers are characterized by high uncertainty and a poorly structured problem, which make them difficult tasks. While machine learning methods have shown promising results, their application to problems with large state dimensionality, such as in the case of humanoid robots, is still limited. Additionally, by using these methods and during the interaction with the human operator, no guarantees can be obtained on the correct interpretation of spatial constraints (e.g., from social rules). In this paper, we present Policy Improvement with Spatio-Temporal Affordance Maps — π-STAM, a novel iterative algorithm to learn spatial affordances and generate robot behaviors. Our goal consists in generating a policy that adapts to the unknown action semantics by using affordances. In this way, while learning to perform a human-robot handover task, we can (1) efficiently generate good policies with few training episodes, and (2) easily encode action semantics and, if available, enforce prior knowledge in it. We experimentally validate our approach both in simulation and on a real NAO robot whose task consists in taking an object from the hands of a human. The obtained results show that our algorithm obtains a good policy while reducing the computational load and time duration of the learning process.

Proceedings Article
01 Jan 2016
TL;DR: This work proposes a practical yet robust strategy to re-rank lists of transcriptions, designed to improve the quality of ASR systems in situated scenarios, i.e., the transcription of robotic commands.
Abstract: Service robotics has been growing significantly in the last years, leading to several research results and to a number of consumer products. One of the essential features of these robotic platforms is the ability to interact with users through natural language. Spoken commands can be processed by a Spoken Language Understanding chain, in order to obtain the desired behavior of the robot. The entry point of such a process is an Automatic Speech Recognition (ASR) module, which provides a list of transcriptions for a given spoken utterance. Although several well-performing ASR engines are available off-the-shelf, they operate in a general-purpose setting. Hence, they may not be well suited to the recognition of utterances given to robots in specific domains. In this work, we propose a practical yet robust strategy to re-rank lists of transcriptions. This approach improves the quality of ASR systems in situated scenarios, i.e., the transcription of robotic commands. The proposed method relies upon evidence derived from a semantic grammar with semantic actions, designed to model typical commands expressed in scenarios that are specific to human service robotics. The outcomes obtained through an experimental evaluation show that the approach is able to effectively outperform the ASR baseline, obtained by selecting the first transcription suggested by the ASR.

01 Jan 2016
TL;DR: The ROVINA project aims at developing autonomous mobile robots to make the monitoring of archaeological sites faster, cheaper, and safer.
Abstract: Monitoring and conservation of archaeological sites are important activities necessary to prevent damage or to perform restoration on cultural heritage. Standard techniques, like mapping and digitizing, are typically used to document the status of such sites. While these tasks are normally accomplished manually by humans, this is not possible when dealing with hard-to-access areas. For example, due to the possibility of structural collapses, underground tunnels like catacombs are considered highly unstable environments. Moreover, they are filled with radioactive radon gas, which limits the presence of people to only a few minutes. The progress recently made in the artificial intelligence and robotics fields has opened new possibilities for mobile robots to be used in locations where humans are not allowed to enter. The ROVINA project aims at developing autonomous mobile robots to make the monitoring of archaeological sites faster, cheaper, and safer. ROVINA will be evaluated on the catacombs of Priscilla (in Rome) and S. Gennaro (in Naples).

Book ChapterDOI
29 Nov 2016
TL;DR: A Spoken Language Understanding chain for the semantic parsing of robotic commands, designed according to a Client/Server architecture is described and a first evaluation of the proposed architecture in the automatic interpretation of commands expressed in Italian for a robot in a Service Robotics domain is reported.
Abstract: Robots operate in specific environments and the correct interpretation of linguistic interactions depends on physical, cognitive and language-dependent aspects triggered by the environment. In this work, we describe a Spoken Language Understanding chain for the semantic parsing of robotic commands, designed according to a Client/Server architecture. This work also reports a first evaluation of the proposed architecture in the automatic interpretation of commands expressed in Italian for a robot in a Service Robotics domain. The experimental results show that the proposed solution can be easily extended to other languages for a robust Spoken Language Understanding in Human-Robot Interaction.

Book ChapterDOI
30 Jun 2016
TL;DR: In this paper, a method based on a combination of Monte Carlo search and data aggregation (MCSDA) is proposed to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team.
Abstract: RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated to previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents’ goals. Together with a better performance, an overall more efficient positioning of the whole team within the field is achieved.
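The MCSDA training loop (train on expert data, generate rollouts with the current policy, aggregate, retrain) can be sketched as follows. This is a toy illustration, not the paper's system: a majority-vote table stands in for the supervised learner, the rollout generator is left to the caller, and all names are hypothetical.

```python
import random
from collections import Counter, defaultdict

def train_policy(data):
    """Supervised-learning stand-in: majority action per discretized state."""
    votes = defaultdict(Counter)
    for state, action in data:
        votes[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

def mcsda(expert_data, rollout, epochs=3, rollouts_per_epoch=5, seed=0):
    """Train on expert demonstrations, then repeatedly aggregate Monte Carlo
    rollouts of the current policy into the dataset and retrain."""
    rng = random.Random(seed)
    data = list(expert_data)
    policy = train_policy(data)
    for _ in range(epochs):
        for _ in range(rollouts_per_epoch):
            data.extend(rollout(policy, rng))   # rollouts aggregated to data
        policy = train_policy(data)
    return policy
```

The caller supplies `rollout(policy, rng)`, which plays the current policy in a simulator (or against logged games) and returns the (state, action) pairs to aggregate; the paper's defender-policy learning follows this train/rollout/aggregate rhythm over multiple epochs and games.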

Posted Content
TL;DR: Policy Improvement with Spatio-Temporal Affordance Maps - π-STAM is presented, a novel iterative algorithm to learn spatial affordances and generate robot behaviors and shows that the algorithm obtains a good policy while reducing the computational load and time duration of the learning process.
Abstract: Human-robot handovers are characterized by high uncertainty and a poorly structured problem, which make them difficult tasks. While machine learning methods have shown promising results, their application to problems with large state dimensionality, such as in the case of humanoid robots, is still limited. Additionally, by using these methods and during the interaction with the human operator, no guarantees can be obtained on the correct interpretation of spatial constraints (e.g., from social rules). In this paper, we present Policy Improvement with Spatio-Temporal Affordance Maps -- $\pi$-STAM, a novel iterative algorithm to learn spatial affordances and generate robot behaviors. Our goal consists in generating a policy that adapts to the unknown action semantics by using affordances. In this way, while learning to perform a human-robot handover task, we can (1) efficiently generate good policies with few training episodes, and (2) easily encode action semantics and, if available, enforce prior knowledge in it. We experimentally validate our approach both in simulation and on a real NAO robot whose task consists in taking an object from the hands of a human. The obtained results show that our algorithm obtains a good policy while reducing the computational load and time duration of the learning process.

01 Jan 2016
TL;DR: This work describes a formalism for representing multi-robot plans using CPN and an algorithm to translate the CPN plan into a Petri Net Plan (PNP), a plan specification language based on Petri Nets that has been widely used for several robotics applications ranging from robotic soccer to search and rescue and service robotics.
Abstract: In recent years, the field of Multi-Robot Systems (MRS) has grown significantly in size and importance. There exist numerous areas where multi-robot systems have been used successfully and, in the majority of them, MRS must execute complex tasks in environments that are dynamic and unpredictable. This has led to the problem of synthesis and monitoring of complex plans that can provide high-level commands to the system, allowing the specification of parallel actions, interruption of tasks in execution, synchronization between robots, and so forth. Petri Nets (PN) [1,2] have recently emerged as a promising approach for modeling either single-robot or multi-robot plans. This approach provides a clear graphical representation for modeling and developing systems which are concurrent, distributed, asynchronous, non-deterministic and/or stochastic. One of the issues of the approaches that use Petri Nets is the space complexity associated with the specification of the plans, which can become very large (i.e., with many graphical elements), especially in the case of multi-robot systems. In this work, we analyse the use of Coloured Petri Nets (CPN) [3] for the creation and validation of multi-robot systems. More specifically, we describe a formalism for representing multi-robot plans using CPN and an algorithm to translate the CPN plan into a Petri Net Plan (PNP) [4]. PNP is a plan specification language based on Petri Nets that has been widely used for several robotics applications ranging from robotic soccer to search and rescue and service robotics. PNPs are based on PNs, and support for multi-robot plans is obtained by specifying the name of the robot or of the role within the description of each action. This feature allows for easy implementation of centralized and distributed plans, but it is suitable only for situations where the number of robots/roles is limited.
CPNs differ from PNs in one significant respect: tokens can be of different types, which are usually called colours. Hence, places in a CPN can contain a multi-set of coloured tokens, and the firing rules associated with transitions depend on such colours. As a consequence, Coloured Petri Nets are equivalent to Petri Nets with respect to descriptive power [7], but provide a more compact plan specification and are particularly well suited for multi-robot plans [6]. The use of CPN for modelling multi-robot plans has the advantage of using coloured tokens to represent different robots/roles, thus improving scalability.
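The compactness argument can be illustrated with a minimal coloured-token firing rule: a single transition serves every robot, with the robot identifier carried as the token colour. This is a toy sketch of the CPN idea, not the formalism of the paper.

```python
from collections import Counter

def fire(marking, transition, colour):
    """Fire `transition` for a token of the given colour (e.g., a robot id).
    marking maps each place to a Counter of coloured tokens;
    transition is an (input_place, output_place) pair."""
    src, dst = transition
    if marking[src][colour] < 1:
        return False               # transition not enabled for this colour
    marking[src][colour] -= 1      # consume the coloured token...
    marking[dst][colour] += 1      # ...and produce it in the output place
    return True
```

A single ("idle", "moving") transition then serves robot1 and robot2 alike, whereas a plain PN plan would need one copy of the transition per robot or role.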

Posted Content
TL;DR: A method based on a combination of Monte Carlo search and data aggregation to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals.
Abstract: RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated to previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals. Together with a better performance, an overall more efficient positioning of the whole team within the field is achieved.

Book ChapterDOI
15 Jun 2016
TL;DR: This work introduces the concepts of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Map (STAM), which encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot.
Abstract: Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concepts of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.
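As an illustration only (the paper's formal definition of STA/STAM is not reproduced here), a spatio-temporal affordance map can be thought of as a lookup from a spatial cell and a time slot to per-action scores; the cells, time slots, actions, and scores below are invented for the example.

```python
class SpatioTemporalAffordanceMap:
    """Toy spatio-temporal affordance map: grid cells store, per time
    slot, a score for each action the environment affords there."""

    def __init__(self):
        self.grid = {}  # (cell, time_slot) -> {action: score}

    def update(self, cell, t, action, score):
        self.grid.setdefault((cell, t), {})[action] = score

    def best_action(self, cell, t):
        scores = self.grid.get((cell, t), {})
        return max(scores, key=scores.get) if scores else None

# The same cell affords different actions at different times,
# capturing the dynamism of the environment's state.
stam = SpatioTemporalAffordanceMap()
stam.update((3, 4), "morning", "clean", 0.9)
stam.update((3, 4), "morning", "deliver", 0.2)
stam.update((3, 4), "evening", "deliver", 0.8)
print(stam.best_action((3, 4), "morning"))  # clean
```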

Proceedings ArticleDOI
TL;DR: In this article, the authors propose a standardization in the representation of semantic maps by defining an easily extensible formalism to be used on top of metric maps of the environments, and describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics.
Abstract: Semantic mapping is the incremental process of "mapping" relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantics of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as of standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Finally, by providing a tool for the construction of semantic map ground truth, we aim to engage the scientific community in acquiring data to populate the dataset.
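A minimal sketch of the general idea of a symbolic layer anchored on a metric map follows; the class names, fields, and example objects are assumptions for illustration and do not reproduce the paper's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    label: str        # symbolic class, e.g. "fridge"
    pose: tuple       # (x, y, theta) expressed in the metric map frame
    properties: dict = field(default_factory=dict)

@dataclass
class SemanticMap:
    """Semantic layer on top of a metric map: every symbolic object
    carries a pose in the metric map's reference frame, so spatial
    queries and symbolic reasoning share one coordinate system."""
    metric_frame: str
    objects: list = field(default_factory=list)

    def query(self, label):
        return [o for o in self.objects if o.label == label]

smap = SemanticMap(metric_frame="map")
smap.objects.append(SemanticObject("fridge", (2.0, 1.5, 0.0), {"openable": True}))
print(len(smap.query("fridge")))  # 1
```

Tying each symbol to a metric pose is what makes such a representation easy to benchmark: a ground-truth map and an estimated map can be compared object by object in the same frame.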

Proceedings Article
29 Mar 2016
TL;DR: This paper focuses on specific contexts that can be embraced within Symbiotic Autonomy: Human Augmented Semantic Mapping, Task Teaching and Social Robotics, and sketches the view on the problem of knowledge acquisition in robotic platforms.
Abstract: Home environments constitute a main target location where to deploy robots, which are expected to help humans in completing their tasks. However, modern robots do not yet meet users' expectations in terms of both knowledge and skills. In this scenario, users can provide robots with knowledge and help them in performing tasks, through a continuous human-robot interaction. This human-robot cooperation setting in shared environments is known as Symbiotic Autonomy or Symbiotic Robotics. In this paper, we address the problem of an effective coexistence of robots and humans, by analyzing the approaches proposed in the literature and by presenting our perspective on the topic. In particular, our focus is on specific contexts that can be embraced within Symbiotic Autonomy: Human Augmented Semantic Mapping, Task Teaching and Social Robotics. Finally, we sketch our view on the problem of knowledge acquisition in robotic platforms by introducing three essential aspects to be addressed: environmental, procedural and social knowledge.

Posted Content
TL;DR: In this paper, the authors introduce the concept of spatiotemporal affordances (STA) and Spatio-Temporal Affordance Map (STAM) to encode action semantics related to the environment.
Abstract: Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concepts of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.

Book ChapterDOI
01 Jan 2016
TL;DR: LU4R - adaptive spoken Language Understanding 4 Robots, a Spoken Language Understanding chain for the semantic interpretation of robotic commands, that is sensitive to the operational environment is presented.
Abstract: Robots operate in specific environments, and the correct interpretation of linguistic interactions depends on physical, cognitive and language-dependent aspects triggered by the environment. In this work, we present LU4R - adaptive spoken Language Understanding 4 Robots, a Spoken Language Understanding chain for the semantic interpretation of robotic commands that is sensitive to the operational environment. The system has been designed according to a Client/Server architecture in order to be easily integrated with the wide variety of robotic platforms.
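A toy sketch of the Client/Server decoupling follows; the request/response format, class names, and the one-rule "grammar" are hypothetical and do not represent LU4R's actual protocol or interpretation chain.

```python
import json

class SLUServer:
    """Stand-in for a spoken-language-understanding service: maps a
    transcribed command to a structured frame. A real deployment would
    expose this over a network socket rather than in-process."""

    def interpret(self, request_json):
        req = json.loads(request_json)
        text = req["hypotheses"][0].lower()
        # Hypothetical toy grammar: "<action> the <object>"
        words = text.split()
        return json.dumps({"action": words[0], "object": words[-1]})

class RobotClient:
    """Robot-side client: ships transcriptions out, gets frames back,
    and stays independent of how interpretation is implemented."""

    def __init__(self, server):
        self.server = server

    def send_command(self, text):
        reply = self.server.interpret(json.dumps({"hypotheses": [text]}))
        return json.loads(reply)

robot = RobotClient(SLUServer())
print(robot.send_command("Take the bottle"))  # {'action': 'take', 'object': 'bottle'}
```

The benefit of this split is the one claimed in the abstract: any platform that can serialize a transcription and parse a frame can use the same understanding service, regardless of its onboard software stack.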