
Showing papers presented at "Robot and Human Interactive Communication in 2016"


Proceedings ArticleDOI
Dirk Rothenbücher, Jamy Li, David Sirkin, Brian Mok, Wendy Ju
01 Aug 2016
TL;DR: A novel method for performing observational field experiments to investigate interactions with driverless cars, which the authors believe contributes a valuable technique for safely acquiring empirical data and insights about driverless vehicle interactions.
Abstract: How will pedestrians and bicyclists interact with autonomous vehicles when there is no human driver? In this paper, we outline a novel method for performing observational field experiments to investigate interactions with driverless cars. We provide a proof-of-concept study (N=67), conducted at a crosswalk and a traffic circle, which applies this method. In the study, participants encountered a vehicle that appeared to have no driver, but which in fact was driven by a human confederate hidden inside. We constructed a car seat costume to conceal the driver, who was specially trained to emulate an autonomous system. Data included video recordings and participant responses to post-interaction questionnaires. Pedestrians who encountered the car reported that they saw no driver, yet they managed interactions smoothly, except when the car misbehaved by moving into the crosswalk just as they were about to cross. This method is the first of its kind, and we believe that it contributes a valuable technique for safely acquiring empirical data and insights about driverless vehicle interactions. These insights can then be used to design vehicle behaviors well in advance of the broad deployment of autonomous technology.

243 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: An overview of the requirements and design of the platform is presented, along with the development process of an interactive application, a report on ERICA's first autonomous public demonstration, and a discussion of the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
Abstract: The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, featuring advanced sensing and speech synthesis technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.

96 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: A novel, object-aware projection technique that allows robots to visualize task information and intentions on physical objects in the environment taking into account the pose and shape of surrounding objects is presented.
Abstract: Trained human co-workers can often easily predict each other's intentions based on prior experience. When collaborating with a robot coworker, however, intentions are hard or impossible to infer. This difficulty of mental introspection makes human-robot collaboration challenging and can lead to dangerous misunderstandings. In this paper, we present a novel, object-aware projection technique that allows robots to visualize task information and intentions on physical objects in the environment. The approach uses modern object tracking methods to display information at specific spatial locations, taking into account the pose and shape of surrounding objects. As a result, a human co-worker can be informed in a timely manner about the safety of the workspace, the site of the next robot manipulation task, and the next subtasks to perform. A preliminary usability study compares the approach to collaboration approaches based on monitors and printed text. The study indicates that, on average, user effectiveness and satisfaction are higher with the projection-based approach.

90 citations
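The core geometric step behind object-aware projection can be illustrated with a short sketch (ours, not the paper's code): mapping a tracked object's 3D position into projector pixel coordinates with a pinhole model, assuming a calibrated projector with intrinsics K and pose R, t in the world frame.

```python
import numpy as np

def project_to_projector(point_world, K, R, t):
    """Map a 3D workspace point (e.g., a tracked object's center) to
    projector pixel coordinates using a pinhole projector model.
    K (3x3 intrinsics) and R, t (projector pose) are assumed calibrated."""
    p_proj = R @ point_world + t        # world frame -> projector frame
    uvw = K @ (p_proj / p_proj[2])      # perspective projection
    return uvw[:2]                      # pixel (u, v) to draw the cue at

# toy usage: highlight an object 1.5 m in front of the projector
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
print(project_to_projector(np.array([0.1, 0.0, 1.5]), K, np.eye(3), np.zeros(3)))
```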


Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper explored the benefits of an affective interaction, as opposed to a more efficient, less error-prone but non-communicative one, in an omelet-making task in which a wide range of participants interacted directly with a humanoid robot assistant.
Abstract: Strategies are necessary to mitigate the impact of unexpected behavior in collaborative robotics, and research to develop solutions is lacking. Our aim here was to explore the benefits of an affective interaction, as opposed to a more efficient, less error-prone but non-communicative one. The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with BERT2, a humanoid robot assistant. The results have significant implications for design: they suggest that efficiency is not the most important aspect of performance for users; a personable, expressive robot was found to be preferable to a more efficient one, despite a considerable trade-off in the time taken to perform the task. Our findings also suggest that a robot exhibiting human-like characteristics may make users reluctant to ‘hurt its feelings’; they may even lie in order to avoid this.

80 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is concluded that lights could be generally used as an effective non-verbal communication modality for mobile robots in the absence of, or as a complement to, other modalities.
Abstract: In order to be successfully integrated into human-populated environments, mobile robots need to express relevant information about their state to the outside world. In particular, animated lights are a promising way to express hidden robot state information such that it is visible at a distance. In this work, we present an online study to evaluate the effect of robot communication through expressive lights on people's understanding of the robot's state and actions. In our study, we use the CoBot mobile service robot with our light interface, designed to express relevant robot information to humans. We evaluate three designed light animations, each in three corresponding scenarios, for a total of nine scenarios. Our results suggest that expressive lights can play a significant role in helping people accurately hypothesize about a mobile robot's state and actions from afar when minimal contextual clues are present. We conclude that lights could be generally used as an effective non-verbal communication modality for mobile robots in the absence of, or as a complement to, other modalities.

58 citations
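As a rough illustration of how such state-expressive light animations might be wired up (the paper's three animation designs are not reproduced here; the states, colors, and timings below are invented for the sketch):

```python
import time

# Hypothetical state-to-animation table; names, colors, and periods are
# illustrative assumptions, not the animations evaluated in the study.
ANIMATIONS = {
    "waiting":  {"color": (255, 255, 0), "period_s": 1.0},  # slow yellow pulse
    "blocked":  {"color": (255, 0, 0),   "period_s": 0.3},  # fast red blink
    "progress": {"color": (0, 255, 0),   "period_s": 2.0},  # calm green blink
}

def play(strip, state, duration_s=5.0):
    """Blink an LED strip according to the robot's current state.
    `strip` is any object exposing set_color(rgb) and off() methods
    (a stand-in interface, not a specific library)."""
    anim = ANIMATIONS[state]
    end = time.time() + duration_s
    while time.time() < end:
        strip.set_color(anim["color"])
        time.sleep(anim["period_s"] / 2)
        strip.off()
        time.sleep(anim["period_s"] / 2)
```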


Proceedings ArticleDOI
26 Aug 2016
TL;DR: Two navigation approaches based on inverse reinforcement learning (IRL) from exemplar situations are presented, implementing two path planners that take social norms into account when navigating towards isolated people.
Abstract: Robot navigation in human environments has drawn researchers' attention in recent years. Robots operating under these circumstances have to take human awareness into consideration for safety and acceptance reasons. Nonetheless, navigation has often been treated as reaching a goal point or avoiding people, without considering the robot engaging a person or a group of people in order to interact with them. This paper presents two navigation approaches based on inverse reinforcement learning (IRL) from exemplar situations. This allows us to implement two path planners that take into account social norms for navigation towards isolated people. For the first planner, we learn an appropriate way to approach a person in an open area without static obstacles, and this information is used to generate the robot's path plan. For the second planner, we learn the weights of a linear combination of continuous functions, which we use to generate a costmap for the approach behavior. This costmap is then combined with others, e.g. a costmap with higher cost around obstacles, and finally a path is generated with Dijkstra's algorithm.

50 citations
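The second planner's pipeline (a weighted costmap combined with an obstacle layer, then Dijkstra's algorithm) can be sketched as follows; the feature maps and weights would come from IRL, and the grid setup here is a simplified assumption:

```python
import heapq
import numpy as np

def combined_costmap(feature_maps, weights, obstacle_map, obstacle_cost=50.0):
    """Weighted sum of IRL-learned feature costmaps plus a layer that
    raises cost around obstacles."""
    cost = sum(w * f for w, f in zip(weights, feature_maps))
    return cost + obstacle_cost * obstacle_map

def dijkstra(cost, start, goal):
    """Shortest path on a 4-connected grid; entering a cell costs cost[cell].
    Assumes the goal is reachable from the start."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < cost.shape[0] and 0 <= v[1] < cost.shape[1]:
                nd = d + cost[v]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```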


Proceedings ArticleDOI
01 Aug 2016
TL;DR: A novel recursive neural network is proposed that uses growing self-organization for the efficient learning of body motion sequences and provides visual assistance to the person performing an exercise by displaying real-time feedback, thus enabling the user to correct inaccurate postures and motion intensity.
Abstract: The correct execution of well-defined movements plays a crucial role in physical rehabilitation and sports. While there is an extensive number of well-established approaches for human action recognition, the task of assessing the quality of actions and providing feedback for correcting inaccurate movements has remained an open issue in the literature. We present a learning-based method for efficiently providing feedback on a set of training movements captured by a depth sensor. We propose a novel recursive neural network that uses growing self-organization for the efficient learning of body motion sequences. The quality of actions is then computed in terms of how much a performed movement matches the correct continuation of a learned sequence. The proposed system provides visual assistance to the person performing an exercise by displaying real-time feedback, thus enabling the user to correct inaccurate postures and motion intensity. We evaluate our approach with a data set containing 3 powerlifting exercises performed by 17 athletes. Experimental results show that our novel architecture outperforms our previous approach for the correct prediction of routines and the detection of mistakes both in a single- and multiple-subject scenario.

45 citations
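The scoring idea, comparing a performed movement against the model's predicted continuation of the learned sequence, can be sketched as below. The growing recursive network itself is omitted; the `predicted` frames stand in for its output, and the kernel width and threshold are assumptions:

```python
import numpy as np

def quality_scores(performed, predicted, sigma=1.0):
    """Per-frame quality in [0, 1]: how closely each performed posture
    matches the predicted continuation (Gaussian of the joint-space error)."""
    err = np.linalg.norm(performed - predicted, axis=1)
    return np.exp(-err ** 2 / (2 * sigma ** 2))

def frames_to_correct(performed, predicted, threshold=0.5):
    """Indices of frames whose quality is too low; these would drive the
    real-time visual feedback shown to the user."""
    q = quality_scores(performed, predicted)
    return np.flatnonzero(q < threshold)

# toy usage: 100 frames x 15 joint angles
predicted = np.random.rand(100, 15)
performed = predicted + 0.1 * np.random.randn(100, 15)
print(frames_to_correct(performed, predicted))
```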


Proceedings ArticleDOI
01 Aug 2016
TL;DR: The design process of “Vyo”, a personal assistant serving as a centralized interface for smart home devices, is described, which included simultaneous iterative development of the robot's morphology, nonverbal behavior and interaction schemas.
Abstract: We describe the design process of “Vyo”, a personal assistant serving as a centralized interface for smart home devices. Building on the concepts of ubiquitous and engaging computing in the domestic environment, we identified five design goals for the home robot: engaging, unobtrusive, device-like, respectful, and reassuring. These goals led our design process, which included simultaneous iterative development of the robot's morphology, nonverbal behavior and interaction schemas. We continued with user-centered design research using puppet prototypes of the robot to assess and refine our design choices. The resulting robot, Vyo, straddles the boundary between a monitoring device and a socially expressive agent, and presents a number of novel design outcomes: The combination of TUI “phicons” with social robotics; gesture-related screen exposure; and a non-anthropomorphic monocular expressive face. We discuss how our design goals are expressed in the elements of the robot's final design.

45 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work proposes verbalization as the process of converting route experiences into natural language, and highlights the importance of varying verbalizations based on user preferences.
Abstract: With a growing number of robots performing autonomously without human intervention, it is difficult to understand what the robots experience along their routes during execution without looking at execution logs. Rather than looking through logs, our goal is for robots to respond to queries in natural language about what they experience and what routes they have chosen. We propose verbalization as the process of converting route experiences into natural language, and highlight the importance of varying verbalizations based on user preferences. We present our verbalization space representing the different dimensions along which verbalizations can be varied, and our algorithm for automatically generating them on our CoBot robot. Then we present our study of how users can request different verbalizations in dialog. Using the study data, we learn a language model to map user dialog to the verbalization space. Finally, we demonstrate the use of the learned model within a dialog system in order for any user to request information about CoBot's route experience at varying levels of detail.

44 citations
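A toy version of the final step, mapping a user's dialog request onto one axis of the verbalization space, might look like the following; the phrases, labels, and single "detail" axis are illustrative stand-ins for the study's learned language model:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples pairing dialog requests with a detail level
# (one dimension of a verbalization space); not the paper's data.
requests = [
    "tell me everything about your route",
    "give me the short version",
    "summarize what happened",
    "describe each corridor you passed through",
]
detail_level = ["detailed", "summary", "summary", "detailed"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(requests, detail_level)

print(model.predict(["just a quick summary please"]))  # -> ['summary']
```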


Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper explores using tablets to render mixed-reality visual environments that support human-robot collaboration for object manipulation, through a mobile interface created on a tablet by integrating real-time vision, 3D graphics, touchscreen interaction, and wireless communication.
Abstract: Although gesture-based input and augmented reality (AR) facilitate intuitive human-robot interactions (HRI), prior implementations have relied on research-grade hardware and software. This paper explores using tablets to render mixed-reality visual environments that support human-robot collaboration for object manipulation. A mobile interface is created on a tablet by integrating real-time vision, 3D graphics, touchscreen interaction, and wireless communication. This mobile interface augments a live video of physical objects in a robot's workspace with corresponding virtual objects that can be manipulated by a user to intuitively command the robot to manipulate the physical objects. By generating the mixed-reality environment on an exocentric view provided by the tablet camera, the interface establishes a common frame of reference for the user and the robot to effectively communicate spatial information for object manipulation. After addressing challenges due to limitations in mobile sensing and computation, the interface is evaluated with participants to examine the performance and user experience with the suggested approach.

44 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: The results show that the erroneous robot triggered more positive emotions but led to lower human performance than the perfect one, in a competitive scenario in which humans and robots solve reasoning tasks and memorize numbers.
Abstract: Perfect memory, strong reasoning abilities and flawless performance are typical cognitive traits associated with robots. In contrast, forgetting and erroneous reasoning are typical cognitive patterns of humans. This discrepancy may fundamentally affect the way robots and humans interact and collaborate together, and it is still little explored today. In this paper, we investigate the effect of differences between erroneous and perfect robots in a competitive scenario in which humans and robots solve reasoning tasks and memorize numbers. Participants are randomly assigned to one of two groups: in the first group they interact with a perfect, flawless robot, while in the second, they interact with a human-like robot with occasional errors and imperfect memorizing abilities. Participants rate attitude, sympathy, and attributes of the robot in a questionnaire, and we measure their task performance. The results show that the erroneous robot triggered more positive emotions but led to lower human performance than the perfect one. Effects of both conditions on groups of students with and without a technical background are reported.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper presents the development and evaluation of a social robot that was created to play a card game with humans, playing the role of a partner and opponent and shows that trust is a multifaceted construct that develops differently for humans and robots.
Abstract: Robots are currently being developed to enter our lives and interact with us in different tasks. For humans to be able to have a positive experience of interaction with such robots, they need to trust them to some degree. In this paper, we present the development and evaluation of a social robot that was created to play a card game with humans, playing the role of a partner and opponent. This type of activity is especially important, since our target group is elderly people - a population that often suffers from social isolation. Moreover, the card game scenario can lead to the development of interesting trust dynamics during the interaction, in which the human that partners with the robot needs to trust it in order to succeed and win the game. The design of the robot's behavior and game dynamics was inspired by previous user-centered design studies in which elderly people played the same game. Our evaluation results show that the levels of trust differ according to the previous knowledge that players have of their partners. Thus, humans seem to significantly increase their trust level towards a robot they already know, whilst maintaining the same level of trust in a human that they also previously knew. Hence, this paper shows that trust is a multifaceted construct that develops differently for humans and robots.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The study results show that the presentation of task-based information in the tablet-based AR interface decreases the mental demand of the industrial robot programmers during the robot control process, but at the same time, the programmers' task completion time increases.
Abstract: Augmented reality (AR) can serve as a tool to provide helpful information in a direct way to industrial robot programmers throughout the teaching process. It seems obvious that AR support eases the programming process and increases the programmer's productivity and programming accuracy. However, additional information can also potentially increase the programmer's perceived workload. To explore the impact of augmented reality on robot teaching, as a first step we have chosen a Sphero robot control scenario and conducted a within-subject user study with 19 professional industrial robot programmers, including novices and experts. We focused on the perceived workload of industrial robot programmers and their task completion time when using a tablet-based AR approach with visualization of task-based information for controlling a robot. Each participant had to execute three typical robot programming tasks: tool center point teaching, trajectory teaching, and overlap teaching. We measured the programmers' workload in the dimensions of mental demand, physical demand, temporal demand, frustration, effort, and performance. The study results show that the presentation of task-based information in the tablet-based AR interface decreases the mental demand of the industrial robot programmers during the robot control process. At the same time, however, the programmers' task completion time increases.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is concluded that the investigated One-Shot PbD approach forms a vital tool for the use of robots in private households and gives important insights into general requirements for robot programming by non-experts.
Abstract: General purpose robots are established tools in a variety of industrial applications. Current household robots, however, are limited in capabilities and purpose. One reason for this is the programming effort required to utilize a general purpose robot. A popular strategy to reduce programming effort is Programming by Demonstration (PbD). Usually PbD requires several tedious demonstrations to teach a robot some behavior or skill. Notably, there are just a few approaches where a single demonstration suffices. Such One-Shot approaches reduce programming effort even further and allow non-experts to quickly and intuitively program a general purpose robot. In this work we evaluate the intuitiveness and robustness (i.e. the dependability when executing non-expert-generated programs) of a One-Shot PbD system. To this end, we carried out a user study with 34 participants. Our participants used kinesthetic programming to complete various tasks with a light-weight robot. We instructed users with standard methods, either by speech, by graphical tutorial or by video. During the study, structured questionnaires and an observation sheet collected user impressions: the effectiveness when solving a task, the efficiency and (mental) effort during programming, the attitude towards the robot, and finally the satisfaction with the entire system. User ratings on all aspects confirm the high intuitiveness of the investigated system. In contrast to other approaches, the study did not depend on specific instructional material to achieve high intuitiveness ratings. Overall, our findings support the advantages of a One-Shot programming system and give important insights into general requirements for robot programming by non-experts. We conclude that the investigated One-Shot PbD approach forms a vital tool for the use of robots in private households.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A baseline convolutional neural network (CNN) structure and image preprocessing methodology for improving facial expression recognition are presented; 20 different CNN models were trained and the performance of each network was verified with test images from five different databases.
Abstract: We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition using CNNs. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experimental results showed that a three-layer structure consisting of a simple convolutional layer and a max pooling layer, with histogram-equalized image input, was the most efficient. We describe the detailed training procedure and analyze the test accuracy results based on extensive observation.
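A minimal sketch of the winning configuration, histogram-equalized input feeding a small conv/max-pool stack, is shown below; the filter counts, input size, and seven-class output are assumptions, not the paper's exact hyperparameters:

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def preprocess(gray_face):
    """Histogram equalization (the best-performing input type), then
    convert to a normalized 1xHxW float tensor."""
    eq = cv2.equalizeHist(gray_face)
    return torch.from_numpy(eq).float().div(255).unsqueeze(0)

class ThreeLayerCNN(nn.Module):
    """Three stages of a simple convolution plus max pooling; sizes here
    are illustrative, not the paper's exact configuration."""
    def __init__(self, n_classes=7, in_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * (in_size // 8) ** 2, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

face = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in face crop
logits = ThreeLayerCNN()(preprocess(face).unsqueeze(0))  # batch of one
```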

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Evidence is provided that data-driven haptic perception can be used to infer relationships between clothing and the human body during robot-assisted dressing, and hidden Markov models using only forces measured at the robot's end effector classified these outcomes with high accuracy.
Abstract: Dressing is an important activity of daily living (ADL) with which many people require assistance due to impairments. Robots have the potential to provide dressing assistance, but physical interactions between clothing and the human body can be complex and difficult to visually observe. We provide evidence that data-driven haptic perception can be used to infer relationships between clothing and the human body during robot-assisted dressing. We conducted a carefully controlled experiment with 12 human participants during which a robot pulled a hospital gown along the length of each person's forearm 30 times. This representative task resulted in one of the following three outcomes: the hand missed the opening to the sleeve; the hand or forearm became caught on the sleeve; or the full forearm successfully entered the sleeve. We found that hidden Markov models (HMMs) using only forces measured at the robot's end effector classified these outcomes with high accuracy. The HMMs' performance generalized well to participants (98.61% accuracy) and velocities (98.61% accuracy) outside of the training data. They also performed well when we limited the force applied by the robot (95.8% accuracy with a 2N threshold), and could predict the outcome early in the process. Despite the lightweight hospital gown, HMMs that used forces in the direction of gravity substantially outperformed those that did not. The best performing HMMs used forces in the direction of motion and the direction of gravity.
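The classification scheme, one HMM per outcome fit on force sequences with a max-likelihood decision, can be sketched with hmmlearn as follows; the number of hidden states and the three force axes are assumptions:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_outcome_models(sequences_by_outcome, n_states=5):
    """Fit one HMM per outcome ('missed', 'caught', 'success') on
    end-effector force sequences, each a (T x 3) array of forces."""
    models = {}
    for outcome, seqs in sequences_by_outcome.items():
        X = np.vstack(seqs)                  # stack all training sequences
        lengths = [len(s) for s in seqs]     # per-sequence lengths for fit()
        models[outcome] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, force_seq):
    """Label a new trial with the outcome whose HMM scores it highest."""
    return max(models, key=lambda o: models[o].score(force_seq))
```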

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This project analyzed in detail the justifications people provide for three types of moral judgments (permissibility, wrongness, and blame) of robot and human agents, and found that people's moral judgments of both agents relied on the same conceptual and justificatory foundation.
Abstract: Robots will eventually perform norm-regulated roles in society (e.g. caregiving), but how will people apply moral norms and judgments to robots? By answering such questions, researchers can inform engineering decisions while also probing the scope of moral cognition. In previous work, we compared people's moral judgments about human and robot agents' behavior in moral dilemmas. We found that robots, compared with humans, were more commonly expected to sacrifice one person for the good of many, and they were blamed more than humans when they refrained from that decision. Thus, people seem to have somewhat different normative expectations of robots than of humans. In the current project we analyzed in detail the justifications people provide for three types of moral judgments (permissibility, wrongness, and blame) of robot and human agents. We found that people's moral judgments of both agents relied on the same conceptual and justificatory foundation: consequences and prohibitions undergirded wrongness judgments; attributions of mental agency undergirded blame judgments. For researchers, this means that people extend moral cognition to nonhuman agents. For designers, this means that robots with credible cognitive capacities will be considered moral agents but perhaps regulated by different moral norms.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Although gender, height and age did not yield significant effects, the results revealed that robot posture has a significant impact on the interpersonal distances in human-robot interactions.
Abstract: In this paper we present a study that investigates human-robot interpersonal distances and the influence of posture, either sitting or standing, on those distances. The study is based on a human approaching a robot and a robot approaching a human, in which the human/robot maintains either a sitting or standing posture while being approached. We collected and analysed data from twenty-two participants, and the results revealed that robot posture has a significant impact on the interpersonal distances in human-robot interactions. Previous interactions with a robot and lower negative attitudes towards robots also impacted interpersonal distances. Although the effects of gender, height and age did not yield significant results, we discuss their influence on the interpersonal distances between humans and robots and why they are of interest for future research. We present design implications for human-robot interaction research and humanoid robot design.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The results show that participants who were asked task-focused questions had higher performance scores in the collaborative game than the other groups, but had a lower perception of their performance than the participants who were asked relationally-focused questions.
Abstract: Despite the growing body of research in human-robot collaboration, there has been little focus on how social robots can support human-to-human teaming. In this paper, we investigate whether a social robot can improve human-human collaboration. We conducted a between-subjects study where pairs of children play a collaborative game with a social robot. During pauses in the game, the robot either (1) asks the children questions to better focus the participants on the task they are working on, (2) asks the children questions that are targeted at developing and reinforcing the relationship between the participants, or (3) doesn't ask any questions. Our results show that participants who were asked task-focused questions had higher performance scores in the collaborative game than the other groups, but had a lower perception of their performance than the participants who were asked relationally-focused questions. We did not find any differences between the groups in interpersonal cohesiveness. Our findings suggest that social robots can be used to improve performance measures and perception of performance in groups of children.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Examination of how Contact Type and Contact Modality with a NAO robot would impact attitudes toward NAO showed that nearly any type of contact effectively reduced negative emotions compared to the control condition; however, for participants with preexisting negative emotions toward robots, contact sometimes produced more negative attitudes.
Abstract: Although it is widely accepted that robots will be used in everyday contexts in the near future, many people feel anxious and hold negative attitudes toward robots. This negative reaction might be stronger when users come into direct physical contact with them, particularly when touch is required between robots and humans (e.g., when using robots as assistants to help elderly people at home). Intergroup contact research in social psychology has proposed various forms of contact as a means to reduce negative feelings toward outgroup members. The present study examined how Contact Type (Actual vs. Imagined) and Contact Modality (Look vs. Touch) with a NAO robot would impact attitudes toward NAO compared to a no-contact control condition. Results showed that nearly any type of contact effectively reduced negative emotions compared to the control condition. However, for participants with preexisting negative emotions toward robots, contact sometimes produced more negative attitudes. We discuss these findings and the resulting implications for future research.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Several subtle differences in children's gaze behavior between conditions may reflect children's perceptions of the robot's status as more, or less, of a social actor, giving insight into how the interaction context can influence how children think about and respond to social robots.
Abstract: The presentation or framing of a situation—such as how something or someone is introduced—can influence people's subsequent behavior. In this paper, we describe a study in which we manipulated how a robot was introduced, framing it as either a social agent or as a machine-like being. We asked whether framing the robot in these ways would influence young children's social behavior while playing a ten-minute game with the robot. We coded children's behavior during the robot interaction, including their speech, gaze, and various courteous, prosocial actions. We found several subtle differences in children's gaze behavior between conditions that may reflect children's perceptions of the robot's status as more, or less, of a social actor. In addition, more parents of children in the Social condition reported that their children acted less shy and more talkative with the robot than parents of children in the Machine condition. This study gives us insight into how the interaction context can influence how children think about and respond to social robots.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Evidence is found that the effectiveness of recovery strategies depends on the task, the context risk involved, and the severity of the failure.
Abstract: Human-robot interaction involving the failure of autonomous robots is not yet well understood. We conducted two online surveys with a total of 1200 participants in which people assessed situations where an autonomous robot experienced different kinds of failure. This information was used to construct a measurement scale of people's reaction to failure where positive values correspond with increasingly positive reactions and negative values with negative reactions. We then used this scale to compare different kinds of failure situations, including the severity of the failures, the context risk involved, and the effectiveness of different kinds of recovery strategies. We found evidence that the effectiveness of recovery strategies depends on the task, context, and severity of failure.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Results from an orally administered questionnaire to 60 children, split evenly between human and robotic interviewers, revealed that few significant differences in reporting were encountered between interviewer types.
Abstract: This article describes the results of a study that compares disclosure occurrences of bullying from children (ages 8 to 12) to either a human or a social robot. Results from an orally administered questionnaire to 60 children, split evenly between human and robotic interviewers, revealed that few significant differences in reporting were encountered between interviewer types. Overall, 9 of 60 (15%) participants reported being bullied in the past month. Participants were significantly more likely to report to the robot interviewer, in comparison to the human interviewer, that fellow students were teased about their looks. In addition to the examination of these results, a discussion of lessons learned for future studies of this nature is provided.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is concluded that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and suggestions are provided for the automatic generation of spatial references.
Abstract: As humans and robots collaborate together on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous, which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies to understand how people instruct a partner to identify and pick up objects on a table. We investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots vs. instructing other humans. We then focus our analysis on the clarity of instructions with respect to perspective taking and spatial references. We find that only about 42% of instructions contain perspective-independent spatial references. There is a strong correlation between participants' accuracy in executing instructions and the perspectives that the instructions are given in, as well as between accuracy and the number of spatial relations that were required for the instruction. We conclude that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and we provide suggestions for the automatic generation of spatial references.

Proceedings ArticleDOI
15 Nov 2016
TL;DR: The DAGLOVE is presented, which addresses the limitations of existing sensor gloves with a low-cost design that allows separate measurements of proximal and distal finger joint motions as well as position/orientation detection with an inertial measurement unit (IMU); teleoperation of the iCub humanoid robot is investigated as an exemplary application.
Abstract: Sensor gloves are widely adopted input devices for several kinds of human-robot interaction applications. Existing glove concepts differ in features and design, but have limitations concerning the captured finger kinematics, position/orientation sensing, wireless operation, and especially cost. This paper presents the DAGLOVE, which addresses these limitations with a low-cost design (ca. 300 €). This new sensor glove allows separate measurements of proximal and distal finger joint motions as well as position/orientation detection with an inertial measurement unit (IMU). These sensors and tactile feedback induced by coin vibration motors at the fingertips are integrated within a wireless, easy-to-use, and open-source system. The design and implementation of hardware and software as well as proof-of-concept experiments are presented. An experimental evaluation of the sensing capabilities shows that proximal and distal finger motions can be acquired separately and that hand position/orientation can be tracked. Further, teleoperation of the iCub humanoid robot is investigated as an exemplary application to highlight the potential of the extended low-cost glove in human-robot interaction.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work extracts a static plan using A* search on a grid map by minimizing safety, disturbance and path length costs, then refines it by simulating humans' reactions using the Social Force Model, and demonstrates that robots can exhibit social behaviors that are not possible to model with standard approaches.
Abstract: Robot path planning in human environments benefits significantly from considering more than obstacle avoidance, and recent works in this area have proposed safety and comfort considerations. One shortcoming of current approaches is that humans' behavior is modeled as independent of the robot's motion. In this work, we aim to give this anticipation ability to a robot by simulating people's reactions to the robot's motion during planning. Our approach is based on extracting a static plan using A* search on the grid map, minimizing safety, disturbance and path length costs, and then refining it by simulating humans' reactions using the Social Force Model. With two example scenarios in simulation and two on the real system, we provide a qualitative examination of the resulting robot paths and demonstrate that robots can exhibit social behaviors that are not possible to model with standard approaches. This work serves as a primer for quantitative user studies, and we hope it will urge future robot path planners to consider a richer set of social capabilities.
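The refinement step, rolling pedestrian reactions forward along a candidate robot path with the Social Force Model, can be sketched as below; the force parameters are illustrative defaults, not the paper's values:

```python
import numpy as np

def social_force(ped_pos, ped_vel, goal, robot_pos,
                 desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    """Social Force Model step for one pedestrian: relaxation toward the
    desired velocity plus exponential repulsion from the robot."""
    to_goal = goal - ped_pos
    desired = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    f_goal = (desired - ped_vel) / tau          # drive toward the goal
    away = ped_pos - robot_pos
    d = np.linalg.norm(away) + 1e-9
    f_robot = A * np.exp(-d / B) * away / d     # repulsion from the robot
    return f_goal + f_robot

def simulate_reaction(ped_pos, ped_vel, goal, robot_path, dt=0.1):
    """Predict the pedestrian's trajectory as the robot follows the
    static A* plan; used to evaluate and refine that plan."""
    traj = [ped_pos.copy()]
    for robot_pos in robot_path:
        ped_vel = ped_vel + dt * social_force(ped_pos, ped_vel, goal, robot_pos)
        ped_pos = ped_pos + dt * ped_vel
        traj.append(ped_pos.copy())
    return np.array(traj)
```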

Proceedings ArticleDOI
01 Aug 2016
TL;DR: Teo is integrated with virtual worlds shown on large displays or projections and with external motion sensing devices to support various forms of full-body interaction and to engage DD persons in a variety of play activities that blend the digital and physical world.
Abstract: We propose a new emotional, huggable, mobile, and configurable robot (Teo), which can address some of the still open therapeutic needs in the treatment of Developmental Disability (DD). Teo has been designed in partnership with a team of DD specialists, and it is meant to be used as an efficient and easy-to-use tool for caregivers. Teo is integrated with virtual worlds shown on large displays or projections and with external motion sensing devices to support various forms of full-body interaction and to engage DD persons in a variety of play activities that blend the digital and physical world and can be fully customized by therapists to meet the requirements of each single subject. Exploratory studies have been performed at two rehabilitation centres to investigate the potential of our approach. The positive results of these studies indicate that our system offers promising opportunities for new forms of intervention for people with DD.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is concluded that role assignment in educational HRI is a dynamic process in which the perceptions of children regarding the robot change over time as a consequence of continuous interactions.
Abstract: Human beings naturally assign roles to one another while interacting. Role assignment is a way to organize interpersonal encounters: it can reduce uncertainty when facing a novel interaction with someone we just met, or even lead us to rediscover new roles within previous relationships. When people interact with synthetic characters - such as robots - it seems they also assign roles to these agents, just as they do with humans. Within the field of human-robot interaction (HRI), robots are being developed to fulfill specific roles. This enables researchers to design concrete behaviors that match the desired role that a robot will play in a given task. It would then be expected that if a robot is developed with such a specific role, users too would assign the same role to that robot. In this paper, we study how children assign roles to an educational robot whose role is established from the beginning of the interaction. Our results show that although the role that the robot played was explicitly presented to the children, they end up perceiving and assigning different roles to that robot. Moreover, we conclude that role assignment in educational HRI is a dynamic process in which children's perceptions of the robot change over time as a consequence of continuous interactions.

Proceedings Article
01 Jan 2016
TL;DR: In this paper, the authors present the results of a rigorously-framed survey used to gather the views of both the general public and education professionals towards the use of robots in schools.
Abstract: Social robots are increasingly being applied in educational environments such as schools. It is important to understand the views of the general public as social acceptance will likely play a role in the adoption of such technology. Other literature suggests that teacher attitudes are a strong predictor of technology use in classrooms, so willingness to engage with social robots will influence application in practice. In this paper we present the results of a rigorously-framed survey used to gather the views of both the general public and education professionals towards the use of robots in schools. Overall, we find that the attitude towards social robots in schools is cautious, but potentially accepting. We discuss the reported set of perceived obstacles for the broader adoption of robots in the classroom in this context. Interestingly, concerns about appropriate social skills for the robots dominate over practical and ethical concerns, suggesting that this should remain a focus for child-robot interaction research.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A systematic review of existing studies is presented, and a conceptual framework is proposed based on the themes that emerged, namely the social interaction between the child and the robot, social acceptance, possible emotional interactions, the learning process, and the learning outcome.
Abstract: This review examines recent methodological approaches for the evaluation of child-robot interaction in learning settings. The main aims are to map existing work from a user-centered perspective, to identify possible trends related to evaluation methods for child-robot interaction, and to discuss potential future directions. We present a systematic review of existing studies, which have been thematically organized based on their research objectives. We then examine the evaluation methods that were used in these studies and propose a conceptual framework based on the themes that emerged (namely the social interaction between the child and the robot, social acceptance, possible emotional interactions, the learning process, and the learning outcome) on the one hand, and on the corresponding measures on the other. These methods are considered in relation to the age ranges of the children, because of the relationship of their cognitive level to the choice of a developmentally appropriate evaluation method. We use this framework to highlight current trends and needs in the field and to contextualize the methodological directions for child-robot interaction. Finally, we discuss the challenges and limitations of the current methodological approaches as well as possible future directions for the evaluation methods of child-robot interaction in learning settings.