
Showing papers on "Social robot published in 2016"


Journal ArticleDOI
TL;DR: This paper presents the basics of swarm robotics and introduces HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems and identifies the core concepts needed to design a human-swarm system.
Abstract: Recent advances in technology are delivering robots of reduced size and cost. A natural outgrowth of these advances is systems comprised of large numbers of robots that collaborate autonomously in diverse applications. Research on effective autonomous control of such systems, commonly called swarms, has increased dramatically in recent years and received attention from many domains, such as bioinspired robotics and control theory. These kinds of distributed systems present novel challenges for the effective integration of human supervisors, operators, and teammates that are only beginning to be addressed. This paper is the first survey of human–swarm interaction (HSI) and identifies the core concepts needed to design a human–swarm system. We first present the basics of swarm robotics. Then, we introduce HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems. Next, we introduce the interface between swarm and operator and identify challenges and solutions relating to human–swarm communication, state estimation and visualization, and human control of swarms. For the latter, we develop a taxonomy of control methods that enable operators to control swarms effectively. Finally, we synthesize the results to highlight remaining challenges, unanswered questions, and open problems for HSI, as well as how to address them in future works.

312 citations


Journal ArticleDOI
TL;DR: A systematic literature review of the studies on social robotics in autism therapy found many positive implications in the use of social robots in therapy, but it is necessary to clarify whether sex, intelligence quotient, and age of participants affect the outcome of therapy and whether any beneficial effects only occur during the robotic session.
Abstract: Social robotics could be a promising method for Autism Spectrum Disorders (ASD) treatment. The aim of this article is to carry out a systematic literature review of the studies on this topic published in the last 10 years. We tried to address the following question: can social robots be a useful tool in autism therapy? We followed the PRISMA guidelines, and the protocol was registered within the PROSPERO database (CRD42015016158). We found many positive implications of using social robots in therapy, for example: ASD subjects often performed better with a robot partner than with a human partner; sometimes, ASD patients directed toward robots behaviors that typically developing (TD) patients directed toward human agents; ASD participants displayed many social behaviors toward robots; during robotic sessions, ASD participants showed reduced repetitive and stereotyped behaviors; and social robots improved spontaneous language during therapy sessions. Therefore, robots provide therapists and researchers a means to connect with autistic subjects in an easier way, but studies in this area are still insufficient. It is necessary to clarify whether sex, intelligence quotient, and age of participants affect the outcome of therapy, and whether any beneficial effects occur only during the robotic session or are still observable outside the clinical/experimental context.

311 citations


Book ChapterDOI
01 Jan 2016
TL;DR: Within the context of multiple mobile and networked robot systems, this chapter reviews the current state of the art in architectures for multirobot cooperation, exploring the alternative approaches that have been developed.
Abstract: Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this, we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

281 citations


Journal ArticleDOI
TL;DR: The principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, as well as the results from the user studies are described.

263 citations


Journal ArticleDOI
TL;DR: This work proposes a framework for a user to teach a robot collaborative skills from demonstrations, and presents an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot behavior along the task.
Abstract: Robots are becoming safe and smart enough to work alongside people not only on manufacturing production lines, but also in spaces such as houses, museums, or hospitals. This can be significantly exploited in situations in which a human needs the help of another person to perform a task, because a robot may take the role of the helper. In this sense, a human and the robotic assistant may cooperatively carry out a variety of tasks, therefore requiring the robot to communicate with the person, understand his/her needs, and behave accordingly. To achieve this, we propose a framework for a user to teach a robot collaborative skills from demonstrations. We mainly focus on tasks involving physical contact with the user, in which not only position, but also force sensing and compliance become highly relevant. Specifically, we present an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot behavior along the task. Our method allows a robot to learn not only trajectory following skills, but also impedance behaviors. To show the functionality and flexibility of our approach, two different testbeds are used: a transportation task and a collaborative table assembly.
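
To make the encoding idea concrete, here is a minimal Python sketch of one way demonstrations could be turned into a reference trajectory plus a variance-based stiffness profile. It assumes demonstrations are equal-length NumPy arrays and uses a simple empirical-variance heuristic with placeholder gains, not the authors' exact probabilistic formulation.

```python
# Minimal sketch (not the paper's exact method): summarize a set of demonstrations
# by their per-step mean and variance, and derive a stiffness profile that is high
# where demonstrations agree and low where they vary, so the impedance controller
# stays compliant where the human partner is expected to lead.
import numpy as np

def encode_demonstrations(demos, k_min=50.0, k_max=500.0):
    """demos: list of (T, D) arrays of end-effector positions, all of equal length T."""
    stacked = np.stack(demos)                    # (N_demos, T, D)
    mean_traj = stacked.mean(axis=0)             # reference trajectory, shape (T, D)
    var_traj = stacked.var(axis=0).sum(axis=1)   # total positional variance per step

    # Map variance inversely onto a stiffness range (placeholder gains).
    v = (var_traj - var_traj.min()) / (np.ptp(var_traj) + 1e-9)
    stiffness = k_max - (k_max - k_min) * v      # shape (T,)
    return mean_traj, stiffness

def impedance_command(x_measured, x_dot, i, mean_traj, stiffness, damping=10.0):
    """Spring-damper command toward the i-th reference point."""
    return stiffness[i] * (mean_traj[i] - x_measured) - damping * x_dot
```

The design intuition is the usual one in learning from demonstration: track stiffly where the demonstrations are consistent, and stay compliant where they diverge so the human can take the lead.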

260 citations


Proceedings Article
12 Feb 2016
TL;DR: This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
Abstract: Though substantial research has been dedicated towards using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student's affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children's valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot's affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms for a duration of two months. We saw that (1) children learned new words from the repeated tutoring sessions, (2) the affective policy personalized to students over the duration of the study, and (3) students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence, as compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
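
As a rough illustration of how an affective reward can drive personalization, the sketch below combines valence and engagement into a scalar reward and uses it to update a small action-value table over motivational strategies. The weights, action names, and the bandit-style update are assumptions for illustration, not the study's actual algorithm.

```python
# Hypothetical sketch of the affective-reward idea: valence and engagement scores
# from a facial-expression analyzer are combined into a scalar reward that updates
# a simple action-value table over motivational strategies.
import random
from collections import defaultdict

ACTIONS = ["verbal_praise", "encouraging_gesture", "hint", "celebrate_success"]

def affective_reward(valence, engagement, w_v=0.6, w_e=0.4):
    """valence, engagement in [-1, 1]; returns a scalar reward (weights are assumed)."""
    return w_v * valence + w_e * engagement

class AffectivePolicy:
    def __init__(self, alpha=0.1, epsilon=0.2):
        self.q = defaultdict(float)     # action -> running value estimate
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)             # explore
        return max(ACTIONS, key=lambda a: self.q[a])  # exploit

    def update(self, action, valence, engagement):
        r = affective_reward(valence, engagement)
        self.q[action] += self.alpha * (r - self.q[action])  # incremental average

# Usage during a session: pick an action, observe the child's affect, update.
policy = AffectivePolicy()
action = policy.choose()
policy.update(action, valence=0.4, engagement=0.7)
```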

249 citations


OtherDOI
TL;DR: This article explored whether projecting emotions onto objects could lead to an extension of limited legal protections to robotic companions analogous to animal abuse laws, and found that people tend to anthropomorphize robots that interact with humans on a social level.
Abstract: People tend to anthropomorphize robots that interact with humans on a social level. This Article explores whether projecting emotions onto objects could lead to an extension of limited legal protections to robotic companions, analogous to animal abuse laws.

187 citations


Proceedings ArticleDOI
07 Mar 2016
TL;DR: An anticipatory control method is presented that enables robots to proactively perform task actions based on anticipated actions of their human partners, and is implemented into a robot system that monitored its user's gaze, predicted his or her task intent based on observed gaze patterns, and performed anticipatory task actions according to its predictions.
Abstract: Efficient collaboration requires collaborators to monitor the behaviors of their partners, make inferences about their task intent, and plan their own actions accordingly. To work seamlessly and efficiently with their human counterparts, robots must similarly rely on predictions of their users' intent in planning their actions. In this paper, we present an anticipatory control method that enables robots to proactively perform task actions based on anticipated actions of their human partners. We implemented this method into a robot system that monitored its user's gaze, predicted his or her task intent based on observed gaze patterns, and performed anticipatory task actions according to its predictions. Results from a human-robot interaction experiment showed that anticipatory control enabled the robot to respond to user requests and complete the task faster (by 2.5 seconds on average and up to 3.4 seconds) compared to a robot using a reactive control method that did not anticipate user intent. Our findings highlight the promise of performing anticipatory actions for achieving efficient human-robot teamwork.
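
A minimal sketch of the anticipation idea follows, assuming gaze has already been mapped to candidate objects; the fixation thresholds and the `prepare_handover` call are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch: predict the user's intended object from accumulated gaze
# fixations and trigger an anticipatory action once the prediction is confident,
# instead of waiting for the explicit request as a reactive controller would.
from collections import defaultdict

class GazeIntentPredictor:
    def __init__(self, min_fixation=0.8, margin=2.0):
        self.fixation_time = defaultdict(float)
        self.min_fixation = min_fixation   # seconds on the top candidate (assumed)
        self.margin = margin               # top candidate vs. runner-up ratio (assumed)

    def observe(self, gazed_object, dt):
        if gazed_object is not None:
            self.fixation_time[gazed_object] += dt

    def predict(self):
        if not self.fixation_time:
            return None
        ranked = sorted(self.fixation_time.items(), key=lambda kv: -kv[1])
        best, t_best = ranked[0]
        t_second = ranked[1][1] if len(ranked) > 1 else 0.0
        if t_best >= self.min_fixation and t_best >= self.margin * max(t_second, 1e-6):
            return best
        return None

def control_step(predictor, robot, gazed_object, dt):
    predictor.observe(gazed_object, dt)
    target = predictor.predict()
    if target is not None:
        robot.prepare_handover(target)   # hypothetical anticipatory action
```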

172 citations


Journal ArticleDOI
TL;DR: It is demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause, and a clearer insight is gained into the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction.
Abstract: The present research aims at gaining a better insight into the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance, varying from no resemblance to humans (mechanical robots), to some body-shape resemblance (biped humanoids), to a perfect copy of the human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight into the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.

162 citations


Journal ArticleDOI
TL;DR: It is argued that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best.
Abstract: Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children's privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children's responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability and use of robots to control or make decisions about children's behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

149 citations


Proceedings ArticleDOI
07 Mar 2016
TL;DR: A framework is developed which allows a robot to estimate other agents' mental states not only about the environment but also about the state of goals, plans, and actions, and to take them into account when executing human-robot shared plans.
Abstract: When a robot has to execute a shared plan with a human, a number of unexpected situations and contingencies can happen due, essentially, to human initiative. For instance, a temporary absence or inattention of the human can entail partial, and potentially insufficient, knowledge about the current situation. To ensure a successful and fluent execution of the shared plan, the robot might need to detect such situations and be able to provide information to its human partner about what they missed without being annoying or intrusive. To do so, we have developed a framework which allows a robot to estimate other agents' mental states not only about the environment but also about the state of goals, plans, and actions, and to take them into account when executing human-robot shared plans.
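
The core bookkeeping can be pictured as below: a hypothetical tracker that updates the human's estimated beliefs only when they could plausibly observe an event, then reports the differences that matter for the next shared-plan step. Names and structure are illustrative, not the authors' framework.

```python
# Conceptual sketch: track what each agent could have observed, compare the human's
# estimated beliefs with the robot's own world state, and report only the
# differences relevant to the next shared-plan action.
class MentalStateTracker:
    def __init__(self):
        self.world = {}          # robot's own knowledge: fact -> value
        self.human_belief = {}   # estimated human knowledge

    def update(self, fact, value, human_can_observe):
        self.world[fact] = value
        if human_can_observe:    # e.g., human present and attentive
            self.human_belief[fact] = value

    def missing_facts(self, relevant_facts):
        """Facts the next action depends on that the human likely missed or got wrong."""
        return {f: self.world[f]
                for f in relevant_facts
                if f in self.world and self.human_belief.get(f) != self.world[f]}

# Example: the human stepped away while the robot finished a subtask.
tracker = MentalStateTracker()
tracker.update("screw_A_tightened", True, human_can_observe=False)
to_tell = tracker.missing_facts({"screw_A_tightened"})
# -> {'screw_A_tightened': True}; the robot can mention this before the next step.
```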

Journal ArticleDOI
TL;DR: The long-term evaluation of the Sacarino robot is presented, aimed at improving the robot's capabilities as a bellboy in a hotel: walking alongside the guests, providing information about the city and the hotel, and providing hotel-related services.

Journal ArticleDOI
TL;DR: If robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community.
Abstract: Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot's moral competence I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

Journal ArticleDOI
TL;DR: An experiment found that knowledge recall varied based on how the lecturer was presented, suggesting that robotic and virtual agents may be low-cost, accessible options for video instruction.

Journal ArticleDOI
TL;DR: Children ranging from 3 to 5 years were introduced to two anthropomorphic robots that provided them with information about unfamiliar animals; the children treated the robots as interlocutors and were especially attentive and receptive to whichever robot displayed the greater non-verbal contingency.
Abstract: Children ranging from 3 to 5 years were introduced to two anthropomorphic robots that provided them with information about unfamiliar animals. Children treated the robots as interlocutors. They supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information. Consistent with studies of children's early sensitivity to an interlocutor's non-verbal signals, children were especially attentive and receptive to whichever robot displayed the greater non-verbal contingency. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.

Proceedings ArticleDOI
07 Mar 2016
TL;DR: It is suggested that in this short-term interaction context, additional effort in developing social aspects of a robot's verbal behaviour may not return the desired positive impact on learning gains.
Abstract: An increasing amount of research is being conducted to determine how a robot tutor should behave socially in educational interactions with children. Both human-human and human-robot interaction literature predicts an increase in learning with increased social availability of a tutor, where social availability has verbal and nonverbal components. Prior work has shown that greater availability in the nonverbal behaviour of a robot tutor has a positive impact on child learning. This paper presents a study with 67 children to explore how social aspects of a tutor robot's speech influence their perception of the robot and their language learning in an interaction. Children perceive the difference in social behaviour between 'low' and 'high' verbal availability conditions, and improve significantly between a pre- and a post-test in both conditions. A longer-term retention test taken the following week showed that the children had retained almost all of the information they had learnt. However, learning was not affected by which of the robot behaviours they had been exposed to. It is suggested that in this short-term interaction context, additional effort in developing social aspects of a robot's verbal behaviour may not return the desired positive impact on learning gains.

Journal ArticleDOI
TL;DR: In this article, a two-time measurement model experiment was conducted to explore perceptions of interacting with either a robot or human, and the results showed that individuals would be more uncertain, have less liking, and anticipate less social presence when they were told that they would be interacting with a social robot as opposed to another person.
Abstract: As social robotics becomes more utilized and routine in everyday situations, individuals will be interacting with social robots in a variety of contexts. Centered on the use of human-to-human interaction scripts, the current study hypothesized that individuals would be more uncertain, have less liking and anticipate less social presence when they are told that they will be interacting with a social robot as opposed to another person. Additionally, the current study utilized a two-time measurement model experiment to explore perceptions of interacting with either a robot or human. Data were consistent with hypotheses. Research questions examined perceptions from Time 1 to Time 2 for the robot condition on the dependent variables. Findings are discussed in light of future research studies.

Journal ArticleDOI
TL;DR: This study was one of the first to examine college students' communication-related perceptions of robots being used in an instructional capacity; the findings generally support the MAIN model and the Computers Are Social Actors paradigm but suggest that further work needs to be done in this area.

Journal ArticleDOI
TL;DR: In this paper, the authors report on questionnaire data collected from 102 people living in these houses; participants evaluated the robot and their user experiences at six points in time, and a mere-exposure effect was observed, whereby people evaluate a novel stimulus more positively as they gain experience and become familiar with it.
Abstract: As the employment of robots for long-term evaluations in home settings is just starting to be robust enough for research purposes, our study aims at contributing to human-robot interaction research by adding longitudinal findings to a limited number of long-term social robotics home studies. We placed 70 commercially available robots within people’s homes for a period of up to six months. In this paper, we report on the collected questionnaire data from 102 people living in these houses. The participants evaluated the robot and their user experiences at six points in time. We observed a mere-exposure effect, which causes people to evaluate a novel stimulus more positively when they gain experience and get familiar with it. The participants evaluated several aspects of the robot. We found that user experience initially dropped before rising again when the robot was used over a longer period of time.

Journal ArticleDOI
TL;DR: This survey paper presents an encompassing review of existing automated affect recognition and classification systems for social robots engaged in various HRI settings and discusses pertinent future research directions for promoting the development of socially intelligent robots capable of recognizing, classifying and responding to human affective states during real-time HRI.
Abstract: In Human-Robot Interactions (HRI), robots should be socially intelligent. They should be able to respond appropriately to human affective and social cues in order to effectively engage in bi-directional communications. Social intelligence would allow a robot to relate to, understand, and interact and share information with people in real-world human-centered environments. This survey paper presents an encompassing review of existing automated affect recognition and classification systems for social robots engaged in various HRI settings. Human-affect detection from facial expressions, body language, voice, and physiological signals is investigated, as well as from a combination of the aforementioned modes. The automated systems are described by their corresponding robotic and HRI applications, the sensors they employ, and the feature detection techniques and affect classification strategies utilized. This paper also discusses pertinent future research directions for promoting the development of socially intelligent robots capable of recognizing, classifying, and responding to human affective states during real-time HRI.
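
As a concrete example of the kind of pipeline surveyed, the sketch below performs simple feature-level fusion of several modalities into one classifier. The modality names, the classifier choice, and the upstream feature extractors are assumptions rather than any specific surveyed system.

```python
# Minimal feature-level fusion sketch: per-modality feature vectors (face, voice,
# body, physiology) are concatenated and fed to a standard classifier. Feature
# extraction is assumed to exist upstream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_features(face_feats, voice_feats, body_feats, physio_feats):
    return np.concatenate([face_feats, voice_feats, body_feats, physio_feats])

def train_affect_classifier(samples, labels):
    """samples: list of (face, voice, body, physio) feature tuples for labelled segments."""
    X = np.vstack([fuse_features(*s) for s in samples])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)   # labels e.g. "happy", "frustrated", "bored"
    return clf

# During HRI, the robot classifies the current segment and adapts its behaviour:
# state = clf.predict(fuse_features(face, voice, body, physio).reshape(1, -1))[0]
```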

Proceedings ArticleDOI
01 Oct 2016
TL;DR: Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm, and its run-time platform is designed to be laid on top of other frameworks, such as the Robot Operating System.
Abstract: We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as the Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms.
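
The compositional idea can be pictured with a plain Python analogy (deliberately not Buzz syntax): the same script runs on every robot, with one routine written from the single-robot perspective over neighborhood data and another contributing to swarm-level shared information.

```python
# Python analogy only, not Buzz code: the same script runs on every robot. The
# `radio` and `sensors` objects are injected placeholders standing in for the
# robot-specific layer that a real deployment would provide.
class RobotScript:
    def __init__(self, robot_id, radio, sensors):
        self.id = robot_id
        self.radio = radio        # yields neighbor messages each control step
        self.sensors = sensors    # local sensing interface (e.g. a dict of readings)

    # Single-robot perspective: react to each neighbor's broadcast range reading.
    def avoid_closest_neighbor(self):
        distances = [m["distance"] for m in self.radio.neighbor_messages()]
        return "back_off" if distances and min(distances) < 0.3 else "hold"

    # Swarm perspective: fold a local reading into a swarm-wide shared estimate,
    # loosely analogous to swarm-level data sharing in Buzz.
    def share_max_temperature(self, shared_table):
        current = shared_table.get("max_temp", float("-inf"))
        shared_table["max_temp"] = max(current, self.sensors["temperature"])
```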

Journal ArticleDOI
TL;DR: This survey provides an entry point for interested researchers, including a general overview of affordance research, classification and critical analysis of existing work, and discussion of how affordances are useful in developmental robotics.
Abstract: Affordances capture the relationships between a robot and the environment in terms of the actions that the robot is able to perform. The notable characteristic of affordance-based perception is that an object is perceived by what it affords (e.g., graspable and rollable), instead of identities (e.g., name, color, and shape). Affordances play an important role in basic robot capabilities such as recognition, planning, and prediction. The key challenges in affordance research are: 1) how to automatically discover the distinctive features that specify an affordance in an online and incremental manner and 2) how to generalize these features to novel environments. This survey provides an entry point for interested researchers, including: 1) a general overview; 2) classification and critical analysis of existing work; 3) discussion of how affordances are useful in developmental robotics; 4) some open questions about how to use the affordance concept; and 5) a few promising research directions.
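
One common way to operationalize this, sketched below under assumed feature and effect names, is to learn a mapping from (object features, action) pairs to observed effects and then query it to ask which actions afford a desired effect on a novel object.

```python
# Illustrative affordance model: learn (object features, action) -> effect from
# exploration data, then query afforded actions for a new object.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ACTIONS = {"push": 0, "grasp": 1, "lift": 2}   # assumed action set

def make_input(object_features, action):
    onehot = np.zeros(len(ACTIONS))
    onehot[ACTIONS[action]] = 1.0
    return np.concatenate([object_features, onehot])

def learn_affordances(experiences):
    """experiences: list of (object_features, action, effect_label) from exploration."""
    X = np.vstack([make_input(f, a) for f, a, _ in experiences])
    y = [e for _, _, e in experiences]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def afforded_actions(model, object_features, desired_effect):
    return [a for a in ACTIONS
            if model.predict(make_input(object_features, a).reshape(1, -1))[0]
               == desired_effect]

# e.g. afforded_actions(model, features_of_ball, "rolled") might return ["push"].
```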

Proceedings ArticleDOI
07 Mar 2016
TL;DR: It is found that people collaborate best with a proactive robot, yielding better team fluency and high subjective ratings, rather than working with a reactive robot that only helps when it is needed.
Abstract: Collaborative robots are quickly gaining momentum in real-world settings. This has motivated many new research questions in human-robot collaboration. In this paper, we address the questions of whether and when a robot should take initiative during joint human-robot task execution. We develop a system capable of autonomously tracking and performing table-top object manipulation tasks with humans and we implement three different initiative models to trigger robot actions. Human initiated help gives control of robot action timing to the user; robot-initiated reactive help triggers robot assistance when it detects that the user needs help; and robot-initiated proactive help makes the robot help whenever it can. We performed a user study (N=18) to compare these trigger mechanisms in terms of task performance, usage characteristics, and subjective preference. We found that people collaborate best with a proactive robot, yielding better team fluency and high subjective ratings. However, they prefer having control of when the robot should help, rather than working with a reactive robot that only helps when it is needed.
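
The three trigger mechanisms can be summarized as three decision rules over the same signals, as in this hypothetical sketch (the signal names stand in for the system's actual perception and task tracking):

```python
# Hypothetical reduction of the three initiative conditions to one decision
# function each; the boolean signals are placeholders for the real system's state.
def human_initiated(user_requested_help, user_appears_stuck, robot_can_act):
    # Robot acts only on an explicit request from the user.
    return robot_can_act and user_requested_help

def reactive(user_requested_help, user_appears_stuck, robot_can_act):
    # Robot also steps in when it detects that the user needs help.
    return robot_can_act and (user_requested_help or user_appears_stuck)

def proactive(user_requested_help, user_appears_stuck, robot_can_act):
    # Robot helps whenever it is able to contribute to the task.
    return robot_can_act

TRIGGERS = {"human": human_initiated, "reactive": reactive, "proactive": proactive}

def should_act(condition, **signals):
    return TRIGGERS[condition](**signals)

# should_act("reactive", user_requested_help=False,
#            user_appears_stuck=True, robot_can_act=True)  -> True
```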

Journal ArticleDOI
TL;DR: The design approach to the teaching, learning, robot, and smart home systems as an integrated unit is described and results indicated that participants thought that this approach to robot personalization was easy to use, useful, and that they would be capable of using it in real-life situations both for themselves and for others.
Abstract: Care issues and costs associated with an increasing elderly population are becoming a major concern for many countries. The use of assistive robots in “smart-home” environments has been suggested as a possible partial solution to these concerns. A challenge is the personalization of the robot to meet the changing needs of the elderly person over time. One approach is to allow the elderly person, or their carers or relatives, to make the robot learn activities in the smart home and teach it to carry out behaviors in response to these activities. The overriding premise being that such teaching is both intuitive and “nontechnical.” To evaluate these issues, a commercially available autonomous robot has been deployed in a fully sensorized but otherwise ordinary suburban house. We describe the design approach to the teaching, learning, robot, and smart home systems as an integrated unit and present results from an evaluation of the teaching component with 20 participants and a preliminary evaluation of the learning component with three participants in a human–robot interaction experiment. Participants reported findings using a system usability scale and ad-hoc Likert questionnaires. Results indicated that participants thought that this approach to robot personalization was easy to use, useful, and that they would be capable of using it in real-life situations both for themselves and for others.
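
The teaching concept reduces to letting the user bind recognized smart-home activities to robot behaviors; the toy sketch below shows that binding-and-lookup loop with placeholder activity and behavior names, not the deployed system's interface.

```python
# Toy sketch of the teach-and-respond idea: a user pairs a recognized smart-home
# activity with a robot behavior, and the robot looks up the taught response
# whenever that activity is detected.
class TeachableRobot:
    def __init__(self):
        self.rules = {}   # activity name -> behavior name

    def teach(self, activity, behavior):
        """Called from the teaching interface, e.g. teach('kettle_on', 'remind_tea')."""
        self.rules[activity] = behavior

    def on_activity_detected(self, activity):
        behavior = self.rules.get(activity)
        return f"executing:{behavior}" if behavior else "no taught response"

robot = TeachableRobot()
robot.teach("doorbell_rang", "navigate_to_front_door")
print(robot.on_activity_detected("doorbell_rang"))   # executing:navigate_to_front_door
```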

Journal ArticleDOI
TL;DR: Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders and interaction of the children with both partners did not differ apart from the eye-contact.
Abstract: Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated if the children's interaction with a human differed from the interaction with a social robot during a play task. Also, it was examined if the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. Interaction of the children with both partners did not differ apart from the eye-contact. Participants had more eye-contact with the social robot compared to the eye-contact with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child.

Journal ArticleDOI
TL;DR: This study investigated how the combination of bodily appearance and movement characteristics of a robot can alter people's attributions of animacy, likability, trustworthiness, and unpleasantness.
Abstract: One of robot designers' main goals is to make robots as sociable as possible. Aside from improving robots' actual social functions, a great deal of effort is devoted to making them appear lifelike. This is often achieved by endowing the robot with an anthropomorphic body. However, psychological research on the perception of animacy suggests another crucial factor that might also contribute to attributions of animacy: movement characteristics. In the current study, we investigated how the combination of bodily appearance and movement characteristics of a robot can alter people's attributions of animacy, likability, trustworthiness, and unpleasantness. Participants played games of Tic-Tac-Toe against a robot which (1) either possessed a human form or did not, and (2) either exhibited smooth, lifelike movement or did not. Naturalistic motion was judged to be more animate than mechanical motion, but only when the robot resembled a human form. Naturalistic motion improved likeability regardless of the robot's appearance. Finally, a robot with a human form was rated as more disturbing when it moved naturalistically. Robot designers should be aware that movement characteristics play an important role in promoting robots' apparent animacy. HighlightsMovement characteristics influenced the robot's animacy, likability, and unpleasantness.Baxter was considered to be more likeable when it exhibited naturalistic motion.A full-bodied robot executing mechanistic movement was considered particularly inanimate.A full-bodied robot executing naturalistic movements was particularly unpleasant.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: In this article, a multi-modal deep Q-network (MDQN) is proposed to enable a robot to learn human-like interaction skills through a trial and error method.
Abstract: For robots to coexist with humans in a social world like ours, it is crucial that they possess human-like social interaction skills. Programming a robot to possess such skills is a challenging task. In this paper, we propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial and error method. This paper aims to develop a robot that gathers data during its interaction with a human, and learns human interaction behavior from the high dimensional sensory information using end-to-end reinforcement learning. This paper demonstrates that the robot was able to learn basic interaction skills successfully, after 14 days of interacting with people.
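
A minimal PyTorch sketch of a dual-stream Q-network in the spirit of MDQN is shown below; the layer sizes, input resolution, averaging fusion, and four-action set are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of a multimodal Q-network: two convolutional streams (e.g. grayscale and
# depth frames) whose action-value estimates are fused over a small discrete set
# of interaction actions.
import torch
import torch.nn as nn

class QStream(nn.Module):
    def __init__(self, in_channels, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class MDQN(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.gray_stream = QStream(in_channels=1, n_actions=n_actions)
        self.depth_stream = QStream(in_channels=1, n_actions=n_actions)

    def forward(self, gray, depth):
        # Fuse the two streams' action values; the greedy action drives the robot.
        return 0.5 * (self.gray_stream(gray) + self.depth_stream(depth))

q = MDQN()
gray = torch.zeros(1, 1, 84, 84)
depth = torch.zeros(1, 1, 84, 84)
action = q(gray, depth).argmax(dim=1)   # index into e.g. [wait, look, wave, handshake]
```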

Journal ArticleDOI
TL;DR: In this article, the Transferability-based Behavioral Repertoire Evolution (TBR-Evolution) algorithm is proposed to evolve hundreds of simple walking controllers, one for each possible direction.
Abstract: Numerous algorithms have been proposed to allow legged robots to learn to walk. However, most of these algorithms are devised to learn walking in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundreds of simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution introduces a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
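
A heavily simplified sketch of the repertoire idea follows: keep an archive with one controller per discretized walking direction and replace an entry whenever a mutated controller reaches that direction with higher performance. The toy controller and simulator are placeholders; the real algorithm additionally relies on novelty search with local competition and a simulation-to-reality transferability model.

```python
# Toy behavioral-repertoire evolution: one archive slot per heading sector, each
# holding the best controller found so far for walking in that direction.
import math
import random

N_BINS = 36                                   # one slot per 10 degrees of heading

def simulate(controller):
    """Toy stand-in: a controller is (turn, speed); returns (heading_deg, distance)."""
    turn, speed = controller
    return (math.degrees(turn) % 360.0, speed)

def direction_bin(heading_deg):
    return int(heading_deg // (360.0 / N_BINS)) % N_BINS

def evolve_repertoire(generations=5000):
    archive = {}                              # bin -> (controller, distance)
    for _ in range(generations):
        if archive and random.random() < 0.9:
            parent, _ = random.choice(list(archive.values()))
            child = (parent[0] + random.gauss(0, 0.2), parent[1] + random.gauss(0, 0.1))
        else:
            child = (random.uniform(-math.pi, math.pi), random.uniform(0.0, 1.0))
        heading, dist = simulate(child)
        b = direction_bin(heading)
        if b not in archive or dist > archive[b][1]:
            archive[b] = (child, dist)        # keep the best controller per direction
    return archive

repertoire = evolve_repertoire()
# Each entry maps a heading sector to the best controller found for walking that way.
```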

Proceedings ArticleDOI
04 Oct 2016
TL;DR: Methods of conveying perception information and motion intention from self-driving vehicles to the surrounding environment are described, and the performance of the autonomous vehicles as social robots is improved by building trust and engagement with interacting pedestrians.
Abstract: In this paper, we describe methods of conveying perception information and motion intention from self-driving vehicles to the surrounding environment. One method is by equipping autonomous vehicles with Light-Emitting Diode (LED) strips to convey perception information; typical pedestrian-driver acknowledgement is replaced by visual feedback via lights which change color to signal the presence of obstacles in the surrounding environment. Another method is by broadcasting audio cues of the vehicle's motion intention to the environment. The performance of the autonomous vehicles as social robots is improved by building trust and engagement with interacting pedestrians. The software and hardware systems are detailed, and a video demonstrates the working system in a real application. Further extension of the work for multi-class mobility in human environments is discussed.
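
Conceptually, the interface boils down to two lookup tables, one from perception state to LED color and one from motion intention to an audio cue; the mapping below is purely illustrative and not the authors' actual color or sound scheme.

```python
# Illustrative mapping only: perception state drives the LED strip color and
# motion intention drives a broadcast audio cue, standing in for the usual
# driver-pedestrian eye contact.
LED_BY_PERCEPTION = {
    "no_obstacle": "green",
    "pedestrian_detected": "amber",
    "yielding_to_pedestrian": "red",
}

AUDIO_BY_INTENT = {
    "starting": "chime_start.wav",
    "stopping": "chime_stop.wav",
    "turning_left": "announce_left.wav",
    "turning_right": "announce_right.wav",
}

def hmi_outputs(perception_state, motion_intent):
    """Return the (LED color, audio file) the vehicle should present to pedestrians."""
    led = LED_BY_PERCEPTION.get(perception_state, "off")
    audio = AUDIO_BY_INTENT.get(motion_intent)
    return led, audio

print(hmi_outputs("pedestrian_detected", "stopping"))   # ('amber', 'chime_stop.wav')
```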

Proceedings ArticleDOI
16 May 2016
TL;DR: An approach for the definition and execution of complex robot behaviors based on hierarchical state machines is presented, allowing the structure of behaviors to be changed flexibly on the fly at runtime with the assistance of a remote operator.
Abstract: Motivated by the DARPA Robotics Challenge (DRC), the application of operator-assisted (semi-)autonomous robots with highly complex locomotion and manipulation abilities is considered for solving complex tasks in potentially unknown and unstructured environments. Because of the limited a priori knowledge about the state of the environment and the tasks needed to achieve a complex mission, a sufficiently complete a priori design of high-level robot behaviors is not possible. Most of the situational knowledge required for such behavior design is gathered only during runtime and needs to be interpreted by a human operator. However, current behavior control approaches only allow for very limited adaptation at runtime and no flexible operator interaction. In this paper, an approach for the definition and execution of complex robot behaviors based on hierarchical state machines is presented, allowing the structure of behaviors to be changed flexibly on the fly at runtime with the assistance of a remote operator. The efficiency of the proposed approach is demonstrated and evaluated not only in an example scenario, but also by application in two robot competitions.
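
A generic sketch of the underlying mechanism (not the authors' framework or its API): states return outcomes, state machines are themselves states so they nest, and an operator-driven edit re-wires transitions to a replacement sub-behavior at runtime.

```python
# Generic hierarchical state machine sketch with a runtime state-replacement hook.
class State:
    def __init__(self, name, action):
        self.name, self.action = name, action

    def execute(self, context):
        return self.action(context)           # returns an outcome label

class StateMachine(State):
    def __init__(self, name, transitions, initial):
        # transitions: {state: {outcome: next_state_or_final_outcome}}
        super().__init__(name, action=None)
        self.transitions, self.initial = transitions, initial

    def execute(self, context):
        current = self.initial
        while isinstance(current, State):      # nested machines execute recursively
            outcome = current.execute(context)
            current = self.transitions[current][outcome]
        return current                         # a final outcome bubbles up one level

    def replace_state(self, old, new):
        """Operator-driven runtime edit: re-wire transitions to a new sub-behavior."""
        if old in self.transitions:
            self.transitions[new] = self.transitions.pop(old)
        for mapping in self.transitions.values():
            for outcome, target in mapping.items():
                if target is old:
                    mapping[outcome] = new
        if self.initial is old:
            self.initial = new
```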