
Showing papers presented at "International Conference on Social Robotics in 2018"


Book ChapterDOI
Omar Mubin, Muneeb Imtiaz Ahmad, Simranjit Kaur, Wen Shi, Aila M. Khan
28 Nov 2018
TL;DR: This paper surveys research that deployed the NAO and Pepper robots in public settings, showing that education scenarios are among the most popular and that most interaction centers on providing information.
Abstract: Social robots can prove to be an effective medium of instruction and communication to users in a public setting. However, their range of interaction in current research is not well characterized. In this paper, we review a range of research works that deployed the NAO and Pepper robots in public settings. Our results show that education scenarios are among the most popular and that most interaction centers on providing information. In conclusion, we present key design implications that researchers can employ when designing social robot interactions in public spaces.

34 citations


Book ChapterDOI
28 Nov 2018
TL;DR: An investigation of the factors that contribute to cognitive and affective trust in social robots found that the Familiarity attitude related to both cognitive and affective trust, while other sub-dimensions of robot attitudes, such as Interest, Negative attitude, and Utility, related to affective trust.
Abstract: The purpose of this study is to investigate the factors that contribute to cognitive and affective trust in social robots. Also investigated were the changes within two different types of trust over time and the variables that influence trust. Elements of trust extracted from the literature were used to evaluate people’s trust in a social robot in an experiment. A factor analysis extracted ten factors that construct trust. These factors were further analyzed in relation to both cognitive and affective trust. Factors such as Security, Teammate, and Performance were found to relate to cognitive trust, while factors such as Teammate, Performance, Autonomy, and Friendliness appeared to relate to affective trust. Furthermore, changes in cognitive and affective trust over the time phases of the interaction were investigated. Affective trust appeared to develop in the earlier phase, while cognitive trust appeared to develop over the whole period of the interaction. Conversation topics influenced affective trust, while the robot’s mistakes influenced cognitive trust. On the other hand, prior experience with social robots did not show any significant relation with either cognitive or affective trust. Finally, the Familiarity attitude appeared to relate to both cognitive and affective trust, while other sub-dimensions of robot attitudes, such as Interest, Negative attitude, and Utility, appeared to relate to affective trust.

28 citations


Book ChapterDOI
28 Nov 2018
TL;DR: The type of architecture required for dialogue capability of interactive social robots is discussed, emphasizing the need for robot communication to afford natural interaction and provide complementarity to standard cognitive architectures.
Abstract: Dialogue capability is an important functionality of robot agents: interactive social robots must not only help humans in their everyday tasks, they also need to explicate their own actions, instruct human partners about practical tasks, provide requested information, and maintain interesting chat about a wide range of topics. This paper discusses the type of architecture required for such dialogue capability, emphasizing the need for robot communication to afford natural interaction and provide complementarity to standard cognitive architectures.

24 citations


Journal ArticleDOI
01 Jan 2018
TL;DR: The results did not show a statistically significant difference in participants’ performance between human and robot collaborators, suggesting that robot collaborators may be as efficient as human ones in the context of serious game collaborative tasks.
Abstract: The aim of this paper is to investigate performance in a collaborative human–robot interaction on a shared serious game task. Furthermore, the effect of elicited emotions and perceived social behav ...

19 citations


Book ChapterDOI
28 Nov 2018
TL;DR: The results show that delayed trust repair is more effective than early repair, consistent with previous results, and that attention strongly influences participants’ decision to follow the robot.
Abstract: If robots are to occupy a space in the human social sphere, then the importance of trust naturally extends to human-robot interactions. Past research has examined human-robot interaction from a number of perspectives, ranging from overtrust in human-robot interactions to trust repair. Studies by [15] have suggested a relationship between the success of a trust repair method and the time at which it is employed. Additionally, studies have shown a potentially dangerous tendency in humans to trust robotic systems beyond their operational capacity. It therefore becomes essential to explore the factors that affect trust in greater depth. The study presented in this paper builds upon previous work to gain insight into the reasons behind the success of trust repair methods and their relation to timing. Our results show that delayed trust repair is more effective than early repair, which is consistent with previous results. In the absence of an emergency, participants’ decisions were similar to random selection. Additionally, there seems to be a strong influence of attention on participants’ decision to follow the robot.

19 citations


Book ChapterDOI
28 Nov 2018
TL;DR: This work evaluates the feasibility of a wrist-wearable sensor in detecting challenging behaviors in a child with autism prior to any visible signs, by monitoring the child’s heart rate, electrodermal activity, and movements, and investigates wearable sensors on both the wrist and the ankle of a neurotypical child.
Abstract: Young individuals with ASD may exhibit challenging behaviors. Among these, self-injurious behavior (SIB) is the most devastating for a person’s physical health and inclusion within the community. SIB refers to a class of behaviors that an individual inflicts upon himself or herself, which may potentially result in physical injury (e.g. hitting one’s own head with the hand or the wrist, banging one’s head on the wall, biting oneself and pulling out one’s own hair). We evaluate the feasibility of a wrist-wearable sensor in detecting challenging behaviors in a child with autism prior to any visible signs through the monitoring of the child’s heart rate, electrodermal activity, and movements. Furthermore, we evaluate the feasibility of such sensor to be used on an ankle instead of the wrist to reduce harm due to hitting oneself by hands and to improve wearable tolerance. Thus, we conducted two pilot tests. The first test involved a wearable sensor on the wrist of a child with autism. In a second test, we investigated wearable sensors on the wrist and on the ankle of a neurotypical child. Both pilot test results showed that the readings from the wearable sensors correlated with the children’s behaviors that were obtained from the videos taken during the tests. Wearable sensors could provide additional information that can be passed to social robots or to the caregivers for mitigating SIBs.
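The monitoring idea described in this abstract, flagging joint physiological deviations before visible signs, can be sketched as a rolling-baseline detector. This is a hypothetical illustration, not the paper's method; the window size, thresholds, and all signal values below are invented:

```python
import numpy as np

def flag_episodes(heart_rate, eda, window=30, z_thresh=3.0):
    """Flag samples where heart rate and electrodermal activity both
    deviate strongly from a rolling baseline (illustrative proxy for
    pre-SIB arousal)."""
    hr, ed = np.asarray(heart_rate, float), np.asarray(eda, float)
    flags = np.zeros(len(hr), dtype=bool)
    for t in range(window, len(hr)):
        base_hr, base_ed = hr[t - window:t], ed[t - window:t]
        z_hr = (hr[t] - base_hr.mean()) / (base_hr.std() + 1e-9)
        z_ed = (ed[t] - base_ed.mean()) / (base_ed.std() + 1e-9)
        flags[t] = z_hr > z_thresh and z_ed > z_thresh
    return flags

# Synthetic streams: steady baselines with a simulated arousal spike at t=100.
rng = np.random.default_rng(2)
hr = 80 + rng.normal(0, 1, 120)
ed = 0.3 + rng.normal(0, 0.02, 120)
hr[100:] += 15
ed[100:] += 0.2
print(flag_episodes(hr, ed)[100])
```

In a deployed system, such flags could be the signal passed on to a social robot or caregiver, as the abstract suggests.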

17 citations


Book ChapterDOI
28 Nov 2018
TL;DR: Findings show that robot proxemics for passing by differ from those for approaching a person; the implications for modelling human-aware navigation and personal space are discussed.
Abstract: If autonomous robots are expected to operate in close proximity with people, they should be able to deal with human proxemics and social rules. Earlier research has shown that robots should respect personal space when approaching people, although the quantitative details vary with robot model and direction of approach. It would seem that similar considerations apply when a robot is only passing by, but direct measurement of the comfort of the passing distance is still missing. Therefore, the current study measured the perceived comfort of varying passing distances of the robot on each side of a person in a corridor. It was expected that comfort would increase with distance until an optimum was reached, and that people would prefer a left passage over a right passage. Results showed that the level of comfort did increase with distance up to about 80 cm, but after that it remained constant. There was no optimal distance. Surprisingly, the side of passage had no effect on perceived comfort. These findings show that robot proxemics for passing by differ from approaching a person. The implications for modelling human-aware navigation and personal space models are discussed.
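The reported comfort curve, rising with lateral distance up to roughly 80 cm and flat thereafter, suggests a simple saturating model a human-aware planner could use. The functional form and scaling below are our assumptions for illustration, not the authors' model:

```python
def passing_comfort(distance_m, plateau=0.8):
    """Illustrative saturating comfort model: comfort grows linearly with
    lateral passing distance up to the plateau (~0.8 m per the study),
    then stays constant. Side of passage is ignored, matching the finding
    that it had no measured effect. Returns a value in [0, 1]."""
    return min(distance_m, plateau) / plateau

print(passing_comfort(0.4))  # half of the plateau distance -> 0.5
print(passing_comfort(1.5))  # beyond the plateau -> 1.0
```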

16 citations


Book ChapterDOI
28 Nov 2018
TL;DR: Results showed that when participants successfully stopped the balloon, they rated their SoA lower in the Joint than in the Individual condition, independent of the number of points lost, suggesting that interacting with robots affects SoA similarly to interacting with other humans.
Abstract: In the presence of others, the sense of agency (SoA), i.e. the perceived relationship between our own actions and external events, is reduced. This effect is thought to contribute to diffusion of responsibility. The present study aimed at examining humans’ SoA when interacting with an artificial embodied agent. Young adults participated in a task alongside the Cozmo robot (Anki Robotics). Participants were asked to perform costly actions (i.e. losing various amounts of points) to stop an inflating balloon from exploding. In 50% of trials, only the participant could stop the inflation of the balloon (Individual condition). In the remaining trials, both Cozmo and the participant were in charge of preventing the balloon from bursting (Joint condition). The longer the players waited before pressing the “stop” key, the fewer points were subtracted; however, if the balloon burst, participants lost the largest amount of points. In the Joint condition, no points were lost if Cozmo stopped the balloon. At the end of each trial, participants rated how much control they perceived over the outcome of the trial. Results showed that when participants successfully stopped the balloon, they rated their SoA lower in the Joint than in the Individual condition, independent of the number of points lost. This suggests that interacting with robots affects SoA similarly to interacting with other humans.

12 citations


Book ChapterDOI
28 Nov 2018
TL;DR: A new platform for a virtual reality social robot (VR social robot) that could be used as an auxiliary device or a replacement for real social robots; results suggest that acceptance of the VR robot is comparable to that of the real robot.
Abstract: The role of technology in education and clinical therapy cannot be disregarded. Employing robots and computer-based devices as competent and advanced learning tools for children indicates that there is a role for technology in overcoming certain weaknesses of common therapy and educational procedures. In this paper, we present a new platform for a virtual reality social robot (VR social robot) that could be used as an auxiliary device or a replacement for real social robots. To support the idea, a VR robot based on the real social robot Arash was designed and developed in a virtual reality environment. “Arash” is a social robot buddy designed to support learning, education, entertainment, and clinical therapy for children with chronic disease. The acceptance and eligibility of the actual robot among these children had been investigated previously. In the present study, we investigated the acceptability and eligibility of a virtual model of the Arash robot among twenty children. For a fair comparison, a similar experiment was also performed using the real Arash robot. The experiments were conducted in the form of storytelling. The initial results are promising and suggest that acceptance of the VR robot is comparable to that of the real robot, since the performance of the VR robot did not differ significantly from that of the real Arash robot. Therefore, this platform has the potential to be a substitute or an auxiliary solution for the real social robot.

11 citations


Book ChapterDOI
28 Nov 2018
TL;DR: A new model of robot-patient communication is proposed and a research agenda is put forward for advancing knowledge of how robots can communicate effectively with patients to influence health outcomes.
Abstract: Socially assistive robots need to be able to communicate effectively with patients in healthcare applications. This paper outlines research on doctor-patient communication and applies the principles to robot-patient communication. Effective communication skills for physicians include information sharing, relationship building, and shared decision making. Little research to date has systematically investigated the components of physician communication skills as applied to robots in healthcare domains. We propose a new model of robot-patient communication and put forward a research agenda for advancing knowledge of how robots can communicate effectively with patients to influence health outcomes.

10 citations


Book ChapterDOI
28 Nov 2018
TL;DR: Care workers’ opinions on robot assistance in elderly services are analyzed and related to the idea of an embodied relationship between caregiver, care receiver, and technology, to understand the envisioned robot-human constellations in care work.
Abstract: Care robots are often seen as introducing a risk to human, touch-based care. In this study, we analyze care workers’ opinions on robot assistance in elderly services and relate them to the idea of an embodied relationship between caregiver, care receiver, and technology. Our empirical data consist of a survey of professional care workers (n = 3800), including registered and practical nurses working in elderly care. The questionnaire consisted of scenarios of robot assistance in care work and elderly services, and the respondents were asked to evaluate whether they saw them as desirable. The care workers were significantly more approving of robot assistance in lifting heavy materials than in moving patients. Generally, the care workers were reserved towards the idea of utilizing autonomous robots in tasks that typically involve human touch, such as assisting the elderly in the bathroom. Stressing the importance of presence and touch in human care, we apply the ideas of phenomenology of the body to understand the envisioned robot-human constellations in care work.

Book ChapterDOI
28 Nov 2018
TL;DR: An investigation of how the embodiment of an exercising partner influences motivation to persist in an abdominal plank exercise shows that participants exercised longer when paired with a robot companion than with virtual agents, but not longer than with a human partner.
Abstract: Preventing diseases of affluence is one of the major challenges for our future society. Researchers have introduced robots as a tool to support people in dieting or rehabilitation tasks. However, deploying robots as exercising companions is cost-intensive. Therefore, in our current work, we investigate how the embodiment of an exercising partner influences motivation to persist in an abdominal plank exercise. We analyzed and compared data from previous experiments on exercising with robots and virtual agents. The results show that participants had longer exercising times when paired with a robot companion than with virtual agents, but not compared to a human partner. However, participants perceived the robot partner as more likable than the human partner. These results have implications for SAR practitioners and are important for the use of SAR to promote physical activity.

Book ChapterDOI
28 Nov 2018
TL;DR: A robot system for a general doctor’s practice that includes a sensor manager and enables multiple robots to serve multiple patients at one time by sharing vital-signs devices.
Abstract: This paper presents a robot system for a healthcare environment, especially a family doctor’s practice. The system includes a sensor manager and a robot system for a general doctor’s practice, which enables multiple robots to serve multiple patients at one time by sharing vital-signs devices. A receptionist robot assigns each patient to a nurse assistant robot using a patient identification system. Our previous work included three subsystems: a receptionist robot, a nurse assistant robot, and a medical server. However, it could serve only one patient and one vital-signs device at a time, meaning only one of the vital-signs devices prepared for patients could be used, wasting patients’ time in waiting. In addition, patients had to enter their identification data into the robot themselves, which was time-consuming and error-prone. We implemented the new system with multiple robots and a new patient identification system based on QR codes, and conducted a pilot study to confirm the new system’s functionality. The results show that the new system communicates well with multiple robots to support multiple patients, identifying them by QR code, and measures their vital signs by sharing the devices.

Book ChapterDOI
28 Nov 2018
TL;DR: The purpose of this study was to evaluate how movement by a non-humanoid robot could affect participant experience and the current framework is designed for this particular task but could be built upon to provide a base for various collaborative studies.
Abstract: Researching human-robot interaction “in the wild” can sometimes require insight from different fields. Experiments that involve collaborative tasks are valuable opportunities for studying HRI and developing new tools. The following describes a framework for an “in the wild” experiment situated in a public museum that involved a Wizard of Oz (WOZ) controlled robot. The UR10 is a non-humanoid collaborative robot arm and was programmed to engage in a collaborative drawing task. The purpose of this study was to evaluate how movement by a non-humanoid robot could affect participant experience. While the current framework is designed for this particular task, the control architecture could be built upon to provide a base for various collaborative studies.

Book ChapterDOI
28 Nov 2018
TL;DR: In this article, the authors used the verticality metric computed from motion capture data to animate virtual characters and found that imitation of the characters' movements was effective compared to pseudo-random motion profiles.
Abstract: Imitating human motion on robotic platforms is a task which requires ignoring some information about the original human mover as robots have fewer degrees of freedom than a human. In an effort to generate low degree of freedom motion profiles based on human movement, this paper utilizes verticality, computed from motion capture data, to animate virtual characters. After creating correspondences between the verticality metrics and the movement of three and four degree of freedom virtual characters, lay users were asked whether the imitation of the characters’ movements was effective compared to pseudo-random motion profiles. The results showed a statistically significant preference for the verticality method for the higher DOF character and for the higher DOF character over the lower DOF character. Future work includes extending the verticality method to more virtual characters and developing other methodologies of motion generation for users to evaluate a more diverse set of motion profiles. This work can help create automated protocols for replicating human motion, and intent, on artificial systems.

Book ChapterDOI
28 Nov 2018
TL;DR: This work presents a two-step framework implementing a new strategy for the detection of ADL anomalies that identifies behaviors divergent from normal ones, offering the significant advantage that datasets of normal ADL are much easier to create.
Abstract: The ability to recognize and model human Activities of Daily Living (ADL) and to detect possible deviations from regular patterns, or anomalies, constitutes an enabling technology for developing effective Socially Assistive Robots. Traditional approaches aim at recognizing anomalous behavior by means of machine-learning techniques trained on anomaly datasets, such as falls. The main problem with these approaches lies in the difficulty of generating such datasets. In this work, we present a two-step framework implementing a new strategy for the detection of ADL anomalies. Rather than detecting anomalous behaviors directly, we aim at identifying those that diverge from normal ones. In a first step, a deep learning technique determines the most probable ADL class for the action performed by the subject. In a second step, a Gaussian Mixture Model is used to compute the likelihood that the action is normal within that class. We performed an experimental validation of the proposed framework on a public dataset. Results are very close to the best traditional approaches, while offering the significant advantage that datasets of normal ADL are much easier to create.
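The second step described above, a per-class likelihood test against a model trained only on normal examples, can be sketched with scikit-learn's GaussianMixture. The feature dimensions, component count, and percentile threshold below are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical feature vectors for one ADL class (e.g. "preparing a meal"),
# as would be produced upstream by the deep classifier. Normal examples only.
rng = np.random.default_rng(0)
normal_features = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_features)

# Threshold on log-likelihood: anything below the 1st percentile of the
# training scores is flagged as divergent from normal behaviour.
threshold = np.percentile(gmm.score_samples(normal_features), 1)

def is_anomalous(x):
    """True if the action's features are unlikely under the class model."""
    return gmm.score_samples(np.asarray(x).reshape(1, -1))[0] < threshold

print(is_anomalous(np.zeros(4)))      # near the centre of the normal data
print(is_anomalous(np.full(4, 8.0)))  # far from anything seen in training
```

Because only normal data is needed to fit the mixture, this mirrors the paper's key advantage: no anomaly dataset has to be collected.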

Book ChapterDOI
28 Nov 2018
TL;DR: This paper delves into the surprisingly under-considered convergence between Hollywood animation and ‘Big Tech’ in the field of social robotics, exploring the implications of character animation for human-robot interaction, and highlighting the emergence of a robotic character archetype.
Abstract: This paper delves into the surprisingly under-considered convergence between Hollywood animation and ‘Big Tech’ in the field of social robotics, exploring the implications of character animation for human-robot interaction, and highlighting the emergence of a robotic character archetype. We explore the significance and possible effects of a Hollywood-based approach to character design for human-robot sociality, and, at a wider level, consider the possible impact of this for human relationality and the concept of ‘companionship’ itself. We conclude by arguing for greater consideration of the socio-political and ethical consequences of importing and perpetuating relational templates that are drawn from powerful media conglomerates like Disney. In addition to facing a possible degradation of social relations, we may also be facing a possible delimitation of social relationality, based on the values, affects, and ideologies circulating in popular Hollywood animation.

Book ChapterDOI
28 Nov 2018
TL;DR: This paper addresses the question of how and when a robot should interrupt a meeting-style conversation between humans and compared different approaches to interruption and found that users liked the interruptibility estimation system better than a baseline system which doesn’t pay attention to the state of the speakers.
Abstract: This paper addresses the question of how and when a robot should interrupt a meeting-style conversation between humans. First, we observed one-to-one human-human conversations. We then employed raters to estimate how easy it was to interrupt each participant in the video. At the same time, we gathered behavioral information about the collocutors (presence of speech, head pose, and gaze direction). After establishing that the raters’ ratings were similar, we trained a neural network with the behavioral data as input and the interruptibility measure as output. Once we validated the similarity between the output of our estimator and the actual interruptibility ratings, we implemented this system on our desktop social robot, CommU. We then used CommU in a human-robot interaction environment to investigate how the robot should barge in on a conversation between multiple humans. We compared different approaches to interruption and found that users liked the interruptibility estimation system better than a baseline system that does not pay attention to the state of the speakers. They also preferred the robot to give advance non-verbal notification of its intention to speak.
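The training setup described, behavioral features in and rater interruptibility out, can be sketched with a small neural regressor. The feature encoding and the synthetic labels below are assumptions for illustration, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical features per time window: (speaking, head_toward_partner,
# gaze_toward_partner), each in [0, 1]; the target is the raters'
# interruptibility score in [0, 1]. All values here are synthetic, built
# on the assumption that speech and engaged gaze lower interruptibility.
rng = np.random.default_rng(1)
X = rng.random((300, 3))
y = np.clip(0.9 - 0.6 * X[:, 0] - 0.2 * X[:, 2]
            + rng.normal(0, 0.05, 300), 0, 1)

model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=2000, random_state=1).fit(X, y)

def interruptibility(speaking, head, gaze):
    """Estimated ease of interrupting a collocutor in this state."""
    return float(model.predict([[speaking, head, gaze]])[0])

# A silent collocutor looking away should score as easier to interrupt
# than one who is actively speaking and engaged.
print(interruptibility(0.0, 0.2, 0.1) > interruptibility(1.0, 0.9, 0.9))
```

At runtime the robot would evaluate this estimate continuously and wait for a high-interruptibility moment before barging in.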

Book ChapterDOI
28 Nov 2018
TL;DR: A fast and robust method of real-time grasp detection based on morphological image processing and machine learning, which is particularly helpful for grasping objects when a robot is surrounded by obstacles.
Abstract: This paper proposes a new approach to grasping novel objects while avoiding obstacles in real time. The general idea is to perform grasping of novel objects and collision avoidance at the same time. There are two main contributions. Firstly, a fast and robust method of real-time grasp detection is presented, based on morphological image processing and machine learning. Secondly, we integrate our robotic grasping algorithms with existing collision prediction strategies. This is particularly helpful for grasping objects when the robot is surrounded by obstacles. Additionally, the approach is practical, runs in real time, and is easily adaptable to different robots and working conditions. We demonstrate our approach using the Kinect sensor and the Baxter robot in a series of experiments.

Book ChapterDOI
28 Nov 2018
TL;DR: This paper demonstrates how a robot can be taught the win conditions for the game Connect Four using a single demonstration and a few trial examples with a question and answer session led by the robot.
Abstract: Teaching robots new skills using minimal time and effort has long been a goal of artificial intelligence. This paper investigates the use of game-theoretic representations to represent interactive games and learn their win conditions by interacting with a person. Game theory provides the formal underpinnings needed to represent the structure of a game, including the goal conditions. Learning by demonstration has long sought to leverage a robot’s interactions with a person to foster learning. This paper combines these two approaches, allowing a robot to learn a game-theoretic representation by demonstration. This paper demonstrates how a robot can be taught the win conditions for the game Connect Four using a single demonstration and a few trial examples with a question and answer session led by the robot. Our results demonstrate that the robot can learn any win condition for the standard rules of the Connect Four game, after demonstration by a human, irrespective of the color or size of the board and the chips. Moreover, if the human demonstrates a variation of the win conditions, we show that the robot can learn the respective changed win condition.
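The target concept here, a win condition as a run of chips, can be illustrated in miniature: from one demonstrated winning board, infer the required run length, then apply it to new boards. The board encoding and helper names are hypothetical, not the paper's game-theoretic representation:

```python
# Board is a list of rows; 0 = empty, 1/2 = player chips. Illustrative only.
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def longest_run(board, player):
    """Length of the longest straight run of the player's chips."""
    rows, cols = len(board), len(board[0])
    best = 0
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != player:
                continue
            for dr, dc in DIRS:
                n, rr, cc = 0, r, c
                while 0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == player:
                    n, rr, cc = n + 1, rr + dr, cc + dc
                best = max(best, n)
    return best

def learn_win_length(demo_board, winner):
    # The demonstrated board ends the game, so the winner's longest run
    # is taken as the win condition (a crude stand-in for the paper's
    # demonstration-plus-questions procedure).
    return longest_run(demo_board, winner)

def is_win(board, player, win_length):
    return longest_run(board, player) >= win_length

demo = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 2, 2, 0],
        [1, 1, 1, 1]]
print(learn_win_length(demo, winner=1))  # 4
```

A demonstrated variant (e.g. three in a row) would yield a different learned length with no change to the code, echoing the paper's result on changed win conditions.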

Book ChapterDOI
28 Nov 2018
TL;DR: A multilayer scheme for the cooperative control of heterogeneous mobile manipulators that allows them to transport a shared object in a coordinated way, for which the kinematic model of each mobile manipulator robot is derived.
Abstract: This paper proposes a multilayer scheme for the cooperative control of n ≥ 2 heterogeneous mobile manipulators that allows them to transport a shared object in a coordinated way; to this end, the kinematic modeling of each mobile manipulator robot is performed. Stability and robustness are demonstrated using Lyapunov theory in order to obtain asymptotically stable control. Finally, results are presented to evaluate the performance of the proposed control, confirming the controller’s ability to solve different movement problems.
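The Lyapunov argument mentioned here typically takes the standard resolved-rate form for a kinematic model. The following is a generic sketch under that assumption, not the paper's specific derivation: with task-space error e = x_d − x and a pseudoinverse control law,

```latex
\dot{x} = J(q)\,\dot{q}, \qquad e = x_d - x, \qquad
\dot{q} = J^{\#}(q)\left(\dot{x}_d + K e\right), \quad K \succ 0,

V(e) = \tfrac{1}{2}\,e^{\top} e, \qquad
\dot{V} = e^{\top}\dot{e} = e^{\top}\left(\dot{x}_d - \dot{x}\right)
        = -\,e^{\top} K e < 0 \quad (e \neq 0),
```

so the task-space tracking error is asymptotically stable; a cooperative scheme would apply such a law per robot while a higher layer coordinates the shared object's pose.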

Book ChapterDOI
28 Nov 2018
TL;DR: The design process of a robot and its validation as a platform for studying thermo-emotional expression, i.e., the robot’s body temperature as a medium for expressing its emotional state.
Abstract: Thermal sensation can be used by humans to interpret emotions. Hence, the question arises whether a robot can express its emotional state through the temperature of its body. Therefore, in this study, we carry out the design process of a robot and its validation as a platform to study thermo-emotional expression. The designed robot can vary the temperature of its skin between 10 and 55 °C. In this range, it is possible to reproduce thermal stimuli that have already been studied and have an emotional interpretation, and also to study new ones in which the pain receptors are activated. The robot’s shape is designed to look like the body of a creature that is neither human nor animal. In addition, it was designed so that physical interaction occurs mainly at its head, because the robot’s thermal system is located there. The results of a free-interaction experiment showed that the regions most often caressed were the superior, lateral, and upper diagonal faces of the cranium. These regions coincide with the location of the robot’s thermal system, so the robot can transmit different thermal stimuli to the human during physical interaction. Consequently, the designed robot is appropriate for studying the robot’s body temperature as a medium for expressing its emotional state.

Book ChapterDOI
28 Nov 2018
TL;DR: A novel approach for fast prediction of human reaching motion in the context of human-robot collaboration in manipulation tasks by training a recurrent neural network to process the three-dimensional hand trajectory and predict the intended target along with its certainty about the position.
Abstract: We present a novel approach for fast prediction of human reaching motion in the context of human-robot collaboration in manipulation tasks. The method trains a recurrent neural network to process the three-dimensional hand trajectory and predict the intended target along with its certainty about the position. The network then updates its estimate as it receives more observations, favoring the positions it is more certain about. To assess the proposed algorithm, we built a library of human hand trajectories reaching targets on a fine grid. Our experiments show the advantage of our algorithm over the state of the art in terms of classification accuracy.
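The paper's predictor is a trained recurrent network; as a simplified stand-in, the sketch below accumulates evidence per candidate target recurrently from successive hand samples and reports a softmax certainty. The target positions, 2-D setting, and gain `beta` are invented:

```python
import numpy as np

# Candidate targets on a (toy) grid; the paper uses a fine 3-D grid.
targets = np.array([[0.3, 0.0], [0.0, 0.3], [-0.3, 0.1]])

def predict(trajectory, beta=8.0):
    """Incremental target prediction: at each new hand sample, each
    target's score is updated by how well the current motion direction
    points toward it; softmax gives the running certainty."""
    scores = np.zeros(len(targets))          # recurrently updated state
    for prev, cur in zip(trajectory, trajectory[1:]):
        step = cur - prev
        if np.linalg.norm(step) < 1e-9:
            continue
        to_t = targets - cur
        # cosine between movement direction and direction to each target
        cos = (to_t @ step) / (np.linalg.norm(to_t, axis=1)
                               * np.linalg.norm(step) + 1e-9)
        scores += beta * cos                 # fold in the new observation
    p = np.exp(scores - scores.max())
    return p / p.sum()

# Hand moving straight toward the first target:
traj = np.linspace([0.0, 0.0], [0.25, 0.0], 6)
print(predict(traj).argmax())  # 0
```

Like the paper's network, the estimate sharpens as more of the trajectory is observed, which is what lets a collaborating robot commit early to the likely target.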

Book ChapterDOI
28 Nov 2018
TL;DR: A novel cognitive architecture specialized for Teaching Assistant (TA) social robotic platforms that is modular, minimalistic, extendable, and ROS-compatible; its capabilities adequately matched the needs of RASA, a social robotic platform aimed at teaching Persian Sign Language to hearing-impaired children.
Abstract: This paper proposes a novel cognitive architecture specialized for Teaching Assistant (TA) social robotic platforms. Designing such architectures could lead to a more systematic approach to using TA robots. The proposed architecture consists of four main blocks: the Perception, Memory, Logic, and Action Units. The designed cognitive architecture helps robots perform a variety of visual, acoustic, and spatial sub-tasks based on cognitive theories and modern educational methods. It also provides a way for an operator to control the robot with defined plans and teaching scenarios. The proposed architecture is modular, minimalistic, extendable, and ROS-compatible, and can help teaching-assistant robots be involved in common educational scenarios systematically. Our preliminary exploratory study was a case study that adopted the proposed architecture for RASA, a social robotic platform aimed at teaching Persian Sign Language (PSL) to hearing-impaired children. In the evaluation, we observed that the architecture’s capabilities adequately matched RASA’s needs for its applications in teaching sign language.

Book ChapterDOI
28 Nov 2018
TL;DR: An interaction method in which the eye-gaze behaviours of an in-car driving agent reflect the intentions of a simulated autonomous car, with the potential to enable human operators to perceive the autonomous car as a social entity.
Abstract: Autonomous cars have been gaining attention as a future transportation option, promising a reduction in human error and a safer, more energy-efficient, and more comfortable mode of transportation. However, eliminating human involvement may negatively impact the adoption of autonomous cars by impairing perceived safety and the enjoyment of driving. To achieve reliable interaction between an autonomous car and a human operator, the car should evince intersubjectivity, implying that it possesses the same intentions as the human operator. One critical social cue that lets humans understand the intentions of others is eye-gaze behaviour. This paper proposes an interaction method in which the eye-gaze behaviours of an in-car driving agent reflect the intentions of a simulated autonomous car, potentially enabling human operators to perceive the car as a social entity. We conducted a preliminary experiment to investigate whether an autonomous car would be perceived as possessing the same intentions as a human operator through the gaze-following behaviours of the driving agents, as compared to random gazing and to not using the driving agents at all. The results revealed that the gaze-following behaviour of the driving agents increases the perception of intersubjectivity. Under the proposed interaction method, the autonomous system was also perceived as safer and more enjoyable.
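At its core, intention-reflecting gaze amounts to aiming the agent's eyes at the car's next planned waypoint before the manoeuvre happens. A minimal geometric sketch, assuming the simulator exposes such a waypoint (positions and the agent's mounting height are illustrative):

```python
import math

def gaze_angles(agent_pos, waypoint):
    """Yaw/pitch (radians) that point the driving agent's eyes at the
    car's next planned waypoint, so gaze precedes the manoeuvre."""
    dx = waypoint[0] - agent_pos[0]
    dy = waypoint[1] - agent_pos[1]
    dz = waypoint[2] - agent_pos[2]
    yaw = math.atan2(dy, dx)                      # horizontal direction
    pitch = math.atan2(dz, math.hypot(dx, dy))    # vertical direction
    return yaw, pitch

# Agent mounted on the dashboard at 1.2 m; car plans to turn toward (10, 10).
yaw, pitch = gaze_angles((0.0, 0.0, 1.2), (10.0, 10.0, 1.2))
```

A random-gazing control condition would simply sample yaw/pitch uniformly instead of computing them from the plan.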

Book ChapterDOI
28 Nov 2018
TL;DR: A flexible, expressive prototype that augments an existing mobile robot platform in order to create intentional attribution through a previously developed design methodology, resulting in an altered perception of the non-anthropomorphic robotic system.
Abstract: In viewing and interacting with robots in social settings, users attribute character traits to the system. This attribution often occurs by coincidence, as a result of past experiences, rather than by intentional design. This paper presents a flexible, expressive prototype that augments an existing mobile robot platform to create intentional attribution through a previously developed design methodology, resulting in an altered perception of the non-anthropomorphic robotic system. The prototype allows customization through five modalities: customizable eyes, a simulated breath motion, movement, color, and form. Initial results with audience members show that, while participants found the robot likable, they did not consider it anthropomorphic. Moreover, individual viewers’ perceptions shifted according to performer interactions. Future work will leverage this prototype to modulate the reactions viewers might have to a mobile robot in a variety of environments.

Book ChapterDOI
28 Nov 2018
TL;DR: An application for teaching behaviors under conditions closer to the real world: it supports spoken instructions and remains compatible with the robot’s other purposes, introducing a novel architecture that enables 5 distinct algorithms to compete with each other.
Abstract: By enabling users to teach behaviors to robots, social robots become more adaptable and therefore more acceptable. We improved an application for teaching behaviors to support conditions closer to the real world: it supports spoken instructions and remains compatible with the robot’s other purposes. We introduce a novel architecture that enables 5 distinct algorithms to compete with each other, and a novel teaching algorithm that remains robust under these constraints: using linguistics and semantics, it can recognize when the dialogue context is adequate. We carried out an adaptation of a previous experiment so as to produce comparable results, demonstrated that all participants managed to teach new behaviors, and partially verified our hypotheses about how users naturally break down teaching instructions.
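The competition between algorithms can be pictured as each algorithm scoring its own confidence for the current utterance and an arbiter picking the winner. A minimal sketch under that assumption (the algorithms, linguistic cues, and scores below are invented for illustration, not the paper's five algorithms):

```python
def chitchat(utterance):
    # Low-confidence fallback that is always willing to respond.
    return (0.2, "nice weather!")

def teach_behavior(utterance):
    # A teaching algorithm that is confident only when the dialogue
    # context looks like an instruction (crude linguistic cue).
    if utterance.lower().startswith(("when ", "if ")):
        return (0.9, "learning rule: " + utterance)
    return (0.0, "")

def arbitrate(utterance, algorithms):
    """Let every algorithm propose (confidence, response); keep the winner."""
    scored = [alg(utterance) for alg in algorithms]
    return max(scored, key=lambda s: s[0])[1]

reply = arbitrate("When I wave, say hello", [chitchat, teach_behavior])
small_talk = arbitrate("lovely day", [chitchat, teach_behavior])
```

This arbitration style is what lets the teaching function coexist with the robot's other purposes: when the context is not adequate for teaching, another algorithm simply outbids it.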

Book ChapterDOI
28 Nov 2018
TL;DR: The results showed that the robotic hand was able to improve grasping strategies based on characteristics perceived by the algorithm, which achieved 99.96% accuracy on material recognition.
Abstract: An important challenge in dexterous grasping and manipulation is perceiving the characteristics of an object, such as fragility, rigidity, texture, mass, and density. In this paper, a novel way is proposed to find these important characteristics, which help in deciding grasping strategies. We collected near-infrared (NIR) spectra of objects, classified the spectra to perceive their materials, and then looked up the characteristics of the perceived material in a material-to-characteristics table. NIR spectra of six materials (ceramic, stainless steel, wood, cardboard, plastic, and glass) were collected using the SCiO sensor. A Multi-Layer Perceptron (MLP) neural network was implemented to classify the spectra, and a material-to-characteristics table was established to map each perceived material to its characteristics. The experimental results achieve 99.96% accuracy on material recognition. In addition, a grasping experiment was performed in which a robotic hand tried to grasp two objects that shared similar shapes but were made of different materials. The results showed that the robotic hand was able to improve its grasping strategies based on the characteristics perceived by our algorithm.
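The perceive-then-look-up pipeline can be sketched end to end on synthetic data (the spectra, the table values, and the tiny MLP below are all illustrative stand-ins for the SCiO measurements and the trained classifier):

```python
import numpy as np

rng = np.random.default_rng(1)

MATERIALS = ["ceramic", "steel", "wood", "cardboard", "plastic", "glass"]

# Hypothetical material-to-characteristics table (values are illustrative).
CHARACTERISTICS = {
    "ceramic":   {"fragile": True,  "rigid": True},
    "steel":     {"fragile": False, "rigid": True},
    "wood":      {"fragile": False, "rigid": True},
    "cardboard": {"fragile": False, "rigid": False},
    "plastic":   {"fragile": False, "rigid": False},
    "glass":     {"fragile": True,  "rigid": True},
}

# Synthetic "NIR spectra": each material gets a distinct template plus noise.
D, N = 32, 60  # spectral bins, samples per material
templates = rng.normal(size=(len(MATERIALS), D))
X = np.vstack([t + 0.05 * rng.normal(size=(N, D)) for t in templates])
y = np.repeat(np.arange(len(MATERIALS)), N)

# One-hidden-layer MLP trained with full-batch gradient descent on softmax loss.
H = 24
W1, b1 = 0.1 * rng.normal(size=(D, H)), np.zeros(H)
W2, b2 = 0.1 * rng.normal(size=(H, len(MATERIALS))), np.zeros(len(MATERIALS))
Y = np.eye(len(MATERIALS))[y]
for _ in range(300):
    hpre = X @ W1 + b1
    h = np.maximum(hpre, 0)                      # ReLU
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - Y) / len(X)                         # softmax cross-entropy gradient
    gh = (g @ W2.T) * (hpre > 0)                 # backprop through ReLU
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(0)

preds = np.argmax(np.maximum(X @ W1 + b1, 0) @ W2 + b2, axis=1)
train_acc = (preds == y).mean()

def perceive(spectrum):
    """Classify a spectrum, then look up the material's characteristics."""
    h = np.maximum(spectrum @ W1 + b1, 0)
    material = MATERIALS[int(np.argmax(h @ W2 + b2))]
    return material, CHARACTERISTICS[material]

material, traits = perceive(templates[5])  # a clean "glass"-template spectrum
```

A grasp planner would then condition on `traits` (e.g. reduce grip force when `fragile` is true), which is the kind of strategy adjustment the grasping experiment demonstrates.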

Book ChapterDOI
28 Nov 2018
TL;DR: Initial results with human subjects demonstrate an increased likelihood to rate a robot and a robotic shadow as algorithmically controlled, versus a human performer and a human-shaped VR avatar which were more likely rated as human actor controlled or split between algorithm/human actor.
Abstract: Robots in human-facing environments will move alongside human beings. This movement has both functional and expressive meaning and plays a crucial role in human perception of robots. Secondarily, how the robot is controlled (through methods like movement or programming, and drivers like oneself or an algorithm) factors into human perceptions. This paper outlines the use of an embodied movement installation, “The Loop”, to understand perceptions generated between humans and various technological agents, including a NAO robot and a virtual avatar. Participants were questioned about their perceptions of control in the various agents. Initial results with human subjects demonstrate an increased likelihood to rate a robot and a robotic shadow as algorithmically controlled, whereas a human performer and a human-shaped VR avatar were more likely rated as controlled by a human actor or split between algorithm and human actor. Participants also showed a tendency to rate their own performance in the exercise as needing improvement. Qualitative data, collected in the form of text and drawings, was open-ended and abstract. Drawings of humans and geometric shapes frequently appeared, as did the words “mirror”, “movement”, and variations on the word “awareness”.

Book ChapterDOI
28 Nov 2018
TL;DR: Investigating people’s evaluations of a drone with eyes versus one without implies that adding eyes to a drone that is designed to interact with humans may make this interaction more natural, and as such enable a successful introduction of social drones.
Abstract: Drones are often used in a context where they interact with human users. They, however, lack the social cues that their robotic counterparts have. If drones would possess such cues, would people respond to them more positively? This paper investigates people’s evaluations of a drone with eyes versus one without. Results show mainly positive effects, i.e. a drone with eyes is seen as more social and human-like than a drone without eyes, and that people are more willing to interact with it. These findings imply that adding eyes to a drone that is designed to interact with humans may make this interaction more natural, and as such enable a successful introduction of social drones.