
Showing papers on "Social robot published in 2018"


Journal ArticleDOI
15 Aug 2018
TL;DR: The potential of social robots in education is reviewed, the technical challenges are discussed, and how the robot’s appearance and behavior affect learning outcomes is considered.
Abstract: Social robots can be used in education as tutors or peer learners. They have been shown to be effective at increasing cognitive and affective outcomes and have achieved outcomes similar to those of human tutoring on restricted tasks. This is largely because of their physical presence, which traditional learning technologies lack. We review the potential of social robots in education, discuss the technical challenges, and consider how the robot's appearance and behavior affect learning outcomes.

747 citations


Journal ArticleDOI
TL;DR: It is posited that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.
Abstract: As robotics technology evolves, we believe that personal social robots will be one of the next big expansions in the robotics sector. Based on the accelerated advances in this multidisciplinary domain and the growing number of use cases, we can posit that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.

342 citations


Posted Content
TL;DR: SoPhie is presented; an interpretable framework based on Generative Adversarial Network (GAN), which leverages two sources of information, the path history of all the agents in a scene, and the scene context information, using images of the scene.
Abstract: This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie, an interpretable framework based on a Generative Adversarial Network (GAN), which leverages two sources of information: the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful at jointly modeling physical and social interactions. Our approach blends a social attention mechanism with a physical attention mechanism that helps the model learn where to look in a large scene and extract the parts of the image most relevant to the path, while the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of the GAN to generate more realistic samples and to capture the uncertain nature of future paths by modeling their distribution. Together, these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several trajectory forecasting benchmarks.
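The architecture described above combines a recurrent encoder of each agent's path history, a physical attention over scene image features, a social attention over neighboring agents, and a GAN-style generator fed with noise. The sketch below is a minimal, hypothetical PyTorch outline of that idea, not the authors' implementation; all module names, dimensions, and the decoding scheme are assumptions.

```python
# Minimal sketch of a SoPhie-style trajectory generator (hypothetical, not the
# authors' code): an LSTM encodes each agent's path history, attention modules
# weight scene features (physical) and neighbor encodings (social), and a
# decoder generates a future path from the fused context plus GAN-style noise.
import torch
import torch.nn as nn


class TrajectoryGenerator(nn.Module):
    def __init__(self, hidden=64, scene_feats=32, noise=16, horizon=12):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.phys_attn = nn.Linear(hidden + scene_feats, 1)    # score per scene region
        self.soc_attn = nn.Linear(2 * hidden, 1)               # score per neighbor
        self.decoder = nn.LSTM(hidden + scene_feats + hidden + noise, hidden,
                               batch_first=True)
        self.out = nn.Linear(hidden, 2)
        self.noise, self.horizon = noise, horizon

    def forward(self, history, neighbors, scene):
        # history: (B, T, 2) past xy; neighbors: (B, N, H) encoded neighbor paths
        # scene: (B, C, F) visual features for C scene regions
        _, (h, _) = self.encoder(history)
        h = h[-1]                                              # (B, H)
        # physical attention: weight scene regions by relevance to this agent
        p = torch.softmax(self.phys_attn(
            torch.cat([h.unsqueeze(1).expand(-1, scene.size(1), -1), scene], -1)
        ).squeeze(-1), dim=1)
        scene_ctx = (p.unsqueeze(-1) * scene).sum(dim=1)       # (B, F)
        # social attention: weight neighbors by relevance to this agent
        s = torch.softmax(self.soc_attn(
            torch.cat([h.unsqueeze(1).expand(-1, neighbors.size(1), -1), neighbors], -1)
        ).squeeze(-1), dim=1)
        soc_ctx = (s.unsqueeze(-1) * neighbors).sum(dim=1)     # (B, H)
        z = torch.randn(h.size(0), self.noise)                 # noise for multimodal futures
        ctx = torch.cat([h, scene_ctx, soc_ctx, z], dim=-1)
        steps = ctx.unsqueeze(1).repeat(1, self.horizon, 1)
        dec, _ = self.decoder(steps)
        return self.out(dec)                                   # (B, horizon, 2) future xy


# Toy usage: 4 agents, 8 observed steps, 3 neighbors, 10 scene regions.
gen = TrajectoryGenerator()
future = gen(torch.randn(4, 8, 2), torch.randn(4, 3, 64), torch.randn(4, 10, 32))
print(future.shape)  # torch.Size([4, 12, 2])
```

In a full GAN setup, a discriminator scoring (history, predicted future) pairs would train this generator adversarially; that half is omitted here for brevity.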

245 citations


Book
13 Nov 2018
TL;DR: The book begins with a study of mobile robot drives and corresponding kinematic and dynamic models, and discusses the sensors used in mobile robotics, and examines a variety of model-based, model-free, and vision-based controllers with unified proof of their stabilization and tracking performance.
Abstract: Introduction to Mobile Robot Control provides a complete and concise study of modeling, control, and navigation methods for wheeled non-holonomic and omnidirectional mobile robots and manipulators. The book begins with a study of mobile robot drives and corresponding kinematic and dynamic models, and discusses the sensors used in mobile robotics. It then examines a variety of model-based, model-free, and vision-based controllers with unified proof of their stabilization and tracking performance, also addressing the problems of path, motion, and task planning, along with localization and mapping topics. The book provides a host of experimental results, a conceptual overview of systemic and software mobile robot control architectures, and a tour of the use of wheeled mobile robots and manipulators in industry and society. Introduction to Mobile Robot Control is an essential reference, and is also a textbook suitable as a supplement for many university robotics courses. It is accessible to all and can be used as a reference for professionals and researchers in the mobile robotics field. It clearly and authoritatively presents mobile robot concepts. Features: richly illustrated throughout with figures and examples; key concepts demonstrated with a host of experimental and simulation examples; and no prior knowledge of the subject is required; each chapter commences with an introduction and background.

228 citations


Journal ArticleDOI
22 Aug 2018
TL;DR: Children with ASD showed improved joint attention after 1 month of in-home social skills training with an autonomous robot, with gains in joint attention skills with adults observed even when the robot was not present.
Abstract: Social robots can offer tremendous possibilities for autism spectrum disorder (ASD) interventions. To date, most studies with this population have used short, isolated encounters in controlled laboratory settings. Our study focused on a 1-month, home-based intervention for increasing social communication skills of 12 children with ASD between 6 and 12 years old using an autonomous social robot. The children engaged in a triadic interaction with a caregiver and the robot for 30 min every day to complete activities on emotional storytelling, perspective-taking, and sequencing. The robot encouraged engagement, adapted the difficulty of the activities to the child’s past performance, and modeled positive social skills. The system maintained engagement over the 1-month deployment, and children showed improvement on joint attention skills with adults when not in the presence of the robot. These results were also consistent with caregiver questionnaires. Caregivers reported less prompting over time and overall increased communication.

205 citations


Proceedings ArticleDOI
26 Feb 2018
TL;DR: New light is shed on what makes a robot look human, and the ABOT (Anthropomorphic roBOT) Database is introduced, which makes publicly accessible a powerful new tool for future research on robots’ human-likeness.
Abstract: Anthropomorphic robots, or robots with human-like appearance features such as eyes, hands, or faces, have drawn considerable attention in recent years. To date, what makes a robot appear human-like has been driven by designers' and researchers' intuitions, because a systematic understanding of the range, variety, and relationships among constituent features of anthropomorphic robots is lacking. To fill this gap, we introduce the ABOT (Anthropomorphic roBOT) Database, a collection of 200 images of real-world robots with one or more human-like appearance features (http://www.abotdatabase.info). Harnessing this database, Study 1 uncovered four distinct appearance dimensions (i.e., bundles of features) that characterize a wide spectrum of anthropomorphic robots, and Study 2 identified the dimensions and specific features that were most predictive of robots' perceived human-likeness. With data from both studies, we then created an online estimation tool to help researchers predict how human-like a new robot will be perceived given the presence of various appearance features. The present research sheds new light on what makes a robot look human, and makes publicly accessible a powerful new tool for future research on robots' human-likeness.
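The estimation tool described above maps the presence of appearance features to a predicted human-likeness rating. The sketch below illustrates the general shape of such a predictor; the feature names, weights, and baseline are illustrative placeholders, not the coefficients fitted in the paper.

```python
# Hypothetical sketch of an ABOT-style estimator: predict a perceived
# human-likeness score (0-100) from which appearance features a robot has.
# All weights below are made-up placeholders for illustration only.
FEATURE_WEIGHTS = {
    "torso": 6.0, "head": 5.0, "arms": 7.0, "hands": 8.0, "legs": 6.5,
    "face": 9.0, "eyes": 8.5, "eyelids": 4.0, "mouth": 5.5, "skin": 10.0,
}
BASELINE = 5.0  # assumed score when no human-like features are present


def predict_human_likeness(features):
    """Linear estimate over present features, clipped to the 0-100 rating scale."""
    score = BASELINE + sum(FEATURE_WEIGHTS.get(f, 0.0) for f in features)
    return max(0.0, min(100.0, score))


print(predict_human_likeness({"head", "eyes", "arms"}))   # sparsely humanoid robot
print(predict_human_likeness(set(FEATURE_WEIGHTS)))       # highly anthropomorphic robot
```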

162 citations


Journal ArticleDOI
TL;DR: It is concluded that research agendas need to be diversified, that the diversity of research participants needs to be broadened, and that robotics-intensified knowledge, skills, and attitudes need to be clarified and specified in defining robotics education in connection to computer science education.
Abstract: This study conducted a systematic and thematic review on existing literature in robotics education using robotics kits (not social robots) for young children (Pre-K and kindergarten through 5th grade). This study investigated: (1) the definition of robotics education; (2) thematic patterns of key findings; and (3) theoretical and methodological traits. The results of the review present a limitation of previous research in that it has focused on robotics education only as an instrumental means to support other subjects or STEM education. This study identifies that the findings of the existing research are weighted toward outcome-focused research. Lastly, this study addresses the fact that most of the existing studies used constructivist and constructionist frameworks not only to design and implement robotics curricula but also to analyze young children’s engagement in robotics education. Relying on the findings of the review, this study suggests clarifying and specifying robotics-intensified knowledge, skills, and attitudes in defining robotics education in connection to computer science education. In addition, this study concludes that research agendas need to be diversified and the diversity of research participants needs to be broadened. To do this, this study suggests employing social and cultural theoretical frameworks and critical analytical lenses by considering children’s historical, cultural, social, and institutional contexts in understanding young children’s engagement in robotics education.

131 citations


Journal ArticleDOI
TL;DR: A theoretical perspective is proposed that characterizes anthropomorphism as a basic mechanism of interaction and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots.
Abstract: Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favoured cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artefacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebut the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical experimentally-based ethical approach to social robotics, “synthetic ethics”, which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

124 citations


Journal ArticleDOI
TL;DR: This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” and identifying four modalities concerning social robots and the question of rights.
Abstract: This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

122 citations


Journal ArticleDOI
TL;DR: A novel method for human–robot collaboration is proposed in which the robot’s physical behaviour is adapted online to human motor fatigue, estimated from muscle activity measured by electromyography.
Abstract: In this paper, we propose a novel method for human–robot collaboration in which the robot’s physical behaviour is adapted online to the human’s motor fatigue. The robot starts as a follower and imitates the human. As the collaborative task is performed under the human’s lead, the robot gradually learns the parameters and trajectories related to the task execution. In the meantime, the robot monitors the human’s fatigue during task execution. When a predefined level of fatigue is indicated, the robot uses the learnt skill to take over physically demanding aspects of the task and lets the human recover some strength. The human remains present to perform the aspects of the collaborative task that the robot cannot fully take over and maintains overall supervision. The robot adaptation system is based on Dynamical Movement Primitives, Locally Weighted Regression, and Adaptive Frequency Oscillators. The estimation of human motor fatigue is carried out using a proposed online model based on muscle activity measured by electromyography.  We demonstrate the proposed approach with experiments on real-world co-manipulation tasks: material sawing and surface polishing.
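The abstract does not give the fatigue model itself, so the sketch below only illustrates the control flow it describes: the robot follows and learns, monitors an EMG-derived fatigue estimate, and takes over the learnt task once a threshold is crossed. The moving-average amplitude proxy, the threshold value, and the fake EMG stream are all assumptions for illustration.

```python
# Illustrative sketch only: a crude fatigue proxy (exponential moving average of
# rectified EMG amplitude) drives the "robot follows, then takes over" handover
# described in the abstract. Not the paper's online fatigue model.
import random  # stands in for a real EMG acquisition stream


class FatigueMonitor:
    def __init__(self, alpha=0.05, threshold=0.7):
        self.alpha = alpha          # smoothing factor of the moving average
        self.threshold = threshold  # hypothetical fatigue level that triggers handover
        self.level = 0.0

    def update(self, emg_sample):
        rectified = abs(emg_sample)
        self.level = (1 - self.alpha) * self.level + self.alpha * rectified
        return self.level


monitor = FatigueMonitor()
robot_leads = False
for t in range(500):
    sample = random.gauss(0.0, 0.3 + 0.002 * t)   # fake EMG whose amplitude grows over time
    if monitor.update(sample) > monitor.threshold and not robot_leads:
        robot_leads = True
        print(f"t={t}: fatigue threshold crossed, robot takes over the learnt task")
```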

122 citations


Journal ArticleDOI
15 Aug 2018
TL;DR: This study replicated the finding that adults are influenced by their peers but showed that they resist social pressure from a group of small humanoid robots, and showed that children conform to the robots.
Abstract: People are known to change their behavior and decisions to conform to others, even for obviously incorrect facts. Because of recent developments in artificial intelligence and robotics, robots are increasingly found in human environments, and there, they form a novel social presence. It is as yet unclear whether and to what extent these social robots are able to exert pressure similar to human peers. This study used the Asch paradigm, which shows how participants conform to others while performing a visual judgment task. We first replicated the finding that adults are influenced by their peers but showed that they resist social pressure from a group of small humanoid robots. Next, we repeated the study with 7- to 9-year-old children and showed that children conform to the robots. This raises opportunities as well as concerns for the use of social robots with young and vulnerable cross-sections of society; although conforming can be beneficial, the potential for misuse and the potential impact of erroneous performance cannot be ignored.

Journal ArticleDOI
TL;DR: A novel learning algorithm called “Reset-free Trial-and-Error” (RTE) is introduced that breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account.

Journal ArticleDOI
TL;DR: This paper proposes a method for implementing ethical behaviour in robots inspired by the simulation theory of cognition and implements a version of this architecture on a humanoid NAO robot so that it behaves according to Asimov’s laws of robotics.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: It is discovered that during times of tension, human teammates in a group with a robot making vulnerable statements were more likely to explain their failure to the group, console team members who had made mistakes, and laugh together, all actions that reduce the amount of tension experienced by the team.
Abstract: Successful teams are characterized by high levels of trust between team members, allowing the team to learn from mistakes, take risks, and entertain diverse ideas. We investigated a robot’s potential to shape trust within a team through the robot’s expressions of vulnerability. We conducted a between-subjects experiment (N = 35 teams, 105 participants) comparing the behavior of three human teammates collaborating with either a social robot making vulnerable statements or with a social robot making neutral statements. We found that, in a group with a robot making vulnerable statements, participants responded more to the robot’s comments and directed more of their gaze to the robot, displaying a higher level of engagement with the robot. Additionally, we discovered that during times of tension, human teammates in a group with a robot making vulnerable statements were more likely to explain their failure to the group, console team members who had made mistakes, and laugh together, all actions that reduce the amount of tension experienced by the team. These results suggest that a robot’s vulnerable behavior can have "ripple effects" on their human team members’ expressions of trust-related behavior.

Journal ArticleDOI
TL;DR: This paper suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.
Abstract: In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human-human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.

Journal ArticleDOI
TL;DR: An overview of the current state of emotion expression and perception during interactions with artificial agents is provided, as well as a clear articulation of the challenges and guiding principles to be addressed as the authors move ever closer to truly emotional artificial agents.
Abstract: Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here, we provide an overview of the current state of emotion expression and perception during interactions with artificial agents, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

Journal ArticleDOI
TL;DR: This paper aims to provide a general overview of recent attempts to enable robots to recognize human emotions and interact accordingly, and of the applications for which such sensitivity in a robot is most suitable.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: Survey results indicate preferences for varying levels of realism and detail in robot faces based on context, and indicate how the presence or absence of specific features affects perception of the face and the types of jobs the face would be appropriate for.
Abstract: Faces are critical in establishing the agency of social robots; however, building expressive mechanical faces is costly and difficult. Instead, many robots built in recent years have faces that are rendered onto a screen. This gives great flexibility in what a robot’s face can be and opens up a new design space with which to establish a robot’s character and perceived properties. Despite the prevalence of robots with rendered faces, there are no systematic explorations of this design space. Our work aims to fill that gap. We conducted a survey and identified 157 robots with rendered faces and coded them in terms of 76 properties. We present statistics, common patterns, and observations about this data set of faces. Next, we conducted two surveys to understand people’s perceptions of rendered robot faces and identify the impact of different face features. Survey results indicate preferences for varying levels of realism and detail in robot faces based on context, and indicate how the presence or absence of specific features affects perception of the face and the types of jobs the face would be appropriate for.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: A set of 28 different uni- and multimodal expressions for the basic emotions joy, sadness, fear, and anger was designed and validated using the most common output modalities (color, motion, and sound); the modalities differed in how effectively they communicated individual emotions.
Abstract: Artificial emotion display is a key feature of social robots to communicate internal states and behaviors in familiar human terms. While humanoid robots can draw on signals such as facial expressions or voice, emotions in appearance-constrained robots can only be conveyed through less-anthropomorphic output channels. While previous work focused on identifying specific expressional designs to convey a particular emotion, little work has been done to quantify the information content of different modalities and how they become effective in combination. Based on emotion metaphors that capture mental models of emotions, we systematically designed and validated a set of 28 different uni- and multimodal expressions for the basic emotions joy, sadness, fear and anger using the most common output modalities color, motion and sound. Classification accuracy and users’ confidence of emotion assignment were evaluated in an empirical study with 33 participants and a robot probe. The findings are distilled into a set of recommendations about which modalities are most effective in communicating basic artificial emotion. Combining color with planar motion offered the overall best cost/benefit ratio by making use of redundant multi-modal coding. Furthermore, modalities differed in their degree of effectiveness to communicate single emotions. Joy was best conveyed via color and motion, sadness via sound, fear via motion and anger via color.
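The findings above amount to a small expression policy for appearance-constrained robots: use the modality, or modality combination, found most effective for each basic emotion. The sketch below encodes that mapping directly from the abstract; the concrete colors, motions, and sounds are assumptions, since the paper's specific cue designs are not given here.

```python
# Minimal sketch of turning the reported findings into an expression policy for
# an appearance-constrained robot. Modality choices follow the abstract; the
# concrete cue descriptions are illustrative placeholders.
BEST_MODALITIES = {
    "joy":     ["color", "motion"],   # joy best conveyed via color and motion
    "sadness": ["sound"],             # sadness best conveyed via sound
    "fear":    ["motion"],            # fear best conveyed via motion
    "anger":   ["color"],             # anger best conveyed via color
}

CUES = {  # hypothetical cue parameters, not taken from the paper
    ("joy", "color"): "warm yellow pulse",
    ("joy", "motion"): "quick rocking",
    ("sadness", "sound"): "slow descending tone",
    ("fear", "motion"): "rapid retreat and tremble",
    ("anger", "color"): "saturated red flash",
}


def express(emotion):
    """Return the cue commands for the modalities most effective for this emotion."""
    return [CUES[(emotion, modality)] for modality in BEST_MODALITIES.get(emotion, [])]


for emotion in BEST_MODALITIES:
    print(emotion, "->", express(emotion))
```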

Proceedings Article
17 May 2018
TL;DR: A review of the literature on personality and robots highlights three major research areas, identifies gaps to be addressed and presents major conclusions from the literature.
Abstract: Personality has been identified as an important facilitator of human–robot interaction. Despite this, the research on personality in the human–robot interaction literature remains fragmented and lacks a coherent framework. This makes it difficult for scholars to comprehend what is known and what is not. This paper reviews the literature on personality and robots. This review: (1) highlights three major research areas, (2) identifies gaps to be addressed, and (3) presents major conclusions from the literature.

Journal ArticleDOI
TL;DR: A wearable affective robot is proposed for the first time that integrates the affective robot, social robot, brain wearable, and Wearable 2.0 devices, and which the authors argue can improve human health at the mental level while also meeting fashion requirements.
Abstract: With the development of artificial intelligence (AI), AI applications have greatly influenced and changed people’s daily lives. Here, a wearable affective robot that integrates the affective robot, social robot, brain wearable, and Wearable 2.0 is proposed for the first time. The proposed wearable affective robot is intended for a wide population, and we believe that it can improve human health at the mental level while meeting fashion requirements at the same time. In this paper, the architecture and design of an innovative wearable affective robot, dubbed Fitbot, are introduced from both hardware and algorithmic perspectives. In addition, the robot’s key functional component, the brain wearable device, is introduced in terms of hardware design, EEG data acquisition and analysis, user behavior perception, and algorithm deployment. EEG-based recognition of the user’s behavior is then realized. Through the continuous acquisition of in-depth and in-breadth data, Fitbot can gradually enrich the user’s life model, recognize the user’s intentions, and further understand the behavioral motivation behind the user’s emotions. The learning algorithm for life modeling embedded in Fitbot achieves a better user experience of affective social interaction. Finally, the application service scenarios and some challenging issues of a wearable affective robot are discussed.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: The results of a qualitative study with therapists are presented to inform social robotics and human–robot interaction for engagement in rehabilitative therapies, and to propose how SARs might augment, or offer more pro-active assistance than, existing technologies also designed to tackle this issue.
Abstract: In this paper we present the results of a qualitative study with therapists to inform social robotics and human robot interaction (HRI) for engagement in rehabilitative therapies. Our results add to growing evidence that socially assistive robots (SARs) could play a role in addressing patients' low engagement with self-directed exercise programmes. Specifically, we propose how SARs might augment or offer more pro-active assistance over existing technologies such as smartphone applications, computer software and fitness trackers also designed to tackle this issue. In addition, we present a series of design implications for such SARs based on therapists' expert knowledge and best practices extracted from our results. This includes an initial set of SAR requirements and key considerations concerning personalised and adaptive interaction strategies.

Journal ArticleDOI
TL;DR: Care staff perceptions of Paro and a look-alike non-robotic animal, including benefits and limitations in dementia care, identified that Paro had the potential to improve quality of life for people with dementia, whereas the Plush Toy had limitations when compared to Paro.
Abstract: Objectives: Social robots such as Paro, a therapeutic companion robot, have recently been introduced into dementia care as a means to reduce behavioural and psychological symptoms of dementia. The ...

Journal ArticleDOI
TL;DR: The results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions and can have a desirable impact on a collaborative human–robot behavior.
Abstract: We consider mobile service robots that carry out tasks with, for, and around humans in their environments. Speech combined with on-screen display are common mechanisms for autonomous robots to communicate with humans, but such communication modalities may fail for mobile robots due to spatio-temporal limitations. To enable a better human understanding of the robot given its mobility and autonomous task performance, we introduce the use of lights to reveal the dynamic robot state. We contribute expressive lights as a primary modality for the robot to communicate to humans useful robot state information. Such lights are persistent, non-invasive, and visible at a distance, unlike other existing modalities. Current programmable light arrays provide a very large animation space, which we address by introducing a finite set of parametrized signal shapes while still maintaining the needed animation design flexibility. We present a formalism for light animation control and an architecture to map the representation of robot state to the parametrized light animation space. The mapping generalizes to multiple light strips and even other expression modalities. We demonstrate our approach on CoBot, a mobile multi-floor service robot, and evaluate its validity through several user studies. Our results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions and can have a desirable impact on a collaborative human–robot behavior.
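The core idea above is a finite set of parametrized signal shapes driving a programmable light strip, with robot states mapped into that parameter space. The sketch below is a hedged illustration of that structure; the particular shapes, state names, colors, and periods are assumptions, not the paper's formalism.

```python
# Hedged sketch of the "parametrized signal shape" idea: a small set of waveform
# shapes, each controlled by a few parameters, plus a mapping from robot state
# to an animation. State names, colors, and periods are illustrative only.
import math


def brightness(shape, t, period):
    """Brightness in [0, 1] at time t for one of a finite set of signal shapes."""
    phase = (t % period) / period
    if shape == "blink":      # hard on/off square wave
        return 1.0 if phase < 0.5 else 0.0
    if shape == "pulse":      # smooth sinusoidal breathing
        return 0.5 * (1 + math.sin(2 * math.pi * phase))
    if shape == "solid":
        return 1.0
    raise ValueError(f"unknown shape: {shape}")


# Hypothetical mapping from robot state to (shape, RGB color, period in seconds).
STATE_TO_ANIMATION = {
    "waiting_for_elevator": ("pulse", (0, 0, 255), 2.0),
    "blocked_by_obstacle":  ("blink", (255, 80, 0), 0.5),
    "task_completed":       ("solid", (0, 255, 0), 1.0),
}

shape, color, period = STATE_TO_ANIMATION["blocked_by_obstacle"]
frame = [int(c * brightness(shape, t=0.1, period=period)) for c in color]
print(frame)  # RGB values to send to the light strip at t = 0.1 s
```

Because each animation is just a (shape, color, period) tuple, the same mapping generalizes to multiple light strips or other expression modalities, which is the flexibility the abstract emphasizes.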

Journal ArticleDOI
TL;DR: Three studies presented promising outcomes for reducing depressive symptoms in older adults following social robot interventions, and three studies showed decreased, but nonsignificant, trends in depression scores.
Abstract: Purpose: In recent years, there has been an increase in the number of studies using social robots to improve psychological well-being. This systematic review investigates the effect of social robot interventions for depression in older adults. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was used to identify and select existing studies. Nine electronic databases were searched for relevant studies. Methodological quality was assessed using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. Screening, data extraction, and synthesis were performed by three reviewers. Inclusion criteria covered original quantitative studies investigating social robots for depression in older adults. Findings: Seven studies were identified (six randomized controlled trials and one comparison study), all classified as good quality. Social robot interventions consisted of companion, communication, and health-monitoring robots. Three studies presented promising outcomes for reducing depressive symptoms in older adults following social robot interventions, and three studies showed decreased, but nonsignificant, trends in depression scores. Conclusions: The results highlight the potential of social robot interventions for reducing depression in older adults. However, the evidence is not strong enough to formulate recommendations on clinical effectiveness. Clinical relevance: Social robots are being used with increasing frequency to potentially provide personal support to older adults living in long-term care facilities. Social robots can be used to help alleviate depressive symptoms when used in group activities.

Journal ArticleDOI
TL;DR: The goal-directed model accounts for a considerable amount of the explained variance in behavioral intention, underlining the importance of affective and motivational factors in the intention to work with a social robot.

Journal ArticleDOI
21 Jun 2018
TL;DR: Results suggest robots that are intended to influence human behavior should be designed to have facial characteristics that people trust in humans and could be personalized to have the same gender as the user.
Abstract: The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest robots that are intended to influence human behavior should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.

Journal ArticleDOI
TL;DR: In this article, questions have arisen regarding how automation influences the way that news is processed and evaluated by audiences, and how this affects the quality of news articles.
Abstract: As more news articles are written via collaboration between journalists and algorithms, questions have arisen regarding how automation influences the way that news is processed and evaluated by aud...

Proceedings ArticleDOI
26 Feb 2018
TL;DR: This paper proposes a methodology for the design of robotic applications including these desired features, suitable for integration by researchers, industry, business, and government organisations, and reports how this methodology was successfully employed in an exploratory field study involving the trial implementation of a commercially available social humanoid robot at an airport.
Abstract: Research in robotics and human-robot interaction is becoming more and more mature. Additionally, more affordable social robots are being released commercially. Thus, industry is currently demanding ideas for viable commercial applications to situate social robots in public spaces and enhance customers’ experience. However, present literature in human-robot interaction does not provide a clear set of guidelines and a methodology to (i) identify commercial applications for robotic platforms able to position the users’ needs at the centre of the discussion and (ii) ensure the creation of a positive user experience. With this paper we propose to fill this gap by providing a methodology for the design of robotic applications including these desired features, suitable for integration by researchers, industry, business and government organisations. As we will show in this paper, we successfully employed this methodology for an exploratory field study involving the trial implementation of a commercially available, social humanoid robot at an airport.

Journal ArticleDOI
TL;DR: A critical look is taken at the case of a robotic tutor implemented in an elementary school for 3.5 months, where children repeatedly took turns interacting with the robot individually as well as in pairs, to explore what caused breakdowns in children's interactions with the robotic tutor.