
Showing papers on "Humanoid robot published in 2020"


Journal ArticleDOI
TL;DR: This work uses reinforcement learning (RL) to learn dexterous in-hand manipulation policies that can perform vision-based object reorientation on a physical Shadow Dexterous Hand, and these policies transfer to the physical robot despite being trained entirely in simulation.
Abstract: We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies that can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed...

1,428 citations


Journal ArticleDOI
TL;DR: The results indicate that a robot's capacity to feel elicits stronger feelings of eeriness than a robot's capacity to plan ahead and to exert self-control, which in turn elicits more eeriness than a robot without a mind.

74 citations


Journal ArticleDOI
09 Jan 2020-BMJ Open
TL;DR: The available evidence on implementation factors of socially assistive humanoid robots for older adults is limited, mainly focusing on aspects at the individual level and on exploring acceptance of this technology.
Abstract: Objectives Socially assistive humanoid robots are considered a promising technology to tackle the challenges in health and social care posed by the growth of the ageing population. The purpose of our study was to explore the current evidence on barriers and enablers for the implementation of humanoid robots in health and social care. Design Systematic review of studies entailing hands-on interactions with a humanoid robot. Setting From April 2018 to June 2018, databases were searched using a combination of the same search terms for articles published during the last decade. Data collection was conducted by using the Rayyan software, a standardised predefined grid, and a risk of bias and a quality assessment tool. Participants Post-experimental data were collected and analysed for a total of 420 participants. Participants comprised: older adults (n=307) aged ≥60 years, with no or some degree of age-related cognitive impairment, residing either in residential care facilities or at their home; care home staff (n=106); and informal caregivers (n=7). Primary outcomes Identification of enablers and barriers to the implementation of socially assistive humanoid robots in health and social care, and consequent insights and impact. Future developments to inform further research. Results Twelve studies met the eligibility criteria and were included. None of the selected studies had an experimental design; hence overall quality was low, with high risks of biases. Several studies had no comparator, no baseline, small samples, and self-reported measures only. Within this limited evidence base, the enablers found were enjoyment, usability, personalisation and familiarisation. Barriers were related to technical problems, to the robots’ limited capabilities and the negative preconceptions towards the use of robots in healthcare. 
Factors which produced mixed results were the robot’s human-like attributes, previous experience with technology and views of formal and informal carers. Conclusions The available evidence related to implementation factors of socially assistive humanoid robots for older adults is limited, mainly focusing on aspects at individual level, and exploring acceptance of this technology. Investigation of elements linked to the environment, organisation, societal and cultural milieu, policy and legal framework is necessary. PROSPERO registration number CRD42018092866.

73 citations


Proceedings ArticleDOI
21 Apr 2020
TL;DR: The study revealed that such humanoid robots can work in a care home, but that a moderating person who is in control of the robot is needed.
Abstract: Ageing societies and the associated pressure on the care systems are major drivers for new developments in socially assistive robotics. To better understand the real-world potential of robot-based assistance, we undertook a 10-week case study in a care home involving groups of residents, caregivers and managers as stakeholders. We identified both enablers and barriers to the potential implementation of robot systems. The study employed the robot platform Pepper, which was deployed with a view to better understanding multi-domain interventions with a robot supporting physical activation, cognitive training and social facilitation. We employed the robot in a group setting in a care facility over the course of 10 weeks and 20 sessions, observing how stakeholders, including residents and caregivers, appropriated, adapted to, and perceived the robot. We also conducted interviews with 11 residents and caregivers. Our results indicate that the residents were positively engaged in the training sessions that were moderated by the robot. The study revealed that such humanoid robots can work in a care home, but that a moderating person who is in control of the robot is needed.

65 citations


Journal ArticleDOI
27 Jun 2020-Sensors
TL;DR: Various BCI applications such as tele-presence, grasping of objects, navigation, etc. that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task are discussed.
Abstract: A Brain-Computer Interface (BCI) acts as a communication mechanism using brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, such as in Passive BCI. This is particularly beneficial for those who have severe motor disabilities. Traditional BCI systems have been dependent only on brain signals recorded using Electroencephalography (EEG) and have used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications such as tele-presence, grasping of objects, navigation, etc. that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also includes a review of the methods and system design used in the discussed applications.

58 citations


Proceedings ArticleDOI
09 Mar 2020
TL;DR: This study suggests that perceived occupational competency is a better predictor for human trust than robot gender or participant gender, and gendering in robot design should be considered critically in the context of the application by designers.
Abstract: The attribution of human-like characteristics onto humanoid robots has become a common practice in Human-Robot Interaction by designers and users alike. Robot gendering, the attribution of gender onto a robotic platform via voice, name, physique, or other features is a prevalent technique used to increase aspects of user acceptance of robots. One important factor relating to acceptance is user trust. As robots continue to integrate themselves into common societal roles, it will be critical to evaluate user trust in the robot’s ability to perform its job. This paper examines the relationship among occupational gender-roles, user trust and gendered design features of humanoid robots. Results from the study indicate that there was no significant difference in the perception of trust in the robot’s competency when considering the gender of the robot. This expands the findings found in prior efforts that suggest performance-based factors have larger influences on user trust than the robot’s gender characteristics. In fact, our study suggests that perceived occupational competency is a better predictor for human trust than robot gender or participant gender. As such, gendering in robot design should be considered critically in the context of the application by designers. Such precautions would reduce the potential for robotic technologies to perpetuate societal gender stereotypes. CCS CONCEPTS Human-centered computing → Empirical studies in HCI. ACM Reference Format: De’Aira Bryant, Jason Borenstein and Ayanna Howard. 2020. Why Should We Gender? The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency. In Proceedings of 2020 ACM Conference on Human-Robot Interaction (HRI’20), March 23-26, 2020, Cambridge, UK. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3319502.3374778

55 citations


Book ChapterDOI
01 Jan 2020
TL;DR: The novelty of this work lies in the creation of a systematic approach for developing dynamic walking gaits on 3D humanoid robots: from formulating the hybrid system model, to gait optimization, to experimental validation, iteratively refined to produce multi-contact 3D walking in experiment.
Abstract: This paper presents the meta-algorithmic approach used to realize multi-contact walking on the humanoid robot, DURUS. This systematic methodology begins by decomposing human walking into a sequence of distinct events (e.g. heel-strike, toe-strike, and toe push-off). These events are converted into an alternating sequence of domains and guards, resulting in a hybrid system model of the locomotion. Through the use of a direct collocation based optimization framework, a walking gait is generated for the hybrid system model emulating human-like multi-contact walking behaviors; additional constraints are iteratively added and shaped from experimental evaluation to reflect the machine's practical limitations. The synthesized gait is analyzed directly on hardware, wherein feedback regulators are introduced which stabilize the walking gait, e.g., by modulating foot placement. The end result is an energy-optimized walking gait that is physically implementable on hardware. The novelty of this work lies in the creation of a systematic approach for developing dynamic walking gaits on 3D humanoid robots: from formulating the hybrid system model, to gait optimization, to experimental validation, iteratively refined to produce multi-contact 3D walking in experiment.

55 citations


Journal ArticleDOI
TL;DR: An intrinsically stable Model Predictive Control framework for humanoid gait generation that incorporates a stability constraint in the formulation is presented and it is proved that recursive feasibility guarantees stability of the CoM/ZMP dynamics.
Abstract: In this article, we present an intrinsically stable Model Predictive Control (IS-MPC) framework for humanoid gait generation that incorporates a stability constraint in the formulation. The method uses as its prediction model a dynamically extended Linear Inverted Pendulum with Zero Moment Point (ZMP) velocities as control inputs, producing in real time a gait (including footsteps with timing) that realizes omnidirectional motion commands coming from an external source. The stability constraint links future ZMP velocities to the current state so as to guarantee that the generated Center of Mass (CoM) trajectory is bounded with respect to the ZMP trajectory. Since the MPC control horizon is finite, only part of the future ZMP velocities are decision variables; the remaining part, called the tail, must be either conjectured or anticipated using preview information on the reference motion. Several options for the tail are discussed, each corresponding to a specific terminal constraint. A feasibility analysis of the generic MPC iteration is developed and used to obtain sufficient conditions for recursive feasibility. Finally, we prove that recursive feasibility guarantees stability of the CoM/ZMP dynamics. Simulation and experimental results on NAO and HRP-4 are presented to highlight the performance of IS-MPC.
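The prediction model and the role of the stability constraint can be illustrated with a toy 1-D version of the dynamically extended LIP. The feedback gain below is an illustrative stand-in for the MPC solution, not the paper's controller:

```python
# Toy 1-D dynamically extended Linear Inverted Pendulum (LIP) with the
# ZMP velocity as control input, in the spirit of the IS-MPC prediction
# model. Gains and dimensions are illustrative.
G, H = 9.81, 0.8            # gravity and CoM height
OMEGA = (G / H) ** 0.5      # LIP natural frequency

def lip_step(c, cd, z, u, dt=0.01):
    """Euler step of c'' = omega^2 (c - z), z' = u."""
    cdd = OMEGA ** 2 * (c - z)
    return c + dt * cd, cd + dt * cdd, z + dt * u

def divergent(c, cd):
    """Divergent component of motion; IS-MPC's stability constraint is
    what keeps it bounded with respect to the ZMP."""
    return c + cd / OMEGA

c, cd, z = 0.02, 0.0, 0.0   # small initial CoM offset
for _ in range(200):
    u = 10.0 * (divergent(c, cd) - z)   # push the ZMP toward the divergent component
    c, cd, z = lip_step(c, cd, z, u)

assert abs(divergent(c, cd) - z) < 1e-3  # CoM stays bounded w.r.t. the ZMP
```

With a gain larger than the natural frequency, the divergent-component error contracts at every step, which is the boundedness property the stability constraint enforces over the prediction horizon.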

52 citations


Journal ArticleDOI
06 May 2020
TL;DR: A fresh vision of artificial intelligence (AI) research is offered by suggesting a simplification to two goals: emulation to understand human abilities to build systems that perform tasks as well as or better than humans and application of AI methods to build widely used products and services.
Abstract: Researchers’ goals shape the questions they raise, collaborators they choose, methods they use, and outcomes of their work. This article offers a fresh vision of artificial intelligence (AI) research by suggesting a simplification to two goals: 1) emulation to understand human abilities to build systems that perform tasks as well as or better than humans and 2) application of AI methods to build widely used products and services. Researchers and developers for each goal can fruitfully work along their desired paths, but this article is intended to limit the problems that arise when assumptions from one goal are used to drive work on the other goal. For example, autonomous humanoid robots are prominent with emulation researchers, but application developers avoid them, in favor of tool-like appliances or teleoperated devices for widely used commercial products and services. This article covers four such mismatches in goals that affect AI-guided application development: 1) intelligent agent or powerful tool; 2) simulated teammate or teleoperated device; 3) autonomous system or supervisory control; and 4) humanoid robot or mechanoid appliance. This article clarifies these mismatches to facilitate the discovery of workable compromise designs that will accelerate human-centered AI applications research. A greater emphasis on human-centered AI could reduce AI’s existential threats and increase benefits for users and society, such as in business, education, healthcare, environmental preservation, and community safety.

52 citations


Journal ArticleDOI
TL;DR: Investigating the acquisition, loss, and restoration of trust when preschool and school-age children played with either a human or a humanoid robot in vivo found a dichotomy between attributions of mental states to the human and robot and children’s behavior.
Abstract: Studying trust in the context of human-robot interaction is of great importance given the increasing relevance and presence of robotic agents in the social sphere, including educational and clinical. We investigated the acquisition, loss, and restoration of trust when preschool and school-age children played with either a human or a humanoid robot in vivo. The relationship between trust and the representation of the quality of attachment relationships, Theory of Mind, and executive function skills was also investigated. Additionally, to outline children's beliefs about the mental competencies of the robot, we further evaluated the attribution of mental states to the interactive agent. In general, no substantial differences were found in children's trust in the play partner as a function of agency (human or robot). Nevertheless, 3-year-olds showed a trend toward trusting the human more than the robot, as opposed to 7-year-olds, who displayed the reverse pattern. These findings align with results showing that, for 3- and 7-year-olds, the cognitive ability to switch was significantly associated with trust restoration in the human and the robot, respectively. Additionally, supporting previous findings, we found a dichotomy between attributions of mental states to the human and the robot and children's behavior: although children attributed significantly lower mental states to the robot than to the human, in the Trusting Game they behaved in a similar way toward both. Altogether, the results of this study highlight that similar psychological mechanisms are at play when children establish a novel trustful relationship with a human or robot partner.
Furthermore, the findings shed light on the interplay - during development - between children's quality of attachment relationships and the development of a Theory of Mind, which act differently on trust dynamics as a function of the children's age as well as the interactive partner's nature (human vs. robot).

51 citations


Journal ArticleDOI
TL;DR: Safe Reinforcement Learning assumes the existence of a safe baseline policy that permits the humanoid to walk, and probabilistically reuses such a policy to learn a better one; the policy is represented following a case-based approach.

Journal ArticleDOI
TL;DR: A conservative reformulation of this trajectory generation problem as a convex 3-D linear program, named convex resolution of centroidal dynamic trajectories (CROC), which demonstrates that the solution space covered by CROC is large enough to achieve the automated planning of a large variety of locomotion tasks for different robots.
Abstract: Synthesizing legged locomotion requires planning one or several steps ahead (literally): when and where, and with which effector should the next contact(s) be created between the robot and the environment? Validating a contact candidate implies, at a minimum, the resolution of a slow, nonlinear optimization problem, to demonstrate that a center of mass (CoM) trajectory, compatible with the contact transition constraints, exists. We propose a conservative reformulation of this trajectory generation problem as a convex 3-D linear program, named convex resolution of centroidal dynamic trajectories (CROC). It results from the observation that if the CoM trajectory is a polynomial with only one free variable coefficient, the nonlinearity of the problem disappears. This has two consequences. On the positive side, in terms of computation times, CROC outperforms the state of the art by at least one order of magnitude, and makes it possible to consider interactive applications (with a planning time roughly equal to the motion time). On the negative side, in our experiments, our approach finds a majority of the feasible trajectories found by a nonlinear solver, but not all of them. Still, we demonstrate that the solution space covered by CROC is large enough to achieve the automated planning of a large variety of locomotion tasks for different robots, demonstrated in simulation and on the real HRP-2 robot, several of which were rarely seen before. Another significant contribution is the introduction of a Bezier curve representation of the problem, which guarantees that the constraints of the CoM trajectory are verified continuously, and not only at discrete points as traditionally done. This formulation is lossless, and results in more robust trajectories. It is not restricted to CROC, but could rather be integrated with any method from the state of the art.
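The key convexity observation can be checked numerically: if a Bezier CoM trajectory has a single free control point, every evaluated point on the curve (and hence any linear constraint on it) is affine in that free variable. A minimal sketch, with an illustrative cubic curve rather than the paper's actual parameterization:

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] with De Casteljau's algorithm."""
    pts = np.asarray(ctrl, dtype=float)
    while pts.shape[0] > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def com(y, t=0.5):
    """Cubic CoM coordinate with one free control point y (the CROC setting)."""
    return bezier([0.0, y, 1.0, 1.0], t)

# com(y) is affine in y, so constraints on the CoM yield a linear program:
y1, y2, a = 0.2, 0.8, 0.3
lhs = com(a * y1 + (1 - a) * y2)
rhs = a * com(y1) + (1 - a) * com(y2)
assert abs(lhs - rhs) < 1e-12
```

This affinity in the single free coefficient is exactly why the nonlinearity of the centroidal feasibility problem disappears in CROC.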

Journal ArticleDOI
TL;DR: Overall, the study shows that students do not have the intention to rely on social robots for learning purposes at the current level of state-of-the-art technology: behavioural intention reaches only 36.6% of the theoretical maximum.
Abstract: This study investigates the acceptance of social robots by higher education students in the social sciences. Pepper, a humanoid social robot from SoftBank Robotics, provided a sample of its capabilities during a first semester, large-scale, university course, "Introduction to academic writing". From this course, 462 freshmen participated in our survey. The unified theory of acceptance and use of technology (UTAUT) acts as the conceptual framework, and partial least squares structural equation modelling (PLS-SEM) as the method for data analysis. The four perceived characteristics (trustworthiness, adaptiveness, social presence and appearance) all predict the intention to use the robot for learning purposes; anxiety regarding making mistakes in handling the robot and about privacy issues are not significant predictors. An importance-performance map analysis indicated adaptiveness as the robot's most important characteristic for predicting student behavioural intention. Overall, however, the study shows that students do not have the intention to rely on social robots for learning purposes at the current level of state-of-the-art technology: behavioural intention reaches only 36.6% of the theoretical maximum.
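The "36.6% of the theoretical maximum" figure is the usual rescaling of a mean questionnaire score onto a 0-100 range. A sketch of that computation; the scale bounds and mean below are illustrative values chosen only to reproduce the reported percentage, not the study's data:

```python
def pct_of_theoretical_max(mean_score, scale_min, scale_max):
    """Rescale a mean item score to the 0-100 'percent of maximum' range."""
    return 100.0 * (mean_score - scale_min) / (scale_max - scale_min)

# e.g. a hypothetical mean of 3.196 on a 1-to-7 Likert scale gives 36.6%
assert abs(pct_of_theoretical_max(3.196, 1, 7) - 36.6) < 0.01
```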

Journal ArticleDOI
TL;DR: A hybridization of the Dynamic Window Approach and the Teaching–Learning-Based Optimization technique, and its implementation on the NAO humanoid robot for navigation, are presented; the results show that the proposed technique is robust and efficient for the path planning of humanoid robots.

Journal ArticleDOI
01 Jan 2020
TL;DR: In this paper, a graph network classifier is trained using symbolic spatial object relations from raw RGB-D video data captured from the robot's point of view in order to build graph-based scene representations.
Abstract: Recognizing human actions is a vital task for a humanoid robot, especially in domains like programming by demonstration. Previous approaches on action recognition primarily focused on the overall prevalent action being executed, but we argue that bimanual human motion cannot always be described sufficiently with a single action label. We present a system for framewise action classification and segmentation in bimanual human demonstrations. The system extracts symbolic spatial object relations from raw RGB-D video data captured from the robot's point of view in order to build graph-based scene representations. To learn object-action relations, a graph network classifier is trained using these representations together with ground truth action labels to predict the action executed by each hand. We evaluated the proposed classifier on a new RGB-D video dataset showing daily action sequences focusing on bimanual manipulation actions. It consists of 6 subjects performing 9 tasks with 10 repetitions each, which leads to 540 video recordings with 2 hours and 18 minutes total playtime and per-hand ground truth action labels for each frame. We show that the classifier is able to reliably identify (action classification macro F1-score of 0.86) the true executed action of each hand within its top 3 predictions on a frame-by-frame basis without prior temporal action segmentation.
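The symbolic spatial object relations that feed such a graph-based scene representation can be sketched as simple geometric predicates over object positions. The relation names, threshold, and scene below are illustrative, not the paper's ontology:

```python
# Toy extraction of symbolic spatial relations from 3-D object positions,
# producing the directed edges of a graph-based scene representation.
def spatial_relations(objects, contact_dist=0.05):
    """objects: dict name -> (x, y, z). Returns directed (a, relation, b) edges."""
    edges = []
    names = sorted(objects)
    for a in names:
        for b in names:
            if a == b:
                continue
            ax, ay, az = objects[a]
            bx, by, bz = objects[b]
            if az > bz + contact_dist:
                edges.append((a, "above", b))
            dist = ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
            if dist < contact_dist:
                edges.append((a, "contact", b))
    return edges

scene = {"cup": (0.1, 0.0, 0.15), "table": (0.1, 0.0, 0.0)}
rels = spatial_relations(scene)
assert ("cup", "above", "table") in rels
```

A graph network classifier would then consume these edges, together with per-object node features, to predict the per-hand action label frame by frame.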

Journal ArticleDOI
10 Feb 2020
TL;DR: In this article, a memory of motion based on a database of robot paths is proposed to provide good initial guesses for motion planning, which can be used as a metric to choose between several possible goals and using an ensemble method to combine different function approximators results in a significantly improved warm-starting performance.
Abstract: Trajectory optimization for motion planning requires good initial guesses to obtain good performance. In our proposed approach, we build a memory of motion based on a database of robot paths to provide good initial guesses. The memory of motion relies on function approximators and dimensionality reduction techniques to learn the mapping between the tasks and the robot paths. Three function approximators are compared: k-Nearest Neighbor, Gaussian Process Regression, and Bayesian Gaussian Mixture Regression. In addition, we show that the memory can be used as a metric to choose between several possible goals, and using an ensemble method to combine different function approximators results in a significantly improved warm-starting performance. We demonstrate the proposed approach with motion planning examples on the dual-arm robot PR2 and the humanoid robot Atlas.

Journal ArticleDOI
TL;DR: Comparisons of the attribution of mental states to two humanoid robots, NAO and Robovie, which differed in the degree of anthropomorphism show that children tend to anthropomorphize humanoid robots that also present some mechanical characteristics, such as Robovie.
Abstract: Recent technological developments in robotics have driven the design and production of different humanoid robots. Several studies have highlighted that the presence of human-like physical features could lead both adults and children to anthropomorphize the robots. In the present study we aimed to compare the attribution of mental states to two humanoid robots, NAO and Robovie, which differed in the degree of anthropomorphism. Children aged 5, 7, and 9 years were required to attribute mental states to the NAO robot, which presents more human-like characteristics compared to the Robovie robot, whose physical features look more mechanical. The results on mental state attribution as a function of children's age and robot type showed that 5-year-olds have a greater tendency to anthropomorphize robots than older children, regardless of the type of robot. Moreover, the findings revealed that, although children aged 7 and 9 years attributed a certain degree of human-like mental features to both robots, they attributed greater mental states to NAO than Robovie compared to younger children. These results generally show that children tend to anthropomorphize humanoid robots that also present some mechanical characteristics, such as Robovie. Nevertheless, age-related differences showed that robots should be endowed with physical characteristics closely resembling human ones to increase older children's perception of human likeness. These findings have important implications for the design of robots, which also needs to consider the user's target age, as well as for the generalizability issue of research findings that are commonly associated with the use of specific types of robots.

Journal ArticleDOI
Jung-Hoon Kim1
TL;DR: This comprehensive review of the multi-axis force-torque sensors used in current state-of-the-art humanoid robots, grounded in an understanding of biped walking, the zero-moment point, and the ground reaction force, will facilitate the development of force-torque sensors for humanoid robots and help extend their application to the various fields of service robots.
Abstract: Recent advances in mobility, manipulation, and intelligence of robots have promoted the usability of humanoid robots to support humans in their daily lives in the future. The multi-axis force-torque sensor is an essential sensor for the biped humanoid robot to maintain balance during walking and running, since it is used to calculate the zero-moment point, the criterion of dynamic stability. Force-torque sensors will be widely used in the future because they are essential for service robots to interact with people in unstructured environments. However, due to special design considerations and requirements, it is difficult to find a suitable commercial force-torque sensor for biped humanoid robots, and prices are very high. This paper reviews the multi-axis force-torque sensors used in current state-of-the-art humanoid robots based on the understanding of biped walking, zero-moment point, and ground reaction force. From an in-depth analysis of relevant information, sensor requirements are discussed with the robot performance. In addition, the structural design of the sensors is classified into four types and described in detail. This comprehensive review will facilitate the development of force-torque sensors in humanoid robots and will be helpful in extending their application to the various fields of service robots.
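The ZMP computation that such a foot-mounted sensor enables is a short formula over the measured wrench. A sketch with the sensor frame at the sole and z pointing up; the offset handling and numbers are illustrative:

```python
# ZMP (center of pressure) from a 6-axis force-torque sensor in the foot.
# Sensor frame at the sole, z up; d is the sensor height above the sole.
def zmp_from_ft(fx, fy, fz, tx, ty, d=0.0):
    """Return (px, py), the zero-moment point in the sensor frame."""
    if fz <= 0:
        raise ValueError("foot not loaded")
    px = (-ty - fx * d) / fz
    py = ( tx - fy * d) / fz
    return px, py

# Pure vertical load through the sensor origin -> ZMP at the origin.
assert zmp_from_ft(0.0, 0.0, 400.0, 0.0, 0.0) == (0.0, 0.0)
# A pitch torque shifts the ZMP forward along x.
px, py = zmp_from_ft(0.0, 0.0, 400.0, 0.0, -20.0)
assert abs(px - 0.05) < 1e-12 and py == 0.0
```

Keeping this point inside the support polygon is the dynamic-balance criterion the review's walking background refers to.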

Journal ArticleDOI
TL;DR: It is argued that it is not necessary to optimize walking over several steps to ensure gait viability and it is sufficient to merely select the next step timing and location, and proposed a novel walking pattern generator that optimally selects step location and timing at every control cycle.
Abstract: Step adjustment can improve the gait robustness of biped robots; however, the adaptation of step timing is often neglected as it gives rise to nonconvex problems when optimized over several footsteps. In this article, we argue that it is not necessary to optimize walking over several steps to ensure gait viability and show that it is sufficient to merely select the next step timing and location. Using this insight, we propose a novel walking pattern generator that optimally selects step location and timing at every control cycle. Our approach is computationally simple compared to standard approaches in the literature, yet guarantees that any viable state will remain viable in the future. We propose a swing foot adaptation strategy and integrate the pattern generator with an inverse dynamics controller that does not explicitly control the center of mass nor the foot center of pressure. This is particularly useful for biped robots with limited control authority over their foot center of pressure, such as robots with point feet or passive ankles. Extensive simulations on a humanoid robot with passive ankles demonstrate the capabilities of the approach in various walking situations, including external pushes and foot slippage, and emphasize the importance of step timing adaptation to stabilize walking.
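The one-step viability idea can be illustrated with the instantaneous capture point of the Linear Inverted Pendulum: stepping onto it brings the pendulum to rest, and a faster CoM requires a longer step. This is the textbook capture point, not the paper's full step-location-and-timing optimizer; the CoM height is illustrative:

```python
# Instantaneous capture point of the LIP for a 0.8 m CoM height.
OMEGA = (9.81 / 0.8) ** 0.5  # LIP natural frequency

def capture_point(c, cd):
    """c: CoM position, cd: CoM velocity (1-D). Stepping here stops the LIP."""
    return c + cd / OMEGA

# At rest the capture point is under the CoM; a push moves it forward.
assert capture_point(0.0, 0.0) == 0.0
assert capture_point(0.0, 0.35) > capture_point(0.0, 0.1)
```

Selecting the next footstep near this point (and adapting when to take it) is what keeps any viable state viable at the next control cycle.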

Journal ArticleDOI
TL;DR: In this paper, the authors present an experimental study with 81 kindergarten children on the memorization of two tales narrated by a humanoid robot, and the variables of the study are the content of the tales (knowledge or emotional) and the different social behaviour of the narrators: static human, static robot, expressive human, and expressive robot.
Abstract: Robots are versatile devices that are promising tools for supporting teaching and learning in the classroom or at home. In fact, robots can be engaging and motivating, especially for young children. This paper presents an experimental study with 81 kindergarten children on the memorization of two tales narrated by a humanoid robot. The variables of the study are the content of the tales (knowledge or emotional) and the different social behaviour of the narrators: static human, static robot, expressive human, and expressive robot. Results suggest a positive effect of the expressive behaviour in robot storytelling, whose effectiveness is comparable to that of a human with the same behaviour and greater than that of a static, inexpressive human. Higher efficacy is achieved by the robot in the tale with knowledge content, while the limited capability to express emotions made the robot less effective in the tale with emotional content.

Journal ArticleDOI
10 Feb 2020
TL;DR: This work investigated the application of haptic feedback control and deep reinforcement learning to robot-assisted dressing and found that training policies for specific impairments dramatically improved performance; that controller execution speed could be scaled after training to reduce the robot's speed without steep reductions in performance.
Abstract: We investigated the application of haptic feedback control and deep reinforcement learning (DRL) to robot-assisted dressing. Our method uses DRL to simultaneously train human and robot control policies as separate neural networks using physics simulations. In addition, we modeled variations in human impairments relevant to dressing, including unilateral muscle weakness, involuntary arm motion, and limited range of motion. Our approach resulted in control policies that successfully collaborate in a variety of simulated dressing tasks involving a hospital gown and a T-shirt. In addition, our approach resulted in policies trained in simulation that enabled a real PR2 robot to dress the arm of a humanoid robot with a hospital gown. We found that training policies for specific impairments dramatically improved performance; that controller execution speed could be scaled after training to reduce the robot's speed without steep reductions in performance; that curriculum learning could be used to lower applied forces; and that multi-modal sensing, including a simulated capacitive sensor, improved performance.

Journal ArticleDOI
05 Jun 2020
TL;DR: The proposed estimation system, called Pronto, is an Extended Kalman Filter that fuses IMU and leg odometry sensing for pose and velocity estimation; it can also integrate pose corrections from visual and LIDAR odometry to correct pose drift in a loosely coupled manner.
Abstract: In this paper, we present a modular and flexible state estimation framework for legged robots operating in real-world scenarios, where environmental conditions, such as occlusions, low light, rough terrain, and dynamic obstacles can severely impair estimation performance. At the core of the proposed estimation system, called Pronto, is an Extended Kalman Filter (EKF) that fuses IMU and Leg Odometry sensing for pose and velocity estimation. We also show how Pronto can integrate pose corrections from visual and LIDAR odometry to correct pose drift in a loosely coupled manner. This allows it to have a real-time proprioceptive estimation thread running at high frequency (250-1,000 Hz) for use in the control loop while taking advantage of occasional (and often delayed) low frequency (1-15 Hz) updates from exteroceptive sources, such as cameras and LIDARs. To demonstrate the robustness and versatility of the approach, we have tested it on a variety of legged platforms, including two humanoid robots (the Boston Dynamics Atlas and NASA Valkyrie) and two dynamic quadruped robots (IIT HyQ and ANYbotics ANYmal) for more than 2 h of total runtime and 1.37 km of distance traveled. The tests were conducted in a number of different field scenarios under the conditions described above. The algorithms presented in this paper are made available to the research community as open-source ROS packages.
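The loosely coupled structure, a high-rate proprioceptive prediction thread with occasional low-rate exteroceptive position corrections, can be sketched with a toy 1-D EKF. The state, rates, and noise magnitudes below are illustrative, not Pronto's actual model:

```python
import numpy as np

class TinyEKF:
    """Toy 1-D EKF: state x = [position, velocity]."""
    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2)

    def predict(self, accel, dt):
        """High-rate prediction from inertial data (process noise is illustrative)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt ** 2, dt]) * accel
        self.P = F @ self.P @ F.T + 0.01 * np.eye(2)

    def correct(self, pos, r=0.05):
        """Low-rate position correction from an exteroceptive source."""
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + r
        K = self.P @ H.T / S
        self.x = self.x + (K * (pos - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

ekf = TinyEKF()
for i in range(100):              # 100 Hz proprioceptive prediction
    ekf.predict(accel=0.0, dt=0.01)
    if i % 20 == 0:               # occasional 5 Hz exteroceptive update
        ekf.correct(pos=1.0)

assert abs(ekf.x[0] - 1.0) < 0.1  # drift pulled toward the corrections
```

In Pronto the same split lets the 250-1,000 Hz proprioceptive thread feed the control loop while delayed camera or LIDAR corrections are folded in whenever they arrive.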

Journal ArticleDOI
10 Feb 2020
TL;DR: This letter presents a new learning framework that leverages the knowledge from imitation learning, deep reinforcement learning, and control theories to achieve human-style locomotion that is natural, dynamic, and robust for humanoids.
Abstract: This letter presents a new learning framework that leverages knowledge from imitation learning, deep reinforcement learning, and control theory to achieve human-style locomotion that is natural, dynamic, and robust for humanoids. We proposed novel approaches to introduce human bias, i.e., motion capture data and a special Multi-Expert network structure. We used the Multi-Expert network structure to smoothly blend behavioral features, together with an augmented reward design that combines task and imitation rewards. Our reward design is composable, tunable, and explainable, building on fundamental concepts from conventional humanoid control. We rigorously validated and benchmarked the learning framework, which consistently produced robust locomotion behaviors in various test scenarios. Further, we demonstrated the capability of learning robust and versatile policies in the presence of disturbances, such as terrain irregularities and external pushes.
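The Multi-Expert idea of smoothly blending behavioral features can be sketched as a gating function that maps the state to softmax weights, with the emitted action being a convex combination of several experts' outputs. The dimensions, linear "experts", and random weights below are invented for illustration; the paper's networks are of course learned, nonlinear policies.

```python
import numpy as np

rng = np.random.default_rng(42)
STATE_DIM, ACTION_DIM, N_EXPERTS = 8, 4, 3

experts = [rng.normal(scale=0.1, size=(STATE_DIM, ACTION_DIM))
           for _ in range(N_EXPERTS)]           # linear "expert" policies
gate_w = rng.normal(scale=0.1, size=(STATE_DIM, N_EXPERTS))

def softmax(z):
    z = z - z.max()                              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def act(state):
    weights = softmax(state @ gate_w)            # state-dependent blend
    outputs = np.stack([state @ W for W in experts])
    return weights @ outputs                     # convex combination

state = rng.normal(size=STATE_DIM)
action = act(state)
```

Because the blend weights vary continuously with the state, transitions between behaviors (e.g. between gait phases) are smooth rather than switched, which is the property the abstract highlights.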

Journal ArticleDOI
TL;DR: A new framework for how autonomous social robots approach and accompany people in urban environments is presented, and various surveys and user studies indicate the social acceptability of the robot's performance of the accompanying, approaching and positioning tasks.
Abstract: This paper presents a new framework for how autonomous social robots approach and accompany people in urban environments. The method allows the robot to accompany one person and approach another, adapting its own navigation in anticipation of future interactions with other people or contact with static obstacles. The contributions of the paper are manifold: firstly, we extended the Social Force model and the Anticipative Kinodynamic Planner (Ferrer and Sanfeliu, in: IEEE/RSJ international conference on intelligent robots and systems. IEEE, 2014) to the case of adaptive side-by-side navigation; secondly, we enhanced side-by-side navigation with an approaching task and a final positioning that allows the robot to interact with both people; and finally, we used findings from real-life observations of people walking in pairs to define the parameters of the human–robot interaction in our case of adaptive side-by-side navigation. The method was validated by a large set of simulations; we also conducted real-life experiments with our robot, Tibi, to validate the framework described for the interaction process. In addition, we carried out various surveys and user studies to assess the social acceptability of the robot's performance of the accompanying, approaching and positioning tasks.
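A minimal sketch of the kind of Social Force update the framework extends: the robot is attracted toward a goal (e.g. a position at its companion's side) and repelled from other pedestrians with an exponentially decaying force. The gains and ranges below are illustrative placeholders, not the parameters fitted from the pair-walking observations.

```python
import numpy as np

def social_force(pos, vel, goal, others, k_goal=1.0, desired_speed=1.0,
                 a_rep=2.0, b_rep=0.5):
    # Attractive term: relax the velocity toward the desired velocity
    # pointing at the goal.
    desired_vel = desired_speed * (goal - pos) / np.linalg.norm(goal - pos)
    force = k_goal * (desired_vel - vel)
    # Repulsive terms: exponentially decaying push away from each person.
    for p in others:
        diff = pos - p
        dist = np.linalg.norm(diff)
        force += a_rep * np.exp(-dist / b_rep) * diff / dist
    return force

pos = np.array([0.0, 0.0])
vel = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])              # e.g. companion's side position
others = [np.array([1.0, 0.2])]          # another pedestrian ahead-left
f = social_force(pos, vel, goal, others)
```

Integrating this force over time yields a trajectory that bends around people while still making progress toward the goal; the anticipative planner in the paper additionally predicts where those people will be.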

Proceedings ArticleDOI
23 Mar 2020
TL;DR: An ethnographic study with the humanoid robot Pepper at a central train station indicates that people are not yet accustomed to talking to robots, and people seem to expect that the robot does not talk, that it is a queue ticket machine, or that one should interact with it by using the tablet on the robot's chest.
Abstract: Recent developments in robotics are potentially changing the nature of service, and research in human-robot interaction has previously shown that humanoid robots could possibly work in public spaces. We conducted an ethnographic study with the humanoid robot Pepper at a central train station. The results indicate that people are not yet accustomed to talking to robots, and people seem to expect that the robot does not talk, that it is a queue ticket machine, or that one should interact with it by using the tablet on the robot's chest.

Journal ArticleDOI
TL;DR: In this paper, a combination of whole-body control (WBC) and model-based walking controllers is used for feedback control of loco-manipulation behaviors in humanoid robots, since WBC alone is not sufficient for walking control.
Abstract: Whole-body control (WBC) is a generic task-oriented control method for feedback control of loco-manipulation behaviors in humanoid robots. The combination of WBC and model-based walking controllers...

Journal ArticleDOI
TL;DR: It is proposed that the use of humanoid robots in interactive protocols is a particularly promising avenue for targeting the mechanisms of joint attention in the domains of healthcare applications and human–robot interaction in general.
Abstract: This article reviews methods to investigate joint attention and highlights the benefits of new methodological approaches that make use of the most recent technological developments, such as humanoid robots for studying social cognition. After reviewing classical approaches that address joint attention mechanisms with the use of controlled screen-based stimuli, we describe recent accounts that have proposed the need for more natural and interactive experimental protocols. Although the recent approaches allow for more ecological validity, they often face the challenges of experimental control in more natural social interaction protocols. In this context, we propose that the use of humanoid robots in interactive protocols is a particularly promising avenue for targeting the mechanisms of joint attention. Using humanoid robots to interact with humans in naturalistic experimental setups has the advantage of both excellent experimental control and ecological validity. In clinical applications, it offers new techniques for both diagnosis and therapy, especially for children with autism spectrum disorder. The review concludes with indications for future research, in the domains of healthcare applications and human–robot interaction in general.

Journal ArticleDOI
TL;DR: A novel multimodal emotional HRI architecture that can appropriately determine its own emotional response based on the situation at hand and induce more user positive valence and less negative arousal than the Neutral Robot.
Abstract: For social robots to effectively engage in human-robot interaction (HRI), they need to be able to interpret human affective cues and to respond appropriately via display of their own emotional behavior. In this article, we present a novel multimodal emotional HRI architecture to promote natural and engaging bidirectional emotional communications between a social robot and a human user. User affect is detected using a unique combination of body language and vocal intonation, and multimodal classification is performed using a Bayesian Network. The Emotionally Expressive Robot utilizes the user's affect to determine its own emotional behavior via an innovative two-layer emotional model consisting of deliberative (hidden Markov model) and reactive (rule-based) layers. The proposed architecture has been implemented on a small humanoid robot to perform diet and fitness counseling during HRI. In order to evaluate the Emotionally Expressive Robot's effectiveness, a Neutral Robot, which can detect user affect but lacks an emotional display, was also developed. A between-subjects HRI experiment was conducted with both types of robots. Extensive results have shown that both robots can effectively detect user affect during real-time HRI. However, the Emotionally Expressive Robot can appropriately determine its own emotional response based on the situation at hand and, therefore, induce more positive valence and less negative arousal in users than the Neutral Robot.
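A two-layer scheme of the kind described can be sketched as an HMM forward update over the robot's emotional state (deliberative layer) combined with a rule that overrides the deliberative choice in clear-cut situations (reactive layer). The states, observation model, and probabilities below are invented for the example, not taken from the paper.

```python
import numpy as np

STATES = ["happy", "concerned", "encouraging"]
T = np.array([[0.7, 0.2, 0.1],        # transition model P(s' | s)
              [0.2, 0.6, 0.2],
              [0.3, 0.2, 0.5]])
# Emission model P(detected user affect | robot emotional state),
# with the detected affect discretized to {pos, neg}.
E = {"pos": np.array([0.7, 0.2, 0.5]),
     "neg": np.array([0.3, 0.8, 0.5])}

def forward_step(belief, affect):
    # One HMM forward-filtering step: predict, weight by evidence, renormalize.
    belief = (belief @ T) * E[affect]
    return belief / belief.sum()

def choose_emotion(belief, affect, arousal):
    if affect == "neg" and arousal > 0.8:
        return "concerned"                       # reactive rule override
    return STATES[int(np.argmax(belief))]        # deliberative choice

belief = np.ones(3) / 3
for affect in ["pos", "neg", "neg"]:             # detected user affect stream
    belief = forward_step(belief, affect)
emotion = choose_emotion(belief, "neg", arousal=0.9)
```

The deliberative layer gives temporally smooth, context-dependent behavior, while the reactive layer guarantees an immediate, predictable response to strong negative affect.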

Journal ArticleDOI
22 Apr 2020-Sensors
TL;DR: A more detailed concept of Human-Robot Interaction systems architecture is presented, together with an in-depth analysis of one of the external subsystems, the Bluetooth Human Identification Smart Subsystem.
Abstract: This paper presents a more detailed concept of Human-Robot Interaction systems architecture. One of the main differences between the proposed architecture and existing ones is the methodology of information acquisition regarding the robot's interlocutor. In order to obtain as much information as possible before the actual interaction takes place, custom Internet-of-Things-based sensor subsystems connected to Smart Infrastructure were designed and implemented to support interlocutor identification and the acquisition of initial interaction parameters. The Artificial Intelligence interaction framework of the developed robotic system (including the humanoid Pepper with its sensors and actuators, plus additional local, remote and cloud computing services) is extended with custom external subsystems for additional knowledge acquisition: device-based human identification, visual identification and audio-based interlocutor localization subsystems. These subsystems are introduced and evaluated in detail, presenting the benefits of integrating them into the robotic interaction system. A more detailed analysis of one of the external subsystems, the Bluetooth Human Identification Smart Subsystem, is also included. The idea, use case, and prototype implementation, integrating elements of Smart Infrastructure systems, were evaluated in a small front office of the Weegree company as a test-bed application area.
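The device-based identification idea can be sketched as matching addresses seen during a Bluetooth scan against a registry of known visitors, so interaction parameters can be prepared before the conversation starts. The registry, addresses, scan results, and RSSI threshold below are invented; a real deployment would query an actual Bluetooth stack rather than a hard-coded list.

```python
# Hypothetical registry of opted-in visitors keyed by device address.
KNOWN_DEVICES = {
    "AA:BB:CC:DD:EE:01": {"name": "Alice", "language": "en"},
    "AA:BB:CC:DD:EE:02": {"name": "Bob", "language": "pl"},
}

def identify(scan_results, rssi_threshold=-70):
    # Keep only devices that are both known and close enough
    # (strong received signal strength) to plausibly be the interlocutor.
    matches = []
    for address, rssi in scan_results:
        profile = KNOWN_DEVICES.get(address)
        if profile is not None and rssi >= rssi_threshold:
            matches.append(profile)
    return matches

scan = [("AA:BB:CC:DD:EE:01", -55),   # nearby known device
        ("AA:BB:CC:DD:EE:02", -90),   # known, but too far away
        ("11:22:33:44:55:66", -40)]   # unknown device
profiles = identify(scan)
```

The returned profile (preferred language, name, past interactions) would then seed the robot's dialogue parameters before the person even reaches it.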

Journal ArticleDOI
TL;DR: The prime challenge for a humanoid robot is maintaining stability on two feet, since it is an underactuated system; this paper describes the complete dynamics of the humanoid robot.
Abstract: The prime challenge in a humanoid robot is its stability on two feet due to the presence of an underactuated system. In this paper, the complete dynamics of the humanoid robot has been described in...