
Showing papers on "Humanoid robot published in 2019"


Posted Content
TL;DR: It is demonstrated that models trained only in simulation can solve a manipulation problem of unprecedented complexity on a real robot, made possible by a novel algorithm called automatic domain randomization (ADR) and a robot platform built for machine learning.
Abstract: We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: this https URL
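The core of ADR, as the abstract describes it, is a distribution over randomized environments whose difficulty grows automatically with policy performance. The sketch below shows that loop for a single environment parameter; the class names, thresholds, step size, and the toy evaluate function are illustrative assumptions, not the paper's actual values.

```python
import random

# One ADR-style randomization range: widen it whenever the policy still
# performs well when the parameter is pushed to the range boundary.
class ADRParameter:
    def __init__(self, lo, hi, step=0.05):
        self.lo, self.hi, self.step = lo, hi, step

    def sample(self):
        # Training environments draw the parameter uniformly from the range.
        return random.uniform(self.lo, self.hi)

    def expand(self):
        # Widen the range symmetrically when performance is good enough.
        self.lo -= self.step
        self.hi += self.step

def adr_update(param, evaluate, threshold=0.7, n_episodes=20):
    # Evaluate the policy with the parameter fixed at the range boundary;
    # if average performance clears the threshold, widen the range.
    scores = [evaluate(param.hi) for _ in range(n_episodes)]
    if sum(scores) / n_episodes >= threshold:
        param.expand()
    return param

# Toy usage: a fake policy that succeeds whenever friction stays below 1.5,
# so the range should grow until its upper bound reaches that limit.
friction = ADRParameter(lo=0.9, hi=1.1)
for _ in range(10):
    adr_update(friction, evaluate=lambda v: 1.0 if v < 1.5 else 0.0)
print(round(friction.hi, 2))  # → 1.5
```

The real system does this per parameter, in both directions, driven by rollouts of the actual policy rather than a synthetic success predicate.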

774 citations


Journal ArticleDOI
TL;DR: In this article, interactions between consumers and humanoid service robots (HSRs), i.e., robots with a human-like morphology such as a face, arms, and legs, are examined.
Abstract: Interactions between consumers and humanoid service robots (HSRs; i.e., robots with a human-like morphology such as a face, arms, and legs) will soon be part of routine marketplace experiences. It ...

443 citations


Journal ArticleDOI
TL;DR: The world of mobile robots is explored including the new trends led by artificial intelligence, autonomous driving, network communication, cooperative work, nanorobotics, friendly human–robot interfaces, safe human–robot interaction, and emotion expression and perception.
Abstract: Humanoid robots, unmanned rovers, entertainment pets, drones, and so on are great examples of mobile robots. They can be distinguished from other robots by their ability to move autonomously, with ...

287 citations


Journal ArticleDOI
TL;DR: In this paper, anthropomorphic (human-like) characteristics are identified as a critical component of consumers' acceptance of robotic service, and humanoid robots are expected to play an increasing role in hospitality and tourism services, although such characteristics are difficult to implement in practice.
Abstract: Humanoid robots should play an increasing role in hospitality and tourism services. Anthropomorphic – human like – characteristics seem critical component to consumers accepting robotic service (rS...

212 citations


Journal ArticleDOI
TL;DR: In this paper, a humanoid service robot displayed gaze cues in the form of changing eye colour in one condition (more human-like social functioning, though not gaze as humans perform it) and static eye colour in the other (more human-like appearance, but no gaze cues).
Abstract: Service robots can offer benefits to consumers (e.g. convenience, flexibility, availability, efficiency) and service providers (e.g. cost savings), but a lack of trust hinders consumer adoption. To enhance trust, firms add human-like features to robots; yet, anthropomorphism theory is ambiguous about their appropriate implementation. This study therefore aims to investigate what is more effective for fostering trust: appearance features that are more human-like or social functioning features that are more human-like.
Design/methodology/approach: In an experimental field study, a humanoid service robot displayed gaze cues in the form of changing eye colour in one condition and static eye colour in the other. Thus, the robot was more human-like in its social functioning in one condition (displaying gaze cues, but not in the way that humans do) and more human-like in its appearance in the other (static eye colour, but no gaze cues). Self-reported data from 114 participants revealing their perceptions of trust, anthropomorphism, interaction comfort, enjoyment and intention to use were analysed using partial least squares path modelling.
Findings: Interaction comfort moderates the effect of gaze cues on anthropomorphism, insofar as gaze cues increase anthropomorphism when comfort is low and decrease it when comfort is high. Anthropomorphism drives trust, intention to use and enjoyment.
Research limitations/implications: To extend human–robot interaction literature, the findings provide novel theoretical understanding of anthropomorphism directed towards humanoid robots.
Practical implications: By investigating which features influence trust, this study gives managers insights into reasons for selecting or optimizing humanoid robots for service interactions.
Originality/value: This study examines the difference between appearance and social functioning features as drivers of anthropomorphism and trust, which can benefit research on self-service technology adoption.

191 citations


Proceedings ArticleDOI
20 May 2019
TL;DR: In this paper, an end-to-end neural network model is presented that consists of an encoder for speech text understanding and a decoder that generates a sequence of gestures, including iconic, metaphoric, deictic, and beat gestures.
Abstract: Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Most existing robots use rule-based speech-gesture association, but this requires human labor and prior knowledge of experts to be implemented. We present a learning-based co-speech gesture generation that is learned from 52 h of TED talks. The proposed end-to-end neural network model consists of an encoder for speech text understanding and a decoder to generate a sequence of gestures. The model successfully produces various gestures including iconic, metaphoric, deictic, and beat gestures. In a subjective evaluation, participants reported that the gestures were human-like and matched the speech content. We also demonstrate a co-speech gesture with a NAO robot working in real time.

109 citations


Journal ArticleDOI
27 Feb 2019
TL;DR: In this article, the authors propose latent sampling-based motion planning (L-SBMP), which computes motion plans for complex robotic systems by planning in a plannable latent representation learned through an autoencoding network, a dynamics network, and a collision checking network.
Abstract: This letter presents latent sampling-based motion planning (L-SBMP), a methodology toward computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this letter, we combine these recent advances with techniques from sampling-based motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system's states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space —we refer to this exploration algorithm as learned latent RRT. This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.
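The learned latent RRT described above keeps the classical RRT skeleton but runs every primitive (sampling, steering, collision checking) in the latent space. The sketch below shows that skeleton on a 2D space, with the three learned networks of L-SBMP replaced by hand-written stand-ins; it illustrates only the algorithmic structure, not the paper's trained models.

```python
import math
import random

def steer(a, b, step=0.2):
    # Stand-in for the learned dynamics network: move from a toward b.
    d = math.dist(a, b)
    if d <= step:
        return b
    t = step / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def collision_free(p):
    # Stand-in for the learned collision network: a disc obstacle at (0.5, 0.5).
    return math.dist(p, (0.5, 0.5)) > 0.2

def latent_rrt(start, goal, iters=2000, seed=0):
    random.seed(seed)
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        # Sample the latent space, with a small bias toward the goal.
        sample = goal if random.random() < 0.1 else (random.random(), random.random())
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if collision_free(new):
            tree[new] = nearest
            if math.dist(new, goal) < 1e-9:
                # Reconstruct the path by walking parent pointers.
                path = [new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None

path = latent_rrt((0.0, 0.0), (1.0, 1.0))
print(path[0], path[-1])
```

In the actual method, `steer` would roll out the learned dynamics, `collision_free` would query the collision network, and start/goal states would first be encoded by the autoencoder.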

105 citations


Journal ArticleDOI
TL;DR: These findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way, and recommendations regarding design choices are offered.
Abstract: Humanoid social robots have an increasingly prominent place in today’s world. Their acceptance in social and emotional human–robot interaction (HRI) scenarios depends on their ability to convey well recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human–computer interaction, and HRI, to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot’s reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual incongruence and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the emotional expressions of the robot. This, in turn, gives the impression that the robot is unintelligent or unable to express “empathic” behaviour and leads to profoundly harmful effects on likability and believability. Our findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way. We offer recommendations regarding design choices and discuss future research areas in the direction of multimodal HRI.

94 citations


Journal ArticleDOI
TL;DR: A new tool is proposed to explore whether people adopt the intentional stance toward an artificial agent (a humanoid robot): a questionnaire that probes participants’ stance by requiring them to choose between a mentalistic and a mechanistic explanation of a behavior of the robot iCub depicted in a naturalistic scenario.
Abstract: In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others' behavior with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple's Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore if people adopt the intentional stance toward an artificial agent (humanoid robot). The tool consists of a questionnaire that probes participants' stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of a behavior of a robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased toward the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance toward artificial agents, at least in some contexts.

94 citations


Journal ArticleDOI
30 Jan 2019
TL;DR: This letter introduces the humanoid robot HRP-5P, which stands for Humanoid Robotics Platform–5 Prototype, developed as a prototype for the next generation humanoid robotics platform, aiming to realize the use of practical humanoid robots in place of humans within large-scale assembly industries such as construction sites, aircraft facilities, and shipyards.
Abstract: This letter introduces the humanoid robot HRP-5P, which stands for Humanoid Robotics Platform–5 Prototype. We have been developing the HRP series humanoid robots since 2000, and HRP-5P is the latest version of the HRP series as of 2018. It is developed as a prototype for our next generation humanoid robotics platform, aiming to realize the use of practical humanoid robots in place of humans within large-scale assembly industries such as construction sites, aircraft facilities, and shipyards. To realize this, electrically actuated high-power joints with a wide movable range have been newly designed. The arm configuration has also been redesigned to improve the robot's physical ability to work on actual sites. The mechanism and the electrical systems, along with the basic specifications, are presented in this letter.

92 citations


Proceedings ArticleDOI
20 May 2019
TL;DR: This work extends walking stabilization based on linear inverted pendulum tracking by quadratic programming-based wrench distribution and a whole-body admittance controller that applies both end-effector and CoM strategies.
Abstract: We consider dynamic stair climbing with the HRP-4 humanoid robot as part of an Airbus manufacturing use-case demonstrator. We share experimental knowledge gathered so as to achieve this task, which HRP-4 had never been challenged to before. In particular, we extend walking stabilization based on linear inverted pendulum tracking [1] by quadratic programming-based wrench distribution and a whole-body admittance controller that applies both end-effector and CoM strategies. While existing stabilizers tend to use either one or the other, our experience suggests that the combination of these two approaches improves tracking performance. We demonstrate this solution in an on-site experiment where HRP-4 climbs an industrial staircase with 18.5 cm high steps, and release our walking controller as open source software: https://github.com/stephane-caron/lipm_walking-controller/
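The quadratic programming-based wrench distribution mentioned above decides how the desired total contact wrench is split between the feet. In the simplest planar case of two point contacts with no friction-cone constraints active, the distribution reduces to a 2×2 linear system, sketched below; this is an illustrative special case under stated assumptions, not the paper's controller.

```python
def distribute_wrench(F, x_cop, x1, x2):
    # Planar double support: solve
    #   f1 + f2 = F                (total vertical force)
    #   f1*x1 + f2*x2 = F*x_cop   (moment balance about the origin)
    # for the two foot forces f1, f2 at contact positions x1, x2.
    f2 = F * (x_cop - x1) / (x2 - x1)
    f1 = F - f2
    return f1, f2

# Toy numbers: a 500 N robot with its centre of pressure at x = 0.1 m,
# feet at x = 0.0 m and x = 0.3 m.
f1, f2 = distribute_wrench(F=500.0, x_cop=0.1, x1=0.0, x2=0.3)
print(f1, f2)  # more load on the foot closer to the CoP
```

The full QP additionally enforces friction cones and CoP bounds per foot and minimizes a weighted cost, which is why a solver is needed in general.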

Journal ArticleDOI
TL;DR: The results showed that robot use self-efficacy is associated with the acceptance of humanoid, pet, and telepresence robots; the strongest connection was found between robot use self-efficacy and the functional and social acceptance of a humanoid robot.

Proceedings ArticleDOI
11 Mar 2019
TL;DR: Micbot, a peripheral robotic object designed to promote participant engagement and ultimately performance using nonverbal implicit interactions, is presented; it was effective in promoting not only increased group engagement but also improved problem-solving performance.
Abstract: Many of the problems we face are solved in small groups. Using decades of research from psychology, HRI research is increasingly trying to understand how robots impact the dynamics and outcomes of these small groups. Current work almost exclusively uses humanoid robots that take on the role of an active group participant to influence interpersonal dynamics. We argue that this has limitations and propose an alternative design approach of using a peripheral robotic object. This paper presents Micbot, a peripheral robotic object designed to promote participant engagement and ultimately performance using nonverbal implicit interactions. The robot is evaluated in a three-condition (no movement, engagement behaviour, random movement) laboratory experiment with 36 three-person groups (N = 108). Results showed that the robot was effective in promoting not only increased group engagement but also improved problem solving performance. In the engagement condition, participants displayed more even backchanneling toward one another, compared to no movement, but not to the random movement. This more even distribution of backchanneling predicted better problem solving performance.

Journal ArticleDOI
TL;DR: In both studies, robots were evaluated more positively, and produced a greater desire for contact, to the degree that they were seen as humanlike and feminine; these results attest to the importance of social factors in predicting responses to robots.
Abstract: Previous research has shown that features of synthetic robot faces suggesting social categories produce predictable and consequential social judgments. Artificial robot faces that are feminine (versus masculine) and humanlike (versus machinelike) have been shown to be judged as warmer and to produce relatively higher levels of comfort, resulting in positive evaluations and a greater desire for engagement. Two studies pursued these questions using images of real robots. In Study 1, images of existing robots were used to manipulate gendered features and machineness. Study 2 used an assortment of images of real robots including non-humanoid exemplars that vary naturally in gendered features and machineness. Consistent results emerged from the two studies. In both studies, robots were evaluated more positively and produced a greater desire for contact to the degree that they were seen as humanlike and feminine. These results attest to the importance of social factors in predicting responses to robots. Implications for robot design and future research are discussed.

Journal ArticleDOI
TL;DR: The task-space multiobjective controllers that are written as quadratic programs (QPs) are extended to handle multirobot systems as a single centralized control, assembling all the "robots" models and their interaction task constraints into a single QP formulation.
Abstract: We have extended the task-space multiobjective controllers that are written as quadratic programs (QPs) to handle multirobot systems as a single centralized control. The idea is to assemble all the “robots” models and their interaction task constraints into a single QP formulation. By multirobot, we mean that whatever entities a given robot will interact with (solid or articulated systems, actuated, partially or not at all, fixed-base or floating-base), we model them as clusters of robots and the controller computes the state of each cluster as an overall system and their interaction forces in a physically consistent way. By doing this, the task specification simplifies substantially. At the heart of the interactions between the systems are the contact forces; methodologies are provided to achieve reliable force tracking by our multirobot QP controller. The approach is assessed by a large panel of experiments on real complex robotic platforms (full-size humanoid, dexterous robotic hand, fixed-base anthropomorphic arm) performing whole-body manipulations, dexterous manipulations, and robot–robot comanipulations of rigid floating objects and articulated mechanisms, such as doors, drawers, boxes, or even smaller mechanisms like a spring-loaded click pen.

Proceedings Article
01 Jan 2019
TL;DR: The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP.
Abstract: We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantee of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We show experiments on how the new methods obtain high-quality solutions for challenging robot control problems such as path tracking for wheeled vehicles and humanoid robot balancing.
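The learner/falsifier loop described above can be illustrated on a scalar system. Below, the plant is x' = x + u with linear feedback u = -k·x and candidate Lyapunov function V(x) = x², so V' = 2(1 - k)x² and stability requires k > 1. The real method trains neural network controllers and Lyapunov functions; this toy, with invented gains and update rule, mirrors only the counterexample-guided structure.

```python
import random

def vdot(k, x):
    # Lyapunov derivative along trajectories of x' = (1 - k) * x with V = x^2.
    return 2.0 * (1.0 - k) * x * x

def falsifier(k, n=1000, seed=0):
    # Search the state space for a point where V fails to decrease.
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        if x != 0.0 and vdot(k, x) >= 0.0:
            return x  # counterexample found
    return None

k = 0.0  # learner's initial controller guess
while falsifier(k) is not None:
    k += 0.5  # learner updates the controller to rule out the counterexample
print(k)  # → 1.5, the first tested gain with no counterexample
```

The procedure terminates exactly when the falsifier finds no counterexample, which is the certificate of stability the abstract refers to; with neural networks, the falsifier is an SMT-style solver rather than random sampling.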

Journal ArticleDOI
TL;DR: A novel magnetic, angular rate, and gravity (MARG) sensor fusion algorithm for inertial measurement is fused with mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors.

Book ChapterDOI
TL;DR: This research is the first attempt to use a cellular automaton to understand the complexity of bipedal walking; cellular automata rules are designed that predict the next gait state of bipedal steps based on the previous two neighbor states.
Abstract: In this research article, we report periodic cellular automata rules for predicting different gait states and for classifying gait data using an Extreme Learning Machine (ELM). This research is the first attempt to use a cellular automaton to understand the complexity of bipedal walking. Because of nonlinearity, configurations that vary throughout the gait cycle, and the passive joint located at the unilateral foot-ground contact, the dynamic descriptions and control laws of bipedal walking change from phase to phase, making the gait states difficult to predict. We have designed cellular automata rules that predict the next gait state of bipedal steps based on the previous two neighbor states, and we have designed such rules for normal walking. The state prediction helps to correctly design the bipedal walk. The normal walk has eight states in total, and the next state depends on the current and previous states, so we formulated 16 rules using cellular automata, eight for each leg. The priority order is maintained using the fact that if the right leg is in the swing phase, the left leg is in the stance phase. To validate the model, we classified the gait data using an ELM (Huang et al. Proceedings of 2004 IEEE international joint conference on neural networks, vol 2. IEEE, 2004, [1]) and achieved an accuracy of 60%. We have also explored the resulting trajectories and compared them with other gait trajectories. Finally, we present an error analysis for different joints.
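The two-neighbor idea above (next gait state as a function of the previous two states) can be sketched as a tiny rule table for one leg. The eight phase labels and the cyclic rule below are illustrative assumptions; the paper defines 16 concrete rules, eight per leg, with a stance/swing priority between legs.

```python
# Eight gait phases of one leg, in cycle order (standard clinical labels,
# assumed here for illustration): Initial Contact, Loading Response,
# Mid Stance, Terminal Stance, Pre-Swing, Initial Swing, Mid Swing,
# Terminal Swing.
STATES = ["IC", "LR", "MSt", "TSt", "PSw", "ISw", "MSw", "TSw"]

def next_state(prev, curr):
    # Cellular-automaton rule: a consistent (prev, curr) pair advances the
    # cycle by one phase; inconsistent neighbors are rejected.
    i, j = STATES.index(prev), STATES.index(curr)
    assert (i + 1) % len(STATES) == j, "inconsistent neighbor states"
    return STATES[(j + 1) % len(STATES)]

# Walk one leg through a full cycle starting from the first two phases.
prev, curr = "IC", "LR"
sequence = [prev, curr]
for _ in range(8):
    prev, curr = curr, next_state(prev, curr)
    sequence.append(curr)
print(sequence)  # wraps around to IC, LR after the eight phases
```

A second table of the same shape, offset so that one leg's swing coincides with the other's stance, would encode the inter-leg priority the abstract mentions.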

Journal ArticleDOI
TL;DR: In this paper, the authors propose a nanocomposite-based pressure sensor that exhibits a high sensitivity of 25 %/N in the 0–0.01 N load range and 46.8 %/N in the 0–1 N range.
Abstract: Flexible tactile pressure sensor arrays based on multiwalled carbon nanotubes (MWCNT) and polydimethylsiloxane (PDMS) are gaining importance, especially in the field of robotics because of the high demand for stable, flexible and sensitive sensors. Some existing concepts of pressure sensors based on nanocomposites exhibit complicated fabrication techniques and better sensitivity than the conventional pressure sensors. In this article, we propose a nanocomposite-based pressure sensor that exhibits a high sensitivity of 25 %/N, starting with a minimum load range of 0–0.01 N, and 46.8 %/N in the range of 0–1 N. The maximum pressure sensing range of the sensor is approximately 570 kPa. A concept of a 4×3 tactile sensor array, which could be integrated into robot fingers, is demonstrated. The high sensitivity of the pressure sensor enables precision grasping, with the ability to sense small objects with a size of 5 mm and a weight of 1 g. Another application of the pressure sensor is demonstrated as a gait analysis for humanoid robots. The pressure sensor is integrated under the foot of a humanoid robot to monitor and evaluate the gait of the robot, which provides insights for optimizing the robot's self-balancing algorithm in order to maintain posture while walking.
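The reported sensitivities (relative signal change per newton) can be inverted to estimate applied force from a reading. The piecewise-linear model below, and the choice of which electrical quantity changes, are simplifying assumptions for illustration; the paper characterizes the sensor empirically.

```python
def force_from_signal(rel_change_pct):
    # Invert a piecewise-linear sensitivity model:
    #   25 %/N below 0.01 N, then 46.8 %/N up to 1 N (assumed continuous).
    low_range_max = 0.25  # % change accumulated at the 0.01 N breakpoint
    if rel_change_pct <= low_range_max:
        return rel_change_pct / 25.0
    return 0.01 + (rel_change_pct - low_range_max) / 46.8

# A 0.125 % reading falls in the high-sensitivity low-load range:
print(round(force_from_signal(0.125), 4))  # → 0.005 (N)
# A 23.65 % reading maps to roughly half a newton:
print(round(force_from_signal(23.65), 3))  # → 0.51 (N)
```

The 1 g object mentioned in the abstract weighs about 0.01 N, i.e. right at the breakpoint of the low-load range, which is why the high small-load sensitivity matters for precision grasping.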

Journal ArticleDOI
TL;DR: In this article, the authors proposed an artificial cognitive architecture based on the developmental robotics paradigm that can estimate the trustworthiness of its human interactors for the purpose of decision making, which is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own.
Abstract: Trust is a critical issue in human-robot interactions: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability. Various studies have examined how trust is attributed by people to robots, but fewer have investigated the opposite scenario, where a robot is the trustor and a human is the trustee. The ability for an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint task situations where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture based on the developmental robotics paradigm that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own. Our work is focused on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychological experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human and robot collaborations. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
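A probabilistic trust estimate of the kind such an architecture maintains per informant can be sketched as a Beta-Bernoulli update over the informant's reliability. This is a minimal sketch under assumed names; the paper's model is richer, combining Theory of Mind with an episodic memory system.

```python
class InformantTrust:
    """Bayesian estimate of how reliable a human informant is."""

    def __init__(self):
        self.accurate, self.inaccurate = 1, 1  # Beta(1, 1) uniform prior

    def observe(self, was_accurate):
        # Each interaction outcome updates the posterior counts.
        if was_accurate:
            self.accurate += 1
        else:
            self.inaccurate += 1

    def trustworthiness(self):
        # Posterior mean of the informant's reliability.
        return self.accurate / (self.accurate + self.inaccurate)

# A mostly reliable helper: right three times, wrong once.
helper = InformantTrust()
for outcome in [True, True, True, False]:
    helper.observe(outcome)
print(round(helper.trustworthiness(), 3))  # → 0.667
```

The robot can then weight an informant's testimony by this estimate when deciding whom to believe in a joint task.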

Proceedings ArticleDOI
02 May 2019
TL;DR: The potential for using trigger-action programming to make robot behaviour personalization possible even to people who are not professional software developers is shown.
Abstract: In the coming years humanoid robots will be increasingly used in a variety of contexts, thereby presenting many opportunities to exploit their capabilities in terms of what they can sense and do. One main challenge is to design technologies that enable those who are not programming experts to personalize robot behaviour. We propose an end user development solution based on trigger-action personalization rules. We describe how it supports editing such rules and its underlying software architecture, and report on a user test that involved end user developers. The test results show that users were able to perform the robot personalization tasks with limited effort, and found the trigger-action environment usable and suitable for the proposed tasks. Overall, we show the potential for using trigger-action programming to make robot behaviour personalization possible even to people who are not professional software developers.
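A trigger-action personalization rule of the kind described above pairs a condition over sensed events with a robot action. The rule engine, event fields, and action strings below are invented for illustration; the paper describes a graphical end user development environment, not this API.

```python
class Rule:
    """One end-user personalization rule: IF trigger(event) THEN action."""

    def __init__(self, trigger, action):
        self.trigger, self.action = trigger, action

def run_rules(rules, event, log):
    # Fire every rule whose trigger matches the incoming event.
    for rule in rules:
        if rule.trigger(event):
            rule.action(event, log)

rules = [
    Rule(lambda e: e["type"] == "person_detected",
         lambda e, log: log.append(f"say: Hello, {e['name']}!")),
    Rule(lambda e: e["type"] == "battery_low",
         lambda e, log: log.append("goto: charging_station")),
]

log = []
run_rules(rules, {"type": "person_detected", "name": "Ada"}, log)
run_rules(rules, {"type": "battery_low"}, log)
print(log)
```

The point of the trigger-action abstraction is that non-programmers only compose the rule list; the event plumbing and action execution stay inside the platform.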

Journal ArticleDOI
TL;DR: A new whole-body locomotion controller is devised, dubbed the WBLC, that can achieve experimental dynamic walking on unsupported passive-ankle biped robots and an in-depth analysis of uncertainty for the dynamic walking algorithm called the time-to-velocity-reversal (TVR) planner is conducted.
Abstract: Whole-body control (WBC) is a generic task-oriented control method for feedback control of loco-manipulation behaviors in humanoid robots. The combination of WBC and model-based walking controllers has been widely utilized in various humanoid robots. However, to date, the WBC method has not been employed for unsupported passive-ankle dynamic locomotion. As such, in this paper, we devise a new WBC, dubbed whole-body locomotion controller (WBLC), that can achieve experimental dynamic walking on unsupported passive-ankle biped robots. A key aspect of WBLC is the relaxation of contact constraints such that the control commands produce reduced jerk when switching foot contacts. To achieve robust dynamic locomotion, we conduct an in-depth analysis of uncertainty for our dynamic walking algorithm called time-to-velocity-reversal (TVR) planner. The uncertainty study is fundamental as it allows us to improve the control algorithms and mechanical structure of our robot to fulfill the tolerated uncertainty. In addition, we conduct extensive experimentation for: 1) unsupported dynamic balancing (i.e. in-place stepping) with a six degree-of-freedom (DoF) biped, Mercury; 2) unsupported directional walking with Mercury; 3) walking over an irregular and slippery terrain with Mercury; and 4) in-place walking with our newly designed ten-DoF viscoelastic liquid-cooled biped, DRACO. Overall, the main contributions of this work are on: a) achieving various modalities of unsupported dynamic locomotion of passive-ankle bipeds using a WBLC controller and a TVR planner, b) conducting an uncertainty analysis to improve the mechanical structure and the controllers of Mercury, and c) devising a whole-body control strategy that reduces movement jerk during walking.

Proceedings ArticleDOI
19 Jul 2019
TL;DR: A new A* footstep planner is presented that utilizes a planar region representation of the environment to enable footstep planning over rough terrain and allows the use of partial footholds during the planning process.
Abstract: To increase the speed of operation and reduce operator burden, humanoid robots must be able to function autonomously, even in complex, cluttered environments. For this to be possible, they must be able to quickly and efficiently compute desired footsteps to reach a goal. In this work, we present a new A* footstep planner that utilizes a planar region representation of the environment to enable footstep planning over rough terrain. To increase the number of available footholds, we present an approach to allow the use of partial footholds during the planning process. The footstep plan solutions are then post-processed to capture better solutions that lie between the lattice discretization of the footstep graph. We then demonstrate this planner over a variety of virtual and real world environments, including some that require partial footholds and rough terrain, using the Atlas and Valkyrie humanoid robots.
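The planner above searches a lattice of candidate stances with A*, where cells offering only partial support are usable at extra cost. The toy grid, step set, and cost weights below are illustrative assumptions; the paper plans over planar regions extracted from perception, not a hand-built grid.

```python
import heapq

# Foothold quality per lattice cell: full support, partial support, or none.
FULL, PARTIAL, NONE = 1.0, 0.5, 0.0
terrain = {(x, y): FULL for x in range(5) for y in range(3)}
terrain[(2, 1)] = PARTIAL   # a cell usable only as a partial foothold
terrain[(2, 0)] = NONE      # no support at all
terrain[(2, 2)] = NONE

def plan(start, goal):
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # candidate footsteps
    frontier = [(0.0, start, [start])]          # (f-cost, node, path)
    best = {start: 0.0}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dx, dy in steps:
            nxt = (node[0] + dx, node[1] + dy)
            support = terrain.get(nxt, NONE)
            if support == NONE:
                continue  # unsteppable cell
            # Unit step cost, plus a penalty for partial footholds.
            cost = best[node] + 1.0 + (1.0 if support == PARTIAL else 0.0)
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # admissible
                heapq.heappush(frontier, (cost + h, nxt, path + [nxt]))
    return None

path = plan((0, 1), (4, 1))
print(path)  # crosses the gap via the single partial foothold at (2, 1)
```

Allowing the partial cell is what makes this instance solvable at all, which mirrors the paper's motivation for admitting partial footholds into the search.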

Proceedings ArticleDOI
20 May 2019
TL;DR: Several Julia packages developed by the authors are presented, which together enable roughly 2× realtime simulation of the Boston Dynamics Atlas humanoid robot balancing on flat ground using a quadratic-programming-based controller.
Abstract: Robotics applications often suffer from the ‘two-language problem’, requiring a low-level language for performance-sensitive components and a high-level language for interactivity and experimentation, which tends to increase software complexity. We demonstrate the use of the Julia programming language to solve this problem by being fast enough for online control of a humanoid robot and flexible enough for prototyping. We present several Julia packages developed by the authors, which together enable roughly 2× realtime simulation of the Boston Dynamics Atlas humanoid robot balancing on flat ground using a quadratic-programming-based controller. Benchmarks show a sufficiently low variation in control frequency to make deployment on the physical robot feasible. We also show that Julia’s naturally generic programming style results in versatile packages that are easy to compose and adapt to a wide variety of computational tasks in robotics.

Journal ArticleDOI
TL;DR: A novel robotic platform designed and constructed to facilitate teaching Persian Sign Language (PSL) to children with hearing disabilities is introduced; it has a relatively low development cost for a robot in its category.
Abstract: This paper introduces a novel robotic platform, called RASA (Robot Assistant for Social Aims). This educational social robot is designed and constructed to facilitate teaching Persian Sign Language (PSL) to children with hearing disabilities. There are three predominant characteristics from which the design guidelines of the robot are generated. First, the robot is designed as a fully functional interactive social robot with children as its social service recipients. Second, it comes with the ability to perform PSL, which demands a dexterous upper-body of 29 actuated degrees of freedom. Third, it has a relatively low development cost for a robot in its category. This funded project addresses the challenges resulting from the at times divergent requirements of these characteristics. Accordingly, the hardware design of the robot is discussed, and an evaluation of its sign language realization performance has been carried out. The inspected recognition rates of certain signs of PSL, performed by RASA, have also been reported.

Journal ArticleDOI
30 Oct 2019
TL;DR: A fundamental solution to seamlessly combine human innate motor control proficiency with the physical endurance and strength of humanoid robots is proposed.
Abstract: Despite remarkable progress in artificial intelligence, autonomous humanoid robots are still far from matching human-level manipulation and locomotion proficiency in real applications. Proficient robots would be ideal first responders to dangerous scenarios such as natural or man-made disasters. When handling these situations, robots must be capable of navigating highly unstructured terrain and dexterously interacting with objects designed for human workers. To create humanoid machines with human-level motor skills, in this work, we use whole-body teleoperation to leverage human control intelligence to command the locomotion of a bipedal robot. The challenge of this strategy lies in properly mapping human body motion to the machine while simultaneously informing the operator how closely the robot is reproducing the movement. Therefore, we propose a solution for this bilateral feedback policy to control a bipedal robot to take steps, jump, and walk in synchrony with a human operator. Such dynamic synchronization was achieved by (i) scaling the core components of human locomotion data to robot proportions in real time and (ii) applying feedback forces to the operator that are proportional to the relative velocity between human and robot. Human motion was sped up to match a faster robot, or drag was generated to synchronize the operator with a slower robot. Here, we focused on the frontal plane dynamics and stabilized the robot in the sagittal plane using an external gantry. These results represent a fundamental solution to seamlessly combine human innate motor control proficiency with the physical endurance and strength of humanoid robots.
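The two synchronization mechanisms described above, (i) scaling human locomotion data to robot proportions and (ii) applying feedback forces proportional to the human-robot relative velocity, can be sketched as follows. The leg lengths and gain are illustrative assumptions, not the paper's values.

```python
# Illustrative constants (assumed, not from the paper).
LEG_HUMAN = 0.9   # m, operator leg length
LEG_ROBOT = 0.45  # m, robot leg length
K_FB = 40.0       # N*s/m, feedback gain

def scale_to_robot(human_com_displacement: float) -> float:
    """Scale human centre-of-mass motion to robot proportions."""
    return human_com_displacement * (LEG_ROBOT / LEG_HUMAN)

def feedback_force(v_human: float, v_robot: float) -> float:
    """Force felt by the operator, proportional to the relative
    velocity: drag when the robot lags, assistance when it leads."""
    return K_FB * (v_robot - v_human)

# Robot lags the operator -> the operator feels a backward drag.
print(feedback_force(v_human=0.5, v_robot=0.3))  # ≈ -8.0 N
```

The sign convention matters: a slower robot yields a negative (drag) force that slows the operator down, which is exactly the dynamic-synchronization behavior the abstract describes.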

Journal ArticleDOI
TL;DR: This study considers a natural human–robot interaction setting to design a data-acquisition protocol for visual object recognition on the iCub humanoid robot, and confirms the remarkable improvements yielded by deep learning in this setting.

Journal ArticleDOI
TL;DR: This article presents ARMAR-6, a new high-performance humanoid robot for various tasks, including but not limited to grasping, mobile manipulation, integrated perception, bimanual collaboration, compliant-motion execution, and natural language understanding.
Abstract: A major goal of humanoid robotics is to enable safe and reliable human-robot collaboration in real-world scenarios. In this article, we present ARMAR-6, a new high-performance humanoid robot for various tasks, including but not limited to grasping, mobile manipulation, integrated perception, bimanual collaboration, compliant-motion execution, and natural language understanding. We describe how the requirements arising from these tasks influenced our major design decisions, resulting in vertical integration during the joint hardware and software development phases. In particular, the entire hardware, including its structure, sensor-actuator units, and low-level controllers, as well as its perception, grasping and manipulation skills, task coordination, and the entire software architecture were all developed by one team of engineers. Component interaction is handled by our software framework ArmarX, which also facilitates the seamless integration and interchange of third-party contributions. To showcase the robot's capabilities, we present its performance in a challenging industrial maintenance scenario that requires human-robot collaboration, where the robot autonomously recognizes the human's need for help and offers that help proactively.

Journal ArticleDOI
TL;DR: A collaborative project that investigated the deployment of humanoid robotic solutions in aircraft manufacturing for several assembly operations where access by wheeled or rail-ported robotic platforms is not possible found that humanoids could be a plausible solution for automation, given the specific requirements in large-scale manufacturing sites.
Abstract: We report on the results of a collaborative project that investigated the deployment of humanoid robotic solutions in aircraft manufacturing for several assembly operations where access by wheeled or rail-ported robotic platforms is not possible. Recent developments in multicontact planning and control, bipedal walking, embedded simultaneous localization and mapping (SLAM), whole-body multisensory task-space optimization control, and contact detection and safety suggest that humanoids could be a plausible solution for automation, given the specific requirements in large-scale manufacturing sites. The main challenge is the integration of these scientific and technological advances into two existing humanoid platforms: the position-controlled Human Robotics Project robot (HRP-4) and the torque-controlled robot (TORO). This integration effort was demonstrated during a bracket-assembly operation inside a 1:1-scale A350 mockup of the front part of the fuselage at the Airbus Saint-Nazaire site. We present and discuss the main results achieved in this project and provide recommendations for future work.

Journal ArticleDOI
TL;DR: A planning procedure that allows an anthropomorphic dual-arm robotic system to perform a manipulation task in a natural human-like way by using demonstrated human movements is presented and a path-quality measure, based on first-order synergies obtained from real human movements, is proposed and used for evaluation and comparison purposes.
Abstract: This paper presents a planning procedure that allows an anthropomorphic dual-arm robotic system to perform a manipulation task in a natural, human-like way by using demonstrated human movements. The key idea of the proposal is to convert the demonstrated trajectories into attractive potential fields defined over the configuration space and then use an RRT*-based planning algorithm that minimizes a path-cost function designed to bias the tree growth toward the human-demonstrated configurations. The paper describes the proposed approach and reports results from a conceptual example and from a real application using an anthropomorphic dual-arm robotic system. A path-quality measure, based on first-order synergies (correlations between joint velocities) obtained from real human movements, is also proposed and used for evaluation and comparison. The results show that the paths obtained with the proposed procedure are more human-like.
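A hedged sketch of the demonstration-biasing idea the abstract describes: edges whose endpoints lie far from any demonstrated configuration incur an extra potential-field penalty, so an RRT*-style planner minimizing this cost prefers human-like paths. The demo points, weight `W`, and the exact cost form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Demonstrated configurations (illustrative 2-DOF examples).
demos = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
W = 2.0  # assumed weight on the attractive-potential term

def potential(q):
    """Attractive potential: distance to the closest demonstrated config."""
    return np.min(np.linalg.norm(demos - q, axis=1))

def path_cost(path):
    """Edge length plus demonstration potential, as an RRT* cost-to-come
    could accumulate it along a candidate path."""
    cost = 0.0
    for q0, q1 in zip(path[:-1], path[1:]):
        edge = np.linalg.norm(q1 - q0)
        cost += edge + W * potential(q1) * edge
    return cost

# A path hugging the demonstrations beats a detour of similar length.
on_demo  = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
off_demo = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
print(path_cost(on_demo) < path_cost(off_demo))  # True
```

Because the penalty multiplies edge length, the bias is resolution-independent: refining a path into more waypoints does not change its cost, only its proximity to the demonstrations does.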