
Showing papers on "Humanoid robot published in 2017"


Journal ArticleDOI
TL;DR: The so-called DEAs are introduced emphasizing the key points of working principle, key components and electromechanical modeling approaches, and different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed.
Abstract: Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: i) dielectric elastomers are soft, motion-generating materials that resemble natural human muscle in terms of force, strain (displacement per unit length or area) and actuation pressure/density; and ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing their working principle, key components and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further studies in terms of mechanism design, dynamics modeling and autonomous control.
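The working principle the survey refers to is usually summarized by the equivalent Maxwell pressure squeezing the elastomer film. As a rough illustration (the material constants below are generic assumptions, not values from the survey):

```python
# Sketch of the standard DEA electromechanical model: a voltage V across
# an elastomer film of thickness z produces an equivalent Maxwell
# pressure p = eps0 * eps_r * (V / z)**2, which compresses the film.
# eps_r and the modulus below are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r=4.7):
    """Equivalent electrostatic pressure (Pa) squeezing the film."""
    e_field = voltage / thickness          # electric field, V/m
    return EPS0 * eps_r * e_field ** 2

def thickness_strain(pressure, youngs_modulus):
    """Small-strain estimate of thickness compression, s_z = -p / Y."""
    return -pressure / youngs_modulus

p = maxwell_pressure(voltage=3e3, thickness=50e-6)   # 3 kV across a 50 um film
s = thickness_strain(p, youngs_modulus=1e6)          # ~1 MPa elastomer
```

This small-strain estimate already shows why kilovolt drive electronics are typical for DEAs: at 3 kV the field across a 50 um film reaches tens of MV/m.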

301 citations


Journal ArticleDOI
TL;DR: The capability of the robot and the performance of the individual motion control and perception modules were validated during the DRC in which the robot was able to demonstrate exceptional physical resilience and execute some of the tasks during the competition.
Abstract: In this work, we present WALK-MAN, a humanoid platform developed to operate in realistic unstructured environments and demonstrate new skills, including powerful manipulation, robust balanced locomotion, high-strength capabilities, and physical sturdiness. To enable these capabilities, WALK-MAN's design and actuation are based on the most recent advances in series elastic actuator drives, with unique performance features that differentiate the robot from previous state-of-the-art compliantly actuated robots. Physical interaction performance benefits from both active and passive adaptation, thanks to WALK-MAN's actuation, which combines customized high-performance modules with tuned torque/velocity curves and transmission elasticity for high-speed adaptation response and motion reactions to disturbances. The WALK-MAN design also includes innovative design optimization features that consider the selection of the kinematic structure and the placement of the actuators within the body structure to maximize robot performance. Physical robustness is ensured by the integration of elastic transmission, proprioceptive sensing, and control. The WALK-MAN hardware was designed and built in 11 months, and the prototype of the robot was ready four months before the DARPA Robotics Challenge (DRC) Finals. The motion generation of WALK-MAN is based on a unified motion-generation framework for whole-body locomotion and manipulation (termed loco-manipulation). WALK-MAN is able to execute simple loco-manipulation behaviors synthesized by combining different primitives defining the behavior of the center of gravity; the motion of the hands, legs, and head; the body attitude and posture; and the constrained body parts such as joint limits and contacts. The motion-generation framework, including the specific motion modules and software architecture, is discussed in detail.
A rich perception system allows the robot to perceive and generate 3D representations of the environment as well as detect contacts and sense physical interaction force and moments. The operator station that pilots use to control the robot provides a rich pilot interface with different control modes and a number of teleoperated or semiautonomous command features. The capability of the robot and the performance of the individual motion control and perception modules were validated during the DRC in which the robot was able to demonstrate exceptional physical resilience and execute some of the tasks during the competition.

211 citations


Journal ArticleDOI
01 Apr 2017
TL;DR: A practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production line worker and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability.
Abstract: We propose a practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production line worker. The proposed approach provides an intuitive way to collect data and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability. The proposed approach utilizes a real-time user interface with a monitor and provides a first-person perspective using a head-mounted display. Through this interface, teleoperation is used for collecting task operating data, especially for tasks that are difficult to perform with a conventional method. A two-phase deep learning model is also utilized in the proposed approach. A deep convolutional autoencoder extracts image features and reconstructs images, and a fully connected deep time-delay neural network learns the dynamics of a robot task process from the extracted image features and motion angle signals. The “Nextage Open” humanoid robot is used as an experimental platform to evaluate the proposed model. The object-folding task was evaluated with 35 trained and 5 untrained sensorimotor sequences. Testing the trained model with online generation demonstrates a 77.8% success rate for the object-folding task.
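The two-phase structure can be sketched as follows; all layer sizes are illustrative assumptions, and the linear encoder is a stand-in for the paper's convolutional autoencoder. Phase 1 compresses camera images to a low-dimensional feature vector; phase 2 is a time-delay network that predicts the next motion command from a short history of stacked [image features, joint angles] vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, w_enc):
    """Phase 1 stand-in: flatten an image and project it to feature space."""
    return np.tanh(w_enc @ image.ravel())

def tdnn_step(history, w_hid, w_out):
    """Phase 2: one step of a time-delay network over stacked past frames."""
    x = np.concatenate(history)            # delay line: oldest..newest
    h = np.tanh(w_hid @ x)
    return w_out @ h                       # next joint-angle command

n_feat, n_joint, delay = 8, 6, 3           # illustrative sizes
w_enc = rng.standard_normal((n_feat, 16 * 16)) * 0.1
w_hid = rng.standard_normal((32, delay * (n_feat + n_joint))) * 0.1
w_out = rng.standard_normal((n_joint, 32)) * 0.1

history = []
for _ in range(delay):                     # fill the delay line
    feat = encode(rng.standard_normal((16, 16)), w_enc)
    angles = np.zeros(n_joint)
    history.append(np.concatenate([feat, angles]))
cmd = tdnn_step(history, w_hid, w_out)     # one 6-joint motion command
```

In the paper's "online generation" setting, the predicted command would be executed, the new camera frame encoded, and the delay line shifted forward each control cycle.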

183 citations


Journal ArticleDOI
TL;DR: A model-free robust adaptive controller for control of humanoid robots with flexible joints uses a time-delay estimation technique to estimate and cancel nonlinear terms in robot dynamics including disturbance torques due to the joint flexibility, and assigns desired dynamics specified by a sliding variable.
Abstract: A model-free robust adaptive controller is proposed for control of humanoid robots with flexible joints. The proposed controller uses a time-delay estimation technique to estimate and cancel nonlinear terms in robot dynamics including disturbance torques due to the joint flexibility, and assigns desired dynamics specified by a sliding variable. A gain-adaptation law is developed to dynamically update the gain of the proposed controller using the magnitude of the sliding variable and the gain itself. The gain-adaptation law uses a leakage term to prevent overestimation of the gain value, and offers stable and chattering-free control action. The effectiveness of the proposed adaptive controller is experimentally verified on a humanoid robot equipped with flexible joints. Tracking performances of the autotuned adaptive gain are better than those of the manually tuned constant gains. The proposed control algorithm is model-free, adaptive, robust, and highly accurate.
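The core idea of time-delay estimation (TDE) control can be sketched on a single flexible joint; the gains, the constant inertia guess `m_bar`, and the toy plant below are illustrative assumptions, not the paper's values. The lumped nonlinear torque is estimated from the previous sample as `tau(t-L) - m_bar * qdd(t-L)` and cancelled, while a sliding variable `s = de + lam*e` assigns the desired error dynamics:

```python
import numpy as np

def tde_control(qd, qd_dot, qd_ddot, q, q_dot, q_ddot_prev, tau_prev,
                m_bar=0.5, lam=10.0, k=5.0):
    """One sample of a time-delay-estimation sliding controller."""
    e = qd - q
    e_dot = qd_dot - q_dot
    s = e_dot + lam * e                    # sliding variable
    h_hat = tau_prev - m_bar * q_ddot_prev # time-delay estimate of dynamics
    return m_bar * (qd_ddot + lam * e_dot + k * s) + h_hat

# Toy 1-DOF plant with an unmodeled gravity-like disturbance and damping,
# Euler-integrated; the controller never sees the true model.
dt, q, q_dot, tau_prev, q_ddot_prev = 1e-3, 0.0, 0.0, 0.0, 0.0
for i in range(5000):
    t = i * dt
    qd, qd_dot, qd_ddot = np.sin(t), np.cos(t), -np.sin(t)
    tau = tde_control(qd, qd_dot, qd_ddot, q, q_dot, q_ddot_prev, tau_prev)
    q_ddot = (tau - 2.0 * np.sin(q) - 0.3 * q_dot) / 0.5   # true dynamics
    q, q_dot = q + q_dot * dt, q_dot + q_ddot * dt
    tau_prev, q_ddot_prev = tau, q_ddot
```

The paper's contribution on top of this baseline is the leaky gain-adaptation law that tunes `k` online from the magnitude of `s`, which the fixed-gain sketch above omits.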

170 citations


Proceedings ArticleDOI
15 Nov 2017
TL;DR: A new humanoid robot capable of interacting with a human environment and targeting industrial applications, equipped with torque sensors to measure joint effort and high resolution encoders to measure both motor and joint positions is introduced.
Abstract: The upcoming generation of humanoid robots will have to be equipped with state-of-the-art technical features along with high industrial quality, but they should also offer the prospect of effective physical human interaction. In this paper we introduce a new humanoid robot capable of interacting with a human environment and targeting industrial applications. Limitations of current platforms are outlined and combined with feedback from the DARPA Robotics Challenge and from other teams leading the field in creating new humanoid robots. The resulting robot is able to handle weights of 6 kg with an outstretched arm, and has powerful motors to carry out fast movements. Its kinematics have been specially designed for screwing and drilling motions. To make interaction with human operators possible, this robot is equipped with torque sensors to measure joint effort and high-resolution encoders to measure both motor and joint positions. The humanoid robotics field has reached a stage where robustness and repeatability are the next watershed. We believe that this robot has the potential to become a powerful tool for the research community to successfully navigate this turning point, as the humanoid robot HRP-2 was in its own time.

135 citations


Journal ArticleDOI
01 Jan 2017
TL;DR: The particle swarm optimization method is employed to optimize the trajectory of each joint so that satisfactory parameter estimates can be obtained, and the estimated inertia parameters are taken as the initial values for the RNE-based adaptive control design to achieve improved tracking performance.
Abstract: In this paper, model identification and adaptive control design are performed on the Denavit-Hartenberg model of a humanoid robot. We focus on modeling the 6-degree-of-freedom upper limb of the robot using the recursive Newton-Euler (RNE) formula for the coordinate frame of each joint. To obtain sufficient excitation for modeling the robot, the particle swarm optimization method is employed to optimize the trajectory of each joint so that satisfactory parameter estimates can be obtained. In addition, the estimated inertia parameters are taken as the initial values for the RNE-based adaptive control design to achieve improved tracking performance. Simulation studies have been carried out to verify the result of the identification algorithm and to illustrate the effectiveness of the control design.
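The excitation-trajectory idea can be sketched on a toy 1-link model (all constants and the regressor below are illustrative, not the paper's 6-DOF formulation). Each particle encodes Fourier coefficients of a joint trajectory, and the cost is the condition number of the stacked identification regressor, which should be small for well-conditioned parameter estimation:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2 * np.pi, 200)

def regressor_cond(coeffs):
    """Condition number of a toy 1-link regressor for a Fourier trajectory."""
    a1, b1, a2, b2 = coeffs
    q = a1 * np.sin(t) + b1 * np.cos(t) + a2 * np.sin(2 * t) + b2 * np.cos(2 * t)
    qd = np.gradient(q, t)
    qdd = np.gradient(qd, t)
    # Regressor columns: inertia, viscous friction, gravity term.
    Y = np.column_stack([qdd, qd, np.sin(q)])
    return np.linalg.cond(Y)

# Standard PSO over the 4 Fourier coefficients.
n_particles, n_dim = 30, 4
x = rng.uniform(-1, 1, (n_particles, n_dim))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([regressor_cond(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]
init_best = float(pbest_cost.min())

for _ in range(50):
    r1, r2 = rng.random((2, n_particles, n_dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    cost = np.array([regressor_cond(p) for p in x])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]
```

A practical version would add joint limit and velocity constraints on the candidate trajectories; here the search is unconstrained for brevity.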

127 citations


Posted Content
TL;DR: A metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives---policies that are executed for large numbers of timesteps, and provides a concrete metric for measuring the strength of such hierarchies.
Abstract: We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives---policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.

127 citations


Proceedings ArticleDOI
01 Sep 2017
TL;DR: FROST is presented, an open-source MATLAB toolkit for modeling, trajectory optimization and simulation of hybrid dynamical systems with a particular focus on dynamic locomotion, which has been successfully used to synthesize dynamic walking in multiple bipedal robots.
Abstract: This paper presents FROST, an open-source MATLAB toolkit for modeling, trajectory optimization and simulation of hybrid dynamical systems with a particular focus on dynamic locomotion. The design objective of FROST is to provide a unified software environment for developing model-based control and motion planning algorithms for robotic systems whose dynamics are hybrid in nature. In particular, FROST uses directed graphs to describe the underlying discrete structure of hybrid system models, which renders it capable of representing a wide variety of robotic systems. Equipped with a custom symbolic math toolbox in MATLAB using Wolfram Mathematica, one can rapidly prototype the mathematical model of robot kinematics and dynamics and generate optimized code of symbolic expressions to boost the speed of optimization and simulation in FROST. In favor of agile and dynamic behaviors, we utilize virtual constraint based motion planning and feedback controllers for robotic systems to exploit the full-order dynamics of the model. Moreover, FROST provides a fast and tractable framework for planning optimal trajectories of hybrid dynamical systems using advanced direct collocation algorithms. FROST has been successfully used to synthesize dynamic walking in multiple bipedal robots. Case studies of such applications are considered in this paper, wherein different types of walking gaits are generated for two specific humanoid robots and validated in simulation.

126 citations


Journal ArticleDOI
TL;DR: A robotic shopping assistant, designed with a cognitive architecture, grounded in machine learning systems, is presented in order to study how the human-robot interaction (HRI) is changing the shopping behavior in smart technological stores.

121 citations


Journal ArticleDOI
TL;DR: Design, fabrication and characterization of a biomimetic, compact, low-cost and lightweight 3D printed humanoid hand that is actuated by twisted and coiled polymeric (TCP) artificial muscles are focused on.
Abstract: This paper focuses on design, fabrication and characterization of a biomimetic, compact, low-cost and lightweight 3D printed humanoid hand (TCP Hand) that is actuated by twisted and coiled polymeric (TCP) artificial muscles. The TCP muscles were recently introduced and provided unprecedented strain, mechanical work, and lifecycle (Haines et al 2014 Science 343 868-72). The five-fingered humanoid hand is under-actuated and has 16 degrees of freedom (DOF) in total (15 for fingers and 1 at the palm). In the under-actuated hand designs, a single actuator provides coupled motions at the phalanges of each finger. Two different designs are presented along with the essential elements consisting of actuators, springs, tendons and guide systems. Experiments were conducted to investigate the performance of the TCP muscles in response to the power input (power magnitude, type of wave form such as pulsed or square wave, and pulse duration) and the resulting actuation stroke and force generation. A kinematic model of the flexor tendons was developed to simulate the flexion motion and compare with experimental results. For fast finger movements, short high-power pulses were employed. Finally, we demonstrated the grasping of various objects using the humanoid TCP hand showing an array of functions similar to a natural hand.

116 citations


Journal ArticleDOI
01 Jan 2017
TL;DR: This letter shows an extension of the pattern generator that directly considers the avoidance of convex obstacles and uses the whole-body dynamics to correct the center of mass trajectory of the underlying simplified model.
Abstract: The contribution of this work is to show that real-time nonlinear model predictive control (NMPC) can be implemented on position controlled humanoid robots. Following the idea of “walking without thinking,” we propose a walking pattern generator that takes into account simultaneously the position and orientation of the feet. A requirement for an application in real-world scenarios is the avoidance of obstacles. Therefore, this letter shows an extension of the pattern generator that directly considers the avoidance of convex obstacles. The algorithm uses the whole-body dynamics to correct the center of mass trajectory of the underlying simplified model. The pattern generator runs in real-time on the embedded hardware of the humanoid robot HRP-2 and experiments demonstrate the increase in performance with the correction.
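The linear-inverted-pendulum MPC underlying such pattern generators can be sketched as follows; the horizon, timestep, and ZMP reference are illustrative assumptions, not the paper's values. The CoM state [c, cdot, cddot] evolves under piecewise-constant jerk u, the ZMP output is p = c - (h/g)·cddot, and stacking the predictions over the horizon gives a square linear tracking problem we solve exactly:

```python
import numpy as np

dt, h, g, Nh = 0.1, 0.8, 9.81, 16
A = np.array([[1.0, dt, dt**2 / 2], [0.0, 1.0, dt], [0.0, 0.0, 1.0]])
B = np.array([dt**3 / 6, dt**2 / 2, dt])
C = np.array([1.0, 0.0, -h / g])           # ZMP output row

# Prediction matrices: p = Px @ x0 + Pu @ u over the horizon.
Px = np.zeros((Nh, 3))
Pu = np.zeros((Nh, Nh))
for i in range(Nh):
    Px[i] = C @ np.linalg.matrix_power(A, i + 1)
    for j in range(i + 1):
        Pu[i, j] = C @ np.linalg.matrix_power(A, i - j) @ B

x0 = np.zeros(3)                           # CoM at rest
p_ref = np.where(np.arange(Nh) < 8, 0.0, 0.1)   # ZMP shifts to the next step
u = np.linalg.solve(Pu, p_ref - Px @ x0)   # jerk sequence tracking the ZMP
p = Px @ x0 + Pu @ u                       # predicted ZMP trajectory
```

Practical generators of this family add a jerk penalty, inequality constraints keeping the ZMP inside the support polygon, and (as in the paper) foot positions and orientations as decision variables, which is what makes the problem nonlinear.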

Journal ArticleDOI
TL;DR: The inference capability introduced in this study was integrated into a joint space control loop for a humanoid robot, an iCub, for achieving similar goals to the human demonstrator online.

Journal ArticleDOI
TL;DR: This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, especially in terms of the robot system and control strategy and presents control methods, such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm, all of which were implemented to accomplish the tasks.
Abstract: This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, especially in terms of the robot system and control strategy. To imitate the Fukushima nuclear disaster situation, the DRC comprised a total of eight tasks performed under degraded communication conditions. This competition demanded various robotic technologies such as manipulation, mobility, telemetry, autonomy, and localization. Their systematic integration and overall system robustness were also important issues in completing the challenge. In this sense, this paper presents a hardware and software system for the DRC-HUBO+, a humanoid robot that was used for the DRC; it also presents control methods, such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm, all of which were implemented to accomplish the tasks. The strategies and operations for each task are briefly explained together with the vision algorithms. Before concluding, the paper summarizes what we learned from the DRC. In the competition, 25 international teams participated with their various robot platforms. We competed in this challenge using the DRC-HUBO+ and won first place.

Journal ArticleDOI
TL;DR: This work presents the architecture of the first release of the Neurorobotics Platform, a new web-based environment offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation.
Abstract: Combined efforts in the fields of neuroscience, computer science and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them into a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.

Journal ArticleDOI
TL;DR: A whole-body controller is implemented and feasible multicontact motions are generated in which an HRP-4 humanoid locomotes in challenging multicontact scenarios.
Abstract: We propose a method for checking and enforcing multicontact stability based on the zero-tilting moment point (ZMP). The key to our development is the generalization of ZMP support areas to take into account: 1) frictional constraints and 2) multiple noncoplanar contacts. We introduce and investigate two kinds of ZMP support areas. First, we characterize and provide a fast geometric construction for the support area generated by valid contact forces, with no other constraint on the robot motion. We call this set the full support area. Next, we consider the control of humanoid robots by using the linear pendulum mode (LPM). We observe that the constraints stemming from the LPM induce a shrinking of the support area, even for walking on horizontal floors. We propose an algorithm to compute the new area, which we call the pendular support area. We show that, in the LPM, having the ZMP in the pendular support area is a necessary and sufficient condition for contact stability. Based on these developments, we implement a whole-body controller and generate feasible multicontact motions where an HRP-4 humanoid locomotes in challenging multicontact scenarios.
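The paper's contribution is the generalization to friction and non-coplanar contacts; the basic coplanar special case it extends can be sketched directly: compute the ZMP of a set of vertical contact forces on flat ground and test that it lies inside the convex support polygon (foot geometry and forces below are illustrative):

```python
import numpy as np

def zmp_flat_ground(points, forces):
    """ZMP (x, y) of vertical forces f_i applied at planar points p_i."""
    f = np.asarray(forces, float)
    p = np.asarray(points, float)
    return (f[:, None] * p).sum(axis=0) / f.sum()

def inside_convex_polygon(q, vertices):
    """True if q is inside a convex polygon with CCW-ordered vertices."""
    v = np.asarray(vertices, float)
    edges = np.roll(v, -1, axis=0) - v     # edge vectors
    rel = q - v                            # vertex-to-point vectors
    cross = edges[:, 0] * rel[:, 1] - edges[:, 1] * rel[:, 0]
    return bool(np.all(cross >= 0))        # point left of every edge

foot = np.array([[0, 0], [0.2, 0], [0.2, 0.1], [0, 0.1]])  # CCW sole corners
zmp = zmp_flat_ground(foot, [100, 100, 50, 50])            # N at each corner
ok = inside_convex_polygon(zmp, foot)
```

In the paper's setting the support region is no longer this simple polygon: friction limits and non-coplanar contacts shrink and reshape it, which is exactly what the full and pendular support areas characterize.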

Journal ArticleDOI
TL;DR: TaeMu, a fast torque-controlled hydraulic humanoid robot, has 15 active joints that are all driven by hydraulic servocylinders with an external hydraulic power supply; it weighs 72.3 kg including a dummy weight stand for the arms.
Abstract: This paper reports the design and control of a fast torque-controlled hydraulic humanoid robot, TaeMu. The robot has 15 active joints that are all driven by hydraulic servocylinders with an external hydraulic power supply. The 1377-mm tall robot has a three-axis active torso used for static and dynamic balancing. It weighs 72.3 kg including a dummy weight stand for the arms. Its lightweight design with carbon-fiber-reinforced plastic allows the legs to have a similar mass distribution to that of human legs. We present the details of the hardware design including the hydraulic actuator selection, mechanism, and control systems, as well as the passivity-based controller design for joint torque control and whole-body motion control. We present experimentally obtained results, such as speed testing, basic torque control testing, full-body compliant balancing with attitude regulation/tracking, and balanced full-squat motions. The experimentally obtained results and estimated joint specification imply the high potential of the robot to perform human-like motions if the researchers can invent proper control algorithms.

01 Jan 2017
TL;DR: The main contribution of the paper is an effective real-time system for one-shot action modeling and recognition; the paper highlights the effectiveness of sparse coding techniques to represent 3D actions.
Abstract: Sparsity has been shown to be one of the most important properties for visual recognition purposes. In this paper we show that sparse representation plays a fundamental role in achieving one-shot learning and real-time recognition of actions. We start off from RGBD images, combine motion and appearance cues and extract state-of-the-art features in a computationally efficient way. The proposed method relies on descriptors based on 3D Histograms of Scene Flow (3DHOFs) and Global Histograms of Oriented Gradient (GHOGs); adaptive sparse coding is applied to capture high-level patterns from data. We then propose a simultaneous on-line video segmentation and recognition of actions using linear SVMs. The main contribution of the paper is an effective real-time system for one-shot action modeling and recognition; the paper highlights the effectiveness of sparse coding techniques to represent 3D actions. We obtain very good results on three different data sets: a benchmark data set for one-shot action learning (the ChaLearn Gesture Data Set), an in-house data set acquired by a Kinect sensor including complex actions and gestures differing by small details, and a data set created for human-robot interaction purposes. Finally, we demonstrate that our system is effective also in a human-robot interaction setting and propose a memory game, "All Gestures You Can", to be played against a humanoid robot.
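The sparse-coding step can be sketched with ISTA (iterative shrinkage-thresholding); the dictionary and data below are random stand-ins for the 3DHOF/GHOG descriptors, and the sizes are illustrative. Given a dictionary D, a descriptor x is encoded as a sparse code a by approximately solving the lasso problem min_a 0.5·||x - Da||² + lam·||a||₁:

```python
import numpy as np

def ista(x, D, lam=0.05, n_iter=500):
    """Lasso sparse coding by iterative shrinkage-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(64)
a_true[[3, 17, 42]] = [1.0, -0.7, 0.5]     # a 3-sparse ground-truth code
x = D @ a_true                             # synthetic descriptor
a = ista(x, D)
```

In a recognition pipeline the codes `a` (not the raw descriptors) are what get pooled over a temporal window and fed to the linear SVMs.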

Proceedings ArticleDOI
10 Apr 2017
TL;DR: The XBotCore design and architecture will be described and experimental results on the humanoid robot WALK-MAN, developed at the Istituto Italiano di Tecnologia (IIT), will be presented.
Abstract: In this work we introduce XBotCore (Cross-Bot-Core), a light-weight, Real-Time (RT) software platform for EtherCAT-based robots. XBotCore is open-source and is designed to be both an RT robot control framework and a software middleware. It satisfies hard RT requirements, while ensuring 1 kHz control loop even in complex Multi-Degree-Of-Freedom systems. It provides a simple and easy-to-use middleware Application Programming Interface (API), for both RT and non-RT control frameworks. This API is completely flexible with respect to the framework a user wants to utilize. Moreover it is possible to reuse the code written using XBotCore API with different robots (cross-robot feature). In this paper, the XBotCore design and architecture will be described and experimental results on the humanoid robot WALK-MAN [17], developed at the Istituto Italiano di Tecnologia (IIT), will be presented.

Proceedings ArticleDOI
01 Mar 2017
TL;DR: A new swing speed up algorithm is presented, allowing the robot to set the foot down more quickly to recover from errors in the direction of the current capture point dynamics, and a new algorithm to adjust the desired footstep is presented.
Abstract: While humans are highly capable of recovering from external disturbances and uncertainties that result in large tracking errors, humanoid robots have yet to reliably mimic this level of robustness. Essential to this is the ability to combine traditional “ankle strategy” balancing with step timing and location adjustment techniques. In doing so, the robot is able to step quickly to the necessary location to continue walking. In this work, we present both a new swing speed up algorithm to adjust the step timing, allowing the robot to set the foot down more quickly to recover from errors in the direction of the current capture point dynamics, and a new algorithm to adjust the desired footstep, expanding the base of support to utilize the center of pressure (CoP)-based ankle strategy for balance. We then utilize the desired centroidal moment pivot (CMP) to calculate the momentum rate of change for our inverse-dynamics based whole-body controller. We present simulation and experimental results using this work, and discuss performance limitations and potential improvements.
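The quantity both algorithms reason about is the instantaneous capture point; the constants below are illustrative. For the linear inverted pendulum with natural frequency omega = sqrt(g/z), the capture point xi = c + cdot/omega is where the robot must step to come to rest, and its unstable dynamics under a fixed center of pressure p, xi(t) = p + (xi0 - p)·exp(omega·t), tell the controller how much the error grows if the swing foot is not sped up:

```python
import numpy as np

g, z = 9.81, 0.9                 # gravity and CoM height (illustrative)
omega = np.sqrt(g / z)

def capture_point(c, c_dot):
    """Instantaneous capture point of the linear inverted pendulum."""
    return c + c_dot / omega

def icp_forward(xi0, cop, t):
    """Capture point after time t under a constant center of pressure."""
    return cop + (xi0 - cop) * np.exp(omega * t)

xi = capture_point(c=0.0, c_dot=0.4)        # CoM pushed forward at 0.4 m/s
xi_later = icp_forward(xi, cop=0.0, t=0.3)  # the error grows while we wait
```

Stepping so that the new CoP coincides with the capture point freezes its divergence, which is why both the swing speed-up and the footstep adjustment in the paper are driven by this quantity.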

Proceedings ArticleDOI
01 Nov 2017
TL;DR: A novel algorithm is proposed which splits the computational burden between an offline sampling phase and a limited number of online convex optimizations, enabling the application of hybrid predictive controllers to higher-dimensional systems.
Abstract: Feedback control of robotic systems interacting with the environment through contacts is a central topic in legged robotics. One of the main challenges posed by this problem is the choice of a model sufficiently complex to capture the discontinuous nature of the dynamics but simple enough to allow online computations. Linear models have proved to be the most effective and reliable choice for smooth systems; we believe that piecewise affine (PWA) models represent their natural extension when contact phenomena occur. Discrete-time PWA systems have been deeply analyzed in the field of hybrid Model Predictive Control (MPC), but the straightforward application of MPC techniques to complex systems, such as a humanoid robot, leads to mixed-integer optimization problems which are not solvable at real-time rates. Explicit MPC methods can construct the entire control policy offline, but the resulting policy becomes too complex to compute for systems at the scale of a humanoid robot. In this paper we propose a novel algorithm which splits the computational burden between an offline sampling phase and a limited number of online convex optimizations, enabling the application of hybrid predictive controllers to higher-dimensional systems. In doing so we are willing to partially sacrifice feedback optimality, but we set stability of the system as an inviolable requirement. Simulation results of a simple planar humanoid that balances by making contact with its environment are presented to validate the proposed controller.

Journal ArticleDOI
TL;DR: It is found that the more extroverted people are, the more often and the longer they tend to talk with the robot, and the more negative their attitude towards robots, the more they look at the robot's hands, where the assembly and the contacts occur.
Abstract: Estimating engagement is critical for human-robot interaction. Engagement measures typically rely on the dynamics of the social signals exchanged by the partners, especially speech and gaze. However, the dynamics of these signals are likely to be influenced by individual and social factors, such as personality traits, as it is well documented that they critically influence how two humans interact with each other. Here, we assess the influence of two factors, namely extroversion and negative attitude toward robots, on speech and gaze during a cooperative task, where a human must physically manipulate a robot to assemble an object. We evaluate whether the scores for extroversion and negative attitude towards robots covary with the duration and frequency of gaze and speech cues. The experiments were carried out with the humanoid robot iCub and N=56 adult participants. We found that the more extroverted people are, the more often and the longer they tend to talk with the robot; and the more negative people's attitude towards robots, the less they look at the robot's face and the more they look at the robot's hands, where the assembly and the contacts occur. Our results confirm and provide evidence that the engagement models classically used in human-robot interaction should take into account attitudes and personality traits.

Journal ArticleDOI
TL;DR: Results suggest that people’s intentional stance toward the robot was in this case very similar to their stance toward the human, while also revealing systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior.
Abstract: People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots a ...

Proceedings ArticleDOI
01 Aug 2017
TL;DR: NICO (Neuro-Inspired COmpanion), a humanoid developmental robot that fills a gap between necessary sensing and interaction capabilities and flexible design, is developed and introduced, making it a novel neuro-cognitive research platform for embodied sensorimotor computational and cognitive models in the context of multimodal interaction.
Abstract: Interdisciplinary research, drawing from robotics, artificial intelligence, neuroscience, psychology, and cognitive science, is a cornerstone to advance the state-of-the-art in multimodal human-robot interaction and neuro-cognitive modeling. Research on neuro-cognitive models benefits from the embodiment of these models into physical, humanoid agents that possess complex, human-like sensorimotor capabilities for multimodal interaction with the real world. For this purpose, we develop and introduce NICO (Neuro-Inspired COmpanion), a humanoid developmental robot that fills a gap between necessary sensing and interaction capabilities and flexible design. This combination makes it a novel neuro-cognitive research platform for embodied sensorimotor computational and cognitive models in the context of multimodal interaction as shown in our results.

Journal ArticleDOI
20 Dec 2017
TL;DR: The iCub humanoid robot child is an open-source initiative supporting research in embodied artificial intelligence and will be used for education and research in the coming years.
Abstract: The iCub open-source humanoid robot child is a successful initiative supporting research in embodied artificial intelligence.

Journal ArticleDOI
TL;DR: The utility of grip-force and high-frequency acceleration feedback in teleoperation systems is supported and motivates further improvements to fingertip-contact-and-pressure feedback.
Abstract: The multifaceted human sense of touch is fundamental to direct manipulation, but technical challenges prevent most teleoperation systems from providing even a single modality of haptic feedback, such as force feedback. This paper postulates that ungrounded grip-force, fingertip-contact-and-pressure, and high-frequency acceleration haptic feedback will improve human performance of a teleoperated pick-and-place task. Thirty subjects used a teleoperation system consisting of a haptic device worn on the subject's right hand, a remote PR2 humanoid robot, and a Vicon motion capture system to move an object to a target location. Each subject completed the pick-and-place task 10 times under each of the eight haptic conditions obtained by turning on and off grip-force feedback, contact feedback, and acceleration feedback. To understand how object stiffness affects the utility of the feedback, half of the subjects completed the task with a flexible plastic cup, and the others used a rigid plastic block. The results indicate that the addition of grip-force feedback with gain switching enables subjects to hold both the flexible and rigid objects more stably, and it also allowed subjects who manipulated the rigid block to hold the object more delicately and to better control the motion of the remote robot's hand. Contact feedback improved the ability of subjects who manipulated the flexible cup to move the robot's arm in space, but it deteriorated this ability for subjects who manipulated the rigid block. Contact feedback also caused subjects to hold the flexible cup less stably, but the rigid block more securely. Finally, adding acceleration feedback slightly improved the subject's performance when setting the object down, as originally hypothesized; interestingly, it also allowed subjects to feel vibrations produced by the robot's motion, causing them to be more careful when completing the task. This study supports the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and motivates further improvements to fingertip-contact-and-pressure feedback.
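The eight haptic conditions in this study are the full 2^3 factorial crossing of three binary feedback factors. A quick sketch of how such a condition set is enumerated (the factor names are ours, not the paper's labels):

```python
from itertools import product

# The study crosses three binary feedback factors (grip force, contact,
# acceleration), giving the 2^3 = 8 haptic conditions each subject completed.
# This just enumerates them; the factor names are illustrative.
factors = ["grip_force", "contact", "acceleration"]
conditions = [dict(zip(factors, bits)) for bits in product([False, True], repeat=3)]

assert len(conditions) == 8
for cond in conditions:
    enabled = [f for f, on in cond.items() if on] or ["none"]
    print("+".join(enabled))
```

With 10 repetitions per condition, each subject thus performed 80 pick-and-place trials.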

Journal ArticleDOI
TL;DR: A new eHealth platform incorporating humanoid robots to support an emerging multidimensional care approach for the treatment of diabetes is presented, and its end-to-end functionality and acceptability are tested successfully through a clinician-led pilot study, providing evidence that both patients and caregivers are receptive to the introduction of the proposed platform.
Abstract: This paper presents a new eHealth platform incorporating humanoid robots to support an emerging multidimensional care approach for the treatment of diabetes. The architecture of the platform extends the Internet of Things to a Web-centric paradigm through utilizing existing Web standards to access and control objects of the physical layer. This incorporates capillary networks, each of which encompasses a set of medical sensors linked wirelessly to a humanoid robot linked (via the Internet) to a Web-centric disease management hub. This provides a set of services for both patients and their caregivers that support the full continuum of the multidimensional care approach of diabetes. The platform’s software architecture pattern enables the development of various applications without knowing low-level details of the platform. This is achieved through unifying the access interface and mechanism of handling service requests through a layered approach based on object virtualization and automatic service delivery. A fully functional prototype is developed, and its end-to-end functionality and acceptability are tested successfully through a clinician-led pilot study, providing evidence that both patients and caregivers are receptive to the introduction of the proposed platform.
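The "object virtualization with a unified access interface" idea in this architecture can be sketched in a few lines. The classes, device names, and stub readings below are our own invented illustration of the pattern, not the platform's actual code:

```python
# Minimal sketch of object virtualization with a unified service-request
# interface: every physical-layer device is wrapped in a virtual object
# exposing the same handle() method, so applications need no low-level details.

class VirtualObject:
    """Uniform access point for a physical-layer device."""
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        raise NotImplementedError

class GlucoseSensor(VirtualObject):
    def handle(self, request):
        if request == "read":
            return {"device": self.name, "glucose_mmol_l": 5.4}  # stub value
        return {"error": "unsupported request"}

class RobotSpeaker(VirtualObject):
    def handle(self, request):
        # A humanoid robot endpoint handled through the very same interface.
        return {"device": self.name, "spoke": request}

# Applications dispatch through the registry without touching device code.
registry = {"sensor1": GlucoseSensor("sensor1"), "robot1": RobotSpeaker("robot1")}
print(registry["sensor1"].handle("read"))
```

In the actual platform these virtual objects would sit behind Web-standard endpoints (the Web-centric hub), but the unifying principle is the same.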

Journal ArticleDOI
TL;DR: Results are presented from experiments with an iCub humanoid robot that uses CCSA to incrementally acquire the skills to topple, grasp, and pick-and-place a cup, driven by its intrinsic motivation from raw pixel vision.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: A particle filter is introduced with the aim to be robust to temporal variation that occurs as the camera and the target move with different relative velocities, which can lead to a loss in visual information and missed detections.
Abstract: Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision as they respond only to changes in the scene and have a very high temporal resolution (< 1μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from information loss “between frames”, which can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to be able to follow the target position within the spatio-temporal data, while rejecting clutter events that occur as a robot moves in a typical office setting. We introduce a particle filter with the aim to be robust to temporal variation that occurs as the camera and the target move with different relative velocities, which can lead to a loss in visual information and missed detections. The proposed system provides a more persistent tracking compared to prior state-of-the-art, especially when the robot is actively following a target with its gaze. Experiments are performed on the iCub humanoid robot performing ball tracking and gaze following.
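The core of the tracker described here is a particle filter over the event stream. The sketch below is a generic bootstrap particle filter for 2-D target tracking in that spirit: particles carry position hypotheses, are weighted by proximity to observed events, and are resampled. The motion and observation models, sensor size, and noise parameters are placeholders of ours, not the paper's:

```python
import numpy as np

# Bootstrap particle filter sketch: predict (diffuse), update (weight by
# distance to events), resample (multinomial). Parameters are illustrative.
rng = np.random.default_rng(1)

N = 200
particles = rng.uniform(0, 64, size=(N, 2))   # positions on a 64x64 sensor
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, events, motion_std=1.0, obs_std=2.0):
    # Predict: diffuse particles to account for unknown target motion.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Update: weight each particle by its distance to the nearest event.
    d = np.linalg.norm(particles[:, None, :] - events[None, :, :], axis=2).min(axis=1)
    weights = np.exp(-0.5 * (d / obs_std) ** 2) + 1e-12
    weights /= weights.sum()
    # Resample to concentrate particles on likely target positions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulated event burst clustered around a target at (32, 32).
events = rng.normal(32, 1.0, size=(30, 2))
for _ in range(10):
    particles, weights = pf_step(particles, weights, events)
estimate = particles.mean(axis=0)
```

The paper's contribution lies in making such a filter robust to the temporal variation of event rates as camera and target move at different relative velocities, which this generic sketch does not address.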

Journal ArticleDOI
TL;DR: This article reviews the major techniques needed for developing BRI systems, and describes a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques.
Abstract: The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI), to serve as an additional communication channel, for robot control via brainwaves. This technology is promising for elderly or disabled patient assistance with daily life. The key issue of a BRI system is to identify human mental activities, by decoding brainwaves, acquired with an EEG device. Compared with other BCI applications, such as word speller, the development of these applications may be more challenging since control of robot systems via brainwaves must consider surrounding environment feedback in real-time, robot mechanical kinematics, and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques.
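The decoding chain this review covers — preprocessing, feature extraction, feature classification — can be made concrete with a schematic pipeline. Everything below is our own toy illustration on synthetic "EEG" epochs (log-variance features and a nearest-class-mean rule stand in for the real filtering, CSP-style features, and classifiers the review surveys):

```python
import numpy as np

# Schematic BCI decoding pipeline: preprocess -> feature extraction ->
# classification, on synthetic epochs. Two "mental states" differ only in
# per-channel signal power, which the log-variance feature captures.
rng = np.random.default_rng(2)

def make_epochs(n, scale):
    # n epochs x 8 channels x 250 samples (e.g., 1 s at 250 Hz).
    return rng.normal(0, scale, size=(n, 8, 250))

def preprocess(x):
    # Placeholder for filtering / artifact removal: remove per-channel mean.
    return x - x.mean(axis=-1, keepdims=True)

def features(x):
    return np.log(x.var(axis=-1))          # log band power proxy per channel

X = np.concatenate([make_epochs(40, 1.0), make_epochs(40, 2.0)])
y = np.array([0] * 40 + [1] * 40)
F = features(preprocess(X))

# Nearest-class-mean classifier in feature space.
means = np.stack([F[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(F[:, None, :] - means[None], axis=2), axis=1)
accuracy = (pred == y).mean()
```

In a real BRI system the predicted class would then be mapped to a robot command (e.g., a wheelchair direction), with the closed-loop timing constraints the article emphasizes.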