
Showing papers on "Humanoid robot published in 2018"


Journal ArticleDOI
TL;DR: It is posited that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.
Abstract: As robotics technology evolves, we believe that personal social robots will be one of the next big expansions in the robotics sector. Based on the accelerated advances in this multidisciplinary domain and the growing number of use cases, we can posit that robots will play key roles in everyday life and will soon coexist with us, leading all people to a smarter, safer, healthier, and happier existence.

342 citations


Journal ArticleDOI
TL;DR: This study proposes and experimentally demonstrates an artificial nociceptor based on a diffusive memristor with critical dynamics for the first time, and builds an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices in artificial intelligence systems, such as humanoid robots.
Abstract: A nociceptor is a critical and special receptor of a sensory neuron that is able to detect noxious stimulus and provide a rapid warning to the central nervous system to start the motor response in the human body and humanoid robotics. It differs from other common sensory receptors with its key features and functions, including the “no adaptation” and “sensitization” phenomena. In this study, we propose and experimentally demonstrate an artificial nociceptor based on a diffusive memristor with critical dynamics for the first time. Using this artificial nociceptor, we further built an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices in artificial intelligence systems, such as humanoid robots. The development of humanoid robots with artificial intelligence calls for smart solutions for tactile sensing systems that respond to dynamic changes in the environment. Here, Yoon et al. emulate non-adaption and sensitization function of a nociceptor—a sensory neuron—using diffusive oxide-based memristors.

267 citations


Journal ArticleDOI
TL;DR: A contact planner is presented for complex legged locomotion tasks (standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car), together with the first interactive, open-source implementation of a contact planner.
Abstract: We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from their prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold, which is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. The hard contact planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction breaks the algorithm complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically show the interest of our framework: in a few seconds, with high success rates, we generate complex contact plans for various scenarios and two robots: HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.
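The two-subproblem decomposition described above lends itself to a compact sketch. The toy below (with a 1-D root, made-up helper names, and a greedy nearest-foothold rule, not the paper's actual reachability-based algorithm for HRP-2 and HyQ) illustrates the split: first a sampling-based path for the root alone, then a deterministic contact selection along that fixed path.

```python
import random

def plan_root_path(start, goal, collision_free, step=0.5, max_iter=1000):
    """Stage 1: sampling-based path for the root alone (toy 1-D random walk
    biased toward the goal), ignoring the whole-body configuration."""
    path = [start]
    for _ in range(max_iter):
        cur = path[-1]
        if abs(cur - goal) <= step:
            path.append(goal)
            return path
        cand = cur + step * (1 if goal > cur else -1) + random.uniform(-0.1, 0.1)
        if collision_free(cand):
            path.append(cand)
    return None  # no collision-free root path found

def select_contacts(path, candidate_footholds, reach):
    """Stage 2: deterministic contact selection along the fixed root path,
    keeping only footholds within the robot's reach at each root sample."""
    contacts = []
    for q in path:
        reachable = [f for f in candidate_footholds if abs(f - q) <= reach]
        if not reachable:
            return None  # root path infeasible for the whole body
        contacts.append(min(reachable, key=lambda f: abs(f - q)))
    return contacts
```

In the real planner the root path is sampled inside a low-dimensional geometric approximation of the contact manifold and each whole-body configuration is checked for static equilibrium; here both stages are reduced to their bare structure.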

154 citations


Journal ArticleDOI
TL;DR: This survey describes the families of methods for sampling-based planning with constraints and places them on a spectrum delineated by their complexity and focuses on the representation of constraints and sampling- based planners that incorporate constraints.
Abstract: Robots with many degrees of freedom (e.g., humanoid robots and mobile manipulators) have increasingly been employed to accomplish realistic tasks in domains such as disaster relief and spacecraft logistics...

126 citations


Journal ArticleDOI
TL;DR: The method builds on the concept of reciprocal velocity obstacles and extends it to respect the kinodynamic constraints of the robot, account for a grid-based map representation of the environment, and solve an optimization in the space of control velocities with additional constraints.
Abstract: In this paper, we present a method, namely $\epsilon$ CCA, for collision avoidance in dynamic environments among interacting agents, such as other robots or humans. Given a preferred motion by a global planner or driver, the method computes a collision-free local motion for a short time horizon, which respects the actuator constraints and allows for smooth and safe control. The method builds on the concept of reciprocal velocity obstacles and extends it to respect the kinodynamic constraints of the robot and account for a grid-based map representation of the environment. The method is best suited for large multirobot settings, including heterogeneous teams of robots, in which computational complexity is of paramount importance and the robots interact with one another. In particular, we consider a set of motion primitives for the robot and solve an optimization in the space of control velocities with additional constraints. Additionally, we propose a cooperative approach to compute safe velocity partitions in the distributed case. We describe several instances of the method for distributed and centralized operation, formulated both as convex and nonconvex optimizations. We compare the different variants and describe the benefits and tradeoffs both theoretically and in extensive experiments with various robotic platforms: robotic wheelchairs, robotic boats, humanoid robots, small unicycle robots, and simulated cars.
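As a minimal illustration of the velocity-obstacle idea the method builds on (a hedged sketch, not the paper's $\epsilon$ CCA optimization with kinodynamic constraints and motion primitives), one can test whether a candidate relative velocity keeps two disc-shaped agents collision-free over a short horizon:

```python
import math

def collides_within_horizon(p_rel, v_rel, r_sum, tau):
    """Velocity-obstacle style check (toy version): does relative velocity
    v_rel bring two discs of combined radius r_sum, currently separated by
    p_rel, into collision within the time horizon tau?"""
    px, py = p_rel
    vx, vy = v_rel
    v2 = vx * vx + vy * vy
    # Time of closest approach, clamped to the horizon [0, tau].
    t_star = 0.0 if v2 == 0.0 else max(0.0, min(tau, -(px * vx + py * vy) / v2))
    dx, dy = px + vx * t_star, py + vy * t_star
    return math.hypot(dx, dy) < r_sum
```

A planner in this family would search the space of control velocities for one outside every such obstacle set while staying close to the preferred velocity, which is where the convex and nonconvex optimization variants come in.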

126 citations


Journal ArticleDOI
TL;DR: This paper proposes a complete solution relying on a generic template model, based on the centroidal dynamics, able to quickly compute multicontact locomotion trajectories for any legged robot on arbitrary terrains; the template model relies on exact dynamics and is thus not limited by arbitrary assumptions.
Abstract: Locomotion of legged robots on arbitrary terrain using multiple contacts is yet an open problem. To tackle it, a common approach is to rely on reduced template models (e.g., the linear inverted pendulum). However, most existing template models are based on some restrictive hypotheses that limit their range of applications. Moreover, reduced models are generally not able to cope with the constraints of the robot complete model, such as the kinematic limits. In this paper, we propose a complete solution relying on a generic template model, based on the centroidal dynamics, able to quickly compute multicontact locomotion trajectories for any legged robot on arbitrary terrains. The template model relies on exact dynamics and is thus not limited by arbitrary assumptions. We also propose a generic procedure to handle feasibility constraints due to the robot's whole body as occupancy measures, and a systematic way to approximate them using offline learning in simulation. An efficient solver is finally obtained by introducing an original second-order approximation of the centroidal wrench cone. The effectiveness and the versatility of the approach are demonstrated in several multicontact scenarios with two humanoid robots both in reality and in simulation.
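For reference, the centroidal dynamics used as the template model here is the exact Newton-Euler dynamics of the whole body expressed at the center of mass $c$, under gravity $g$, with contact forces $f_i$ applied at points $p_i$ and $L_c$ the angular momentum about the CoM (point contacts assumed in this minimal statement; surface contacts add a contact-torque term):

```latex
m\,\ddot{c} = m\,g + \sum_i f_i,
\qquad
\dot{L}_c = \sum_i \left( p_i - c \right) \times f_i
```

These equations are exact for any legged robot, which is why a template built on them is not restricted by the small-motion or constant-height hypotheses of models like the linear inverted pendulum.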

120 citations


Journal ArticleDOI
TL;DR: University of Michigan Mcubed grant: Virtual Prototyping of Human-Robot Collaboration in Unstructured Construction Environments.

107 citations


Journal ArticleDOI
Taejin Jung1, Jeongsoo Lim1, Hyoin Bae1, KangKyu Lee1, Hyun-Min Joe1, Jun-Ho Oh1 
TL;DR: The purpose of DRC-HUBO+ is to perform tasks by teleoperation in hazardous environments that are unsafe for humans, such as disaster zones, and modularized joints and a user-friendly software framework were emphasized as design concepts to facilitate research on the robot tasks.
Abstract: This paper describes a humanoid robotics platform (DRC-HUBO+) developed for the Defense Advanced Research Projects Agency Robotics Challenge (DRC) Finals. This paper also describes the design criteria, hardware, software framework, and experimental testing of the DRC-HUBO+ platform. The purpose of DRC-HUBO+ is to perform tasks by teleoperation in hazardous environments that are unsafe for humans, such as disaster zones. We identified specific design concepts for DRC-HUBO+ to achieve this goal. For a robot to be capable of performing human tasks, a human-like shape and size, autonomy, mobility, manipulability, and power are required, among other features. Furthermore, modularized joints and a user-friendly software framework were emphasized as design concepts to facilitate research on the robot tasks. The DRC-HUBO+ platform is based on DRC-HUBO-1 and HUBO-2. The torque of each joint is increased compared to that in DRC-HUBO-1 owing to its high reduction ratio and air-cooling system. DRC-HUBO+ is designed with an exoskeletal structure to provide it with sufficient stiffness relative to its mass. All wires are enclosed within the robot body using a hollow shaft and covers to protect the wires from external shock. Regarding the vision system, active cognition of the environment can be realized using a light-detection and ranging sensor and vision cameras on the head. To achieve stable mobility, the robot can transition from the bipedal walking mode to the wheel mode using wheels located on both knees. DRC-HUBO+ has 32 degrees of freedom (DOFs), including seven DOFs for each arm and six DOFs for each leg, and a solid and light body with a height of 170 cm and a mass of 80 kg. A software framework referred to as PODO, with a Linux kernel and the Xenomai patch, is used in DRC-HUBO+.

106 citations


Journal ArticleDOI
TL;DR: The novel online learning method presented consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot.
Abstract: This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learned on a continuous (trial-by-trial), non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.

97 citations


Book ChapterDOI
01 Jan 2018
TL;DR: This chapter starts with an overview and classification of robots: industrial robots, autonomous mobile robots, humanoid robots and educational robots, and a specification is given of a generic educational robot used throughout the book.
Abstract: This chapter starts with an overview and classification of robots: industrial robots, autonomous mobile robots, humanoid robots and educational robots. A specification is given of a generic educational robot used throughout the book: a small mobile robot with differential drive and horizontal and ground proximity sensors. A pseudocode is defined so that algorithms can be presented in a platform-independent manner. The chapter concludes with a detailed overview of the contents of the book.

80 citations


Proceedings ArticleDOI
26 Feb 2018
TL;DR: A set of 28 different uni- and multimodal expressions for the basic emotions joy, sadness, fear, and anger was designed and validated using the most common output modalities (color, motion, and sound); the modalities differed in their degree of effectiveness in communicating single emotions.
Abstract: Artificial emotion display is a key feature of social robots to communicate internal states and behaviors in familiar human terms. While humanoid robots can draw on signals such as facial expressions or voice, emotions in appearance-constrained robots can only be conveyed through less-anthropomorphic output channels. While previous work focused on identifying specific expressional designs to convey a particular emotion, little work has been done to quantify the information content of different modalities and how they become effective in combination. Based on emotion metaphors that capture mental models of emotions, we systematically designed and validated a set of 28 different uni- and multimodal expressions for the basic emotions joy, sadness, fear and anger using the most common output modalities color, motion and sound. Classification accuracy and users’ confidence of emotion assignment were evaluated in an empirical study with 33 participants and a robot probe. The findings are distilled into a set of recommendations about which modalities are most effective in communicating basic artificial emotion. Combining color with planar motion offered the overall best cost/benefit ratio by making use of redundant multi-modal coding. Furthermore, modalities differed in their degree of effectiveness to communicate single emotions. Joy was best conveyed via color and motion, sadness via sound, fear via motion and anger via color.

Book
22 Feb 2018
TL;DR: This monograph presents a comprehensive review of literature related to the generation and usage of nonverbal signals that facilitate legibility of non-humanoid robot state and behavior and discusses issues that must be considered during nonverbal signaling and open research areas.
Abstract: This monograph surveys and informs the design and usage of nonverbal signals for human-robot interaction. With robots increasingly being utilized for tasks that require them to not only operate in close proximity to humans but to interact with them as well, there has been great interest in the communication challenges associated with the varying degrees of interaction in these environments. The success of such interactions depends on robots’ ability to convey information about their knowledge, intent, and actions to co-located humans. The monograph presents a comprehensive review of literature related to the generation and usage of nonverbal signals that facilitate legibility of non-humanoid robot state and behavior. To motivate the need for these signaling behaviors, it surveys literature in human communication and psychology and outlines target use cases of non-humanoid robots. Specifically, the focus is on works that provide insight into the cognitive processes that enable humans to recognize, interpret, and exploit nonverbal signals. From these use cases, information is identified that is potentially important for non-humanoid robots to signal and organize it into three categories of robot state. The monograph then presents a review of signal design techniques to illustrate how signals conveying this information can be generated and utilized. It concludes by discussing issues that must be considered during nonverbal signaling and open research areas, with a focus on informing the design and usage of generalizable nonverbal signaling behaviors for task-oriented non-humanoid robots.

Journal ArticleDOI
07 Mar 2018
TL;DR: A new structure for the distributed soft force transducer is presented that reduces the crosstalk between the components of the 3-axis force measurements; three-dimensionally (3-D) printing the silicone structure eased prototype production.
Abstract: Tactile sensing is one important element that can enable robots to interact with an unstructured world. By having tactile perception, a robot can explore its environment by touching objects. Like human skin, a tactile sensor that can provide rich information such as distributed normal and shear forces with high density can help the robot to recognize objects. In previous work, we introduced uSkin, a soft skin with distributed 3-axis force-sensitive elements and a center-to-center distance between the 3-axis load cells of 4.7 mm for the flat version. This letter presents a new structure for the distributed soft force transducer that reduces the crosstalk between the components of the 3-axis force measurements. Three dimensionally (3-D) printing the silicone structure eased the prototype production. However, the 3-D printed material has a higher hysteresis than the previously used Ecoflex. Microcontroller boards originally developed for the skin of iCub were implemented for uSkin, increasing the readout frequency and reducing the space requirements and number of wires. The sensor was installed on iCub and successfully used for shape exploration.

Journal ArticleDOI
TL;DR: It is argued that the formulation of weight-prioritized multitask inverse-dynamics-like control of humanoid robots is indeed well founded and justified from a theoretical standpoint, and stability in terms of solution existence, uniqueness, continuity, and robustness to perturbations is demonstrated.
Abstract: We propose a formal analysis with some theoretical properties of weight-prioritized multitask inverse-dynamics-like control of humanoid robots, being a case of redundant “manipulators” with a nonactuated free-floating base and multiple unilateral frictional contacts with the environment. The controller builds on a weighted-sum scalarization of a multiobjective optimization problem under equality and inequality constraints, which appears as a straightforward solution to account for state and control input viability constraint characteristic of humanoid robots that were usually absent from early existing pseudoinverse and null-space projection-based prioritized multitask approaches. We argue that our formulation is indeed well founded and justified from a theoretical standpoint, and we propose an analysis of some stability properties of the approach. Lyapunov stability is demonstrated for the closed-loop dynamical system that we analytically derive in the unconstrained multiobjective optimization case. Stability in terms of solution existence, uniqueness, continuity, and robustness to perturbations is then formally demonstrated for the constrained quadratic program.
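The weighted-sum scalarization at the core of the controller can be illustrated on a toy unconstrained problem. The sketch below (with an illustrative function name and a hand-rolled 2x2 solve; it omits the inequality constraints, contact model, and floating-base dynamics that the paper's quadratic program handles) minimizes a weighted sum of quadratic task errors via the normal equations:

```python
def solve_weighted_tasks(tasks):
    """Weighted-sum scalarization of linear tasks A_i x = b_i for x in R^2:
    minimize sum_i w_i * ||A_i x - b_i||^2 by solving the normal equations
    H x = g, with H = sum_i w_i A_i^T A_i and g = sum_i w_i A_i^T b_i."""
    H = [[0.0, 0.0], [0.0, 0.0]]
    g = [0.0, 0.0]
    for w, A, b in tasks:  # each task: (weight, rows of A_i, entries of b_i)
        for row, rhs in zip(A, b):
            for j in range(2):
                g[j] += w * row[j] * rhs
                for k in range(2):
                    H[j][k] += w * row[j] * row[k]
    # Direct 2x2 solve (Cramer's rule) for the toy dimension.
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    x0 = (g[0] * H[1][1] - g[1] * H[0][1]) / det
    x1 = (H[0][0] * g[1] - H[1][0] * g[0]) / det
    return [x0, x1]
```

With conflicting tasks on the same variable, the solution is the weight-averaged compromise, which is precisely why priorities are only soft in this formulation and are encoded through weight ratios rather than strict null-space projections.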

Proceedings ArticleDOI
21 May 2018
TL;DR: The approach is able to handle full end-effector poses and therefore approach directions other than the view direction of the camera, and is not limited to a certain grasping setup (e.g., parallel jaw gripper) by design.
Abstract: We present a data-driven, bottom-up, deep learning approach to robotic grasping of unknown objects using Deep Convolutional Neural Networks (DCNNs). The approach uses depth images of the scene as its sole input for synthesis of a single-grasp solution during execution, adequately portraying the robot's visual perception during exploration of a scene. The training input consists of precomputed high-quality grasps, generated by analytical grasp planners, accompanied with rendered depth images of the training objects. In contrast to previous work on applying deep learning techniques to robotic grasping, our approach is able to handle full end-effector poses and therefore approach directions other than the view direction of the camera. Furthermore, the approach is not limited to a certain grasping setup (e.g., parallel jaw gripper) by design. We evaluate the method regarding its force-closure performance in simulation using the KIT and YCB object model datasets as well as a big data grasping database. We demonstrate the performance of our approach in qualitative grasping experiments on the humanoid robot ARMAR-III.

Journal ArticleDOI
TL;DR: The mechanics of the biped and how the controller exploits the interplay between passive dynamics and actuation to achieve robust locomotion are described.
Abstract: Biological bipeds have long been thought to take advantage of compliance and passive dynamics to walk and run, but realizing robotic locomotion in this fashion has been difficult in practice. Assume The Robot Is A Sphere (ATRIAS) is a bipedal robot designed to take advantage of the inherent stabilizing effects that emerge as a result of tuned mechanical compliance (Table 1). In this article, we describe the mechanics of the biped and how our controller exploits the interplay between passive dynamics and actuation to achieve robust locomotion. We outline our development process for the incremental design and testing of our controllers through rapid iteration.

Proceedings ArticleDOI
26 Feb 2018
TL;DR: This paper proposes a methodology for the design of robotic applications including these desired features, suitable for integration by researchers, industry, business and government organisations, and successfully employed this methodology for an exploratory field study involving the trial implementation of a commercially available, social humanoid robot at an airport.
Abstract: Research in robotics and human-robot interaction is becoming more and more mature. Additionally, more affordable social robots are being released commercially. Thus, industry is currently demanding ideas for viable commercial applications to situate social robots in public spaces and enhance the customer experience. However, present literature in human-robot interaction does not provide a clear set of guidelines and a methodology to (i) identify commercial applications for robotic platforms able to position the users’ needs at the centre of the discussion and (ii) ensure the creation of a positive user experience. With this paper we propose to fill this gap by providing a methodology for the design of robotic applications including these desired features, suitable for integration by researchers, industry, business and government organisations. As we will show in this paper, we successfully employed this methodology for an exploratory field study involving the trial implementation of a commercially available, social humanoid robot at an airport.

Proceedings ArticleDOI
21 May 2018
TL;DR: In this article, the authors propose two convex relaxations to the problem based on trust regions and soft constraints, which can compute time-optimized dynamically consistent trajectories sufficiently fast to make the approach real-time capable.
Abstract: Recently, the centroidal momentum dynamics has received substantial attention to plan dynamically consistent motions for robots with arms and legs in multi-contact scenarios. However, it is also nonconvex, which renders any optimization approach difficult, and timing is usually kept fixed in most trajectory optimization techniques so as not to introduce additional nonconvexities to the problem. But this can limit the versatility of the algorithms. In our previous work, we proposed a convex relaxation of the problem that allowed us to efficiently compute momentum trajectories and contact forces. However, our approach could not minimize a desired angular momentum objective, which seriously limited its applicability. Noticing that the nonconvexity introduced by the time variables is of a similar nature to the centroidal dynamics one, we propose two convex relaxations to the problem based on trust regions and soft constraints. The resulting approaches can compute time-optimized dynamically consistent trajectories sufficiently fast to make the approach real-time capable. The performance of the algorithm is demonstrated in several multi-contact scenarios for a humanoid robot. In particular, we show that the proposed convex relaxation of the original problem finds solutions that are consistent with the original nonconvex problem and illustrate how timing optimization allows finding motion plans that would be difficult to plan with fixed timing. Implementation details and demos can be found in the source code available at https://git-amd.tuebingen.mpg.de/bponton/timeoptimization.

Journal ArticleDOI
01 May 2018
TL;DR: This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder driven by the STM32VLDISCOVERY evaluation board; the characteristics of similar modules are given for comparison.
Abstract: Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects, an alternative to technical vision. Humanoid robots, like robots of other types, are first equipped with sensory systems similar to the human senses. However, this approach alone is not enough: all possible types and kinds of sensors should be used, including those analogous to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogue in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder driven by the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison, and a subroutine for working with the sensor is provided.
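The HC-SR04's echo line stays high for the sound's round-trip time, so the host-side conversion is a one-liner. The helper below is an illustrative sketch (independent of the STM32 code the paper describes) that halves the round trip and folds in a simple temperature correction for the speed of sound:

```python
def hcsr04_distance_cm(echo_pulse_us, temp_c=20.0):
    """Convert an HC-SR04 echo pulse width (microseconds) to distance (cm).
    The pulse spans the sound's round trip, so the time is halved; the speed
    of sound is approximated as 331.3 + 0.606 * T m/s for air at T deg C."""
    speed_m_s = 331.3 + 0.606 * temp_c
    one_way_s = echo_pulse_us * 1e-6 / 2.0
    return one_way_s * speed_m_s * 100.0
```

On an STM32 the pulse width would typically be measured with a timer in input-capture mode; only the conversion step is shown here.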

Posted Content
TL;DR: The proposed end-to-end neural network model consists of an encoder for speech text understanding and a decoder to generate a sequence of gestures that successfully produces various gestures including iconic, metaphoric, deictic, and beat gestures.
Abstract: Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Existing robots use rule-based speech-gesture association, but this requires human labor and prior knowledge of experts to be implemented. We present a learning-based co-speech gesture generation that is learned from 52 h of TED talks. The proposed end-to-end neural network model consists of an encoder for speech text understanding and a decoder to generate a sequence of gestures. The model successfully produces various gestures including iconic, metaphoric, deictic, and beat gestures. In a subjective evaluation, participants reported that the gestures were human-like and matched the speech content. We also demonstrate a co-speech gesture with a NAO robot working in real time.

Journal ArticleDOI
TL;DR: A navigational controller has been developed for a humanoid using fuzzy logic as an intelligent algorithm to avoid the obstacles present in the environment and reach the desired target position safely.

Journal ArticleDOI
27 Jun 2018
TL;DR: A map-less visual navigation system for biped humanoid robots, which extracts information from color images to derive motion commands using deep reinforcement learning (DRL) using the Deep Deterministic Policy Gradients (DDPG) algorithm, which corresponds to an actor-critic DRL algorithm.
Abstract: In this letter, we propose a map-less visual navigation system for biped humanoid robots, which extracts information from color images to derive motion commands using deep reinforcement learning (DRL). The map-less visual navigation policy is trained using the Deep Deterministic Policy Gradients (DDPG) algorithm, which corresponds to an actor-critic DRL algorithm. The algorithm is implemented using two separate networks, one for the actor and one for the critic, but with similar structures. In addition to convolutional and fully connected layers, Long Short-Term Memory (LSTM) layers are included to address the limited observability present in the problem. As a proof of concept, we consider the case of robotic soccer using humanoid NAO V5 robots, which have reduced computational capabilities, and low-cost Red-Green-Blue (RGB) cameras as main sensors. The use of DRL allowed us to obtain a complex and high-performing policy from scratch, without any prior knowledge of the domain or the dynamics involved. The visual navigation policy is trained in a robotic simulator and then successfully transferred to a physical robot, where it is able to run in 20 ms, allowing its use in real-time applications.
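At the heart of DDPG is the deterministic policy gradient: the actor is updated along the critic's action gradient, chained through the actor's parameters. The toy below (a hedged sketch with a linear actor and a known closed-form critic, nothing like the paper's convolutional/LSTM networks) shows just that update rule:

```python
def train_linear_actor(steps=50, lr=0.1, s=1.0):
    """Deterministic policy gradient on a toy 1-D problem: actor a = w * s,
    known critic Q(s, a) = -(a - s)^2, so the optimal action is a = s.
    The actor parameter w ascends Q via the chain rule dQ/dw = dQ/da * da/dw."""
    w = 0.0
    for _ in range(steps):
        a = w * s
        dq_da = -2.0 * (a - s)   # critic's gradient w.r.t. the action
        da_dw = s                # actor output's gradient w.r.t. its weight
        w += lr * dq_da * da_dw
    return w
```

In the full algorithm the critic is itself a learned network trained from replayed transitions, target networks stabilize both updates, and exploration noise is added to the actor's output.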

Journal ArticleDOI
TL;DR: The results show that the data-driven computational model is able to provide correct and stable joint angle ranges and to reproduce angle ranges from the designed VFs.
Abstract: In this paper, we have designed the vector fields (VFs) for all six joints (hip, knee, and ankle) of a bipedal walking model. The bipedal gait is the manifestation of temporal changes in the six joint angles, two each for the hip, knee, and ankle, and it is a combination of seven different discrete subphases. Developing the correct joint trajectories for all six joints was difficult from a purely mechanics-based model due to its inherent complexities, yet correct and exact joint trajectories are essential for a modern bipedal robot to walk stably. By designing the VFs correctly, we are able to obtain stable joint trajectory ranges and to reproduce angle ranges from these designed VFs. This is a purely data-driven computational modeling approach, based on the hypothesis that morphologically similar structures (human-robot) can adopt similar gait patterns. To validate the correctness of the design, we applied all possible combinations of joint trajectories to the HOAP-2 bipedal robot, which could walk successfully while maintaining its stability. The VF provides joint trajectories for a particular joint. The results show that our data-driven computational model is able to provide correct and stable joint angle ranges. Note to Practitioners: In this research, we have developed the vector field (VF) for each joint (hip, knee, and ankle) of a biped, which plays an important role in walking. The idea is novel and based on a data-driven computational model. The generated trajectories are applied on the HOAP-2 bipedal humanoid robot, and the joint trajectories from the VF are compared with the HOAP-2 model and a hybrid automata model.

Journal ArticleDOI
TL;DR: An online walking-pattern generation algorithm with footstep adjustment that helps a humanoid robot (DRC-HUBO+) to regain balance following disturbance, i.e., from strong pushing or stepping on unexpected obstacles.

Journal ArticleDOI
TL;DR: This theoretical review examined the results of several studies suggesting a form function attribution bias (FFAB) and outlined the implications that the design of a robot has for the human predisposition to interact socially with robots.
Abstract: People seem to miscalibrate their expectations and interactions with a robot. When it comes to robot design, the anthropomorphism level of the robot form (appearance) has become an increasingly important variable to consider. It is argued here that people base their expectations and perceptions of a robot on its form and attribute functions which do not necessarily mirror the true functions of the robot. The term form function attribution bias (FFAB) refers to the cognitive bias which occurs when people are prone to perceptual errors, leading to a biased interpretation of a robot’s functionality. We argue that rather than objectively perceiving the robot’s functionalities, people take a cognitive shortcut using the information available to them through visual perception. FFAB intends to outline the implications the design of a robot has on the human predisposition to interact socially with robots. In this theoretical review, we examined the results of several studies suggesting an FFAB. We outline future directions of experimental paradigms and robot design implications.

Journal ArticleDOI
01 Jan 2018
TL;DR: A comprehensive contextualization of humanoid robots in healthcare is presented by identifying and characterizing active research on humanoid robots that can work interactively and effectively with humans, so as to fill identified gaps in current healthcare provision.
Abstract: Humanoid robots have evolved over the years and are today applied in many different areas, from homecare to social care and healthcare robotics. This paper gives a brief overview of the current and potential applications of humanoid robotics in healthcare settings. We present a comprehensive contextualization of humanoid robots in healthcare by identifying and characterizing active research on humanoid robots that can work interactively and effectively with humans, so as to fill identified gaps in current healthcare provision.

Journal ArticleDOI
TL;DR: An overview of the existing evidence related to the views of nurses and other health and social care workers about the use of assistive humanoid and animal-like robots is provided.
Abstract: Background: Robots are being introduced in many health and social care settings. Objectives: To provide an overview of the existing evidence related to the views of nurses and other health and social care...

Journal ArticleDOI
TL;DR: A novel hybridization scheme for the path planning and navigation of humanoids in a cluttered environment, combining a regression technique with adaptive ant colony optimization on NAO humanoid robots.
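To make the ant colony optimization (ACO) half of such a hybrid concrete, here is a minimal grid-based ACO path-planning sketch. It is not the paper's adaptive variant or its regression stage; the function names, parameters, and pheromone rule are simplified illustrations of the basic ACO loop only.

```python
import random
from collections import defaultdict

# Minimal ACO sketch for path planning on an occupancy grid (0 = free cell).
# Ants random-walk from start to goal, biased by edge pheromone; the best
# path found so far deposits pheromone after each iteration's evaporation.
def aco_path(grid, start, goal, n_ants=20, n_iters=20, evap=0.5, q=1.0, max_steps=200):
    rows, cols = len(grid), len(grid[0])

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    tau = defaultdict(lambda: 1.0)   # pheromone per directed edge
    best = None
    for _ in range(n_iters):
        for _ in range(n_ants):
            path, cur = [start], start
            for _ in range(max_steps):
                if cur == goal:
                    break
                nbrs = list(neighbors(cur))
                weights = [tau[(cur, n)] for n in nbrs]
                cur = random.choices(nbrs, weights=weights)[0]
                path.append(cur)
            if cur == goal and (best is None or len(path) < len(best)):
                best = path
        # Evaporate, then reinforce the best-so-far path (shorter => stronger).
        for edge in list(tau):
            tau[edge] *= (1 - evap)
        if best:
            for a, b in zip(best, best[1:]):
                tau[(a, b)] += q / len(best)
    return best

random.seed(0)
grid = [[0] * 5 for _ in range(5)]   # 5x5 obstacle-free grid
path = aco_path(grid, (0, 0), (4, 4))
print(path[0], path[-1])             # (0, 0) (4, 4)
```

The "adaptive" aspect in such schemes typically means tuning the evaporation rate or deposit amount online; here both are fixed constants for brevity.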

Journal ArticleDOI
16 Jul 2018
TL;DR: The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformity to its suggestions, and gambling.
Abstract: Issues such as information security and overtrust in robots are gaining increasing relevance. This research aims at giving an insight into how trust toward robots could be exploited for the purpose of social engineering. Drawing on Mitnick's model, a well-known social engineering framework, an interactive scenario with the humanoid robot iCub was designed to emulate a social engineering attack. At first, iCub attempted to collect the kind of personal information usually gathered by social engineers by asking a series of private questions. Then, the robot tried to develop trust and rapport with participants by offering reliable clues during a treasure hunt game. At the end of the treasure hunt, the robot tried to exploit the gained trust to make participants gamble the money they had won. The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformity to its suggestions, and gambling.

Journal ArticleDOI
31 Jul 2018
TL;DR: The results show that it is possible to model (nonverbal) signals exchanged by humans during interaction, and how to incorporate such a mechanism in robotic systems with the twin goal of being able to “read” human action intentions and acting in a way that is legible by humans.
Abstract: Humans have the fascinating capacity of processing nonverbal visual cues to understand and anticipate the actions of other humans. This “intention reading” ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset with recordings of human body motion and eye gaze, acquired in an experimental scenario with an actor interacting with three subjects. From these data, we conducted a human study to analyze the importance of different nonverbal cues for action perception. As our second contribution, we used the motion/gaze recordings to build a computational model describing the interaction between two persons. As a third contribution, we embedded this model in the controller of an iCub humanoid robot and conducted a second human study, in the same scenario with the robot as an actor, to validate the model's “intention reading” capability. Our results show that it is possible to model (nonverbal) signals exchanged by humans during interaction, and to incorporate such a mechanism in robotic systems with the twin goals of being able to “read” human action intentions and acting in a way that is legible by humans.
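A hedged, greatly simplified sketch of the "intention reading" idea (the paper's model combines full-body motion and gaze; everything below, including the function name and the Gaussian noise assumption, is a hypothetical reduction): infer which of several candidate objects an actor intends to act on from noisy gaze-direction samples, by scoring each candidate's likelihood.

```python
# Hypothetical illustration of intention reading from nonverbal cues:
# given noisy gaze-bearing samples (radians) and the bearings of candidate
# objects, pick the object whose bearing best explains the gaze, assuming
# Gaussian observation noise. This is only the underlying Bayesian idea,
# not the model used in the paper.
def infer_target(gaze_angles, object_angles, sigma=0.15):
    scores = []
    for mu in object_angles:
        # Log-likelihood of all gaze samples under a Gaussian centered on mu
        # (constant terms dropped, since they do not affect the argmax).
        ll = sum(-(g - mu) ** 2 / (2 * sigma ** 2) for g in gaze_angles)
        scores.append(ll)
    return max(range(len(scores)), key=scores.__getitem__)

objects = [-0.6, 0.0, 0.6]           # bearings (rad) of three candidate objects
gaze = [0.55, 0.62, 0.58, 0.49]      # noisy fixations near the rightmost object
print(infer_target(gaze, objects))   # 2
```

Running such an estimator on a stream of fixations lets a robot commit to a predicted target before the human's reaching motion completes, which is what makes the interaction legible in both directions.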