Topic

Humanoid robot

About: Humanoid robot is a research topic. Over its lifetime, 14,387 publications have been published on this topic, receiving 243,674 citations. The topic is also known as: đŸ€–.


Papers
Journal Article ‱ DOI
TL;DR: Results show that the children found the activity more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners in their second sessions of playing with human adults than in their first sessions; and although the children with autism were more interested in and entertained by the robotic partner, they showed more examples of collaborative play and cooperation while playing with the human adult.
Abstract: This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children’s experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity to be more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (For the purposes of this article, ‘partner’ refers to the human/robotic agent which interacts with the children with autism. We are not using the term’s other meanings that refer to specific relationships or emotional involvement between two individuals.) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children’s intermediary play session with the humanoid robot impacted their subsequent play session with the human adult. However, another longer and more thorough study would have to be conducted in order to better re-interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, the children showed more examples of collaborative play and cooperation while playing with the human adult.

128 citations

Proceedings Article ‱ DOI
03 May 2010
TL;DR: The design of a robot's head faces contradicting requirements when integrating powerful sensing with social expression; reactions of the general public show that current head designs often cause negative user reactions and distract from the functional capabilities.
Abstract: A robot's head is important both for directional sensors and, in human-directed robotics, as the single most visible interaction interface. However, designing a robot's head faces contradicting requirements when integrating powerful sensing with social expression. Further, reactions of the general public show that current head designs often cause negative user reactions and distract from the functional capabilities.

127 citations

Journal Article ‱ DOI
TL;DR: The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks, and demonstrates the pertinence of these cues through statistical measures of human action times in a cooperative task, as gaze significantly facilitates cooperation.
Abstract: Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head-fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
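As an illustration of the kind of comparison the paper reports, the sketch below uses purely synthetic data (means, spreads, and sample sizes are made up, not the study's measurements) to compare human response times under the three gaze conditions named above with a one-way ANOVA.

# Minimal sketch with synthetic data, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical response times in seconds for each gaze condition
full_gaze  = rng.normal(1.10, 0.15, 30)   # coordinated eye and head
sunglasses = rng.normal(1.25, 0.15, 30)   # eyes hidden
head_fixed = rng.normal(1.35, 0.15, 30)   # head does not move

f_stat, p_value = stats.f_oneway(full_gaze, sunglasses, head_fixed)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

A small p-value in such a test would indicate that the gaze condition affects response time, consistent with the paper's finding that gaze significantly facilitates cooperation.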

127 citations

Proceedings Article ‱ DOI
10 Dec 2007
TL;DR: An imitation learning algorithm for a humanoid robot on top of a general world model provided by learned object affordances, which is used to recognize the demonstration by another agent and infer the task to be learned.
Abstract: In this paper we build an imitation learning algorithm for a humanoid robot on top of a general world model provided by learned object affordances. We consider that the robot has previously learned a task independent affordance-based model of its interaction with the world. This model is used to recognize the demonstration by another agent (a human) and infer the task to be learned. We discuss several important problems that arise in this combined framework, such as the influence of an inaccurate model in the recognition of the demonstration. We illustrate the ideas in the paper with some experimental results obtained with a real robot.
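A minimal sketch of the recognition step described above, under strong simplifying assumptions: the affordance model is reduced to a table of effect probabilities per (action, object) pair, and the object, action, and effect names are hypothetical. The point is only that recognizing a demonstration reduces to picking the action whose predicted effects best explain what was observed.

# Minimal sketch, not the authors' implementation; all names are hypothetical.
from collections import defaultdict

# Hypothetical learned affordance model: P(effect | action, object)
affordances = {
    ("tap",   "ball"): {"moved": 0.8, "no_change": 0.2},
    ("tap",   "box"):  {"moved": 0.3, "no_change": 0.7},
    ("grasp", "ball"): {"lifted": 0.7, "no_change": 0.3},
    ("grasp", "box"):  {"lifted": 0.6, "no_change": 0.4},
}

def recognize_demonstration(obj, observed_effects):
    """Return the action most likely to have produced the observed effects."""
    scores = defaultdict(float)
    for (action, o), effect_dist in affordances.items():
        if o != obj:
            continue
        likelihood = 1.0
        for effect in observed_effects:
            likelihood *= effect_dist.get(effect, 1e-3)  # floor for unseen effects
        scores[action] = likelihood
    return max(scores, key=scores.get)

# A human taps a ball and the robot observes that it moved:
print(recognize_demonstration("ball", ["moved"]))  # -> "tap"

An inaccurate affordance model, one of the problems the paper discusses, would show up here as misleading effect probabilities and hence a wrong inferred task.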

127 citations

Journal Article ‱ DOI
01 Jan 2017
TL;DR: The particle swarm optimization method has been employed to optimize the trajectory of each joint, such that satisfactory parameter estimation can be obtained, and the estimated inertia parameters are taken as the initial values for the RNE-based adaptive control design to achieve improved tracking performance.
Abstract: In this paper, model identification and adaptive control design are performed on the Denavit-Hartenberg model of a humanoid robot. We focus on the modeling of the 6 degree-of-freedom upper limb of the robot using the recursive Newton-Euler (RNE) formula for the coordinate frame of each joint. To obtain sufficient excitation for modeling of the robot, the particle swarm optimization method has been employed to optimize the trajectory of each joint, such that satisfactory parameter estimation can be obtained. In addition, the estimated inertia parameters are taken as the initial values for the RNE-based adaptive control design to achieve improved tracking performance. Simulation studies have been carried out to verify the result of the identification algorithm and to illustrate the effectiveness of the control design.
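A minimal sketch of the excitation-trajectory idea, assuming a toy regressor in place of the paper's recursive Newton-Euler regressor: particle swarm optimization searches trajectory parameters that reduce the condition number of the stacked regressor, after which the inertia parameters are estimated by least squares. All sizes, names, and the cost function here are hypothetical stand-ins, not the authors' code.

# Minimal sketch, not the authors' code. regressor() is a toy stand-in for the
# stacked RNE regressor of the 6-DOF upper limb; trajectory parameters are
# treated as excitation frequencies of a sinusoidal joint trajectory.
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES, N_PARAMS = 50, 6  # samples along the trajectory, parameters to estimate

def regressor(traj_params):
    # Toy regressor: sinusoidal basis whose frequencies are the trajectory
    # parameters. A real implementation would evaluate the RNE regressor
    # along the parameterized joint trajectories.
    t = np.linspace(0.0, 1.0, N_SAMPLES)
    return np.sin(np.outer(t, traj_params))

def cost(traj_params):
    # Lower condition number of the regressor = better excitation.
    return np.linalg.cond(regressor(traj_params))

def pso(n_particles=20, n_iter=60, lo=0.5, hi=20.0):
    # Basic particle swarm optimization over the trajectory parameters.
    x = rng.uniform(lo, hi, (n_particles, N_PARAMS))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, N_PARAMS))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

best_traj, best_cond = pso()
Y = regressor(best_traj)                                   # excitation regressor
theta_true = rng.normal(size=N_PARAMS)                     # "true" inertia parameters
tau = Y @ theta_true + 0.01 * rng.normal(size=N_SAMPLES)   # simulated joint torques
theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)        # least-squares identification
print(f"condition number: {best_cond:.1f}, "
      f"estimation error: {np.linalg.norm(theta_hat - theta_true):.3f}")

In the paper, the estimated parameters then seed an RNE-based adaptive controller; that step is not sketched here.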

127 citations


Network Information
Related Topics (5)

Mobile robot: 66.7K papers, 1.1M citations, 96% related
Robot: 103.8K papers, 1.3M citations, 95% related
Adaptive control: 60.1K papers, 1.2M citations, 84% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Object detection: 46.1K papers, 1.3M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  253
2022  759
2021  573
2020  647
2019  801
2018  921