
Humanoid robot

About: Humanoid robot is a research topic. Over its lifetime, 14,387 publications have been published within this topic, receiving 243,674 citations.


Papers
Proceedings ArticleDOI
08 Dec 2003
TL;DR: The authors demonstrate that SDR-4X can spontaneously and passively interact with a human, and that its architecture satisfies three basic requirements: concurrent evaluation of each behavior module's situation, concurrent execution of multiple behavior modules, and preemption.
Abstract: In this paper we describe the autonomous behavior control architecture of SDR-4X, which serves to integrate multi-modal recognition and motion control technologies. We overview the entire software architecture of SDR-4X, which is composed of perception, short and long term memory, behavior control, and motion control parts. Regarding autonomous behavior control, we further focus on issues such as spontaneous behavior generation using a homeostasis regulation mechanism, and a behavior control/selection mechanism with tree-structured situated behavior modules. In the autonomous behavior control architecture, we achieve three basic requirements, which are the concurrent evaluation of the situation of each behavior module, concurrent execution of multiple behavior modules, and preemption (behavior interruption/resume capability). Using the autonomous behavior control architecture described, we demonstrate that SDR-4X can spontaneously and passively interact with a human.
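The three requirements named in the abstract — concurrent evaluation, concurrent execution, and preemption of situated behavior modules — can be sketched as a small controller loop. This is an illustrative stand-in, not the SDR-4X codebase; all class and field names are invented.

```python
# Hypothetical sketch of tree-structured situated behavior modules:
# every module concurrently scores the current situation, the best
# scorers run in parallel slots, and a displaced module is preempted
# (suspended with resumable state) rather than discarded.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class BehaviorModule:
    name: str
    evaluate: Callable[[dict], float]   # situation -> activation score
    running: bool = False
    resumable_state: dict = field(default_factory=dict)

class BehaviorController:
    def __init__(self, modules: List[BehaviorModule], slots: int = 2):
        self.modules = modules
        self.slots = slots              # modules allowed to run concurrently

    def step(self, situation: dict) -> List[str]:
        # 1. Concurrent evaluation: every module scores the situation.
        scored = sorted(self.modules,
                        key=lambda m: m.evaluate(situation), reverse=True)
        winners = [m for m in scored if m.evaluate(situation) > 0][: self.slots]
        # 2. Preemption: running modules pushed out of the slots are
        #    suspended, keeping state so they can resume later.
        for m in self.modules:
            if m.running and m not in winners:
                m.resumable_state["suspended"] = True
                m.running = False
        # 3. Concurrent execution: all winners run this cycle.
        for m in winners:
            m.running = True
        return [m.name for m in winners]
```

A homeostasis-style drive could be modeled as one more module whose score rises over time, causing it to eventually preempt lower-priority behaviors.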

110 citations

Journal ArticleDOI
TL;DR: A general and practical planning framework generates 3-D collision-free motions that account for complex robot dynamics; its effectiveness is demonstrated on a space manipulator with highly nonlinear dynamics and on a humanoid robot performing dynamic manipulation and locomotion at the same time.
Abstract: We propose a general and practical planning framework for generating 3-D collision-free motions that take complex robot dynamics into account. The framework consists of two stages that are applied iteratively. In the first stage, a collision-free path is obtained through efficient geometric and kinematic sampling-based motion planning. In the second stage, the path is transformed into dynamically executable robot trajectories by dedicated dynamic motion generators. In the proposed iterative method, those dynamic trajectories are sent back again to the first stage to check for collisions. Depending on the application, temporal or spatial reshaping methods are used to treat detected collisions. Temporal reshaping adjusts the velocity, whereas spatial reshaping deforms the path itself. We demonstrate the effectiveness of the proposed method through examples of a space manipulator with highly nonlinear dynamics and a humanoid robot executing dynamic manipulation and locomotion at the same time.
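The two-stage iteration in the abstract — geometric planning, dynamic trajectory generation, then temporal or spatial reshaping when the dynamics reintroduce collisions — can be illustrated with a toy 1-D example. The obstacle interval, the overshoot "dynamics", and the reshaping rules below are all invented for illustration, not the paper's algorithms.

```python
# Toy 1-D illustration of the iterative two-stage planning scheme:
# stage 1 yields a geometric path (waypoints on a line); stage 2 adds
# "dynamics" (speed-proportional overshoot); collisions are fixed first
# by temporal reshaping (halving speed), then by spatial reshaping
# (shifting the offending waypoints).
OBSTACLE = (0.4, 0.6)   # forbidden interval on the line

def dynamic_generator(path, speed):
    # stand-in dynamics: the robot overshoots each waypoint with speed
    return [p + 0.1 * speed for p in path]

def collisions(traj):
    return [i for i, p in enumerate(traj) if OBSTACLE[0] <= p <= OBSTACLE[1]]

def plan(path, speed=1.0, max_iters=20):
    for _ in range(max_iters):
        traj = dynamic_generator(path, speed)   # stage 2
        hits = collisions(traj)                 # feed back to stage 1
        if not hits:
            return traj
        if speed > 0.25:
            speed *= 0.5                        # temporal reshaping: slow down
        else:
            # spatial reshaping: deform the colliding waypoints
            path = [p - 0.05 if i in hits else p for i, p in enumerate(path)]
    raise RuntimeError("no dynamically feasible trajectory found")
```

The same loop structure applies in 3-D, with a sampling-based planner in stage 1 and dedicated dynamic motion generators in stage 2.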

110 citations

Proceedings ArticleDOI
20 May 2019
TL;DR: In this paper, an end-to-end neural network model is proposed, consisting of an encoder for speech-text understanding and a decoder that generates a sequence of gestures, including iconic, metaphoric, deictic, and beat gestures.
Abstract: Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Most existing robots use rule-based speech-gesture association, but this requires human labor and expert prior knowledge to implement. We present a learning-based co-speech gesture generation method trained on 52 h of TED talks. The proposed end-to-end neural network model consists of an encoder for speech-text understanding and a decoder that generates a sequence of gestures. The model successfully produces various gestures, including iconic, metaphoric, deictic, and beat gestures. In a subjective evaluation, participants reported that the gestures were human-like and matched the speech content. We also demonstrate co-speech gestures on a NAO robot working in real time.
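The encoder-decoder idea above can be sketched in miniature: the encoder condenses speech text into one context vector, and the decoder autoregressively unrolls it into gesture "frames". The real system uses learned recurrent networks; everything here, including the deterministic pseudo-embeddings, is an illustrative stand-in.

```python
# Toy encoder-decoder for text-to-gesture generation.
import math

def embed(word, dim=4):
    # deterministic pseudo-embedding (stand-in for a learned lookup table)
    seed = sum(ord(c) for c in word)
    return [math.sin(seed + i) for i in range(dim)]

def encode(text):
    # encoder: mean-pool word embeddings into a single context vector
    vecs = [embed(w) for w in text.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def decode(context, n_frames=3):
    # decoder: emit one gesture frame per step, feeding each output
    # back in (a crude autoregressive loop)
    frames, state = [], context
    for t in range(n_frames):
        frame = [math.tanh(x + 0.1 * t) for x in state]
        frames.append(frame)
        state = frame
    return frames

def text_to_gesture(text, n_frames=3):
    return decode(encode(text), n_frames)
```

In the trained model, each frame would be a pose vector for the robot's upper body rather than an abstract 4-vector, and the mapping would be learned from the TED-talk data.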

109 citations

Proceedings ArticleDOI
Kei Okada, Takashi Ogura, A. Haneda, Junya Fujimoto, F. Gravot, Masayuki Inaba
29 Jul 2005
TL;DR: The software system of a humanoid robot is described from a motion-generation perspective, taking kitchen-helping behaviors as an example of a real-world task, and the role of perception-based motion generation using a vision sensor and force sensors is discussed.
Abstract: This paper describes the software system of a humanoid robot from a motion-generation perspective, taking kitchen-helping behaviors as an example of a real-world task. Our software consists of high-level reasoning modules, including a 3D-geometric-model-based action/motion planner, and runtime modules containing a 3D visual processor, a force manipulation controller, and a walking controller. We discuss how motion-generation functions based on the high-level motion and action planner contribute to various real-world humanoid tasks, as well as the role of perception-based motion generation using a vision sensor and force sensors.
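The planner/runtime split described above can be sketched as a thin pipeline: a high-level planner emits symbolic action steps, and runtime modules (vision, force, walking) handle each step. The module names, action strings, and kitchen task below are placeholders, not the paper's actual API.

```python
# Hedged sketch of a high-level planner dispatching to runtime modules.
from typing import List

class Planner:
    def plan(self, goal: str) -> List[str]:
        # stand-in for geometric-model-based action/motion planning:
        # a fixed symbolic action sequence for the illustrative goal
        return ["walk_to:sink", "look_at:cup", "grasp:cup", "walk_to:shelf"]

class Runtime:
    def execute(self, action: str) -> str:
        # route each symbolic action to the runtime module that owns it
        verb, _, target = action.partition(":")
        module = {"walk_to": "walking controller",
                  "look_at": "3D visual processor",
                  "grasp": "force manipulation controller"}[verb]
        return f"{module} handles {verb} {target}"

def run_task(goal: str) -> List[str]:
    planner, runtime = Planner(), Runtime()
    return [runtime.execute(a) for a in planner.plan(goal)]
```

In the actual system, each runtime module would close a sensor feedback loop (vision for `look_at`, force for `grasp`) rather than return a log string.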

109 citations

Proceedings ArticleDOI
03 Mar 2013
TL;DR: The model validated that individuals preferred to interact with a robot that had the same personality as their own, and that an adapted mixed (gesture and speech) robot behavior was more engaging and effective than a speech-only robot behavior in an interaction.
Abstract: Robots are increasingly present in our daily lives; they must move into human-centered environments, interact with humans, and obey social rules so as to produce appropriate social behavior in accordance with the human's profile (i.e., personality, mood, and preferences). Recent research has discussed the effect of personality traits on verbal and nonverbal production, which plays a major role in transferring and understanding messages in social interaction between a human and a robot. The characteristics of the generated gestures (e.g., amplitude, direction, rate, and speed) during nonverbal communication can differ according to the personality trait, which similarly influences the verbal content of human speech in terms of verbosity, repetition, etc. Our research therefore tries to map a human's verbal behavior to a corresponding combined verbal-nonverbal robot behavior based on the personality dimensions of the interacting human. The system first estimates the interacting human's personality traits through a psycholinguistic analysis of the spoken language, then uses the PERSONAGE natural language generator to produce verbal language corresponding to the estimated traits. Gestures are generated with the BEAT toolkit, which performs a linguistic and contextual analysis of the generated language, relying on rules derived from extensive research into human conversational behavior. We explored the human-robot personality-matching aspect and the differences between an adapted mixed robot behavior (gesture and speech) and an adapted speech-only robot behavior in an interaction. Our model validated that individuals preferred to interact with a robot that had the same personality as their own, and that the adapted mixed behavior was more engaging and effective than the speech-only behavior. Our experiments were conducted with the Nao robot.
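The pipeline above — estimate a personality trait from psycholinguistic cues, then select matching verbal and gesture styles — can be illustrated with a toy scorer. The real system uses PERSONAGE and BEAT; the verbosity/exclamation heuristic, weights, and threshold below are invented placeholders.

```python
# Toy personality-matching pipeline: score extraversion from simple
# text cues, then pick a matching robot speech/gesture style.
def estimate_extraversion(utterance: str) -> float:
    words = utterance.lower().split()
    if not words:
        return 0.0
    verbosity = min(len(words) / 20.0, 1.0)       # longer -> more extraverted
    exclaim = min(utterance.count("!") / 2.0, 1.0)
    return 0.7 * verbosity + 0.3 * exclaim        # score in [0, 1]

def adapt_robot_behavior(utterance: str) -> dict:
    # match the robot's combined verbal-nonverbal style to the user
    e = estimate_extraversion(utterance)
    if e > 0.5:   # extraverted user -> extraverted robot
        return {"speech": "verbose, upbeat phrasing",
                "gesture": "wide, fast, frequent gestures"}
    return {"speech": "concise, calm phrasing",
            "gesture": "small, slow, sparse gestures"}
```

The paper's finding that matched personalities are preferred corresponds to always selecting the style on the user's side of the threshold, rather than its opposite.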

109 citations


Network Information
Related Topics (5)
  Mobile robot: 66.7K papers, 1.1M citations (96% related)
  Robot: 103.8K papers, 1.3M citations (95% related)
  Adaptive control: 60.1K papers, 1.2M citations (84% related)
  Control theory: 299.6K papers, 3.1M citations (83% related)
  Object detection: 46.1K papers, 1.3M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

  Year    Papers
  2023    253
  2022    759
  2021    573
  2020    647
  2019    801
  2018    921