
Humanoid robot

About: Humanoid robot is a research topic. Over its lifetime, 14,387 publications have been published within this topic, receiving 243,674 citations. The topic is also known as: 🤖.


Papers
Proceedings ArticleDOI
01 Dec 2012
TL;DR: This work implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao, and greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics.
Abstract: The paper presents a multimodal conversational interaction system for the Nao humanoid robot. The system was developed at the 8th International Summer Workshop on Multimodal Interfaces, Metz, 2012. We implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao. This greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics. In addition to speech interaction, we developed a wide range of multimodal interactive behaviours by the robot, including face-tracking, nodding, communicative gesturing, proximity detection and tactile interrupts. We made video recordings of user interactions and used questionnaires to evaluate the system. We further extended the robot's capabilities by linking Nao with Kinect.

58 citations
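
A minimal sketch of the Wikipedia-driven dialogue loop such a system implies, assuming the `wikipedia` Python package as a stand-in content backend and hypothetical `say`/`listen` helpers in place of the robot's actual text-to-speech and speech-recognition stack (this is not the WikiTalk implementation):

```python
# Sketch of an open-domain, Wikipedia-driven dialogue loop (illustrative only).
import wikipedia  # pip install wikipedia

def say(text):
    """Hypothetical stand-in for the robot's text-to-speech call."""
    print("ROBOT:", text)

def listen():
    """Hypothetical stand-in for the robot's speech recognition."""
    return input("USER: ").strip()

def wikitalk_loop(start_topic="Humanoid robot"):
    topic = start_topic
    while True:
        page = wikipedia.page(topic, auto_suggest=False)
        say(wikipedia.summary(topic, sentences=2, auto_suggest=False))
        say("Shall I continue, or switch to a related topic: "
            + ", ".join(page.links[:3]) + "?")
        reply = listen().lower()
        if reply in ("stop", "quit"):
            break
        # Jump to any linked topic the user names; otherwise stay on the topic.
        matches = [link for link in page.links if link.lower() in reply]
        if matches:
            topic = matches[0]

if __name__ == "__main__":
    wikitalk_loop()
```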

Proceedings ArticleDOI
01 Sep 2017
TL;DR: A particle filter is introduced that is designed to be robust to the temporal variation occurring as the camera and the target move at different relative velocities, which can lead to a loss of visual information and missed detections.
Abstract: Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision, as they respond only to changes in the scene and have a very high temporal resolution (< 1μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from information loss “between frames”, which can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to follow the target position within the spatio-temporal data while rejecting clutter events that occur as a robot moves in a typical office setting. We introduce a particle filter designed to be robust to the temporal variation that occurs as the camera and the target move at different relative velocities, which can lead to a loss of visual information and missed detections. The proposed system provides more persistent tracking than the prior state of the art, especially when the robot is actively following a target with its gaze. Experiments are performed on the iCub humanoid robot performing ball tracking and gaze following.

58 citations
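
A minimal sketch of an event-based particle filter in this spirit, assuming events arrive as batches of (x, y) coordinates and using an illustrative random-walk motion model and event-density likelihood (the parameters and event format are assumptions, not the authors' implementation):

```python
# Illustrative particle filter for event-based target tracking.
import numpy as np

def track(event_batches, n_particles=200, motion_std=5.0, radius=4.0):
    rng = np.random.default_rng(0)
    # Particle hypotheses over pixel coordinates (assumed 240-pixel-wide sensor).
    particles = rng.uniform(0, 240, size=(n_particles, 2))
    for events in event_batches:                 # events: array of shape (N, 2)
        # Predict: random-walk motion model.
        particles += rng.normal(0.0, motion_std, particles.shape)
        # Update: weight each particle by the number of nearby events.
        d = np.linalg.norm(particles[:, None, :] - events[None, :, :], axis=2)
        weights = (d < radius).sum(axis=1).astype(float) + 1e-6
        weights /= weights.sum()
        # Resample proportionally to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        yield particles.mean(axis=0)             # current target estimate
```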

Journal ArticleDOI
01 Aug 2008
TL;DR: A system which manages this task by combining depth from a stereo camera and computation of the camera movement from robot kinematics in order to stabilize the camera images is described.
Abstract: For the interaction of a mobile robot with a dynamic environment, the estimation of object motion is desired while the robot is walking and/or turning its head. In this paper, we describe a system which manages this task by combining depth from a stereo camera and computation of the camera movement from robot kinematics in order to stabilize the camera images. Moving objects are detected by applying optical flow to the stabilized images followed by a filtering method, which incorporates both prior knowledge about the accuracy of the measurement and the uncertainties of the measurement process itself. The efficiency of this system is demonstrated in a dynamic real-world scenario with a walking humanoid robot.

58 citations
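
A rough sketch of the stabilize-then-detect idea, assuming the ego-motion predicted from kinematics is available as a homography `H_pred` and using OpenCV's Farneback optical flow for illustration; the threshold and warp model are assumptions, not the paper's system:

```python
# Warp the previous frame with the predicted camera motion, then flag pixels
# whose residual optical flow is large as candidate moving objects.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, H_pred, flow_thresh=2.0):
    h, w = curr_gray.shape
    # Compensate ego-motion: warp the previous frame into the current view.
    stabilized_prev = cv2.warpPerspective(prev_gray, H_pred, (w, h))
    # Residual flow on the stabilized pair is dominated by moving objects.
    flow = cv2.calcOpticalFlowFarneback(stabilized_prev, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return magnitude > flow_thresh   # boolean mask of candidate moving pixels
```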

Proceedings ArticleDOI
14 May 2012
TL;DR: This paper proposes a complete reformulation of the inverse-dynamics problem that cuts out its ill-conditioned part, addressing at once the problems of numerical stability and of computational cost.
Abstract: The most classical solution for generating whole-body motions on humanoid robots is to apply inverse kinematics to a set of tasks. It offers flexibility, repeatability and sensor feedback if needed, and can be applied in real time onboard the robot. However, it cannot capture the whole complexity of the robot dynamics, so inverse dynamics is a necessary evolution. Before it can be used as a generic motion generator, two important concerns must be addressed. First, when the forces and torques are included as variables in the motion-generation problem, the numerical conditioning can become very poor, inducing undesired behaviors or even divergence. Second, the computational cost of solving the problem is much higher than when considering the kinematics alone. This paper proposes a complete reformulation of the inverse-dynamics problem that cuts out the ill-conditioned part of the problem, thereby addressing at once the problems of numerical stability and of computational cost. The approach is validated on a set of dynamic whole-body movements of the HRP-2 robot.

58 citations
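
The variable-elimination idea can be illustrated in the contact-free case, where joint torques are recovered from the equations of motion rather than kept as decision variables; this is a toy sketch under that simplifying assumption, not the paper's full formulation with contact forces:

```python
# Toy "solve in a reduced variable set, then recover the rest" illustration.
# Dynamics terms M, h and the task Jacobian J are assumed to come from a
# rigid-body dynamics library.
import numpy as np

def task_inverse_dynamics(M, h, J, xdd_des, dJ_dt_qd):
    # Step 1: solve the task for joint accelerations only,
    #   J * qdd = xdd_des - dJ/dt * qd   (least squares over qdd).
    qdd, *_ = np.linalg.lstsq(J, xdd_des - dJ_dt_qd, rcond=None)
    # Step 2: recover joint torques from the equations of motion,
    #   tau = M * qdd + h, instead of keeping tau as a decision variable.
    tau = M @ qdd + h
    return qdd, tau
```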

Journal ArticleDOI
TL;DR: A target probability updating scheme is described, providing an efficient solution to the selection of the best next viewpoint in the problem of actively searching for an object in a 3-D environment under the constraint of a maximum search time using a visually guided humanoid robot with 26 degrees of freedom.
Abstract: We study the problem of actively searching for an object in a three-dimensional (3-D) environment under the constraint of a maximum search time, using a visually guided humanoid robot with 26 degrees of freedom. The inherent intractability of the problem is discussed, and a greedy strategy for selecting the best next viewpoint is employed. We describe a target probability updating scheme that approximates the optimal solution to the problem, providing an efficient solution to the selection of the best next viewpoint. We employ a hierarchical recognition architecture, inspired by human vision, that uses contextual cues for attending to the view-tuned units at the proper intrinsic scales and for active control of the robotic platform sensor's coordinate frame, which also gives us control of the extrinsic image scale and achieves the proper sequence of pathognomonic views of the scene. The recognition model makes no particular assumptions about shape properties such as texture and is trained by showing the object to the robot by hand. Our results demonstrate the feasibility of using state-of-the-art vision-based systems for efficient and reliable object localization in an indoor 3-D environment.

57 citations
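
A hedged sketch of a greedy next-best-view loop with a Bayesian target probability update after each unsuccessful view, assuming a discretized probability map and caller-supplied `visible_cells` and `look_and_detect` callbacks (all names and the detection model are illustrative assumptions):

```python
# Greedy viewpoint selection with a negative-observation probability update.
import numpy as np

def greedy_search(prob, viewpoints, visible_cells, look_and_detect,
                  p_detect=0.8, max_views=10):
    """prob: 1-D array of target probabilities over map cells (sums to 1).
    visible_cells(v): indices of the cells observed from viewpoint v.
    look_and_detect(v): move the gaze to v; return True if the target is seen."""
    for _ in range(max_views):
        # Greedy step: pick the view with the highest expected detection probability.
        gains = [p_detect * prob[visible_cells(v)].sum() for v in viewpoints]
        best = viewpoints[int(np.argmax(gains))]
        if look_and_detect(best):
            return best
        # Missed detection: down-weight the observed cells and renormalize
        # (Bayes update for a negative observation).
        cells = visible_cells(best)
        prob[cells] *= (1.0 - p_detect)
        prob /= prob.sum()
    return None
```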


Network Information
Related Topics (5)
Mobile robot: 66.7K papers, 1.1M citations, 96% related
Robot: 103.8K papers, 1.3M citations, 95% related
Adaptive control: 60.1K papers, 1.2M citations, 84% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Object detection: 46.1K papers, 1.3M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 253
2022: 759
2021: 573
2020: 647
2019: 801
2018: 921