Dissertation

Vision based motion generation for humanoid robots

04 Apr 2013
TL;DR: The last part of this thesis outlines directions where innovative ideas may break some current technical bottlenecks in humanoid robotics.
Abstract: This manuscript presents my research activities on real-time vision-based behaviors for complex robots such as humanoids. The main scientific question structuring this work is the following: "What are the decisional processes that make it possible for a humanoid robot to generate motion in real time based upon visual information?" In soccer, humans can decide to kick a ball while running and while all the other players are constantly moving. When recast as an optimization problem for a humanoid robot, finding a solution for such a behavior is generally computationally hard. For instance, the problem of visual search considered in this work is NP-complete. The first part of this work is concerned with real-time motion generation. Starting from the general constraints that a humanoid robot has to fulfill to generate a feasible motion, some core problems are presented. From this, several contributions allowing a humanoid robot to react to changes in the environment are presented. They revolve around walking pattern generation, whole-body motion for obstacle avoidance, and real-time footstep planning in constrained environments. The second part of this work is concerned with real-time acquisition of knowledge about the environment through computer vision. Two main behaviors are considered: visual search and visual object model construction. They are treated as a whole, taking into account the model of the sensor, the motion cost, the mechanical constraints of the robot, the geometry of the environment, as well as the limitations of the vision processes. In addition, contributions on coupling self-localization and map building with walking, and on real-time footstep generation based on visual servoing, are presented. Finally, the core technologies developed in the previous contexts were used in different applications: human-robot interaction, tele-operation, and human behavior analysis.
Based upon the feedback from several integrated demonstrators on the humanoid robot HRP-2, the last part of this thesis outlines directions where innovative ideas may break some current technical bottlenecks in humanoid robotics.
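The walking pattern generation mentioned in the abstract is commonly built on the linear inverted pendulum (cart-table) model, in which the zero-moment point p relates to the center of mass c by p = c - (z_c / g) * c_ddot. A minimal numpy sketch of that relation, with assumed parameter values; this is an illustration, not the thesis's implementation:

```python
import numpy as np

def zmp_from_com(com, z_c=0.8, g=9.81, dt=0.01):
    """Cart-table model: zmp = com - (z_c / g) * com_acceleration."""
    com_acc = np.gradient(np.gradient(com, dt), dt)  # finite-difference c_ddot
    return com - (z_c / g) * com_acc

# Sanity check: a CoM held still produces a ZMP directly below it.
com = np.full(100, 0.1)          # constant lateral CoM position [m]
zmp = zmp_from_com(com)
assert np.allclose(zmp, 0.1)
```

Pattern generators built on this model typically invert the relation, solving for a CoM trajectory whose resulting ZMP stays inside the planned footstep support polygons.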
Citations
01 Jan 2005
TL;DR: This paper describes a general passivity-based framework for the control of flexible joint robots and shows how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles.
Abstract: This paper describes a general passivity-based framework for the control of flexible joint robots. Recent results on torque, position, as well as impedance control of flexible joint robots are summarized, and the relations between the individual contributions are highlighted. It is shown that an inner torque feedback loop can be incorporated into a passivity-based analysis by interpreting torque feedback in terms of shaping of the motor inertia. This result, which implicitly was already included in earlier work on torque and position control, can also be used for the design of impedance controllers. For impedance control, furthermore, potential energy shaping is of special interest. It is shown how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles. All the presented controllers were experimentally evaluated on DLR lightweight robots and their performance and robustness shown with respect to uncertain model parameters. Experimental results with position controllers as well as an impact experiment are presented briefly, and an overview of several applications is given in which the controllers have been applied.
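The torque-feedback result summarized above can be stated compactly. Under the standard flexible-joint model, with notation assumed here (B the motor inertia matrix, tau the joint torque, tau_m the motor torque; not necessarily this paper's symbols), an inner torque loop shapes the apparent motor inertia:

```latex
% Motor-side dynamics:
B\ddot{\theta} + \tau = \tau_m .
% Choosing the inner torque feedback
\tau_m = B B_\theta^{-1} u + \left(I - B B_\theta^{-1}\right)\tau
% gives the closed-loop motor-side dynamics
B_\theta\ddot{\theta} + \tau = u ,
```

i.e. the physical motor inertia B is replaced by a chosen (typically smaller) B_theta, which is the passivity-preserving interpretation of torque feedback that the paper builds on.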

174 citations


15 May 2012
TL;DR: Petman is an anthropomorphic robot designed to test chemical protective clothing in an environmentally controlled test chamber, where it is exposed to chemical agents as it walks and does basic calisthenics.
Abstract: Petman is an anthropomorphic robot designed to test chemical protective clothing (Fig. 1). Petman will test Individual Protective Equipment (IPE) in an environmentally controlled test chamber, where it will be exposed to chemical agents as it walks and does basic calisthenics. Chemical sensors embedded in the skin of the robot will measure if, when, and where chemical agents are detected within the suit. The robot will perform its tests in a chamber under controlled temperature and wind conditions. A treadmill and turntable integrated into the wind-tunnel chamber allow for sustained walking experiments that can be oriented relative to the wind. Petman's skin is temperature controlled and even sweats in order to simulate physiologic conditions within the suit. When the robot is performing tests, a loose-fitting Intelligent Safety Harness (ISH) will be present to support, or to catch and restart, the robot should it lose balance or suffer a mechanical failure. The integrated system (the robot, chamber, treadmill/turntable, ISH, and the electrical, mechanical, and software systems for testing IPE) is called the Individual Protective Ensemble Mannequin System (Fig. 2) and is being built by a team of organizations. In 2009, when we began the design of Petman, there was no humanoid robot in the world that could meet the requirements set out for this program. Previous suit-testing robots, such as Portonman from the Defense Science and Technology Laboratory, used external actuation or fixtures to support the robot during exercises. Limitations of these systems include a limited repertoire of behaviors and the compromise of the protective suit.

24 citations

Dissertation
12 Dec 2008
TL;DR: In this article, a motion planner that works directly in the space of contacts is proposed.
Abstract: A system such as a humanoid is underactuated and hyper-redundant. It can only move through the interactions it has with its environment, that is, contacts, and it moves in a configuration space of very high dimension. The need to search for contacts to ensure locomotion implies that any object in the environment can be considered a potential support. In this respect, we depart from the classical assumptions of motion planning, in which objects are obstacles to avoid. In the context of a humanoid, these objects take on the dual status of obstacle and support. The objective of our work is to build a motion planner that can take this duality into account. We propose an approach that works directly in the space of contacts. To this end, we first develop tools that allow moving between the space of contacts and the configuration space: a posture generator that performs projection onto submanifolds of the configuration space, and a new type of bounding volume, named STP-BV, making it possible to include collision avoidance in posture generation. We then detail the general architecture and the various modules of our contact planner, before presenting its results in simulation and then on an HRP-2 robot.

6 citations

Proceedings Article
01 Jan 1998
TL;DR: The technique of interval arithmetic is described and it is shown how its use can lead to a greater understanding of depth from stereo estimates.
Abstract: Interval arithmetic is a method for performing computations on measurements that are only known to within a fixed error range. As the measurements are combined mathematically, their error intervals are also combined, in a conservative fashion. While the technique of interval analysis has found uses in many areas of computing in recent years, it has not yet been applied heavily in computer vision. We describe the technique of interval arithmetic and show how its use can lead to a greater understanding of depth from stereo estimates.
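The interval arithmetic described here is easy to sketch. Below is a minimal Python illustration (the class and the numeric values are assumptions of mine, not the paper's): conservative addition and division on intervals, applied to the stereo depth relation Z = f * b / d with a disparity known only to within half a pixel.

```python
class Interval:
    """Closed interval [lo, hi] with conservative arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __truediv__(self, other):
        # Conservative quotient; assumes 0 is not inside `other`.
        vals = [self.lo / other.lo, self.lo / other.hi,
                self.hi / other.lo, self.hi / other.hi]
        return Interval(min(vals), max(vals))

# Depth from stereo: Z = f * b / d, disparity measured as 10 px +/- 0.5 px.
f_times_b = Interval(0.3 * 700, 0.3 * 700)   # baseline [m] * focal length [px]
d = Interval(9.5, 10.5)
Z = f_times_b / d
assert Z.lo <= 21.0 <= Z.hi                  # nominal depth 210 / 10 = 21 m
```

The resulting depth interval [20.0, 22.1] makes explicit how a half-pixel disparity error widens with distance, which is the kind of analysis the paper applies to stereo estimates.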

1 citation

References
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
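The RANSAC paradigm summarized above is easy to sketch for line fitting: repeatedly fit a model to a minimal random sample, count inliers, and refit on the best consensus set. A minimal Python illustration (parameters and data are mine, not the paper's):

```python
import numpy as np

def ransac_line(pts, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    """Fit y = a*x + b to 2-D points by random sample consensus."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)  # minimal sample
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the largest consensus set found.
    return np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)

x = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([x, 2 * x + 1])   # true line: y = 2x + 1
pts[::10, 1] += 5                       # 10% gross outliers
a, b = ransac_line(pts)
assert abs(a - 2) < 0.1 and abs(b - 1) < 0.1
```

Unlike an ordinary least-squares fit, which the 10% gross outliers would pull far off the true line, the consensus step discards them before the final refit.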

23,396 citations

Proceedings ArticleDOI
01 Dec 2001
TL;DR: A machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates and the introduction of a new image representation called the "integral image" which allows the features used by the detector to be computed very quickly.
Abstract: This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.
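The "integral image" contribution above can be sketched in a few lines: after one cumulative-sum pass over the image, the sum of pixels inside any rectangle (the building block of the detector's rectangle features) takes only four table lookups. A minimal numpy illustration (not the authors' code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four lookups, O(1) per rectangle."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()   # 5 + 6 + 9 + 10
assert box_sum(ii, 0, 0, 4, 4) == img.sum()
```

The two-rectangle and three-rectangle features in the cascade are then just differences of such box sums, which is what makes per-window evaluation fast enough for real-time detection.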

18,620 citations

Journal ArticleDOI
ZhenQiu Zhang
TL;DR: A flexible technique to easily calibrate a camera that only requires the camera to observe a planar pattern shown at a few (at least two) different orientations is proposed and advances 3D computer vision one more step from laboratory environments to real world use.
Abstract: We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
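In this calibration technique, each view of the planar pattern induces a homography between the pattern plane and the image. A minimal sketch of estimating such a homography by the direct linear transform (illustrative only; the point coordinates are mine, and Zhang's method additionally recovers the intrinsics from several homographies, models radial distortion, and refines everything by maximum likelihood):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: 3x3 H mapping src -> dst (each N x 2, N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Two equations per correspondence from u*(h3.p) = h1.p, v*(h3.p) = h2.p
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)   # pattern corners
dst = np.array([[0, 0], [2, 0.1], [2.1, 1.9], [-0.1, 2]], float)
H = homography_dlt(src, dst)
proj = (H @ np.column_stack([src, np.ones(4)]).T).T
proj = proj[:, :2] / proj[:, 2:]
assert np.allclose(proj, dst, atol=1e-6)
```

With the minimum of four non-collinear correspondences the fit is exact; with more points per view, the same SVD gives the least-squares solution that a nonlinear refinement then polishes.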

13,200 citations

Journal ArticleDOI
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
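The loss underlying SV regression is the epsilon-insensitive loss: zero inside a tube of half-width epsilon around the prediction, growing linearly outside it, which is what produces sparse solutions in the support vectors. A minimal numpy illustration (the numeric values are mine):

```python
import numpy as np

def eps_insensitive_loss(y, f, eps=0.1):
    """SV regression loss: max(0, |y - f| - eps)."""
    return np.maximum(0.0, np.abs(y - f) - eps)

y = np.array([1.0, 2.0, 3.0])     # targets
f = np.array([1.05, 2.5, 2.0])    # predictions
# First residual (0.05) falls inside the tube and costs nothing.
assert np.allclose(eps_insensitive_loss(y, f), [0.0, 0.4, 0.9])
```

Points whose residuals stay inside the tube contribute neither to the loss nor to the expansion of the solution, which is why only the remaining points become support vectors.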

10,696 citations