Hybrid Visual Servoing With Hierarchical Task Composition for Aerial Manipulation
Summary
I. INTRODUCTION
- The modeling and control of an unmanned aerial vehicle (UAV) capable of interacting with the environment to accomplish simple robotic-manipulation tasks have been proposed in [2], based on a force-position control law designed through a feedback-linearizing technique.
- This work was supported by the ARCAS and AEROARMS large-scale integrating projects, funded by the European Community under Grants FP7-ICT-287617 and H2020-ICT-644271, respectively. The authors are solely responsible for its content.
- However, in these papers redundancy is employed in a rigid way, and the interaction between dependent tasks is not considered.
II. REFERENCE FRAMES AND CAMERA MODEL
- Let ob ∈ R3 and Rb ∈ SO(3) be the position and rotation matrix, respectively, of the body reference frame {B : Ob − xbybzb}, which is fixed with the UAV base, with respect to the inertial reference frame {I : O−xyz} (see Fig. 1).
- Therefore, the roll and pitch rotations are constrained by imposing a desired linear motion.
- Let {E : Oe − xeyeze} be the reference frame fixed to the arm end-effector (e.g. the gripper).
- Notice that, by considering the DoFs of the vehicle base, even if a VToL UAV is an underactuated system, the addition of enough DoFs of the robot arm can generate a task-redundant system (n = 4 + ν total DoFs), e.g. with respect to the positioning of the gripper.
- Finally, the vehicle base is endowed with a downward looking camera.
III. OBJECT POSE ESTIMATION
- The authors assume that m known visual markers are attached to the target object.
- By using a visual tracker [12], [13], the pose of any visible marker can be measured with respect to the camera.
- However, if more markers are simultaneously visible, a more robust and accurate solution can be achieved by combining the available information.
- Each marker contour is divided into several points (e.g. the corners and the middle points of the edges).
- A RANSAC outlier rejection mechanism is integrated to remove point correspondences resulting from imaging artifacts that might be inconsistent with the computed transformation [15].
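The paper applies RANSAC within a PnP estimation over the pooled marker contour points. As a self-contained illustration of the same consensus-based rejection step, the sketch below fits a rigid transform between known model points and (possibly corrupted) measured 3D points; function names, thresholds, and the 3D-3D setting are ours, not the authors'.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (rows are 3D points), via the standard SVD-based Kabsch method."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_fit(P, Q, iters=200, tol=0.02, rng=np.random.default_rng(0)):
    """Fit a rigid transform while rejecting correspondences that are
    inconsistent with the consensus transform (illustrative thresholds)."""
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)  # minimal sample
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set only.
    return kabsch(P[best_inliers], Q[best_inliers]), best_inliers
```

A correspondence corrupted by an imaging artifact simply never joins the consensus set, so it cannot bias the refitted transform.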
IV. DYNAMIC TASK PRIORITY CONTROL
- The kinematic redundancy in the system allows control of the UAM end-effector while simultaneously achieving a number of secondary tasks.
- Indeed, redundancy means that the same gripper pose can be reached with several, even infinite, system configurations.
- Notice that the angular velocity ̟ can be measured with a standard onboard inertial measurement unit (IMU) under the assumption of a classical time-scale separation between the attitude controller (faster control loop, e.g. up to 1 kHz) and the velocity controller (slower control, in their case at camera frame rate of 25 Hz).
- The hierarchical formulation (13) guarantees that the execution of all the higher-priority tasks from 0 (main task) to i − 1 will not be affected by the ith subtask and by the variation of the uncontrolled state variables.
- Without loss of generality, the authors suppose that the transition phase starts at t = 0, i.e. the rth task stack has to be deactivated and substituted by the new one (r + 1)th.
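The hierarchical composition can be sketched in a few lines: each subtask Jacobian is projected into the null space of the augmented Jacobian of all higher-priority tasks, so lower-priority commands cannot perturb higher-priority ones. This is our own illustrative implementation of that projection structure; the compensation term for the uncontrolled variables ϖ in (13) is omitted here.

```python
import numpy as np

def task_priority_velocity(tasks, n):
    """Hierarchical velocity command for prioritized subtasks.
    `tasks` is a priority-ordered list of (J_i, gain_i, err_i);
    `n` is the number of controllable DoFs."""
    v = np.zeros(n)
    rows = []  # stacked Jacobians of all higher-priority tasks
    for J, gain, err in tasks:
        if rows:
            J_aug = np.vstack(rows)
            # Projector onto the null space of the augmented Jacobian.
            N = np.eye(n) - np.linalg.pinv(J_aug) @ J_aug
        else:
            N = np.eye(n)  # top-priority task acts unprojected
        v += np.linalg.pinv(J @ N) @ (gain * err)
        rows.append(J)
    return v
```

Because the pseudoinverse of the projected Jacobian has its range inside the null space of the higher-priority stack, the top task's velocity is exactly the one it would get alone.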
V. HYBRID VISUAL SERVOING AND SYSTEM BEHAVIOR CONTROL FOR AERIAL MANIPULATION TASKS
- In this section, several elementary tasks useful for composing an aerial manipulation (grasping/plugging) mission by exploiting visual measurements are proposed.
- Besides tasks controlling the gripper pose with respect to the observed objects, further tasks are proposed to guarantee the camera FoV constraint, to minimize the effect of arm motion on the vehicle's positional stability, and to address the arm's mechanical joint limits.
- These tasks are not all orthogonal and/or independent but they are essential for the specific mission purposes.
- The priority of the proposed tasks can be arranged on the basis of the desired behaviour, even if some general constraints have to be considered.
- On the other hand, the constraint on the joint limits is very invasive because it affects all the arm joints.
A. Gripper position and orientation
- Let o∗e be the desired position of the gripper, e.g. suitable to perform the object grasping in the considered case study.
- This value can be computed by combining the pose measurement of the target object provided by the visual system, with the desired relative displacement of the gripper with respect to the target object in the grasping configuration.
- Moreover, with the proposed choice of σp, only one DoF is required to execute this subtask, because only the norm of ep will be nullified, i.e. the motion of the gripper during the transient is constrained to a sphere of radius ‖ep‖.
- The corresponding task Jacobian Jp becomes singular when ep → 0.
- Notice that these two subtasks are orthogonal.
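A minimal sketch of the scalar position subtask, assuming the Jacobian structure of (19); variable names are ours. Since the Jacobian vanishes as ep → 0 (the task singularity noted above), the function also returns a flag that lets the caller deactivate or switch the subtask near convergence.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) such that S(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def position_task(o_e_des, o_e, R_b, o_be, Jq_P, eps=1e-3):
    """Scalar subtask sigma_p = ||e_p||^2 and its Jacobian over the
    controllable DoFs (base position, yaw, arm joints)."""
    e_p = o_e_des - o_e
    sigma_p = e_p @ e_p
    iz = np.array([0.0, 0.0, 1.0])
    J_p = 2.0 * e_p @ np.hstack([np.eye(3),
                                 (skew(R_b @ o_be) @ iz)[:, None],
                                 Jq_P])
    near_singular = np.linalg.norm(e_p) < eps
    return sigma_p, J_p, near_singular
```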
B. Camera field of view
- Without loss of generality, any point of the observed target can be chosen to be controlled in the image.
- (31) Notice that only one DoF is required to accomplish this subtask.
- In fact, the distance of obc with respect to the desired optical ray corresponding to s∗c is controlled.
- The corresponding task Jacobian matrix Jc is singular when ec → 0, but since it is not strictly required to accomplish the main mission that the target object is exactly in the desired position of the image (e.g. the center), this subtask can be activated only when σc exceeds a safety threshold.
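The threshold-based activation can be sketched as follows; the hysteresis band is our addition to avoid chattering when σc hovers around the boundary, and both thresholds are illustrative values.

```python
class FovGuard:
    """Activate the FoV-centering subtask only when the image error
    sigma_c exceeds a safety threshold, with hysteresis: switch on
    above `on`, switch off only once back below `off`."""

    def __init__(self, on=0.3, off=0.1):
        assert off < on
        self.on, self.off = on, off
        self.active = False

    def update(self, sigma_c):
        if not self.active and sigma_c > self.on:
            self.active = True
        elif self.active and sigma_c < self.off:
            self.active = False
        return self.active
```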
C. Center of gravity
- The weight of the robot arm can generate an undesired torque on the vehicle base depending on the configuration.
- In particular, the arm motion statically perturbs the system attitude and position when the center of gravity (CoG) of the arm pg is not aligned with the CoG of the vehicle base along the gravitational line of action ιz .
- Without loss of generality, the CoG of the vehicle base is assumed to be in Ob.
- The corresponding task Jacobian matrix can be computed from the Jacobian of \(p^b_g\) represented in the body frame.
- In fact, \(p^b_g\) is a function of the arm joint configuration only, defined as \( p^b_g = \frac{1}{m} \sum_{i=1}^{\nu} m_i\, p^b_{g_i} \), (33) where \(m_i\) and \(p^b_{g_i}\) are the mass and the CoG position of the \(i\)th arm link, respectively, and \(m = \sum_{i=1}^{\nu} m_i\).
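The mass-weighted CoG in (33) is a one-liner in practice; the sketch below computes it from the per-link masses and CoG positions expressed in the body frame.

```python
import numpy as np

def arm_cog(masses, cogs):
    """Mass-weighted CoG of the arm in the body frame, as in (33).
    `masses` has shape (nu,), `cogs` has shape (nu, 3)."""
    masses = np.asarray(masses, dtype=float)
    cogs = np.asarray(cogs, dtype=float)
    return masses @ cogs / masses.sum()
```

Keeping this point on the gravitational line through the vehicle CoG removes the static torque the arm would otherwise exert on the base.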
D. Joint-limits avoidance constraint
- A possible solution to avoid mechanical joint limits is to make the central position of the joint ranges attractive.
- The desired task variable is \(\sigma^*_l = 0\) (i.e. \(\tilde{\sigma}_l = -\sigma_l\)), and the corresponding task Jacobian matrix is \( J_l = \begin{bmatrix} O_{1\times 4} & -2\, e_q^\top \Lambda_L \end{bmatrix} \). (42) Notice that the uncontrolled state variables do not affect this subtask; hence the corresponding Jacobian \(\bar{J}_l\) is the null matrix.
- Due to the higher-priority tasks, some joints could nevertheless reach their limits.
- If mechanically viable, the system will reconfigure its internal DoFs to achieve all the remaining subtasks until the dangerous condition disappears and the original priority is restored, albeit starting from a different system configuration.
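A sketch of the joint-limit subtask that attracts each joint toward the centre of its range. The per-joint weights and sign conventions here are our own normalisation (the paper's \(\Lambda_L\) weighting may differ only in notation); the base columns of the Jacobian are zero, as in (42).

```python
import numpy as np

def joint_limit_task(q, q_min, q_max):
    """Scalar subtask sigma_l = sum_i lambda_i * e_i^2 with
    e = q - q_mid, pulling joints toward mid-range. Returns the
    task value and its Jacobian over [base(4) | arm(nu)] DoFs."""
    q_mid = 0.5 * (q_min + q_max)
    lam = 1.0 / (q_max - q_min) ** 2   # normalise by range width
    e = q - q_mid
    sigma_l = np.sum(lam * e ** 2)
    J_arm = 2.0 * lam * e              # gradient w.r.t. arm joints
    J_l = np.concatenate([np.zeros(4), J_arm])  # base DoFs unused
    return sigma_l, J_l
```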
VI. SIMULATION RESULTS
- The proposed approach has been tested in the simulator developed in the ARCAS project (www.arcas-project.eu), which is based on the GAZEBO physics engine (http://gazebosim.org) (see Fig. 2(f)).
- The camera has been positioned 50 cm ahead of the vehicle base, with an inclination of 30 deg with respect to the vertical axis, so as to observe the grasping manoeuvre without self-occlusion.
- Figure 2(a) shows the time history of the position error in norm during the task execution for each case study.
- Notice how the behavior of both the position and orientation errors is similar in all the cases, coherently with the hierarchical task composition adopted in the proposed formulation, i.e. the activation of subtasks cannot significantly affect the behavior of the higher-priority tasks.
- The achieved results show that, except in case 1), i.e. whenever this subtask is activated, the FoV error is reduced without affecting the movement of the gripper.
VII. EXPERIMENTAL RESULTS
- The UAM employed for the experimental tests has been developed in the ARCAS project.
- The employed autopilot has been developed by CATEC (www.catec.aero) and allows also the control of the robot arm.
- Then, a pose average of the difference between the camera and the optical frames is computed with respect to the object.
- As for the simulated case studies, the mission has been decomposed into two steps: the approaching phase, to move the bar over the plugging base at a distance of 5 cm, and the final plugging phase.
- The achieved results are shown in Fig. 6. Figures 6(a) and 6(b) show the time history of the norm of the position and orientation errors, respectively.
VIII. CONCLUSIONS
- A hybrid visual servoing with a hierarchical task-priority control framework suitable for UAM has been presented in this work.
- The underactuation of the VToL vehicle has been explicitly taken into account in a new general formulation that also guarantees the decoupling of independent tasks.
- Several subtasks have been proposed to reduce the error of the desired gripper position and orientation, to maintain the target in the camera FoV, to vertically align the arm CoG with the quadrotor gravitational vector, and to avoid arm joint limits.
- Simulations and experiments validated the proposed solution.
Frequently Asked Questions (16)
Q2. What are the tasks that are proposed for the composition of an aerial manipulation task?
Besides tasks controlling the gripper pose with respect to the observed objects, further tasks are proposed to guarantee the camera FoV constraint, to minimize the effect of arm motion on the vehicle's positional stability, and to address the arm's mechanical joint limits.
Q3. What is the effect of the approaching phase on the position error?
In particular, during the approaching phase the position error decreases almost linearly due to the saturation of the maximum vehicle cruise velocity (10 cm/s in these case studies).
Q4. Why is the CoG subtask never activated?
It is never activated because the higher-priority FoV subtask determines arm configurations that are already compatible with the CoG subtask.
Q5. What is the purpose of the RANSAC outlier rejection mechanism?
A RANSAC outlier rejection mechanism is integrated to remove point correspondences resulting from imaging artifacts that might be inconsistent with the computed transformation [15].
Q6. What is the corresponding task Jacobian matrix?
The corresponding task Jacobian matrix is \( J_p = 2\, e_p^\top \begin{bmatrix} I_3 & S(R_b o^b_e)\,\imath_z & J_{q,P} \end{bmatrix} \), (19) where \(S(\cdot)\) is the skew-symmetric matrix representing the vector product, \(\imath_z = [\,0\ 0\ 1\,]^\top\), and \( J_q = \begin{bmatrix} J_{q,P} \\ J_{q,O} \end{bmatrix} = \begin{bmatrix} R_b & O_3 \\ O_3 & R_b \end{bmatrix} J^b_q \), (20) with \(J^b_q(q)\) the arm Jacobian matrix with respect to B, and \(J_{q,P}\), \(J_{q,O}\) its \((3 \times \nu)\) position and orientation blocks.
Q7. What is the general velocity command for η prioritized subtasks?
By generalizing (8) to the case of η prioritized subtasks, the following general velocity command can be formulated: \( \upsilon^* = J_0^\dagger \Lambda_0 \tilde{\sigma}_0 + \sum_{i=1}^{\eta} \big(J_i N_{0|\dots|i-1}\big)^\dagger \Lambda_i \tilde{\sigma}_i - \bar{J}_{0|\dots|\eta}\,\varpi \), (13) with the recursively defined compensating matrix \( \bar{J}_{0|\dots|\eta} = \big(J_\eta N_{0|\dots|\eta-1}\big)^\dagger \bar{J}_\eta + \big(I_n - (J_\eta N_{0|\dots|\eta-1})^\dagger J_\eta\big)\,\bar{J}_{0|\dots|\eta-1} \), (14) where \(N_{0|\dots|i}\) is the projector onto the null space of the augmented Jacobian \(J_{0|\dots|i}\) of the \(i\)th subtask, with \(i = 0,\dots,\eta-1\), respectively defined as \( J_{0|\dots|i} = [\,J_0^\top \cdots J_i^\top\,]^\top \) (15) and \( N_{0|\dots|i} = I_n - J_{0|\dots|i}^\dagger J_{0|\dots|i} \). (16) The previous stability analysis can be straightforwardly extended to the general case of η subtasks.
Q8. What is the vanishing term for the smoothing of the task stack?
Notice that the smoothing term \( e^{-t/\tau}\big(\upsilon^*_r(0) - \upsilon^*_{r+1}(0)\big) \) is bounded and exponentially vanishing; hence it will not affect the stability of the proposed control law.
Q9. What is the corresponding task Jacobian matrix of the uncontrolled state variables?
The corresponding task Jacobian matrix of the uncontrolled state variables is \( \bar{J}_p = 2\, e_p^\top S(R_b o^b_e) \begin{bmatrix} \imath_x & \imath_y \end{bmatrix} \), (21) where \(\imath_x = [\,1\ 0\ 0\,]^\top\) and \(\imath_y = [\,0\ 1\ 0\,]^\top\).
Q10. What is the simplest way to describe the position of the robot arm?
Notice that, by considering the DoFs of the vehicle base, even if a VToL UAV is an underactuated system, the addition of enough DoFs of the robot arm can generate a task-redundant system (n = 4 + ν total DoFs), e.g. with respect to the positioning of the gripper.
Q11. What is the position of the image features centroid?
By taking into account the camera pose with respect to B, the desired position of the image-features centroid has been chosen as \( s^*_c = [\,0, -0.1\,]^\top \).
Q12. What is the kinematic redundancy in the system?
The kinematic redundancy in the system allows control of the UAM end-effector by simultaneously achieving a number of secondary tasks.
Q13. How can the authors obtain the frame transformation between the camera and the object?
By knowing the pose of the camera attached to the quadrotor body frame, the authors can trivially obtain the frame transformation between the camera and the object.
Q14. How can the pose of a marker be reconstructed?
The pose of the target object can be reconstructed from the measurement of each visible marker as follows: \( o^b_t = o^b_j - R^b_j (R^t_j)^\top o^t_j \), (2) \( R^b_t = R^b_j (R^t_j)^\top \).
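The reconstruction in (2) is a plain composition of rigid transforms; a minimal sketch, with our own function and variable names:

```python
import numpy as np

def target_pose_from_marker(o_bj, R_bj, o_tj, R_tj):
    """Recover the target pose from one visible marker, as in (2):
    (o_bj, R_bj) is the measured marker pose w.r.t. the camera/body
    frame; (o_tj, R_tj) is the marker's known pose w.r.t. the target."""
    R_bt = R_bj @ R_tj.T          # target orientation
    o_bt = o_bj - R_bt @ o_tj     # target origin
    return o_bt, R_bt
```

Averaging the estimates from all visible markers then yields the more robust combined solution described above.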
Q15. How much distance is the safety distance between the two tasks?
Even if this is the lower priority task, a safety distance of more than 20% of the joint ranges, in the worst case, is always preserved.
Q16. What is the priority of the task in the stack?
As described before, it is possible to increase the priority of this task in the stack when a joint limit is excessively close, so as to guarantee mechanical safety at the expense of other tasks.