Author

Youcef Mezouar

Bio: Youcef Mezouar is an academic researcher at the University of Auvergne. He has contributed to research on visual servoing and mobile robots. He has an h-index of 25 and has co-authored 188 publications receiving 2,799 citations. Previous affiliations of Youcef Mezouar include Institut Français and the University of Zaragoza.


Papers
Journal ArticleDOI
10 Dec 2002
TL;DR: This paper proposes a new approach that couples path planning in image space with image-based control to overcome the difficulties of vision feedback control when the initial and desired robot positions are distant, while ensuring robustness with respect to modeling errors.
Abstract: Vision feedback control loop techniques are efficient for a large class of applications, but they run into difficulties when the initial and desired robot positions are distant. Classical approaches are based on regulating to zero an error function computed from the current measurement and a constant desired one. With such an approach, it is not obvious how to introduce constraints on the realized trajectories or to ensure convergence for all initial configurations. In this paper, we propose a new approach that resolves these difficulties by coupling path planning in image space with image-based control. Constraints, such as keeping the object in the camera field of view or keeping the robot away from its joint limits, can be taken into account at the task planning level. Furthermore, with this approach, the current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. The proposed method is based on the potential field approach and applies whether or not the object's shape and dimensions are known, and whether the camera calibration parameters are well or badly estimated. Finally, real-time experimental results using an eye-in-hand robotic system confirm the validity of our approach.
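The potential field approach mentioned in the abstract combines an attractive term pulling toward the goal with repulsive terms pushing away from forbidden regions, and descends the resulting gradient. A minimal configuration-space sketch using the standard attractive/repulsive forms (function names, gains and ranges are illustrative; the paper itself plans in image space):

```python
import numpy as np

def attractive_grad(q, q_goal, k_att=1.0):
    # Gradient of U_att = (k_att / 2) * ||q - q_goal||^2: pulls toward the goal.
    return k_att * (q - q_goal)

def repulsive_grad(q, q_obs, rho0=0.2, k_rep=0.005):
    # Gradient of the classical repulsive potential
    # U_rep = (k_rep / 2) * (1/d - 1/rho0)^2, active only within range rho0.
    d = np.linalg.norm(q - q_obs)
    if d >= rho0 or d == 0.0:
        return np.zeros_like(q)
    return -k_rep * (1.0 / d - 1.0 / rho0) * (q - q_obs) / d**3

def plan(q_start, q_goal, obstacles, step=0.05, iters=500):
    # Gradient descent on the total potential; returns the generated path.
    q = np.asarray(q_start, float).copy()
    q_goal = np.asarray(q_goal, float)
    path = [q.copy()]
    for _ in range(iters):
        g = attractive_grad(q, q_goal)
        for q_obs in obstacles:
            g = g + repulsive_grad(q, np.asarray(q_obs, float))
        q = q - step * g
        path.append(q.copy())
        if np.linalg.norm(q - q_goal) < 1e-2:
            break
    return path
```

As with all potential field planners, local minima can trap the descent; the paper's contribution lies in applying such fields to image-space trajectories so that an image-based controller can track them robustly.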

401 citations

Journal ArticleDOI
TL;DR: A survey of recent work on robot manipulation and sensing of deformable objects, a field with relevant applications in industries as diverse as medicine, food handling, manufacturing, and domestic chores; the reviewed approaches are classified into four categories based on the type of object they manipulate.
Abstract: We present a survey of recent work on robot manipulation and sensing of deformable objects, a field with relevant applications in diverse industries such as medicine (e.g., surgical assistance), food handling, manufacturing, and domestic chores (e.g., folding clothes). We classify the reviewed approaches into four categories based on the type of object they manipulate. Within this object classification, we further divide the approaches based on the particular task they perform on the deformable object. Finally, we conclude the survey with a discussion of the current state of the art and propose future directions within the proposed classification.

357 citations

Proceedings ArticleDOI
10 Dec 2007
TL;DR: It is proved that the unified model for catadioptric systems can model fisheye cameras, with distortions directly included in its parameters, and an application to the visual servoing of a mobile robot is presented and experimentally validated.
Abstract: Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera, which provides a full image, unlike catadioptric visual sensors, and does not increase the size or fragility of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras, with distortions directly included in its parameters. This unified projection model consists of a projection onto a virtual unit sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results help confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and experimentally validated.
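The unified projection model described here, a projection onto a unit sphere followed by a perspective projection shifted by a parameter ξ, can be sketched in a few lines (function name and test values are illustrative; ξ = 0 reduces to a pure pinhole camera):

```python
import numpy as np

def unified_project(X, xi, K):
    """Unified (sphere) camera model.

    X  : 3D point in the camera/mirror frame.
    xi : mirror parameter (0 = perspective, 1 = parabolic catadioptric;
         fisheye lenses are modeled by intermediate or larger values).
    K  : 3x3 intrinsic matrix.
    """
    Xs = X / np.linalg.norm(X)                       # 1) project onto the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi), 1.0])  # 2) perspective projection shifted by xi
    p = K @ m                                        # 3) apply intrinsics
    return p[:2] / p[2]
```

With `xi=0` and `K` the identity, `unified_project(np.array([1.0, 2.0, 4.0]), 0.0, np.eye(3))` gives the pinhole image (0.25, 0.5), since x/(z+0) on the sphere equals X/Z for the original point.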

117 citations

Journal ArticleDOI
TL;DR: This paper presents a vision-based navigation strategy for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using a single embedded camera observing natural landmarks; the approach is demonstrated on an X4-flyer equipped with a fisheye camera.

111 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of finding realistic image-space trajectories that correspond to optimal 3D trajectories and experimental results obtained on a six-degrees-of-freedom eye-in-hand robotic system confirm the validity of the proposed approach.
Abstract: Image-based servoing is a local control solution. Thanks to the feedback loop closed in the image space, local convergence and stability in the presence of modeling errors and noise perturbations are ensured when the error is small. The principal deficiency of this approach is that the induced (3D) trajectories are not optimal, and sometimes, especially when the displacement to realize is large, these trajectories are not physically valid, leading to failure of the servoing process. In this paper, we address the problem of finding realistic image-space trajectories that correspond to optimal 3D trajectories. The camera calibration and the model of the observed scene are assumed unknown. First, a smooth closed-form collineation path between given start and end points is obtained. This path is generated so as to correspond to an optimal camera path. The trajectories of the image features are then derived and efficiently tracked using a purely image-based control. A second path planning scheme, based on the...

92 citations


Cited by
Posted Content
TL;DR: This paper proposes gradient descent algorithms for a class of utility functions that encode optimal coverage and sensing policies; the resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.
Abstract: This paper presents control and coordination algorithms for groups of vehicles. The focus is on autonomous vehicle networks performing distributed sensing tasks where each vehicle plays the role of a mobile tunable sensor. The paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies. The resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.
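The coverage objective described in this abstract is commonly optimized with Lloyd-style updates: partition the environment into Voronoi cells, then move each vehicle toward the centroid of its cell, which is a descent step on the coverage cost. A minimal discretized sketch (names and gain are illustrative, not the paper's algorithm):

```python
import numpy as np

def lloyd_step(agents, samples, gain=0.5):
    # One descent step on the coverage cost over a sampled environment:
    # assign each sample to its nearest agent (a discrete Voronoi partition),
    # then move every agent part-way toward the centroid of its cell.
    agents = np.asarray(agents, float)
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new = agents.copy()
    for i in range(len(agents)):
        cell = samples[owner == i]
        if len(cell):
            new[i] += gain * (cell.mean(axis=0) - agents[i])
    return new
```

Each step is distributed in the sense that an agent only needs the samples it owns; iterating drives the agents toward a centroidal Voronoi configuration, a local minimum of the coverage cost.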

2,198 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning with shared information and their applications in computer vision and multimedia analysis; papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes have abundant training data while many classes have only a small amount. How to use frequent classes to help learn rare classes, for which training data is harder to collect, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision and multimedia analysis. Different levels of components can be shared during concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters and training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as different strategies for sharing information. These methods are very effective at solving real-world large-scale problems. This special issue aims at gathering recent advances in learning with shared information and their applications in computer vision and multimedia analysis. Both state-of-the-art work and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for a specific computer vision or multimedia problem
• Survey papers on learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors with an abstract before the submission deadline to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: A survey of the visual place recognition research landscape is presented, introducing the concepts behind place recognition, how a “place” is defined in a robotics context, and the major components of a place recognition system.
Abstract: Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines—particularly recognition in computer vision and animal navigation in neuroscience—have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition—the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.

933 citations

Journal ArticleDOI
TL;DR: This tutorial considers only velocity controllers, which are adequate for most classical robot arms, and geometric features coming from a classical perspective camera.
Abstract: This article is the second of a two-part tutorial on visual servo control. In this tutorial, we have considered only velocity controllers. This is adequate for most classical robot arms; however, the dynamics of the robot must of course be taken into account for high-speed tasks, or when dealing with mobile nonholonomic or underactuated robots. As for the sensor, geometric features coming from a classical perspective camera are considered. Features related to image motion, or coming from other vision sensors, require revisiting the modeling issues in order to select adequate visual features. Finally, fusing visual features with data coming from other sensors at the level of the control scheme will make it possible to address new research topics.
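The velocity controllers discussed in this tutorial typically take the classical form v = -λ L̂⁺ e, where e is the visual feature error and L̂⁺ the pseudo-inverse of an estimate of the interaction matrix. A minimal sketch (function name and gain are illustrative):

```python
import numpy as np

def ibvs_velocity(L, e, lam=0.5):
    # Classical image-based visual servoing law: v = -lam * pinv(L) @ e.
    # L   : interaction (image Jacobian) matrix, k features x 6 DOF
    # e   : feature error s - s*, length k
    # lam : positive gain controlling the exponential decay of the error
    return -lam * np.linalg.pinv(L) @ e
```

Under a perfect model this yields exponential decay of the error, ė = -λe; the robustness concerns discussed above arise precisely because L must be estimated.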

894 citations

Journal ArticleDOI
TL;DR: The simulation results show that the proposed path-planning approach is effective for many driving scenarios, and the MMPC-based path-tracking controller provides dynamic tracking performance and maintains good maneuverability.
Abstract: A path planning and tracking framework is presented to maintain a collision-free path for autonomous vehicles. For path planning, a 3-D virtual dangerous potential field is constructed as a superposition of trigonometric functions of the road and exponential functions of the obstacles; it can generate a desired trajectory for collision avoidance when a collision with an obstacle is likely. To track the planned trajectory during collision avoidance maneuvers, the path-tracking controller formulates the tracking task as a multiconstrained model predictive control (MMPC) problem and calculates the front steering angle to prevent the vehicle from colliding with a moving obstacle vehicle. Simulink and CarSim simulations are conducted in scenarios with moving obstacles. The simulation results show that the proposed path-planning approach is effective for many driving scenarios, and that the MMPC-based path-tracking controller provides dynamic tracking performance and maintains good maneuverability.
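The "dangerous potential field" construction described above, a trigonometric road term plus exponential obstacle bumps, can be illustrated with a toy 2-D field; the shapes, coefficients, and function names below are illustrative, not the paper's actual formulation:

```python
import numpy as np

def danger_field(x, y, obstacles, lane_width=3.5, a=0.5, b=2.0, sx=5.0, sy=1.0):
    # Road term: a trigonometric "trough" that is lowest at the lane centre
    # (y = 0) and rises toward the road boundaries.
    u = a * (1.0 - np.cos(np.pi * y / lane_width))
    # Obstacle term: an anisotropic exponential bump around each obstacle,
    # elongated along the driving direction x.
    for ox, oy in obstacles:
        u += b * np.exp(-((x - ox) ** 2 / sx ** 2 + (y - oy) ** 2 / sy ** 2))
    return u
```

A planner can then shape the desired trajectory to descend this field, swerving laterally when an obstacle bump blocks the lane ahead.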

675 citations