Author

M. Cristiano

Bio: M. Cristiano is an academic researcher. The author has contributed to research in topics: Visual servoing & Frame grabber. The author has an h-index of 1, has co-authored 1 publication, and has received 23 citations.

Papers
Proceedings ArticleDOI
23 Jun 2003
TL;DR: This paper presents an experiment on the use of a cooperative camera system for robotic tracking of an object moving on a plane. The controller has a variable structure that can switch from PID to PD and back according to the dynamics of the target.
Abstract: This paper presents an experiment on the use of a cooperative camera system for robotic tracking of an object moving on a plane. The image of the object is acquired from two cameras, the first one (camera in hand) mounted on the end-effector of a 6-DOF robot arm (Puma 260) and the second one fixed in a certain location (fixed camera). The images are processed by a frame grabber on a standard PC, and the controller has a variable structure that can switch from PID to PD and back according to the dynamics of the target.
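As an illustration of the variable-structure idea, the sketch below switches a tracking controller between PID and PD based on the target's estimated speed. The speed threshold, the gains, and the switching rule itself are illustrative assumptions, not values or logic taken from the paper.

class VariableStructurePIDPD:
    """Toy variable-structure controller: PID for slow targets, PD for fast ones."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, speed_threshold=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.speed_threshold = speed_threshold  # assumed switching criterion
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, target_speed, dt):
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        if abs(target_speed) > self.speed_threshold:
            # Fast target: drop the integral term (PD mode) to avoid windup lag.
            self.integral = 0.0
            return self.kp * error + self.kd * derivative
        # Slow or steady target: full PID for zero steady-state error.
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral + self.kd * derivative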

26 citations


Cited by
Journal ArticleDOI
TL;DR: This paper deals with the problem of position-based visual servoing in a multiarm robotic cell equipped with a hybrid eye-in-hand/eye-to-hand multicamera system. The proposed approach is based on the real-time estimation of the pose of a target object using an extended Kalman filter.
Abstract: This paper deals with the problem of position-based visual servoing in a multiarm robotic cell equipped with a hybrid eye-in-hand/eye-to-hand multicamera system. The proposed approach is based on the real-time estimation of the pose of a target object using an extended Kalman filter. The data provided by all the cameras are selected by a suitable algorithm on the basis of the prediction of the object's self-occlusions, as well as of the mutual occlusions caused by the robot links and tools. Only an optimal subset of image features is considered for feature extraction, thus ensuring high estimation accuracy with a computational cost independent of the number of cameras. A salient feature of the paper is the implementation of the proposed approach in a robotic cell composed of two industrial robot manipulators. Two different case studies are presented to test the effectiveness of the hybrid camera configuration and the robustness of the visual servoing algorithm with respect to the occurrence of occlusions.
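A minimal, self-contained sketch of the estimation loop described above: a Kalman filter tracks a planar object under a constant-velocity model, and at each step only the cameras whose view is predicted to be unobstructed contribute a correction. The linear toy model (under which the EKF reduces to a standard Kalman filter), the noise values, and the boolean visibility flag standing in for the paper's occlusion-prediction algorithm are all assumptions for illustration.

import numpy as np

class PlanarPoseKF:
    def __init__(self, dt=0.04):
        self.x = np.zeros(4)                               # [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                   # constant-velocity model
        self.Q = 1e-3 * np.eye(4)                          # process noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, z, R):
        H = np.hstack([np.eye(2), np.zeros((2, 2))])       # cameras measure position
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def fuse_visible_cameras(kf, measurements):
    """measurements: list of (z, R, visible); 'visible' stands in for the
    occlusion prediction that gates each camera's contribution."""
    kf.predict()
    for z, R, visible in measurements:
        if visible:
            kf.correct(z, R)
    return kf.x[:2]                                        # estimated position

kf = PlanarPoseKF()
fixed_cam = (np.array([1.00, 2.00]), 0.01 * np.eye(2), True)
hand_cam = (np.array([1.05, 2.05]), 0.02 * np.eye(2), False)  # predicted occluded
print(fuse_visible_cameras(kf, [fixed_cam, hand_cam]))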

167 citations

Proceedings ArticleDOI
12 Dec 2005
TL;DR: A position-based visual servoing algorithm using a hybrid eye-in-hand/eye-to-hand multi-camera configuration, based on an extended Kalman filter that exploits the data provided by all the cameras without "a priori" discrimination, allowing real-time object pose estimation.
Abstract: A position-based visual servoing algorithm using a hybrid eye-in-hand/eye-to-hand multi-camera configuration is presented in this paper. Based on an extended Kalman filter, this approach exploits the data provided by all the cameras without "a priori" discrimination, allowing real-time object pose estimation. A suitable algorithm is in charge of selecting an optimal subset of image features on the basis of the desired task and of the current configuration of the workspace. Only this subset is considered for feature extraction, thus ensuring a computational cost independent of the number of cameras. Experimental results are reported to demonstrate the feasibility and the effectiveness of the proposed technique.
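The sketch below illustrates how a fixed-size feature subset keeps the extraction cost independent of the number of cameras: candidate features from every camera are pooled, ranked, and truncated to k. Ranking by predicted measurement variance is an assumption for illustration; the paper's selection criterion (task- and workspace-dependent) is richer.

def select_feature_subset(candidates, k):
    """candidates: list of (camera_id, feature_id, predicted_variance).
    Downstream extraction touches at most k features, however many
    cameras contributed candidates."""
    ranked = sorted(candidates, key=lambda c: c[2])    # most certain first
    return ranked[:k]

candidates = [("eye_in_hand", 0, 0.8), ("eye_in_hand", 1, 0.3),
              ("eye_to_hand", 0, 0.5), ("eye_to_hand", 1, 1.2)]
print(select_feature_subset(candidates, k=2))
# [('eye_in_hand', 1, 0.3), ('eye_to_hand', 0, 0.5)]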

54 citations

Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper proposes an initialization step for a hybrid eye-in-hand/eye-to-hand grasping system and presents a method to automatically focus on the object of interest, tested and validated on a multi-view robotic system.
Abstract: A critical assumption of many multi-view control systems is the initial visibility of the regions of interest from all the views. An initialization step is proposed for a hybrid eye-in-hand/eye-to-hand grasping system to fulfil this requirement. In this paper, the object of interest is assumed to be within the eye-to-hand field of view, whereas it may not be within the eye-in-hand one. The object model is unknown and no database is used. The object lies in a complex scene with a cluttered background. A method to automatically focus on the object of interest is presented, tested, and validated on a multi-view robotic system.
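To make the initialization problem concrete, the sketch below computes the pan/tilt correction that points the eye-in-hand camera at an object localized by the eye-to-hand camera, so the object enters the in-hand field of view. It assumes the fixed camera supplies a coarse 3D fix; the pinhole-style geometry and function names are illustrative assumptions, not the paper's model-free focusing method.

import numpy as np

def gaze_correction(object_pos_world, cam_R, cam_t):
    """Pan/tilt (rad) that rotates the eye-in-hand camera, whose pose is
    (cam_R, cam_t) in the world frame, to place the object on its optical
    axis (camera looks along +z)."""
    p = cam_R.T @ (object_pos_world - cam_t)   # object in the camera frame
    pan = np.arctan2(p[0], p[2])               # rotation about the camera y-axis
    tilt = np.arctan2(p[1], p[2])              # rotation about the camera x-axis
    return pan, tilt

R = np.eye(3)                                  # camera aligned with the world frame
t = np.zeros(3)
print(gaze_correction(np.array([0.2, -0.1, 1.0]), R, t))   # small pan and tilt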

46 citations

Proceedings ArticleDOI
13 Jun 2007
TL;DR: This document proposes a solution to reduce the interaction between a user and a robotic arm equipped with two cameras, and provides a tool applicable to any kind of graspable object.
Abstract: Assistance to disabled people is still a domain in which much progress remains to be made. The more severe the handicap, the more complex the devices become, demanding greater effort to simplify the interaction between the user and these devices. In this document we propose a solution to reduce the interaction between a user and a robotic arm. The system is equipped with two cameras. One is fixed on the top of the wheelchair (eye-to-hand) and the other is mounted on the end-effector of the robotic arm (eye-in-hand). The two cameras cooperate to reduce the grasping task to "one click". The method is generic: it requires no markers on the object, no geometric model, and no database. It thus provides a tool applicable to any kind of graspable object. The paper first gives an overview of existing grasping tools for disabled people and then proposes a novel approach toward an intuitive human-machine interaction.
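The essence of the "one click" cooperation can be sketched as an image-space servo loop: the user clicks the object in the fixed camera's image, and the arm camera is driven until the clicked region sits at its image center. The proportional law, gain, and tolerance below are illustrative assumptions, not the paper's control scheme.

def one_click_centering(click_uv, image_center=(320, 240), gain=0.5, tol=2.0):
    """Iterate a proportional correction until the clicked point is centered;
    returns the number of servo iterations (toy stand-in for arm motion)."""
    u, v = click_uv
    steps = 0
    while max(abs(u - image_center[0]), abs(v - image_center[1])) > tol:
        u += gain * (image_center[0] - u)      # each step halves the pixel error
        v += gain * (image_center[1] - v)
        steps += 1
    return steps

print(one_click_centering((450, 120)))         # converges in a handful of steps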

45 citations

01 Jan 2007
TL;DR: Novel multi-focal vision-based and attention control strategies are proposed, analyzed, and compared to conventional strategies. They enhance the perceptual and control capabilities of robots by combining a large field of view, improved measurement accuracy and control performance, and a flexible, situation-dependent allocation of sensor resources.
Abstract: Machine vision using a combination of optical sensors with different measurement accuracies and fields of view, so-called multi-focal vision, is investigated on different abstraction levels: static, dynamic, and planning. The performance of multi-focal vision systems is quantified. Novel multi-focal vision-based and attention control strategies are proposed, analyzed, and compared to conventional strategies. These novel approaches enhance the perceptual and control capabilities of robots by combining a large field of view, improved measurement accuracy and control performance, and a flexible, situation-dependent allocation of sensor resources.
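One benefit the abstract names, combining a wide field of view with higher measurement accuracy, can be illustrated with a standard inverse-variance fusion of a coarse wide-angle fix and a precise telephoto fix. This fusion rule is a textbook assumption used for illustration, not a method claimed by the thesis.

def fuse(z_wide, var_wide, z_tele, var_tele):
    """Inverse-variance weighting: the fused estimate leans toward the
    more accurate (telephoto) measurement and its variance shrinks."""
    w_wide, w_tele = 1.0 / var_wide, 1.0 / var_tele
    z = (w_wide * z_wide + w_tele * z_tele) / (w_wide + w_tele)
    return z, 1.0 / (w_wide + w_tele)

# Wide camera: coarse fix over a large field; telephoto: precise local fix.
print(fuse(z_wide=1.05, var_wide=0.04, z_tele=1.002, var_tele=0.001))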

24 citations