Author

Anis Sahbani

Bio: Anis Sahbani is an academic researcher from the Centre national de la recherche scientifique. The author has contributed to research on grasping and motion planning, has an h-index of 16, and has co-authored 37 publications receiving 1,284 citations. Previous affiliations of Anis Sahbani include the Laboratory for Analysis and Architecture of Systems and the International Society for Intelligence Research.

Papers
Journal ArticleDOI
TL;DR: This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands, covering both analytical and empirical grasp synthesis approaches.

455 citations

Journal ArticleDOI
TL;DR: A general manipulation planning approach that models both the possible grasps and the stable placements of the movable object as continuous sets, rather than the discrete sets generally assumed by previous approaches.
Abstract: This paper deals with motion planning for robots manipulating movable objects among obstacles. We propose a general manipulation planning approach capable of addressing continuous sets for modeling both the possible grasps and the stable placements of the movable object, rather than the discrete sets generally assumed by previous approaches. The proposed algorithm relies on a topological property that characterizes the existence of solutions in the subspace of configurations where the robot grasps the object placed at a stable position. It allows us to devise a manipulation planner that captures, in a probabilistic roadmap, the connectivity of sub-dimensional manifolds of the composite configuration space. Experiments conducted with the planner in simulated environments demonstrate its efficacy in solving complex manipulation problems.
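The roadmap idea in this abstract can be illustrated with a deliberately simplified sketch: grasps and placements are reduced to scalar parameters (stand-ins for the continuous sets the paper models), roadmap nodes are sampled (grasp, placement) pairs, and an edge links nodes that are close in either coordinate, loosely mimicking transit/transfer connectivity. This is not the paper's algorithm; it only shows the capture-connectivity-then-search structure of probabilistic roadmap planning. All names and tolerances below are invented for illustration.

```python
import random
from collections import defaultdict, deque

# Hypothetical 1-D stand-ins for the continuous grasp and placement sets.
def sample_grasp():
    return random.uniform(0, 1)

def sample_placement():
    return random.uniform(0, 1)

def build_roadmap(n_nodes, tol=0.15):
    """Sample (grasp, placement) pairs and connect pairs that share a
    sufficiently close grasp or a sufficiently close placement."""
    nodes = [(sample_grasp(), sample_placement()) for _ in range(n_nodes)]
    adj = defaultdict(list)
    for i, (g1, p1) in enumerate(nodes):
        for j, (g2, p2) in enumerate(nodes[:i]):
            if abs(g1 - g2) < tol or abs(p1 - p2) < tol:
                adj[i].append(j)
                adj[j].append(i)
    return nodes, adj

def connected(adj, start, goal):
    """BFS over the roadmap: does a manipulation path exist?"""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

random.seed(0)
nodes, adj = build_roadmap(40)
ok = connected(adj, 0, len(nodes) - 1)
```

In the real planner the query answer is a path through sub-dimensional manifolds of the composite configuration space, not a boolean; the sketch keeps only the sample/connect/search skeleton.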

309 citations

Journal ArticleDOI
TL;DR: The aim of the present text is to analyze the potential of robotic exoskeletons to specifically rehabilitate joint motion and particularly inter-joint coordination in stroke patients.
Abstract: Upper-limb impairment after stroke is caused by weakness, loss of individual joint control, spasticity, and abnormal synergies. Upper-limb movement frequently involves abnormal, stereotyped, and fixed synergies, likely related to the increased use of sub-cortical networks following the stroke. The flexible coordination of the shoulder and elbow joints is also disrupted. New methods for motor learning, based on the stimulation of activity-dependent neural plasticity, have been developed. These include robots that can adaptively assist active movements and generate many movement repetitions. However, most of these robots only control the movement of the hand in space. The aim of the present text is to analyze the potential of robotic exoskeletons to specifically rehabilitate joint motion and particularly inter-joint coordination. First, a review of studies on upper-limb coordination in stroke patients is presented and the potential for recovery of coordination is examined. Second, issues relating to the mechanical design of exoskeletons and the transmission of constraints between the robotic and human limbs are discussed. The third section considers the development of different methods to control exoskeletons: existing rehabilitation devices and approaches to the control and rehabilitation of joint coordination are then reviewed, along with the preliminary clinical results available. Finally, perspectives and future strategies for the design of control mechanisms for rehabilitation exoskeletons are discussed.

156 citations

Journal ArticleDOI
03 Apr 2012
TL;DR: An original feature of this robot controller is that the hand trajectory is not imposed on the patient: only the coordination law is modified, and results demonstrate that the desired inter-joint coordination was successfully enforced, without significantly modifying the trajectory of the end point.
Abstract: The aim of this paper was to explore how an upper limb exoskeleton can be programmed to impose specific joint coordination patterns during rehabilitation. Based on a rationale that emphasizes the importance of the quality of movement coordination in the motor relearning process, a robot controller was developed with the aim of reproducing the individual corrections imposed by a physical therapist on a hemiparetic patient during pointing movements. The approach exploits a description of the joint synergies using principal component analysis (PCA) on joint velocities. This mathematical tool is used both to characterize the patient's movements, with or without the assistance of a physical therapist, and to program the exoskeleton during active-assisted exercises. An original feature of this controller is that the hand trajectory is not imposed on the patient: only the coordination law is modified. Experiments with hemiparetic patients using this new active-assisted mode were conducted. The results demonstrate that the desired inter-joint coordination was successfully enforced, without significantly modifying the trajectory of the end point.
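The PCA-on-joint-velocities idea can be sketched with plain NumPy: synergies are the leading right singular vectors of the centered velocity matrix, and each singular value gives the variance explained by its component. The toy data below (a strongly coupled shoulder/elbow pair plus a nearly independent third joint) is invented for illustration and does not come from the paper.

```python
import numpy as np

def joint_synergies(joint_velocities, n_synergies=2):
    """Extract dominant inter-joint coordination patterns (synergies)
    from a (samples x joints) matrix of joint velocities via PCA,
    implemented as an SVD of the centered data."""
    X = joint_velocities - joint_velocities.mean(axis=0)
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)  # variance ratio per component
    return vt[:n_synergies], explained[:n_synergies]

# Toy data: shoulder and elbow velocities strongly coupled (a fixed
# synergy); a third joint moves independently with small amplitude.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
vel = np.column_stack([
    t,                               # "shoulder" velocity
    0.8 * t + 0.05 * rng.normal(size=200),  # "elbow", coupled to shoulder
    0.1 * rng.normal(size=200),      # independent joint
])
synergies, var = joint_synergies(vel)
```

With this data the first synergy captures the shoulder-elbow coupling and accounts for nearly all of the variance; in the paper's setting, the synergies extracted with and without the therapist's assistance parameterize the coordination law the exoskeleton enforces.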

55 citations

Proceedings ArticleDOI
10 Dec 2007
TL;DR: This method computes both object and finger trajectories, as well as the finger relocation sequence, under a quasi-static movement assumption, and uses a special structuring of the search space that allows searching for paths directly in the subspace GSn of all grasps achievable with n grasping fingers.
Abstract: In this paper, we propose a new method for the motion planning problem of dexterous manipulation of a rigid object with a robotic multi-fingered hand, under a quasi-static movement assumption. This method computes both object and finger trajectories, as well as the finger relocation sequence. Its specificity is to use a special structuring of the search space that allows searching for paths directly in the particular subspace GSn, the subspace of all the grasps that can be achieved with n grasping fingers. Solving the dexterous manipulation planning problem is based on exploring this subspace. The proposed approach captures the connectivity of GSn in a graph structure. A manipulation planning query is then answered by searching for a path in the computed graph. Simulation experiments were conducted on different dexterous manipulation task examples to validate the proposed method.
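The graph-search step over a grasp subspace can be sketched minimally by abstracting a grasp to a set of labeled contact points and defining one "finger relocation" as swapping a single contact, so every node stays in the n-finger grasp set. This is only an illustration of searching a grasp connectivity graph, not the paper's GSn construction; the contact labels and the one-swap neighborhood are assumptions made for the sketch.

```python
from collections import deque

def neighbors(grasp, contact_points):
    """Grasps reachable by relocating a single finger: remove one
    contact and add another, keeping the number of fingers fixed."""
    for old in grasp:
        for new in contact_points - grasp:
            yield frozenset((grasp - {old}) | {new})

def relocation_sequence(start, goal, contact_points):
    """BFS in the grasp graph: shortest sequence of one-finger
    relocations turning the start grasp into the goal grasp."""
    prev, queue = {start: None}, deque([start])
    while queue:
        g = queue.popleft()
        if g == goal:
            path = []
            while g is not None:
                path.append(g)
                g = prev[g]
            return path[::-1]
        for nb in neighbors(g, contact_points):
            if nb not in prev:
                prev[nb] = g
                queue.append(nb)
    return None  # goal grasp unreachable

# Five candidate contact points; move three fingers from {a,b,c} to {c,d,e}.
seq = relocation_sequence(frozenset("abc"), frozenset("cde"), set("abcde"))
```

The real planner also interleaves object and finger trajectories between relocations; the sketch keeps only the discrete regrasp sequencing.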

54 citations


Cited by
Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations

Journal ArticleDOI
TL;DR: This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
Abstract: We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
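The two-step cascade can be sketched generically: a cheap scorer (standing in for the small first network) prunes the candidate pool, and a more expensive scorer (standing in for the larger second network) re-ranks only the survivors. The linear scorers and random feature vectors below are illustrative stand-ins, not the paper's networks or dataset.

```python
import numpy as np

def cascade_detect(candidates, cheap_score, expensive_score, top_k=10):
    """Two-stage cascade: a fast scorer prunes the candidate pool,
    then a slower, more accurate scorer picks the best survivor."""
    coarse = sorted(candidates, key=cheap_score, reverse=True)[:top_k]
    return max(coarse, key=expensive_score)

# Stand-ins for the two networks: the cheap scorer uses 2 of 5
# features, the expensive scorer uses all 5.
rng = np.random.default_rng(1)
w_cheap = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
w_full = np.array([1.0, 0.5, 0.3, 0.2, 0.1])
cands = [rng.normal(size=5) for _ in range(1000)]
best = cascade_detect(
    cands,
    cheap_score=lambda g: float(w_cheap @ g),
    expensive_score=lambda g: float(w_full @ g),
    top_k=50,
)
```

The design point is the cost split: the expensive scorer runs on 50 candidates instead of 1000, so the cascade's total cost is dominated by the cheap first pass, exactly the trade the abstract describes for the two networks.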

1,144 citations

Journal ArticleDOI
TL;DR: A review of the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps and an overview of the different methodologies are provided, which draw a parallel to the classical approaches that rely on analytic formulations.
Abstract: We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

859 citations

Proceedings Article
01 Jan 2013
TL;DR: In this paper, a two-step cascaded system with two deep networks is proposed to detect robotic grasps in an RGB-D view of a scene containing objects, where the top detections from the first are re-evaluated by the second.
Abstract: We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.

824 citations
