Proceedings ArticleDOI

Automated monocular vision based system for picking textureless objects

TL;DR: An Autonomous Machine Vision system that grasps a textureless object from a clutter in a single plane, rearranges it for proper placement, and then places it under visual guidance, combining a unique vision-based pose estimation algorithm, collision-free path planning and a dynamic Change-Over algorithm for final placement.
Abstract: This paper proposes an Autonomous Machine Vision system which grasps a textureless object from a clutter in a single plane, rearranges it for proper placement and then places it using vision. It contributes a unique vision-based pose estimation algorithm, collision-free path planning and a dynamic Change-Over algorithm for final placement.
Citations
Journal ArticleDOI
TL;DR: A comprehensive survey of data-driven robotic visual grasping detection (DRVGD) for unknown objects is presented in this article. Object-oriented DRVGD targets the physical properties of unknown objects, such as shape, texture and rigidity, which classify objects as conventional or challenging.
Abstract: This paper presents a comprehensive survey of data-driven robotic visual grasping detection (DRVGD) for unknown objects. We review both object-oriented and scene-oriented aspects, using the DRVGD for unknown objects as a guide. Object-oriented DRVGD aims for the physical information of unknown objects, such as shape, texture, and rigidity, which can classify objects into conventional or challenging objects. Scene-oriented DRVGD focuses on unstructured scenes, which are explored in two aspects based on the position relationships of object-to-object, grasping isolated or stacked objects in unstructured scenes. In addition, this paper provides a detailed review of associated grasping representations and datasets. Finally, the challenges of DRVGD and future directions are pointed out.

2 citations

Proceedings ArticleDOI
13 Jul 2016
TL;DR: The proposed system builds a 3D environment model using a mono-vision system; the model is constructed by capturing multiple shots from different locations and is updated continuously based on changes in the environment and the location of the robot.
Abstract: Mobile robot systems will be an important asset in our future. A mobile robot not only has to execute the predefined tasks it is programmed with, but must also explore the unknown environments it may be pushed to work in. In this paper we propose, implement and test a new model for a mobile robot environment using a mono-vision system. The proposed system builds a 3D environment model by capturing multiple shots from different locations. The 3D model describes the distance and the angle of objects with respect to the robot. The mobile robot then utilizes this model to navigate its environment: the 3D model is projected onto the motion floor to identify the obstacles surrounding the robot, and the robot avoids any object in its motion line. Most importantly, the model is updated continuously based on changes in the environment and the location of the robot.

1 citation


Cites background from "Automated monocular vision based sy..."

  • ...On the other hand, some research explored mono-vision system for mobile robots for predefined applications [14, 21, 23]....


Book ChapterDOI
01 Jan 2022
References
Proceedings ArticleDOI
08 May 1994
TL;DR: A system which can perform full 3-D pose estimation of a single arbitrarily shaped, rigid object at rates up to 10 Hz using an enhanced implementation of the Iterative Closest Point Algorithm introduced by Besl and McKay (1992).
Abstract: This paper describes a system which can perform full 3-D pose estimation of a single arbitrarily shaped, rigid object at rates up to 10 Hz. A triangular mesh model of the object to be tracked is generated offline using conventional range sensors. Real-time range data of the object is sensed by the CMU high speed VLSI range sensor. Pose estimation is performed by registering the real-time range data to the triangular mesh model using an enhanced implementation of the Iterative Closest Point (ICP) Algorithm introduced by Besl and McKay (1992). The method does not require explicit feature extraction or specification of correspondence. Pose estimation accuracies of the order of 1% of the object size in translation, and 1 degree in rotation have been measured.
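The core registration loop of Besl and McKay's ICP can be sketched as follows. This is a minimal 2D point-set version for illustration only — the CMU system registers range data against a triangular mesh with an enhanced implementation — using brute-force nearest neighbours and a Kabsch/SVD solve for the rigid transform:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best rigid transform (Kabsch)."""
    # Brute-force nearest-neighbour correspondences.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation via SVD of the cross-covariance matrix.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iterate match-and-align until the registration settles."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

No explicit feature extraction or correspondence specification is needed, which is the property the abstract highlights.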

161 citations

Journal ArticleDOI
TL;DR: A photometric invariant called the reflectance ratio is presented that can be computed from a single brightness image of a scene; it represents a physical property of a region and is invariant to the illumination conditions.
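The invariance follows from a simple cancellation: if image brightness factors as illumination times reflectance and the illumination is nearly constant over neighbouring pixels, the ratio (I1 - I2)/(I1 + I2) depends only on the reflectances. A minimal sketch under that assumption (the paper's exact neighbourhood scheme may differ):

```python
import numpy as np

def reflectance_ratio(image):
    """Ratio (I1 - I2)/(I1 + I2) between horizontal neighbours.
    A locally constant illumination factor cancels out of the ratio."""
    a = image[:, :-1].astype(float)
    b = image[:, 1:].astype(float)
    return (a - b) / (a + b + 1e-12)   # epsilon avoids division by zero
```

Scaling the whole image by an illumination factor leaves the ratio unchanged, which is why it can serve as an illumination-invariant region descriptor.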

45 citations

Proceedings ArticleDOI
23 Jun 2008
TL;DR: This paper describes a method to efficiently search for 3D models in a city-scale database and to compute the camera poses from single query images, and presents a 3D model search tool that uses a visual word based search scheme to efficiently retrieve 3D models from large databases using individual query images.
Abstract: This paper describes a method to efficiently search for 3D models in a city-scale database and to compute the camera poses from single query images. The proposed method matches SIFT features (from a single image) to viewpoint invariant patches (VIP) from a 3D model by warping the SIFT features approximately into the orthographic frame of the VIP features. This significantly increases the number of feature correspondences which results in a reliable and robust pose estimation. We also present a 3D model search tool that uses a visual word based search scheme to efficiently retrieve 3D models from large databases using individual query images. Together the 3D model search and the pose estimation represent a highly scalable and efficient city-scale localization system. The performance of the 3D model search and pose estimation is demonstrated on urban image data.
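Establishing the SIFT feature correspondences such a system relies on is commonly done with Lowe's ratio test: a match is kept only when the best descriptor distance is clearly smaller than the second best. The sketch below applies the test to generic descriptor arrays and is purely illustrative — it does not reproduce the paper's VIP warping step:

```python
import numpy as np

def ratio_test_matches(query, train, ratio=0.75):
    """Lowe-style ratio test on descriptor arrays (rows = descriptors).
    Returns (query_index, train_index) pairs for unambiguous matches."""
    # Pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(query))
    # Keep a match only if it beats the runner-up by the ratio margin.
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]
```

Pruning ambiguous matches this way is what makes the subsequent pose estimation robust to repeated structure, which is common in urban scenes.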

33 citations


"Automated monocular vision based sy..." refers methods in this paper

  • ...Most popular feature extraction techniques are based on gradient of the object (SIFT [3][4], SURF, colour thresholding)....


Proceedings ArticleDOI
09 May 2005
TL;DR: This paper describes a view-based method for object recognition and estimation of object pose from a single image, based on feature vector matching and clustering; the patch-duplet feature is compared to the SIFT feature.
Abstract: This paper describes a view-based method for object recognition and estimation of object pose from a single image. The method is based on feature vector matching and clustering. A set of interest points is detected and combined into pairs. A pair of patches, centered around each point in the pair, is extracted from a local orientation image. The patch orientation and size depend on the relative positions of the points, which makes the features invariant to translation and rotation, and locally invariant to scale. The method is demonstrated on a number of real images, and the patch-duplet feature is compared to the SIFT feature.

29 citations


"Automated monocular vision based sy..." refers methods in this paper

  • ...Most popular feature extraction techniques are based on gradient of the object (SIFT [3][4], SURF, colour thresholding)....


Book ChapterDOI
01 Nov 2009
TL;DR: This article proposes a structured-light-based bin picking system that uses primitive models involving only a small amount of prior knowledge, and obtains a reliable 3D range image using gray-coded patterns for comparison with conventional systems.
Abstract: As a part of factory automation, bin picking systems perform pick-and-place tasks for randomly oriented parts from bins or boxes. Conventional bin picking systems can estimate the pose of an object only if the system has complete knowledge of the object (e.g., geometric features provided by an image or a computer-aided design model). However, these systems require the features visible in an image to calculate the pose of an object, and they require additional setup time for an operator to register the reference model every time the workpiece changes. In this article, we propose a structured light based bin picking system that makes use of primitive models involving a small amount of prior knowledge. To obtain a reliable 3D range image for comparison with conventional systems, we use a structured light sensor with gray-coded patterns. With the 3D range image, the pose of the object is estimated through primitive segmentation, rotational symmetric object modeling, and recognition. Through experiments involving an industrial robot, we validated that the proposed method can be employed for a bin picking system.
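Gray-coded patterns are a standard choice for structured light because consecutive stripe indices differ in exactly one bit, so a decoding error at a stripe boundary is off by at most one stripe. A minimal encode/decode sketch — the pattern layout below is an illustrative assumption, not the authors' exact projector setup:

```python
def to_gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by cumulative XOR over the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def column_patterns(width, bits):
    """Stripe patterns: pattern k holds bit k (MSB first) of each
    projector column's Gray code. A camera pixel's observed bit
    string across the patterns decodes to its projector column."""
    return [[(to_gray(c) >> (bits - 1 - k)) & 1 for c in range(width)]
            for k in range(bits)]
```

Triangulating each decoded camera-pixel/projector-column pair then yields the 3D range image the abstract describes.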

26 citations