Patent

Detection and reconstruction of an environment to facilitate robotic interaction with the environment

TL;DR: In this article, a method for detecting and reconstructing environments to facilitate robotic interaction with such environments is described, in which a 3D virtual environment representative of the physical environment of a robotic manipulator is determined, including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment.
Abstract: Methods and systems for detecting and reconstructing environments to facilitate robotic interaction with such environments are described. An example method may involve determining a three-dimensional (3D) virtual environment representative of a physical environment of the robotic manipulator including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment. The method may then involve determining two-dimensional (2D) images of the virtual environment including 2D depth maps. The method may then involve determining portions of the 2D images that correspond to a given one or more physical objects. The method may then involve determining, based on the portions and the 2D depth maps, 3D models corresponding to the portions. The method may then involve, based on the 3D models, selecting a physical object from the given one or more physical objects. The method may then involve providing an instruction to the robotic manipulator to move that object.
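The claimed pipeline (virtual environment → 2D images with depth maps → per-object portions → 3D models → object selection → move instruction) can be sketched roughly as follows. All names, and especially the `graspable_score` selection metric, are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    depth_map: list          # 2D depth values for this object's image region
    graspable_score: float   # hypothetical suitability metric from its 3D model

def select_object(objects):
    """Stand-in for the 'select a physical object based on the 3D models'
    step; the scoring rule here is a placeholder."""
    return max(objects, key=lambda o: o.graspable_score)

def plan_move(obj):
    # Stand-in for 'provide an instruction to the robotic manipulator'.
    return f"move {obj.name}"

scene = [
    VirtualObject("box_a", [[0.9, 0.9]], graspable_score=0.4),
    VirtualObject("box_b", [[0.7, 0.7]], graspable_score=0.8),
]
chosen = select_object(scene)
instruction = plan_move(chosen)
```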
Citations
Patent
Bill Duran, Adrian Mircea Proca
18 May 2016
TL;DR: In this article, while the camera is in night mode the IR filter is not interposed between the lens assembly and the sensor array; the camera receives ambient light that is not filtered by the IR filter and determines whether the received ambient light is due to a light source other than an IR light source.
Abstract: A method for controlling a camera mode is executed at a camera including a controller, a sensor array, an IR filter, and a lens assembly. The camera is operated in a night mode. While in the night mode the IR filter is not interposed between the lens assembly and the sensor array, the camera receives at the sensor array ambient light that is not filtered by the IR filter, determines whether the received ambient light is due to a light source other than an IR light source, and detects a light level of the received ambient light. The camera switches the operation of the camera from the night mode to a day mode when it is determined that the received ambient light is due to a light source other than an IR light source and that the light level of the received ambient light exceeds a first threshold.
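The claimed switching rule is a simple conjunction, which can be sketched as follows; the threshold value and all identifiers are placeholders, not taken from the patent.

```python
def next_mode(current_mode, ir_source_detected, light_level, day_threshold=0.5):
    """Decide the camera's operating mode (sketch of the claimed rule).

    While in night mode (IR filter withdrawn), switch to day mode only
    when the ambient light is NOT from an IR source AND its level
    exceeds the first threshold; otherwise stay in the current mode.
    """
    if current_mode == "night" and not ir_source_detected and light_level > day_threshold:
        return "day"
    return current_mode
```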

62 citations

Patent
16 Dec 2014
TL;DR: In this paper, a robotic arm or manipulator can be used to grasp inventory items within an inventory system, and information about an item to be grasped can be detected and/or accessed from one or more databases to determine a grasping strategy for grasping the item with the robotic arm.
Abstract: Robotic arms or manipulators can be utilized to grasp inventory items within an inventory system. Information about an item to be grasped can be detected and/or accessed from one or more databases to determine a grasping strategy for grasping the item with a robotic arm or manipulator. For example, one or more accessed databases can contain information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past.
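The database consultation described above — choosing a grasping strategy from past success and failure records — might look like the following toy lookup; the table contents, strategy names, and scoring rule are all invented for illustration.

```python
# Hypothetical per-item success counts for each grasping strategy.
GRASP_HISTORY = {
    "mug":  {"suction": 2, "pinch": 9},
    "book": {"suction": 7, "pinch": 3},
}

def choose_strategy(item, history=GRASP_HISTORY, default="pinch"):
    """Return the historically most successful strategy for the item,
    falling back to a default when the item has no recorded history."""
    record = history.get(item)
    if not record:
        return default
    return max(record, key=record.get)
```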

59 citations

Patent
16 Dec 2014
TL;DR: In this article, robotic arms are used to grasp inventory items within an inventory system, and information about an inventory item to be grasped can be detected and used to determine a grasping strategy in conjunction with information from a database.
Abstract: Robotic arms may be utilized to grasp inventory items within an inventory system. Information about an inventory item to be grasped can be detected and used to determine a grasping strategy in conjunction with information from a database. Instructions for grasping an inventory item can be generated based on the detected information and the database.

56 citations

Patent
12 Jun 2015
TL;DR: In this paper, lookup tables for estimating spatial depth in a scene are generated by identifying subsets of illuminators of a camera system that has a 2-dimensional array of image sensors and illuminators in fixed locations relative to the array, and by partitioning the image sensors into a plurality of pixels.
Abstract: A process generates lookup tables for estimating spatial depth in a scene. The process identifies subsets of illuminators of a camera system that has a 2-dimensional array of image sensors and illuminators in fixed locations relative to the array, and partitions the image sensors into a plurality of pixels. For each pixel, and for each of m distinct depths from the respective pixel, the process simulates a virtual surface at the respective depth. For each of the subsets of illuminators, the process determines an expected light intensity at the pixel based on the respective depth. The process forms an intensity vector using the expected light intensities for each of the distinct subsets and normalizes the intensity vector. For each pixel, the process constructs a lookup table comprising the normalized vectors corresponding to the pixel. The lookup table associates each normalized vector with the depth of the corresponding simulated surface.
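The table construction and its use at run time can be sketched as follows: simulate a surface at each candidate depth, record the normalized expected-intensity vector, then estimate depth by nearest-vector match. The `toy_model` (three illuminator subsets at different lateral offsets, inverse-square falloff) is an invented stand-in for the patent's simulation, chosen so the vector's shape varies with depth even though normalization discards overall brightness.

```python
import math

def normalize(vec):
    n = math.sqrt(sum(x * x for x in vec))
    return [x / n for x in vec]

def build_lut(depths, intensity_model):
    """Per-pixel lookup table: normalized intensity vector -> simulated depth.
    intensity_model(d) returns expected intensities per illuminator subset."""
    return [(normalize(intensity_model(d)), d) for d in depths]

def estimate_depth(lut, observed):
    """Nearest normalized-vector entry gives the depth estimate."""
    obs = normalize(observed)
    return min(lut, key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], obs)))[1]

# Toy per-pixel model: subsets at lateral offsets 0, 1, 2 from the pixel.
offsets = [0.0, 1.0, 2.0]
toy_model = lambda d: [1.0 / (d * d + x * x) for x in offsets]
lut = build_lut([0.5, 1.0, 2.0], toy_model)
```

Because the stored vectors are normalized, the estimate depends only on the relative intensities across subsets, not on absolute brightness.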

37 citations

Patent
04 Aug 2016
TL;DR: In this paper, a drop perception system is presented that includes an open housing structure with an internal volume, an open top, and an open bottom, and a plurality of perception units positioned to capture perception data within the internal volume.
Abstract: A drop perception system is disclosed that includes an open housing structure having an internal volume, an open top and an open bottom, and a plurality of perception units positioned to capture perception data within the internal volume at a plurality of locations between the open top and the open bottom of the open housing.

34 citations

References
Proceedings Article (DOI)
01 Aug 1987
TL;DR: In this paper, a divide-and-conquer approach is used to generate inter-slice connectivity, and then a case table is created to define triangle topology using linear interpolation.
Abstract: We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.
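The per-edge vertex placement described above — linearly interpolating where the density field crosses the isovalue along a cube edge — is the core numeric step of marching cubes and can be sketched as:

```python
def interp_vertex(p1, p2, v1, v2, iso):
    """Place a triangle vertex on the cube edge from corner p1 (field
    value v1) to corner p2 (field value v2), at the point where linear
    interpolation of the field equals the isovalue `iso`."""
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

With the isovalue midway between the corner values, the vertex lands midway along the edge.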

13,231 citations

Book
01 Jan 1986
TL;DR: This textbook covers spatial descriptions and transformations, manipulator kinematics and dynamics, Jacobians relating joint velocities and static forces, trajectory generation, linear, nonlinear, and force control of manipulators, and robot programming languages and systems.
Abstract: 1. Introduction. 2. Spatial Descriptions and Transformations. 3. Manipulator Kinematics. 4. Inverse Manipulator Kinematics. 5. Jacobians: Velocities and Static Forces. 6. Manipulator Dynamics. 7. Trajectory Generation. 8. Manipulator Mechanism Design. 9. Linear Control of Manipulators. 10. Nonlinear Control of Manipulators. 11. Force Control of Manipulators. 12. Robot Programming Languages and Systems. 13. Off-Line Programming Systems.
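The Jacobian chapter concerns the standard relation v = J(q) q̇ between joint velocities and end-effector velocity. The planar 2-link arm below is the classic worked example from such texts; the link lengths and test configuration are chosen arbitrarily here.

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm: maps joint velocities
    (q1dot, q2dot) to end-effector velocity (vx, vy)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def ee_velocity(q1, q2, q1dot, q2dot):
    """End-effector velocity v = J(q) * qdot."""
    J = jacobian_2link(q1, q2)
    return (J[0][0] * q1dot + J[0][1] * q2dot,
            J[1][0] * q1dot + J[1][1] * q2dot)
```

At q1 = 0, q2 = π/2 with unit links, rotating only the first joint moves the end effector up and to the left, as the Jacobian columns predict.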

5,992 citations

Proceedings Article (DOI)
26 Oct 2011
TL;DR: A system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions is presented, using only a moving low-cost depth camera and commodity graphics hardware; it fuses all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real time.
Abstract: We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
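The coarse-to-fine ICP tracking mentioned above rests on the basic ICP update: pair each source point with its nearest model point and solve for the aligning transform. A translation-only toy version (not the paper's actual GPU pipeline, which estimates a full 6-DoF pose) looks like:

```python
def icp_translation_step(src, dst):
    """One translation-only ICP iteration: pair each src point with its
    nearest neighbour in dst, then return the least-squares translation,
    which for translation alone is simply the mean pairwise offset."""
    def nearest(p):
        return min(dst, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
    pairs = [(p, nearest(p)) for p in src]
    dims = len(src[0])
    return tuple(sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                 for i in range(dims))
```

In practice this step is iterated, since nearest-neighbour correspondences improve as the clouds move into alignment.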

4,184 citations

Proceedings Article (DOI)
Brian Curless, Marc Levoy
01 Aug 1996
TL;DR: This paper presents a volumetric method for integrating range images that is able to integrate a large number of range images, yielding seamless, high-detail models of up to 2.6 million triangles.
Abstract: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.
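The cumulative weighted signed distance function at the heart of this method is, per voxel, a running weighted average; the "simple additive scheme" can be sketched as:

```python
def fuse(D, W, d, w):
    """Combine a voxel's stored signed distance D (cumulative weight W)
    with a new observation d (weight w): the cumulative weighted average
    used in Curless & Levoy's volumetric integration."""
    return (W * D + w * d) / (W + w), W + w
```

Each new range image updates every affected voxel this way, so observations accumulate incrementally and the stored weight records directional confidence.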

3,282 citations

Proceedings Article (DOI)
24 Apr 2000
TL;DR: A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces by incrementally building two rapidly-exploring random trees rooted at the start and the goal configurations.
Abstract: A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two rapidly-exploring random trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through, the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented.
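The bidirectional scheme described above — grow two trees, extend one toward a random sample, then greedily connect the other to the new node — can be sketched in a 2D obstacle-free square. Real uses add collision checks in `steer`; the workspace bounds, step size, and iteration cap here are arbitrary.

```python
import random

def rrt_connect(start, goal, step=0.5, iters=200, seed=0):
    """Minimal RRT-Connect sketch in the square [0,10]^2, no obstacles."""
    rng = random.Random(seed)

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def steer(a, b):
        # Move from a toward b by at most `step`.
        d = dist(a, b)
        if d <= step:
            return b
        t = step / d
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    def path_to_root(tree, node):
        out = []
        while node is not None:
            out.append(node)
            node = tree[node]
        return out

    tree_a, tree_b = {start: None}, {goal: None}   # node -> parent
    for _ in range(iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(tree_a, key=lambda n: dist(n, sample))
        new = steer(near, sample)
        tree_a[new] = near
        nb = min(tree_b, key=lambda n: dist(n, new))
        while True:                       # greedy connect of tree_b to `new`
            nxt = steer(nb, new)
            if nxt not in tree_b:
                tree_b[nxt] = nb
            if nxt == new:                # trees met: splice the two branches
                half_a = path_to_root(tree_a, new)[::-1]
                half_b = path_to_root(tree_b, new)
                full = half_a + half_b[1:]
                return full if full[0] == start else full[::-1]
            if nxt == nb:                 # no progress
                break
            nb = nxt
        tree_a, tree_b = tree_b, tree_a   # swap tree roles each iteration
    return None
```

The returned path runs from start to goal with consecutive waypoints no farther apart than the step size.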

3,102 citations