Patent

Depth camera based on structured light and stereo vision

Sagi Katz, Avishai Adler
01 Aug 2011
TL;DR: In this article, a depth camera system uses a structured light illuminator and multiple sensors such as infrared light detectors, such as in a system which tracks the motion of a user in a field of view.
Abstract: A depth camera system uses a structured light illuminator and multiple sensors such as infrared light detectors, such as in a system which tracks the motion of a user in a field of view. One sensor can be optimized for shorter range detection while another sensor is optimized for longer range detection. The sensors can have a different baseline distance from the illuminator, as well as a different spatial resolution, exposure time and sensitivity. In one approach, depth values are obtained from each sensor by matching to the structured light pattern, and the depth values are merged to obtain a final depth map which is provided as an input to an application. The merging can involve unweighted averaging, weighted averaging, accuracy measures and/or confidence measures. In another approach, additional depth values which are included in the merging are obtained using stereoscopic matching among pixel data of the sensors.
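The merging step described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the patent's implementation), assuming each sensor already produced a per-pixel depth map and a per-pixel matching-confidence map registered to a common image grid; the confidence-weighted average is one of the merging options the abstract mentions.

```python
import numpy as np

def merge_depth_maps(depth_a, conf_a, depth_b, conf_b, min_conf=0.1):
    """Confidence-weighted merge of two per-pixel depth maps.

    depth_a, depth_b : float arrays of identical shape (depth in meters,
                       0 or NaN where the sensor produced no match).
    conf_a, conf_b   : matching confidence in [0, 1] for each pixel.
    """
    depth_a = np.nan_to_num(depth_a, nan=0.0)
    depth_b = np.nan_to_num(depth_b, nan=0.0)

    # Treat low-confidence or missing measurements as absent.
    w_a = np.where((depth_a > 0) & (conf_a >= min_conf), conf_a, 0.0)
    w_b = np.where((depth_b > 0) & (conf_b >= min_conf), conf_b, 0.0)

    total = w_a + w_b
    merged = np.zeros_like(depth_a)
    valid = total > 0
    # Weighted average wherever at least one sensor contributed a value.
    merged[valid] = (w_a[valid] * depth_a[valid] +
                     w_b[valid] * depth_b[valid]) / total[valid]
    return merged
```

A fuller variant would also fold in the accuracy measures and the stereoscopically matched depth values the abstract mentions, as additional weighted terms in the same average.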
Citations
Patent
Jong Hwan Kim
13 Mar 2015
TL;DR: In this article, a mobile terminal is described that includes a body; a touchscreen provided on the front and extending to a side of the body and configured to display content; and a controller configured to detect when one side of the body comes into contact with one side of an external terminal, and to display a first area on the touchscreen corresponding to the contact area between the body and the external terminal together with a second area including the content.
Abstract: A mobile terminal including a body; a touchscreen provided on the front and extending to a side of the body and configured to display content; and a controller configured to detect when one side of the body comes into contact with one side of an external terminal, display a first area on the touchscreen corresponding to the contact area between the body and the external terminal and a second area including the content, receive an input moving the content displayed in the second area to the first area, display the content in the first area, and share the content in the first area with the external terminal.
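As a rough illustration of the described controller behaviour, the sketch below (all names and types are invented for the example) models the two display areas and treats a move of content into the contact-aligned first area as the trigger for sharing it with the external terminal.

```python
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    content: object = None

def handle_side_contact(contact_region, content, send_to_external):
    """Illustrative controller logic: on side contact, show a first area at
    the contact region alongside the existing content area; moving content
    into the first area displays it there and shares it."""
    first_area = Area(name=f"contact@{contact_region}")
    second_area = Area(name="content", content=content)

    def on_move_to_first_area():
        # Display the moved content in the first area and share it.
        first_area.content = second_area.content
        send_to_external(first_area.content)

    return first_area, second_area, on_move_to_first_area
```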

1,441 citations

Patent
Paul Edward Showering
25 Jan 2013
TL;DR: In this article, the authors describe a system for determining dimensions of a physical object using a mobile computer equipped with a motion sensing device, which includes a microprocessor, a memory, a user interface, a motion sensor, and a dimensioning program executable by the microprocessor.
Abstract: Devices, methods, and software are disclosed for determining dimensions of a physical object using a mobile computer equipped with a motion sensing device. In an illustrative embodiment, the mobile computer can comprise a microprocessor, a memory, a user interface, a motion sensing device, and a dimensioning program executable by the microprocessor. The processor can be in communicative connection with executable instructions enabling the processor to perform various steps. One step includes initiating a trajectory tracking mode responsive to receiving a first user interface action. Another step includes tracking the mobile computer's trajectory along a surface of a physical object by storing in the memory a plurality of motion sensing data items outputted by the motion sensing device. Another step includes exiting the trajectory tracking mode responsive to receiving a second user interface action. Another step includes calculating three dimensions of a minimum bounding box corresponding to the physical object.
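The final step, computing three dimensions of a minimum bounding box from the traced trajectory, can be sketched as follows. This is an illustrative approximation, assuming the motion-sensing data has already been integrated into 3D positions; it uses a PCA-aligned box rather than a true minimum-volume box.

```python
import numpy as np

def min_bounding_box_dims(trajectory_xyz):
    """Estimate three bounding-box dimensions from a traced trajectory.

    trajectory_xyz : (N, 3) array of positions reconstructed from the
                     motion-sensing data while the device is moved along
                     the object's surface.
    """
    pts = np.asarray(trajectory_xyz, dtype=float)
    centered = pts - pts.mean(axis=0)

    # Principal axes of the traced points give a reasonable box orientation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T

    extents = aligned.max(axis=0) - aligned.min(axis=0)
    return np.sort(extents)[::-1]   # e.g. length, width, height
```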

370 citations

Patent
03 May 2013
TL;DR: In this paper, a method for volume dimensioning packages is described in which the system determines, from received image data, a number of three-dimensional features of a first three-dimensional object.
Abstract: Systems and methods for volume dimensioning packages are provided. A method of operating a volume dimensioning system may include the receipt of image data of an area containing at least a first three-dimensional object to be dimensioned from a first point of view, as captured using at least one image sensor. The system can determine from the received image data a number of features in three dimensions of the first three-dimensional object. Based at least in part on the determined features of the first three-dimensional object, the system can fit a first three-dimensional packaging wireframe model about the first three-dimensional object. The system can display an image of the first three-dimensional packaging wireframe model fitted about an image of the first three-dimensional object on a display device.
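A minimal sketch of the wireframe-fitting step, assuming the detected features are already available as 3D points in a common frame (the axis-aligned box here is a simplification of the packaging wireframe model the abstract describes):

```python
import numpy as np

def fit_packaging_wireframe(feature_points_xyz):
    """Fit an axis-aligned box wireframe around 3D feature points.

    feature_points_xyz : (N, 3) array of object features recovered from
                         the image data. Returns the 8 box corners and
                         the 12 edges as corner-index pairs.
    """
    pts = np.asarray(feature_points_xyz, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)

    # Eight corners of the box spanning [lo, hi] on each axis.
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                  for y in (lo[1], hi[1])
                                  for z in (lo[2], hi[2])])
    edges = [(0, 1), (0, 2), (0, 4), (1, 3), (1, 5), (2, 3),
             (2, 6), (3, 7), (4, 5), (4, 6), (5, 7), (6, 7)]
    return corners, edges
```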

362 citations

Patent
07 May 2012
TL;DR: In this paper, a system for determining the volume and dimensions of a three-dimensional object using a dimensioning system is described, where the dimensioning system can include an image sensor, a non-transitory machine-readable storage, and a processor.
Abstract: Systems and methods of determining the volume and dimensions of a three-dimensional object using a dimensioning system are provided. The dimensioning system can include an image sensor, a non-transitory machine-readable storage, and a processor. The dimensioning system can select and fit a three-dimensional packaging wireframe model about each three-dimensional object located within a first point of view of the image sensor. Calibration is performed between the image sensors of the dimensioning system and those of the imaging system. Calibration may occur pre-run time, in a calibration mode or period. Calibration may occur during a routine. Calibration may be automatically triggered on detection of a coupling between the dimensioning and imaging systems.
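The calibration scheduling described above reduces to a small decision rule; the sketch below is illustrative only, with invented flag names for the three triggers (pre-run-time or calibration mode, a routine step, and detection of a coupling).

```python
def should_calibrate(mode, runtime_started, routine_step, coupling_detected):
    """Decide when to (re)calibrate between the dimensioning system's image
    sensor and the imaging system's sensor (illustrative flags only)."""
    if mode == "calibration" or not runtime_started:
        return True                       # pre-run time / calibration mode or period
    if routine_step == "calibrate":
        return True                       # calibration scheduled during a routine
    if coupling_detected:
        return True                       # dimensioning and imaging systems newly coupled
    return False
```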

342 citations

Patent
15 May 2012
TL;DR: In this paper, an actuator is operably connected to at least one imaging subsystem for changing the angle of the optical axis relative to the terminal, so that the object in second image data can be aligned with the object in first image data.
Abstract: A terminal for measuring at least one dimension of an object includes at least one imaging subsystem and an actuator. The at least one imaging subsystem includes an imaging optics assembly operable to focus an image onto an image sensor array. The imaging optics assembly has an optical axis. The actuator is operably connected to the at least one imaging subsystem for changing the angle of the optical axis relative to the terminal. The terminal is adapted to obtain first image data of the object and is operable to determine at least one of a height, a width, and a depth dimension of the object based on effecting the actuator to change the angle of the optical axis relative to the terminal to align the object in second image data with the object in the first image data, the second image data being different from the first image data.
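One way to read the measurement principle is as simple triangulation from the re-alignment tilt: if the two views are separated by a known offset, the tilt angle needed to align the object across them fixes its range, and the range plus the object's angular extent gives a linear dimension. The sketch below is a simplified, hypothetical geometry, not the patent's exact method.

```python
import math

def estimate_range(baseline_m, tilt_rad):
    """Range from the tilt needed to re-align the object between two views.

    baseline_m : separation between the two imaging positions (meters).
    tilt_rad   : optical-axis tilt (radians) that aligns the object in the
                 second image data with the object in the first image data.
    """
    return baseline_m / math.tan(tilt_rad)

def estimate_extent(range_m, angular_extent_rad):
    """Linear dimension of the object from its angular extent at a known range."""
    return 2.0 * range_m * math.tan(angular_extent_rad / 2.0)
```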

341 citations

References
Journal ArticleDOI
TL;DR: It is shown that the SSSD-in-inverse-distance function exhibits a unique and clear minimum at the correct matching position, even when the underlying intensity patterns of the scene include ambiguities or repetitive patterns.
Abstract: A stereo matching method that uses multiple stereo pairs with various baselines generated by a lateral displacement of a camera to obtain precise distance estimates without suffering from ambiguity is presented. Matching is performed simply by computing the sum of squared-difference (SSD) values. The SSD functions for individual stereo pairs are represented with respect to the inverse distance and are then added to produce the sum of SSDs. This resulting function is called the SSSD-in-inverse-distance. It is shown that the SSSD-in-inverse-distance function exhibits a unique and clear minimum at the correct matching position, even when the underlying intensity patterns of the scene include ambiguities or repetitive patterns. The authors first define a stereo algorithm based on the SSSD-in-inverse-distance and present a mathematical analysis to show how the algorithm can remove ambiguity and increase precision. Experimental results with real stereo images are presented to demonstrate the effectiveness of the algorithm.
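The algorithm is straightforward to sketch: for each candidate inverse distance, compute the window SSD in every stereo pair at the disparity that inverse distance implies for that pair's baseline, sum the SSDs, and take the minimizing candidate. The example below assumes rectified, laterally displaced images, rounds disparities to whole pixels, and assumes the windows stay inside the images.

```python
import numpy as np

def sssd_inverse_distance(ref, others, baselines, focal_px,
                          inv_depths, x, y, half_win=2):
    """SSSD-in-inverse-distance at reference pixel (x, y).

    ref        : reference image (2D float array).
    others     : list of images taken after lateral camera displacements.
    baselines  : lateral displacement of each image in meters.
    focal_px   : focal length in pixels.
    inv_depths : candidate inverse distances (1/m) to evaluate.

    Returns the inverse distance whose summed SSD over all pairs is minimal.
    """
    win = ref[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    sssd = np.zeros(len(inv_depths))

    for k, zeta in enumerate(inv_depths):
        for img, b in zip(others, baselines):
            d = int(round(focal_px * b * zeta))       # disparity for this baseline
            patch = img[y - half_win:y + half_win + 1,
                        x + d - half_win:x + d + half_win + 1]
            sssd[k] += np.sum((win - patch) ** 2)     # SSD for this pair

    return inv_depths[int(np.argmin(sssd))]
```

Because every pair's SSD is evaluated at the same inverse distance rather than the same pixel disparity, the summed curve keeps a single sharp minimum even when an individual pair is ambiguous, which is the paper's central point.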

1,066 citations

Patent
07 Jun 1995
TL;DR: In this article, a method and apparatus for mapping depth of an object in a preferred arrangement uses a projected light pattern to provide a selected texture to the object along the optical axis (24) of observation.
Abstract: A method and apparatus for mapping depth of an object (22) in a preferred arrangement uses a projected light pattern to provide a selected texture to the object (22) along the optical axis (24) of observation. An imaging system (32, 34) senses first and second images of the object (22) with the projected light pattern and compares the defocus of the projected pattern in the images to determine relative depth of elemental portions of the object (22).
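A hedged sketch of the defocus comparison, assuming the two images are captured with different focus settings and that local contrast of the projected pattern is a usable proxy for its defocus (the windowed standard deviation below is one simple choice; outputs are cropped to valid windows):

```python
import numpy as np

def local_sharpness(img, half_win=4):
    """Local contrast of the projected pattern: windowed standard deviation."""
    img = np.asarray(img, dtype=float)
    k = 2 * half_win + 1

    def box_mean(a):
        # k-by-k box-filter mean via 2D cumulative sums (valid region only).
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[k:, k:] - c[k:, :-k] - c[:-k, k:] + c[:-k, :-k]) / (k * k)

    mean = box_mean(img)
    mean_sq = box_mean(img ** 2)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def relative_depth_cue(img_focus_near, img_focus_far, eps=1e-6):
    """Ratio of pattern sharpness between two differently focused images;
    larger values indicate surface patches nearer the 'near' focus plane."""
    return local_sharpness(img_focus_near) / (local_sharpness(img_focus_far) + eps)
```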

378 citations

Patent
22 Nov 2011
TL;DR: In this article, a bi-dimensional coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines.
Abstract: A method and apparatus for obtaining an image to determine a three dimensional shape of a stationary or moving object using a bi-dimensional coded light pattern having a plurality of distinct identifiable feature types. The coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines. An image of the object is captured and the reflected feature types are extracted along with their location on known epipolar lines in the captured image. Displacements of the reflected feature types along their epipolar lines from reference coordinates thereupon determine corresponding three dimensional coordinates in space and thus a 3D mapping or model of the shape of the object at any point in time.
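Once a feature type is identified and its displacement along its epipolar line is measured, recovering a 3D point is standard rectified triangulation. The sketch below assumes a rectified projector-camera pair with known focal length, baseline, and principal point; it is the textbook relation, not necessarily the patent's exact mapping.

```python
def feature_to_3d(u, v, u_ref, focal_px, baseline_m, cx, cy):
    """Triangulate one decoded feature in a rectified projector-camera pair.

    u, v       : pixel where the feature was detected in the camera image.
    u_ref      : reference coordinate of that feature type on its epipolar line.
    focal_px   : focal length in pixels; baseline_m: projector-camera baseline.
    cx, cy     : principal point of the camera.
    """
    disparity = u - u_ref                  # displacement along the epipolar line
    if disparity == 0:
        return None                        # feature at infinity or decoding error
    z = focal_px * baseline_m / disparity  # depth from similar triangles
    x = (u - cx) * z / focal_px            # back-project to camera coordinates
    y = (v - cy) * z / focal_px
    return (x, y, z)
```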

318 citations

Patent
20 Nov 2007
TL;DR: In this paper, a bi-dimensional coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines.
Abstract: A method and apparatus for obtaining an image to determine a three dimensional shape of a stationary or moving object using a bi-dimensional coded light pattern having a plurality of distinct identifiable feature types. The coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines. An image of the object is captured and the reflected feature types are extracted along with their location on known epipolar lines in the captured image. The locations of identified feature types in the 2D image are corresponded to 3D coordinates in space in accordance with triangulation mappings. Thus a 3D mapping or model of imaged objects at any point in time is obtained.

275 citations

Patent
18 Jun 2007
TL;DR: In this article, a stereo vision system is presented that uses a combination of cameras having a wide baseline in order to detect obstacles that are relatively far away from the cameras and a combination having a shorter baseline to detect obstacles that are closer to the cameras.
Abstract: Stereo vision system for a vehicle, the system receiving input from various combinations of at least three cameras depending on the status of the vehicle (e.g., speed, steering angle, and so forth) or depending on the surrounding environment (e.g., the rate that obstacles are detected, the size of the obstacles, and so forth). The three cameras are mounted asymmetrically spaced apart from each other, so that the baseline between each combination of two cameras is different. The vision system uses a combination of the cameras having a wide baseline in order to detect obstacles that are relatively far away and a combination of cameras having a shorter baseline to detect obstacles that are closer to the cameras.
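The baseline-selection policy can be sketched as a simple rule over the vehicle state and the detected obstacles; the thresholds below are purely illustrative, not values from the patent.

```python
def pick_camera_pair(pairs, speed_mps, nearest_obstacle_m):
    """Choose a stereo pair by baseline: wide for far sensing, short for near.

    pairs : list of (left_cam_id, right_cam_id, baseline_m) for the
            asymmetric camera combinations.
    """
    by_baseline = sorted(pairs, key=lambda p: p[2])
    if speed_mps > 20.0 or nearest_obstacle_m > 30.0:
        return by_baseline[-1]   # widest baseline: better range resolution far away
    if nearest_obstacle_m < 10.0:
        return by_baseline[0]    # shortest baseline: usable at close range
    return by_baseline[len(by_baseline) // 2]
```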

219 citations