
Showing papers by "Gary Bradski" published in 2002


Patent
Gary Bradski1
22 Aug 2002
TL;DR: In this paper, a method, apparatus and system for identifying the location of eyes using structured light from a source off the optical axis of a depth imaging device are presented; the light returned from the object to the structured light depth imaging device is used to generate a depth image.
Abstract: A method, apparatus and system identify the location of eyes. Specifically, structured light is transmitted towards an object from a structured light source off the optical axis of a structured light depth imaging device. The light returned from the object to the structured light depth imaging device is used to generate a depth image. In the event the object is a face, contrast areas in the depth image indicate the location of the eyes.
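The patent describes this only at the system level. As a loose illustration of the final step (flagging high-contrast regions in a face depth image), a minimal sketch using OpenCV and NumPy might look like the following; the function name, gradient-based contrast measure, Otsu threshold, and minimum-area filter are all assumptions made for illustration, not details taken from the patent.

```python
import cv2
import numpy as np

def locate_eye_candidates(depth_image, min_area=20):
    """Illustrative only: flag high-contrast regions in a face depth image.

    depth_image: single-channel array from a structured-light depth sensor.
    Returns bounding boxes (x, y, w, h) of contrast blobs that may correspond to eyes.
    """
    # Normalize the raw depth values to an 8-bit range for processing.
    depth = cv2.normalize(depth_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Use the magnitude of the depth gradient as a simple local-contrast measure.
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    contrast = cv2.magnitude(gx, gy)
    contrast_u8 = cv2.normalize(contrast, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Keep the strongest contrast areas (Otsu threshold chosen arbitrarily here).
    _, mask = cv2.threshold(contrast_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```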

72 citations


Patent
30 Dec 2002
TL;DR: In this paper, a radiation sensing structure is described that includes red, green and blue photodiodes stacked above an infrared radiation sensing photodiode; the stack can be viewed as a three-dimensional lattice.
Abstract: A radiation sensing structure includes red, green and blue photodiodes stacked above an infrared radiation sensing photodiode.

65 citations


Patent
Gary Bradski1
25 Sep 2002
TL;DR: In this article, a 360-degree camera is used to generate images of a face, and a contrast area between the images indicates the location of the eyes.
Abstract: A method, apparatus and system identify the location of eyes. Specifically, a 360-degree camera is used to generate images of a face and identify the location of eyes in the face. A first light source on the axis of the 360-degree camera projects light towards the face and a first polar coordinate image is generated from the light that is returned from the face. A second light source off the axis of the 360-degree camera projects light towards the face and a second polar coordinate image is generated from the light that is returned from the face. The first and the second images are then compared to each other and a contrast area is identified to indicate the location of the eyes. The first and second polar coordinate images may be automatically converted into perspective images for various applications such as teleconferencing.
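As a rough sketch of the comparison step only (the patent gives no algorithmic detail beyond the description above), one might difference the on-axis and off-axis images and keep the strongest contrast regions, in the spirit of bright-pupil/dark-pupil eye detection. The function name, thresholding choice, and the decision to return the two largest regions are assumptions, not part of the patent.

```python
import cv2

def eye_contrast_regions(on_axis_img, off_axis_img, top_k=2):
    """Illustrative only: compare on-axis and off-axis illumination images.

    Both inputs are assumed to be BGR frames of the same face and size.
    Regions where the two images differ strongly (e.g. the bright-pupil
    effect under on-axis illumination) are returned as (x, y, w, h) boxes.
    """
    on_gray = cv2.cvtColor(on_axis_img, cv2.COLOR_BGR2GRAY)
    off_gray = cv2.cvtColor(off_axis_img, cv2.COLOR_BGR2GRAY)

    # Pixel-wise difference between the two illumination conditions.
    diff = cv2.absdiff(on_gray, off_gray)

    # Keep only the strongest differences (Otsu threshold as a placeholder).
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Return the largest difference blobs as eye candidates.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted((cv2.boundingRect(c) for c in contours),
                   key=lambda b: b[2] * b[3], reverse=True)
    return boxes[:top_k]
```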

11 citations


Patent
Gary Bradski1
16 Jul 2002
TL;DR: In this article, an imaging system configured to take panoramic pictures is described; the system includes a camera and a range finder associated with the camera and configured to provide depth information for objects within the camera's field of view.
Abstract: An imaging system configured to take panoramic pictures is disclosed. In one embodiment, the imaging system includes a camera; a range finder associated with the camera and configured to provide depth information for objects within a field of view of the camera; and a processor coupled to receive information from the camera and depth information from the range finder, and configured to unwrap pictures taken by the camera according to the depth information.
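The abstract stays at the block-diagram level. As one possible illustration of the unwrapping step alone (ignoring how the range finder's depth information would modulate it), a circular panoramic frame can be unwrapped into a rectangular strip with OpenCV's warpPolar; the optical center, radius, and output resolution below are assumed inputs, not values from the patent.

```python
import cv2

def unwrap_panorama(frame, center, max_radius, polar_size=(256, 1024)):
    """Illustrative only: unwrap a circular panoramic frame into a strip.

    frame:      image from the panoramic camera (e.g. a catadioptric mirror view).
    center:     (x, y) of the optical center in the frame -- assumed known here;
                the patent's depth-aware unwrapping is not modeled.
    max_radius: outer radius of the useful image circle, in pixels.
    polar_size: (radial samples, angular samples) of the intermediate polar map.
    """
    polar = cv2.warpPolar(frame, polar_size, center, max_radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    # warpPolar puts radius along x and angle along y; rotate so the
    # unwrapped panorama runs horizontally.
    return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)
```

In the patented system, the processor would additionally use the per-object depth when unwrapping the picture; that correction is not modeled in this sketch.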

10 citations


Journal ArticleDOI
TL;DR: This issue begins on page 7 with an article by Scharstein and Szeliski covering an extensive taxonomy and evaluation of dense stereo correspondence algorithms including some of the new techniques appearing in this issue.
Abstract: The aim of this special issue of IJCV is to bring you a snapshot of, and reference to, the state of the art in detecting depth or disparity from stereo, multi-baseline, multi-view and novel image sensors. The primary concern here is with correspondence methods, especially as they relate to speed, accuracy and density of results. Techniques for camera calibration and stereo rectification are not a major theme here, since classical techniques already work well (source code for which is in the OpenCV library referenced below). That, in brief, is what this special IJCV issue is about. Why we want depth is twofold: depth allows cameras and machines to move in and understand the 3D world in which they are embedded, and depth is one more cue that helps solve tough problems in computer vision such as object segmentation and tracking. Indeed, depth may be the cue that delivers reliable vision systems in the near term. This issue begins on page 7 with an article by Scharstein and Szeliski covering an extensive taxonomy and evaluation of dense stereo correspondence algorithms, including some of the new techniques appearing in this issue. One may use it as a jumping-off point into the rest of the issue. Topics covered are:
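Since the editorial points readers to the OpenCV library for calibration, rectification, and stereo source code, a minimal dense-correspondence example using OpenCV's block-matching stereo is sketched below, using the modern Python bindings rather than the 2002-era C API. The image file names and matcher parameters are placeholders; the inputs are assumed to be a rectified grayscale stereo pair.

```python
import cv2

# Rectified left/right images of a stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching dense stereo correspondence, one of the algorithm families
# surveyed in the Scharstein-Szeliski taxonomy article that opens the issue.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point disparities, scaled by 16

# Convert to floating-point disparities and save a viewable map.
disparity = disparity.astype("float32") / 16.0
disparity_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity_vis)
```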

1 citation