
Showing papers by "Gary Bradski published in 2000"


01 Jan 2000

3,275 citations


Patent
30 Jun 2000
TL;DR: In this article, a system and method for recognizing gestures is presented. The method comprises obtaining image data, determining a hand pose estimation, and producing a frontal view of the hand.
Abstract: A system and method for recognizing gestures. The method comprises obtaining image data and determining a hand pose estimation. A frontal view of a hand is then produced. The hand is then isolated from the background. The resulting image is then classified as a type of gesture. In one embodiment, determining a hand pose estimation comprises performing background subtraction and computing a hand pose estimation based on an arm orientation determination. In another embodiment, a frontal view of a hand is produced by performing perspective unwarping and scaling. The system that implements the method may be a personal computer with a stereo camera coupled thereto.
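The background-subtraction step the abstract mentions can be sketched as simple frame differencing against a reference image. A minimal NumPy illustration, where the threshold value and the toy frames are assumptions for demonstration, not parameters from the patent:

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Mark pixels that differ from the reference background as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# toy example: a static dark background with a bright "hand" entering one corner
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[0:2, 0:2] = 200
mask = subtract_background(frame, background)  # 1 where the hand is, 0 elsewhere
```

The foreground mask would then feed the arm-orientation and hand-pose steps described above.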

421 citations


Journal ArticleDOI
Gary Bradski1, James W. Davis
04 Dec 2000
TL;DR: This paper presents a fast and simple method using a timed motion history image (tMHI) for representing motion from the gradients in successively layered silhouettes, and demonstrates the approach with recognition of waving and overhead clapping motions to control a music synthesis program.
Abstract: This paper presents a simple method for representing motion in successively layered silhouettes that directly encode system time, termed the timed Motion History Image (tMHI). This representation can be used both to (a) determine the current pose of the object and (b) segment and measure the motions induced by the object in a video scene. These segmented regions are not "motion blobs", but motion regions naturally connected to the moving parts of the object of interest. This method may be used as a very general gesture recognition "toolbox". We use it to recognize waving and overhead clapping motions to control a music synthesis program.
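The tMHI update itself is compact: current silhouette pixels are stamped with the system time, and pixels whose stamp has aged beyond a fixed duration are cleared. A minimal NumPy sketch, assuming a floating-point history image stamped in seconds (array sizes and the duration are illustrative):

```python
import numpy as np

def update_tmhi(mhi, silhouette, timestamp, duration):
    """Stamp current silhouette pixels with the system time and
    clear pixels whose stamp has aged past `duration`."""
    mhi = mhi.copy()
    mhi[silhouette > 0] = timestamp
    mhi[(silhouette == 0) & (mhi < timestamp - duration)] = 0
    return mhi

# a 1x5 strip where an object moves right one pixel per frame
mhi = np.zeros((1, 5), dtype=np.float32)
mhi = update_tmhi(mhi, np.array([[1, 0, 0, 0, 0]]), timestamp=1.0, duration=2.0)
mhi = update_tmhi(mhi, np.array([[0, 1, 0, 0, 0]]), timestamp=2.0, duration=2.0)
mhi = update_tmhi(mhi, np.array([[0, 0, 1, 0, 0]]), timestamp=3.0, duration=2.0)
# newer pixels carry larger stamps, so the spatial gradient of the
# stamped region points along the direction of recent motion
```

The gradient of these layered timestamps is what the paper uses to recover pose and to segment motion regions.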

340 citations


Proceedings ArticleDOI
Gary Bradski1, V. Pisarevsky1
15 Jun 2000
TL;DR: Intel's Microcomputer Research Lab has been developing a highly optimized Computer Vision Library (CVLib) that automatically detects the processor type and loads the appropriate MMX™ technology assembly-tuned module for that processor.
Abstract: Intel's Microcomputer Research Lab has been developing a highly optimized Computer Vision Library (CVLib) that automatically detects the processor type and loads the appropriate MMX™ technology assembly-tuned module for that processor. MMX-optimized functions are 2 to 8 times faster than optimized C functions. We will be demonstrating various algorithms supported by CVLib and handing out CDs containing the library.
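The detect-then-dispatch pattern the abstract describes can be illustrated generically: probe the processor once at load time, then bind every operation to the fastest available implementation. The feature probe and function names below are hypothetical stand-ins for illustration, not CVLib's API:

```python
def add_generic(a, b):
    """Portable fallback implementation."""
    return a + b

def add_mmx(a, b):
    """Stand-in for a processor-tuned assembly routine."""
    return a + b

def load_module(cpu_has_mmx):
    """Bind each operation to the fastest implementation once, at load time,
    so callers never pay a per-call feature check."""
    return {"add": add_mmx if cpu_has_mmx else add_generic}

ops = load_module(cpu_has_mmx=True)
result = ops["add"](2, 3)  # dispatches to the tuned path
```

Binding at load time rather than per call is what lets the tuned and generic paths share one public interface.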

113 citations


Patent
12 Jun 2000
TL;DR: In this paper, the locations of points of interest of a calibration object in a calibration image for a digital camera are identified automatically using a known reference pattern: contours are extracted from the image by identifying boundaries between light and dark pixels and are compared against the shapes of the reference pattern.
Abstract: The present invention allows the locations of points of interest in a calibration object in a calibration image for a digital camera to be identified automatically. The image is an array of pixels corresponding to the calibration object, which has a known reference pattern. In a preferred embodiment, the invention includes receiving an array of pixels produced by a camera, classifying each pixel of the image as light or dark, extracting contours from the image by identifying boundaries between light and dark pixels, comparing the extracted contours to the shapes of the known reference pattern, and identifying the shapes of the known reference pattern in the image using the extracted contours. Preferably, the image is a color image, and the color information in the pixels is converted into gray-scale values to render the image as a gray-scale image before the pixels are classified. Classifying each pixel includes applying a brightness threshold: pixels brighter than the threshold are classified as light and those below it as dark.
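The gray-scale conversion and light/dark classification steps can be sketched in a few lines of NumPy. The luma weights and the threshold of 128 are common defaults assumed here, not values taken from the patent:

```python
import numpy as np

def to_gray(rgb):
    """Weighted average of R, G, B channels (ITU-R BT.601 luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def classify_pixels(gray, threshold=128):
    """True = light, False = dark, per the brightness-threshold step."""
    return gray > threshold

# a 1x2 color image: one white pixel, one black pixel
img = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.float64)
labels = classify_pixels(to_gray(img))  # [[True, False]]
```

The resulting light/dark map is what the contour-extraction step would trace boundaries through.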

68 citations


Proceedings ArticleDOI
13 Jun 2000
TL;DR: A stereo-based approach for gesture recognition that works well under extreme lighting conditions and tolerates a large range of hand poses and proposes a hybrid gesture representation that models the user's arm as a 3D line and uses images to represent the hand gestures.
Abstract: This paper describes a stereo-based approach for gesture recognition that works well under extreme lighting conditions and tolerates a large range of hand poses. The approach proposes a hybrid gesture representation that models the user's arm as a 3D line and uses images to represent the hand gestures. The algorithm finds the arm orientation and the hand location from the disparity data and uses this information to initialize a color-based segmentation algorithm that cleanly separates the hand from the background. Finally, our approach uses the arm orientation to compute a frontal view of the hand through perspective unwarping, producing easily recognizable hand gesture templates. The classification algorithm uses statistical moments of the binarized gesture templates to find the match. We achieve 96% recognition rates under varying lighting and 3D poses for a set of 6 gestures.
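The moment-based matching step can be sketched with translation-invariant central moments of the binarized templates. This NumPy fragment illustrates the idea under simple assumptions (second-order moments, nearest-neighbor matching); it is not the paper's exact feature set:

```python
import numpy as np

def central_moments(binary, order=2):
    """Translation-invariant central moments mu_pq of a binary template."""
    ys, xs = np.nonzero(binary)
    xbar, ybar = xs.mean(), ys.mean()
    return np.array([((xs - xbar) ** p * (ys - ybar) ** q).sum()
                     for p in range(order + 1)
                     for q in range(order + 1)
                     if 2 <= p + q <= order])

def best_match(query, templates):
    """Index of the template whose moment vector is nearest the query's."""
    q = central_moments(query)
    return int(np.argmin([np.linalg.norm(q - central_moments(t))
                          for t in templates]))

# two gesture templates: a 2x3 bar and a 3x3 block
bar = np.zeros((8, 8), dtype=np.uint8); bar[1:3, 1:4] = 1
block = np.zeros((8, 8), dtype=np.uint8); block[2:5, 2:5] = 1
# the same bar shape at a different image position
shifted_bar = np.zeros((8, 8), dtype=np.uint8); shifted_bar[5:7, 4:7] = 1
idx = best_match(shifted_bar, [bar, block])  # 0: matches the bar despite the shift
```

Because central moments are computed about the shape's centroid, a translated copy of a template produces an identical moment vector, which is what makes this a robust matcher for unwarped gesture templates.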

62 citations