scispace - formally typeset
Author

Gary Bradski

Other affiliations: Intel, Stanford University, Google
Bio: Gary Bradski is an academic researcher from Willow Garage. The author has contributed to research in topics including Pose and Object (computer science). The author has an h-index of 41 and has co-authored 82 publications receiving 23,763 citations. Previous affiliations of Gary Bradski include Intel and Stanford University.


Papers
Journal ArticleDOI
TL;DR: This article focuses on the specific use of signal-processing (SP) primitives from image processing and pattern recognition in robot vision, and presents a robot-vision example: the Stanley robot racing car.
Abstract: Recent advances in vision algorithms and increases in computer performance have made new capabilities available in autonomous robotics for real-time applications. In general, computer-vision-based solutions to robotics problems employ a high-level system architecture that makes use of low-level processing blocks. While the high-level system architecture is still an active area of research, many of the underlying low-level processing blocks and their associated methods have begun to stabilize, yielding a set of operators that has been found useful in a wide variety of tasks. In this article, we will briefly review these operators, which we call robot-vision signal-processing (SP) primitives, and their associated classes of methods. Although our taxonomy is quite general and related to those used in image processing and pattern recognition, we focus on the specific use of these primitives and associated classes in robot vision for SP purposes before presenting a robot-vision example: the Stanley robot racing car.

8 citations

Patent
Gary Bradski
03 Jan 2012
TL;DR: In this patent, a non-binary affinity measure between any two data points is determined for supervised classifiers, including tree, kernel-based, nearest-neighbor-based and neural network classifiers.
Abstract: A non-binary affinity measure between any two data points for a supervised classifier may be determined. For example, affinity measures may be determined for tree, kernel-based, nearest neighbor-based and neural network supervised classifiers. By providing non-binary affinity measures using supervised classifiers, more information may be provided for clustering, analyzing and, particularly, for visualizing the results of data mining.

8 citations
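For a kernel-based classifier such as an SVM, the kernel value between two points is already a non-binary affinity of the kind the patent describes. A minimal numpy sketch (the RBF kernel and the toy points are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Pairwise non-binary affinity: the RBF kernel a kernel-based
    classifier (e.g. an SVM) would use internally."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2 * sigma**2))

X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
A = rbf_affinity(X)
# nearby points get affinity near 1, distant points near 0
```

Because the affinity is continuous rather than a binary same-class/different-class indicator, it carries enough information to drive clustering or visualization, as the abstract notes.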

Patent
Gary Bradski
19 Nov 2004
TL;DR: In this patent, the author proposes a graph of the data set in which each of the first and second features is a node, and the label on the edge between the two nodes is based at least in part on the predictive importance of the first feature in terms of the second feature.
Abstract: For a first feature of a data set having a plurality of features, a classifier is trained to predict the first feature in terms of the other features in the data set, yielding a trained classifier. The values of a second feature are then scrambled to obtain a scrambled data set, and the trained classifier is executed on it. The predictive importance of the second feature in predicting the first feature is determined based at least in part on the accuracy of the trained classifier in predicting the first feature when executed with the scrambled data set. Finally, a graph of the data set is created in which each of the first and second features is a node, and a label on the edge between the first node and the second node is based at least in part on the predictive importance of the first feature in terms of the second feature.

5 citations
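The scrambling step described above is what is now widely known as permutation importance: scramble one feature, re-run the trained classifier, and treat the accuracy drop as that feature's importance. A minimal numpy sketch, using a hand-coded stand-in for the trained classifier (the toy data and classifier are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0, never on feature 1
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    # stand-in "trained classifier": predict from the sign of feature 0
    return np.mean((X[:, 0] > 0).astype(int) == y)

base = accuracy(X, y)

def permutation_importance(X, y, feature):
    Xs = X.copy()
    Xs[:, feature] = rng.permutation(Xs[:, feature])  # scramble one feature
    return base - accuracy(Xs, y)  # accuracy drop = predictive importance

imp0 = permutation_importance(X, y, 0)  # large drop: feature 0 matters
imp1 = permutation_importance(X, y, 1)  # zero: feature 1 is irrelevant
```

Repeating this for every (predicted feature, scrambled feature) pair yields the edge labels for the graph described in the abstract.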

Proceedings ArticleDOI
Gary Bradski, Victor Eruhimov, Sergey Molinov, Valery Mosyagin, Vadim Pisarevsky
15 Nov 2001
TL;DR: The paper describes an algorithm for 3D reconstruction, with a conventional video camera, of a toy composed of rigid, brightly colored blocks, and discusses robust structure from motion and recognition connected with the subject.
Abstract: The paper describes an algorithm for 3D reconstruction of a toy composed of rigid, brightly colored blocks with the help of a conventional video camera. The blocks are segmented using histogram thresholds and merged into one connected component corresponding to the whole toy. We also present an algorithm for extracting the color structure and matching feature points across frames, and discuss robust structure from motion and recognition connected with the subject.

5 citations
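The segment-then-merge step can be illustrated in a few lines: threshold pixels into a binary color mask, then merge mask pixels into connected components. This is a toy sketch with a synthetic image, not the paper's actual pipeline (the threshold bands and the 4-connectivity choice are assumptions):

```python
import numpy as np
from collections import deque

def color_mask(img, lo, hi):
    """Binary mask of pixels inside a per-channel threshold band [lo, hi],
    standing in for histogram-derived color thresholds."""
    return np.all((img >= lo) & (img <= hi), axis=-1)

def largest_component(mask):
    """Merge mask pixels into 4-connected components and keep the largest,
    standing in for 'one connected component corresponding to the whole toy'."""
    labels = np.zeros(mask.shape, dtype=int)
    cur = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        cur += 1
        labels[seed] = cur
        q = deque([seed])
        while q:  # breadth-first flood fill from the seed pixel
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = cur
                    q.append((rr, cc))
    sizes = np.bincount(labels.ravel())[1:]
    return labels == 1 + np.argmax(sizes)

# Synthetic 5x5 RGB image: a 2x2 red block plus one stray red pixel
img = np.zeros((5, 5, 3))
img[0:2, 0:2, 0] = 1.0
img[4, 4, 0] = 1.0
mask = color_mask(img, np.array([0.5, -0.1, -0.1]), np.array([1.1, 0.1, 0.1]))
block = largest_component(mask)  # keeps the 2x2 block, drops the stray pixel
```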

Patent
Gary Bradski
06 Dec 2004
TL;DR: In this patent, a method is described of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at different split levels.
Abstract: In one embodiment, the present invention includes a method of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at a plurality of split levels, and training binary classifiers on the plurality of data regions of at least one of the split levels. In such manner, binary classifiers may be used to classify an analog function. Other embodiments are described and claimed.

3 citations
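The successive high/low splitting can be sketched as follows: each level splits every region of the previous level at a threshold (a median split is assumed here) and records a binary mask. A binary classifier would then be trained to predict each level's mask from the inputs; stacking the predicted bits localizes the analog value to one of 2^depth ranges. The classifier training itself is omitted; this sketch only shows the mask construction:

```python
import numpy as np

def split_levels(y, depth):
    """Successively split an analog target into high/low ranges.
    Returns one binary mask per level (1 = high range within the
    sample's region at the previous level)."""
    masks = []
    regions = [np.arange(len(y))]          # index sets at the current level
    for _ in range(depth):
        mask = np.zeros(len(y), dtype=int)
        next_regions = []
        for idx in regions:
            t = np.median(y[idx])          # split this region at its median
            hi = idx[y[idx] > t]
            lo = idx[y[idx] <= t]
            mask[hi] = 1
            next_regions += [lo, hi]
        masks.append(mask)
        regions = next_regions
    return masks

y = np.arange(8.0)
masks = split_levels(y, 2)
# masks[0] separates the top half of the range, masks[1] refines each half
```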


Cited by
Journal ArticleDOI
Jeffrey Dean, Sanjay Ghemawat
06 Dec 2004
TL;DR: This paper presents MapReduce, a programming model and an associated implementation for processing and generating large data sets; the implementation runs on a large cluster of commodity machines and is highly scalable.
Abstract: MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.

20,309 citations
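The programming model can be shown with the paper's canonical word-count example: a map function emits intermediate (word, 1) pairs, the runtime groups values by key, and a reduce function merges the values for each key. This single-process sketch illustrates the model only, not the distributed runtime that handles partitioning, scheduling, and failures:

```python
from collections import defaultdict
from itertools import chain

def map_fn(doc):
    # map: emit an intermediate (word, 1) pair per occurrence
    return [(word, 1) for word in doc.split()]

def reduce_fn(word, counts):
    # reduce: merge all intermediate values for one key
    return word, sum(counts)

def map_reduce(docs, map_fn, reduce_fn):
    groups = defaultdict(list)            # "shuffle": group values by key
    for k, v in chain.from_iterable(map(map_fn, docs)):
        groups[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

counts = map_reduce(["to be or not to be"], map_fn, reduce_fn)
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

In the real system, the map calls run in parallel on input splits and the reduce calls run in parallel per key partition, which is what makes the model scale to terabytes.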

Journal ArticleDOI
Jeffrey Dean, Sanjay Ghemawat
TL;DR: This presentation explains how the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks.
Abstract: MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.

17,663 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
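Among the applications the review covers, the lasso gives a compact illustration of the alternating direction method of multipliers: split min ½‖Ax−b‖² + λ‖x‖₁ as f(x) + g(z) subject to x = z, then alternate a quadratic x-update, a soft-thresholding z-update (the proximal operator of the ℓ₁ norm), and a dual update. A minimal numpy sketch following that standard scheme (the toy problem, ρ, and iteration count are assumptions):

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||x||_1,
    with splitting x = z."""
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                    # scaled dual variable
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = soft_threshold(x + u, lam / rho)               # z-update (prox of l1)
        u = u + x - z                                      # dual update
    return z

# With A = I the lasso solution is exactly soft_threshold(b, lam)
z = lasso_admm(np.eye(3), np.array([3.0, 0.1, -2.0]), lam=1.0)
# z is approximately [2.0, 0.0, -1.0]
```

Caching the Cholesky factor of AᵀA + ρI is the standard trick that makes each x-update a cheap pair of triangular solves.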

Journal ArticleDOI
TL;DR: It is proved the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density.
Abstract: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and to the robust M-estimators of location is also established. Algorithms for two low-level vision tasks - discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.

11,727 citations
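The recursive procedure is simple to state: repeatedly move a point to the kernel-weighted mean of the data around it until it stops moving, at which point it sits on a mode of the density estimate. A minimal numpy sketch with a Gaussian kernel (the two-cluster toy data and the bandwidth are assumptions):

```python
import numpy as np

def mean_shift_point(x, X, bandwidth, iters=50):
    """Iterate the mean shift: move x to the Gaussian-kernel-weighted
    mean of the data until it settles on a density mode."""
    for _ in range(iters):
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:   # converged to a mode
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
# Two well-separated clusters -> two modes of the underlying density
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(5.0, 0.3, (100, 2))])
mode = mean_shift_point(np.array([0.5, 0.5]), X, bandwidth=1.0)
# the iterate drifts to the mode near the origin
```

Running the same iteration from every data point and grouping points that converge to the same mode is the clustering use described in the abstract; the bandwidth is the single resolution parameter the paper highlights.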

Proceedings ArticleDOI
16 Jun 2012
TL;DR: The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.
Abstract: Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti

11,283 citations