
Showing papers by "Gary Bradski" published in 2006


Journal ArticleDOI
TL;DR: The robot Stanley, which won the 2005 DARPA Grand Challenge, was developed for high-speed desert driving without human intervention and relied predominantly on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning.
Abstract: This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without human intervention. The robot's software system relied predominantly on state-of-the-art AI technologies, such as machine learning and probabilistic reasoning. This article describes the major components of this architecture and discusses the results of the Grand Challenge race.

2,011 citations


Proceedings Article
04 Dec 2006
TL;DR: This work shows that algorithms fitting the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers; experiments show essentially linear speedup with an increasing number of processors.
Abstract: We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
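
To make the "summation form" idea concrete, here is a minimal sketch in Python. It uses ordinary least-squares regression as the example algorithm and Python's multiprocessing pool as a stand-in for multicore map-reduce; the partitioning scheme and solver are illustrative assumptions, not the paper's implementation.

```python
# Summation-form sketch: the sufficient statistics A = sum_i x_i x_i^T and
# b = sum_i x_i y_i are accumulated independently over data partitions
# (the "map" step) and then combined and solved once (the "reduce" step).
import numpy as np
from multiprocessing import Pool

def map_partition(chunk):
    """Compute partial sums (A_p, b_p) over one partition of the data."""
    X, y = chunk
    return X.T @ X, X.T @ y

def reduce_and_solve(partials):
    """Sum the partial statistics and solve the normal equations."""
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10000)

    # "Map": each core works on its own slice of the data.
    chunks = [(X[i::4], y[i::4]) for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(map_partition, chunks)

    # "Reduce": combine the partial sums and solve once.
    w = reduce_and_solve(partials)
    print(w)
```

Because the per-partition work is a pure sum, the same structure applies to the other algorithms listed in the abstract once their updates are written as sums over data points.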

1,310 citations


Proceedings ArticleDOI
16 Aug 2006
TL;DR: This method identifies drivable surfaces in difficult unpaved and offroad terrain, as encountered in the DARPA Grand Challenge robot race, and achieves robustness by combining sensor information from a laser range finder, a pose estimation system, and a color camera.
Abstract: We present a method for identifying drivable surfaces in difficult unpaved and offroad terrain conditions, as encountered in the DARPA Grand Challenge robot race. Instead of relying on a static, pre-computed road appearance model, this method adjusts its model to changing environments. It achieves robustness by combining sensor information from a laser range finder, a pose estimation system, and a color camera. Using the first two modalities, the system first identifies a nearby patch of drivable surface. Computer vision then takes this patch and uses it to construct appearance models that extend the detection of drivable surface outward into the far range. This information is put into a drivability map for the vehicle path planner. In addition to evaluating the method's performance with a scoring framework run on real-world data, we entered the system in the 2005 DARPA Grand Challenge, which it won. Post-race log-file analysis showed that without the computer vision algorithm, the vehicle would not have driven fast enough to win.
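
The sketch below illustrates the self-supervised idea from this abstract: pixels inside a laser-verified drivable patch train a color model that then scores the rest of the image. The single-Gaussian color model, the Mahalanobis threshold, and the synthetic image are illustrative assumptions; the race system maintained adaptive mixture models learned online.

```python
# Self-supervised drivability sketch: fit a color model on a trusted
# near-field patch, then classify far-range pixels against that model.
import numpy as np

def fit_color_model(image, patch_mask):
    """Fit mean/covariance of RGB values inside the trusted patch."""
    pixels = image[patch_mask].astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)
    return mean, np.linalg.inv(cov)

def drivability_map(image, mean, inv_cov, threshold=9.0):
    """Label pixels whose squared Mahalanobis distance is below threshold."""
    diff = image.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return (d2 < threshold).reshape(image.shape[:2])

# Synthetic example: a brownish "road" region in the lower half of the
# image and a trusted near-field patch at the bottom (assumed laser-verified).
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
image[60:, :] = [120, 100, 80] + rng.integers(-10, 10, size=(60, 160, 3))
patch = np.zeros((120, 160), dtype=bool)
patch[100:, 60:100] = True

mean, inv_cov = fit_color_model(image, patch)
mask = drivability_map(image, mean, inv_cov)
print(mask.mean())  # fraction of the image scored as drivable
```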

385 citations


Patent
Gary Bradski, Jason Chuang
22 Aug 2006
TL;DR: A plurality of processing units each has a memory including one of a plurality of subsets of a set of data points, and at least two of the processing units have different subsets.
Abstract: Provided are a method, system, and computer readable medium for training and using classification components on multiple processing units. A plurality of processing units each has a memory including one of a plurality of subsets of a set of data points. At least two of the processing units have different subsets of data points. A plurality of classification components are executed by the processing units. Classification components executing at the processing units are trained, wherein each classification component is trained with the subset of data points in the memory of the processing unit that is executing the classification component. One of the classification components is transferred to an additional processing unit of the processing units to train the transferred classification component using the subset of data points in the memory at the additional processing unit in response to training the classification component with the subset of data points.
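
A toy sketch of the scheme described in the abstract: several "processing units" each hold their own shard of the data, a classifier is trained on one unit's shard and then handed to the next unit and trained further on that unit's shard. The perceptron update rule and the round-robin transfer order are illustrative assumptions, not the patent's claims.

```python
# Transfer-and-continue training across data shards held by separate units.
import numpy as np

class ProcessingUnit:
    def __init__(self, X, y):
        self.X, self.y = X, y  # the subset of data points held in this unit's memory

    def train(self, w, epochs=5, lr=0.1):
        """Continue training the transferred classifier on the local subset."""
        for _ in range(epochs):
            for x, t in zip(self.X, self.y):
                if t * (x @ w) <= 0:   # misclassified -> perceptron update
                    w = w + lr * t * x
        return w

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

# Each unit receives a different subset of the data points.
units = [ProcessingUnit(X[i::4], y[i::4]) for i in range(4)]

# Train on one unit, then transfer the classifier to the next unit.
w = np.zeros(2)
for unit in units:
    w = unit.train(w)

accuracy = np.mean(np.sign(X @ w) == y)
print(w, accuracy)
```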

20 citations


Patent
Gary Bradski
31 Mar 2006
TL;DR: A method for boosted linear modeling of non-linear time series is presented: one or more decision trees are generated for the data elements, the decision tree models divide the time series into a plurality of data groups, and each data group is modeled as a linear function.
Abstract: A method and apparatus for boosted linear modeling of non-linear time series. An embodiment of a method includes receiving a series of data elements, where the series of data elements is a time series and where the time series has a non-linearity. One or more decision trees are generated for the data elements, with the decision tree models dividing the time series into a plurality of data groups. Further, each of the data groups is modeled as a linear function.
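
A minimal sketch of the piecewise-linear idea in this abstract: a shallow regression tree partitions the time axis into groups, and a separate linear fit is made within each group. Using scikit-learn's DecisionTreeRegressor and a single tree (no boosting stage) is an illustrative simplification of the patented method, and scikit-learn is an assumed dependency.

```python
# Tree-partitioned piecewise-linear modeling of a non-linear time series.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# A non-linear time series: noisy sine wave sampled over time.
t = np.linspace(0, 10, 500).reshape(-1, 1)
y = np.sin(t).ravel() + 0.1 * np.random.default_rng(3).normal(size=500)

# The decision tree divides the series into a small number of data groups.
tree = DecisionTreeRegressor(max_depth=2).fit(t, y)
groups = tree.apply(t)

# Each group is then modeled as a linear function of time.
prediction = np.empty_like(y)
for g in np.unique(groups):
    idx = groups == g
    slope, intercept = np.polyfit(t[idx, 0], y[idx], 1)
    prediction[idx] = slope * t[idx, 0] + intercept

print("RMSE:", np.sqrt(np.mean((prediction - y) ** 2)))
```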

16 citations