
Showing papers by "Gary Bradski" published in 2004


Proceedings ArticleDOI
Tao Wang, Qian Diao, Yimin Zhang, Gang Song, Chunrong Lai, Gary Bradski
23 Aug 2004
TL;DR: This work presents a novel dynamic Bayesian network (DBN) approach to multi-cue visual tracking that works well even in background clutter, without the need to resort to simplified linear and Gaussian assumptions.
Abstract: Visual tracking has been an active research field in computer vision. However, robust tracking remains far from satisfactory under real-world conditions of background clutter, pose variation and occlusion. To increase reliability, this work presents a novel dynamic Bayesian network (DBN) approach to multi-cue visual tracking. The method first extracts multi-cue observations such as skin color, ellipse shape and face detection, and then integrates them with hidden motion states in a compact DBN model. By using particle-based inference with multiple cues, our method works well even in background clutter without resorting to simplified linear and Gaussian assumptions. The experimental results are compared against the widely used Condensation and Kalman filter (KF) approaches. Our improved tracking results, along with the ease of fusing new cues in the DBN framework, suggest that this technique is a fruitful basis for building top-performing visual tracking systems.
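Below is a minimal sketch, in Python/NumPy, of the particle-based multi-cue fusion step described above. The cue likelihoods are passed in as plain callables standing in for the skin-color, ellipse-shape and face-detection cues, and the simple random-walk motion model and multiplicative (conditionally independent) fusion are assumptions of this sketch, not the paper's DBN implementation.

```python
import numpy as np

def particle_filter_step(frame, particles, weights, cue_likelihoods, motion_noise=5.0):
    """One predict/update/resample cycle fusing several observation cues.

    particles       : (N, 2) array of candidate (x, y) target positions
    weights         : (N,) normalised particle weights
    cue_likelihoods : callables f(frame, state) -> likelihood, standing in for
                      the skin-colour, ellipse-shape and face-detection cues
    """
    n = len(particles)
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # Update: combine the cues multiplicatively (conditional-independence assumption).
    for i, state in enumerate(particles):
        w = 1.0
        for cue in cue_likelihoods:
            w *= cue(frame, state)
        weights[i] = w
    weights = weights / (weights.sum() + 1e-12)
    # Resample in proportion to weight to avoid particle degeneracy.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Toy usage with placeholder Gaussian cues centred on a fixed "target".
target = np.array([60.0, 40.0])
cues = [lambda frame, s: np.exp(-np.sum((s - target) ** 2) / (2 * 20.0 ** 2))] * 3
particles = np.random.uniform(0.0, 100.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(10):
    particles, weights = particle_filter_step(None, particles, weights, cues)
print("estimated position:", particles.mean(axis=0))
```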

19 citations


Patent
Gary Bradski
06 Dec 2004
TL;DR: In this article, a method for converting unsupervised data into supervised data using multiple processes and training multiple supervised classifiers with the supervised data of the processes is described, where affinity measures may be determined and data clustered using the resulting trained classifiers.
Abstract: In one embodiment, a method includes converting unsupervised data into supervised data using multiple processes and training multiple supervised classifiers with the supervised data of the processes. In such manner, supervised classifiers may be used to classify unsupervised data. Affinity measures may be determined and data clustered using the resulting trained classifiers. Other embodiments are described and claimed.
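The abstract does not spell out how the conversion or the affinity measure is computed. One common realization of the general idea, sketched below with scikit-learn (not to be read as the patented method), labels the real data as one class and a column-shuffled synthetic contrast set as the other, then derives an affinity from how often two samples land in the same random-forest leaf.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def unsupervised_to_supervised(X, random_state=0):
    """Turn an unlabeled data set into a two-class supervised problem: real rows
    get label 1, while a synthetic contrast set built by shuffling each column
    independently (destroying inter-feature structure) gets label 0."""
    rng = np.random.default_rng(random_state)
    X_synth = np.column_stack([rng.permutation(col) for col in X.T])
    X_sup = np.vstack([X, X_synth])
    y_sup = np.concatenate([np.ones(len(X)), np.zeros(len(X_synth))])
    return X_sup, y_sup

def forest_affinity(X, n_estimators=100, random_state=0):
    """Affinity between real samples: the fraction of trees in the trained
    classifier that place the two samples in the same leaf."""
    X_sup, y_sup = unsupervised_to_supervised(X, random_state)
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    clf.fit(X_sup, y_sup)
    leaves = clf.apply(X)                                  # (n_samples, n_trees)
    return (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

# Toy usage: the affinity matrix can be handed to any clustering routine.
X = np.random.rand(200, 5)
print(forest_affinity(X).shape)                            # (200, 200)
```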

9 citations


Patent
Gary Bradski
19 Nov 2004
TL;DR: In this article, the authors proposed a graph of the data set in which each of the first and second features is a node of the graph and a label on an edge between the first node and the second node is based at least in part on the predictive importance of a first feature in terms of the second feature.
Abstract: For a first feature of a data set having a plurality of features: training a classifier to predict the first feature in terms of the other features in the data set to obtain a trained classifier; scrambling the values of a second feature in the data set to obtain a scrambled data set; executing the trained classifier on the scrambled data set; determining the predictive importance of the second feature in predicting the first feature, based at least in part on the accuracy of the trained classifier in predicting the first feature when executed with the scrambled data set; and creating a graph of the data set in which each of the first and the second features is a node of the graph, and a label on an edge between the first node and the second node is based at least in part on the predictive importance of the first feature in terms of the second feature.
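A minimal sketch of this scramble-and-rescore procedure (essentially permutation importance) using scikit-learn and networkx. The random-forest classifier, the use of training accuracy as the baseline, the undirected graph, and the discrete-valued toy data are simplifying assumptions, not details taken from the patent.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def predictive_importance(X, target_col, probe_col, random_state=0):
    """Drop in accuracy at predicting feature `target_col` when the values of
    feature `probe_col` are scrambled (a permutation-importance style measure)."""
    y = X[:, target_col]
    features = np.delete(X, target_col, axis=1)          # predict target from the rest
    clf = RandomForestClassifier(random_state=random_state).fit(features, y)
    baseline = clf.score(features, y)                    # simplification: training accuracy
    scrambled = features.copy()
    j = probe_col if probe_col < target_col else probe_col - 1   # column index after deletion
    scrambled[:, j] = np.random.default_rng(random_state).permutation(scrambled[:, j])
    return baseline - clf.score(scrambled, y)

def importance_graph(X):
    """One node per feature; edge labels carry the predictive-importance values.
    (The measure is asymmetric in general; only one direction is shown here.)"""
    g = nx.Graph()
    for a in range(X.shape[1]):
        for b in range(a + 1, X.shape[1]):
            g.add_edge(a, b, weight=predictive_importance(X, target_col=a, probe_col=b))
    return g

# Toy usage on a discrete-valued data set.
X = np.random.randint(0, 3, size=(300, 4))
print(importance_graph(X).edges(data=True))
```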

5 citations


Patent
Gary Bradski
06 Dec 2004
TL;DR: In this article, a method of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at different split levels is described.
Abstract: In one embodiment, the present invention includes a method of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at a plurality of split levels, and training binary classifiers on the plurality of data regions of at least one of the split levels. In such manner, binary classifiers may be used to classify an analog function. Other embodiments are described and claimed.
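One way to read this is as regression through a hierarchy of binary classifiers: threshold the analog target to get a high/low mask, train a classifier on that mask, and recurse into each range. The sketch below assumes median splits and scikit-learn decision trees; the patent does not fix these choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_split_tree(X, y, depth=0, max_depth=3):
    """Recursively split the analog target y at its median: the comparison gives a
    binary high/low mask at this split level, and a classifier is trained on it."""
    threshold = float(np.median(y))
    mask = y >= threshold
    if depth == max_depth or mask.all() or not mask.any():
        return {"value": float(np.mean(y))}              # leaf: constant estimate
    clf = DecisionTreeClassifier(max_depth=3).fit(X, mask)
    return {
        "clf": clf,
        "low": train_split_tree(X[~mask], y[~mask], depth + 1, max_depth),
        "high": train_split_tree(X[mask], y[mask], depth + 1, max_depth),
    }

def predict_split_tree(node, x):
    """Descend the hierarchy of binary classifiers to approximate the analog value."""
    while "clf" in node:
        node = node["high"] if node["clf"].predict(x.reshape(1, -1))[0] else node["low"]
    return node["value"]

# Toy usage: approximate a smooth analog function with stacked binary classifiers.
X = np.random.uniform(-3.0, 3.0, size=(1000, 1))
y = np.sin(X[:, 0])
model = train_split_tree(X, y)
print(predict_split_tree(model, np.array([1.0])), np.sin(1.0))
```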

3 citations


Patent
Gary Bradski
06 Dec 2004
TL;DR: In this article, a method of forming windows corresponding to a data point of a data set, successively expanding the windows and determining a local hill for the windows, recentering the windows on the local hill, and merging any of the windows within a selected distance of each other is described.
Abstract: In one embodiment, the present invention includes a method of forming windows corresponding to a data point of a data set, successively expanding the windows, determining a local hill for the windows, re-centering the windows on the local hill, and merging any of the windows within a selected distance of each other. The windows formed may be substantially the same size as a single data point, in one embodiment. The merged windows may be recorded as possible merge points of a hierarchical cluster formed from the data set. Other embodiments are described and claimed.
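A rough sketch of the window-growing, hill-climbing and merging idea (in the spirit of mean shift); the growth factor, merge distance and stopping radius are arbitrary illustrative choices, not values from the patent.

```python
import numpy as np

def expanding_window_cluster(data, max_radius=2.0, growth=1.3, merge_dist=0.5):
    """Grow a window around every data point, re-centre it on the local mean
    (the 'local hill'), and merge windows that come within merge_dist."""
    centers = data.astype(float)                  # one window per data point
    radius = 1e-3                                 # start at roughly single-point size
    merges, merged = [], set()                    # record possible merge points
    while radius < max_radius:
        radius *= growth                          # successively expand the windows
        for i, c in enumerate(centers):
            inside = data[np.linalg.norm(data - c, axis=1) <= radius]
            if len(inside):
                centers[i] = inside.mean(axis=0)  # re-centre on the local hill
        # Merge windows whose centres fall within merge_dist of each other.
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if (i, j) not in merged and np.linalg.norm(centers[i] - centers[j]) <= merge_dist:
                    merged.add((i, j))
                    merges.append((i, j, radius))
                    centers[j] = centers[i].copy()
    return centers, merges

# Toy usage: two well-separated blobs should collapse onto roughly two centres.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(3.0, 0.2, (50, 2))])
centers, merges = expanding_window_cluster(data)
print(len(merges), np.unique(np.round(centers, 1), axis=0))
```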

3 citations


Patent
Gary Bradski
19 Nov 2004
TL;DR: In this article, the authors proposed a graph of the data set in which each of the first and second features is a node of the graph and a label on an edge between the first node and the second node is based at least in part on the predictive importance of a first feature in terms of the second feature.
Abstract: For a first feature of a data set having a plurality of features: training a classifier to predict the first feature in terms of the other features in the data set to obtain a trained classifier; scrambling the values of a second feature in the data set to obtain a scrambled data set; executing the trained classifier on the scrambled data set; determining the predictive importance of the second feature in predicting the first feature, based at least in part on the accuracy of the trained classifier in predicting the first feature when executed with the scrambled data set; and creating a graph of the data set in which each of the first and the second features is a node of the graph, and a label on an edge between the first node and the second node is based at least in part on the predictive importance of the first feature in terms of the second feature.

3 citations


Patent
Gary Bradski
06 Dec 2004
TL;DR: In this article, a method of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at different split levels is described.
Abstract: In one embodiment, the present invention includes a method of successively splitting an analog function into high and low ranges and calculating a binary mask for these ranges to obtain a plurality of data regions at a plurality of split levels, and training binary classifiers on the plurality of data regions of at least one of the split levels. In such manner, binary classifiers may be used to classify an analog function. Other embodiments are described and claimed.

1 citation