Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
08 Dec 2003
TL;DR: An ultrasonic tagging system for robustly observing human activity in a living area is presented; using ultrasonic transmitter tags with unique identifiers, the system is shown to track the three-dimensional motion of tagged objects in real time with high accuracy, high resolution, and robustness to occlusion.
Abstract: This paper describes an ultrasonic tagging system developed for robustly observing human activity in a living area. Using ultrasonic transmitter tags with unique identifiers, the system is shown through experimental application to be able to track the three-dimensional motion of tagged objects in real time with high accuracy, resolution and robustness to occlusion. The use of an ultrasonic system is desirable because of its low cost and use of commercial components, and the proposed system achieves high accuracy and robustness through the use of many redundant sensors. The system employs multilateration to locate tagged objects using one of two estimation algorithms, a least-squares optimization method or a random sample consensus method.

126 citations
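
The multilateration step described in this abstract reduces to a small nonlinear least-squares problem: find the tag position whose distances to the known receiver positions best match the measured ultrasonic ranges. A minimal sketch with scipy; the receiver layout, noise level, and initialization are illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def locate_tag(receivers, ranges):
    """Estimate a 3D tag position by nonlinear least squares:
    minimize the mismatch between predicted and measured ranges."""
    def residuals(p):
        return np.linalg.norm(receivers - p, axis=1) - ranges
    return least_squares(residuals, receivers.mean(axis=0)).x  # start at centroid

# Toy setup: five ceiling-mounted receivers, tag at (1.0, 2.0, 0.5).
receivers = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [4, 4, 3], [2, 2, 3]], float)
true_pos = np.array([1.0, 2.0, 0.5])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(receivers - true_pos, axis=1) + rng.normal(0, 0.01, 5)
print(locate_tag(receivers, ranges))  # ~[1.0, 2.0, 0.5]
```

The paper's alternative random-sample-consensus estimator would wrap a solver like this in random sampling over receiver subsets, so that a few occluded or multipath-corrupted ranges do not spoil the fit.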

Proceedings ArticleDOI
13 Jun 2000
TL;DR: An algorithm is presented that starts with an initial rough triangulation and refines it until it obtains the surface that best accounts for the images of the object, thereby overcoming the surface ambiguity problem.
Abstract: Given a set of 3D points that we know lie on the surface of an object, we can define many possible surfaces that pass through all of these points. Even when we consider only surface triangulations, there are still an exponential number of valid triangulations that all fit the data. Each triangulation will produce a different faceted surface connecting the points. Our goal is to overcome this ambiguity and find the particular surface that is closest to the true object surface. We do not know the true surface but instead we assume that we have a set of images of the object. We propose selecting a triangulation based on its consistency with this set of images of the object. We present an algorithm that starts with an initial rough triangulation and refines the triangulation until it obtains a surface that best accounts for the images of the object. Our method is thus able to overcome the surface ambiguity problem and at the same time capture sharp corners and handle concave regions and occlusions. We show results for a few real objects.

126 citations
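
The refinement loop in this abstract is, at heart, greedy local search over triangulations: start from a rough mesh and accept connectivity changes (edge flips) that lower a cost measuring disagreement with the images. The sketch below substitutes a trivial stand-in cost (total 3D edge length) for the paper's image-consistency score, which requires calibrated images; the structure is illustrative, not the authors' implementation:

```python
import itertools
import numpy as np
from scipy.spatial import Delaunay

def cost(tris, pts):
    """Stand-in score: total 3D edge length. The paper instead measures how
    consistently each candidate surface re-projects into the input images."""
    edges = {tuple(sorted(e)) for t in tris for e in itertools.combinations(t, 2)}
    return sum(np.linalg.norm(pts[a] - pts[b]) for a, b in edges)

def flips(tris):
    """Yield triangulations reachable by flipping one interior edge.
    (Validity checks, e.g. against folds, are omitted in this sketch.)"""
    owner = {}
    for i, t in enumerate(tris):
        for e in itertools.combinations(sorted(t), 2):
            owner.setdefault(e, []).append(i)
    for e, o in owner.items():
        if len(o) != 2:
            continue  # boundary or non-manifold edge: skip
        i, j = o
        c = [v for v in tris[i] if v not in e][0]  # vertices opposite the
        d = [v for v in tris[j] if v not in e][0]  # shared edge
        new = [list(t) for t in tris]
        new[i], new[j] = [e[0], c, d], [e[1], c, d]  # replace edge e with (c, d)
        yield new

def refine(pts, max_iters=50):
    """Greedy refinement: rough Delaunay start, then accept the best
    cost-lowering flip until no flip improves the score."""
    tris = [list(t) for t in Delaunay(pts[:, :2]).simplices]
    best = cost(tris, pts)
    for _ in range(max_iters):
        cand = min(flips(tris), key=lambda t: cost(t, pts), default=None)
        if cand is None or cost(cand, pts) >= best:
            break  # local optimum: no flip accounts better for the data
        tris, best = cand, cost(cand, pts)
    return tris

rng = np.random.default_rng(0)
pts = np.column_stack([rng.random((30, 2)), 0.2 * rng.random(30)])  # bumpy height field
print(len(refine(pts)), "triangles after refinement")
```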

Proceedings ArticleDOI
02 Dec 2013
TL;DR: A novel stereo-based visual odometry approach that provides state-of-the-art results in real time, both indoors and outdoors and outperforms all other known methods on the KITTI Vision Benchmark data set.
Abstract: This paper presents a novel stereo-based visual odometry approach that provides state-of-the-art results in real time, both indoors and outdoors. Our proposed method follows the procedure of computing optical flow and stereo disparity to minimize the re-projection error of tracked feature points. However, instead of following the traditional approach of performing this task using only consecutive frames, we propose a novel and computationally inexpensive technique that uses the whole history of the tracked feature points to compute the motion of the camera. In our technique, which we call multi-frame feature integration, the features measured and tracked over all past frames are integrated into a single, improved estimate. An augmented feature set, composed of the improved estimates, is added to the optimization algorithm, improving the accuracy of the computed motion and reducing ego-motion drift. Experimental results show that the proposed approach reduces pose error by up to 65% with a negligible additional computational cost of 3.8%. Furthermore, our algorithm outperforms all other known methods on the KITTI Vision Benchmark data set.

125 citations
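
The core idea, multi-frame feature integration, is to fuse every past measurement of a tracked feature into one improved estimate rather than relying only on the latest frame pair. A crude stand-in is an inverse-variance-weighted running mean per landmark; the paper performs the integration inside its reprojection-error optimization, so this is only a schematic of the fusion step:

```python
import numpy as np

class IntegratedFeature:
    """Fuse repeated measurements of one tracked feature point into a single
    improved estimate via an inverse-variance-weighted running mean. A crude
    stand-in for the paper's multi-frame feature integration."""
    def __init__(self, dim=3):
        self.weight = 0.0
        self.estimate = np.zeros(dim)

    def update(self, measurement, variance):
        w = 1.0 / variance  # confidence of this frame's triangulation
        self.estimate = (self.weight * self.estimate
                         + w * np.asarray(measurement)) / (self.weight + w)
        self.weight += w
        return self.estimate  # improved estimate, fed to the pose optimizer

# Each frame contributes a noisy stereo triangulation of the same landmark;
# the fused estimate tightens as history accumulates.
rng = np.random.default_rng(0)
feature = IntegratedFeature()
for _ in range(20):
    feature.update([1.0, 2.0, 8.0] + rng.normal(0, 0.05, 3), variance=0.05 ** 2)
print(feature.estimate)  # converges toward [1.0, 2.0, 8.0]
```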

Journal ArticleDOI
TL;DR: The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed, and three-dimensional perception using three types of terrain representation is examined.
Abstract: The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed. The perception techniques developed for the Navlab include road-following techniques using color classification and neural nets. These are discussed with reference to three road-following systems, SCARF, YARF, and ALVINN. Three-dimensional perception using three types of terrain representation (obstacle maps, terrain feature maps, and high-resolution maps) is examined. It is noted that perception continues to be an obstacle in developing autonomous vehicles. This work is part of the Defense Advanced Research Projects Agency's Strategic Computing Initiative.

124 citations
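
Road following by color classification, as in SCARF, can be caricatured as per-pixel Gaussian classification: fit a color model for road and non-road from training pixels, then label each pixel by log-likelihood. The class means and samples below are invented for illustration; the actual systems were considerably more elaborate:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit mean, inverse covariance, and log-determinant to RGB training
    pixels for one class."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-6 * np.eye(3)  # regularize
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

def classify(pixels, road, offroad):
    """Label each pixel road/non-road by Gaussian log-likelihood."""
    def loglik(p, model):
        mu, icov, logdet = model
        d = p - mu
        return -0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet)
    return loglik(pixels, road) > loglik(pixels, offroad)

# Illustrative training data: grayish road vs. greenish shoulder.
rng = np.random.default_rng(1)
road_px = rng.normal([120, 120, 125], 10, (500, 3))
grass_px = rng.normal([80, 140, 60], 15, (500, 3))
road_model, grass_model = fit_gaussian(road_px), fit_gaussian(grass_px)
test = np.array([[118, 122, 126], [85, 138, 58]], float)
print(classify(test, road_model, grass_model))  # [ True False]
```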

Proceedings ArticleDOI
07 Apr 1986
TL;DR: In test runs of an outdoor robot vehicle, the Terregator, under control of the Warp computer, continuous-motion vision-guided road following is demonstrated at speeds up to 1.08 km/hour with image processing and steering servo loop times of 3 sec.
Abstract: We report progress in visual road following by autonomous robot vehicles. We present results and work in progress in the areas of system architecture, image rectification and camera calibration, oriented edge tracking, color classification and road-region segmentation, extracting geometric structure, and the use of a map. In test runs of an outdoor robot vehicle, the Terregator, under control of the Warp computer, we have demonstrated continuous motion vision-guided road-following at speeds up to 1.08 km/hour with image processing and steering servo loop times of 3 sec.

123 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and a new learning paradigm, graph transformer networks (GTNs), is proposed for training multimodule recognition systems globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
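
The convolutional architecture this paper is associated with (LeNet-5) is straightforward to sketch in a modern framework. The PyTorch rendition below is schematic rather than faithful: the original used scaled tanh units, trainable subsampling layers, and an RBF output layer:

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Schematic LeNet-5-style network for 32x32 grayscale digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # 32 -> 28
            nn.AvgPool2d(2),                             # 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),  # 14 -> 10
            nn.AvgPool2d(2),                             # 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNet5()(torch.randn(1, 1, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```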

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
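
The building block behind GoogLeNet is the Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated along channels, with 1x1 convolutions reducing dimensionality before the expensive paths. A minimal PyTorch sketch, using the channel sizes of the network's inception(3a) stage:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Schematic Inception block: four parallel branches concatenated along
    the channel axis; 1x1 convolutions cut channels before 3x3/5x5 paths."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(c_in, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Channel sizes of GoogLeNet's inception(3a) block: 64+128+32+32 = 256 out.
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```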

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
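
The descriptor itself is easy to reproduce with an off-the-shelf implementation. The snippet below computes HOG features for a 64x128 detection window with the settings the abstract highlights (fine orientation binning, relatively coarse spatial binning, overlapping block normalization); the random image is a placeholder for a real pedestrian window:

```python
import numpy as np
from skimage.feature import hog

# Placeholder image; in practice this is a 64x128 detection window.
window = np.random.rand(128, 64)

descriptor = hog(
    window,
    orientations=9,          # fine orientation binning
    pixels_per_cell=(8, 8),  # relatively coarse spatial binning
    cells_per_block=(2, 2),  # overlapping blocks for contrast normalization
    block_norm='L2-Hys',     # the normalization the paper found important
)
print(descriptor.shape)  # (3780,) for a 64x128 window with these settings
```

The detector then trains a linear SVM on such descriptors and scans it over the image as a sliding window.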

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN combines high-capacity CNNs with bottom-up region proposals to localize and segment objects, and shows that when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
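
The R-CNN pipeline described here decomposes cleanly: generate bottom-up region proposals, warp each to the CNN's input size, extract features, and score them with per-class linear SVMs. The sketch below stubs out the proposal stage (the paper used selective search) and uses an untrained AlexNet-style backbone, so the boxes, weights, and class count are placeholders rather than the authors' setup:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def region_proposals(image):
    """Stand-in for selective search: return candidate (x0, y0, x1, y1) boxes.
    R-CNN used roughly 2000 bottom-up proposals per image."""
    return [(0, 0, 100, 100), (50, 30, 200, 180)]  # illustrative boxes

# Backbone CNN used as a fixed feature extractor (ImageNet-pre-trained in the
# paper; weights=None here keeps the sketch self-contained and offline).
backbone = models.alexnet(weights=None)
backbone.classifier = backbone.classifier[:-1]  # drop final layer -> 4096-d features
backbone.eval()

def rcnn_scores(image, svm_weights, svm_bias):
    """Warp each proposal to the CNN input size, extract features, and score
    them with per-class linear SVMs (here just a weight matrix)."""
    feats = []
    for x0, y0, x1, y1 in region_proposals(image):
        crop = image[:, :, y0:y1, x0:x1]
        warped = F.interpolate(crop, size=(224, 224), mode='bilinear')  # warp, as in the paper
        with torch.no_grad():
            feats.append(backbone(warped).squeeze(0))
    feats = torch.stack(feats)               # (num_proposals, 4096)
    return feats @ svm_weights.T + svm_bias  # (num_proposals, num_classes)

image = torch.randn(1, 3, 480, 640)
scores = rcnn_scores(image, torch.randn(21, 4096), torch.zeros(21))  # 20 VOC classes + background
print(scores.shape)  # torch.Size([2, 21])
```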