Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147, co-authored 799 publications receiving 103237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
01 Jan 1988
TL;DR: In this paper, the authors compare the real-time performance of the computed-torque control and the feed-forward dynamics compensation scheme for the manipulator trajectory tracking control problem.
Abstract: 1. Introduction The manipulator trajectory tracking control problem revolves around computing the torques to be applied to achieve accurate tracking. This problem has been extensively studied in simulations, but real-time results have been lacking in the robotics literature. In this paper, we present the experimental results of the real-time performance of model-based control algorithms. We compare the computed-torque control scheme with the feedforward dynamics compensation scheme. The feedforward scheme compensates for the manipulator dynamics in the feedforward path, whereas the computed-torque scheme uses the dynamics in the feedback loop for linearization and decoupling. The parameters in the dynamics model for the computed-torque and feedforward schemes were estimated by using an identification algorithm. Our experiments underscore the importance of including the off-diagonal terms of the manipulator inertia matrix in the torque computation. This observation is further supported by our analysis.
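The distinction the abstract draws can be sketched in a few lines: the computed-torque law evaluates the dynamics model at the *measured* state inside the feedback loop, while the feedforward law evaluates it along the *desired* trajectory and adds a separate PD servo. The inertia matrix `M` and lumped Coriolis/gravity term `h` below are illustrative toy expressions for a 2-link arm, not the identified model from the paper; the gains are likewise arbitrary.

```python
import numpy as np

# Illustrative 2-link dynamics terms; real values would come from identification.
def M(q):
    """Inertia matrix -- note the off-diagonal coupling terms."""
    return np.array([[2.0 + np.cos(q[1]), 0.5 + 0.5 * np.cos(q[1])],
                     [0.5 + 0.5 * np.cos(q[1]), 0.5]])

def h(q, qd):
    """Coriolis/centrifugal and gravity effects, lumped into one toy term."""
    return np.array([-qd[1] * (2 * qd[0] + qd[1]) * 0.5 * np.sin(q[1]),
                     0.5 * qd[0] ** 2 * np.sin(q[1])])

Kp = np.diag([100.0, 100.0])   # position gains (arbitrary)
Kv = np.diag([20.0, 20.0])     # velocity gains (arbitrary)

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """Dynamics in the feedback loop: linearize and decouple, then servo."""
    e, ed = q_des - q, qd_des - qd
    return M(q) @ (qdd_des + Kv @ ed + Kp @ e) + h(q, qd)

def feedforward(q, qd, q_des, qd_des, qdd_des):
    """Dynamics compensated in the feedforward path, plus a PD correction."""
    e, ed = q_des - q, qd_des - qd
    return M(q_des) @ qdd_des + h(q_des, qd_des) + Kv @ ed + Kp @ e
```

When the tracking error is zero the two laws coincide; they differ only in how model errors and disturbances enter the loop.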
Journal ArticleDOI
TL;DR: This work proposes a denighting method that enhances nighttime images so that they are closer in quality to images taken during the daytime by using day and nighttime background illuminances that have already been computed.
Abstract: Nighttime images of a scene from a surveillance camera have lower contrast and higher noise than their corresponding daytime images of the same scene due to low illumination. We propose a denighting method that enhances nighttime images so that they are closer in quality to images taken during the daytime. Our denighting method exploits the simple fact that the static camera captures the same scene all day long, obtaining a large quantity of data about the scene. In particular, to enhance the nighttime image, we decompose the image into an illuminance layer and a reflectance layer that is assumed to be the textures. We enhance the nighttime image by improving its illuminance so that it is closer to the illuminance of daytime, using day and nighttime background illuminances that have already been computed. We present several results of the enhancement of low quality nighttime images using denighting.
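The decomposition described above can be illustrated in a minimal retinex-style sketch: estimate the illuminance as a smoothed version of the image, treat the residual ratio as reflectance (texture), and recombine the night reflectance with a daytime illuminance. This is an assumption-laden toy (a box blur as the illuminance estimate, grayscale images in [0, 1]), not the authors' actual pipeline, which uses precomputed background illuminances.

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k box blur used here as a crude illuminance estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def denight(night, day_illuminance, k=5):
    """Replace the night illuminance layer with a daytime one,
    keeping the night reflectance (texture) layer."""
    eps = 1e-6                                   # avoid division by zero
    night_illuminance = box_blur(night, k)       # smooth layer ~ illuminance
    reflectance = night / (night_illuminance + eps)  # texture layer
    return np.clip(reflectance * day_illuminance, 0.0, 1.0)
```

The design choice is the standard multiplicative image model `image = illuminance * reflectance`: dividing out one illuminance and multiplying in another transfers brightness while preserving texture.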
01 Jan 1989
TL;DR: In this paper, a geometric terrain representation from range imagery can be used to identify footfall positions, and several methods for determining the positions for which the shape of the terrain is nearest to the outline of the foot are presented.
Abstract: We are designing a complete autonomous legged robot to perform planetary exploration without human supervision. This robot must traverse unknown and geographically diverse areas in order to collect samples of materials. This paper describes how a geometric terrain representation from range imagery can be used to identify footfall positions. First, we present previous research aimed at determining footfall positions. Second, we describe several methods for determining the positions for which the shape of the terrain is nearest to the shape of the foot. Third, we evaluate and compare the efficiency of these methods as functions of some parameters such as particularities of the shape of the terrain. Fourth, we introduce other methods that use thermal imaging in order to differentiate materials.
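One of the simplest methods of the kind described, matching the terrain shape to the foot shape, can be sketched as a template search over an elevation map: score each candidate position by the sum of squared differences between the mean-removed terrain patch and the mean-removed foot sole, and pick the minimum. This is a hypothetical illustration of the matching idea, not any specific method from the paper.

```python
import numpy as np

def footfall_score(terrain, foot, y, x):
    """SSD between the foot sole shape and the local terrain patch.
    Means are removed so only shape, not absolute altitude, matters."""
    h, w = foot.shape
    patch = terrain[y:y + h, x:x + w]
    return np.sum(((patch - patch.mean()) - (foot - foot.mean())) ** 2)

def best_footfall(terrain, foot):
    """Exhaustively scan the elevation map for the best-matching position."""
    h, w = foot.shape
    H, W = terrain.shape
    scores = {(y, x): footfall_score(terrain, foot, y, x)
              for y in range(H - h + 1) for x in range(W - w + 1)}
    return min(scores, key=scores.get)
```

A flat foot sole (all zeros) turns this into a search for the flattest patch, which is the intuitive special case.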
01 Jan 1995
TL;DR: It is demonstrated, using data from a human femur, that dis- crete-point data sets selected using the method are superior to those selected by human experts in terms of the resulting pose-refinement accuracy.
Abstract: The goal of intrasurgical registration is to establish a common reference frame between presurgical and intrasurgical three-dimensional data sets that correspond to the same anatomy. This paper presents two novel techniques that have application to this problem, high-speed pose tracking and intrasurgical data selection. In the first part of this paper, we describe an approach for tracking the pose of arbitrarily shaped rigid objects at rates up to 10 Hz. Static accuracies on the order of 1 mm in translation and 1° in rotation have been achieved. We have demonstrated the technique on a human face using a high-speed VLSI range sensor; however, the technique is independent of the sensor used or the anatomy tracked. In the second part of this paper, we describe a general purpose approach for selecting near-optimal intrasurgical registration data. Because of the high costs of acquisition of intrasurgical data, our goal is to minimize the amount of data acquired while ensuring registration accuracy. We synthesize near-optimal intrasurgical data sets, based on an analysis of differential surface properties of presurgical data. We demonstrate, using data from a human femur, that discrete-point data sets selected using our method are superior to those selected by human experts in terms of the resulting pose-refinement accuracy. J Image Guid Surg 2:27-29 (1995). © 1995 Wiley-Liss, Inc.
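The core geometric step in any such rigid pose refinement, recovering a rotation and translation that best align two 3-D point sets, has a closed-form least-squares solution (the Kabsch/Procrustes method) when point correspondences are known. The sketch below shows that step only; it is a standard textbook construction under an assumed-correspondence simplification, not the paper's full range-sensor tracker.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~ dst_i.
    Kabsch/Procrustes: assumes known point correspondences."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In an iterative tracker this solve would sit inside a correspondence-update loop (as in ICP), re-estimated each frame.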

Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
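The computation stages the abstract enumerates (fine-scale gradients, orientation binning over cells) can be sketched in plain NumPy. This toy keeps only the core step, unsigned-gradient orientation histograms per cell with hard bin assignment; the bilinear vote interpolation and the overlapping-block contrast normalization that the paper finds important are omitted for brevity.

```python
import numpy as np

def cell_hog(img, n_bins=9, cell=8):
    """Unsigned orientation histograms over non-overlapping cells --
    the core of the HOG descriptor (block normalization omitted)."""
    gy, gx = np.gradient(img.astype(float))            # fine-scale gradients
    mag = np.hypot(gx, gy)                             # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0       # unsigned orientation
    H, W = img.shape
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins   # hard assignment
    hists = np.zeros((H // cell, W // cell, n_bins))
    for cy in range(H // cell):
        for cx in range(W // cell):
            m = mag[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            b = bins[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            for k in range(n_bins):
                hists[cy, cx, k] = m[b == k].sum()     # magnitude-weighted vote
    return hists
```

A vertical step edge, for example, puts all of its energy into the 0° bin of the cells it crosses, which is exactly the locality the descriptor is designed to capture.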

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations