Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in the topics of motion estimation and image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
04 Dec 1990
TL;DR: A signal matching algorithm is presented that can select an appropriate window size adaptively, based on a statistical model of disparity variation within the window, so as to obtain both precise and stable estimates of correspondences.
Abstract: The authors present a signal matching algorithm that can select an appropriate window size adaptively so as to obtain both precise and stable estimation of correspondences. A statistical model is presented for disparity variation within a window, and it is used to establish a link between the window size and the uncertainty of the computed disparity. This makes it possible to choose the window size that minimizes the uncertainty in the disparity computed at each point. A theory is presented for the model and the resultant algorithm, together with analytical and experimental results that demonstrate their effectiveness.
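For illustration, a minimal numpy sketch of the adaptive-window idea: try several window sizes per pixel, estimate disparity with each, and keep the size whose estimate looks least uncertain. The window sizes, the SSD cost, and the flat-valley uncertainty proxy are illustrative stand-ins; the paper derives the uncertainty from its statistical model of disparity variation.

```python
import numpy as np

def ssd(left, right, y, x, d, half):
    """Sum of squared differences over a (2*half+1)^2 window at disparity d.
    Assumes rectified grayscale images and in-bounds indices."""
    a = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    b = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
    return np.sum((a - b) ** 2)

def adaptive_disparity(left, right, y, x, d_max=16, halves=(1, 2, 3, 5, 7)):
    """Pick the window size whose disparity estimate is least uncertain."""
    best = None
    for half in halves:
        costs = [ssd(left, right, y, x, d, half) for d in range(d_max)]
        d_hat = int(np.argmin(costs))
        # Crude proxy: a flat cost valley (two near-equal minima) means an
        # unreliable match. The paper instead minimizes the predicted
        # variance of the disparity estimate.
        c0, c1 = sorted(costs)[:2]
        uncertainty = 1.0 / (1e-9 + c1 - c0)
        if best is None or uncertainty < best[0]:
            best = (uncertainty, d_hat, half)
    return best[1], best[2]  # chosen disparity and window half-size
```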

159 citations

Proceedings ArticleDOI
05 Dec 2005
TL;DR: In experiments, combining real-time vision with plan reuse has allowed a Honda ASIMO humanoid robot to autonomously traverse dynamic environments containing unpredictably moving obstacles.
Abstract: Despite the stable walking capabilities of modern biped humanoid robots, their ability to autonomously and safely navigate obstacle-filled, unpredictable environments has so far been limited. We present an approach to autonomous humanoid walking that combines vision-based sensing with a footstep planner, allowing the robot to navigate toward a desired goal position while avoiding obstacles. An environment map including the robot, goal, and obstacle locations is built in real time from vision. The footstep planner then computes an optimal sequence of footstep locations within a time-limited planning horizon. Footstep plans are reused and only partially recomputed as the environment changes during the walking sequence. In our experiments, combining real-time vision with plan reuse has allowed a Honda ASIMO humanoid robot to autonomously traverse dynamic environments containing unpredictably moving obstacles.
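A minimal sketch of time-limited footstep planning with plan reuse, under illustrative assumptions: the map is a grid of cells, footsteps are small displacements, and steps that land on the previous plan are cheaper, so unchanged plan segments survive replanning. None of this is the authors' exact planner.

```python
import heapq
import time

ACTIONS = [(1, 0), (1, 1), (1, -1), (0, 1), (0, -1)]  # toy footstep displacements

def plan_footsteps(start, goal, blocked, budget_s=0.05, seed_plan=frozenset()):
    """A* over footstep cells, stopping when the time budget expires."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    t0 = time.monotonic()
    frontier = [(h(start), 0, start, [])]
    seen = {start}
    best = (h(start), [])                      # best partial plan so far
    while frontier and time.monotonic() - t0 < budget_s:
        _, g, cell, path = heapq.heappop(frontier)
        if h(cell) < best[0]:
            best = (h(cell), path)
        if cell == goal:
            return path
        for dx, dy in ACTIONS:
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt in blocked or nxt in seen:
                continue
            seen.add(nxt)
            # Plan reuse: stepping onto a cell from the previous plan is
            # cheaper, which biases the search toward the old solution.
            step = 1 if nxt in seed_plan else 2
            heapq.heappush(frontier, (g + step + h(nxt), g + step, nxt, path + [nxt]))
    return best[1]  # fall back to the best partial plan within the horizon
```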

157 citations

Journal ArticleDOI
TL;DR: A theory of performing SFS across time: estimating the shape of a dynamic object (with unknown motion) by combining all of the silhouette images of the object over time is developed.
Abstract: Shape-From-Silhouette (SFS) is a shape reconstruction method which constructs a 3D shape estimate of an object using silhouette images of the object. The output of a SFS algorithm is known as the Visual Hull (VH). Traditionally, SFS is performed either on static objects or separately at each time instant in the case of videos of moving objects. In this paper we develop a theory of performing SFS across time: estimating the shape of a dynamic object (with unknown motion) by combining all of the silhouette images of the object over time. We first introduce a one-dimensional element called a Bounding Edge to represent the Visual Hull. We then show that aligning two Visual Hulls using just their silhouettes is in general ambiguous, and we derive the geometric constraints (in terms of Bounding Edges) that govern the alignment. To break the alignment ambiguity, we combine stereo information with silhouette information and derive a Temporal SFS algorithm which consists of two steps: (1) estimate the motion of the objects over time (Visual Hull Alignment) and (2) combine the silhouette information using the estimated motion (Visual Hull Refinement). The algorithm is first developed for rigid objects and then extended to articulated objects. In Part II of this paper we apply our temporal SFS algorithm to two human-related applications: (1) the acquisition of detailed human kinematic models and (2) marker-less motion tracking.
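As a rough illustration of the silhouette-intersection idea (using voxels rather than the paper's Bounding Edges), the sketch below carves a point set against the silhouettes of several calibrated cameras, then intersects hulls across frames using an already-estimated rigid motion per frame. Projection matrices, grids, and motions are assumed inputs.

```python
import numpy as np

def carve_mask(points, cameras, silhouettes):
    """True for 3D points whose projection lands inside every silhouette."""
    keep = np.ones(len(points), dtype=bool)
    homo = np.c_[points, np.ones(len(points))]
    for P, sil in zip(cameras, silhouettes):          # P: 3x4 projection matrix
        u, v, w = P @ homo.T
        ui = (u / w).round().astype(int)
        vi = (v / w).round().astype(int)
        inside = (0 <= ui) & (ui < sil.shape[1]) & (0 <= vi) & (vi < sil.shape[0])
        hit = np.zeros(len(points), dtype=bool)
        hit[inside] = sil[vi[inside], ui[inside]] > 0  # binary silhouette image
        keep &= hit
    return keep

def temporal_hull(points, frames, motions):
    """Refinement step: map common-frame points into each frame with its
    estimated rigid motion (R, t), then keep points inside all hulls."""
    keep = np.ones(len(points), dtype=bool)
    for (cams, sils), (R, t) in zip(frames, motions):
        keep &= carve_mask(points @ R.T + t, cams, sils)
    return points[keep]
```

The alignment step, estimating each (R, t), is assumed solved here; the paper shows it is ambiguous from silhouettes alone, which is why stereo information is brought in.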

151 citations

Proceedings ArticleDOI
20 May 2002
TL;DR: This paper presents a method to recover the full motion (3 rotations and 3 translations) of the head using a cylindrical model, and uses the iteratively re-weighted least squares (IRLS) technique in conjunction with the image gradient to deal with non-rigid motion and occlusion.
Abstract: This paper presents a method to recover the full motion (3 rotations and 3 translations) of the head using a cylindrical model. The robustness of the approach is achieved by a combination of three techniques. First, we use the iteratively re-weighted least squares (IRLS) technique in conjunction with the image gradient to deal with non-rigid motion and occlusion. Second, while tracking, the templates are dynamically updated to diminish the effects of self-occlusion and gradual lighting changes and to keep tracking the head when most of the face is not visible. Third, because the dynamic templates may cause error accumulation, we re-register images to a reference frame when the head pose is close to a reference pose. The performance of the real-time tracking program was evaluated in three separate experiments using image sequences (both synthetic and real) for which the ground truth head motion is known. The real sequences included pitch and yaw as large as 40° and 75°, respectively. The average recovery accuracy of the 3D rotations was found to be about 3°.
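The robustness mechanism at the core of the method, IRLS, can be sketched independently of the head model: residuals from non-rigid motion or occlusion get small weights, so they barely influence the motion estimate. The design matrix A (image gradients against the six motion parameters) and the Huber-style weight are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def irls(A, b, iters=10, k=1.345):
    """Solve A x ~= b robustly by iteratively re-weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary LS to start
    for _ in range(iters):
        r = b - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12     # robust scale (MAD)
        # Huber-style weights: inliers get weight 1; large residuals
        # (occluded or non-rigidly moving pixels) get down-weighted.
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
    return x
```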

150 citations

Proceedings ArticleDOI
17 Jun 2006
TL;DR: An approach for aligning a 3D deformable model to a single face image that makes no assumptions about global illumination or surface properties, so it can be applied to a wide range of imaging conditions.
Abstract: We present an approach for aligning a 3D deformable model to a single face image. The model consists of a set of sparse 3D points and the view-based patches associated with every point. Assuming a weak perspective projection model, our algorithm iteratively deforms the model and adjusts the 3D pose to fit the image. As opposed to previous approaches, our algorithm starts the fitting without resorting to manual labeling of key facial points, and it makes no assumptions about global illumination or surface properties, so it can be applied to a wide range of imaging conditions. Experiments demonstrate that our approach can effectively handle unseen faces with a variety of pose and illumination variations.
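A minimal sketch of the alternation the abstract describes, under stated assumptions: a linear shape basis, known 2D targets for every model point, and a weak-perspective camera (a linear map plus translation, solvable in closed form). The basis, targets, and schedule are illustrative, not the authors' formulation.

```python
import numpy as np

def fit_deformable(mean, basis, target2d, iters=20):
    """mean: (N, 3) points; basis: (K, N, 3) deformation modes;
    target2d: (N, 2) observed point locations."""
    coeffs = np.zeros(len(basis))
    M = None
    for _ in range(iters):
        pts = mean + np.tensordot(coeffs, basis, axes=1)   # deformed 3D shape
        # Pose step: weak perspective is linear in homogeneous coordinates,
        # so the 4x2 map M comes from a closed-form least-squares solve.
        X = np.c_[pts, np.ones(len(pts))]                  # (N, 4)
        M, *_ = np.linalg.lstsq(X, target2d, rcond=None)   # (4, 2)
        proj = X @ M
        # Shape step: with the pose frozen, the residual is linear in the
        # basis coefficients (each mode projected by the current pose).
        J = np.stack([np.c_[b, np.zeros(len(b))] @ M for b in basis], axis=2)
        r = (target2d - proj).ravel()
        coeffs += np.linalg.lstsq(J.reshape(-1, len(basis)), r, rcond=None)[0]
    return coeffs, M
```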

149 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: Reviews gradient-based learning for handwritten character recognition, shows that convolutional neural networks outperform other techniques on a standard digit recognition task, and proposes graph transformer networks (GTNs) for training multi-module recognition systems globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
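A minimal LeNet-style network in PyTorch (used here purely for illustration; the paper long predates it) shows the ingredient the abstract credits for handling 2D shape variability: stacked convolution and subsampling layers feeding a small classifier.

```python
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    """Classic convolution + subsampling stack for 28x28 digit images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(), nn.Linear(84, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x))

# One gradient-based learning step, as in back-propagation training:
#   logits = LeNetLike()(images)
#   loss = nn.CrossEntropyLoss()(logits, labels)
#   loss.backward()
```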

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Proposes Inception, a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection.
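The architectural unit behind the abstract's depth-and-width-at-fixed-budget claim can be sketched as follows (PyTorch, for illustration): parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated along channels, with 1x1 reductions keeping computation in check.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c1, 1)                          # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1)) # reduce, then 3x3
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2)) # reduce, then 5x5
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1))           # pool projection

    def forward(self, x):
        # All branches preserve spatial size, so outputs concatenate on channels.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# e.g. InceptionBlock(192, 64, 96, 128, 16, 32, 32) maps 192 -> 256 channels,
# mirroring one published GoogLeNet configuration; the block is a simplified sketch.
```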

40,257 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: A personal reflection on i, the square root of minus one, which first seemed an odd beast, an intruder hovering on the edge of reality, and whose surreal nature only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
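A compact numpy sketch of the descriptor's first stage: per-cell histograms of gradient orientation, weighted by gradient magnitude. The 8-pixel cells and 9 unsigned orientation bins follow common defaults; the block grouping and contrast normalization the paper emphasizes are left as a comment.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Orientation histograms over non-overlapping cells of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    H, W = img.shape
    out = np.zeros((H // cell, W // cell, bins))
    for i in range(H // cell):
        for j in range(W // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = (ang[sl] / (180 / bins)).astype(int) % bins
            np.add.at(out[i, j], b.ravel(), mag[sl].ravel())
    # Next stages per the paper: group cells into overlapping 2x2 blocks,
    # L2-normalize each block, and concatenate into the final descriptor.
    return out
```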

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: Proposes R-CNN, which combines CNNs with bottom-up region proposals to localize and segment objects, and shows that when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
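The pipeline reads as three stages, sketched schematically below. Every callable here (region proposals, warping, the CNN featurizer, per-class SVMs, non-max suppression) is injected as a parameter and is a hypothetical stand-in, not the authors' released code.

```python
def detect(image, propose_regions, warp, cnn, svms, nms, threshold=0.0):
    """propose_regions: image -> boxes; warp: (image, box) -> fixed-size patch;
    cnn: patch -> feature vector; svms: {class: feature -> score};
    nms: detections -> detections."""
    detections = []
    for box in propose_regions(image):        # ~2000 bottom-up proposals
        feat = cnn(warp(image, box))          # fixed-size warp, then CNN features
        for cls, svm in svms.items():         # per-class linear SVM scoring
            score = svm(feat)
            if score > threshold:
                detections.append((box, cls, score))
    return nms(detections)                    # suppress overlapping duplicates
```

Supervised pre-training of the CNN on a large auxiliary dataset, followed by fine-tuning on the detection classes, is what the abstract credits when labeled detection data is scarce.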

21,729 citations