Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147, has co-authored 799 publications, and has received 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
03 Jan 1998
TL;DR: Surveys image-based rendering techniques and presents Virtualized Reality, the authors' contribution to creating virtual worlds from dynamic events using a stereo technique that gives dense depth maps on all-around views of the event, with results from 3D Dome, their virtualizing facility.
Abstract: Virtual Reality has traditionally relied on hand-created synthetic virtual worlds as approximations of real world spaces. Creation of such virtual worlds is very labour intensive. Computer vision has recently contributed greatly to the creation of the visual/graphical aspect of the virtual worlds. These techniques are classified under image-based (as opposed to geometry-based) rendering in computer graphics. Image-based rendering (IBR) aims to recreate a visual world given a few real views of it. We survey some of the important image-based rendering techniques in this paper, analyzing their assumptions and limitations. We then discuss Virtualized Reality, our contribution to the creation of virtual worlds from dynamic events using a stereo technique that gives dense depth maps on all-around views of the event. The virtualized world can be represented as multiple view-dependent models or as a single view-independent model. It can then be synthesized visually given the position and properties of any virtual camera. We present a few results from 3D Dome, our virtualizing facility.
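The core step behind view synthesis from dense depth maps can be sketched as a forward warp of one reference view into a virtual camera. The sketch below is a minimal illustration under simple pinhole assumptions (known intrinsics K, virtual pose R and t, grayscale image), not the paper's 3D Dome pipeline:

```python
# Minimal sketch (not the paper's 3D Dome pipeline): forward-warp a grayscale
# reference view with a dense depth map into a hypothetical virtual camera.
# K is the 3x3 intrinsic matrix; (R, t) is the virtual camera pose. There is
# no z-buffering, so pixels occluded in the virtual view may be overwritten.
import numpy as np

def synthesize_view(image, depth, K, R, t):
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x N homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.ravel()               # back-project to 3D points
    proj = K @ (R @ pts + t.reshape(3, 1))                     # project into the virtual camera
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(image)
    ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[ok], u[ok]] = image.ravel()[ok]                      # splat visible pixels
    return out
```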

16 citations

01 Jan 2003
TL;DR: This dissertation discusses methods for efficiently approximating conditional probabilities in large domains by maximizing the entropy of the distribution given a set of constraints and develops two algorithms, the inverse probability method and recurrent linear network, for maximizing Renyi's quadratic entropy without bounds.
Abstract: In this dissertation we discuss methods for efficiently approximating conditional probabilities in large domains by maximizing the entropy of the distribution given a set of constraints. The constraints are constructed from conditional probabilities, typically of low-order, that can be accurately computed from the training data. By appropriately choosing the constraints, maximum entropy methods can balance the tradeoffs in errors due to bias and variance. Standard maximum entropy techniques are too computationally inefficient for use in large domains in which the set of variables that are being conditioned upon varies. Instead of using the standard measure of entropy first proposed by Shannon, we use a measure that lies within the family of Renyi's entropies. If we allow our probability estimates to occasionally lie outside the range from 0 to 1, we can efficiently maximize Renyi's quadratic entropy relative to the constraints using a set of linear equations. We develop two algorithms, the inverse probability method and recurrent linear network, for maximizing Renyi's quadratic entropy without bounds. The algorithms produce identical results. However, depending on the type of problem, one method may be more computationally efficient than the other. We also propose an extension to the algorithms for partially enforcing the constraints based on our confidence in them. Our algorithms are tested on several applications including: collaborative filtering, image retrieval and language modeling.
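For reference, Rényi's entropy of order alpha and its quadratic special case are shown below; the abstract's observation that dropping the [0, 1] bounds makes the constrained maximization solvable with linear equations follows because the objective is quadratic in p (this is a paraphrase of the abstract, not the dissertation's exact derivation):

```latex
% Rényi's entropy of order \alpha and the quadratic case (\alpha = 2):
H_\alpha(p) = \frac{1}{1-\alpha}\,\log \sum_i p_i^\alpha,
\qquad
H_2(p) = -\log \sum_i p_i^2 .
% Maximizing H_2 is equivalent to minimizing \sum_i p_i^2. Under linear
% expectation constraints A p = b and \sum_i p_i = 1, and with the bound
% 0 \le p_i \le 1 dropped as described in the abstract, the stationarity
% conditions of the Lagrangian form a linear system in (p, \lambda).
```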

16 citations

01 Jan 2003
TL;DR: A robust distance minimization approach to solving point-sampled vision problems based on correlating kernels centered at point-samples, a technique called kernel correlation, which enforces smoothness on point samples from all views, not just within a single view.
Abstract: Range sensors, such as laser range finders and stereo vision systems, return point-samples of a scene. Typical point-sampled vision problems include registration, regularization and merging. We introduce a robust distance minimization approach to solving the three classes of problems. The approach is based on correlating kernels centered at point-samples, a technique we call kernel correlation. Kernel correlation is an affinity measure, and it contains an M-estimator mechanism for distance minimization. Kernel correlation is also an entropy measure of the point set configuration. Maximizing kernel correlation implies enforcing a compact point set. The effectiveness of kernel correlation is evaluated on the three classes of problems. First, the kernel correlation based registration method is shown to be efficient, accurate and robust, and its performance is compared with the iterative closest point (ICP) algorithm. Second, kernel correlation is adopted as an object space regularizer in the stereo vision problem. Kernel correlation is discontinuity preserving and usually can be applied at large scales, resulting in a smooth appearance of the estimated model. The performance of the algorithm is evaluated both quantitatively and qualitatively. Finally, kernel correlation plays a point-sample merging role in a multiple view stereo algorithm. Kernel correlation enforces smoothness on point samples from all views, not just within a single view. As a result we can put both the photo-consistency and the model merging constraints into a single energy function. Convincing reconstruction results are demonstrated.
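The kernel correlation idea can be illustrated with Gaussian kernels placed at every point: the correlation between two point sets is a sum of pairwise Gaussian affinities, and registration maximizes it over a transform. A toy sketch follows; the grid search over planar rotations and the parameter values are illustrative, not the dissertation's algorithm:

```python
# Toy illustration of kernel correlation as a registration objective, using
# Gaussian kernels at every point. The rotation grid search is a stand-in for
# a proper optimizer and is not the dissertation's method.
import numpy as np

def kernel_correlation(X, Y, sigma=1.0):
    """Sum of Gaussian affinities between point sets X (N, d) and Y (M, d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

def register_2d_rotation(X, Y, sigma=1.0, n_angles=360):
    """Pick the planar rotation of Y that maximizes kernel correlation with X."""
    best_theta, best_kc = 0.0, -np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        kc = kernel_correlation(X, Y @ R.T, sigma)
        if kc > best_kc:
            best_theta, best_kc = theta, kc
    return best_theta
```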

16 citations

Proceedings ArticleDOI
21 Mar 2011
TL;DR: An approach to robustly align facial features to a face image even when the face is partially occluded, relying on an explicit multi-modal representation of the response from each face feature detector and a RANSAC hypothesize-and-test search for the correct alignment over subsets sampled from the feature response modes.
Abstract: In this paper we present an approach to robustly align facial features to a face image even when the face is partially occluded. Previous methods are vulnerable to partial occlusion of the face, since it is assumed, explicitly or implicitly, that there is no significant occlusion. In order to cope with this difficulty, our approach relies on two schemes: one is explicit multi-modal representation of the response from each of the face feature detectors, and the other is RANSAC hypothesize-and-test search for the correct alignment over subset samplings of those in the feature response modes. We evaluated the proposed method on a large number of facial images, occluded and non-occluded. The results demonstrated that the alignment is accurate and stable over a wide range of degrees and variations of occlusion.
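The RANSAC hypothesize-and-test scheme the abstract relies on has a generic shape: repeatedly fit a model to a minimal random subset, count inliers, and keep the best hypothesis. A sketch with placeholder fit and residual callbacks, not the paper's face-alignment machinery:

```python
# Generic RANSAC hypothesize-and-test loop. The fit_model and residual
# callbacks are placeholders supplied by the caller; this is not the paper's
# face-alignment implementation.
import random

def ransac(candidates, fit_model, residual, min_samples, threshold, n_iters=500):
    """Return the model with the largest inlier set among random minimal samples."""
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        sample = random.sample(candidates, min_samples)        # hypothesize from a subset
        model = fit_model(sample)
        if model is None:
            continue
        inliers = [c for c in candidates if residual(model, c) < threshold]  # test
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:
        best_model = fit_model(best_inliers)                   # refit on all inliers
    return best_model, best_inliers
```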

16 citations

Proceedings ArticleDOI
05 Dec 2005
TL;DR: An online 3D reconstruction system from stereo image sequences that obtains a dense local world model for robot navigation is described, and experimental results on the humanoid robot H7 are presented.
Abstract: This paper describes an online 3D reconstruction system that uses stereo image sequences to obtain a dense local world model for robot navigation. The proposed method consists of three components: 1) stereo depth map calculation, 2) correspondence calculation in time-sequential images by tracking raw image features, and 3) 6-DOF camera motion estimation by RANSAC and integration of the depth maps into a 3D reconstructed model. We examined and evaluated our method in a motion capture environment for comparison. Finally, experimental results on the humanoid robot H7 are presented.
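The three components map naturally onto standard OpenCV building blocks. The sketch below is a rough illustration of that pipeline shape, not the paper's implementation; parameter values, variable names, and the calibration inputs (K, dist) are assumptions:

```python
# Rough sketch of the three components using OpenCV building blocks:
# (1) dense stereo disparity, (2) feature tracking between consecutive frames,
# (3) RANSAC-based 6-DOF motion estimation via PnP. All parameters are
# illustrative; K and dist come from camera calibration.
import cv2
import numpy as np

def frame_step(left_prev, left_curr, right_curr, pts_prev_3d, pts_prev_2d, K, dist):
    # 1) Dense disparity for the current stereo pair (SGBM returns fixed-point x16).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = sgbm.compute(left_curr, right_curr).astype(np.float32) / 16.0

    # 2) Track previously detected features into the current frame.
    #    pts_prev_2d must be float32 of shape (N, 1, 2).
    pts_curr_2d, status, _ = cv2.calcOpticalFlowPyrLK(left_prev, left_curr,
                                                      pts_prev_2d, None)
    ok = status.ravel() == 1

    # 3) Estimate 6-DOF camera motion with RANSAC from tracked 3D-2D pairs.
    _, rvec, tvec, inliers = cv2.solvePnPRansac(pts_prev_3d[ok], pts_curr_2d[ok],
                                                K, dist)
    return disparity, rvec, tvec, inliers
```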

16 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, convolutional neural networks trained with gradient-based learning are shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and a new learning paradigm, graph transformer networks (GTN), allows multi-module recognition systems to be trained globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
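A LeNet-style convolutional network for digit recognition, in the spirit of the architecture this paper popularized, can be sketched in a few lines; the layer sizes and activations below are illustrative rather than the paper's exact LeNet-5 (which used tanh-like nonlinearities and subsampling layers):

```python
# LeNet-style convolutional network sketch for 28x28 grayscale digits.
# Layer sizes and ReLU activations are illustrative, not the original LeNet-5.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, n_classes),
        )

    def forward(self, x):                               # x: (B, 1, 28, 28)
        return self.classifier(self.features(x))
```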

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as proposed in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
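The building block of the architecture is the Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, with 1x1 reductions to control the computational budget, concatenated along the channel axis. A sketch with illustrative channel counts (not GoogLeNet's exact configuration):

```python
# One Inception-style block: four parallel branches concatenated on channels.
# The 1x1 "reduction" convolutions keep the 3x3/5x5 branches cheap.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
                                nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        # Concatenate the four branches along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```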

40,257 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
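A minimal HOG-plus-linear-SVM classifier in the spirit of the paper's pipeline can be sketched with off-the-shelf implementations; the 64x128 window, HOG parameters, and C value below follow common practice rather than the paper's exact settings, and the training window lists are assumed to be supplied:

```python
# HOG descriptor + linear SVM classifier sketch using scikit-image and
# scikit-learn. Parameters mirror the commonly used 64x128 detection window;
# pos_windows and neg_windows are assumed lists of grayscale 64x128 arrays.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(window_64x128):
    """9 orientation bins, 8x8-pixel cells, 2x2-cell blocks with normalization."""
    return hog(window_64x128, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_detector(pos_windows, neg_windows):
    X = np.array([hog_descriptor(w) for w in list(pos_windows) + list(neg_windows)])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    clf = LinearSVC(C=0.01)          # linear SVM, as in the paper's test case
    return clf.fit(X, y)
```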

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as proposed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
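The pipeline shape (warp each bottom-up proposal to a fixed size, extract CNN features, apply per-class linear classifiers) can be sketched as follows; the backbone, the proposal source, and the classifier weights are assumed inputs, and this is an illustration rather than the released R-CNN code:

```python
# R-CNN-style proposal scoring sketch: crop each region, warp to a fixed size,
# extract CNN features, and score with per-class linear classifiers. The
# backbone network and the proposal boxes (e.g. from selective search) are
# assumed to be supplied by the caller.
import torch
import torch.nn.functional as F

def score_proposals(image, proposals, backbone, class_weights):
    """image: (3, H, W) tensor; proposals: list of integer (x1, y1, x2, y2) boxes;
    class_weights: (num_classes, feat_dim) tensor."""
    scores = []
    for (x1, y1, x2, y2) in proposals:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)              # 1 x 3 x h x w
        warped = F.interpolate(crop, size=(224, 224), mode='bilinear',
                               align_corners=False)             # fixed-size warp
        feats = backbone(warped).flatten(1)                     # CNN features, 1 x D
        scores.append(feats @ class_weights.T)                  # per-class linear scores
    return torch.stack(scores).squeeze(1)                       # (num_proposals, num_classes)
```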

21,729 citations