Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research on topics including motion estimation and image processing. He has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. His previous affiliations include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
17 Oct 2005
TL;DR: This paper presents a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are readily solvable and provides an intuitive method to handle directional uncertainties and outliers in measurements.
Abstract: Geometric reconstruction problems in computer vision are often solved by minimizing a cost function that combines the reprojection errors in the 2D images. In this paper, we show that, for various geometric reconstruction problems, their reprojection error functions share a common and quasiconvex formulation. Based on the quasiconvexity, we present a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are readily solvable. Our final reconstruction algorithm is simple and has an intuitive geometric interpretation. In contrast to existing random sampling or local minimization approaches, our algorithm is deterministic and guarantees a predefined accuracy of the minimization result. We demonstrate the effectiveness of our algorithm by experiments on both synthetic and real data.

179 citations
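The key computational idea above, bisection on a quasiconvex objective, is easy to state in code: asking whether some reconstruction keeps every reprojection error below a threshold gamma is a convex feasibility problem, so the optimal max-error can be bracketed to arbitrary precision. A minimal sketch, assuming a caller-supplied `is_feasible(gamma)` oracle (hypothetical; in practice a small second-order cone program solved by a convex solver):

```python
def minimize_max_error(is_feasible, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection on the optimal value of a quasiconvex objective.

    is_feasible(gamma) answers: does a reconstruction exist whose
    largest reprojection error is at most gamma?  Each call is a
    small convex program; the loop needs O(log((hi - lo) / tol)) calls.
    """
    while hi - lo > tol:
        gamma = 0.5 * (lo + hi)
        if is_feasible(gamma):
            hi = gamma   # a solution exists below gamma; tighten from above
        else:
            lo = gamma   # no solution below gamma; raise the floor
    return hi
```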

Proceedings ArticleDOI
01 Oct 1997
TL;DR: A detailed comparison of existing view synthesis techniques with the authors' own approach is included, which has the added benefit of eliminating the need to hand-edit the range images to correct errors made in stereo, a drawback of previous techniques.
Abstract: Virtualized reality is a modeling technique that constructs full 3D virtual representations of dynamic events from multiple video streams. Image-based stereo is used to compute a range image corresponding to each intensity image in each video stream. Each range and intensity image pair encodes the structure and appearance of the scene visible to the camera at that moment, and is therefore called a visible surface model (VSM). A single time instant of the dynamic event can be modeled as a collection of VSMs from different viewpoints, and the full event can be modeled as a sequence of static scenes, the 3D equivalent of video. Alternatively, the collection of VSMs at a single time can be fused into a global 3D surface model, thus creating a traditional virtual representation out of real world events. Global modeling has the added benefit of eliminating the need to hand-edit the range images to correct errors made in stereo, a drawback of previous techniques. Like image-based rendering models, these virtual representations can be used to synthesize nearly any view of the virtualized event. For this reason, the paper includes a detailed comparison of existing view synthesis techniques with the authors' own approach. In the virtualized representations, however, scene structure is explicitly represented and therefore easily manipulated, for example by adding virtual objects to (or removing virtualized objects from) the model without interfering with the real event. Virtualized reality, then, is a platform not only for image-based rendering but also for 3D scene manipulation.

177 citations
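To make the visible surface model concrete: each VSM pairs a range image with an intensity image under a known camera, and its pixels can be back-projected into 3D. The sketch below assumes a pinhole camera with intrinsics `K`, a 4x4 camera-to-world pose, and range values stored as depth along the optical axis; these are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def vsm_to_points(range_img, K, cam_to_world):
    """Back-project a range image into world-space 3D points.

    Pinhole model: X_cam = d * K^{-1} [u, v, 1]^T, assuming range_img
    stores depth d along the camera z-axis.
    """
    h, w = range_img.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # rays with unit z-depth
    pts_cam = rays * range_img.reshape(-1, 1)
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_h @ cam_to_world.T)[:, :3]
```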

Journal ArticleDOI
31 Aug 1999
TL;DR: A visual odometer for autonomous helicopter flight that estimates helicopter position by visually locking on to and tracking ground objects and the philosophy behind the odometer as well as its tracking algorithm and implementation are described.
Abstract: This paper presents a visual odometer for autonomous helicopter flight. The odometer estimates helicopter position by visually locking on to and tracking ground objects. The paper describes the philosophy behind the odometer as well as its tracking algorithm and implementation. The paper concludes by presenting test-flight data of the odometer's performance on board indoor and outdoor prototype autonomous helicopters.

177 citations
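As a concrete illustration of "visually locking on to" a ground object, the sketch below performs an exhaustive sum-of-squared-differences template search in a window around the object's previous image location. This is a generic tracker sketch; the odometer's actual matching strategy, and its conversion of pixel motion into helicopter position, are more involved:

```python
import numpy as np

def track_template(frame, template, prev_pos, radius=16):
    """Find the template near prev_pos = (row, col) by exhaustive SSD search."""
    th, tw = template.shape
    t = template.astype(float)
    best, best_pos = np.inf, prev_pos
    r0, c0 = prev_pos
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0:
                continue                      # window left the frame
            patch = frame[r:r + th, c:c + tw]
            if patch.shape != t.shape:
                continue                      # window left the frame
            ssd = float(np.sum((patch.astype(float) - t) ** 2))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```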

01 Dec 1993
TL;DR: A model-based hand tracking system that can recover the state of a 27 DOF hand model from gray scale images at speeds of up to 10 Hz is described, and some preliminary results on a 3D mouse interface based on the DigitEyes sensor are presented.
Abstract: Passive sensing of human hand and limb motion is important for a wide range of applications, from human-computer interaction to athletic performance measurement. High degree-of-freedom articulated mechanisms like the human hand are difficult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27-DOF hand model from gray scale images at speeds of up to 10 Hz. We employ kinematic and geometric hand models, along with a high temporal sampling rate, to decompose global image patterns into incremental, local motions of simple shapes. Hand pose and joint angles are estimated from line and point features extracted from images of unmarked, unadorned hands, taken from one or more viewpoints. We present some preliminary results on a 3D mouse interface based on the DigitEyes sensor.

176 citations
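Model-based trackers of this kind typically linearize image-feature residuals around the current articulated state and solve for a small correction on every frame, which is what makes the high temporal sampling rate matter. A generic damped Gauss-Newton step is sketched below; `residual_fn` and `jacobian_fn` are hypothetical stand-ins for the kinematic hand model's feature predictions, not DigitEyes internals:

```python
import numpy as np

def gauss_newton_step(state, residual_fn, jacobian_fn, damping=1e-3):
    """One damped Gauss-Newton update of an articulated state vector.

    state: e.g. a 27-vector of hand pose and joint angles.
    residual_fn(state): stacked point/line feature errors in the image.
    jacobian_fn(state): d(residuals)/d(state).
    """
    r = residual_fn(state)
    J = jacobian_fn(state)
    H = J.T @ J + damping * np.eye(J.shape[1])   # damped normal equations
    delta = np.linalg.solve(H, -(J.T @ r))
    return state + delta
```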

Book ChapterDOI
19 Mar 1997
TL;DR: A system for simulating arthroscopic knee surgery that is based on volumetric object models derived from 3D Magnetic Resonance Imaging is presented and feedback is provided to the user via real-time volume rendering and force feedback for haptic exploration.
Abstract: A system for simulating arthroscopic knee surgery that is based on volumetric object models derived from 3D Magnetic Resonance Imaging is presented. Feedback is provided to the user via real-time volume rendering and force feedback for haptic exploration. The system is the result of a unique collaboration between an industrial research laboratory, two major universities, and a leading research hospital. In this paper, components of the system are detailed and the current state of the integrated system is presented. Issues related to future research and plans for expanding the current system are discussed.

175 citations
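The real-time volume rendering mentioned above is usually built on per-ray compositing of samples taken through the volume. The sketch below shows standard front-to-back alpha compositing with early ray termination; it is a textbook illustration of the technique, not the paper's renderer:

```python
import numpy as np

def composite_ray(samples_rgba):
    """Front-to-back alpha compositing of (rgb, alpha) samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a in samples_rgba:               # ordered front to back
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                      # early ray termination
            break
    return color, alpha
```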


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning methods are reviewed and compared for handwritten character recognition, and a new learning paradigm, graph transformer networks (GTN), is proposed that allows multimodule recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
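The point of global training is that every module, convolutional layers and graph transformers alike, exposes gradients of a single overall loss, so one stochastic gradient loop optimizes the whole multimodule system end to end. A minimal sketch, where `grad_fn` is a hypothetical routine returning per-parameter gradients (in the paper's setting it would run back-propagation through all modules):

```python
def sgd_train(params, grad_fn, data, lr=0.01, epochs=10):
    """Plain stochastic gradient descent over a list of parameter arrays.

    grad_fn(params, x, y) must return gradients of one global loss with
    respect to every parameter; that is the only contract each module
    in a multimodule system has to honor.
    """
    for _ in range(epochs):
        for x, y in data:
            grads = grad_fn(params, x, y)
            params = [p - lr * g for p, g in zip(params, grads)]
    return params
```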

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a network 22 layers deep, the quality of which is assessed in the context of classification and detection.

40,257 citations
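The building block behind GoogLeNet is the inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooling path, concatenated along the channel axis, with 1x1 convolutions reducing dimensionality so depth and width can grow within a fixed computational budget. A schematic sketch, where `conv` and `pool3` are hypothetical helpers and the channel widths are illustrative:

```python
import numpy as np

def inception_module(x, conv, pool3):
    """One inception module (channels-last tensors assumed).

    conv(x, out_channels, k): k x k convolution + nonlinearity (hypothetical).
    pool3(x): 3x3 max pooling with stride 1 (hypothetical).
    """
    b1 = conv(x, 64, k=1)                     # cheap 1x1 path
    b2 = conv(conv(x, 96, k=1), 128, k=3)     # 1x1 reduction, then 3x3
    b3 = conv(conv(x, 16, k=1), 32, k=5)      # 1x1 reduction, then 5x5
    b4 = conv(pool3(x), 32, k=1)              # pooling path + 1x1 projection
    return np.concatenate([b1, b2, b3, b4], axis=-1)
```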

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
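The pipeline the abstract summarizes starts from per-pixel gradients binned into orientation histograms over small cells; overlapping blocks of cells are then contrast-normalized and concatenated into the final descriptor. A compact sketch of the cell-histogram stage, using 8-pixel cells and 9 unsigned orientation bins as defaults (the configuration the paper reports working well):

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Per-cell orientation histograms, the first stage of a HOG descriptor.

    Block normalization over overlapping groups of cells would follow.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a / (180.0 / bins)).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()    # magnitude-weighted vote
    return hist
```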

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines high-capacity CNNs with bottom-up region proposals to localize and segment objects, and shows that when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
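In outline, the method turns detection into classification of class-agnostic region proposals. The sketch below treats `propose`, `warp`, `cnn_features`, `svm_scores`, and `nms` as hypothetical callables standing in for selective search, fixed-size warping, the fine-tuned network, the per-class SVM bank, and non-maximum suppression:

```python
def rcnn_detect(image, propose, warp, cnn_features, svm_scores, nms):
    """R-CNN-style detection pipeline in outline."""
    detections = []
    for box in propose(image):        # ~2000 class-agnostic proposals
        crop = warp(image, box)       # fixed-size input for the CNN
        feats = cnn_features(crop)    # deep activations for this region
        for cls, score in svm_scores(feats).items():
            detections.append((box, cls, score))
    return nms(detections)            # prune overlapping detections
```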