Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings Article
01 May 1988
TL;DR: Argues that model-based vision should model the sensor explicitly as well as the object, whereas in past work the sensor model was either not used or was contained only implicitly in the object model.
Abstract: Model-based vision requires object appearances to be represented in the computer. How an object appears in the image is a result of the interaction between the object properties and the sensor characteristics. Thus, in model-based vision, we ought to model the sensor as well as the object. In the past, however, the sensor model was not used in model-based vision or, at best, was contained only implicitly in the object model.
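As a hedged illustration of that point (not from the paper; the functions, parameter values, and reflectance model below are all invented for this sketch), the snippet predicts a pixel value by combining a simple object model (Lambertian reflectance) with an explicit sensor model (gain, offset, saturation):

```python
import numpy as np

def object_radiance(albedo, normal, light_dir):
    """Object model: Lambertian radiance proportional to albedo * max(0, n . l)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

def sensor_response(radiance, gain=200.0, offset=5.0, full_well=255.0):
    """Explicit sensor model: linear gain and offset, clipped at saturation."""
    return min(full_well, gain * radiance + offset)

# Predicted pixel value for one surface patch under one light (toy numbers).
radiance = object_radiance(albedo=0.8,
                           normal=np.array([0.0, 0.0, 1.0]),
                           light_dir=np.array([0.3, 0.3, 1.0]))
print(sensor_response(radiance))
```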

4 citations

01 Jan 2004
TL;DR: Oriented Discriminant Analysis (ODA) is introduced, an LDA extension that overcomes several limitations of LDA when applied to visual data, together with several covariance approximations that improve classification in the small-sample case.
Abstract: Linear discriminant analysis (LDA) has been an active topic of research during the last century. However, the existing algorithms have several limitations when applied to visual data. LDA is only optimal for Gaussian-distributed classes with equal covariance matrices, and only classes-1 features can be extracted. On the other hand, LDA does not scale well to high-dimensional data (over-fitting), and it does not necessarily minimize the classification error. In this paper, we introduce Oriented Discriminant Analysis (ODA), an LDA extension which can overcome these drawbacks. Three main novelties are proposed: (1) an optimal dimensionality reduction which maximizes the Kullback-Leibler divergence between classes; this allows us to model class covariances and to extract more than classes-1 features; (2) several covariance approximations are introduced to improve classification in the small sample case; (3) a linear-time iterative majorization method is introduced in order to find a local optimal solution. Several synthetic and real experiments on face recognition are reported.
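As a hedged sketch of the quantity the abstract says ODA maximizes, the snippet below computes the Kullback-Leibler divergence between two Gaussian class-conditional densities with unequal covariances, which is exactly the case plain LDA does not handle optimally. The toy means and covariances are invented, and ODA's actual projection optimization via iterative majorization is not shown:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    term_trace = np.trace(cov1_inv @ cov0)
    term_maha = diff @ cov1_inv @ diff
    term_logdet = np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    return 0.5 * (term_trace + term_maha - k + term_logdet)

# Two toy 2-D classes with unequal covariances.
mu_a, cov_a = np.array([0.0, 0.0]), np.array([[1.0, 0.0], [0.0, 1.0]])
mu_b, cov_b = np.array([2.0, 1.0]), np.array([[2.0, 0.5], [0.5, 0.5]])
print(gaussian_kl(mu_a, cov_a, mu_b, cov_b))
```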

4 citations

Proceedings Article
08 Dec 1986
TL;DR: Thesis topic: A High-Performance Stereo Vision System for Obstacle Detection.
Abstract: EDUCATION Ph.D. (9/98) Robotics, Carnegie Mellon University. Project: Automated Highway Systems, Navlab Thesis topic: A High-Performance Stereo Vision System for Obstacle Detection. Advisor: Dr. Charles Thorpe M.S. (5/94) Robotics, Carnegie Mellon University. Project: Unmanned Ground Vehicle (UGV), Navlab Advisor: Dr. Charles Thorpe B.S. (5/91) Applied Mathematics (Computer Science), Carnegie Mellon University B.S. (5/91) Physics, Carnegie Mellon University (with honors)

4 citations

Proceedings ArticleDOI
26 Apr 2004
TL;DR: This work reviews recent work in regularized tomography in which the smoothness constraint is analytically transformed from the image to the projection domain before any computations begin, and demonstrates that this method provides linear speedup of regularized tomography for up to 20 compute nodes on a 100 Mb/s network using a Matlab MPI implementation.
Abstract: Summary form only given. X-ray computerized tomography (CT) and related imaging modalities (e.g., PET) are notorious for their excessive computational demands, especially when noise-resistant probabilistic methods such as regularized tomography are used. The basic idea of regularized tomography is to compute a smooth image whose simulated projections (line integrals) approximate the observed, noisy X-ray projections. The computational expense in previous methods stems from explicitly applying a large sparse projection matrix to enforce these smoothness and data fidelity constraints during each of many iterations of the algorithm. Here we review our recent work in regularized tomography in which the smoothness constraint is analytically transformed from the image to the projection domain, before any computations begin. As a result, iterations take place entirely in the projection domain, avoiding the repeated sparse matrix-vector products. A more surprising benefit is the decoupling of a large system of regularization equations into many small systems of simpler independent equations, whose solution requires an "embarrassingly parallel" computation. Here, we demonstrate that this method provides linear speedup of regularized tomography for up to 20 compute nodes (Pentium 4, 1.5 GHz) on a 100 Mb/s network using a Matlab MPI implementation.
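To illustrate the "embarrassingly parallel" structure described above, here is a hedged sketch in Python (the paper itself used a Matlab MPI implementation): many small, independent regularized least-squares systems are solved in parallel. The matrices are random stand-ins rather than projection data, and the decoupling step itself (moving the smoothness constraint into the projection domain) is taken as given:

```python
import numpy as np
from multiprocessing import Pool

def solve_small_system(args):
    """Solve (A^T A + lam * I) x = A^T b for one independent block."""
    A, b, lam = args
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 100 small, mutually independent regularized systems (random stand-in data).
    blocks = [(rng.standard_normal((20, 8)), rng.standard_normal(20), 0.1)
              for _ in range(100)]
    with Pool() as pool:
        solutions = pool.map(solve_small_system, blocks)
    print(len(solutions), solutions[0].shape)
```

Because the blocks share no data, a process pool on one machine or MPI across a cluster gives essentially the same speedup pattern the abstract reports.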

4 citations

Book ChapterDOI
21 Oct 2005
TL;DR: A multi-target tracking algorithm is proposed that simultaneously tracks a very large number of cells based on a topology-constrained level-set method and Markov-chain Monte Carlo particle filtering, and it is demonstrated that the cells proliferate and migrate in alignment with the printed hormone patterns.
Abstract: Tissue engineering is an interdisciplinary field that applies the principles of biology and engineering to develop tissue substitutes to restore, maintain, or improve the function of diseased or damaged human tissues. One approach for engineering tissue involves seeding biodegradable scaffolds with hormones, then culturing and implanting the scaffolds by means of a printing technology to induce and direct the growth of new, healthy tissue cells. Precise and quantitative tracking of the migrating and proliferating cells by noninvasive phase-contrast video microscopy is a vital component of studying and understanding how the concentration-modulated patterns of hormones direct the migration and proliferation of tissue cells. The varying density of the cell culture and the complexity of the cell behavior (shape deformation, division/mitosis, close contact and partial occlusion) pose many challenges to existing tracking techniques. We propose a multi-target tracking algorithm that simultaneously tracks a very large number of cells based on a topology-constrained level-set method and Markov-chain Monte Carlo particle filtering. We apply our methodology to in vitro tissue cell tracking under phase-contrast microscopy and demonstrate that the cells proliferate and migrate in alignment with the printed hormone patterns.
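As a hedged illustration of the particle-filtering ingredient only, the sketch below runs a minimal bootstrap particle filter on a single cell centroid. The paper's actual method (topology-constrained level sets combined with MCMC particle filtering over many interacting cells) is considerably richer, and every parameter, motion model, and observation below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
num_particles = 500
particles = rng.normal(loc=[50.0, 50.0], scale=5.0, size=(num_particles, 2))
weights = np.full(num_particles, 1.0 / num_particles)

def step(particles, weights, observation, motion_std=2.0, obs_std=3.0):
    # Predict: random-walk motion model for the cell centroid.
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # Update: Gaussian likelihood of the observed centroid.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_std**2)
    weights = weights / weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

for observed_centroid in [np.array([51.0, 52.0]), np.array([53.0, 54.0])]:
    particles, weights = step(particles, weights, observed_centroid)
print(particles.mean(axis=0))  # filtered centroid estimate
```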

4 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: Reviews gradient-based learning for handwritten character recognition, shows that convolutional neural networks outperform other techniques, and proposes graph transformer networks (GTNs) for globally training multi-module document recognition systems.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
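As a hedged sketch of the kind of convolutional architecture the abstract describes, here is a minimal LeNet-style network in PyTorch for 28x28 grayscale digit images; the layer sizes are illustrative and are not the paper's exact LeNet-5 configuration:

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """A minimal LeNet-style convolutional network for 28x28 digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                             # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                             # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallConvNet()
logits = model(torch.randn(4, 1, 28, 28))  # batch of 4 fake digit images
print(logits.shape)  # torch.Size([4, 10])
```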

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as proposed in this paper, is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
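A hedged sketch of the core building block rather than GoogLeNet itself: one Inception-style module with parallel 1x1, 3x3, and 5x5 convolution branches plus a pooling branch, concatenated along the channel dimension. Channel counts are illustrative and do not match any particular GoogLeNet stage:

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """One Inception-style module: parallel branches concatenated on channels."""
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),          # 1x1 reduction
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=1),           # 1x1 reduction
            nn.Conv2d(8, 8, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

block = InceptionBlock(in_ch=32)
print(block(torch.randn(1, 32, 28, 28)).shape)  # torch.Size([1, 56, 28, 28])
```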

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: A reflection on i, the square root of minus one, which at first seemed an odd beast, an intruder hovering on the edge of reality, and whose surreal nature only intensified with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
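As a hedged sketch of the descriptor itself (using scikit-image rather than the authors' code), the snippet below computes a HOG vector for one 128x64 detection window with parameters in the spirit of the paper: 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks. The full detector additionally trains a linear SVM over such descriptors in a sliding window, which is not shown here:

```python
import numpy as np
from skimage.feature import hog

# A random stand-in for a 128x64 grayscale detection window.
window = np.random.rand(128, 64)

descriptor = hog(window,
                 orientations=9,          # orientation bins
                 pixels_per_cell=(8, 8),  # fine spatial cells
                 cells_per_block=(2, 2),  # overlapping normalization blocks
                 block_norm='L2-Hys')     # local contrast normalization
print(descriptor.shape)  # (3780,) for a 128x64 window with these settings
```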

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as proposed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
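As a hedged sketch of the per-region recipe described above (not the authors' released code), the snippet crops each bottom-up proposal, warps it to a fixed size, and extracts a CNN feature vector per region; a classifier such as a linear SVM would then score these features. The proposal boxes are made up, and a torchvision ResNet-18 stands in for the original backbone:

```python
import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)   # stand-in feature extractor
backbone.fc = torch.nn.Identity()                      # drop the classification head
backbone.eval()

image = torch.rand(3, 480, 640)                        # fake RGB image (C, H, W)
proposals = [(10, 20, 200, 180), (300, 50, 500, 300)]  # made-up (x1, y1, x2, y2) boxes

features = []
with torch.no_grad():
    for x1, y1, x2, y2 in proposals:
        region = image[:, y1:y2, x1:x2].unsqueeze(0)   # crop the proposal
        region = torch.nn.functional.interpolate(      # warp to a fixed input size
            region, size=(224, 224), mode='bilinear', align_corners=False)
        features.append(backbone(region).squeeze(0))   # one feature vector per region

print(torch.stack(features).shape)  # torch.Size([2, 512])
```

In the actual R-CNN pipeline the proposals come from selective search and the per-region features feed class-specific SVMs plus bounding-box regression; this sketch only shows the crop-warp-featurize step.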

21,729 citations