Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in the topics of motion estimation and image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Journal ArticleDOI
TL;DR: Two analog VLSI computational sensors are implemented that sense and encode high-dynamic-range images by exploiting the temporal dimension of photoreception, using an intensity-to-time processing paradigm based on the notion that stronger stimuli elicit responses before weaker ones.
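The intensity-to-time paradigm can be sketched in software as follows. This is a hypothetical model for illustration only (the paper describes analog VLSI hardware), and the `threshold` parameter is an assumption, not from the source:

```python
def time_to_first_spike(intensity, threshold=1.0):
    """Intensity-to-time encoding: a photoreceptor integrates light until it
    crosses a fixed threshold, so brighter pixels respond sooner (t = c / I)."""
    if intensity <= 0:
        return float("inf")  # never fires: darkest representable value
    return threshold / intensity

def encode_image(img, threshold=1.0):
    """Encode each pixel as its response time; the ordering of responses,
    not their amplitude, carries the high-dynamic-range information."""
    return [[time_to_first_spike(p, threshold) for p in row] for row in img]

# Six decades of dynamic range map to a monotone ordering of response times.
img = [[0.001, 1.0, 1000.0]]
times = encode_image(img)[0]  # brightest pixel responds first
```

Because the encoding is an ordering in time rather than an analog amplitude, the representable dynamic range is limited by the observation window, not by a fixed voltage swing.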

22 citations

01 Jan 1983
TL;DR: In this paper, a definition is presented for Straight Generalized Cylinders and for several subclasses of the Binford's generalized cylinders, in which the cross-sections have constant shape but vary in size.
Abstract: In recent years, Binford's generalized cylinders have become a commonly used shape representation scheme in computer vision. However, research involving generalized cylinders has been hampered by a lack of analytical results at all levels, even including a lack of a precise definition of these shapes. In this paper, a definition is presented for Generalized Cylinders and for several subclasses. Straight Generalized Cylinders, with a linear axis, are important because the natural object-centered coordinates are not curved. The bulk of the paper is concerned with Straight Homogeneous Generalized Cylinders, in which the cross-sections have constant shape but vary in size. The results begin with deriving formulae for points and surface normals for these shapes. Theorems are presented concerning the conditions under which multiple descriptions can exist for a single solid shape. Then, projections, contour generators, shadow lines, and surface normals are analyzed for some subclasses of shapes. The strongest results are obtained for solids of revolution (which we name Right Circular SHGCs), for which several closed-form methods for analyzing images are presented. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

22 citations

01 Jan 1999
TL;DR: A computer vision system that automatically recognizes facial action units (AUs) or AU combinations using Hidden Markov Models (HMMs) and uses principal component analysis (PCA) to compress the data.
Abstract: We developed a computer vision system that automatically recognizes facial action units (AUs) or AU combinations using Hidden Markov Models (HMMs). AUs are defined as visually discriminable muscle movements. The facial expressions are recognized in digitized image sequences of arbitrary length. In this paper, we use two approaches to extract the expression information: (1) facial feature point tracking, which is sensitive to subtle feature motion, in the mouth region, and (2) pixel-wise flow tracking, which captures more motion information, in the forehead and brow regions. In the latter approach, we use principal component analysis (PCA) to compress the data. We accurately recognize 93% of the lower-face expressions and 91% of the upper-face expressions.
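The PCA compression step can be sketched generically. This is a minimal 2-D illustration of projecting data onto its leading principal component, not the paper's flow-feature pipeline; the function names are mine:

```python
import math

def pca_first_component(points):
    """Leading principal component of 2-D points via the closed-form
    eigendecomposition of the 2x2 covariance matrix.
    (Assumes the off-diagonal covariance is nonzero.)"""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy * sxy))
    vx, vy = sxy, lam - sxx          # corresponding eigenvector
    norm = math.hypot(vx, vy)
    return (mx, my), (vx / norm, vy / norm)

def project(points, mean, v):
    """Compress each 2-D point to a single coefficient along the component."""
    return [(p[0] - mean[0]) * v[0] + (p[1] - mean[1]) * v[1] for p in points]

pts = [(0, 0), (1, 2), (2, 4), (-1, -2)]   # all on the line y = 2x
mean, v = pca_first_component(pts)          # v is parallel to (1, 2)
coeffs = project(pts, mean, v)              # 1-D codes, no information lost here
```

In the paper's setting the same idea applies in high dimension: dense flow vectors become a handful of principal-component coefficients that the HMMs consume.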

22 citations

Proceedings ArticleDOI
09 Jun 2011
TL;DR: The functionality of this recent mitosis detection algorithm significantly improves state-of-the-art cell tracking systems through extensive experiments on 48 C2C12 myoblastic stem cell populations under four different conditions.
Abstract: Automated visual-tracking systems of stem cell populations in vitro allow for high-throughput analysis of time-lapse phase-contrast microscopy. In these systems, detection of mitosis, or cell division, is critical to tracking performance because mitosis causes the trajectory of a mother cell to branch into the two trajectories of its daughter cells. Recently, one mitosis detection algorithm showed success in detecting the time and location at which two daughter cells first clearly appear as a result of mitosis. This detection result can therefore help trajectories bifurcate correctly and reveal the relations between mother and daughter cells. In this paper, we demonstrate that this recent mitosis detection algorithm significantly improves state-of-the-art cell tracking systems through extensive experiments on 48 C2C12 myoblastic stem cell populations under four different conditions.

22 citations

Proceedings ArticleDOI
K. Kemmotsu, Takeo Kanade
08 May 1994
TL;DR: This paper presents a method, based on Monte Carlo evaluation, for finding an optimal sensor placement off-line to accurately determine the pose of an object when using three light-stripe range finders.
Abstract: The pose (position and orientation) of a polyhedral object can be determined with range data obtained from simple light-stripe range finders. However, localization results are sensitive to where those range finders are placed in the workspace, that is, to sensor placement. For vision tasks in a factory environment it is advantageous to plan optimal sensing positions off-line all at once rather than online sequentially. This paper presents a method for finding an optimal sensor placement off-line to accurately determine the pose of an object when using three light-stripe range finders. We evaluate a sensor placement on the basis of average performance measures, such as the error rate of object recognition, recognition speed, and pose uncertainty, over the state space of object pose by a Monte Carlo method. An optimal sensor placement, the one given a maximal score by a scalar function of the performance measures, is selected by another Monte Carlo method. We emphasize that the expected performance of our system under an optimal sensor placement can be characterized completely via simulation.
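The evaluate-then-select structure can be sketched as follows. The trial model below is a deliberately toy stand-in (recognition improving with the angular spread of the sensors is my assumption for illustration), not the paper's simulator:

```python
import random

def evaluate_placement(placement, simulate_trial, n_trials=1000, seed=0):
    """Monte Carlo estimate of average performance measures over random poses.
    simulate_trial(placement, rng) -> (recognized, time_s, pose_uncertainty)."""
    rng = random.Random(seed)
    errs = times = pose_u = 0.0
    for _ in range(n_trials):
        ok, t, u = simulate_trial(placement, rng)
        errs += 0 if ok else 1
        times += t
        pose_u += u
    n = float(n_trials)
    return errs / n, times / n, pose_u / n

def score(measures, weights=(0.6, 0.2, 0.2)):
    """Scalar objective combining the measures; lower error rate,
    recognition time, and pose uncertainty are all better."""
    err, t, u = measures
    we, wt, wu = weights
    return -(we * err + wt * t + wu * u)

# Hypothetical trial model: recognition degrades as the three range finders
# cluster together (placement = angular positions in degrees).
def toy_trial(placement, rng):
    spread = max(placement) - min(placement)
    ok = rng.random() < min(1.0, 0.5 + spread / 360.0)
    return ok, rng.uniform(0.1, 0.3), rng.uniform(0.0, 1.0) / (1.0 + spread)

candidates = [(0, 10, 20), (0, 120, 240), (0, 60, 180)]
best = max(candidates, key=lambda p: score(evaluate_placement(p, toy_trial)))
```

The second Monte Carlo step in the paper corresponds to searching the placement space itself; here it is reduced to a maximum over a small candidate list.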

22 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: Gradient-based learning with convolutional neural networks is shown to classify high-dimensional patterns such as handwritten characters with minimal preprocessing, and a graph transformer network (GTN) paradigm is proposed for globally training multi-module document recognition systems.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, a deep convolutional neural network architecture, achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradients (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
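The core HOG computation can be sketched in a few lines. This is a simplified illustration assuming unsigned gradients (0 to 180 degrees) and per-cell normalization; the full descriptor normalizes over overlapping blocks of cells, which this sketch omits:

```python
import math

def hog_cell_histograms(img, cell=4, bins=9):
    """Minimal HOG sketch: per-pixel gradient magnitude and orientation
    (central differences), accumulated into per-cell orientation histograms."""
    h, w = len(img), len(img[0])
    hists = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            b = min(int(ang / (180.0 / bins)), bins - 1)
            key = (y // cell, x // cell)
            hists.setdefault(key, [0.0] * bins)[b] += mag
    return hists

def l2_normalize(hist, eps=1e-6):
    """Local contrast normalization (per cell here, for brevity)."""
    norm = math.sqrt(sum(v * v for v in hist)) + eps
    return [v / norm for v in hist]

# Toy image: a vertical edge produces strong horizontal gradients,
# so all the histogram mass lands in the bin near 0 degrees.
img = [[0.0] * 4 + [10.0] * 4 for _ in range(8)]
hists = {k: l2_normalize(v) for k, v in hog_cell_histograms(img).items()}
```

The paper's conclusions map onto these knobs directly: fine orientation binning is the `bins` parameter, coarse spatial binning is the `cell` size, and normalization quality is what `l2_normalize` stands in for.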

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN combines high-capacity CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations