Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Motion estimation & Image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include National Institute of Advanced Industrial Science and Technology & Hitachi.


Papers
Journal ArticleDOI
01 Jan 1984
TL;DR: Velocity fields that estimate the motion of objects in the image plane are computed from spatial and temporal brightness gradients; the method can be applied to the interpretation of both reflectance and x-ray images.
Abstract: This paper adapts Horn and Schunck's work on optical flow to the problem of determining arbitrary motions of objects from 2-dimensional image sequences. The method allows for gradual changes in the way an object appears in the image sequence, and allows for flow discontinuities at object boundaries. We find velocity fields that give estimates of the velocities of objects in the image plane. These velocities are computed from a series of images using information about the spatial and temporal brightness gradients. A constraint on the smoothness of motion within an object's boundaries is used. The method can be applied to interpretation of both reflectance and x-ray images. Results are shown for models of ellipsoids undergoing expansion, as well as for an x-ray image sequence of a beating heart.
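The abstract describes the method but includes no implementation. As an illustration, here is a minimal sketch of the classic Horn and Schunck iteration that the paper adapts, written with NumPy and SciPy; the finite-difference gradient filters, the smoothness weight `alpha`, and the fixed iteration count are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iters=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatial and temporal brightness gradients (simple finite differences).
    Ix = convolve(im1, np.array([[-1.0, 1.0]]))
    Iy = convolve(im1, np.array([[-1.0], [1.0]]))
    It = im2 - im1

    # Four-neighbour averaging kernel for the smoothness constraint.
    avg = np.array([[0.0, 0.25, 0.0],
                    [0.25, 0.0, 0.25],
                    [0.0, 0.25, 0.0]])

    u = np.zeros_like(im1)  # horizontal velocity field
    v = np.zeros_like(im1)  # vertical velocity field
    for _ in range(n_iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Update both fields from the brightness-constancy constraint.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```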

14 citations

Proceedings ArticleDOI
04 Nov 1996
TL;DR: A fast tracker that can track more than 100 markers at video rate has been implemented; new experimental results are shown, following improvements to the vision-based tracking component.
Abstract: We have been working on developing a visual/haptic interface for virtual environments. The authors have previously (1996) proposed a WYSIWYF (What You See Is What You Feel) concept which ensures a correct visual/haptic registration so that what the user can see via a visual interface is consistent with what he/she can feel through a haptic interface. The key components of the WYSIWYF display are (i) vision-based tracking, (ii) video keying, and (iii) physically-based simulation. The first prototype has been built and the proposed concept was demonstrated. It turned out, however, that the original system had a bottleneck in the vision tracking component and the performance was not satisfactory (slow frame rate and large latency). To solve the problem of our first prototype, we have implemented a fast tracker which can track more than 100 markers at video rate. In this paper, new experimental results are shown, following improvements to the vision-based tracking component.
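The paper does not spell out the tracker's internals here. Below is a hedged sketch of one common way to find many markers per frame cheaply, by thresholding and connected-component labeling; the bright-marker assumption and the `threshold` value are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def detect_markers(frame, threshold=200):
    """Return centroids of bright blobs (candidate markers) in one frame.

    Assumes bright markers on a darker background; the threshold is an
    illustrative value, not the authors' setting.
    """
    mask = frame > threshold
    labels, n = label(mask)
    if n == 0:
        return np.empty((0, 2))
    # One (row, col) centroid per connected component.
    return np.array(center_of_mass(mask, labels, list(range(1, n + 1))))
```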

14 citations

Proceedings ArticleDOI
14 Apr 2010
TL;DR: A method is presented for robustly detecting hematopoietic stem cells in phase contrast microscopy images; it models the profile of each ring-filter response as a quadratic surface and examines how the peak curvatures and peak values of the filter responses vary with the ring radius.
Abstract: We present a method for robustly detecting hematopoietic stem cells (HSCs) in phase contrast microscopy images. HSCs appear to be easy to detect since they typically appear as round objects. However, when HSCs are touching and overlapping, showing the variations in shape and appearance, standard pattern detection methods, such as Hough transform and correlation, do not perform well. The proposed method exploits the output pattern of a ring filter bank applied to the input image, which consists of a series of matched filters with multiple-radius ring-shaped templates. By modeling the profile of each filter response as a quadratic surface, we explore the variations of peak curvatures and peak values of the filter responses when the ring radius varies. The method is validated on thousands of phase contrast microscopy images with different acquisition settings, achieving 96.5% precision and 94.4% recall.
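To make the filter-bank idea concrete, here is a minimal sketch of a ring-shaped matched filter bank: normalized ring templates at several radii are each correlated with the image, producing the per-radius response maps whose peaks the paper's quadratic-surface analysis would then examine. The template width and normalization are illustrative choices, not the authors' parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def ring_template(radius, width=2.0):
    """Normalized binary ring of the given radius and thickness."""
    size = int(2 * (radius + width) + 1)
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(yy - c, xx - c)
    ring = (np.abs(r - radius) <= width).astype(np.float64)
    return ring / ring.sum()

def ring_filter_bank(image, radii):
    """Correlate the image with ring templates at multiple radii.

    Returns one response map per radius; the templates are symmetric,
    so convolution and correlation coincide here.
    """
    image = image.astype(np.float64)
    return [fftconvolve(image, ring_template(r), mode="same") for r in radii]
```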

14 citations

Proceedings ArticleDOI
01 Dec 2008
TL;DR: The use of feature co-occurrence, which captures the similarity of appearance, motion, and spatial information within the people class, makes the proposed method an effective detector.
Abstract: This paper presents a method for detecting people based on the co-occurrence of appearance and spatiotemporal features. Histograms of oriented gradients (HOG) are used as appearance features, and the results of pixel state analysis are used as spatiotemporal features. The pixel state analysis classifies foreground pixels as either stationary or transient. The appearance and spatiotemporal features are projected into subspaces in order to reduce the dimensions of the vectors by principal component analysis (PCA). The cascade AdaBoost classifier is used to represent the co-occurrence of the appearance and spatiotemporal features. The use of feature co-occurrence, which captures the similarity of appearance, motion, and spatial information within the people class, makes the proposed method an effective detector. Experimental results show that the performance of our method is about 29% better than that of the conventional method.
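A compact sketch of the projection-plus-boosting stage described above, using scikit-learn's PCA and flat AdaBoostClassifier as stand-ins for the authors' cascade AdaBoost; `X_app`, `X_st`, and `y` are assumed to be precomputed appearance features, spatiotemporal features, and labels, and the dimensionalities are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

def train_cooccurrence_detector(X_app, X_st, y, n_components=32):
    """Project each feature type with PCA, then boost on the joint vector.

    X_app: appearance (HOG) features, shape (n_samples, d_app).
    X_st:  spatiotemporal (pixel-state) features, shape (n_samples, d_st).
    """
    pca_app = PCA(n_components=n_components).fit(X_app)
    pca_st = PCA(n_components=n_components).fit(X_st)
    # Concatenate the two subspace projections so the classifier can
    # exploit co-occurrences between appearance and motion cues.
    Z = np.hstack([pca_app.transform(X_app), pca_st.transform(X_st)])
    clf = AdaBoostClassifier(n_estimators=200).fit(Z, y)
    return pca_app, pca_st, clf
```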

14 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel algorithm infers the 3D layout of building facades from a single 2D image of an urban scene using a set of planes mutually related by 3D geometric constraints, yielding a model that is more expressive and informative than existing approaches.
Abstract: In this paper, we propose a novel algorithm that infers the 3D layout of building facades from a single 2D image of an urban scene. Different from existing methods that only yield coarse orientation labels or qualitative block approximations, our algorithm quantitatively reconstructs building facades in 3D space using a set of planes mutually related by 3D geometric constraints. Each plane is characterized by a continuous orientation vector and a depth distribution. An optimal solution is reached through inter-planar interactions. Due to the quantitative and plane-based nature of our geometric reasoning, our model is more expressive and informative than existing approaches. Experiments show that our method compares competitively with the state of the art on both 2D and 3D measures, while yielding a richer interpretation of the 3D scene behind the image.
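The plane parameterization suggests a simple data structure. The sketch below is only a guess at that kind of representation, a continuous unit normal plus a discrete depth distribution, and is not the authors' actual model.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FacadePlane:
    """Illustrative plane hypothesis: unit normal plus a depth histogram."""
    normal: np.ndarray      # continuous 3D orientation vector (unit length)
    depth_bins: np.ndarray  # candidate depths along the normal
    depth_prob: np.ndarray  # probability mass over those depths

    def expected_depth(self) -> float:
        """Mean of the depth distribution."""
        return float(np.dot(self.depth_bins, self.depth_prob))
```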

14 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning is shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and graph transformer networks (GTNs) are proposed to allow multimodule recognition systems to be trained globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
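As an illustration of the convolutional networks the paper champions, here is a LeNet-5-style network in PyTorch for 32x32 grayscale digits. The layer sizes follow the widely cited LeNet-5 layout, but this is only a sketch: it omits the paper's graph transformer machinery and is not the original implementation.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """LeNet-5-style convolutional network for 32x32 grayscale inputs."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass on a dummy batch of eight 32x32 images.
logits = LeNetStyle()(torch.zeros(8, 1, 32, 32))  # shape (8, 10)
```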

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
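A hedged sketch of the paper's central building block, an Inception module: parallel 1x1, 3x3, 5x5, and pooled branches whose outputs are concatenated along the channel axis, written in PyTorch. The example channel counts follow the commonly cited inception(3a) stage of GoogLeNet; the rest is illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Inception module: four parallel branches, channel-wise concatenation."""
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pp, 1), nn.ReLU())

    def forward(self, x):
        # The 1x1 reductions (c3r, c5r) keep the computational budget in check.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Channel counts of inception(3a): output has 64 + 128 + 32 + 32 = 256 channels.
y = InceptionBlock(192, 64, 96, 128, 16, 32, 32)(torch.zeros(1, 192, 28, 28))
```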

40,257 citations

Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
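The stages enumerated above map directly onto scikit-image's `hog` function. A brief sketch with parameters close to the paper's recommended settings follows; the random array stands in for a 64x128-pixel grayscale detection window.

```python
import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)  # stand-in for a 64x128 detection window
descriptor = hog(
    window,
    orientations=9,          # fine orientation binning
    pixels_per_cell=(8, 8),  # relatively coarse spatial binning
    cells_per_block=(2, 2),  # overlapping blocks for contrast normalization
    block_norm="L2-Hys",     # high-quality local contrast normalization
)
# 15 x 7 block positions x 2 x 2 cells x 9 bins = 3780 dimensions.
assert descriptor.shape == (3780,)
```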

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
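A minimal sketch of the scoring loop the abstract describes: warp each region proposal to a fixed size, extract CNN features, and score them. It assumes proposal boxes are already available (R-CNN obtained them from selective search) and substitutes an untrained torchvision AlexNet for the paper's fine-tuned network; `classifier` stands in for the per-class linear SVMs.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

def rcnn_style_scores(image, boxes, classifier):
    """Score region proposals R-CNN style.

    image:      float tensor of shape (3, H, W).
    boxes:      list of (x1, y1, x2, y2) proposal rectangles.
    classifier: callable mapping a feature vector to class scores
                (a stand-in for R-CNN's per-class linear SVMs).
    """
    backbone = models.alexnet(weights=None)  # untrained stand-in network
    backbone.eval()
    scores = []
    with torch.no_grad():
        for (x1, y1, x2, y2) in boxes:
            # Warp each proposal to the network's fixed input size.
            crop = TF.resized_crop(image, y1, x1, y2 - y1, x2 - x1, [224, 224])
            feats = backbone.features(crop.unsqueeze(0)).flatten(1)
            scores.append(classifier(feats))
    return scores
```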

21,729 citations