Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University whose work spans topics including motion estimation and image processing. He has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. His previous affiliations include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
01 Jan 2009
TL;DR: This paper proposes a cell tracking method based on partial contour matching that is capable of robustly tracking partially overlapping cells, while maintaining the identity information of individual cells throughout the process from their initial contact to eventual separation.
Abstract: Automated tracking of individual cells in populations aims at obtaining fine-grained measurements of cell behaviors, including migration (translocation), mitosis (division), apoptosis (death), shape deformation of individual cells, and interactions among cells. Such detailed analysis of cell behaviors requires the capabilities to reliably track cells that may sometimes partially overlap, forming cell clusters, and to distinguish cellular mitosis/fusion from split and merge of cell clusters. Existing cell tracking algorithms lack these capabilities. In this paper, we propose a cell tracking method based on partial contour matching that is capable of robustly tracking partially overlapping cells, while maintaining the identity information of individual cells throughout the process from their initial contact to eventual separation. The method has been applied to a task of tracking human central nervous system (CNS) stem cells in differential interference contrast (DIC) microscopy image sequences, and has achieved 97% tracking accuracy.

45 citations
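The partial contour matching idea above can be illustrated with a minimal sketch: treat each cell boundary as a sampled point sequence and localize a fragment of one contour along another by sliding alignment. This is a hypothetical simplification for illustration (the function name and scoring are assumptions, not the paper's algorithm, which must also handle deformation and identity bookkeeping):

```python
import numpy as np

def best_partial_match(contour_a, contour_b, window=20):
    """Slide a fragment of contour_b along contour_a and return the
    offset minimizing the mean point-to-point distance. Illustrative
    sketch only; real cell contours deform between frames."""
    frag = contour_b[:window]                         # fragment to localize
    n = len(contour_a)
    best_off, best_cost = 0, np.inf
    for off in range(n - window + 1):
        cost = np.mean(np.linalg.norm(contour_a[off:off + window] - frag, axis=1))
        if cost < best_cost:
            best_off, best_cost = off, cost
    return best_off, best_cost
```

Matching only a partial window, rather than the whole boundary, is what lets the correspondence survive when two cells overlap and part of each contour is occluded.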

Proceedings ArticleDOI
01 Jan 1992
TL;DR: This thesis demonstrates that intelligent data acquisition makes possible new approaches to sensing that can significantly improve sensor performance, as the cell-parallel range sensor built here convincingly shows.
Abstract: VLSI technology makes possible a powerful new sensing methodology--the smart sensor. In a smart sensor, transducers are integrated with processing circuitry so that desired information can be intelligently extracted at the point of sensing. Physical limitations force traditional systems to artificially partition sensing and processing functions. By eliminating such partitioning, VLSI smart sensing adds a new dimension to the design of both sensors and sensing algorithms. In this research, a high-performance VLSI range-image sensor has been built using the smart sensing methodology. This sensor measures range via light-stripe triangulation, a mature technology widely used in robotic systems. VLSI-based smart sensing made practical a cell-parallel implementation of the light-stripe method. Experiments with the cell-parallel sensor show that its performance is substantially better than that of traditional light-stripe systems. Range image acquisition time is decreased by two orders of magnitude. Furthermore, the range measurement process is qualitatively different, providing more robust and more accurate 3-D measurements. The success of the cell-parallel sensor can be attributed directly to the use of smart sensing and convincingly demonstrates the power of the technique. One of the most distinguishing features of this work is that it is not just a re-implementation of established algorithms using VLSI. Rather, this thesis demonstrates that intelligent data acquisition makes possible new approaches to sensing that can significantly improve sensor performance.

45 citations
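The light-stripe triangulation underlying the sensor above reduces to intersecting a camera ray with the projected light plane. A minimal sketch of that geometry, under assumed conventions (pinhole camera at the origin, projector offset by a baseline, plane angle measured from the optical axis; not the sensor's actual calibration model):

```python
import numpy as np

def stripe_range(x_pix, focal, baseline, theta):
    """Depth of a lit point seen at image coordinate x_pix (pixels from
    the principal point). Camera ray: X = Z * x/f. Light plane from a
    projector at X = baseline, tilted by theta: X = baseline - Z*tan(theta).
    Solving the intersection for Z gives the range."""
    return baseline / (x_pix / focal + np.tan(theta))
```

In a cell-parallel implementation, each pixel's circuitry need only record *when* the sweeping stripe crosses it; the same intersection computation then yields a full range image in one stripe sweep instead of one column per frame.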

Proceedings Article
18 Aug 1985
TL;DR: This paper shows how the method of differences can be used to directly solve for the parameters relating two cameras viewing the same scene, and presents experimental results demonstrating the accuracy and range of convergence that can be expected from the algorithm.
Abstract: The method of differences refers to a technique for image matching that uses the intensity gradient of the image to iteratively improve the match between the two images. Used in an iterative scheme combined with image smoothing, the method exhibits good accuracy and a wide convergence range. In this paper we show how the technique can be used to directly solve for the parameters relating two cameras viewing the same scene. The resulting algorithm can be used for optical navigation, which has applications in robot arm guidance and autonomous roving vehicle navigation. Because of the regular structure of the algorithm, the prospects of carrying it out with special-purpose hardware for real-time control of a robot seem good. We present experimental results demonstrating the accuracy and range of convergence that can be expected from the algorithm.

45 citations
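The core of the method of differences can be shown in a few lines for the simplest case, pure translation: linearize the second image about the first using the intensity gradient and solve a small least-squares system for the shift. This is a minimal sketch of the matching principle only; the paper extends it to full camera parameters and wraps it in smoothing and iteration:

```python
import numpy as np

def estimate_shift(I, J):
    """One least-squares step of gradient-based matching: linearize
    J(x) ≈ I(x) - ∇I·d and solve for the translation d = (dy, dx)."""
    gy, gx = np.gradient(I)
    G = np.array([[(gy * gy).sum(), (gy * gx).sum()],
                  [(gx * gy).sum(), (gx * gx).sum()]])
    b = np.array([(gy * (I - J)).sum(), (gx * (I - J)).sum()])
    return np.linalg.solve(G, b)

# Smooth test pattern shifted by one pixel along each axis.
y, x = np.mgrid[0:64, 0:64]
I = np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / (2 * 6.0 ** 2))
J = np.roll(I, (1, 1), axis=(0, 1))   # J(y, x) = I(y - 1, x - 1)
d = estimate_shift(I, J)              # ≈ (1, 1)
```

The linearization is only valid for small displacements, which is why the paper pairs it with image smoothing: blurring widens the basin in which the gradient approximation holds, giving the wide convergence range the abstract reports.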

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This paper presents the experimental results of the real-time performance of model-based control algorithms and shows that the computed-torque scheme outperforms the independent joint control scheme as long as there is no torque saturation in the actuators.
Abstract: This paper presents the experimental results of the real-time performance of model-based control algorithms. We compare the computed-torque scheme which utilizes the complete dynamics model of the manipulator with the independent joint control scheme which assumes a decoupled and linear model of the manipulator dynamics. The two manipulator control schemes have been implemented on the CMU DD Arm II with a sampling period of 2 ms. Our initial investigation shows that the computed-torque scheme outperforms the independent joint control scheme as long as there is no torque saturation in the actuators.

44 citations
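The computed-torque scheme compared above can be sketched for a single-link arm: use the dynamics model to cancel gravity and impose linear error dynamics. The model below is an illustrative one-link pendulum with assumed parameters, not the CMU DD Arm II dynamics:

```python
import numpy as np

# 1-DOF arm: I_link * qdd = tau - m*g*l*cos(q)   (gravity torque about joint)
I_link, m, g, l = 0.5, 1.0, 9.81, 0.4
kp, kv = 100.0, 20.0
dt, T = 1e-3, 3.0

q = qd = 0.0
errors = []
for k in range(int(T / dt)):
    t = k * dt
    q_des, qd_des, qdd_des = 0.5 * np.sin(t), 0.5 * np.cos(t), -0.5 * np.sin(t)
    # Computed-torque law: cancel the modeled dynamics so the tracking
    # error obeys e'' + kv*e' + kp*e = 0.
    tau = I_link * (qdd_des + kv * (qd_des - qd) + kp * (q_des - q)) \
          + m * g * l * np.cos(q)
    qdd = (tau - m * g * l * np.cos(q)) / I_link   # plant response
    qd += qdd * dt
    q += qd * dt
    errors.append(abs(q_des - q))
```

With a perfect model the nonlinearity cancels exactly, which is the scheme's advantage over independent joint control; the paper's caveat about torque saturation corresponds to clipping `tau`, after which the cancellation, and the linear error dynamics, no longer hold.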

Proceedings ArticleDOI
09 May 2011
TL;DR: A drift-free attitude estimation method that uses image line segments for the correction of accumulated errors in integrated gyro rates when an unmanned aerial vehicle (UAV) operates in urban areas and introduces a new Kalman update step that directly uses line segments rather than vanishing points.
Abstract: We present a drift-free attitude estimation method that uses image line segments for the correction of accumulated errors in integrated gyro rates when an unmanned aerial vehicle (UAV) operates in urban areas. Since man-made environments generally exhibit strong regularity in structure, a set of line segments that are either parallel or orthogonal to the gravitational direction can provide visual measurements for the absolute attitude from a calibrated camera. Line segments are robustly classified with the assumption that a single vertical vanishing point or multiple horizontal vanishing points exist. In the fusion with gyro angles, we introduce a new Kalman update step that directly uses line segments rather than vanishing points. The simulation and experiment based on urban images at distant views are provided to demonstrate that our method can serve as a robust visual attitude sensor for aerial robot navigation.

44 citations
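The attitude cue exploited above is that the vanishing point of world-vertical line segments is the image of the gravity direction, so the calibrated camera recovers that direction directly. A minimal sketch under assumed conventions (intrinsics `K` hypothetical; the paper's Kalman update on raw line segments is not reproduced here):

```python
import numpy as np

# Hypothetical pinhole intrinsics.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def gravity_from_vp(vp):
    """Camera-frame vertical from the vertical vanishing point (u, v):
    back-project K^{-1} [u, v, 1]^T and normalize. Roll and pitch follow
    from this vector; yaw remains unobservable from verticals alone."""
    d = np.linalg.solve(K, np.array([vp[0], vp[1], 1.0]))
    return d / np.linalg.norm(d)
```

Because this measurement is absolute (tied to scene structure rather than integrated rates), fusing it with the gyro removes the drift that pure integration accumulates.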


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: This paper reviews gradient-based learning methods for handwritten character recognition and introduces graph transformer networks (GTNs), which allow multimodule recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
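The convolutional networks described above are built from one primitive applied at every image location: a small learned filter correlated with a local window. A minimal sketch of that primitive (valid-mode, single channel, no learned weights or nonlinearity):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation: the per-location operation a
    convolutional layer applies, shown naively for clarity."""
    kh, kw = kernel.shape
    H = image.shape[0] - kh + 1
    W = image.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Sharing the same kernel across all positions is what gives these networks their tolerance to the 2D shape variability the abstract highlights, while keeping the parameter count small.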

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
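The first stage of the HOG pipeline above, per-cell orientation histograms of gradient magnitude, can be sketched directly; block normalization and the SVM stage are omitted, and the function name is an assumption for illustration:

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Unsigned-orientation gradient histograms over non-overlapping
    cells: the raw material of a HOG descriptor (normalization omitted)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned, [0, 180)
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    H, W = image.shape[0] // cell, image.shape[1] // cell
    hist = np.zeros((H, W, bins))
    for i in range(H * cell):
        for j in range(W * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist
```

The abstract's findings map onto these choices: `bins=9` is the fine orientation binning, `cell=8` the relatively coarse spatial binning, and the omitted overlapping-block contrast normalization is the stage reported as essential for good results.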

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper introduces R-CNN, which combines high-capacity CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
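The mAP figures quoted above rest on a simple overlap criterion between predicted and ground-truth boxes: intersection over union. A minimal helper (illustrative, not R-CNN source code; boxes assumed as `(x1, y1, x2, y2)` corners):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2)
    boxes: the overlap test behind mAP-style detection scoring."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection typically counts as correct when its IoU with a ground-truth box exceeds a threshold (0.5 in the PASCAL VOC protocol the abstract references); precision-recall over these decisions yields the reported mAP.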