Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research topics including motion estimation and image processing, has an h-index of 147, and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
05 Jan 2011
TL;DR: Several algorithms for cell image analysis including microscopy image restoration, cell event detection and cell tracking in a large population are presented, integrated into an automated system capable of quantifying cell proliferation metrics in vitro in real-time.
Abstract: We present several algorithms for cell image analysis, including microscopy image restoration, cell event detection, and cell tracking in a large population. The algorithms are integrated into an automated system capable of quantifying cell proliferation metrics in vitro in real time. This offers unique opportunities for biological applications such as efficient discovery of cell behavior in response to different cell culturing conditions and adaptive experiment control. We quantitatively evaluated our system's performance on 16 microscopy image sequences, with accuracy satisfactory for biologists' needs. We have also developed a public website compatible with the system's local user interface, thereby allowing biologists to conveniently check their experiment progress online. The website will serve as a community resource that allows other research groups to upload their cell images for analysis and comparison.
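As a hedged illustration of the tracking step only, the sketch below links detected cell centroids between consecutive frames by optimal one-to-one assignment with a distance gate. The function name, the gate value, and the use of SciPy's Hungarian solver are assumptions for illustration; the paper's actual tracker is considerably more sophisticated and also handles cell events such as division.

```python
# Minimal frame-to-frame cell linking sketch (not the authors' algorithm).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def link_cells(prev_centroids, curr_centroids, max_dist=25.0):
    """Match detected cell centroids between consecutive frames.

    prev_centroids, curr_centroids: (N, 2) and (M, 2) arrays of (x, y).
    Returns a list of (prev_index, curr_index) pairs for linked cells.
    """
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return []
    cost = cdist(prev_centroids, curr_centroids)      # pairwise distances
    rows, cols = linear_sum_assignment(cost)          # optimal 1-to-1 matching
    # Reject matches that jump farther than a plausible cell displacement.
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Example: two frames with three detections each; the third cell "disappears".
frame0 = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 15.0]])
frame1 = np.array([[12.0, 13.0], [43.0, 40.0], [120.0, 90.0]])
print(link_cells(frame0, frame1))   # -> [(0, 0), (1, 1)]
```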

116 citations

Journal ArticleDOI
TL;DR: The Informedia Digital Video Library Project is developing new technologies for creating full-content search and retrieval digital video libraries and is creating a testbed that will enable K-12 students to access, explore, and retrieve science and mathematics materials from the digital video library.
Abstract: The Informedia Digital Video Library Project is developing new technologies for creating full-content search and retrieval digital video libraries. Working in collaboration with WQED Pittsburgh, the project is creating a testbed that will enable K-12 students to access, explore, and retrieve science and mathematics materials from the digital video library. The library will initially contain 1,000 hours of video from the archives of project partners, including WQED, Fairfax Co. (VA) Schools, and BBC-produced video courses. (Industrial partners include Digital Equipment Corp., Bell Atlantic, Intel Corp., and Microsoft.) This library will be installed at Winchester Thurston School, an independent K-12 school in Pittsburgh.

115 citations

Proceedings ArticleDOI
16 Jul 2003
TL;DR: Using the CMU PIE database, a large facial image database, a probabilistic model of how facial features change as the pose changes is developed, which achieves a better recognition rate than conventional face recognition methods over a much larger range of pose.
Abstract: Current automatic facial recognition systems are not robust against changes in illumination, pose, facial expression, and occlusion. In this paper, we propose a probabilistic face recognition algorithm that addresses the problem of pose change by taking into account the pose difference between probe and gallery images. Using the CMU PIE database, a large facial image database which contains images of the same set of people taken from many different angles, we have developed a probabilistic model of how facial features change as the pose changes. This model makes our face recognition system more robust to pose changes in the probe image. The experimental results show that this approach achieves a better recognition rate than conventional face recognition methods over a much larger range of pose. For example, when the gallery contains only frontal-face images and the pose of the probe image varies, the recognition rate drops by less than 10% until the probe pose differs by more than 45 degrees, whereas the recognition rate of a PCA-based method begins to drop at a pose difference as small as 10 degrees, and that of a representative commercial system at 30 degrees.
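A toy sketch of the general idea, not the paper's actual model: score a probe against each gallery identity with a Gaussian likelihood on the feature difference whose variance grows with the pose gap, so a given feature displacement is penalized less harshly when the poses differ more. The feature representation, pose bins, and variance values below are invented for illustration.

```python
# Pose-conditioned probabilistic matching, illustrative only.
import numpy as np

# Assumed per-pose-difference variance of feature displacements, which in the
# paper's setting would be learned offline from a database such as CMU PIE.
POSE_VARIANCE = {0: 1.0, 15: 2.5, 30: 6.0, 45: 14.0}

def log_likelihood(probe_feat, gallery_feat, pose_diff_deg):
    """Log-probability that probe and gallery show the same person,
    under an isotropic Gaussian on the feature difference."""
    nearest_bin = min(POSE_VARIANCE, key=lambda p: abs(p - pose_diff_deg))
    var = POSE_VARIANCE[nearest_bin]
    diff = probe_feat - gallery_feat
    return -0.5 * (np.dot(diff, diff) / var + diff.size * np.log(2 * np.pi * var))

def recognize(probe_feat, probe_pose, gallery):
    """gallery: dict identity -> (features, pose in degrees). Returns best identity."""
    return max(gallery,
               key=lambda who: log_likelihood(probe_feat, gallery[who][0],
                                              abs(probe_pose - gallery[who][1])))

# Example with made-up 4-D "features" and a frontal gallery, 30-degree probe.
gallery = {"alice": (np.array([1.0, 2.0, 0.5, 1.2]), 0),
           "bob":   (np.array([3.0, 0.1, 2.2, 0.4]), 0)}
probe = np.array([1.2, 1.7, 0.6, 1.0])
print(recognize(probe, 30, gallery))   # -> alice
```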

115 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe the design concept of a new robot based on the direct-drive method using rare-earth DC torque motors, where the arm links are directly coupled to the motor rotors.
Abstract: This paper describes the design concept of a new robot based on the direct-drive method using rare-earth DC torque motors. Because these motors have high torque, light weight, and compact size, we can construct robots with far better performance than those presently available. For example, we can eliminate all the transmission mechanisms between the motors and their loads, such as reducers and chain belts, and construct a simple mechanism (direct drive) where the arm links are directly coupled to the motor rotors. This elimination can lead to excellent performance: no backlash, low friction, low inertia, low compliance, and high reliability, all of which are suited to high-speed, high-precision robots. First, we propose a basic configuration of direct-drive robots. Second, a general procedure for designing direct-drive robots is shown, and the feasibility of direct drive for robot actuation is discussed in terms of the weights and torques of the joints. One of the difficulties in designing direct-drive robots is that the motors that drive the wrist joints are loads for the motors that drive the elbow joints, which in turn are loads for the motors at the shoulders; reducing this increasing series of loads is an essential issue in designing practical robots. We analyze the series of joint masses for a simplified kinematic model of direct-drive robots, and show how the loads are reduced significantly by using rare-earth motors with light weight and high torque. We also discuss optimum kinematic structures with minimum arm weight. Finally, we describe the direct-drive robotic manipulator (CMU arm) developed at Carnegie-Mellon University, and verify the design theory.
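The "series of loads" argument lends itself to a quick numeric check. The sketch below computes the worst-case static gravity torque each joint must hold when every outboard link and motor is carried with the arm outstretched; the masses and lengths are invented for illustration and are not the CMU arm's actual parameters.

```python
# Illustration of how joint loads compound from wrist to shoulder.
G = 9.81  # m/s^2

# Links listed from the shoulder outward:
# (joint name, link mass kg, link length m, mass kg of the motor mounted at this joint)
arm = [
    ("shoulder", 3.5, 0.45, 9.0),
    ("elbow",    2.0, 0.40, 4.0),
    ("wrist",    0.8, 0.15, 1.5),
]

for i, (name, _, _, _) in enumerate(arm):
    torque = 0.0
    offset = 0.0   # distance from joint i out to the joint currently considered
    for j in range(i, len(arm)):
        _, m_link, length, m_motor = arm[j]
        if j > i:
            # The motor driving link j sits at joint j, `offset` metres out.
            torque += G * m_motor * offset
        # The link's own mass acts roughly at its mid-point.
        torque += G * m_link * (offset + length / 2)
        offset += length
    print(f"{name:8s} motor must hold ~{torque:5.1f} N*m with the arm outstretched")
```

With these made-up numbers the wrist holds under 1 N*m, the elbow about 13 N*m, and the shoulder nearly 60 N*m, most of it due to outboard motor mass, which is the paper's motivation for light, high-torque rare-earth motors.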

113 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe an array of cells, each of which contains a photodiode and the analog signal-processing circuitry needed for light-stripe range finding; prototype circuits were fabricated through MOSIS in a 2-µm CMOS p-well, double-metal, double-poly process.
Abstract: The authors present experimental results from an array of cells, each of which contains a photodiode and the analog signal-processing circuitry needed for light-stripe range finding. Prototype circuits were fabricated through MOSIS in a 2-µm CMOS p-well, double-metal, double-poly process. This design builds on some of the ideas that have been developed for ICs that integrate signal-processing circuitry with photosensors. In the case of light-stripe range finding, the increase in cell complexity from sensing only to sensing and processing makes a modification of the operational principle of range finding practical, which in turn results in a dramatic improvement in performance. The IC array of photosensor and analog signal-processor cells acquires 1000 frames of light-stripe range data per second, two orders of magnitude faster than conventional light-stripe range-finding methods. The highly parallel range-finding algorithm used requires that the output of each photosensor site be continuously monitored. Prototype high-speed range-finding systems have been built using a 5×5 array and a 28×32 array of these sensing elements.
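To make the cell-parallel principle concrete, here is a hedged software sketch with simplified geometry of what each smart cell contributes: it latches the time at which the sweeping stripe's intensity peaks, and that time, together with the known stripe sweep and baseline, yields depth by triangulation. The constants and the linear sweep model are assumptions for illustration, not the chip's actual analog implementation.

```python
# Simplified light-stripe triangulation from per-cell peak times.
import numpy as np

BASELINE   = 0.30                 # projector-to-sensor baseline in metres (assumed)
FOCAL      = 0.016                # effective focal length in metres (assumed)
SWEEP_RATE = np.radians(60.0)     # stripe angular sweep rate, rad/s (assumed)
THETA0     = np.radians(20.0)     # stripe angle at t = 0 (assumed)

def peak_time(intensity_samples, dt):
    """What each cell does in analog hardware: remember when its photodiode
    signal peaked during the stripe sweep."""
    return float(np.argmax(intensity_samples)) * dt

def depth_from_peak_time(t_peak, x_image):
    """Triangulate depth Z (metres) for a cell at image coordinate x_image
    (metres from the optical axis, toward the projector).

    With both angles measured from the optical axis toward each other:
        projector ray:  x = B - Z * tan(theta)
        camera ray:     x = Z * tan(alpha),  tan(alpha) = x_image / f
        =>              Z = B / (tan(theta) + tan(alpha))
    """
    theta = THETA0 + SWEEP_RATE * t_peak      # assumed linear stripe sweep
    return BASELINE / (np.tan(theta) + x_image / FOCAL)

# Example: a cell whose intensity peaked 0.4 s into the sweep.
samples = [0.1, 0.2, 0.9, 0.3, 0.1]            # made-up photodiode readings
t = peak_time(samples, dt=0.2)                 # -> 0.4 s
print(depth_from_peak_time(t, x_image=0.002))  # depth in metres (~0.28 here)
```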

112 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters, and a new learning paradigm, graph transformer networks (GTN), is proposed to train multimodule recognition systems globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
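As a hedged, modern-notation sketch of the convolutional-network part of this pipeline (not the paper's exact LeNet-5, and written in PyTorch, which long postdates the paper), the following small network and single back-propagation step show the gradient-based learning setup the abstract describes; the layer sizes are a common LeNet-style approximation.

```python
# LeNet-style convolutional network and one gradient step, illustrative only.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):                     # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

# One back-propagation step on a dummy batch of "digits".
model = SmallConvNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 1, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
```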

42,067 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as mentioned in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
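A hedged sketch of one Inception module, the architecture's basic building block as described above: parallel 1×1, 3×3, and 5×5 convolutions plus a pooled branch, with 1×1 reductions to keep the computational budget in check, concatenated along the channel axis. Written in PyTorch for convenience; the channel counts below follow a commonly cited GoogLeNet configuration but are illustrative here.

```python
# Single Inception module, illustrative channel counts.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # All branches preserve spatial size, so their outputs concatenate
        # along the channel dimension.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: 28x28 feature map with 192 channels -> 256 channels out.
x = torch.randn(1, 192, 28, 28)
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(block(x).shape)   # torch.Size([1, 256, 28, 28])
```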

40,257 citations

Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition, adopting linear SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
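To make the descriptor pipeline concrete, the sketch below computes a HOG descriptor for a 64×128 detection window using scikit-image's implementation rather than the authors' original code; the parameter values follow the 9-bin, 8×8-cell, 2×2-block setup described in the paper, and the random image is a stand-in for a real pedestrian window.

```python
# HOG descriptor for one detection window, via scikit-image.
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)          # stand-in for a 64x128 pedestrian window
descriptor = hog(
    image,
    orientations=9,                      # fine orientation binning
    pixels_per_cell=(8, 8),              # relatively coarse spatial binning
    cells_per_block=(2, 2),              # overlapping blocks for normalization
    block_norm="L2-Hys",                 # local contrast normalization
)
print(descriptor.shape)                  # (3780,) for a 64x128 window
```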

31,952 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012, achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
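A schematic sketch of the pipeline the abstract describes, not the released R-CNN code: generate bottom-up region proposals, warp each proposal to a fixed resolution, push it through a CNN feature extractor, and hand the features to downstream per-class classifiers. The proposal function is a placeholder, the backbone is an arbitrary untrained torchvision model standing in for the pre-trained, fine-tuned network, and all names are illustrative.

```python
# Region proposals + CNN features, schematic only.
import torch
import torchvision

# Untrained stand-in for the CNN feature extractor; the actual method uses a
# network pre-trained on ImageNet and fine-tuned on the detection data.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # keep the 512-d pooled features
backbone.eval()

def propose_regions(image):
    """Placeholder for bottom-up proposals (e.g. selective search): (x0, y0, x1, y1)."""
    h, w = image.shape[1:]
    return [(0, 0, w // 2, h // 2), (w // 4, h // 4, w, h)]   # dummy boxes

@torch.no_grad()
def rcnn_features(image, boxes, size=224):
    """Warp each proposal to a fixed size and extract CNN features for it."""
    crops = []
    for x0, y0, x1, y1 in boxes:
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)            # (1, C, h, w)
        crops.append(torch.nn.functional.interpolate(
            crop, size=(size, size), mode="bilinear", align_corners=False))
    return backbone(torch.cat(crops))                          # (num_boxes, 512)

image = torch.rand(3, 480, 640)                                # dummy RGB image
feats = rcnn_features(image, propose_regions(image))
print(feats.shape)                                             # torch.Size([2, 512])
# In the full method, per-class linear classifiers score these features,
# followed by non-maximum suppression and bounding-box regression.
```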

21,729 citations