Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including motion estimation and image processing. The author has an h-index of 147 and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Book Chapter
03 Jul 2011
TL;DR: Restoration from multiple shear directions decreases the ambiguity among different individual restorations and yields restored DIC microscopy images that are directly proportional to the specimen's physical measurements, making them amenable to microscopy image analysis such as cell segmentation.
Abstract: Differential Interference Contrast (DIC) microscopy is a non-destructive imaging modality that has been widely used by biologists to capture microscopy images of live biological specimens. However, as a qualitative technique, DIC microscopy records a specimen's physical properties indirectly by mapping the gradient of the specimen's optical path length (OPL) into the image intensity. In this paper, we propose to restore DIC microscopy images by quantitatively estimating the specimen's OPL from a collection of DIC images captured from multiple shear directions. We acquire the DIC images by rotating the specimen dish on the microscope stage and design an Iterative Closest Point algorithm to register the images. The shear directions of the image dataset are automatically estimated by our coarse-to-fine grid search algorithm. We develop a direct solver on a regularized quadratic cost function to restore DIC microscopy images. The restoration from multiple shear directions decreases the ambiguity among different individual restorations. The restored DIC images are directly proportional to the specimen's physical measurements, which makes them amenable to microscopy image analysis such as cell segmentation.
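The regularized quadratic restoration described above can be sketched as a linear least-squares problem: model each DIC image as the directional derivative of the OPL field along its shear direction, then solve the normal equations directly. A minimal NumPy sketch, assuming forward-difference derivative operators and simple Tikhonov regularization (the function names, boundary handling, and regularizer are illustrative, not the paper's exact formulation):

```python
import numpy as np

def shear_derivative(n, theta):
    # Dense matrix approximating the directional derivative along shear
    # angle theta on an n x n grid, using forward differences.
    N = n * n
    D = np.zeros((N, N))
    cx, cy = np.cos(theta), np.sin(theta)
    for y in range(n - 1):
        for x in range(n - 1):
            i = y * n + x
            D[i, i] = -(cx + cy)
            D[i, i + 1] = cx   # d/dx component
            D[i, i + n] = cy   # d/dy component
    return D

def restore(dic_images, thetas, lam=1e-2):
    """Least-squares restoration of the OPL field f from DIC images g_k,
    modeled as g_k = D_k f + noise, with Tikhonov regularization.
    Solves (sum_k D_k^T D_k + lam * I) f = sum_k D_k^T g_k directly."""
    n = dic_images[0].shape[0]
    N = n * n
    A = lam * np.eye(N)
    b = np.zeros(N)
    for g, th in zip(dic_images, thetas):
        D = shear_derivative(n, th)
        A += D.T @ D
        b += D.T @ g.ravel()
    return np.linalg.solve(A, b).reshape(n, n)
```

Combining several shear directions makes the system better conditioned than any single direction alone, which is the intuition behind the reduced ambiguity the paper reports.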

17 citations

01 Jan 1999
TL;DR: This work presents progress toward a robust system that detects and tracks facial features, both permanent and transient, in nearly frontal image sequences by combining color, shape, edge, and motion information.
Abstract: Accurately and robustly tracking facial features must cope with the large variation in appearance across subjects and the combination of rigid and non-rigid motion. We present work toward a robust system to detect and track facial features, including both permanent (e.g. mouth, eyes, and brows) and transient (e.g. furrows and wrinkles) features, in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling different facial features. Based on these multi-state models, and without any artificial enhancement, we detect and track the facial features, including mouth, eyes, brows, cheeks, and their related wrinkles and facial furrows, by combining color, shape, edge, and motion information. Given the initial location of the facial features in the first frame, the facial features can be detected or tracked in the remaining frames automatically. Our system is tested on 500 image sequences from the Pittsburgh-Carnegie Mellon University (Pitt-CMU) Facial Expression Action Unit (AU) Coded Database, which includes image sequences from children and adults of European, African, and Asian ancestry. Accurate tracking results are obtained in 98% of the image sequences.
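The cue-combination idea can be illustrated with a toy per-pixel fusion: normalize each cue map and take a weighted sum, then read off the peak. This is only a sketch of the fusion principle; the paper's multi-state component models are far richer, and the weights and map names here are invented for illustration:

```python
import numpy as np

def combine_cues(color_map, edge_map, motion_map, weights=(0.5, 0.3, 0.2)):
    """Toy fusion of per-pixel cue maps.  Each map is min-max normalized
    to [0, 1], combined as a weighted sum, and the peak of the combined
    score gives a candidate feature location."""
    maps = [color_map, edge_map, motion_map]
    norm = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in maps]
    score = sum(w * m for w, m in zip(weights, norm))
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return (y, x), score
```

A location supported by several cues at once scores higher than one supported by a single cue, which is the basic robustness argument for combining color, shape, edge, and motion evidence.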

17 citations

01 May 1989
TL;DR: In this article, the authors describe the progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1988 and present a road-following system that uses active scanning with a laser rangefinder.
Abstract: This report describes the progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1988. This research was primarily sponsored by DARPA as part of the Strategic Computing Initiative. Portions of this research were also partially supported by the National Science Foundation and Digital Corporation. In the four years of the project, we have built perception modules for following roads, detecting obstacles, mapping terrain, and recognizing objects. Together with our sister 'Integration' contract, we have built systems that drive mobile robots along roads and cross country, and have gained valuable insights into viable approaches for outdoor mobile robot research. This work is briefly summarized in Chapter 1 of this report. Specifically, in 1988 we completed one color vision system for finding roads, began two others that handle difficult lighting and structured public roads and highways, and built a road-following system that uses active scanning with a laser rangefinder. We used 3-D information to build elevation maps for cross-country path planning, and used maps to retraverse a route. Progress in 1988 on these projects is described briefly in Chapter 1, and in more detail in the following chapters.

17 citations

Patent
26 Jan 1990
TL;DR: In this paper, an integrated circuit is presented comprising a sensor that produces a sensor signal corresponding to received energy, together with a processing element connected to the sensor that receives the sensor signal only from that sensor and produces a corresponding processing signal.
Abstract: The present invention pertains to an integrated circuit. The integrated circuit comprises a sensor which produces a sensor signal corresponding to energy received. The integrated circuit is also comprised of a processing element connected to the sensor which receives the sensor signal only from the sensor and produces a processing signal corresponding to the sensor signal. Additionally, there is a memory connected to the processing element for receiving the processing signal and storing the processing signal. In a preferred embodiment, the integrated circuit is also comprised of a buffer connected to the sensor and the processing element for receiving the sensor signal and buffering the sensor signal for reception by the processing element. The sensor can include a photodiode which produces a sensor signal corresponding to the light energy it receives. In a more preferred embodiment, the integrated circuit includes a photosensitive array comprised of cells for use in a light stripe rangefinder wherein a plane of light is moved across a scene. Each cell is able to detect and remember the time at which it observes the light intensity thereon.
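The per-cell behavior in the light-stripe rangefinder can be simulated in software: for a stack of frames captured while the light plane sweeps the scene, each cell remembers the frame index at which it saw its peak intensity, and that time maps to depth via triangulation. A minimal sketch, assuming a frame stack and a hypothetical detection threshold (both illustrative, not from the patent):

```python
import numpy as np

def peak_times(frames, threshold=0.5):
    """For each cell, return the frame index of maximum observed intensity,
    or -1 for cells the stripe never illuminated above the threshold."""
    stack = np.asarray(frames)             # shape (T, H, W)
    t_peak = stack.argmax(axis=0)          # time of peak intensity per cell
    seen = stack.max(axis=0) >= threshold  # cells the stripe actually crossed
    return np.where(seen, t_peak, -1)
```

The hardware version computes this in parallel in every cell as the sweep happens, which is what makes the single-chip rangefinder fast.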

17 citations


Cited by
Journal Article
01 Jan 1998
TL;DR: In this article, graph transformer networks (GTNs) are proposed, allowing multimodule document recognition systems to be trained globally with gradient-based methods; given an appropriate architecture, gradient-based learning can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
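The core operation that lets convolutional networks cope with 2D shape variability is the sliding local filter. A minimal sketch of one "valid" convolution layer in NumPy (a cross-correlation, as deep learning frameworks implement it; this is an illustration of the operation, not LeNet itself):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image and
    take the elementwise product-sum at each position.  Weight sharing
    across positions is what gives convolutional nets their shift tolerance."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out
```

Stacking such layers with nonlinearities and subsampling, and training the shared kernels by back-propagation, yields the character recognizers the paper benchmarks.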

42,067 citations

Proceedings Article
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection.
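The multi-scale intuition behind Inception is architectural: apply several filter operations of different receptive-field sizes in parallel and concatenate their outputs along the channel axis. A schematic sketch where each branch is just a callable on a (channels, height, width) array (real branches would be learned 1x1/3x3/5x5 convolutions and pooling; this only shows the wiring):

```python
import numpy as np

def inception_block(x, branches):
    """Apply parallel branches to the same input and stack their channel
    outputs.  Each branch maps a (C, H, W) array to a (C_i, H, W) array;
    the block's output has sum(C_i) channels."""
    outs = [branch(x) for branch in branches]
    return np.concatenate(outs, axis=0)  # concatenate along channels
```

Because the branches share one input and only their channel counts add up, depth and width can grow while the per-block computational budget stays controlled, which is the resource-utilization point the abstract emphasizes.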

40,257 citations

Journal Article


08 Dec 2001-BMJ
TL;DR: There is, the author reflects, something ethereal about i, the square root of minus one: at first it seemed an odd beast, an intruder hovering on the edge of reality, and familiarity only intensified the sense of its surreal nature.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
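The stages the abstract names, fine-scale gradients, fine orientation binning, and normalization, can be sketched for a single HOG cell. A minimal NumPy version (full HOG adds coarse spatial binning into cells, block grouping, and overlapping-block normalization; the bin count and epsilon here follow common practice, not necessarily the paper's exact settings):

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Orientation histogram for one HOG cell: finite-difference gradients,
    unsigned orientation binned into n_bins over [0, pi), votes weighted
    by gradient magnitude, then L2 normalization."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)
```

Magnitude-weighted voting makes strong edges dominate the descriptor, and normalization gives the contrast invariance the paper finds essential for good detection.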

31,952 citations

Proceedings Article
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
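The two insights combine into a simple pipeline: propose regions bottom-up, extract a fixed-length CNN feature from each, and classify the features. A schematic sketch where all components are placeholder callables standing in for the paper's parts (selective search, the fine-tuned CNN, per-class SVMs); the names and the score threshold are illustrative:

```python
import numpy as np

def rcnn_detect(image, propose, extract_features, classify, score_thresh=0.5):
    """Schematic R-CNN loop: for each proposed box, crop the region,
    compute a feature vector, classify it, and keep confident detections."""
    detections = []
    for (x0, y0, x1, y1) in propose(image):
        crop = image[y0:y1, x0:x1]
        feat = extract_features(crop)   # fixed-length feature vector
        label, score = classify(feat)
        if score >= score_thresh:
            detections.append(((x0, y0, x1, y1), label, score))
    return detections
```

Decoupling the proposal, feature, and classifier stages is what lets the CNN be pre-trained on an auxiliary task and then fine-tuned on scarce detection data, the second insight the abstract highlights.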

21,729 citations