Author

Takeo Kanade

Bio: Takeo Kanade is an academic researcher at Carnegie Mellon University. He has contributed to research topics including motion estimation and image processing, has an h-index of 147, and has co-authored 799 publications receiving 103,237 citations. Previous affiliations of Takeo Kanade include the National Institute of Advanced Industrial Science and Technology and Hitachi.


Papers
Proceedings ArticleDOI
12 May 1992
TL;DR: The authors designed and built the robot and a gravity compensation system to permit simulated zero-gravity experiments, and developed the control system for the SM² that provides operator-friendly real-time monitoring and robust control for 3D locomotion movements of the flexible robot.
Abstract: Self-Mobile Space Manipulator (SM²) is a simple, 5-DOF (degree-of-freedom), 1/3-scale, laboratory version of a robot designed to walk on the trusswork and other exterior surfaces of Space Station Freedom. It will be capable of routine tasks such as inspection, parts transportation, and simple maintenance procedures. The authors have designed and built the robot and gravity compensation system to permit simulated zero-gravity experiments. They have developed the control system for the SM², including the control hardware architecture and operating system, a control station with various interfaces, a hierarchical control structure, a multiphase control strategy for step motion, and various low-level controllers. The system provides operator-friendly real-time monitoring and robust control for 3D locomotion movements of the flexible robot.

29 citations

Patent
23 Apr 1999
TL;DR: In this paper, a CCD camera produces video image data of a license plate photographed at the front or rear portion of a motor vehicle, a literal region extracting device locates the character region of the plate, and a literal recognition device recognizes letters from the literal image (571) of the region obtained from the literal region extracting device.
Abstract: In a license plate information reader device (A) for motor vehicles, a CCD camera (1) is provided to produce video image data (11) involving a license plate obtained by photographing a front and rear portion of a motor vehicle. An A/D converter (3) produces a digital multivalue image data (31) by A/D converting the video image data (11). A license plate extracting device (4) is provided to produce a digital multivalue image data (41) corresponding to an area in which the license plate occupies. A literal region extracting device (5) extracts a literal positional region of a letter sequence of the license plate based on the image obtained from the license plate extracting device (4). A literal recognition device (6) is provided to recognize a letter from a literal image (571) of the literal positional region obtained from the literal region extracting device (5). An image emphasis device is provided to emphasize the literal image (571) of the literal positional region by replacing a part of the literal region extracting device (5) with a filter net which serves as a neural network.

29 citations
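The extraction stage of the pipeline above can be illustrated with a toy sketch: binarize the plate image, then split it into character regions with a vertical projection profile. This is a generic stand-in for the patented extractor; the image, threshold, and function names below are invented for illustration.

```python
import numpy as np

def segment_characters(gray, thresh=128):
    """Binarize a toy plate image and split it into character regions
    using a vertical projection profile (illustrative stand-in for the
    patent's literal-region extractor)."""
    binary = gray < thresh                  # dark letters on a light plate
    profile = binary.sum(axis=0)            # amount of "ink" per column
    regions, start = [], None
    for x, ink in enumerate(profile):
        if ink and start is None:
            start = x                       # character begins
        elif not ink and start is not None:
            regions.append((start, x))      # character ends
            start = None
    if start is not None:
        regions.append((start, len(profile)))
    return regions

# Toy "plate": two dark vertical bars (characters) on a light background.
plate = np.full((10, 12), 255)
plate[:, 2:4] = 0
plate[:, 7:9] = 0
print(segment_characters(plate))   # → [(2, 4), (7, 9)]
```

A real reader would follow this with per-region recognition, as the patent's literal recognition device does.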

Book ChapterDOI
01 Oct 2012
TL;DR: The authors analyze the image formation process of phase contrast images and propose an image restoration method based on the dictionary representation of diffraction patterns that can restore phase contrast images containing cells with different optical natures and provides promising results on cell stage classification.
Abstract: The restoration of microscopy images makes the segmentation and detection of cells easier and more reliable, which facilitates automated cell tracking and cell behavior analysis. In this paper, the authors analyze the image formation process of phase contrast images and propose an image restoration method based on the dictionary representation of diffraction patterns. By formulating and solving a min-l1 optimization problem, each pixel is restored into a feature vector corresponding to the dictionary representation. Cells in the images are then segmented by the feature vector clustering. In addition to segmentation, since the feature vectors capture the information on the phase retardation caused by cells, they can be used for cell stage classification between intermitotic and mitotic/apoptotic stages. Experiments on three image sequences demonstrate that the dictionary-based restoration method can restore phase contrast images containing cells with different optical natures and provide promising results on cell stage classification.

29 citations
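The min-l1 step of the restoration above can be sketched with a generic iterative soft-thresholding (ISTA) solver for min 0.5·||Dx − y||² + λ·||x||₁. The dictionary D and observation y below are toy stand-ins, not real diffraction-pattern data.

```python
import numpy as np

def ista(D, y, lam=0.1, steps=200):
    """Iterative soft-thresholding for the l1-regularized least-squares
    problem.  A generic solver, used here only to illustrate the kind of
    dictionary-based restoration the paper formulates."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ x - y)               # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

D = np.eye(4)        # trivial dictionary: the solution is soft-thresholded y
y = np.array([1.0, -0.5, 0.05, 0.0])
x = ista(D, y, lam=0.1)
print(np.round(x, 2))   # small entries are shrunk to exactly zero
```

With the identity dictionary the result is [0.9, -0.4, 0.0, 0.0]: each coefficient is pulled toward zero by λ, which is what makes the per-pixel feature vectors sparse.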

01 Jan 1998
TL;DR: A computer vision system that automatically recognizes individual action units (AUs) or AU combinations using Hidden Markov Models and estimates expression intensity is developed.
Abstract: Facial expression provides sensitive cues about emotion and plays a major role in interpersonal and human-computer interaction. Most facial expression recognition systems have focused on only six basic emotions and their concomitant prototypic expressions posed by a small set of subjects. In reality, humans are capable of producing thousands of expressions that vary in complexity, intensity, and meaning. To represent the full range of facial expression, we developed a computer vision system that automatically recognizes individual action units (AUs) or AU combinations using Hidden Markov Models and estimates expression intensity. Three modules are used to extract facial expression information: (1) facial feature point tracking, (2) dense flow tracking with principal component analysis (PCA), and (3) high gradient component detection (i.e., furrow detection). The average recognition rate of upper and lower face expressions is 85% and 88%, respectively, using feature point tracking, 93% (upper face) using dense flow tracking with PCA, and 85% and 81%, upper and lower face respectively, using high gradient component detection.

28 citations
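The HMM decoding behind AU recognition can be sketched with a tiny Viterbi example. The two states, the probabilities, and the observation sequence below are invented for illustration; the paper trains its HMMs on real feature-tracking measurements.

```python
import numpy as np

# A toy two-state HMM ("neutral" vs "AU active") decoded with Viterbi.
states = ["neutral", "active"]
start = np.log([0.8, 0.2])
trans = np.log([[0.9, 0.1],
                [0.2, 0.8]])
emit  = np.log([[0.9, 0.1],    # P(obs | neutral): obs 0 = low motion, 1 = high
                [0.2, 0.8]])   # P(obs | active)

def viterbi(obs):
    """Most likely state sequence for a list of discrete observations."""
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans        # scores[i, j]: come from i into j
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 0]))
# → ['neutral', 'neutral', 'active', 'active', 'neutral']
```

The transition penalties smooth the decoded sequence, so a brief burst of high-motion observations is read as one contiguous "AU active" interval.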

Proceedings ArticleDOI
25 Feb 1987
TL;DR: The first system that uses the CMU Blackboard for scheduling, geometric transformations, and inter- and intra-machine communication is completed; perception now uses adaptive color classification for road tracking and scanning laser rangefinder data for obstacle detection.
Abstract: Recent work on autonomous navigation at Carnegie Mellon spans hardware improvements, computational speed, new perception algorithms, and systems issues. We have a new vehicle, the Navlab, that has room for onboard researchers and computers, and that carries a full suite of sensors. We have ported several of our algorithms to the Warp, an experimental supercomputer capable of performing 100 million floating point operations per second. Our perception now uses adaptive color classification for road tracking, and scanning laser rangefinder data for obstacle detection. We have completed the first system that uses the CMU Blackboard for scheduling, geometric transformations, and inter- and intra-machine communication.

28 citations
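The "adaptive color classification for road tracking" idea can be sketched as a two-class Gaussian color classifier: label a pixel road or off-road by which class model gives it the higher likelihood. The color statistics below are invented; the real system re-estimates them as the vehicle drives.

```python
import numpy as np

# Invented RGB statistics for the two classes (mean vector, covariance).
road_mean, road_cov       = np.array([110., 105., 100.]), np.eye(3) * 60.
offroad_mean, offroad_cov = np.array([60., 130., 50.]),   np.eye(3) * 200.

def log_likelihood(px, mean, cov):
    """Gaussian log-likelihood of a pixel, up to a shared constant."""
    d = px - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d + np.log(np.linalg.det(cov)))

def is_road(px):
    """Maximum-likelihood decision between the two color models."""
    return log_likelihood(px, road_mean, road_cov) > \
           log_likelihood(px, offroad_mean, offroad_cov)

print(is_road(np.array([112., 108., 98.])))   # grayish asphalt pixel → True
print(is_road(np.array([55., 140., 45.])))    # green vegetation pixel → False
```

Making the classifier adaptive then amounts to refitting the means and covariances from recently classified road pixels.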


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and a graph transformer network (GTN) is proposed for globally training multimodule recognition systems.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
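The convolutional layers at the heart of this work reduce to a simple sliding-window operation. The sketch below shows a plain "valid" 2-D cross-correlation on a toy step image; there is no learning loop, and the kernel and names are invented for illustration.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation, the core operation of a
    convolutional layer (naive loop version for clarity)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# A horizontal difference kernel responds at the boundary of a step image.
img = np.zeros((4, 4)); img[:, 2:] = 1.0
edge = np.array([[-1.0, 1.0]])
print(conv2d_valid(img, edge))   # each row → [0. 1. 0.]
```

Training replaces the hand-picked kernel with learned weights, updated by back-propagating the gradient of a loss through this same operation.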

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
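The "increased width" of the architecture comes from parallel branches applied to the same input, whose outputs are concatenated along the channel axis. The sketch below fakes each branch as random features just to show the shape bookkeeping; the branch widths are invented, and GoogLeNet's real branches mix 1x1, 3x3, 5x5 convolutions and pooling with specific widths.

```python
import numpy as np

def inception_concat(x, branch_channels):
    """Stand-in for an Inception-style block: every branch preserves the
    spatial size, and their channel outputs are stacked together."""
    h, w, _ = x.shape
    branches = [np.random.randn(h, w, c) for c in branch_channels]
    return np.concatenate(branches, axis=-1)

x = np.random.randn(28, 28, 192)               # input feature map
out = inception_concat(x, [64, 128, 32, 32])   # four parallel branches
print(out.shape)                               # → (28, 28, 256)
```

The 1x1 convolutions in the real blocks exist precisely to shrink channel counts before the expensive 3x3 and 5x5 branches, which is how the design keeps the computational budget constant while growing depth and width.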

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: at first it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.

31,952 citations
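A single HOG cell is just a magnitude-weighted histogram of gradient orientations. The sketch below computes one on a toy ramp image; it omits the block normalization and interpolation that the paper shows are important for good results, and the function name and bin count are choices of this sketch.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Gradient-orientation histogram for one cell, the building block of
    a HOG descriptor (unsigned orientations, no normalization)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())        # magnitude-weighted votes
    return hist

# A horizontal intensity ramp has purely horizontal gradients,
# so all the energy lands in the 0-degree bin.
ramp = np.tile(np.arange(8.0), (8, 1))
h = orientation_histogram(ramp)
print(h.argmax())   # → 0
```

A full descriptor tiles the detection window with such cells, groups them into overlapping blocks, and L2-normalizes each block before feeding the result to the linear SVM.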

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
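Detection results like the mAP figures above are scored by matching predictions to ground truth via intersection-over-union. A minimal IoU helper (representing boxes as (x1, y1, x2, y2) tuples is an assumption of this sketch, not a detail from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2).  This is the overlap measure used to decide
    whether a detection counts as a true positive in mAP scoring."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # → 0.14285714285714285 (1/7)
```

PASCAL VOC-style evaluation typically requires IoU ≥ 0.5 with a ground-truth box of the same class for a detection to count as correct.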